Medical ethics are healthier than business ethics


Compared to most others in society, physicians endorse, and are held to, higher ethical standards.  (To illustrate, here are ethical codes from the AMA and the World Medical Association.)  High standards apply to professionals in other fields as well, especially fiduciaries such as attorneys, accountants, schoolteachers, and judges.  But standards of medical ethics may be among the most stringent.  We put patient welfare first, and anything that interferes with this primary aim, particularly personal gain, is deemed a conflict of interest (COI).  For example, it is legitimate to make money as a physician, i.e., to earn a living, but not in any way that detracts from patient welfare.  These are not black and white distinctions, however, and line-drawing controversies abound.  Offering unneeded treatment solely to boost income is always unethical.  But what about limiting one's practice in lucrative or otherwise pleasant ways: orthopedic surgeons practicing in ski towns, plastic surgeons who only do cosmetic surgery?  What about choosing a more lucrative specialty in the first place?  Accepting only certain types of insurance, or none at all?  Charging for missed or late-cancelled sessions?  Without attempting to resolve any of these examples here, it's noteworthy how much concern is voiced, and ink spilled, over how physicians practice.  To completely escape controversy, we'd have to take a vow of poverty and offer our services for free.

In contrast, many other businesses that affect health do not share the physician's ethics, and precise line-drawing plainly doesn't apply.  Beverage companies peddle diabetes along with refreshment; supplements come adorned with dubious health claims; snack foods are often unhealthy.  Manufacturers and retailers of exercise equipment need not refer customers to more suitable products from competitors.  One can even argue that new cars, not to mention video games, movies, and many other products, discourage people from exercising.
"Patient welfare" simply isn't a priority for most firms — they aren't dealing with patients.  There is no general code of business ethics that makes health its primary aim.  Thus, in extreme cases the government — we the people — steps in, by limiting tobacco and alcohol ads for example, or by inspecting meat.  This is one reason we have government: to set priorities, including ethical priorities, that an ungoverned free market cannot or will not.

Some firms do explicitly deal with patients, yet still do not share the physician's ethical standards.  Insurance companies run feel-good ads that obscure their cost-containment mandate.  Medical corporations attract customers or subscribers who are "covered lives" as opposed to individual patients.  Pharmaceutical companies entice the public with all the irrational tricks used to sell other products, then tack on "ask your doctor" to absolve themselves of any medical responsibility.  Pharmacy benefit managers (PBMs) can disallow a physician's prescription wholly on the basis of cost, and without taking medical responsibility.  These are all huge "conflicts of interest" from a physician's point of view. But COI doesn't apply the same way to entities with less stringent professional ethics, where the primary aim is profit, not health.

This makes our burden harder. For the most part, it isn't up to pharmaceutical companies to avoid biasing doctors with their promotional efforts. It's up to us.  Moreover, it's up to us to counter unhealthy biases instilled in the public, like the willingness to use an antipsychotic with significant side-effects to treat routine depression.  Likewise, as long as insurers and PBMs are corporations, no one will compel them through moral persuasion or ethical codes to sideline their economic interests. It's not a conflict for businesses to maximize return for their shareholders; it's the main reason they exist. Indeed, too much concern for patient welfare might be criticized, e.g., at a shareholder meeting, as a COI that impedes this primary aim.

Doctors are held to standards that would be absurd in virtually any other business. Historically, these higher ethical standards gave us a special status in society, and earned our patients' trust. The erosion of this special status, and of patient trust, is both a cause and an effect of a health care environment with lower, more businesslike, ethical standards.  The accelerating corporatization of American medicine replaces traditional medical ethics with the looser standard of business ethics.  MD decisions are now vetoed by MBAs.  As a result, patients may see us as replaceable technicians in a corporate infrastructure, and lose the benefits of a personal physician.  In parallel, physicians who are viewed by their patients and employers as mere cogs in the wheel of a large system are more apt to relax their own high ethical standards.  I fear for both our profession and the public as this vicious cycle continues.

While we doctors are busy maintaining our ethics and watching out for COI, other "stakeholders" in health care operate under fewer ethical constraints and enjoy greater profits, often directly at our expense.  It can be maddening, yet physicians have no unified voice to defend ourselves and our work.  Proposed solutions are inescapably political, and polarize us along deeply divided political lines, left versus right.  Ultimately, though, traditional medical ethics and public welfare are on the same side.  Doctors exist to help individual patients — and we will all be individual patients someday.  The looming challenge is whether we can put our internecine struggles aside long enough to save ourselves, our families, and our neighbors.

Image courtesy of Stuart Miles at FreeDigitalPhotos.net

America's top selling drug is an antipsychotic

I learned recently that the antipsychotic Abilify is the biggest selling prescription drug in the U.S.  (I try to stay calm and collected here, but that's a fact worth boldface.)  To be a top seller, a drug has to be expensive and also widely used.  Abilify is both.  It's the 14th most prescribed brand-name medication, and it retails for about $30 a pill.  Annual sales are over $7 billion, nearly a billion more than the runner-up. Yes, you read that right: $30 a pill.  A little more for the higher dosages.  There's no generic equivalent in the U.S. as yet; Canadian and other foreign pharmacies stock the active ingredient, generic aripiprazole, for a fraction of what we pay in the States.  However, Abilify's U.S. patent protection expires next month, and aripiprazole may soon be available here at lower cost.

Abilify is an "atypical" antipsychotic.  This is a confusing term, as these are now the drugs typically prescribed for schizophrenia and other psychotic conditions.  The name comes from their atypical mechanism of action, as compared to the prior generation of antipsychotics.  "Atypicals" also play a useful role in the treatment of bipolar disorder, where traditional medications such as lithium require blood level monitoring, and often multiple doses per day.

Antipsychotics are powerful drugs with considerable risks and side-effects.  But psychosis and mania are powerful too.  As with cancer chemotherapy and narcotic painkillers, a risky and/or toxic treatment can be justified in dire circumstances.  It's also true that one crisis visit to an emergency room, not to mention a psychiatric admission, may cost more than months of Abilify, and can itself be emotionally traumatic.  If Abilify keeps psychosis at bay and prevents hospitalization, the risks are worth it.  The cost is worth it too — if a less expensive generic atypical won't do.  Several are now available.

As I wrote in 2009, the manufacturer Otsuka tapped a much larger market for Abilify as an add-on treatment for depression.  I objected to the consumer ad campaign that trumpeted this expensive, dangerous niche product for common depression.  While there's a role for Abilify in unusually severe, unresponsive depression, advertising it widely as a benign "boost" for one's antidepressant was, and is, irresponsible.  By analogy, the makers of the narcotics OxyContin and Percocet could run ads showing people with bad headaches, and urging fellow headache sufferers to ask their doctors "if Percocet is right for you."

And these are merely the FDA-approved uses of Abilify.  Atypicals are also widely prescribed off-label for use as non-addictive tranquilizers and sleeping pills, and to treat other psychiatric conditions.  There's no advertising for off-label use, so the onus falls squarely on prescribers who balance the risks and benefits of these drugs in a manner that research tends not to support.  In short, a costly, risk-laden medication created to ease the awful but relatively uncommon tragedy of schizophrenia is now the top selling prescription drug in America owing to its widespread use in garden variety depression, anxiety, and insomnia.

It's been said that the top selling drug in any era is a comment on society at that point in time.  Valium held the lead during the 1960s and 70s, suggesting an age of uncertainty and anxiety.  The top spot was taken over by the heartburn and ulcer medication Tagamet in 1979.  Tagamet was the first "blockbuster" drug with more than $1 billion in annual sales. Cholesterol-lowering Lipitor was the biggest seller for nearly a decade after it was released in 1997, the same year the FDA first allowed drug ads targeting consumers.  Pfizer spent tens of millions on such ads — and sold over $125 billion of Lipitor over the years.  The stomach medicine Nexium took over after that.  Without covering all the top sellers, it's fair to say that Americans spend a great deal on prescriptions to deal with emotional distress and unhealthy lifestyles.  The blockbusters also show how mass-marketing brand name drugs has become a huge and highly profitable business.

What does it say about us that Abilify holds the top spot now?  What does it mean to live in the Age of Abilify?  First, that we're still looking for happiness and peace in a bottle of pills, costs and risks be damned.  Second, that there's nearly no end to the money the U.S. health care system will spend on problems that can be addressed more economically.  And third, it's a stark reminder that commercial interests seek to expand sales and profits whenever possible.  They find (or create) new markets, promote products by showcasing benefits and concealing drawbacks, appeal to our emotions instead of our rationality.  This is simply how business works.  We should not be surprised, yet we ignore this reality at our peril, particularly when it comes to our health.

Behavioral science versus moral judgment

General George S. Patton

George S. Patton, Jr. commanded the Seventh United States Army, and later the Third Army, in the European Theater of World War II.  General Patton, a brilliant strategist as well as larger-than-life fount of harsh words and strong opinions, was also infamous for confronting two soldiers diagnosed with "combat fatigue" — now known as post-traumatic stress disorder, or PTSD — in Sicily in August of 1943.  (One such incident was depicted in the classic 1970 film "Patton" starring George C. Scott.)  Patton called the men cowards, slapped their faces, threatened to shoot one on the spot, and angrily ordered them back to the front lines.  He directed his officers to discipline any soldier making similar complaints.  Patton's commanding officer, General Eisenhower, firmly condemned the incidents and insisted that Patton apologize.  Patton did so reluctantly, always maintaining that combat fatigue was a pretext for "cowardice in the face of the enemy."

Seventy years have passed, yet as a society we still feel the tension between moral approval or disapproval on the one hand, and value-neutral scientific or psychological description on the other.  Cowardice is a character flaw, a moral lapse, a weakness.  PTSD, in contrast, is a syndrome that afflicts the virtuous and the vile alike.  We similarly declare violent criminals evil — unless they are judged insane, in which case our moral condemnation suddenly feels misplaced.  Likewise, a student who is lazy or careless needs to shape up to avoid our scorn; a student with ADHD, in contrast, is a victim, not a bad person.

Personality descriptors — brave, cowardly, rebellious, compliant, curious, lazy, perceptive, criminal, and many more — feel incompatible with knowledge of our minds and brains.  It seems the more we explain the roots of human behavior, the less we can pass moral judgment on it.  It doesn't matter if the explanation is biological (e.g., brain tumor, febrile delirium, seizure) or psychological (e.g., PTSD, childhood abuse, "raised that way").  However, perhaps because we feel we know our own minds best, it does seem to matter if we are accounting for ourselves versus others.  We usually explain our own behavior in terms of value-neutral external contingencies — I'm late because I had a lot to do today, not because I'm unreliable — and are more apt to tar others with a personality judgment such as "unreliable."  This finding, the Fundamental Attribution Error, has been a staple of social psychology research for decades.

Will we eventually replace moral judgments of others with medical or psychological explanations that lack a blaming or praising tone?  It appears our inclination to judge others will not pass quietly.  Much of the rancor between the political Left and Right concerns the applicability of moral language.  Are felons bad people, or merely raised the wrong way?  Are the poor lazy and entitled, or trapped in poverty by circumstance?  Was General Patton disciplining cowards who were shirking their duty, or was he verbally and physically abusing soldiers who had already been victimized?

The Left and Right disagree over where to draw the line.  But no matter how far we progress in our brain and behavioral sciences, we will still want to voice judgments of others — and negative judgments seem the more compelling.  Humans are notoriously inventive in the use of language to denigrate.  Originally neutral clinical terms like "idiot" and "moron" (and "retarded" and "deluded" and many more) eventually became terms of derision.  Euphemisms like "juvenile delinquent" didn't stay euphemistic for long.  While it may blunt the sharpness of our scorn in the short term, "politically correct" language won't change this aspect of human nature in any lasting way.

Even logic doesn't stop us.  For example, terrorists are routinely called cowards in public discourse, although it isn't clear why.  Many terrorists voluntarily die in their efforts, an act considered heroic, or at least brave, in other contexts.  They often attack civilian rather than military targets.  But we did that in WWII, and we weren't cowards.  They use guile, sneak onto planes, employ distraction and misdirection — like our "cowardly" Special Forces do.  The point is, we find terrorists despicable, but that isn't a strong enough putdown.  If we didn't call them cowards, we'd have to call them something else to humiliate them.  Mama's boys?

Humans are a funny species.  Uniquely striving for intellectual understanding, yet not so far from the other beasts who purr or growl or screech their approval or protest.  Balancing the aims of morality and science is the stuff of constant, and perhaps endless, political debate.  Ultimately it's irresolvable, yet we do our best to pay homage both to our hearts and our heads.

Defining the competent psychiatrist

What defines a competent psychiatrist?  To staunch critics of the field, perhaps nothing.  Some believe psychiatry has done far more harm than good, or has never helped anyone, rendering moot the question of competency.  What defines a competent buffoon?  A skillful brute?  An adroit half-wit?  Having just finished Robert Whitaker's Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America (Crown, 2010), a reader might easily conclude that psychiatric competency is a fool's errand.  From directing dank 19th Century asylums, to psychoanalyzing everyone for nearly anything during much of the 20th Century, to doling out truckloads of questionably effective, often hazardous drugs for the past 35 years, perhaps psychiatry is beyond redemption.

Of course, I don't think so.  For one thing, critics often disagree about what is wrong with the field.  For every charge of over-diagnosis and overmedicating, another holds that debilitating disorders are under-recognized and under-treated.  A charge that psychiatry has become too "cookbook" and commodified is answered by the complaint that it is too anecdotal and not sufficiently "evidence-based."  Claims that the field stumbles because it is subtle, complex, and understaffed by well-compensated specialists, are met with counter-claims that checklists in primary care clinics can do most of the heavy lifting at less expense.  Contradictory criticisms offer no evidence that the field is faultless.  But the confusion does suggest that psychiatry's limitations reside at a different level of analysis than that engaged by its critics.

For another thing, the undeniable shortcomings of psychiatry don't make the patients disappear.  Whether the field teems with genius humanitarians or raving witchdoctors, there are still families watching their teenage daughters starving themselves to death; beloved aunts and uncles living unwashed and mumbling to themselves on the street; people ending their lives out of temporary tunnel-vision; tormented souls imprisoned in their homes by irrational fears.  And our society still harbors a nagging ethical sense that a crime is committed only when a person knows what he's doing — and that when he doesn't, he deserves help not punishment.

We can admit that psychiatrists are (at times meddlesome) do-gooders who take on misery and heartache and uncontrolled destructive behavior despite deep controversies over how best to help.  It's the same role filled, in different times and places, by clergy, by family, by shamans, by the village as a whole.  Every society assigns it to someone.  This is the modest starting point that bootstraps a meaningful definition of psychiatric competency.

Lists of "core competencies" are issued by the Accreditation Council for Graduate Medical Education (ACGME) for psychiatry residents, and by the American Board of Psychiatry and Neurology (ABPN) for board-certified psychiatrists.  Both organizations categorize psychiatric competency under the six headings established by the ACGME for all medical specialties: Patient Care, Medical Knowledge, Interpersonal and Communication Skills, Practice-Based Learning and Improvement, Professionalism, and Systems-Based Practice.  (These categories are also used by the Accreditation Council for Continuing Medical Education [ACCME], so that continuing education required to maintain one's medical license addresses one or more of these competency areas.)  A review of either of these detailed lists reveals two important truths.  First, a committee can make any aspirational standard byzantine and lifeless.  And second, in the eyes of the ACGME and ABPN at least, it's not so easy to be a competent psychiatrist.

However, these official competencies are unlikely to satisfy skeptics, nor do they get to the heart of the matter.  No such list can be exhaustive: the ABPN includes knowledge of transcranial magnetic stimulation, presumably a recent addition, but fails to require knowledge of specific pharmaceuticals.  Focus areas such as addiction, forensic, and geriatric psychiatry are mentioned, but not administrative or community psychiatry.  The linguistic philosopher Ludwig Wittgenstein argued that our inability to precisely define natural categories, even simple nouns like "chair," is a feature of language itself, not of psychiatric competence specifically.  Accordingly, any catalog of psychiatric competencies, whether intended to be comprehensive or a "top ten" list, captures some, but not all, of what constitutes a competent psychiatrist.

As implied above, the starting point, although not the end point, for defining the competent psychiatrist is intent.  A psychiatrist aims to relieve suffering in an uncertain human domain.  Brought to bear are skills, knowledge, and personality factors ("professionalism," etc.) that bring this goal closer.  These cannot be listed exhaustively: virtually the whole of human knowledge and experience can inform one's understanding of a patient's emotional turmoil.  The best we can say, I believe, is that a competent psychiatrist is curious, has a wide fund of knowledge and life experience, and aims to keep an open mind.  Some of this knowledge certainly should be biomedical.  But knowing about the psychology of aging, common stressors such as job loss and divorce, gender differences, and many other areas is hardly less important. The practitioner's proclivity to observe the human condition both scientifically and humanistically is ultimately a better gauge of competence than whether a specific treatment modality such as TMS has been added to a long list, or whether the practitioner is able to cough up a specific fact.

Given the controversy and uncertainty in the field, another essential of competent practice is humility.  In most cases we don't know the etiology of what we're treating.  Any treatment we offer helps some patients but not others, and nearly always carries risk.  Whitaker makes many good points along these lines.  A competent psychiatrist tempers his or her urge to intervene with the realization that the road to hell is often paved with good intentions.  Psychiatrists virtually always mean well, and (contrary to some critics) help our patients far more often than not.  Nonetheless, a competent psychiatrist is always ready to admit misjudgment or miscalculation.  Self-correction is a feature of competence in psychiatry as well as in many, perhaps all, other domains of human expertise.

For another take on the competent psychiatrist, arriving at a similar endpoint using different reasoning, see this 2011 post by Dr. Raina.

I wrote above that psychiatry's limitations may reside at a different level of analysis than that engaged by its critics.  Psychiatry is a hard job because the brain is the most complex organ, because normality is so hard to define, because human development is a subtle interplay of nature and nurture, and because we don't understand the root causes of many forms of mental distress.  But even if we did know and understand these far better than we do now, the field would still be fraught with controversy and uncertainty.  Our attitudes regarding responsibility, free will, conformity versus deviance, and how we treat each other reflect our politics and deeply held values.  Psychiatry serves as a lightning rod for strong feelings around these matters.  By its very nature, it always will.  Psychiatrists must accept that many will view us skeptically, some with hatred — and others with undeserved adoration — and not let this dissuade us.  A competent psychiatrist hears criticism from individual patients and the public, neither dismissing it unthinkingly, nor allowing it to lead to demoralization and defeat.

Image courtesy of David Castillo Dominici at FreeDigitalPhotos.net.

Living between three and seven

Despite my mostly psychodynamic approach to psychotherapy, I sometimes include cognitive interventions as well.  I think of this as choosing from a variety of tools to suit the moment.  Generally speaking, cognitive techniques (and psychiatric medications) aim for symptom relief, while psychodynamic work aims for structural personality change, with symptom improvement as a byproduct.  There's a time and place for each, their relative value varying from patient to patient.  The following is a cognitive framework I've introduced to a number of patients over the years.  Let me know if it's useful to you.

Essentially it's a simple one to ten scale that highlights polarized thinking — "splitting" in dynamic lingo — and encourages modifying it through conscious effort.

Many patients who evidence polarized, black-and-white thinking — who devalue the bad and idealize the good — quickly catch on when I propose that their abject hopelessness and seething rage represent a "one" on a one to ten scale, whereas their over-the-top exuberance rates a "ten."  (Some take it further and claim their despair sinks to "negative 100" and positivity zooms up to "50" on that scale, but usually they'll agree to keep it manageable.)  The key intervention is then to point out that life is mostly lived between three and seven. Realistically speaking, bad experiences in life usually rate a "three" or "four," good experiences a "six" or "seven."  Anything more extreme is rare.  Feelings of "one" and "ten" are almost always exaggerations, polarized distortions that whipsaw the patient's feelings and interpersonal relationships.

The concreteness of speaking in numbers comes easily to most of us.  Once introduced to this scale, some patients spontaneously and enthusiastically rate their own feelings: a troubling encounter "felt like a 'one' but I know it was really a 'three'."  More often they relate an experience in unrealistically glowing terms, and I gently challenge their idealization by asking if it was truly a "ten" or more accurately a solid "seven" (and likewise with a "one" that upon reflection could be re-rated a "three.")  Some patients formerly prone to one-or-ten thinking soon begin sessions by telling me their day feels like a satisfying "six" or a disappointing "four".  Either way, I support this more nuanced assessment and discuss how they may nudge themselves up the scale.

Many patients, particularly those who take a degree of pleasure in the ups and downs of their emotional roller coaster, would never abide a monotonous life stuck at "five."  Where's the fun in that?  Fortunately, the point of the scale is not to aim for stagnation, nor to suggest that the midpoint is ideal.  The realities of life assure that some days will be better than others.  No cognitive trick will stop successes from feeling good and letdowns from feeling bad.  The question is how much.  Attaching numbers to feelings offers a little distance and perspective.  It's a gentle reminder that such emotional exaggeration may be a form of self-torture — and that an apparent "ten" is risky (and literally "too good to be true"), often crashing precipitously into a "one."  Most of the time it's far more comfortable, safe, and sustainable to "live between three and seven."

Of course, it wouldn't be psychodynamic therapy if we stopped there.  The numerical scale offers a useful language to describe unrealistic emotional extremes, and perhaps to help the patient mitigate them through conscious effort.  However, it can't account for the splitting itself, nor change the patient's propensity in any structural way.  For that, we turn to unconscious dynamics, and to a trustworthy, consistent therapeutic relationship that permits emotional nuance to gain a foothold. Rather than being seen as mutually exclusive — itself an unhealthy polarization — cognitive and psychodynamic approaches can complement one another.

Graphic courtesy of Danilo Rizzuti at FreeDigitalPhotos.net