Madness cast out

Madness: A Very Short Introduction - Andrew Scull 2011

At the beginning of the 1970s, psychoanalysis ruled the roost, at least in the United States. And then it didn’t. More swiftly and silently than the Cheshire cat, psychoanalytic hegemony vanished, leaving behind not a smile, but a fractious group of Freudians and neo-Freudians who squabbled in classically sectarian fashion among themselves. Professors of literature and anthropology tried feverishly to fend off the notion that Freud had turned into an intellectual corpse, but cruel realities suggested otherwise. Psychoanalysts were rapidly defenestrated, their hold over academic departments of psychiatry removed and replaced by laboratory-based neuroscientists and psychopharmacologists. Psychoanalytic institutes found themselves bereft of recruits and forced to abandon their policy of admitting only the medically qualified (a position they had long held to, though it had been explicitly repudiated by the Master). The term ’neurosis’ was expunged from the official nomenclature of mental disorder (along with the category of ’hysteria’, the condition that had given birth to the talking cure). The ’surface’ manifestations of mental diseases that the psychoanalysts had long dismissed as merely the symptoms of the underlying psychodynamic disorders of the personality became instead scientific markers, the very elements that defined different forms of mental disorder. And the control of such symptoms, preferably by chemical means, became the new Holy Grail of the profession. Meanwhile, the public learned to think of madness, not in terms of refrigerator mothers and sex, but as the product of faulty brain biochemistry, bad genes, or neurotransmitters gone haywire, with adjustments of surpluses or shortfalls of internal body chemistry the royal road to happiness and cognitive health.

Like all revolutions, although this series of dramatic changes seemed to materialize all at once, it had much deeper historical roots. The sudden loss of its institutional and intellectual dominance reflected some strategic errors and miscalculations on the part of the psychoanalytic community, but also the unplanned and unintended consequences of a whole series of other developments that dated back to the late 1940s and early 1950s, the era when the analysts first began to entrench themselves at the head of the profession of psychiatry. With its commitment to the analytic paradigm, American psychiatry had been at odds with the more eclectic mix of biological and social psychiatry that was typical almost everywhere else, but the revolutionary transformations that occurred in the United States in the last quarter of the 20th century would all but eliminate these national differences, and in short order produce the Americanization of world psychiatry. The definitions and treatments of madness launched by the American Psychiatric Association and underwritten by Big Pharma, the multinational pharmaceutical industry, whatever their limitations and problems, would triumph everywhere.

The post-war rise of psychoanalysis formed part of, and helped to produce, a massive shift in the social location of most psychiatric practice. In 1940, all but a tiny minority of psychiatrists worked in institutional settings. But as early as 1947, in a remarkable break from these pre-war precedents, more than half of all American psychiatrists worked in private practice or in out-patient clinics. By 1958, as few as 16% plied their trade in traditional state hospitals. Moreover, this rapid shift in the profession’s centre of gravity occurred in the context of an extraordinary expansion in the absolute size of the profession, which multiplied more than five-fold in the space of three decades. Not just the professional elite, but also most of the rank and file had thus abandoned the most severely mentally disturbed to their fate within fifteen years of the end of the war. And yet traditional mental hospitals loomed as large as ever on the American scene, as they did elsewhere in the developed world. In 1903, state and county asylums had contained 150,000 inmates on an average day. That number had grown to 445,000 in 1940, and had peaked at 559,000 in 1955, before falling back slightly to a total of some 535,500 in 1960. While the general population had doubled, the number of psychiatric inmates had quadrupled. Moreover, a steadily greater fraction of the whole was made up of chronic and especially elderly patients. In Massachusetts, for example, supposedly in the vanguard of psychiatric progress, almost 80% of the state mental hospital population had been institutionalized for five years or more by the 1930s. These hundreds of thousands of psychotics were left to the tender mercies of the dregs of the psychiatric profession, many of them foreign-born and foreign-educated, and barely capable of communicating in English. And they remained crammed into the decaying relics of the Victorian enthusiasm for incarcerating the mad. 
Helpless, hopeless, and highly stigmatized, this institutionalized population that presumably remained at the core of psychiatry’s claims to expertise was increasingly treated as an embarrassment, something that the more ’progressive’ elements of the profession were anxious to leave behind. Ironically, though, it was within this alienated social space that a revolution was brewing, one that would prompt the development of new ideological understandings of the nature, sources, and proper management of the mad; and would play some role (though not the role that conventional wisdom would have us believe) in the virtual disappearance of the asylum as the first line of response to the problems of grave and persistent mental disturbance.

No-one expected that this would be the case when a few employees of the then-obscure French drug company Rhône-Poulenc began to experiment in the late 1940s with derivatives of phenothiazine, a compound that had first been synthesized in a German laboratory in the late 19th century. The pharmaceutical industry was then far from occupying its powerful current place in the medical-industrial complex, but the standardization of ’ethical’ drug manufacture (as opposed to the older patent-medicine industry) had been given an enormous stimulus by the discovery of the sulfa drugs and then by the advent of penicillin, a genuine magic bullet that could be employed against infectious diseases, and the race was on to discover other therapeutic compounds from which the industry could derive profits. Chlorpromazine, first synthesized in the company’s laboratories in 1950, was one such. It was an anti-histamine, a class of drugs that produced drowsiness, among other properties, and Rhône-Poulenc thought it might have its uses as an anaesthetic potentiator. Anaesthetics are poisons, whose administration produces unpleasant side effects in many patients exposed to them (though most of us are extremely grateful to trade those side effects for relief from excruciating pain while we undergo surgery). So anything that could help reduce the amount of anaesthetic one required could prove medically desirable.

It was an attractive hypothesis, but a clinical failure. Undeterred, Rhône-Poulenc sought other possible uses for the product. Perhaps it might be useful as a treatment for itchy skin, or as an anti-emetic, to control nausea and vomiting? Or it might serve as a general sedative. Trials were run along these lines, and supplies of the drug were made available quite freely to physicians who wanted to experiment with them. And thus a serendipitous discovery came to be made.

Drugs had previously been used in psychiatry. Some 19th-century psychiatrists had experimented with giving their patients marihuana, though most soon abandoned the practice. Opium had been mobilized as a soporific in cases of mania. Later on in the 19th century, chloral hydrate and the bromides had had their enthusiasts (though bromides in excess produced psychotic symptoms, and their widespread use outside the asylum produced toxic reactions that landed substantial numbers of patients in mental hospitals, diagnosed as mad; and chloral, though effective as a sedative, was addicting, and with long-term use led to hallucinations and symptoms akin to delirium tremens). Lithium salts seemed to calm the agitation of manic patients, and some hydrotherapeutic establishments used them in the treatment of their nervous patients. But lithium could easily prove toxic, producing anorexia, depression, even cardiovascular collapse and death. (Its value would later be championed by the Australian psychiatrist John Cade after the Second World War, and the existence of calming effects in mania would prompt some continuing clinical interest in these compounds in Europe and North America. Their use would spread, but only to a degree, for lithium could not be patented, and hence was of minimal interest to the pharmaceutical industry and its marketing machine.) The 1920s had seen experiments with barbiturates, including attempts to place mental patients in chemically induced periods of suspended animation in the hopes that this would produce a cure. But barbiturates, too, had major drawbacks: they were addicting, overdoses could easily prove fatal, and withdrawal symptoms when they were discontinued were highly unpleasant, even dangerous. Besides, like the earlier drugs used by psychiatrists, their use produced mental confusion, impaired judgement, and inability to concentrate, as well as a whole spectrum of physical problems.

But when given to psychotic patients, chlorpromazine seemed to be different. It reduced florid symptomatology, and calmed patients down, producing an indifference that some observers at the time likened to a ’chemical lobotomy’ — then seen as a positive development — but it did not seem to be addictive, or to have many of the other negative effects so prominent when other drugs were given to mental patients. Rhône-Poulenc by now had sold the North American rights to the drug to Smith, Kline, and French, who labelled the compound Thorazine. (In Europe, it was originally called Largactil, or ’mighty drug’, in recognition of what was hoped to be its broad range of therapeutic applications.) In the early 1950s, Smith, Kline, and French had also explored a broad range of potential therapeutic applications for the drug, though none produced results likely to prove persuasive with the Food and Drug Administration, except for the treatment of mental patients, the last application it tried after the other options had proved disappointing.

The early studies on mental patients were easy to conduct. The patients were quite literally captive, and without civil rights. The notion of informed consent, let alone outside scrutiny of research protocols, was non-existent, and trials were easily set up, though virtually none of them employed research designs that by modern standards produced reliable knowledge. Rarely were control groups employed, and the experimenter was always aware of which patients were receiving the active compounds. The numbers of patients treated were often small, and criteria for assessing ’improvement’ primitive and unreliable, and easily manipulable. The clinicians were uniformly convinced that they were on to something. At the end of 1953, however, a mere five months before it was to be marketed, Thorazine had been tested on a total of only 104 psychiatric patients in North America. Thirteen months later, it was being given to an estimated two million patients in the United States alone. By 1970, US pharmaceutical houses sold over a half billion dollars of psychiatric drugs, of which phenothiazines as a class accounted for more than $110 million. In a pattern that would become familiar in the drug industry, each of Smith, Kline, and French’s competitors rushed to develop marginally different versions of the original drug that they could patent as their own.

Initially, psychoanalytically trained practitioners were inclined to ignore the new so-called anti-psychotic drugs on the grounds that they treated only the symptoms, not the underlying causes of mental disturbance. Subsequently, as it became more difficult to ignore these new interventions, they adopted a subtly different stance: the drugs were useful, but only in reducing surface symptomatology. The real contribution of drug treatment, they argued, was that it made the patients more accessible to analytic treatment, where the serious work of dealing with the underlying issues took place. Others, however, within and outside the psychiatric profession, drew the philosophically indefensible but understandable conclusion that if drugs which acted on the body modified the symptoms of psychiatric disorders, those disorders must surely be rooted in biology. Over time, this reasoning underpinned a move back to a biologically reductionist view of mental illness.

Image

16. Thorazine, or chlorpromazine (marketed in Europe, where it was first synthesized, as Largactil), was the first of the new so-called anti-psychotic, or neuroleptic, drugs. It was approved by the United States Food and Drug Administration in 1954, and a decade later had been administered to an estimated 50 million people worldwide, an astonishing marketing success fuelled in substantial measure by advertisements like this one, taken from the pages of the journal Mental Hospitals, which was published by the American Psychiatric Association

Freudian psychiatrists had always treated diagnostic categories as largely irrelevant, since what mattered were the complexities of the individual’s personality and psychopathology. But as drug development proceeded, the need to standardize the patient population on which new drugs were tested became more pressing, and as new drugs seemed to have an effect on some, but not all, psychiatric patients, it became commercially attractive to try to distinguish different sub-populations among the mad for whom particular families of drugs seemed to work. For the profession as a whole, as increased attention came to be focused on diagnosis, the embarrassment that flowed from the profession’s demonstrable inability to agree on what ailed a particular patient grew less and less tolerable, particularly as outsiders such as lawyers, psychologists, and sociologists used these disagreements to suggest that the psychiatric emperor had no clothes.

Within psychiatry, therefore, a vocal minority began to agitate for a reform of the diagnostic process, one that would standardize labels and categories to ensure maximum agreement between individual psychiatrists, something technically referred to as diagnostic reliability. In essence, this was a call for reviving the sort of descriptive psychopathology pioneered by Emil Kraepelin, and the movement to systematize psychiatric diagnoses is often referred to as ’neo-Kraepelinian’. It was an endeavour the psychoanalysts had little interest in, and by and large, they distanced themselves from the process. When the American Psychiatric Association convened a Task Force to revise the profession’s Diagnostic and Statistical Manual, the analysts made minimal efforts to participate in or influence its deliberations. It would prove, in combination with the psychoanalysts’ earlier decision to keep their training institutes separate from university medical schools (diluting their influence in this crucial arena), to be a fatal mistake.

In 1980, when the third edition of the Diagnostic and Statistical Manual appeared, the speculative Freudian aetiologies for various forms of mental illness had vanished. Last-minute protests from analysts preserved the term ’neurosis’, but it was a Pyrrhic victory, for it survived only in parentheses, and would be expunged altogether from the official nomenclature of mental disorder within a very few years. The ’surface’ manifestations of mental diseases that the psychoanalysts had long dismissed as merely the symptoms of the underlying psychodynamic disorders of the personality became instead scientific markers, the very elements that defined different forms of mental disorder. And the control of such symptoms, preferably by chemical means, became the new Holy Grail of the profession.

It was a counter-revolution launched, not from the hallowed and ivied halls of the Harvards and Yales of this world, but of all places from St Louis, by then-marginal figures at the Washington University Medical School, and by a renegade Columbia psychiatrist, Robert Spitzer. And its primary weapon was a book, or rather an anti-intellectual system published in book form: a check-list approach to psychiatric diagnosis and treatment that sought maximum inter-rater reliability among psychiatrists confronted by a given patient, with scant regard for whether the new labels that proliferated in its pages cut nature at the joints. Agreement among professionals was enough, particularly on those occasions on which a given diagnosis could be linked to treatment with a particular class of drugs. Indeed, soon enough the polarity would be reversed, and the creation of a new class of drugs would lead to the creation of a new psychiatric ’disease’ to match, just one of the factors that prompted successive editions of the Diagnostic and Statistical Manual to proliferate pages and disorders, like the Yellow Pages on steroids. The scarcely used DSM-II, published in 1968, had recognized a mere 182 different psychiatric disorders. When DSM-III appeared in 1980, this had risen to 265, and by the time another major revision, DSM-IV, appeared in 1994, there were 297 different diagnoses listed. The number of psychiatric illnesses metastasized despite the disappearance of a number of ’mental illnesses’ like homosexuality and hysteria from the official psychiatric lexicon during the last two decades of the 20th century. 
The profits available from new classes of drugs, however trivial the differences that marked them off from existing compounds, encouraged the transformation of all sorts of daily distress and the ordinary vicissitudes of human existence into new ’diseases’ allegedly caused by ’chemical imbalances’, all resting on the apparent authority of science, but in reality on purely circular reasoning.

The eclipse of psychoanalysis, the lurch back to biological accounts of madness, and the rapid growth of psychopharmacology were not the only dramatic changes in the psychiatric landscape that marked the last four decades of the 20th century. From the Victorian age until the dawn of the 1960s, most of the Bedlam mad could expect a prolonged period of confinement inside one of the warehouses of the unwanted whose distinctive buildings for so long haunted the countryside, and provided mute testimony to the emergence of segregative responses to the management of mental illness. Nowadays, such encounters with the physicality of mass segregation and confinement, and with the peculiar moral architecture which the Victorians constructed to exhibit and contain the dissolute and the degenerate, are increasingly fugitive and fast fading from the realm of possibility. For the other component of the revolutionary changes that enveloped Western responses to mental illness in the closing decades of the last century was the increasingly precipitous abandonment of the bricks and mortar approach, with some mental hospitals simply left to moulder away, and others, irony of ironies, converted into luxury housing for yuppies, their developers taking care to coyly disguise their stigmatizing past. Deinstitutionalization, or decarceration as an alternative ugly neologism had it, was the order of the day. In their tens and even hundreds of thousands, mental patients were discharged from traditional mental hospitals (or refused admission in the first place), and instead consigned to the not-so-tender mercies of treatment in the ’community’.

The transformation this entailed began slowly, uncertainly, and unevenly. Mental hospital censuses had risen almost uninterruptedly for more than a century, and even in England and the United States, where the reversal began, its significance was unclear at first, and it was uncertain whether it would persist. In England, the numbers under confinement peaked in 1954 at 148,100, and fell by fewer than 12,000 by 1960. In the United States, the national decline began a year later, after the census reached a maximum of 558,900, and had fallen to 535,500 in 1960. Thereafter, however, the pace of change accelerated markedly, and the pattern eventually spread to countries like France, Spain, and Italy, where commitment to the old asylum-based approach had lasted longer. By century’s end, the abandonment of the asylum was visible all across Europe and North America. In Italy, the political left, inspired by the charismatic Franco Basaglia, led the charge. Places like the old asylum for women on San Clemente Island in the Venice lagoon were shuttered, in this case re-opening as a hotel offering a luxurious refuge from the crowds thronging the city, its owners boasting of its past as a monastery, while gliding silently over its less salubrious and more recent role. In California, decarceration was a project of the Reaganite right, led by the grand old man himself, who promised to shut down ’superfluous’ institutions, and on this occasion actually tried to follow through on his rhetoric. In England, Enoch Powell, minister of health in the Macmillan government, not yet consigned to the political wilderness for his views on race and immigration, employed characteristically blunt language, promising to err ’on the side of ruthlessness’ and to ’set the torch to the [mental hospital’s] funeral pyre’. 
The Thatcherite and Blairite regimes would pursue the path he had spelled out with equal enthusiasm, as would both Republican and Democratic administrations in Washington and in the individual states. The consensus on the desirability of community care has become as overwhelming as the Victorians’ convictions about the merits of the asylum.

Image

17. The central nurses’ station at Milledgeville State Hospital. Once home to a community of upwards of 14,000 staff and inmates, the hospital’s buildings have met a fate that mirrors that of many of the vast bins that once housed legions of lunatics

If success is defined as driving down in-patient populations, even in the face of rising admissions into the mental health sector, then decarceration has been an unambiguous triumph. The reputation of the traditional mental hospital had been so blackened by journalists, sociologists, Hollywood film makers, and even many members of the psychiatric profession, that ’community care’ has been easily seen as an unambiguous blessing. For some patients, capable of functioning reasonably successfully in the outside world, and perhaps among those who respond relatively well when treated with the new generations of drugs, that has certainly been the case. But for many more, closer scrutiny suggests a very different verdict is warranted.

Our contemporary techniques of containment and damage limitation when it comes to the severely and chronically mentally ill mimic in many respects the place of insanity in the 18th century. Many psychiatric casualties have been thrust back into the arms of their families. Here, largely bereft of official support or subsidy, unpaid carers (usually female) are left to cope as best they can. These burdens are massive, and ultimately, for many families, simply intolerable. As Erving Goffman (known to most only as a critic of asylums) once summarized the case, ’the compensative work required by the well members [of the family] may well cost them the life chances their peers enjoy, blunt their personal careers, paint their lives with tragedy, and turn all their feelings to bitterness’. So it has proved. The state has systematically failed to build up the infrastructure of services and financial supports essential for any workable system of community care, which has become, in the words of Sir Roy Griffiths, in an official survey of British practice, ’a poor relation: everybody’s distant relative, but nobody’s baby’.

Families for the most part cannot or will not absorb all of the burdens the new policies impose, particularly over the long haul, and so examination of the place of insanity in our own era must look also to the sidewalk psychotic (now a familiar feature of most urban landscapes), the boarding house, and the gaol to grasp the range of our current provision (or lack of provision) for the mentally ill. In California, for instance, prisons and gaols have become the single largest purveyors of mental health care. More generally, the board and care industry in the United States has made a substantial living from speculating in this form of misery, taking the welfare payments now available to some of the mentally disabled and developing a network of private institutions that form an analogue of the 18th-century trade in lunacy. For thousands of the old, already suffering in varying degrees from mental confusion and deterioration, deinstitutionalization has meant premature death. For others, it has meant that they have been left to rot and decay, physically and otherwise, in broken-down welfare hotels or in what are termed, with Orwellian euphemism, ’personal care’ nursing homes. For thousands of younger psychotics discharged into the streets, it has meant a nightmare existence in the blighted centres of our cities, amidst neighbourhoods crowded with prostitutes, ex-felons, addicts, alcoholics, and the other human rejects now repressively tolerated by our society.

Some have suggested that the discharge of all these patients was simply the product of the new technological fix psychotropic drugs provided for the problems posed by mental illness. There was, after all, a temporal coincidence between the marketing of the phenothiazines and the decline in overall in-patient numbers in Britain and the United States; and psychiatrists, whose role in treating the chronically mentally ill has mostly shrunk to the writing of prescriptions for these pills, have mostly embraced this simplistic account. Assuredly, anti-psychotic medications have played some role in the process. Thorazine and its derivatives gave psychiatry for the first time a therapeutic modality that was easy to dispense and closely resembled the magic potions that increasingly underpinned the cultural authority of medicine at large. Too bad that the phenothiazines were no psychiatric penicillin, and indeed that they would be responsible for a massive and long-ignored epidemic of iatrogenic illness. They reduced florid symptomatology, and for some patients, at least, provided a measure of relief. After centuries of therapeutic impotence, it was perhaps understandable that the psychiatric profession was so grateful for their arrival and so eager to hype the value of the new pills.

In truth, the anti-psychotics played at best a secondary role in the demise of the asylum. Patient compliance with doctors’ orders is a serious problem even among the sane. Getting psychotics to keep taking their drugs is even more problematic, particularly when they experience unpleasant side effects: loss of spontaneity, severe reductions of motor activity, devastating and stigmatizing neurological side effects (an issue to which I shall return). Then, too, while some patients obtain symptomatic relief while on medication (though that degree of improvement tends to decline over time), a very large fraction — well over 50% — do not obtain even this benefit from their drugs. The early uncontrolled trials that claimed to demonstrate the value of the drugs systematically overstated their usefulness, and it has become steadily more apparent that the therapeutic effectiveness of so-called anti-psychotic drugs has been grossly oversold. In many jurisdictions, mental hospital censuses had begun to decline years before the introduction of the phenothiazines, and careful analysis by an array of scholars who have troubled to look at the available evidence has not shown any clear connection between the introduction of the drugs and deinstitutionalization. Decarceration was driven far more by fiscal concerns — the massive costs of running and replacing the traditional mental hospitals, and the ability of state governments in America to transfer mental health costs to the federal government and to local authorities by turning patients out of state-run facilities — and by conscious shifts in state policy such as tightened commitment laws.

For the pharmaceutical industry, psychiatric drugs were a bonanza, a major source of profits that ran into many billions of dollars. Almost instantly alive to the profit potential of the phenothiazines, drug companies soon began to market another class of psychoactive drugs, the so-called minor tranquillizers: Miltown and Equanil (meprobamate), which made users drowsy, and later on Librium and Valium (the benzodiazepines), which didn’t. The troubles of everyday life were effortlessly redefined as psychiatric illnesses. Here were the pills that proffered a solution to the boredom of the trapped housewife, the blues of overwhelmed mothers and of the fading middle-aged. The Rolling Stones sang sarcastically about ’mother’s little helper’, and a generation of the young raided the family medicine cabinet in search of the happy pills. As early as 1956, statistics suggest that as many as one American in twenty was taking tranquillizers in any given month. Anxiety, tension, unhappiness, all could be smoothed away by medication. Or so some thought. But inevitably, there was a price to be paid. Actually more than one. Constricting the normal range of human emotion, and running from life’s challenges into a chemical fog, might have a short-term appeal, but steadily truncated one’s capacity to cope and diminished one’s ability to function as an autonomous adult. And those taking the drugs became physically habituated to them, till they found it difficult or impossible not to continue using them, for to abandon the pills was to court symptoms and psychic pain worse than those that had driven the decision to use them in the first place. Tranquillizers, in the eyes of a growing number of critics, were as much a problem as a solution, helping those ingesting them along their way to their ’busy dying day’, as the Stones memorably phrased it. 
By the mid-1970s, Valium was the single most prescribed drug in many countries, even as growing concerns about its addictive properties came to shadow its use.

The multi-nationals were slower to recognize the similarly large rewards that could flow from exploiting compounds that changed other aspects of people’s moods. A variety of drugs that had some effect on depression had begun to surface from the late 1950s onwards, beginning with iproniazid, a monoamine oxidase inhibitor, in 1957, and Tofranil and Elavil, so-called tricyclic anti-depressants, in 1958 and 1961 respectively. Perhaps in part because many depressed people suffer in silence, the belief persisted that depression was a comparatively rare condition, and drug companies such as Geigy and Roche, given the opportunity to introduce these drugs, dragged their heels about marketing them. Hypertension, they were convinced, was a far larger market than depression. The belated success of Prozac changed that mindset completely. And changed as well both the profession’s and the public’s understanding of mental disorder. Depression was now a disease of epidemic proportions, one every drug company wanted a piece of.

The US National Institute of Mental Health proclaimed the 1990s ’the decade of the brain’. A simplistic biological reductionism increasingly became the dominant psychiatric paradigm (ironically exhibiting some parallels with the biological reductionism of a century earlier, when psychiatrists had proclaimed that the mad were ’tainted creatures’, doomed to permanent insanity by the defects of their degenerate, inherited germ plasm). Patients and their families learned to attribute mental illness to faulty brain biochemistry — defects of dopamine or a shortage of serotonin. It was biobabble as deeply misleading and unscientific as the psychobabble it replaced, but as marketing copy it was priceless. Biological reductionism of this sort had a particularly powerful attraction for an increasingly important segment of those affected by the scourge of major mental illness, the parents and other family members of the mental patient. Where the theories of the psychoanalysts had blamed the pathologies of family life, and especially the depredations of the ’refrigerator mother’, for the creation of psychosis, biological accounts of mental illness stressed that madness was an illness like any other. If biochemical imbalances and neurotransmitter defects were the root of the problem, then families were absolved of all blame or responsibility. And if madness was rooted in biology, then surely drugs rather than talk were the appropriate response, since chemicals could alter the internal physiological environment. In embracing biology, family members were thus prime candidates to speak out on the merits of drug treatment, and they have indeed become vocal advocates for the new somaticism. The drug industry has welcomed their support and done much to help to publicize their views. 
For example, Senator Grassley’s subcommittee of the US Senate’s Finance Committee found that industry subsidies provided as much as 75% of the budget of the National Alliance on Mental Illness, the largest and most prominent organization of activist family members.

Meanwhile, the psychiatric profession itself has been in receipt of massive quantities of research funding from the pharmaceutical industry. Where once psychiatrists had been the most marginal of specialists, existing in a twilight zone on the margins of professional respectability (their talk cures and obsessions with childhood sexuality only amplifying the scorn with which most mainstream doctors viewed them), now they were the darlings of medical school deans, the millions upon millions of their grants and indirect cost recoveries helping to finance the expansion of the medical-industrial complex. The pharmaceutical industry has become the most profitable industrial sector on the planet (in 2002, for example, the profits of the ten largest pharmaceutical houses exceeded the total combined profits of the remaining 490 corporations making up the Fortune 500 list). And a large slice of its profits has derived from the pills directed at mental illness.

In 2008, for example, the most recent year for which we have data, anti-psychotics ranked as the single largest source of revenue among all classes of drugs, accounting for $14.6 billion, or 5% of all expenditures for pharmaceuticals in the United States, by far the most lucrative market in the world; and anti-depressants that same year ranked fifth, accounting for $9.6 billion in sales.

To a quite extraordinary extent, drug money has come to dominate psychiatry. It underwrites psychiatric journals and psychiatric conferences, where the omnipresence of pharmaceutical influence startles the naïve outsider. It makes or mars psychiatric careers through granting or withholding research support. At an even more fundamental level, the very categories within which we think about cognitive and emotional troubles are in flux, and the changes imported into psychiatry’s diagnostic categories seem to many critics to correspond more closely to the marketing needs of the pharmaceutical houses than to advances in basic science.

Writers of this persuasion have been harshly critical of many features of modern psychiatry. They have pointed out, for example, that many of the most generously funded academic psychiatrists have elected to promote lucrative off-label uses for drugs whose initial approval for prescription was awarded on quite other grounds. Following on from the work of these critics, editorials in such major medical journals as the Lancet, the Journal of the American Medical Association, and the New England Journal of Medicine have acknowledged the problems of the misrepresentation of research data and of ghostwriters who produce peer-reviewed ’science’ that surfaces in even the most prestigious journals, with the most eminent names in the field collaborating in the deception. Class-action lawsuits in the United States, and the inquiries of a subcommittee of the Senate Finance Committee, chaired by Senator Grassley, have shown that in a number of cases, researchers have been induced to sign confidentiality agreements, which ensure that inconvenient data have never seen the light of day.

Problems of this sort are not the peculiar province of psychiatry, as the Vioxx painkilling scandal and some of the evidence that surfaced in the course of the controversy over hormone replacement therapy (HRT) serve to remind us. In other respects, too, the situation with respect to psychiatric therapeutics has parallels with what we find in many non-psychiatric disorders, for which the most that modern medicine can offer is a measure of comfort and symptomatic relief. If, for some of those who suffer from the miseries madness brings in its train, anti-psychotics, anti-depressants, and anti-anxiety drugs provide similar advantages, we should not dismiss them out of hand. To speak as some have done of ’toxic psychiatry’ and to suggest that all psychiatric drugs are a snare and a delusion, that they are simply poisons not therapy, is in my view irresponsible and misguided. But it is as well to understand the price this treatment exacts, and to weigh far more carefully than the psychiatric profession and the pharmaceutical industry would have us do, the negative as well as the positive impact of the psychopharmacological revolution.

For twenty years after the introduction of Thorazine, for example, psychiatrists ignored or minimized the side effects that this class of drugs brought in its train. Substantial numbers of patients experience Parkinson-like symptoms: muscular rigidity, shuffling gait, loss of associated movements, and drooling. Others experience ’akathisia’, constant pacing and an inability to keep still. As many as 15–25% of patients on long-term anti-psychotics develop tardive dyskinesia, characterized by sucking and smacking movements of the lips, rocking, and uncontrolled jerky movements of the extremities — behaviours that, ironically, the lay public often views as symptoms of mental illness — and these symptoms are often permanent, apparently the result of iatrogenic (drug-induced) neurological damage to the brain.

The mass marketing of Prozac and other anti-depressants in the 1990s saw the emergence of depression as ’the common cold’ of psychiatry, and the intensive selling of the idea that all sorts of fluctuations in mood were ’illnesses’ amenable to chemical treatment. Particularly prominent were claims that these disorders were traceable to improper levels of serotonin, a neurotransmitter, in the brains of the afflicted — a scientifically discredited notion nonetheless seized upon by the drug marketers, and one responsible for transforming lay conceptions of these conditions. Drawing a hard-and-fast line between sanity and madness has always been a fraught business, and the temptations to expand the boundaries of the pathological have arisen as lucrative, easy-to-prescribe pills have made their appearance in the medical marketplace. ’Social phobia’, for example, was first listed in DSM-III as a rarely encountered ’illness’, but by the time the fourth edition appeared in 1994, psychiatrists suggested that it was afflicting as many as 10% of the population. From the patients’ side of the equation, in some circles it has become almost fashionable to resort to mood-altering drugs, just as, in an earlier generation, some revelled in and boasted about prolonged sessions with their psychoanalysts.

The next edition of the Bible of biological psychiatry, the Diagnostic and Statistical Manual, is in preparation, and, at the time of writing, is slated to appear in 2013, although it is surrounded by some controversy. By all indications, it will further enlarge the realm of mental pathology. The current epidemic of autism, for example, will likely accelerate once the looser category of ’autism spectrum disorder’ is promulgated. A newly proposed category of ’mixed anxiety depressive disorder’ has such nonspecific and widely distributed ’symptoms’ that its creation would likely provoke yet another epidemic. So, too, one suspects, would proposals to sanctify ’temper dysregulation disorder’, or ’binge-eating disorder’. Likely to have even more profound effects is still another trial balloon that has been floated, something called ’psychosis risk syndrome’: a category which would allow early psychiatric intervention (presumably with drug treatments) based on a broad array of extraordinarily loosely defined ’symptoms’ that might (or might not) presage later mental illness. Allen Frances, chair of the task force that created DSM-IV, has warned that, if enacted, these proposed changes:

would create tens of millions of newly misidentified false positive ’patients,’ thus greatly exacerbating the problems already caused by an overly inclusive DSM-IV. There would be massive overtreatment with medications that are unnecessary, expensive, and often quite harmful… [and] the inclusion of many normal variants under the rubric of mental illness, with the result that the core concept of ’mental disorder’ is greatly undermined.

What is remarkable about the expansion of the boundaries of mental disorder that has already occurred, let alone the further changes that loom on the horizon, is that, for all the massive expansion of neuroscientific research that the biological turn in psychiatry and the associated drugs revolution have funded, our knowledge of brain function remains in its infancy, and the aetiology of almost all forms of Bedlam madness remains as mysterious as ever. We simply don’t know what the roots of schizophrenia or major depression are (periodic breathless proclamations to the contrary notwithstanding); and the therapeutics at our disposal remain strikingly limited, and are bought at a heavy price. One startling measure of the price patients pay is seldom attended to, and yet is perhaps the most telling of all: in David Healy’s words, ’Uniquely among major illnesses in the Western world, the life expectancy for patients with serious mental illness has declined.’ This astonishing outcome is a product, ironically, of the metabolic effects of the newer generation of drugs, launched because they have patent protection and thus far higher profit margins: diabetes, massive weight gain, heart disease, and the like, which also have a strongly negative impact on patients’ quality of life. Meanwhile, the social and psychological dimensions of mental disorder are wished away, or passed along to the cheaper and heavily feminized professions of clinical psychology and psychiatric social work to cope with.

Making matters worse, those afflicted with the most serious of mental ills find themselves cast out from even the limited and ambiguous degree of shelter and care once provided in the Victorian asylums. As for psychiatry, which for an interlude saw its major role as listening to patients, it now seems to prefer to listen to Prozac, as the title of a psychiatric best-seller would have it, and to dance to the seductive music played by the pharmaceutical industry. Thus the fate of madness at the beginning of the new millennium.