Tag Archives: mental illness

Bedlam and the Mysterious Air Loom

Air Loom machine

During my college years I was fortunate enough to spend time as a volunteer in a Victorian-era psychiatric hospital in the United Kingdom. Fortunate in two ways: that I was able to make some small yet positive difference in the lives of some of the patients, and that I was able to live on the outside.

Despite the good and professional intentions of its many caring staff, the hospital itself — which shall remain nameless — was the dreary embodiment of many a nightmarish horror film. The building had dark, endless corridors; small, leaky windows; creaky doors, many with locks only on the outside, and even creakier plumbing; spare, cell-like rooms for patients; and treatment rooms with passive restraints fitted to the chairs and beds. Most locals still called it “____ lunatic asylum”.

All of this leads me to the fascinating and tragic story of James Tilly Matthews, a rebellious (and somewhat paranoid) peace activist who was confined to London’s infamous Bedlam asylum in 1797. He was incarcerated for believing he was being coerced and brainwashed by a mysterious governmental mind control machine known as the “Air Loom”.

Subsequent inquiries pronounced Matthews thoroughly sane, but the British government kept him institutionalized anyway because of his verbal threats against officials and the then king, George III. In effect, this made Matthews a political prisoner — precisely what he had always steadfastly maintained he was.

Ironically, George III’s own well-documented, recurrent and serious mental illness did not cost him the throne; he reigned as monarch from 1760 to 1820. Interestingly enough, Bedlam was the popular name for the Bethlem Royal Hospital, sometimes known as St Mary Bethlehem Hospital.

The word “Bedlam”, of course, later came to be a synonym for confusion and chaos.

Read the entire story of James Tilly Matthews and his nemesis, the apothecary and discredited lay psychiatrist John Haslam, at Public Domain Review.

Image: Detail from the lower portion of James Tilly Matthews’ illustration of the Air Loom featured in John Haslam’s Illustrations of Madness (1810). Courtesy: Public Domain Review / Wellcome Library, London. Public Domain.

The Illness Known As Evil

What turns a seemingly ordinary person (usually male) into a brutal killer or mass murderer? How does a quiet computer engineer end up as a cold-blooded executioner of innocents on a terrorist video in 2015? Why does a single guard at a concentration camp help send hundreds of thousands to their deaths during the Second World War? Why do we humans perform acts of such unspeakable brutality and horror?

For as long as the social sciences have existed, researchers have weighed these questions. Is it possible that those who commit such acts of evil are host to a disease of the brain? Some have dubbed this Syndrome E, where E stands for evil. Others are not convinced that evil is a neurological condition with biochemical underpinnings. And so the debate, and the violence, rages on.

From the New Scientist:

The idea that a civilised human being might be capable of barbaric acts is so alien that we often blame our animal instincts – the older, “primitive” areas of the brain taking over and subverting their more rational counterparts. But fresh thinking turns this long-standing explanation on its head. It suggests that people perform brutal acts because the “higher”, more evolved, brain overreaches. The set of brain changes involved has been dubbed Syndrome E – with E standing for evil.

In a world where ideological killings are rife, new insights into this problem are sorely needed. But reframing evil as a disease is controversial. Some believe it could provide justification for heinous acts or hand extreme organisations a recipe for radicalising more young people. Others argue that it denies the reality that we all have the potential for evil within us. Proponents, however, say that if evil really is a pathology, then society ought to try to diagnose susceptible individuals and reduce contagion. And if we can do that, perhaps we can put radicalisation into reverse, too.

Following the second world war, the behaviour of guards in Nazi concentration camps became the subject of study, with some researchers seeing them as willing, ideologically driven executioners, others as mindlessly obeying orders. The debate was reignited in the mid-1990s in the wake of the Rwandan genocide and the Srebrenica massacre in Bosnia. In 1996, The Lancet carried an editorial pointing out that no one was addressing evil from a biological point of view. Neurosurgeon Itzhak Fried, at the University of California, Los Angeles, decided to rise to the challenge.

In a paper published in 1997, he argued that the transformation of non-violent individuals into repetitive killers is characterised by a set of symptoms that suggests a common condition, which he called Syndrome E (see “Seven symptoms of evil”). He suggested that this is the result of “cognitive fracture”, which occurs when a higher brain region, the prefrontal cortex (PFC) – involved in rational thought and decision-making – stops paying attention to signals from more primitive brain regions and goes into overdrive.

The idea captured people’s imaginations, says Fried, because it suggested that you could start to define and describe this basic flaw in the human condition. “Just as a constellation of symptoms such as fever and a cough may signify pneumonia, defining the constellation of symptoms that signify this syndrome may mean that you could recognise it in the early stages.” But it was a theory in search of evidence. Neuroscience has come a long way since then, so Fried organised a conference in Paris earlier this year to revisit the concept.

At the most fundamental level, understanding why people kill is about understanding decision-making, and neuroscientists at the conference homed in on this. Fried’s theory starts with the assumption that people normally have a natural aversion to harming others. If he is correct, the higher brain overrides this instinct in people with Syndrome E. How might that occur?

Etienne Koechlin at the École Normale Supérieure in Paris was able to throw some empirical light on the matter by looking at people obeying rules that conflict with their own preferences. He put volunteers inside a brain scanner and let them choose between two simple tasks, guided by their past experience of which would be the more financially rewarding (paying 6 euros versus 4). After a while he randomly inserted rule-based trials: now there was a colour code indicating which of the two tasks to choose, and volunteers were told that if they disobeyed they would get no money.

Not surprisingly, they followed the rule, even when it meant that choosing the task they had learned would earn them a lower pay-off in the free-choice trials. But something unexpected happened. Although rule-following should have led to a simpler decision, they took longer over it, as if conflicted. In the brain scans, both the lateral and the medial regions of the PFC lit up. The former is known to be sensitive to rules; the latter receives information from the limbic system, an ancient part of the brain that processes emotional states, so is sensitive to our innate preferences. In other words, when following the rule, people still considered their personal preference, but activity in the lateral PFC overrode it.

Of course, playing for a few euros is far removed from choosing to kill fellow humans. However, Koechlin believes his results show that our instinctive values endure even when the game changes. “Rules do not change values, just behaviours,” he says. He interprets this as showing that it is normal, not pathological, for the higher brain to override signals coming from the primitive brain. If Fried’s idea is correct, this process goes into overdrive in Syndrome E, helping to explain how an ordinary person overcomes their squeamishness to kill. The same neuroscience may underlie famous experiments conducted by the psychologist Stanley Milgram at Yale University in the 1960s, which revealed the extraordinary lengths to which people would go out of obedience to an authority figure – even administering what they thought were lethal electric shocks to strangers.

Fried suggests that people experience a visceral reaction when they kill for the first time, but some rapidly become desensitised. And the primary instinct not to harm may be more easily overcome when people are “just following orders”. In unpublished work, Patrick Haggard at University College London has used brain scans to show that this is enough to make us feel less responsible for our actions. “There is something about being coerced that produces a different experience of agency,” he says, “as if people are subjectively able to distance themselves from this unpleasant event they are causing.”

However, what is striking about many accounts of mass killing, both contemporary and historical, is that the perpetrators often choose to kill even when not under orders to do so. In his book Ordinary Men, the historian Christopher Browning recounts the case of a Nazi unit called reserve police battalion 101. No member of this unit was forced to kill. A small minority did so eagerly from the start, but they may have had psychopathic or sadistic tendencies. However, the vast majority of those who were reluctant to kill soon underwent a transformation, becoming just as ruthless. Browning calls them “routinised” killers: it was as if, once they had decided to kill, it quickly became a habit.

Habits have long been considered unthinking, semi-automatic behaviours in which the higher brain is not involved. That seems to support the idea that the primitive brain is in control when seemingly normal people become killers. But this interpretation is challenged by new research by neuroscientist Ann Graybiel at the Massachusetts Institute of Technology. She studies people with common psychiatric disorders, such as addiction and depression, that lead them to habitually make bad decisions. In high-risk, high-stakes situations, they tend to downplay the cost with respect to the benefit and accept an unhealthy level of risk. Graybiel’s work suggests the higher brain is to blame.

In one set of experiments, her group trained rats to acquire habits – following certain runs through mazes. The researchers then suppressed the activity of neurons in an area of the PFC that blocks signals coming from a primitive part of the brain called the amygdala. The rats immediately changed their running behaviour – the habit had been broken. “The old idea that the cognitive brain doesn’t have evaluative access to that habitual behaviour, that it’s beyond its reach, is false,” says Graybiel. “It has moment-to-moment evaluative control.” That’s exciting, she says, because it suggests a way to treat people with maladaptive habits such as obsessive-compulsive disorder, or even, potentially, Syndrome E.

What made the experiment possible was a technique known as optogenetics, which allows light to regulate the activity of genetically engineered neurons in the rat PFC. That wouldn’t be permissible in humans, but cognitive or behavioural therapies, or drugs, could achieve the same effect. Graybiel believes it might even be possible to stop people deciding to kill in the first place by steering them away from the kind of cost-benefit analysis that led them to, say, blow themselves up on a crowded bus. In separate experiments with risk-taking rats, her team found that optogenetically decreasing activity in another part of the limbic system that communicates with the PFC, the striatum, made the rats more risk-averse: “We can just turn a knob and radically alter their behaviour,” she says.

Read the entire article here.

Creativity and Mental Illness

Image: Vincent van Gogh, Self-Portrait with Bandaged Ear

The creative genius — oft misunderstood, outcast, tortured, misanthropic, fueled by demon spirits. Yet the same description seems equally apt for many of those unfortunate enough to suffer from mental illness. So could creativity and mental illness both be symptoms of a broader underlying spectrum “disorder”? After all, a not insignificant number of people and businesses regard creativity as a behavioral problem — best left outside the front door of the office. Time to check out the results of the latest psychological study.

From the Guardian:

The ancient Greeks were first to make the point. Shakespeare raised the prospect too. But Lord Byron was, perhaps, the most direct of them all: “We of the craft are all crazy,” he told the Countess of Blessington, casting a wary eye over his fellow poets.

The notion of the tortured artist is a stubborn meme. Creativity, it states, is fuelled by the demons that artists wrestle in their darkest hours. The idea is fanciful to many scientists. But a new study claims the link may be well-founded after all, and written into the twisted molecules of our DNA.

In a large study published on Monday, scientists in Iceland report that genetic factors that raise the risk of bipolar disorder and schizophrenia are found more often in people in creative professions. Painters, musicians, writers and dancers were, on average, 25% more likely to carry the gene variants than professions the scientists judged to be less creative, among which were farmers, manual labourers and salespeople.

Kari Stefansson, founder and CEO of deCODE, a genetics company based in Reykjavik, said the findings, described in the journal Nature Neuroscience, point to a common biology for some mental disorders and creativity. “To be creative, you have to think differently,” he told the Guardian. “And when we are different, we have a tendency to be labelled strange, crazy and even insane.”

The scientists drew on genetic and medical information from 86,000 Icelanders to find genetic variants that doubled the average risk of schizophrenia, and raised the risk of bipolar disorder by more than a third. When they looked at how common these variants were in members of national arts societies, they found a 17% increase compared with non-members.

The researchers went on to check their findings in large medical databases held in the Netherlands and Sweden. Among these 35,000 people, those deemed to be creative (by profession or through answers to a questionnaire) were nearly 25% more likely to carry the mental disorder variants.

Stefansson believes that scores of genes increase the risk of schizophrenia and bipolar disorder. These may alter the ways in which many people think, but in most people do nothing very harmful. But for 1% of the population, genetic factors, life experiences and other influences can culminate in problems, and a diagnosis of mental illness.

“Often, when people are creating something new, they end up straddling between sanity and insanity,” said Stefansson. “I think these results support the old concept of the mad genius. Creativity is a quality that has given us Mozart, Bach, Van Gogh. It’s a quality that is very important for our society. But it comes at a risk to the individual, and 1% of the population pays the price for it.”

Stefansson concedes that his study found only a weak link between the genetic variants for mental illness and creativity. And it is this that other scientists pick up on. The genetic factors that raise the risk of mental problems explained only about 0.25% of the variation in people’s artistic ability, the study found. David Cutler, a geneticist at Emory University in Atlanta, puts that number in perspective: “If the distance between me, the least artistic person you are going to meet, and an actual artist is one mile, these variants appear to collectively explain 13 feet of the distance,” he said.

Most of the artist’s creative flair, then, is down to different genetic factors, or to other influences altogether, such as life experiences, that set them on their creative journey.

For Stefansson, even a small overlap between the biology of mental illness and creativity is fascinating. “It means that a lot of the good things we get in life, through creativity, come at a price. It tells me that when it comes to our biology, we have to understand that everything is in some way good and in some way bad,” he said.

Read the entire article here.

Image: Vincent van Gogh, self-portrait, 1889. Courtesy of Courtauld Institute Galleries, London. Wikipaintings.org. Public Domain.

Is Your City Killing You?

The stresses of modern-day living are taking a toll on your mind and body, and more so if you happen to live in a concrete jungle; the effects are most pronounced for those of us in the largest urban centers. That’s the finding of some fascinating new brain research out of Germany. The researchers’ simple answer for a lower-stress life: move to the countryside.

From The Guardian:

You are lying down with your head in a noisy and tightfitting fMRI brain scanner, which is unnerving in itself. You agreed to take part in this experiment, and at first the psychologists in charge seemed nice.

They set you some rather confusing maths problems to solve against the clock, and you are doing your best, but they aren’t happy. “Can you please concentrate a little better?” they keep saying into your headphones. Or, “You are among the worst performing individuals to have been studied in this laboratory.” Helpful things like that. It is a relief when time runs out.

Few people would enjoy this experience, and indeed the volunteers who underwent it were monitored to make sure they had a stressful time. Their minor suffering, however, provided data for what became a major study, and a global news story. The researchers, led by Dr Andreas Meyer-Lindenberg of the Central Institute of Mental Health in Mannheim, Germany, were trying to find out more about how the brains of different people handle stress. They discovered that city dwellers’ brains, compared with people who live in the countryside, seem not to handle it so well.

To be specific, while Meyer-Lindenberg and his accomplices were stressing out their subjects, they were looking at two brain regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are known to be involved in assessing threats and generating fear, while the pACC in turn helps to regulate the amygdalas. In stressed citydwellers, the amygdalas appeared more active on the scanner; in people who lived in small towns, less so; in people who lived in the countryside, least of all.

And something even more intriguing was happening in the pACC. Here the important relationship was not with where the subjects lived at the time, but where they grew up. Again, those with rural childhoods showed the least active pACCs, those with urban ones the most. In the urban group, moreover, there seemed not to be the same smooth connection between the behaviour of the two brain regions that was observed in the others. An erratic link between the pACC and the amygdalas is often seen in those with schizophrenia too. And schizophrenic people are much more likely to live in cities.

When the results were published in Nature, in 2011, media all over the world hailed the study as proof that cities send us mad. Of course it proved no such thing – but it did suggest it. Even allowing for all the usual caveats about the limitations of fMRI imaging, the small size of the study group and the huge holes that still remained in our understanding, the results offered a tempting glimpse at the kind of urban warping of our minds that some people, at least, have linked to city life since the days of Sodom and Gomorrah.

The year before the Meyer-Lindenberg study was published, the existence of that link had been established still more firmly by a group of Dutch researchers led by Dr Jaap Peen. In their meta-analysis (essentially a pooling together of many other pieces of research) they found that living in a city roughly doubles the risk of schizophrenia – around the same level of danger that is added by smoking a lot of cannabis as a teenager.

At the same time urban living was found to raise the risk of anxiety disorders and mood disorders by 21% and 39% respectively. Interestingly, however, a person’s risk of addiction disorders seemed not to be affected by where they live. At one time it was considered that those at risk of mental illness were just more likely to move to cities, but other research has now more or less ruled that out.

So why is it that the larger the settlement you live in, the more likely you are to become mentally ill? Another German researcher and clinician, Dr Mazda Adli, is a keen advocate of one theory, which implicates that most paradoxical urban mixture: loneliness in crowds. “Obviously our brains are not perfectly shaped for living in urban environments,” Adli says. “In my view, if social density and social isolation come at the same time and hit high-risk individuals … then city-stress related mental illness can be the consequence.”

Read the entire story here.

Seeking Clues to Suicide

Suicide still ranks in many cultures among the most common ways to die. The statistics are sobering: in 2012, more active-duty U.S. soldiers died by suicide than died in combat. Despite advances in the treatment of mental illness, little has made a dent in the number of people who take their own lives each year. Psychologist Matthew Nock hopes to change this through some innovative research.

From the New York Times:

For reasons that have eluded people forever, many of us seem bent on our own destruction. Recently more human beings have been dying by suicide annually than by murder and warfare combined. Despite the progress made by science, medicine and mental-health care in the 20th century — the sequencing of our genome, the advent of antidepressants, the reconsidering of asylums and lobotomies — nothing has been able to drive down the suicide rate in the general population. In the United States, it has held relatively steady since 1942. Worldwide, roughly one million people kill themselves every year. Last year, more active-duty U.S. soldiers killed themselves than died in combat; their suicide rate has been rising since 2004. Last month, the Centers for Disease Control and Prevention announced that the suicide rate among middle-aged Americans has climbed nearly 30 percent since 1999. In response to that widely reported increase, Thomas Frieden, the director of the C.D.C., appeared on PBS NewsHour and advised viewers to cultivate a social life, get treatment for mental-health problems, exercise and consume alcohol in moderation. In essence, he was saying, keep out of those demographic groups with high suicide rates, which include people with a mental illness like a mood disorder, social isolates and substance abusers, as well as elderly white males, young American Indians, residents of the Southwest, adults who suffered abuse as children and people who have guns handy.

But most individuals in every one of those groups never have suicidal thoughts — even fewer act on them — and no data exist to explain the difference between those who will and those who won’t. We also have no way of guessing when — in the next hour? in the next decade? — known risk factors might lead to an attempt. Our understanding of how suicidal thinking progresses, or how to spot and halt it, is little better now than it was two and a half centuries ago, when we first began to consider suicide a medical rather than philosophical problem and physicians prescribed, to ward it off, buckets of cold water thrown at the head.

“We’ve never gone out and observed, as an ecologist would or a biologist would go out and observe the thing you’re interested in for hours and hours and hours and then understand its basic properties and then work from that,” Matthew K. Nock, the director of Harvard University’s Laboratory for Clinical and Developmental Research, told me. “We’ve never done it.”

It was a bright December morning, and we were in his office on the 12th floor of the building that houses the school’s psychology department, a white concrete slab jutting above its neighbors like a watchtower. Below, Cambridge looked like a toy city — gabled roofs and steeples, a ribbon of road, windshields winking in the sun. Nock had just held a meeting with four members of his research team — he in his swivel chair, they on his sofa — about several of the studies they were running. His blue eyes matched his diamond-plaid sweater, and he was neatly shorn and upbeat. He seemed more like a youth soccer coach, which he is on Saturday mornings for his son’s first-grade team, than an expert in self-destruction.

At the meeting, I listened to Nock and his researchers discuss a study they were collaborating on with the Army. They were calling soldiers who had recently attempted suicide and asking them to explain what they had done and why. Nock hoped that sifting through the interview transcripts for repeated phrasings or themes might suggest predictive patterns that he could design tests to catch. A clinical psychologist, he had trained each of his researchers how to ask specific questions over the telephone. Adam Jaroszewski, an earnest 29-year-old in tortoiseshell glasses, told me that he had been nervous about calling subjects in the hospital, where they were still recovering, and probing them about why they tried to end their lives: Why that moment? Why that method? Could anything have happened to make them change their minds? Though the soldiers had volunteered to talk, Jaroszewski worried about the inflections of his voice: how could he put them at ease and sound caring and grateful for their participation without ceding his neutral scientific tone? Nock, he said, told him that what helped him find a balance between empathy and objectivity was picturing Columbo, the frumpy, polite, persistently quizzical TV detective played by Peter Falk. “Just try to be really, really curious,” Nock said.

That curiosity has made Nock, 39, one of the most original and influential suicide researchers in the world. In 2011, he received a MacArthur genius award for inventing new ways to investigate the hidden workings of a behavior that seems as impossible to untangle, empirically, as love or dreams.

Trying to study what people are thinking before they try to kill themselves is like trying to examine a shadow with a flashlight: the minute you spotlight it, it disappears. Researchers can’t ethically induce suicidal thinking in the lab and watch it develop. Uniquely human, it can’t be observed in other species. And it is impossible to interview anyone who has died by suicide. To understand it, psychologists have most often employed two frustratingly imprecise methods: they have investigated the lives of people who have killed themselves, and any notes that may have been left behind, looking for clues to what their thinking might have been, or they have asked people who have attempted suicide to describe their thought processes — though their mental states may differ from those of people whose attempts were lethal and their recollections may be incomplete or inaccurate. Such investigative methods can generate useful statistics and hypotheses about how a suicidal impulse might start and how it travels from thought to action, but that’s not the same as objective evidence about how it unfolds in real time.

Read the entire article here.

Image: 2007 suicide statistics for 15-24 year-olds. Courtesy of Crimson White, UA.