Tag Archives: good

The Illness Known As Evil

What turns a seemingly ordinary person (usually male) into a brutal killer or mass murderer? How does a quiet computer engineer end up as a cold-blooded executioner of innocents on a terrorist video in 2015? How does a single guard at a concentration camp come to lead hundreds of thousands to their deaths during the Second World War? Why do we humans perform acts of such unspeakable brutality and horror?

For as long as the social sciences have existed, researchers have weighed these questions. Is it possible that those who commit such acts of evil are host to a disease of the brain? Some have dubbed this Syndrome E, where E stands for evil. Others are not convinced that evil is a neurological condition with biochemical underpinnings. And so the debate, and the violence, rages on.

From the New Scientist:

The idea that a civilised human being might be capable of barbaric acts is so alien that we often blame our animal instincts – the older, “primitive” areas of the brain taking over and subverting their more rational counterparts. But fresh thinking turns this long-standing explanation on its head. It suggests that people perform brutal acts because the “higher”, more evolved, brain overreaches. The set of brain changes involved has been dubbed Syndrome E – with E standing for evil.

In a world where ideological killings are rife, new insights into this problem are sorely needed. But reframing evil as a disease is controversial. Some believe it could provide justification for heinous acts or hand extreme organisations a recipe for radicalising more young people. Others argue that it denies the reality that we all have the potential for evil within us. Proponents, however, say that if evil really is a pathology, then society ought to try to diagnose susceptible individuals and reduce contagion. And if we can do that, perhaps we can put radicalisation into reverse, too.

Following the second world war, the behaviour of guards in Nazi concentration camps became the subject of study, with some researchers seeing them as willing, ideologically driven executioners, others as mindlessly obeying orders. The debate was reignited in the mid-1990s in the wake of the Rwandan genocide and the Srebrenica massacre in Bosnia. In 1996, The Lancet carried an editorial pointing out that no one was addressing evil from a biological point of view. Neurosurgeon Itzhak Fried, at the University of California, Los Angeles, decided to rise to the challenge.

In a paper published in 1997, he argued that the transformation of non-violent individuals into repetitive killers is characterised by a set of symptoms that suggests a common condition, which he called Syndrome E (see “Seven symptoms of evil”). He suggested that this is the result of “cognitive fracture”, which occurs when a higher brain region, the prefrontal cortex (PFC) – involved in rational thought and decision-making – stops paying attention to signals from more primitive brain regions and goes into overdrive.

The idea captured people’s imaginations, says Fried, because it suggested that you could start to define and describe this basic flaw in the human condition. “Just as a constellation of symptoms such as fever and a cough may signify pneumonia, defining the constellation of symptoms that signify this syndrome may mean that you could recognise it in the early stages.” But it was a theory in search of evidence. Neuroscience has come a long way since then, so Fried organised a conference in Paris earlier this year to revisit the concept.

At the most fundamental level, understanding why people kill is about understanding decision-making, and neuroscientists at the conference homed in on this. Fried’s theory starts with the assumption that people normally have a natural aversion to harming others. If he is correct, the higher brain overrides this instinct in people with Syndrome E. How might that occur?

Etienne Koechlin at the École Normale Supérieure in Paris was able to throw some empirical light on the matter by looking at people obeying rules that conflict with their own preferences. He put volunteers inside a brain scanner and let them choose between two simple tasks, guided by their past experience of which would be the more financially rewarding (paying 6 euros versus 4). After a while he randomly inserted rule-based trials: now there was a colour code indicating which of the two tasks to choose, and volunteers were told that if they disobeyed they would get no money.

Not surprisingly, they followed the rule, even when it meant that choosing the task they had learned would earn them a lower pay-off in the free-choice trials. But something unexpected happened. Although rule-following should have led to a simpler decision, they took longer over it, as if conflicted. In the brain scans, both the lateral and the medial regions of the PFC lit up. The former is known to be sensitive to rules; the latter receives information from the limbic system, an ancient part of the brain that processes emotional states, so is sensitive to our innate preferences. In other words, when following the rule, people still considered their personal preference, but activity in the lateral PFC overrode it.

Of course, playing for a few euros is far removed from choosing to kill fellow humans. However, Koechlin believes his results show that our instinctive values endure even when the game changes. “Rules do not change values, just behaviours,” he says. He interprets this as showing that it is normal, not pathological, for the higher brain to override signals coming from the primitive brain. If Fried’s idea is correct, this process goes into overdrive in Syndrome E, helping to explain how an ordinary person overcomes their squeamishness to kill. The same neuroscience may underlie famous experiments conducted by the psychologist Stanley Milgram at Yale University in the 1960s, which revealed the extraordinary lengths to which people would go out of obedience to an authority figure – even administering what they thought were lethal electric shocks to strangers.

Fried suggests that people experience a visceral reaction when they kill for the first time, but some rapidly become desensitised. And the primary instinct not to harm may be more easily overcome when people are “just following orders”. In unpublished work, Patrick Haggard at University College London has used brain scans to show that this is enough to make us feel less responsible for our actions. “There is something about being coerced that produces a different experience of agency,” he says, “as if people are subjectively able to distance themselves from this unpleasant event they are causing.”

However, what is striking about many accounts of mass killing, both contemporary and historical, is that the perpetrators often choose to kill even when not under orders to do so. In his book Ordinary Men, the historian Christopher Browning recounts the case of a Nazi unit called Reserve Police Battalion 101. No member of this unit was forced to kill. A small minority did so eagerly from the start, but they may have had psychopathic or sadistic tendencies. However, the vast majority of those who were reluctant to kill soon underwent a transformation, becoming just as ruthless. Browning calls them “routinised” killers: it was as if, once they had decided to kill, it quickly became a habit.

Habits have long been considered unthinking, semi-automatic behaviours in which the higher brain is not involved. That seems to support the idea that the primitive brain is in control when seemingly normal people become killers. But this interpretation is challenged by new research by neuroscientist Ann Graybiel at the Massachusetts Institute of Technology. She studies people with common psychiatric disorders, such as addiction and depression, that lead them to habitually make bad decisions. In high-risk, high-stakes situations, they tend to downplay the cost with respect to the benefit and accept an unhealthy level of risk. Graybiel’s work suggests the higher brain is to blame.

In one set of experiments, her group trained rats to acquire habits – following certain runs through mazes. The researchers then suppressed the activity of neurons in an area of the PFC that blocks signals coming from a primitive part of the brain called the amygdala. The rats immediately changed their running behaviour – the habit had been broken. “The old idea that the cognitive brain doesn’t have evaluative access to that habitual behaviour, that it’s beyond its reach, is false,” says Graybiel. “It has moment-to-moment evaluative control.” That’s exciting, she says, because it suggests a way to treat people with maladaptive habits such as obsessive-compulsive disorder, or even, potentially, Syndrome E.

What made the experiment possible was a technique known as optogenetics, which allows light to regulate the activity of genetically engineered neurons in the rat PFC. That wouldn’t be permissible in humans, but cognitive or behavioural therapies, or drugs, could achieve the same effect. Graybiel believes it might even be possible to stop people deciding to kill in the first place by steering them away from the kind of cost-benefit analysis that led them to, say, blow themselves up on a crowded bus. In separate experiments with risk-taking rats, her team found that optogenetically decreasing activity in another part of the limbic system that communicates with the PFC, the striatum, made the rats more risk-averse: “We can just turn a knob and radically alter their behaviour,” she says.

Read the entire article here.

The Paradox That is Humanity

Fanatical brutality and altruism. Greed and self-sacrifice. Torture and love. Cruelty and remorse. Care and wickedness. These are the paradoxical traits that make us uniquely human. Many people give of themselves, love unconditionally, and exhibit kindness, selflessness and compassion at every turn. And yet, describing the immolation, crucifixions and beheadings of fellow humans by humans as inhuman or “bestial” rather misses the point. While some animals maim and kill their own, and even feast on the spoils, humans have risen above all other species to a pinnacle of barbaric behavior that demands we all continually reflect on our humanity, both good and evil. Sadly, this is not news: persecution of one group by another is encoded in our DNA.

From the Guardian:

It describes itself as “an inclusive school where gospel values underpin a caring and supporting ethos, manifest in care for each individual”. And I have no reason to doubt it. But one of the questions raised by the popularity of Hilary Mantel’s Wolf Hall is whether St Thomas More Catholic School is named after a monster or a saint. With Mantel, gone is the More of heroic humanism popularised by Robert Bolt’s fawning A Man for All Seasons. In its place she reminds us that More was persecutor-in-chief towards those who struggled to see the Bible translated into English and personally responsible for the burning of a number of men who dared question the ultimate authority of the Roman church.

This week’s Wolf Hall episode ended with the death of Middle Temple lawyer James Bainham at Smithfield on 30 April 1532. More tortured Bainham in the Tower of London for questioning the sanctity of Thomas Becket and for speaking out against the financial racket of the doctrine of purgatory that “picked men’s purses”. At first, under the pressure of torture, Bainham recanted his views. But within weeks of being released, Bainham re-asserted them. And so More had him burnt at the stake.

The recent immolation of Jordanian pilot Lieutenant Muadh al-Kasasbeh by Islamic State (Isis) brings home the horrendous reality of what this involves. I watched it on the internet. And I wish I hadn’t. I felt voyeuristic and complicit. And though I justified watching on the grounds that I was going to write about it, and thus (apparently) needed to see the truly horrific footage, I don’t think I was right to do so. As well as seeing things that I will never be able to un-see, I felt morally soiled – as if I had done exactly what Isis had wanted me to do. I mean, if no one ever watched this stuff, they wouldn’t make it.

Afterwards, I wandered down to Smithfield market to get some air. I sat in a posh cafe and tried to picture what the place must have been like when Bainham was killed. Both then and now, death by burning was a staged event, deliberately public, a theatre of cruelty designed for political/religious instruction. In his book on burnings in 16th-century England, the historian Eamon Duffy recounts a burning in Dartford in 1555: “‘Thither came … fruiterers wyth horse loades of cherries, and sold them’.” Can you imagine: passing round the cherries as you watch people burn? What sort of creatures are we?

Yes, religion is the common factor here. But if there is no God (as some say) and religion is a purely human phenomenon, then it is humanity that is also in the dock. For when we speak of these acts as “inhuman”, or of the “inhumanity” of Isis, we are surely kidding ourselves: history teaches that human beings are often exactly like this. We are often viciously cruel and without an ounce of pity and, yet, all too often in denial about our basic capacity for wickedness. One cannot be in denial after watching that video.

And yet the thing that it is almost impossible for us to get our heads around is that this capacity for wickedness can also co-exist with an extraordinary capacity for love and care and self-sacrifice. More, of course, is a perfect case in point. As well as being declared a saint, More was famously one of the early humanists, a friend of Erasmus. In his Utopia, he fantasised about a world where people lived together in harmony, with no private property to divide them. He championed female education and (believe it or not) religious toleration.

Robert Bolt may have reflected only one aspect of More’s character, but More did stand up for what he believed in, even to the point of death. And when More was declared a saint in 1935, it was partly a powerful and deliberate witness, urging German Christians to do the same. And who would have guessed that, within a few years, apparently civilised Europe would return to the burning of human bodies, this time on an industrial scale. And this time, not in the name of God.

Read the entire article here.

Image: 12th century Byzantine manuscript illustration depicting Byzantine Greeks (Christian/Eastern Orthodox) punishing Cretan Saracens (Muslim) in the 9th century. Courtesy of Madrid Skylitzes / Wikipedia.

Does Evil Exist?

Humans have a peculiar habit of anthropomorphizing anything that moves and, for that matter, most objects that remain static as well. So it is not surprising that evil is often personified and even stereotyped; it is said that true evil even has a home somewhere below where you currently stand.

From the Guardian:

The friction between the presence of evil in our world and belief in a loving creator God sparks some tough questions. For many religious people these are primarily existential questions, as their faith contends with doubt and bewilderment. The biblical figure of Job, the righteous man who loses everything that is dear to him, remains a powerful example of this struggle. But the “problem of evil” is also an intellectual puzzle that has taxed the minds of philosophers and theologians for centuries.

One of the most influential responses to the problem of evil comes from St Augustine. As a young man, Augustine followed the teachings of a Christian sect known as the Manichees. At the heart of Manichean theology was the idea of a cosmic battle between the forces of good and evil. This, of course, proposes one possible solution to the problem of evil: all goodness, purity and light comes from God, and the darkness of evil has a different source.

However, Augustine came to regard this cosmic dualism as heretical, since it undermined God’s sovereignty. Of course, he wanted to hold on to the absolute goodness of God. But if God is the source of all things, where did evil come from? Augustine’s radical answer to this question is that evil does not actually come from anywhere. Rejecting the idea that evil is a positive force, he argues that it is merely a “name for nothing other than the absence of good”.

At first glance this looks like a philosophical sleight of hand. Augustine might try to define evil out of existence, but this cannot diminish the reality of the pain, suffering and cruelty that prompt the question of evil in the first place. As the 20th-century Catholic writer Charles Journet put it, the non-being of evil “can have a terrible reality, like letters carved out of stone”. Any defence of Augustine’s position has to begin by pointing out that his account of evil is metaphysical rather than empirical. In other words, he is not saying that our experience of evil is unreal. On the contrary, since a divinely created world is naturally oriented toward the good, any lack of goodness will be felt as painful, wrong and urgently in need of repair. To say that hunger is “merely” the absence of food is not to deny the intense suffering it involves.

One consequence of Augustine’s mature view of evil as “non-being”, a privation of the good, is that evil eludes our understanding. His sophisticated metaphysics of evil confirms our intuitive response of incomprehension in the face of gratuitous brutality, or of senseless “natural” evil like a child’s cancer. Augustine emphasises that evil is ultimately inexplicable, since it has no substantial existence: “No one therefore must try to get to know from me what I know that I do not know, unless, it may be, in order to learn not to know what must be known to be incapable of being known!” Interestingly, by the way, this mysticism about evil mirrors the “negative theology” which insists that God exceeds the limits of our understanding.

So, by his own admission, Augustine’s “solution” to the problem of evil defends belief in God without properly explaining the kinds of acts which exert real pressure on religious faith. He may be right to point out that the effects of evil tend to be destruction and disorder – a twisting or scarring of nature, and of souls. Nevertheless, believers and non-believers alike will feel that this fails to do justice to the power of evil. We may demand a better account of the apparent positivity of evil – of the fact, for example, that holocausts and massacres often involve meticulous planning, technical innovation and creative processes of justification.

Surprisingly, though, the basic insight of Augustinian theodicy finds support in recent science. In his 2011 book Zero Degrees of Empathy, Cambridge psychopathology professor Simon Baron-Cohen proposes “a new theory of human cruelty”. His goal, he writes, is to replace the “unscientific” term “evil” with the idea of “empathy erosion”: “People said to be cruel or evil are simply at one extreme of the empathy spectrum,” he writes. (He points out, though, that some people at this extreme display no more cruelty than those higher up the empathy scale – they are simply socially isolated.)

Loss of empathy resembles the Augustinian concept of evil in that it is a deficiency of goodness – or, to put it less moralistically, a disruption of normal functioning – rather than a positive force. In this way at least, Baron-Cohen’s theory echoes Augustine’s argument, against the Manicheans, that evil is not an independent reality but, in essence, a lack or a loss.

Read the entire article following the jump.

Image: Marvel Comics Vault of Evil. Courtesy of Wikia / Marvel Comics.