Tag Archives: human

The Illness Known As Evil

What turns a seemingly ordinary person (usually male) into a brutal killer or mass-murderer? How did a quiet computer engineer end up as a cold-blooded executioner of innocents on a terrorist video in 2015? Why did a single concentration camp guard lead hundreds of thousands to their deaths during the Second World War? Why do we humans perform acts of such unspeakable brutality and horror?

For as long as the social sciences have existed, researchers have weighed these questions. Is it possible that those who commit such acts of evil are host to a disease of the brain? Some have dubbed this Syndrome E, where E stands for evil. Others are not convinced that evil is a neurological condition with biochemical underpinnings. And so the debate, and the violence, rages on.

From the New Scientist:

The idea that a civilised human being might be capable of barbaric acts is so alien that we often blame our animal instincts – the older, “primitive” areas of the brain taking over and subverting their more rational counterparts. But fresh thinking turns this long-standing explanation on its head. It suggests that people perform brutal acts because the “higher”, more evolved, brain overreaches. The set of brain changes involved has been dubbed Syndrome E – with E standing for evil.

In a world where ideological killings are rife, new insights into this problem are sorely needed. But reframing evil as a disease is controversial. Some believe it could provide justification for heinous acts or hand extreme organisations a recipe for radicalising more young people. Others argue that it denies the reality that we all have the potential for evil within us. Proponents, however, say that if evil really is a pathology, then society ought to try to diagnose susceptible individuals and reduce contagion. And if we can do that, perhaps we can put radicalisation into reverse, too.

Following the second world war, the behaviour of guards in Nazi concentration camps became the subject of study, with some researchers seeing them as willing, ideologically driven executioners, others as mindlessly obeying orders. The debate was reignited in the mid-1990s in the wake of the Rwandan genocide and the Srebrenica massacre in Bosnia. In 1996, The Lancet carried an editorial pointing out that no one was addressing evil from a biological point of view. Neurosurgeon Itzhak Fried, at the University of California, Los Angeles, decided to rise to the challenge.

In a paper published in 1997, he argued that the transformation of non-violent individuals into repetitive killers is characterised by a set of symptoms that suggests a common condition, which he called Syndrome E (see “Seven symptoms of evil”). He suggested that this is the result of “cognitive fracture”, which occurs when a higher brain region, the prefrontal cortex (PFC) – involved in rational thought and decision-making – stops paying attention to signals from more primitive brain regions and goes into overdrive.

The idea captured people’s imaginations, says Fried, because it suggested that you could start to define and describe this basic flaw in the human condition. “Just as a constellation of symptoms such as fever and a cough may signify pneumonia, defining the constellation of symptoms that signify this syndrome may mean that you could recognise it in the early stages.” But it was a theory in search of evidence. Neuroscience has come a long way since then, so Fried organised a conference in Paris earlier this year to revisit the concept.

At the most fundamental level, understanding why people kill is about understanding decision-making, and neuroscientists at the conference homed in on this. Fried’s theory starts with the assumption that people normally have a natural aversion to harming others. If he is correct, the higher brain overrides this instinct in people with Syndrome E. How might that occur?

Etienne Koechlin at the École Normale Supérieure in Paris was able to throw some empirical light on the matter by looking at people obeying rules that conflict with their own preferences. He put volunteers inside a brain scanner and let them choose between two simple tasks, guided by their past experience of which would be the more financially rewarding (paying 6 euros versus 4). After a while he randomly inserted rule-based trials: now there was a colour code indicating which of the two tasks to choose, and volunteers were told that if they disobeyed they would get no money.

Not surprisingly, they followed the rule, even when it meant that choosing the task they had learned would earn them a lower pay-off in the free-choice trials. But something unexpected happened. Although rule-following should have led to a simpler decision, they took longer over it, as if conflicted. In the brain scans, both the lateral and the medial regions of the PFC lit up. The former is known to be sensitive to rules; the latter receives information from the limbic system, an ancient part of the brain that processes emotional states, so is sensitive to our innate preferences. In other words, when following the rule, people still considered their personal preference, but activity in the lateral PFC overrode it.

Of course, playing for a few euros is far removed from choosing to kill fellow humans. However, Koechlin believes his results show that our instinctive values endure even when the game changes. “Rules do not change values, just behaviours,” he says. He interprets this as showing that it is normal, not pathological, for the higher brain to override signals coming from the primitive brain. If Fried’s idea is correct, this process goes into overdrive in Syndrome E, helping to explain how an ordinary person overcomes their squeamishness to kill. The same neuroscience may underlie famous experiments conducted by the psychologist Stanley Milgram at Yale University in the 1960s, which revealed the extraordinary lengths to which people would go out of obedience to an authority figure – even administering what they thought were lethal electric shocks to strangers.

Fried suggests that people experience a visceral reaction when they kill for the first time, but some rapidly become desensitised. And the primary instinct not to harm may be more easily overcome when people are “just following orders”. In unpublished work, Patrick Haggard at University College London has used brain scans to show that this is enough to make us feel less responsible for our actions. “There is something about being coerced that produces a different experience of agency,” he says, “as if people are subjectively able to distance themselves from this unpleasant event they are causing.”

However, what is striking about many accounts of mass killing, both contemporary and historical, is that the perpetrators often choose to kill even when not under orders to do so. In his book Ordinary Men, the historian Christopher Browning recounts the case of a Nazi unit called reserve police battalion 101. No member of this unit was forced to kill. A small minority did so eagerly from the start, but they may have had psychopathic or sadistic tendencies. However, the vast majority of those who were reluctant to kill soon underwent a transformation, becoming just as ruthless. Browning calls them “routinised” killers: it was as if, once they had decided to kill, it quickly became a habit.

Habits have long been considered unthinking, semi-automatic behaviours in which the higher brain is not involved. That seems to support the idea that the primitive brain is in control when seemingly normal people become killers. But this interpretation is challenged by new research by neuroscientist Ann Graybiel at the Massachusetts Institute of Technology. She studies people with common psychiatric disorders, such as addiction and depression, that lead them to habitually make bad decisions. In high-risk, high-stakes situations, they tend to downplay the cost with respect to the benefit and accept an unhealthy level of risk. Graybiel’s work suggests the higher brain is to blame.

In one set of experiments, her group trained rats to acquire habits – following certain runs through mazes. The researchers then suppressed the activity of neurons in an area of the PFC that blocks signals coming from a primitive part of the brain called the amygdala. The rats immediately changed their running behaviour – the habit had been broken. “The old idea that the cognitive brain doesn’t have evaluative access to that habitual behaviour, that it’s beyond its reach, is false,” says Graybiel. “It has moment-to-moment evaluative control.” That’s exciting, she says, because it suggests a way to treat people with maladaptive habits such as obsessive-compulsive disorder, or even, potentially, Syndrome E.

What made the experiment possible was a technique known as optogenetics, which allows light to regulate the activity of genetically engineered neurons in the rat PFC. That wouldn’t be permissible in humans, but cognitive or behavioural therapies, or drugs, could achieve the same effect. Graybiel believes it might even be possible to stop people deciding to kill in the first place by steering them away from the kind of cost-benefit analysis that led them to, say, blow themselves up on a crowded bus. In separate experiments with risk-taking rats, her team found that optogenetically decreasing activity in another part of the limbic system that communicates with the PFC, the striatum, made the rats more risk-averse: “We can just turn a knob and radically alter their behaviour,” she says.

Read the entire article here.

Bestial or Human?

Following the recent horrendous mass murders in Lebanon and Paris, I heard several politicians and commentators describe the atrocities as “bestial”. So, if you’re somewhat of a pedant like me, you’ll know that bestial means “of or like an animal”. This should make you scratch your head, because the terror and bloodshed is nowhere close to bestial — it’s thoroughly human.

Only humans have learned to revel and excel in these types of destructive behaviors, and on such a scale. So, next time you hear someone label such an act as bestial, please correct them, and hope that one day we’ll all learn to be more bestial.

And, on the subject of the recent atrocities, I couldn’t agree more with the following two articles: the murderers are certainly following a bankrupt ideology, but they’re far from mindless.

From the Guardian:

During Sunday night’s monologue he [John Oliver, Last Week Tonight show on HBO] took advantage of the US cable channel’s relaxed policy on swearing. “After the many necessary and appropriate moments of silence, I’d like to offer you a moment of premium cable profanity … it’s hardly been 48 hours but there are a few things we can say for certain.

“First, as of now, we know this attack was carried out by gigantic fucking arseholes … possibly working with other fucking arseholes, definitely working in service of an ideology of pure arseholery.

“Second, and this goes almost without saying, fuck these arseholes …

“And, third, it is important to remember, nothing about what these arseholes are trying to do is going to work. France is going to endure and I’ll tell you why. If you are in a war of culture and lifestyle with France, good fucking luck. Go ahead, bring your bankrupt ideology. They’ll bring Jean-Paul Sartre, Edith Piaf, fine wine, Gauloise cigarettes, Camus, camembert, madeleines, macarons, and the fucking croquembouche. You just brought a philosophy of rigorous self-abnegation to a pastry fight, my friend.

Read the entire article here, and anthropologist Scott Atran’s (University of Michigan) op-ed here.

Thirty Going on Sixty or Sixty Going on Thirty?

By now you probably realize that I’m a glutton for human research studies. I’m particularly fond of studies that highlight a particular finding one week, only to be contradicted by the results of another study the following week.

However, despite the lack of contradiction so far, this one, published in the Proceedings of the National Academy of Sciences, caught my eye. It suggests that we age at remarkably different rates. While most subjects showed a biological age within a handful of years of their actual, chronological age, there were some surprises: some 38-year-olds had a biological age approaching 60, while others appeared nearly a decade younger than their years.

From the BBC:

A study of people born within a year of each other has uncovered a huge gulf in the speed at which their bodies age.

The report, in Proceedings of the National Academy of Sciences, tracked traits such as weight, kidney function and gum health.

Some of the 38-year-olds were ageing so badly that their “biological age” was on the cusp of retirement.

The team said the next step was to discover what was affecting the pace of ageing.

The international research group followed 954 people from the same town in New Zealand who were all born in 1972-73.

The scientists looked at 18 different ageing-related traits when the group turned 26, 32 and 38 years old.

The analysis showed that at the age of 38, the people’s biological ages ranged from the late-20s to those who were nearly 60.

“They look rough, they look lacking in vitality,” said Prof Terrie Moffitt from Duke University in the US.

The study said some people had almost stopped ageing during the period of the study, while others were gaining nearly three years of biological age for every twelve months that passed.

People with older biological ages tended to do worse in tests of brain function and had a weaker grip.

Most people’s biological age was within a few years of their chronological age. It is unclear how the pace of biological ageing changes through life with these measures.
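To make the quoted numbers concrete: if a fast ager’s biological age roughly matched their chronological age at the study’s first assessment (26) and then advanced at close to three biological years per calendar year, they would indeed be near 60 by 38, while a very slow ager would still be in their late 20s. Here is a minimal Python sketch of that arithmetic, with the starting biological age and the exact paces as illustrative assumptions rather than figures from the paper:

    # Illustrative arithmetic only; the starting biological age (26) and the
    # paces (2.8 and 0.2 biological years per calendar year) are assumptions.
    def project_biological_age(start_bio_age, pace, years_elapsed):
        """Project biological age forward at a constant pace of ageing."""
        return start_bio_age + pace * years_elapsed

    fast = project_biological_age(start_bio_age=26, pace=2.8, years_elapsed=12)
    slow = project_biological_age(start_bio_age=26, pace=0.2, years_elapsed=12)
    print(f"Fast ager at 38: biological age of about {fast:.0f}")  # ~60, the cusp of retirement
    print(f"Slow ager at 38: biological age of about {slow:.0f}")  # still in the late 20s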

Read the entire story here.

Will the AIs Let Us Coexist?

At some point in the not-too-distant future, artificial intelligences will far exceed humans in most capacities (except shopping and beer drinking). The scripts of most Hollywood movies suggest that we humans would be (mostly) wiped out by AI machines, beings, robots or other non-human forms — we being the lesser organisms, superfluous to AI needs.

Perhaps we will find an alternate path to a more benign coexistence, much like that posited in the Culture novels of the dearly departed Iain M. Banks. I’ll go with Mr. Banks’s version. Though, just perhaps, evolution is supposed to leave us behind, replacing our simplistic, selfish intelligence with a much more advanced, non-human version.

From the Guardian:

From 2001: A Space Odyssey to Blade Runner and RoboCop to The Matrix, how humans deal with the artificial intelligence they have created has proved a fertile dystopian territory for film-makers. More recently Spike Jonze’s Her and Alex Garland’s forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as Alan Turing’s famous test foregrounded, how tricky it might be to tell the flesh and blood from the chips and code.

These concerns are even troubling some of Silicon Valley’s biggest names: last month Tesla’s Elon Musk described AI as mankind’s “biggest existential threat… we need to be very careful”. What many of us don’t realise is that AI isn’t some far-off technology that only exists in film-makers’ imaginations and computer scientists’ labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games employ AI to generate complex, ever-changing gaming scenarios. And so long as Silicon Valley companies such as Google and Facebook continue to acquire AI firms and hire AI experts, AI’s IQ will continue to rise…

Isn’t AI a Steven Spielberg movie?
No arguments there, but the term, which stands for “artificial intelligence”, has a more storied history than Spielberg and Kubrick’s 2001 film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked “Can machines think?”

It’s something that is still at the front of our minds 64 years later, most recently becoming the core of Alex Garland’s new film, Ex Machina, which sees a young man asked to assess the humanity of a beautiful android. The concept is not a million miles removed from that set out in Turing’s 1950 paper, Computing Machinery and Intelligence, in which he laid out a proposal for the “imitation game” – what we now know as the Turing test. Hook a computer up to a text terminal and let it have conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which is the human, “the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman”.

Turing said that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they “think”. “The original question… I believe to be too meaningless to deserve discussion.” Nonetheless, he thought that by the year 2000, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”.

In terms of natural language, he wasn’t far off. Today, it is not uncommon to hear people talking about their computers being “confused”, or taking a long time to do something because they’re “thinking about it”. But even if we are stricter about what counts as a thinking machine, it’s closer to reality than many people think.
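As a rough illustration of the protocol Turing describes above, here is a minimal Python sketch of one round of the imitation game. Everything in it is hypothetical scaffolding rather than anything from Turing’s paper or the quoted article: interrogate, machine_reply and human_reply are stand-in functions for the judge and the two hidden players, and only text passes between them.

    import random

    def imitation_game(interrogate, machine_reply, human_reply):
        """One session of a simplified imitation game (illustrative sketch).

        interrogate(ask) plays the judge: it may call ask(label, question)
        for the labels 'A' and 'B', then must return the label it believes
        belongs to the human.
        """
        # Hide the machine and the human behind randomly assigned labels.
        players = {"A": machine_reply, "B": human_reply}
        if random.random() < 0.5:
            players = {"A": human_reply, "B": machine_reply}

        def ask(label, question):
            return players[label](question)

        verdict = interrogate(ask)
        return players[verdict] is human_reply  # True if the judge guessed correctly

Turing’s criterion is statistical rather than about any single session: run many such rounds and compare how often the interrogator guesses wrongly with how often they guess wrongly in the original man-versus-woman version of the game.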

So AI exists already?
It depends. We are still nowhere near to passing Turing’s imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of judges in a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.

The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
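To see what makes that sentence so awkward for software, here is a minimal Python sketch that simply writes out two competing part-of-speech readings of “time flies like an arrow” as hand-labelled data. It uses no real parser or NLP library; the tags are illustrative, and the point is only that an algorithm must choose between structurally different readings of the very same words.

    # Two hand-labelled readings of "time flies like an arrow".
    # Reading 1: "time" is the subject noun and "flies" the verb (the familiar sense).
    # Reading 2: "time flies" is a noun phrase (a kind of insect) and "like" the verb,
    #            parallel to "fruit flies like a banana".
    readings = [
        [("time", "NOUN"), ("flies", "VERB"), ("like", "PREP"), ("an", "DET"), ("arrow", "NOUN")],
        [("time", "MODIFIER"), ("flies", "NOUN"), ("like", "VERB"), ("an", "DET"), ("arrow", "NOUN")],
    ]

    for i, reading in enumerate(readings, 1):
        print(f"Reading {i}: " + " ".join(f"{word}/{tag}" for word, tag in reading))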

Read the entire article here.

The Enigma of Privacy

Privacy is still a valued and valuable right; it should not be a mere benefit in a democratic society. But in our current age, privacy is becoming an increasingly threatened species. We are surrounded by social networks that share and mine our behaviors, and we are assaulted by the snoopers and spooks of local and national governments.

From the Observer:

We have come to the end of privacy; our private lives, as our grandparents would have recognised them, have been winnowed away to the realm of the shameful and secret. To quote ex-tabloid hack Paul McMullan, “privacy is for paedos”. Insidiously, through small concessions that only mounted up over time, we have signed away rights and privileges that other generations fought for, undermining the very cornerstones of our personalities in the process. While outposts of civilisation fight pyrrhic battles, unplugging themselves from the web – “going dark” – the rest of us have come to accept that the majority of our social, financial and even sexual interactions take place over the internet and that someone, somewhere, whether state, press or corporation, is watching.

The past few years have brought an avalanche of news about the extent to which our communications are being monitored: WikiLeaks, the phone-hacking scandal, the Snowden files. Uproar greeted revelations about Facebook’s “emotional contagion” experiment (where it tweaked mathematical formulae driving the news feeds of 700,000 of its members in order to prompt different emotional responses). Cesar A Hidalgo of the Massachusetts Institute of Technology described the Facebook news feed as “like a sausage… Everyone eats it, even though nobody knows how it is made”.

Sitting behind the outrage was a particularly modern form of disquiet – the knowledge that we are being manipulated, surveyed, rendered and that the intelligence behind this is artificial as well as human. Everything we do on the web, from our social media interactions to our shopping on Amazon, to our Netflix selections, is driven by complex mathematical formulae that are invisible and arcane.

Most recently, campaigners’ anger has turned upon the so-called Drip (Data Retention and Investigatory Powers) bill in the UK, which will see internet and telephone companies forced to retain and store their customers’ communications (and provide access to this data to police, government and up to 600 public bodies). Every week, it seems, brings a new furore over corporations – Apple, Google, Facebook – sidling into the private sphere. Often, it’s unclear whether the companies act brazenly because our governments play so fast and loose with their citizens’ privacy (“If you have nothing to hide, you’ve nothing to fear,” William Hague famously intoned); or if governments see corporations feasting upon the private lives of their users and have taken this as a licence to snoop, pry, survey.

We, the public, have looked on, at first horrified, then cynical, then bored by the revelations, by the well-meaning but seemingly useless protests. But what is the personal and psychological impact of this loss of privacy? What legal protection is afforded to those wishing to defend themselves against intrusion? Is it too late to stem the tide now that scenes from science fiction have become part of the fabric of our everyday world?

Novels have long been the province of the great What If?, allowing us to see the ramifications from present events extending into the murky future. As long ago as 1921, Yevgeny Zamyatin imagined One State, the transparent society of his dystopian novel, We. For Orwell, Huxley, Bradbury, Atwood and many others, the loss of privacy was one of the establishing nightmares of the totalitarian future. Dave Eggers’s 2013 novel The Circle paints a portrait of an America without privacy, where a vast, internet-based, multimedia empire surveys and controls the lives of its people, relying on strict adherence to its motto: “Secrets are lies, sharing is caring, and privacy is theft.” We watch as the heroine, Mae, disintegrates under the pressure of scrutiny, finally becoming one of the faceless, obedient hordes. A contemporary (and because of this, even more chilling) account of life lived in the glare of the privacy-free internet is Nikesh Shukla’s Meatspace, which charts the existence of a lonely writer whose only escape is into the shallows of the web. “The first and last thing I do every day,” the book begins, “is see what strangers are saying about me.”

Our age has seen an almost complete conflation of the previously separate spheres of the private and the secret. A taint of shame has crept over from the secret into the private so that anything that is kept from the public gaze is perceived as suspect. This, I think, is why defecation is so often used as an example of the private sphere. Sex and shitting were the only actions that the authorities in Zamyatin’s One State permitted to take place in private, and these remain the battlegrounds of the privacy debate almost a century later. A rather prim leaked memo from a GCHQ operative monitoring Yahoo webcams notes that “a surprising number of people use webcam conversations to show intimate parts of their body to the other person”.

It is to the bathroom that Max Mosley turns when we speak about his own campaign for privacy. “The need for a private life is something that is completely subjective,” he tells me. “You either would mind somebody publishing a film of you doing your ablutions in the morning or you wouldn’t. Personally I would and I think most people would.” In 2008, Mosley’s “sick Nazi orgy”, as the News of the World glossed it, featured in photographs published first in the pages of the tabloid and then across the internet. Mosley’s defence argued, successfully, that the romp involved nothing more than a “standard S&M prison scenario” and the former president of the FIA won £60,000 damages under Article 8 of the European Convention on Human Rights. Now he has rounded on Google and the continued presence of both photographs and allegations on websites accessed via the company’s search engine. If you type “Max Mosley” into Google, the eager autocomplete presents you with “video,” “case”, “scandal” and “with prostitutes”. Half-way down the first page of the search we find a link to a professional-looking YouTube video montage of the NotW story, with no acknowledgment that the claims were later disproved. I watch it several times. I feel a bit grubby.

“The moment the Nazi element of the case fell apart,” Mosley tells me, “which it did immediately, because it was a lie, any claim for public interest also fell apart.”

Here we have a clear example of the blurred lines between secrecy and privacy. Mosley believed that what he chose to do in his private life, even if it included whips and nipple-clamps, should remain just that – private. The News of the World, on the other hand, thought it had uncovered a shameful secret that, given Mosley’s professional position, justified publication. There is a momentary tremor in Mosley’s otherwise fluid delivery as he speaks about the sense of invasion. “Your privacy or your private life belongs to you. Some of it you may choose to make available, some of it should be made available, because it’s in the public interest to make it known. The rest should be yours alone. And if anyone takes it from you, that’s theft and it’s the same as the theft of property.”

Mosley has scored some recent successes, notably in continental Europe, where he has found a culture more suspicious of Google’s sweeping powers than in Britain or, particularly, the US. Courts in France and then, interestingly, Germany, ordered Google to remove pictures of the orgy permanently, with far-reaching consequences for the company. Google is appealing against the rulings, seeing it as absurd that “providers are required to monitor even the smallest components of content they transmit or store for their users”. But Mosley last week extended his action to the UK, filing a claim in the high court in London.

Mosley’s willingness to continue fighting, even when he knows that it means keeping alive the image of his white, septuagenarian buttocks in the minds (if not on the computers) of the public, seems impressively principled. He has fallen victim to what is known as the Streisand Effect, where his very attempt to hide information about himself has led to its proliferation (in 2003 Barbra Streisand tried to stop people taking pictures of her Malibu home, ensuring photos were posted far and wide). Despite this, he continues to battle – both in court, in the media and by directly confronting the websites that continue to display the pictures. It is as if he is using that initial stab of shame, turning it against those who sought to humiliate him. It is noticeable that, having been accused of fetishising one dark period of German history, he uses another to attack Google. “I think, because of the Stasi,” he says, “the Germans can understand that there isn’t a huge difference between the state watching everything you do and Google watching everything you do. Except that, in most European countries, the state tends to be an elected body, whereas Google isn’t. There’s not a lot of difference between the actions of the government of East Germany and the actions of Google.”

All this brings us to some fundamental questions about the role of search engines. Is Google the de facto librarian of the internet, given that it is estimated to handle 40% of all traffic? Is it something more than a librarian, since its algorithms carefully (and with increasing use of your personal data) select the sites it wants you to view? To what extent can Google be held responsible for the content it puts before us?

Read the entire article here.

Post-Siri Relationships


What are we to make of a world in which software-driven intelligent agents, artificial intelligence and language-processing capabilities combine to deliver a human-like experience? After all, what does it really mean to be human, and can a machine be sentient? We should all be pondering such weighty issues, since this emerging reality may well arrive within our lifetimes.

From Technology Review:

In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.

Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?

Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.

But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?

Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perceptions of novel reading have traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.

Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.

There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cow it might even do some good.

The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.

Read the entire story here.

Image: Siri icon. Courtesy of Cult of Mac / Apple.

Text Stops. LOL

Another sign that some humans are devoid of common sense comes courtesy of New York. The state is designating around 100 traffic rest stops as “Text Stops”. So, drivers — anxious to get in a spontaneous email fix or tweet while behind the wheel — can now text to their thumbs’ content without imperiling the lives of others or themselves.

Perhaps this signals the demise of the scenic rest stop, only to be replaced by zones where drivers can update their digital status and tweet about reality without actually experiencing it. This is also a sign that evolution is being circumvented by artificially protecting those who lack common sense.

From ars technica:

Yesterday, New York Governor Andrew Cuomo announced a new initiative to stop drivers from texting on the road—turn rest areas into “Text Stops” and put up signage that lets people know how many miles they’ll have to hold off on tweeting that witty tweet.

91 rest stops, Park-n-Ride facilities, and parking areas along the New York State Thruway and State Highways will now become special texting zones for motorists who may not have noticed the wayside spots before. WBNG (a local news site) lists the locations of all 91 Text Stops.

“We are always looking at new and better ways to make the highway even safer,” Thruway Authority Executive Director Thomas J. Madison said yesterday, according to WBNG. “Governor Cuomo’s Text Stops initiative is an excellent way for drivers to stay in touch while recognizing the dangers of using mobile devices while driving.”

In total, 289 new signs will alert motorists of the new texting zone locations. The signs advertising the re-purposed zones will be bright blue and will feature messages like “It Can Wait” and the number of miles until the next opportunity to pull over. The state is cracking down on texting in terms of fines as well—the penalty for texting and driving recently increased to $150 and five points on your license, according to BetaBeat. WBNG also notes that in the summer of 2013, New York saw a 365 percent increase in tickets issued for distracted driving.

Read the entire article here, but don’t re-tweet it while driving.

The Rim Fire

One of the largest wildfires in California history — the Rim Fire — threatens some of the most spectacular vistas in the U.S. Yet, as it reshapes part of the Yosemite Valley and its surroundings, it is forcing another reshaping: a fundamental rethinking of the wildland-urban interface (WUI) and of the role of human activity in catalyzing natural processes.

From Wired:

For nearly two weeks, the nation has been transfixed by wildfire spreading through Yosemite National Park, threatening to pollute San Francisco’s water supply and destroy some of America’s most cherished landscapes. As terrible as the Rim Fire seems, though, the question of its long-term effects, and whether in some ways it could actually be ecologically beneficial, is a complicated one.

Some parts of Yosemite may be radically altered, entering entirely new ecological states. Yet others may be restored to historical conditions that prevailed for thousands of years from the last Ice Age’s end until the 19th century, when short-sighted fire management disrupted natural fire cycles and transformed the landscape.

In certain areas, “you could absolutely consider it a rebooting, getting the system back to the way it used to be,” said fire ecologist Andrea Thode of Northern Arizona University. “But where there’s a high-severity fire in a system that wasn’t used to having high-severity fires, you’re creating a new system.”

The Rim Fire now covers 300 square miles, making it the largest fire in Yosemite’s recent history and the sixth-largest in California’s. It’s also the latest in a series of exceptionally large fires that over the last several years have burned across the western and southwestern United States.

Fire is a natural, inevitable phenomenon, and one to which western North American ecologies are well-adapted, and even require to sustain themselves. The new fires, though, fueled by drought, a warming climate and forest mismanagement — in particular the buildup of small trees and shrubs caused by decades of fire suppression — may reach sizes and intensities too severe for existing ecosystems to withstand.

The Rim Fire may offer some of both patterns. At high elevations, vegetatively dominated by shrubs and short-needled conifers that produce a dense, slow-to-burn mat of ground cover, fires historically occurred every few hundred years, and they were often intense, reaching the crowns of trees. In such areas, the current fire will fit the usual cycle, said Thode.

Decades- and centuries-old seeds, which have remained dormant in the ground awaiting a suitable moment, will be cracked open by the heat, explained Thode. Exposed to moisture, they’ll begin to germinate and start a process of vegetative succession that results again in forests.

At middle elevations, where most of the Rim Fire is currently concentrated, a different fire dynamic prevails. Those forests are dominated by long-needled conifers that produce a fluffy, fast-burning ground cover. Left undisturbed, fires occur regularly.

“Up until the middle of the 20th century, the forests of that area would burn very frequently. Fires would go through them every five to 12 years,” said Carl Skinner, a U.S. Forest Service ecologist who specializes in relationships between fire and vegetation in northern California. “Because the fires burned as frequently as they did, it kept fuels from accumulating.”

A desire to protect houses, commercial timber and conservation lands by extinguishing these small, frequent fires changed the dynamic. Without fire, dead wood accumulated and small trees grew, creating a forest that’s both exceptionally flammable and structurally suited for transferring flames from ground to tree-crown level, at which point small burns can become infernos.

Though since the 1970s some fires have been allowed to burn naturally in the western parts of Yosemite, that’s not the case where the Rim Fire now burns, said Skinner. An open question, then, is just how big and hot it will burn.

Where the fire is extremely intense, incinerating soil seed banks and root structures from which new trees would quickly sprout, the forest won’t come back, said Skinner. Those areas will become dominated by dense, fast-growing shrubs that burn naturally every few years, killing young trees and creating a sort of ecological lock-in.

If the fire burns at lower intensities, though, it could result in a sort of ecological recalibration, said Skinner. In his work with fellow U.S. Forest Service ecologist Eric Knapp at the Stanislaus-Tuolumne Experimental Forest, Skinner has found that Yosemite’s contemporary, fire-suppressed forests are actually far more homogeneous and less diverse than a century ago.

The fire could “move the forests in a trajectory that’s more like the historical,” said Skinner, both reducing the likelihood of large future fires and generating a mosaic of habitats that contain richer plant and animal communities.

“It may well be that, across a large landscape, certain plants and animals are adapted to having a certain amount of young forest recovering after disturbances,” said forest ecologist Dan Binkley of Colorado State University. “If we’ve had a century of fires, the landscape might not have enough of this.”

Read the entire article here.

Image: Rim Fire, August 2013. Courtesy of Earth Observatory, NASA.

Of Mice and Men

Biomolecular and genetic engineering continue apace. This time researchers have inserted artificially constructed human chromosomes into the cells of living mice.

From the Independent:

Scientists have created genetically-engineered mice with artificial human chromosomes in every cell of their bodies, as part of a series of studies showing that it may be possible to treat genetic diseases with a radically new form of gene therapy.

In one of the unpublished studies, researchers made a human artificial chromosome in the laboratory from chemical building blocks rather than chipping away at an existing human chromosome, indicating the increasingly powerful technology behind the new field of synthetic biology.

The development comes as the Government announces today that it will invest tens of millions of pounds in synthetic biology research in Britain, including an international project to construct all the 16 individual chromosomes of the yeast fungus in order to produce the first synthetic organism with a complex genome.

A synthetic yeast with man-made chromosomes could eventually be used as a platform for making new kinds of biological materials, such as antibiotics or vaccines, while human artificial chromosomes could be used to introduce healthy copies of genes into the diseased organs or tissues of people with genetic illnesses, scientists said.

Researchers involved in the synthetic yeast project emphasised at a briefing in London earlier this week that there are no plans to build human chromosomes and create synthetic human cells in the same way as the artificial yeast project. A project to build human artificial chromosomes is unlikely to win ethical approval in the UK, they said.

However, researchers in the US and Japan are already well advanced in making “mini” human chromosomes called HACs (human artificial chromosomes), by either paring down an existing human chromosome or making them “de novo” in the lab from smaller chemical building blocks.

Natalay Kouprina of the US National Cancer Institute in Bethesda, Maryland, is part of the team that has successfully produced genetically engineered mice with an extra human artificial chromosome in their cells. It is the first time such an advanced form of a synthetic human chromosome made “from scratch” has been shown to work in an animal model, Dr Kouprina said.

“The purpose of developing the human artificial chromosome project is to create a shuttle vector for gene delivery into human cells to study gene function in human cells,” she told The Independent. “Potentially it has applications for gene therapy, for correction of gene deficiency in humans. It is known that there are lots of hereditary diseases due to the mutation of certain genes.”

Read the entire article here.

Image courtesy of Science Daily.

Building a Liver

In yet another breakthrough for medical science, researchers have succeeded in growing rudimentary human livers (tiny “liver buds”) in the lab.

From the New York Times:

Researchers in Japan have used human stem cells to create tiny human livers like those that arise early in fetal life. When the scientists transplanted the rudimentary livers into mice, the little organs grew, made human liver proteins, and metabolized drugs as human livers do.

They and others caution that these are early days and this is still very much basic research. The liver buds, as they are called, did not turn into complete livers, and the method would have to be scaled up enormously to make enough replacement liver buds to treat a patient. Even then, the investigators say, they expect to replace only 30 percent of a patient’s liver. What they are making is more like a patch than a full liver.

But the promise, in a field that has seen a great deal of dashed hopes, is immense, medical experts said.

“This is a major breakthrough of monumental significance,” said Dr. Hillel Tobias, director of transplantation at the New York University School of Medicine. Dr. Tobias is chairman of the American Liver Foundation’s national medical advisory committee.

“Very impressive,” said Eric Lagasse of the University of Pittsburgh, who studies cell transplantation and liver disease. “It’s novel and very exciting.”

The study was published on Wednesday in the journal Nature.

Although human studies are years away, said Dr. Leonard Zon, director of the stem cell research program at Boston Children’s Hospital, this, to his knowledge, is the first time anyone has used human stem cells, created from human skin cells, to make a functioning solid organ, like a liver, as opposed to bone marrow, a jellylike organ.

Ever since they discovered how to get human stem cells — first from embryos and now, more often, from skin cells — researchers have dreamed of using the cells for replacement tissues and organs. The stem cells can turn into any type of human cell, and so it seemed logical to simply turn them into liver cells, for example, and add them to livers to fill in dead or damaged areas.

But those studies did not succeed. Liver cells did not take up residence in the liver; they did not develop blood supplies or signaling systems. They were not a cure for disease.

Other researchers tried making livers or other organs by growing cells on scaffolds. But that did not work well either. Cells would fall off the scaffolds and die, and the result was never a functioning solid organ.

Researchers have made specialized human cells in petri dishes, but not three-dimensional structures, like a liver.

The investigators, led by Dr. Takanori Takebe of the Yokohama City University Graduate School of Medicine, began with human skin cells, turning them into stem cells. By adding various stimulators and drivers of cell growth, they then turned the stem cells into human liver cells and began trying to make replacement livers.

They say they stumbled upon their solution. When they grew the human liver cells in petri dishes along with blood vessel cells from human umbilical cords and human connective tissue, that mix of cells, to their surprise, spontaneously assembled itself into three-dimensional liver buds, resembling the liver at about five or six weeks of gestation in humans.

Then the researchers transplanted the liver buds into mice, putting them in two places: on the brain and into the abdomen. The brain site allowed them to watch the buds grow. The investigators covered the hole in each animal’s skull with transparent plastic, giving them a direct view of the developing liver buds. The buds grew and developed blood supplies, attaching themselves to the blood vessels of the mice.

The abdominal site allowed them to put more buds in — 12 buds in each of two places in the abdomen, compared with one bud in the brain — which let the investigators ask if the liver buds were functioning like human livers.

They were. They made human liver proteins and also metabolized drugs that human livers — but not mouse livers — metabolize.

The approach makes sense, said Kenneth Zaret, a professor of cellular and developmental biology at the University of Pennsylvania. His research helped establish that blood and connective tissue cells promote dramatic liver growth early in development and help livers establish their own blood supply. On their own, without those other types of cells, liver cells do not develop or form organs.

Read the entire article here.

Image: Diagram of the human liver. Courtesy of Encyclopedia Britannica.

What Makes Us Human

Psychologist Jerome Kagan leaves no stone unturned in his quest to determine what makes us distinctly human. His latest book, The Human Spark: The science of human development, comes up with some fresh conclusions.

From the New Scientist:

What is it that makes humans special, that sets our species apart from all others? It must be something connected with intelligence – but what exactly? People have asked these questions for as long as we can remember. Yet the more we understand the minds of other animals, the more elusive the answers to these questions have become.

The latest person to take up the challenge is Jerome Kagan, a former professor at Harvard University. And not content with pinning down the “human spark” in the title of his new book, he then tries to explain what makes each of us unique.

As a pioneer in the science of developmental psychology, Kagan has an interesting angle. A life spent investigating how a fertilised egg develops into an adult human being provides him with a rich understanding of the mind and how it differs from that of our closest animal cousins.

Human and chimpanzee infants behave in remarkably similar ways for the first four to six months, Kagan notes. It is only during the second year of life that we begin to diverge profoundly. As the toddler’s frontal lobes expand and the connections between the brain sites increase, the human starts to develop the talents that set our species apart. These include “the ability to speak a symbolic language, infer the thoughts and feelings of others, understand the meaning of a prohibited action, and become conscious of their own feelings, intentions and actions”.

Becoming human, as Kagan describes it, is a complex dance of neurobiological changes and psychological advances. All newborns possess the potential to develop the universal human properties “inherent in their genomes”. What makes each of us individual is the unique backdrop of genetics, epigenetics, and the environment against which this development plays out.

Kagan’s research highlighted the role of temperament, which he notes is underpinned by at least 1500 genes, affording huge individual variation. This variation, in turn, influences the way we respond to environmental factors including family, social class, culture and historical era.

But what of that human spark? Kagan seems to locate it in a quartet of qualities: language, consciousness, inference and, especially, morality. This is where things start to get weird. He would like you to believe that morality is uniquely human, which, of course, bolsters his argument. Unfortunately, it also means he has to deny that a rudimentary morality has evolved in other social animals whose survival also depends on cooperation.

Instead, Kagan argues that morality is a distinctive property of our species, just as “fish do not have lungs”. No mention of evolution. So why are we moral, then? “The unique biology of the human brain motivates children and adults to act in ways that will allow them to arrive at the judgement that they are a good person.” That’s it?

Warming to his theme, Kagan argues that in today’s world, where traditional moral standards have been eroded and replaced by a belief in the value of wealth and celebrity, it is increasingly difficult to see oneself as a good person. He thinks this mismatch between our moral imperative and Western culture helps explain the “modern epidemic” of mental illness. Unwittingly, we have created an environment in which the human spark is fading.

Some of Kagan’s ideas are even more outlandish, surely none more so than the assertion that a declining interest in natural sciences may be a consequence of mothers becoming less sexually mysterious than they once were. More worryingly, he doesn’t seem to believe that humans are subject to the same forces of evolution as other animals.

Read the entire article here.

Us: Perhaps It’s All Due to Gene miR-941

Geneticists have discovered a gene that helps explain how humans and apes diverged from their common ancestor around 6 million years ago.

From the Guardian:

Researchers have discovered a new gene they say helps explain how humans evolved from chimpanzees.

The gene, called miR-941, appears to have played a crucial role in human brain development and could shed light on how we learned to use tools and language, according to scientists.

A team at the University of Edinburgh compared the human genome with those of 11 other species of mammals, including chimpanzees, gorillas, mice and rats.

The results, published in Nature Communications, showed that the gene is unique to humans.

The team believe it emerged between six and one million years ago, after humans evolved from apes.

Researchers said it is the first time a new gene carried by humans and not by apes has been shown to have a specific function in the human body.

Martin Taylor, who led the study at the Institute of Genetics and Molecular Medicine at the University of Edinburgh, said: “As a species, humans are wonderfully inventive – we are socially and technologically evolving all the time.

“But this research shows that we are innovating at a genetic level too.

“This new molecule sprang from nowhere at a time when our species was undergoing dramatic changes: living longer, walking upright, learning how to use tools and how to communicate.

“We’re now hopeful that we will find more new genes that help show what makes us human.”

The gene is highly active in two areas of the brain, controlling decision-making and language abilities, with the study suggesting it could have a role in the advanced brain functions that make us human.

Read the entire article following the jump.

Image courtesy of ABC News.

First Artists: Neanderthals or Homo Sapiens?

The recent finding in a Spanish cave of a painted “red dot” dating from around 40,800 years ago suggests that our Neanderthal cousins may have beaten our species to claim the prize of “first artist”. Yet, evidence remains scant, and even if this were proven to be the case, we Homo sapiens can certainly lay claim to taking it beyond a “red dot” and making art our very own (and much else too).

From the Guardian:

Why do Neanderthals so fascinate Homo sapiens? And why are we so keen to exaggerate their virtues?

It is political correctness gone prehistoric. At every opportunity, people rush to attribute “human” virtues to this extinct human-like species. The latest generosity is to credit them with the first true art.

A recent redating of cave art in Spain has revealed the oldest paintings in Europe. A red dot in the cave El Castillo has now been dated at 40,800 years ago – considerably older than the cave art of Chauvet in France and contemporary with the arrival of the very first “modern humans”, Homo sapiens, in Europe.

This raises two possibilities, point out the researchers. Either the new humans from Africa started painting in caves the moment they entered Europe, or painting was already being done by the Neanderthals who were at that moment the most numerous relatives of modern humans on the European continent. One expert confesses to a “hunch” – which he acknowledges cannot be proven as things stand – that Neanderthals were painters.

That hunch goes against the weight of the existing evidence. Of course that hasn’t stopped it dominating all reports of the story: as far as media impressions go, the Neanderthals were now officially the first artists. Yet nothing of the sort has been proven, and plenty of evidence suggests that the traditional view is still far more likely.

In this view, the precocious development of art in ice age Europe marks out the first appearance of modern human consciousness, the intellectual birth of our species, the hand of Homo sapiens making its mark.

One crucial piece of evidence of where art came from is a piece of red ochre, engraved with abstract lines, that was discovered a decade ago in Blombos cave in South Africa. It is at least 70,000 years old and the oldest unmistakable artwork ever found. It is also a tool to make more art: ochre was great for making red marks on stone. It comes from Africa, where modern humans evolved, and reveals that when Homo sapiens made the move into Europe, our species could already draw on a long legacy of drawing and engraving. In fact, the latest finds at Blombos include a complete painting kit.

In other words, what is so surprising about the idea that Homo sapiens started to apply these skills immediately on discovering the caves of ice age Europe? It has to be more likely, on the face of it, than assuming these early Spanish images are by Neanderthals in the absence of any other solid evidence of paintings by them.

For, moving forward a few thousand years, the paintings of Chauvet and other French caves are certainly by us, Homo sapiens. And they remind us why this first art is so exciting and important: modern humans did not just do dots and handprints but magnificent, realistic portraits of animals. Their art is so superb in quality that it proves the existence of a higher mind, the capacity to create civilisation.

Is it possible that Neanderthals also used pigment to colour walls and also had the mental capacity to invent art? Of course it is, but the evidence at the moment still massively suggests art is a uniquely human achievement, unique, that is, to us – and fundamental to who we are.

Read the entire article after the jump.

Image: A hand stencil in El Castillo cave, Spain, has been dated to earlier than 37,300 years ago and a red dot to earlier than 40,600 years ago, making them the oldest cave paintings in Europe. Courtesy of New Scientist / Pedro Saura.

Human Evolution: Stalled

It takes no expert neuroscientist, anthropologist or evolutionary biologist to recognize that human evolution has probably stalled. After all, one only needs to observe our obsession with reality TV. Yes, evolution screeched to a halt around 1999, when reality TV hit critical mass in the mainstream public consciousness. So, what of evolution?

From the Wall Street Journal:

If you write about genetics and evolution, one of the commonest questions you are likely to be asked at public events is whether human evolution has stopped. It is a surprisingly hard question to answer.

I’m tempted to give a flippant response, borrowed from the biologist Richard Dawkins: Since any human trait that increases the number of babies is likely to gain ground through natural selection, we can say with some confidence that incompetence in the use of contraceptives is probably on the rise (though only if those unintended babies themselves thrive enough to breed in turn).

More seriously, infertility treatment is almost certainly leading to an increase in some kinds of infertility. For example, a procedure called “intra-cytoplasmic sperm injection” allows men with immobile sperm to father children. This is an example of the “relaxation” of selection pressures caused by modern medicine. You can now inherit traits that previously prevented human beings from surviving to adulthood, procreating when they got there or caring for children thereafter. So the genetic diversity of the human genome is undoubtedly increasing.

Or it was until recently. Now, thanks to pre-implantation genetic diagnosis, parents can deliberately choose to implant embryos that lack certain deleterious mutations carried in their families, with the result that genes for Tay-Sachs, Huntington’s and other diseases are retreating in frequency. The old and overblown worry of the early eugenicists—that “bad” mutations were progressively accumulating in the species—is beginning to be addressed not by stopping people from breeding, but by allowing them to breed, safe in the knowledge that they won’t pass on painful conditions.

Still, recent analyses of the human genome reveal a huge number of rare—and thus probably fairly new—mutations. One study, by John Novembre of the University of California, Los Angeles, and his colleagues, looked at 202 genes in 14,002 people and found one genetic variant in somebody every 17 letters of DNA code, much more than expected. “Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different,” said Dr. Novembre.

Another team, led by Joshua Akey of the University of Washington, studied 1,351 people of European and 1,088 of African ancestry, sequencing 15,585 genes and locating more than a half million single-letter DNA variations. People of African descent had twice as many new mutations as people of European descent, or 762 versus 382. Dr. Akey blames the population explosion of the past 5,000 years for this increase. Not only does a larger population allow more variants; it also implies less severe selection against mildly disadvantageous genes.

So we’re evolving as a species toward greater individual (rather than racial) genetic diversity. But this isn’t what most people mean when they ask if evolution has stopped. Mainly they seem to mean: “Has brain size stopped increasing?” For a process that takes millions of years, any answer about a particular instant in time is close to meaningless. Nonetheless, the short answer is probably “yes.”

Read the entire article after the jump.

Image: The “Robot Evolution”. Courtesy of STRK3.

Human Evolution Marches On

From Wired:

Though ongoing human evolution is difficult to see, researchers believe they’ve found signs of rapid genetic changes among the recent residents of a small Canadian town.

Between 1800 and 1940, mothers in Ile aux Coudres, Quebec gave birth at steadily younger ages, with the average age of first maternity dropping from 26 to 22. Increased fertility, and thus larger families, could have been especially useful in the rural settlement’s early history.

According to University of Quebec geneticist Emmanuel Milot and colleagues, other possible explanations, such as changing cultural or environmental influences, don’t fit. The changes appear to reflect biological evolution.

“It is often claimed that modern humans have stopped evolving because cultural and technological advancements have annihilated natural selection,” wrote Milot’s team in their Oct. 3 Proceedings of the National Academy of Sciences paper. “Our study supports the idea that humans are still evolving. It also demonstrates that microevolution is detectable over just a few generations.”

Milot’s team based their study on detailed birth, marriage and death records kept by the Catholic church in Ile aux Coudres, a small and historically isolated French-Canadian island town in the Gulf of St. Lawrence. It wasn’t just the fact that average first birth age — a proxy for fertility — dropped from 26 to 22 in 140 years that suggested genetic changes. After all, culture or environment might have been wholly responsible, as nutrition and healthcare are for recent, rapid changes in human height. Rather, it was how ages dropped that caught their eye.

The patterns fit with models of gene-influenced natural selection. Moreover, thanks to the detailed record-keeping, it was possible to look at other possible explanations. Were better nutrition responsible, for example, improved rates of infant and juvenile mortality should have followed; they didn’t. Neither did the late-19th century transition from farming to more diversified professions.

Read more here.