Tag Archives: evolution

Zebra Stripes


Why do zebras have stripes? Well, we’ve all learned from an early age that their peculiar and unique black and white stripes are an adaptation to combat predators. One theory suggests that the stripes are camouflage. Another theory suggests that the stripes are there to confuse predators. Yet another proposes that the stripes are a vivid warning signal.

But Tim Caro, professor of wildlife biology at the University of California, Davis, has a thoroughly different idea, conveyed in his new book, Zebra Stripes. After twenty years of study he’s convinced that the zebra’s stripes have a far more mundane purpose: deterring pesky biting flies.

From Wired:

At four in the morning, Tim Caro roused his colleagues. Bleary-eyed and grumbling, they followed him to the edge of the village, where the beasts were hiding. He sat them down in chairs, and after letting their eyes adjust for a minute, he asked them if they saw anything. And if so, would they please point where?

Not real beasts. Despite being camped in Tanzania’s Katavi National Park, Caro was asking his colleagues to identify pelts—from a wildebeest, an impala, and a zebra—that he had draped over chairs or clotheslines. Caro wanted to know if the zebra’s stripes gave it any sort of camouflage in the pre-dawn, when many predators hunt, and he needed the sort of replicability he could not count on from the animals roaming the savannah. “I lost a lot of social capital on that experiment,” says Caro. “If you’re going to be woken up at all, it’s important to be woken up for something exciting or unpredictable, and this was neither.”

The experiment was one of hundreds Caro performed over a twenty-year scientific odyssey to discover why zebras have stripes—a question that nearly every major biologist since Alfred Russel Wallace has tried to answer. “It became sort of a challenge to me to try and investigate all the existing hypotheses so I could not only identify the right one,” he says, “but just as importantly kill all those remaining.” His new book, Zebra Stripes, chronicles every detail.

Read the entire story here.

Image: Zebras, Botswana. Courtesy: Paul Maritz, 2002. Creative Commons Attribution-Share Alike 3.0.

How and Why Did Metamorphosis Evolve?


Evolution is a truly wondrous thing. It has given us eyes and lots of grey matter [which we still don’t use very well]. It has given us the beautiful tiger, and the shimmering hues and soaring songs of our birds. It has given us towering sequoias, creepy insects, gorgeous ocean-dwelling creatures, and invisible bacteria and viruses. Yet for all these wondrous adaptations, one evolutionary invention still seems mysteriously supernatural: metamorphosis.

So, how and why did it evolve? A compelling new theory on the origins of insect metamorphosis, proposed by James W. Truman and Lynn M. Riddiford, is excerpted below from a detailed article in Scientific American.

The theory posits that a beneficial mutation around 300 million years ago led to the emergence of metamorphosis in insects:

By combining evidence from the fossil record with studies on insect anatomy and development, biologists have established a plausible narrative about the origin of insect metamorphosis, which they continue to revise as new information surfaces. The earliest insects in Earth’s history did not metamorphose; they hatched from eggs, essentially as miniature adults. Between 280 million and 300 million years ago, however, some insects began to mature a little differently—they hatched in forms that neither looked nor behaved like their adult versions. This shift proved remarkably beneficial: young and old insects were no longer competing for the same resources. Metamorphosis was so successful that, today, as many as 65 percent of all animal species on the planet are metamorphosing insects.

And, there are essentially three types of metamorphosis:

Wingless ametabolous insects, such as silverfish and bristletails, undergo little or no metamorphosis. When they hatch from eggs, they already look like adults, albeit tiny ones, and simply grow larger over time through a series of molts in which they shed their exoskeletons. Hemimetaboly, or incomplete metamorphosis, describes insects such as cockroaches, grasshoppers and dragonflies that hatch as nymphs—miniature versions of their adult forms that gradually develop wings and functional genitals as they molt and grow. Holometaboly, or complete metamorphosis, refers to insects such as beetles, flies, butterflies, moths and bees, which hatch as wormlike larvae that eventually enter a quiescent pupal stage before emerging as adults that look nothing like the larvae.

And, it’s backed by a concrete survival and reproductive advantage:

[T]he enormous numbers of metamorphosing insects on the planet speak for its success as a reproductive strategy. The primary advantage of complete metamorphosis is eliminating competition between the young and old. Larval insects and adult insects occupy very different ecological niches. Whereas caterpillars are busy gorging themselves on leaves, completely disinterested in reproduction, butterflies are flitting from flower to flower in search of nectar and mates. Because larvas and adults do not compete with one another for space or resources, more of each can coexist relative to species in which the young and old live in the same places and eat the same things.

Read the entire article here.

Image: Old World Swallowtail (Papilio machaon). Courtesy: fesoj – Otakárek fenyklový [Papilio machaon]. CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=7263187

Of Zebrafish and Men


A novel experiment in gene-editing shows how limbs of Earth’s land-dwelling creatures may have evolved from their fishy ancestors.

From University of Chicago:

One of the great transformations required for the descendants of fish to become creatures that could walk on land was the replacement of long, elegant fin rays by fingers and toes. In the Aug. 17, 2016 issue of Nature, scientists from the University of Chicago show that the same cells that make fin rays in fish play a central role in forming the fingers and toes of four-legged creatures.

After three years of painstaking experiments using novel gene-editing techniques and sensitive fate mapping to label and track developing cells in fish, the researchers describe how the small flexible bones found at the ends of fins are related to fingers and toes, which are more suitable for life on land.

“When I first saw these results you could have knocked me over with a feather,” said the study’s senior author, Neil Shubin, PhD, the Robert R. Bensley Distinguished Service Professor of Organismal Biology and Anatomy at the University of Chicago. Shubin is an authority on the transition from fins to limbs.

The team focused on Hox genes, which control the body plan of a growing embryo along the head-to-tail, or shoulder-to-fingertip, axis. Many of these genes are crucial for limb development.

They studied the development of cells, beginning, in some experiments, soon after fertilization, and followed them as they became part of an adult fin. Previous work has shown that when Hox genes, specifically those related to the wrists and digits of mice (HoxD and HoxA), were deleted, the mice did not develop those structures. When Tetsuya Nakamura, the study’s lead author, deleted those same genes in zebrafish, the long fin rays were greatly reduced.

“What matters is not what happens when you knock out a single gene but when you do it in combination,” Nakamura explained. “That’s where the magic happens.”

The researchers also used a high-energy CT scanner to see the minute structures within the adult zebrafish fin. These can be invisible, even to most traditional microscopes. The scans revealed that fish lacking certain genes lost their fin rays, but the small bones made of cartilage increased in number.

The authors suspect that the mutants that Nakamura made caused cells to stop migrating from the base of the fin to their usual position near the tip. This inability to migrate meant that there were fewer cells to make fin rays, leaving more cells at the fin base to produce cartilage elements.

Read more here.

Image: A female specimen of a zebrafish (Danio rerio) breed with fantails. Courtesy: Wikipedia / Azul.

Man-With-Beard and Negative Frequency-Dependent Sexual Selection

Video: https://www.youtube.com/watch?v=6i8IER7nTfc

Culture watchers pronounced “peak beard” around the time of the US Academy Awards in 2013. Since then, male celebrities of all stripes and colors have been ditching the hairy chin for a clean-shaven look. While I have no interest in the amount or type of stubble on George Clooney’s face, the beard/no-beard debate does raise a fascinating issue with profound evolutionary consequences. Research shows that certain physical characteristics, including facial hair, become more appealing when they are rare, and less appealing when they are common. Studies of social signalling and mating preference in various animals show the same bias. So, men, if you’re trying to attract the attention of a potential mate, it’s time to think more seriously about negative frequency-dependent sexual selection and ditch the conforming hirsute-hipster look for something else. Here’s an idea: just be yourself instead of following the herd. Though I do still like Manuel’s Gallic mustache.
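For the curious, the rare-trait dynamic is easy to simulate. Below is a toy sketch of negative frequency-dependent selection in Python (my own illustration with made-up payoffs, not the Sydney team’s model): a trait’s attractiveness falls as it becomes common, so the population settles toward a mixed equilibrium instead of fixing on one look.

```python
# Toy model of negative frequency-dependent selection.
# The payoff of a look falls as it becomes common, so rare
# styles gain ground and the population settles near a mix.
# Illustrative numbers only, not the UNSW study's model.

def step(p, s=0.8):
    """One generation. p is the frequency of the bearded type."""
    w_beard = 1.0 + s * (0.5 - p)   # beards fitter when rare (p < 0.5)
    w_shave = 1.0 + s * (p - 0.5)   # clean chins fitter when beards abound
    mean_w = p * w_beard + (1 - p) * w_shave
    return p * w_beard / mean_w     # standard replicator update

p = 0.9  # start at "peak beard": 90% of faces bearded
for gen in range(12):
    print(f"generation {gen:2d}: beard frequency = {p:.3f}")
    p = step(p)
```

Starting from a 90 percent bearded population, the frequency decays toward a stable 50/50 mix; once beards are rare again they regain their edge, which is exactly the “peak beard” pendulum described below.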

From the BBC:

The ebb and flow of men’s beard fashions may be guided by Darwinian selection, according to a new study.

The more beards there are, the less attractive they become – giving clean-shaven men a competitive advantage, say scientists in Sydney, Australia.

When “peak beard” frequency is reached, the pendulum swings back toward lesser-bristled chins – a trend we may be witnessing now, the scientists say.

Their study has been published in the Royal Society journal Biology Letters.

In the experiment, women and men were asked to rate different faces with “four standard levels of beardedness”.

Both beards and clean-shaven faces became more appealing when they were rare.

The pattern mirrors an evolutionary phenomenon – “negative frequency-dependent sexual selection”, or to put it more simply “an advantage to rare traits”.

The bright colours of male guppies vary by this force – which is driven by females’ changing preferences.

Scientists at the University of New South Wales decided to test this hypothesis for men’s facial hair – recruiting volunteers on their Facebook site, The Sex Lab.

“Big thick beards are back with an absolute vengeance and so we thought underlying this fashion, one of the dynamics that might be important is this idea of negative frequency dependence,” said Prof Rob Brooks, one of the study’s authors.

“The idea is that perhaps people start copying the George Clooneys and the Joaquin Phoenixs and start wearing those beards, but then when more and more people get onto the bandwagon the value of being on the bandwagon diminishes, so that might be why we’ve hit ‘peak beard’.”

“Peak beard” was the climax of the trend for beards in professions not naturally associated with a bristly chin – bankers, film stars, and even footballers began sporting facial hair.

Read the entire story here.

Video courtesy of Fawlty Towers / BBC Productions.

Gadzooks, Gosh, Tarnation and the F-Bomb

Blimey! How our lexicon of foul language has evolved! Up to a few hundred years ago most swear words and oaths bore some connection to God, Jesus or another religious figure or event. But the need to display some level of dubious piety, and to avoid a lightning bolt from the blue, led many to invent and mince a whole range of creative euphemisms. Hence, even today, we still hear words like “drat”, “gosh”, “tarnation”, “by George”, “by Jove”, “heck”, “strewth”, “odsbodikins”, “gadzooks” and “doggone”.

More recently, our linguistic penchant for shock and awe has stemmed mostly from labels, euphemistic or not, for body parts and bodily functions: think “freaking” or “shit” or “dick” and all manner of “f-words” and “c-words”. Sensitivities aside, many of us are fortunate enough to live in nations that have evolved beyond corporal or even capital punishment for uttering such blasphemous or vulgar indiscretions.

So, the next time you drop the “f-bomb” or a “dagnabbit” in public, reflect for a moment and thank yourself for supporting your precious democracy over the neighboring theocracy.

From WSJ:

At street level and in popular culture, Americans are freer with profanity now than ever before—or so it might seem to judge by how often people throw around the “F-bomb” or use a certain S-word of scatological meaning as a synonym for “stuff.” Or consider the millions of fans who adore the cartoon series “South Park,” with its pint-size, raucously foul-mouthed characters.

But things might look different to an expedition of anthropologists visiting from Mars. They might conclude that Americans today are as uptight about profanity as were our 19th-century forbears in ascots and petticoats. It’s just that what we think of as “bad” words is different. To us, our ancestors’ word taboos look as bizarre as tribal rituals. But the real question is: How different from them, for better or worse, are we?

In medieval English, at a time when wars were fought in disputes over religious doctrine and authority, the chief category of profanity was, at first, invoking—that is, swearing to—the name of God, Jesus or other religious figures in heated moments, along the lines of “By God!” Even now, we describe profanity as “swearing” or as muttering “oaths.”

It might seem like a kind of obsessive piety to us now, but the culture of that day was largely oral, and swearing—making a sincere oral testament—was a key gesture of commitment. To swear by or to God lightly was considered sinful, which is the origin of the expression to take the Lord’s name in vain (translated from Biblical Hebrew for “emptily”).

The need to avoid such transgressions produced various euphemisms, many of them familiar today, such as “by Jove,” “by George,” “gosh,” “golly” and “Odsbodikins,” which started as “God’s body.” “Zounds!” was a twee shortening of “By his wounds,” as in those of Jesus. A time traveler to the 17th century would encounter variations on that theme such as “Zlids!” and “Znails!”, referring to “his” eyelids and nails.

In the 19th century, “Drat!” was a way to say “God rot.” Around the same time, darn started when people avoided saying “Eternal damnation!” by saying “Tarnation!”, which, because of the D-word hovering around, was easy to recast as “Darnation!”, from which “darn!” was a short step.

By the late 18th century, sex, excretion and the parts associated with same had come to be treated as equally profane as “swearing” in the religious sense. Such matters had always been considered bawdy topics, of course, but the space for ordinary words referring to them had been shrinking for centuries already.

Chaucer had available to him a thoroughly inoffensive word referring to the sex act, swive. An anatomy book in the 1400s could casually refer to a part of the female anatomy with what we today call the C-word. But over time, referring to these things in common conversation came to be regarded with a kind of pearl-clutching horror.

By the 1500s, as English began taking its place alongside Latin as a world language with a copious high literature, a fashion arose for using fancy Latinate terms in place of native English ones for more private matters. Thus was born a slightly antiseptic vocabulary, with words like copulate and penis. Even today modern English has no terms for such things that are neither clinical nor vulgar, along the lines of arm or foot or whistle.

The burgeoning bourgeois culture of the late 1700s, both in Great Britain and America, was especially alarmist about the “down there” aspect of things. In growing cities with stark social stratification, a new gentry developed a new linguistic self-consciousness—more English grammars were published between 1750 and 1800 than had ever appeared before that time.

In speaking of cooked fowl, “white” and “dark” meat originated as terms to avoid mention of breasts and limbs. What one does in a restroom, another euphemism of this era, is only laboriously classified as repose. Bosom and seat (for the backside) originated from the same impulse.

Passages in books of the era can be opaque to us now without an understanding of how particular people had gotten: In Dickens’s “Oliver Twist,” Giles the butler begins, “I got softly out of bed; drew on a pair of…” only to be interrupted with “Ladies present…” after which he dutifully says “…of shoes, sir.” He wanted to say trousers, but because of where pants sit on the body, well…

Or, from the gargantuan Oxford English Dictionary, published in 1884 and copious enough to take up a shelf and bend it, you would never have known in the original edition that the F-word or the C-word existed.

Such moments extend well into the early 20th century. In a number called “Shuffle Off to Buffalo” in the 1933 movie musical “42nd Street,” Ginger Rogers sings “He did right by little Nelly / with a shotgun at his bell-” and then interjects “tummy” instead. “Belly” was considered a rude part of the body to refer to; tummy was OK because of its association with children.

Read the entire story here.

The Devout Atheist

Evolutionary biologist Richard Dawkins sprang to the public’s attention via his immensely popular book The Selfish Gene. Since its publication almost 40 years ago, its author has assumed the unofficial mantle of Atheist-in-Chief. His passionate and impatient defense of all things godless (some would call it a crusading offense) has rubbed many the wrong way, including numerous unbelievers. That said, his reasoning remains crystal clear and his focus laser-like. I just wish he would stay away from Twitter.

Check out his foundation here.

From the Guardian:

In Dublin, not long ago, Richard Dawkins visited a steakhouse called Darwin’s. He was in town to give a talk on the origins of life at Trinity College with the American physicist Lawrence Krauss. In the restaurant, a large model gorilla squatted in a corner and a series of sepia paintings of early man hung in the dining room – though, Dawkins pointed out, not quite in the right chronological order. A space by the bar had been refitted to resemble the interior of the Beagle, the vessel on which Charles Darwin sailed to South America in 1831 and conceived his theory of natural selection. “Oh look at this!” Dawkins said, examining the decor. “It’s terrific! Oh, wonderful.”

Over the years, Dawkins, a zoologist by training, has expressed admiration for Darwin in the way a schoolboy might worship a sporting giant. In his first memoir, Dawkins noted the “serendipitous realisation” that his full name – Clinton Richard Dawkins – shared the same initials as Charles Robert Darwin. He owns a prized first edition of On The Origin of Species, which he can quote from memory. For Dawkins, the book is totemic, the founding text of his career. “It’s such a thorough, unanswerable case,” he said one afternoon. “[Darwin] called it one long argument.” As a description of Dawkins’s own life, particularly its late phase, “one long argument” serves fairly well. As the global face of atheism over the last decade, Dawkins has ratcheted up the rhetoric in his self-declared war against religion. He is the general who chooses to fight on the front line – whose scorched-earth tactics have won him fervent admirers, and ferocious enemies. What is less clear, however, is whether he is winning.

Over dinner – chicken for Dawkins, steak for everyone else – he spoke little. He was anxious to leave early in order to discuss the format of the event with Krauss. Though Dawkins gives a talk roughly once a fortnight, he still obsessively overprepares. On this occasion, there was no need – he and Krauss had put on a similar show the night before at the University of Ulster in Belfast. They had also appeared on a radio talkshow, during which they had attempted to debate a creationist (an “idiot”, in Dawkins’s terminology). “She simply tried to shout down everything Lawrence and I said. So she was in effect going la la la la la.” Dawkins stuck his fingers in his ears as he sang.

Krauss and Dawkins have toured frequently as a double act, partners in a global quest to broadcast the wonder of science and the nonexistence of God. Dawkins has been on this mission ever since 1976, when he published The Selfish Gene, the book that made him famous, which has now sold over a million copies. Since then, he has written another 10 influential books on science and evolution, plus The God Delusion, his atheist blockbuster, and become the most prominent of the so-called New Atheists – a group of writers, including Christopher Hitchens and Sam Harris, who published anti-religion polemics in the years after 9/11.

An hour or so after dinner, the Burke Theatre in Trinity College, a large modern lecture hall with banked seating, was full. After separate presentations, Krauss and Dawkins conversed freely, swapping ideas on the origins of life. As he spoke, Dawkins took on a grandfatherly air, as though passing on hard-earned wisdom. He has always sought to inject beauty into biology, and his voice wavered with emotion as he shifted from dry fact to lyrical metaphor.

Dawkins has the stately confidence of one who has spent half a life behind a lectern. He has aged well, thanks to the determined jaw and carved cheekbones of a 1950s matinee idol. His hair remains in the style that has served him for 70 years, a lopsided sweep. A prominent brow and hawkish stare give him a look of constant urgency, as though he is waiting for everyone to catch up. In Dublin, his outfit was academic-on-tour: jacket, woolly jumper and tie, one of a collection hand-painted by his wife, Lalla Ward, which depict penguins, fish, birds of prey.

At the end of the Trinity event, a crowd of about 40 audience members descended on to the stage, clutching books to be signed. Dawkins eventually retreated into the wings to avoid a crush. One young schoolteacher lingered in the hallway long after the rest of the audience had left, in the hope of shaking Dawkins’s hand. Earlier that day, Dawkins had expressed bewilderment at his own celebrity. “I find the epidemic of selfies disconcerting,” he said. “It’s always, ‘one quick photo.’ One quick. But it never is.” Though he is used to receiving a steady flow of letters from fans of The God Delusion and new converts to atheism, he does not perceive himself as a figurehead. “I don’t need to say if I think of myself as a leader,” he said a few weeks later. “I simply need to say the book has sold three million copies.”

Dawkins turned 74 in March this year. To celebrate, he had dinner with Ward at Cherwell Boathouse, a smart restaurant overlooking the river in Oxford; the occasion was marred only slightly by a loud-voiced fellow diner, Dawkins recalled, “who quacked like Donald Duck”. An academic of his eminence could, by now, have eased into a distinguished late period: more books, the odd speech, master of an Oxford college, a gentle tending to his legacy. Though he is in a retrospective phase – one memoir published, a second on its way later this year – peaceful retreat from public life has not been the Dawkins way. “Some people might say why don’t you just get on with gardening,” he said. “I think [there’s a] passion for truth and a passion for justice that doesn’t allow me to do that.”

Instead, Dawkins remains indefatigably active. He rarely takes a holiday, but travels frequently to give talks – in the last four months he has been to Ireland, the Czech Republic, Bulgaria and Brazil. Though he says he prefers to speak about science, God inevitably looms. “I suppose some of what I do is an attempt to change people’s minds about religion,” he said, with some understatement, between events in Ireland. “And I do think that’s a politically important thing to be doing.” For Dawkins, who describes his own politics as “vaguely left”, this means a concern for the state of the world, and a desire, ultimately, to eradicate religion from society. In his mission, Dawkins is still, at heart, a teacher. “I would like to leave the world a better place,” he said. “I like to think my science books have had a positive educational effect, but I also want to leave the world a better place in influencing opinion in other fields where there is illogic, obscurantism, pretension.” Religious faith, for Dawkins, is above all a sign of faulty thinking, of ignorance; he wants to educate the ill-informed out of their mistakes. He sees religion, as he once put it on Twitter, as “an organised licence to be acceptably stupid”.

The two strands of Dawkins’s mission – promoting science, demolishing religion – are intended to be complementary. “If they are antagonistic to each other, that would be regrettable,” he said, “but I don’t see why they should be.” But antagonism is part of Dawkins’s daily life. “I suppose some of the passions that I show are more appropriate to a young man than somebody of my age.” Since his arrival on Twitter in 2008, his public pronouncements have become more combative – and, at times, flamboyantly irritable: “How dare you force your dopey unsubstantiated superstitions on innocent children too young to resist?,” he tweeted last June. “How DARE you?”


Read the entire story here.

Image: Richard Dawkins, 34th annual conference of American Atheists (2008). Public domain.

Regression Texting


Some culture watchers believe we are entering a linguistic death spiral. Our increasingly tech-driven communication is enabling our language to evolve in unforeseen ways, and some linguists believe the evolution is actually taking us backwards rather than forwards. Enter exhibit one into the record: the 👿 emoji.
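Worth remembering before the hand-wringing below: an emoji is not a new alphabet but an ordinary Unicode character with its own codepoint and official name. A quick, illustrative check in Python:

```python
# Every emoji is a standardized Unicode character with a codepoint
# and an official name, just like any letter.
import unicodedata

ch = "👿"
print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")  # U+1F47F IMP
```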

From the Guardian:

So it’s official. We are evolving backwards. Emoji, the visual system of communication that is incredibly popular online, is Britain’s fastest-growing language according to Professor Vyv Evans, a linguist at Bangor University.

The comparison he uses is telling – but not in the way the prof, who appears enthusiastic about emojis, presumably intends. “As a visual language emoji has already far eclipsed hieroglyphics, its ancient Egyptian precursor which took centuries to develop,” says Evans.

Perhaps that is because it is easier to go downhill than uphill. After millennia of painful improvement, from illiteracy to Shakespeare and beyond, humanity is rushing to throw it all away. We’re heading back to ancient Egyptian times, next stop the stone age, with a big yellow smiley grin on our faces.

Unicode, the company that created emojis, has announced it will release 36 more of the brainless little icons next year. Demand is massive: 72% of 18- to 25-year-olds find it easier to express their feelings in emoji pictures than through the written word, according to a survey for Talk Talk mobile.

As tends to happen in an age when technology is transforming culture on a daily basis, people relate such news with bland irony or apparent joy. Who wants to be the crusty old conservative who questions progress? But the simplest and most common-sense historical and anthropological evidence tells us that Emoji is not “progress” by any definition. It is plainly a step back.

Evans compares Emoji with ancient Egyptian hieroglyphics. Well indeed. Ancient Egypt was a remarkable civilisation, but it had some drawbacks. The Egyptians created a magnificent but static culture. They invented a superb artistic style and powerful mythology – then stuck with these for millennia. Hieroglyphs enabled them to write spells but not to develop a more flexible, questioning literary culture: they left that to the Greeks.

These jumped-up Aegean loudmouths, using an abstract non-pictorial alphabet they got from the Phoenicians, obviously and spectacularly outdid the Egyptians in their range of expression. The Greek alphabet was much more productive than all those lovely Egyptian pictures. That is why there is no ancient Egyptian Iliad or Odyssey.

In other words, there are harsh limits on what you can say with pictures. The written word is infinitely more adaptable. That’s why Greece rather than Egypt leapt forward and why Shakespeare was more articulate than the Aztecs.

Read the entire article here.

Image: A subset of new emojis proposed for adoption in 2016. The third emoji along the top row is, of course, “selfie”. Courtesy of Unicode.

 

Religious Dogma and DNA

Despite ongoing conflicts around the globe that are fueled or governed by religious fanaticism, it is entirely plausible that our general tendency toward supernatural belief is encoded in our DNA. Of course this does not mean that a God or various gods exist; it merely implies that, over time, natural selection generally favored those who believed in deities over those who did not. We are such complex and contradictory animals.

From NYT:

Most of us find it mind-boggling that some people seem willing to ignore the facts — on climate change, on vaccines, on health care — if the facts conflict with their sense of what someone like them believes. “But those are the facts,” you want to say. “It seems weird to deny them.”

And yet a broad group of scholars is beginning to demonstrate that religious belief and factual belief are indeed different kinds of mental creatures. People process evidence differently when they think with a factual mind-set rather than with a religious mind-set. Even what they count as evidence is different. And they are motivated differently, based on what they conclude. On what grounds do scholars make such claims?

First of all, they have noticed that the very language people use changes when they talk about religious beings, and the changes mean that they think about their realness differently. You do not say, “I believe that my dog is alive.” The fact is so obvious it is not worth stating. You simply talk in ways that presume the dog’s aliveness — you say she’s adorable or hungry or in need of a walk. But to say, “I believe that Jesus Christ is alive” signals that you know that other people might not think so. It also asserts reverence and piety. We seem to regard religious beliefs and factual beliefs with what the philosopher Neil Van Leeuwen calls different “cognitive attitudes.”

Second, these scholars have remarked that when people consider the truth of a religious belief, what the belief does for their lives matters more than, well, the facts. We evaluate factual beliefs often with perceptual evidence. If I believe that the dog is in the study but I find her in the kitchen, I change my belief. We evaluate religious beliefs more with our sense of destiny, purpose and the way we think the world should be. One study found that over 70 percent of people who left a religious cult did so because of a conflict of values. They did not complain that the leader’s views were mistaken. They believed that he was a bad person.

Third, these scholars have found that religious and factual beliefs play different roles in interpreting the same events. Religious beliefs explain why, rather than how. People who understand readily that diseases are caused by natural processes might still attribute sickness at a particular time to demons, or healing to an act of God. The psychologist Cristine H. Legare and her colleagues recently demonstrated that people use both natural and supernatural explanations in this interdependent way across many cultures. They tell a story, as recounted by Tracy Kidder’s book on the anthropologist and physician Paul Farmer, about a woman who had taken her tuberculosis medication and been cured — and who then told Dr. Farmer that she was going to get back at the person who had used sorcery to make her ill. “But if you believe that,” he cried, “why did you take your medicines?” In response to the great doctor she replied, in essence, “Honey, are you incapable of complexity?”

Moreover, people’s reliance on supernatural explanations increases as they age. It may be tempting to think that children are more likely than adults to reach out to magic to explain something, and that they increasingly put that mind-set to the side as they grow up, but the reverse is true. It’s the young kids who seem skeptical when researchers ask them about gods and ancestors, and the adults who seem clear and firm. It seems that supernatural ideas do things for adults they do not yet do for children.

Finally, scholars have determined that people don’t use rational, instrumental reasoning when they deal with religious beliefs. The anthropologist Scott Atran and his colleagues have shown that sacred values are immune to the normal cost-benefit trade-offs that govern other dimensions of our lives. Sacred values are insensitive to quantity (one cartoon can be a profound insult). They don’t respond to material incentives (if you offer people money to give up something that represents their sacred value, they often become more intractable in their refusal). Sacred values may even have different neural signatures in the brain.

The danger point seems to be when people feel themselves to be completely fused with a group defined by its sacred value. When Mr. Atran and his colleagues surveyed young men in two Moroccan neighborhoods associated with militant jihad (one of them home to five men who helped plot the 2004 Madrid train bombings, and then blew themselves up), they found that those who described themselves as closest to their friends and who upheld Shariah law were also more likely to say that they would suffer grievous harm to defend Shariah law. These people become what Mr. Atran calls “devoted actors” who are unconditionally committed to their sacred value, and they are willing to die for it.

Read the entire article here.

A Physics-Based Theory of Life


Those who subscribe to a non-creationist theory of the origins of life tend to gravitate towards the idea of the assembly of self-replicating organic molecules in our primeval oceans, the so-called primordial soup theory. Recently, however, Professor Jeremy England of MIT has proposed a thermodynamic explanation, which posits that, under the right conditions, inorganic matter tends to organize in a way that enables it to dissipate increasing amounts of energy. This is one of the fundamental attributes of living organisms.

Could we be the product of the Second Law of Thermodynamics, nothing more than the expression of increasing entropy?

Read more of this fascinating new hypothesis below or check out England’s paper on the Statistical Physics of Self-replication.

From Quanta:

Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
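The probability argument in that paragraph can be written down explicitly. This is the textbook Boltzmann relation from statistical mechanics, not England’s new result:

```latex
% Entropy counts microstates: W is the number of microscopic
% configurations compatible with a macrostate, k_B is Boltzmann's
% constant. Spread-out energy admits vastly more configurations.
S = k_B \ln W
% Hence, for an isolated system, the second law is a statement
% of overwhelming probability:
\Delta S \ge 0
```

There are overwhelmingly more microstates with the energy spread out than with it concentrated, which is all the second law asserts.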

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.
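A rough back-of-the-envelope shows why Schrödinger’s plant is thermodynamically legal. Under the standard simplification that sunlight arrives as heat at the Sun’s surface temperature (about 5,800 K) and is re-radiated at Earth’s ambient temperature (about 300 K), every joule the plant processes increases the entropy of the universe as a whole:

```latex
% Entropy bookkeeping per joule of sunlight processed,
% with T_sun ~ 5800 K and T_earth ~ 300 K:
\Delta S_{\mathrm{universe}}
  \approx \frac{Q}{T_{\mathrm{earth}}} - \frac{Q}{T_{\mathrm{sun}}}
  = Q\left(\frac{1}{300\,\mathrm{K}} - \frac{1}{5800\,\mathrm{K}}\right)
  \approx 3.2\times10^{-3}\,\mathrm{J/K}\ \text{per joule} \; > 0
```

The plant’s local drop in entropy is paid for many times over by the entropy it exports to its surroundings.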

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

Read the entire story here.

Image: Carnot engine diagram, where an amount of heat QH flows from a high temperature TH furnace through the fluid of the “working body” (working substance) and the remaining heat QC flows into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions. Courtesy of Wikipedia.

 

Non-Adaptive Evolution of the Very Small

Is every feature that arises from evolution an adaptation? Some evolutionary biologists think not. That is, some traits arising from the process of natural selection may be due to random occurrences that natural selection failed to discard. And, it seems, smaller organisms show this quite well. To many adaptationists this is heretical, but to some researchers it opens a new, fruitful avenue of inquiry, and may lead to a fine-tuning of our understanding of the evolutionary process.

From New Scientist:

I have spent my life working on slime moulds and they sent me a message that started me thinking. What puzzled me was that two different forms are found side-by-side in the soil everywhere from the tundra to the tropics. The obvious difference lies in the tiny stalks that disperse their spores. In one species this fruiting body is branched, in the other it is not.

I had assumed that the branched and the unbranched forms occupied separate ecological niches but I could not imagine what those niches might be. Perhaps there were none and neither shape had an advantage over the other, as far as natural selection was concerned.

I wrote this up and sent it to a wise and respected friend who responded with a furious letter saying that my conclusion was absurd: it was easy to imagine ways in which the two kinds of stalks might be separate adaptations and co-exist everywhere in the soil. This set me thinking again and I soon realised that both my position and his were guesses. They were hypotheses and neither could be proved.

There is no concept that is more central to evolution than natural selection, so adding this extra dimension of randomness was heresy. Because of the overwhelming success of Darwin’s natural selection, biologists – certainly all evolutionary biologists – find it hard to believe that a feature of any organism can have arisen (with minor exceptions) in any other way. Natural selection favours random genetic mutations that offer an advantage, therefore many people believe that all properties of an organism are an adaptation. If one cannot find the adaptive reason for a feature of an organism, one should just assume that there was once one, or that there is one that will be revealed in the future.

This matter has created some heated arguments. For example, the renowned biologists Stephen Jay Gould and Richard Lewontin wrote an inflammatory paper in 1979 attacking adaptionists for being like Dr Pangloss, the incurable optimist in Voltaire’s 1759 satire Candide. While their point was well taken, its aggressive tone produced counterattacks. Adaptionists assume that every feature of an organism arises as an adaption, but I assume that some features are the results of random mutations that escape being culled by natural selection. This is what I was suggesting for the branched and unbranched fruiting bodies of the slime moulds.

How can these organisms escape the stranglehold of selection? One explanation grabbed me and I have clung to it ever since; in fact it is the backbone of my new book. The reason that these organisms might have shapes that are not governed by natural selection is because they are so small. It turns out there are good reasons why this might be the case.

Development is a long, slow process for large organisms. Humans spend nine months in utero and keep growing in different ways for a long time after birth. An elephant’s gestation is even longer (about two years) and a mouse’s much shorter, but they are all vastly longer than that of a single-cell microorganism. Such small forms may divide every few hours; at most their development may span days, but whatever it is it will be a small fraction of that of a larger, more complex organism.

Large organisms develop in a series of steps usually beginning with the fertilisation of an egg that then goes through many cell divisions and an increase in size of the embryo, with many twists and turns as it progresses towards adulthood. These multitudinous steps involve the laying down of complex organs such as a heart or an eye.

Building a complex organism is an immense enterprise, and the steps are often interlocked in a sequence so that if an earlier step fails through a deleterious mutation, the result is very simple: the death of the embryo. I first came across this idea in a 1965 book by Lancelot Law Whyte called Internal Factors in Evolution and have been mystified ever since why the idea has been swallowed by oblivion. His thesis was straightforward. Not only is there selection of organisms in the environment – Darwinian natural selection, which is external – but there is also continuous internal selection during development. Maybe the idea was too simple and straightforward to have taken root.

This fits in neatly with my contention that the shape of microorganisms is more affected by randomness than for large, complex organisms. Being small means very few development steps, with little or no internal selection. The effect of a mutation is likely to be immediately evident in the external morphology, so adult variants are produced with large numbers of different shapes and there is an increased chance that some of these will be untouched by natural selection.

Compare this with what happens in a big, complex organism – a mammal, say. Only those mutations that occur at a late stage of development are likely to be viable – eye or hair colour in humans are obvious examples. Any unfavourable mutation that occurs earlier in development will likely be eliminated by internal selection.
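This argument is easy to make quantitative with a toy model (my own illustration with invented numbers, not a calculation from the book): if each developmental step has some fixed chance of being derailed by a given mutation, the probability that a random mutation survives internal selection to appear as a viable adult variant shrinks geometrically with the number of steps.

```python
# Toy model of "internal selection" during development (after
# Whyte, as described above). A random mutation must avoid
# derailing every interlocked step to reach adulthood.
# The derailment probability is invented for illustration.

p_derail = 0.3  # assumed chance a mutation breaks any one step

for n_steps in (1, 3, 10, 30, 100):
    p_viable = (1 - p_derail) ** n_steps
    print(f"{n_steps:3d} developmental steps -> "
          f"{p_viable:.2e} chance a random mutant is viable")
```

With these made-up numbers, an organism with one developmental step passes 70 percent of its mutants through to adulthood, while one with a hundred interlocked steps passes essentially none, which is why microbial shape can drift while complex body plans stay under tight control.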

Let us now examine the situation for microorganisms. What is the evidence that their shapes are less likely to be culled by natural selection? The best examples come from organisms that make mineral shells: Radiolaria (pictured) and diatoms with their silica skeletons and Foraminifera with their calciferous shells. About 50,000 species of radiolarians have been described, 100,000 species of diatoms and some 270,000 species among the Foraminifera – all with vastly different shapes. For example, radiolarian skeletons can be shaped like spiny balls, bells, crosses and octagonal pyramids, to name but a few.

If you are a strict adaptionist, you have to find a separate explanation for each shape. If you favour my suggestion that their shapes arose through random mutation and there is little or no selection, the problem vanishes. It turns out that this very problem concerned Darwin. In the third (and subsequent) editions of On the Origin of Species he has a passage that almost takes the wind out of my sails:

“If it were no advantage, these forms would be left by natural selection unimproved or but little improved; and might remain for indefinite ages in their present little advanced condition. And geology tells us that some of the lowest forms, as the infusoria and rhizopods, have remained for an enormous period in nearly their present state.”

Read the entire article here.

DarwinTunes

Researchers at Imperial College London recently posed an intriguing question and have since developed a cool experiment to test it. Does artistic endeavor, such as music, follow the same principles of evolutionary selection that Darwin described in biology? That is, does the funkiest survive? Though one has to wonder what the eminent scientist would have thought of some recent fusions of rap, dubstep and classical.

From the Guardian:

There were some funky beats at Imperial College London on Saturday at its annual science festival. As well as opportunities to create bogeys, see robots dance and try to get physics PhD students to explain their wacky world, this fascinating event included the chance to participate in a public game-like experiment called DarwinTunes.

Participants select tunes and “mate” them with other tunes to create musical offspring: if the offspring are in turn selected by other players, they “survive” and get the chance to reproduce their musical DNA. The experiment is online – you too can try to immortalise your selfish musical genes.
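The loop described above is, at bottom, a plain genetic algorithm. Here is a minimal sketch of that select-mate-mutate cycle in Python; the tune “genome” and the rating function are invented stand-ins, not DarwinTunes’ actual encoding or its crowd-sourced ratings:

```python
import random

# Minimal DarwinTunes-style loop: tunes are rated, the top half
# survive, and survivors "mate" (crossover plus mutation) to fill
# the next generation. Genome and rating are placeholders.

GENOME_LEN = 16   # a tune as 16 note slots
POP_SIZE = 20

def random_tune():
    return [random.randint(0, 7) for _ in range(GENOME_LEN)]

def listener_rating(tune):
    # Stand-in for real human ratings: reward smooth stepwise
    # motion between adjacent notes.
    return -sum(abs(a - b) for a, b in zip(tune, tune[1:]))

def mate(parent_a, parent_b, mutation_rate=0.05):
    cut = random.randrange(1, GENOME_LEN)   # one-point crossover
    child = parent_a[:cut] + parent_b[cut:]
    return [random.randint(0, 7) if random.random() < mutation_rate else note
            for note in child]

population = [random_tune() for _ in range(POP_SIZE)]
for _ in range(50):
    population.sort(key=listener_rating, reverse=True)
    survivors = population[:POP_SIZE // 2]   # the funkiest survive
    children = [mate(*random.sample(survivors, 2))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best tune:", max(population, key=listener_rating))
```

In the real experiment the fitness function is the players themselves: a tune “survives” only if enough listeners pick it, which is what makes it a model of cultural rather than natural selection.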

It is a model of evolution in practice that raises fascinating questions about culture and nature. These questions apply to all the arts, not just to dance beats. How does “cultural evolution” work? How close is the analogy between Darwin’s well-proven theory of evolution in nature and the evolution of art, literature and music?

The idea of cultural evolution was boldly defined by Jacob Bronowski as our fundamental human ability “not to accept the environment but to change it”. The moment the first stone tools appeared in Africa, about 2.5m years ago, a new, faster evolution, that of human culture, became visible on Earth: from cave paintings to the Renaissance, from Galileo to the 3D printer, this cultural evolution has advanced at breathtaking speed compared with the massive periods of time it takes nature to evolve new forms.

In DarwinTunes, cultural evolution is modelled as what the experimenters call “the survival of the funkiest”. Pulsing dance beats evolve through selections made by participants, and the music (it is claimed) becomes richer through this process of selection. Yet how does the model really correspond to the story of culture?

One way Darwin’s laws of nature apply to visual art is in the need for every successful form to adapt to its environment. In the forests of west and central Africa, wood carving was until recent times a flourishing art form. In the islands of Greece, where marble could be quarried easily, stone sculpture was more popular. In the modern technological world, the things that easily come to hand are not wood or stone but manufactured products and media images – so artists are inclined to work with the readymade.

At first sight, the thesis of DarwinTunes is a bit crude. Surely it is obvious that artists don’t just obey the selections made by their audience – that is, their consumers. To think they do is to apply the economic laws of our own consumer society across all history. Culture is a lot funkier than that.

Yet just because the laws of evolution need some adjustment to encompass art, that does not mean art is a mysterious spiritual realm impervious to scientific study. In fact, the evolution of evolution – the adjustments made by researchers to Darwin’s theory since it was unveiled in the Victorian age – offers interesting ways to understand culture.

One useful analogy between art and nature is the idea of punctuated equilibrium, introduced by some evolutionary scientists in the 1970s. Just as species may evolve not through a constant smooth process but by spectacular occasional leaps, so the history of art is punctuated by massively innovative eras followed by slower, more conventional periods.

Read the entire story here.

Image: Charles Darwin, 1868, photographed by Julia Margaret Cameron. Courtesy of Wikipedia.

13.6 Billion Versus 4004 BCE

The first number, 13.6 billion, is the age in years of the oldest known star in the cosmos. It was discovered recently by astronomers in Australia using the Australian National University’s SkyMapper telescope. The star is located in our Milky Way galaxy, about 6,000 light years away. A little closer to home, in Kentucky at the aptly named Creation Museum, the Synchronological Chart places the beginning of time and all things at 4004 BCE.

Interestingly enough, neither Australia nor Kentucky should exist according to the flat-earth myth, or the widespread pre-Columbus view of a world with an edge at the visible horizon. Yet the evolution versus creationism debates continue unabated. The chasm between the two camps remains a mere 13.6 billion years, give or take a handful of millennia. But perhaps over time those who subscribe to reason and the scientific method will prevail, an apt example of survival of the most adaptable at work.

Hitch, we still miss you!

From ars technica:

In 1878, the American scholar and minister Sebastian Adams put the final touches on the third edition of his grandest project: a massive Synchronological Chart that covers nothing less than the entire history of the world in parallel, with the deeds of kings and kingdoms running along together in rows over 25 horizontal feet of paper. When the chart reaches 1500 BCE, its level of detail becomes impressive; at 400 CE it becomes eyebrow-raising; at 1300 CE it enters the realm of the wondrous. No wonder, then, that in their 2010 book Cartographies of Time: A History of the Timeline, authors Daniel Rosenberg and Anthony Grafton call Adams’ chart “nineteenth-century America’s surpassing achievement in complexity and synthetic power… a great work of outsider thinking.”

The chart is also the last thing that visitors to Kentucky’s Creation Museum see before stepping into the gift shop, where full-sized replicas can be purchased for $40.

That’s because, in the world described by the museum, Adams’ chart is more than a historical curio; it remains an accurate timeline of world history. Time is said to have begun in 4004 BCE with the creation of Adam, who went on to live for 930 more years. In 2348 BCE, the Earth was then reshaped by a worldwide flood, which created the Grand Canyon and most of the fossil record even as Noah rode out the deluge in an 81,000 ton wooden ark. Pagan practices at the eight-story high Tower of Babel eventually led God to cause a “confusion of tongues” in 2247 BCE, which is why we speak so many different languages today.

Adams notes on the second panel of the chart that “all the history of man, before the flood, extant, or known to us, is found in the first six chapters of Genesis.”

Ken Ham agrees. Ham, CEO of Answers in Genesis (AIG), has become perhaps the foremost living young Earth creationist in the world. He has authored more books and articles than seems humanly possible and has built AIG into a creationist powerhouse. He also made national headlines when the slickly modern Creation Museum opened in 2007.

He has also been looking for the opportunity to debate a prominent supporter of evolution.

And so it was that, as a severe snow and sleet emergency settled over the Cincinnati region, 900 people climbed into cars and wound their way out toward the airport to enter the gates of the Creation Museum. They did not come for the petting zoo, the zip line, or the seasonal camel rides, nor to see the animatronic Noah chortle to himself about just how easy it had really been to get dinosaurs inside his ark. They did not come to see The Men in White, a 22-minute movie that plays in the museum’s halls in which a young woman named Wendy sees that what she’s been taught about evolution “doesn’t make sense” and is then visited by two angels who help her understand the truth of six-day special creation. They did not come to see the exhibits explaining how all animals had, before the Fall of humanity into sin, been vegetarians.

They came to see Ken Ham debate TV presenter Bill Nye the Science Guy—an old-school creation v. evolution throwdown for the PowerPoint age. Even before it began, the debate had been good for both men. Traffic to AIG’s website soared by 80 percent, Nye appeared on CNN, tickets sold out in two minutes, and post-debate interviews were lined up with Piers Morgan Live and MSNBC.

While plenty of Ham supporters filled the parking lot, so did people in bow ties and “Bill Nye is my Homeboy” T-shirts. They all followed the stamped dinosaur tracks to the museum’s entrance, where a pack of AIG staffers wearing custom debate T-shirts stood ready to usher them into “Discovery Hall.”

Security at the Creation Museum is always tight; the museum’s security force is made up of sworn (but privately funded) Kentucky peace officers who carry guns, wear flat-brimmed state trooper-style hats, and operate their own K-9 unit. For the debate, Nye and Ham had agreed to more stringent measures. Visitors passed through metal detectors complete with secondary wand screenings, packages were prohibited in the debate hall itself, and the outer gates were closed 15 minutes before the debate began.

Inside the hall, packed with bodies and the blaze of high-wattage lights, the temperature soared. The empty stage looked—as everything at the museum does—professionally designed, with four huge video screens, custom debate banners, and a pair of lecterns sporting Mac laptops. Twenty different video crews had set up cameras in the hall, and 70 media organizations had registered to attend. More than 10,000 churches were hosting local debate parties. As AIG technical staffers made final preparations, one checked the YouTube-hosted livestream—242,000 people had already tuned in before start time.

An AIG official took the stage eight minutes before start time. “We know there are people who disagree with each other in this room,” he said. “No cheering or—please—any disruptive behavior.”

At 6:59pm, the music stopped and the hall fell silent but for the suddenly prominent thrumming of the air conditioning. For half a minute, the anticipation was electric, all eyes fixed on the stage, and then the countdown clock ticked over to 7:00pm and the proceedings snapped to life. Nye, wearing his traditional bow tie, took the stage from the left; Ham appeared from the right. The two shook hands in the center to sustained applause, and CNN’s Tom Foreman took up his moderating duties.

Ham had won the coin toss backstage and so stepped to his lectern to deliver brief opening remarks. “Creation is the only viable model of historical science confirmed by observational science in today’s modern scientific era,” he declared, blasting modern textbooks for “imposing the religion of atheism” on students.

“We’re teaching people to think critically!” he said. “It’s the creationists who should be teaching the kids out there.”

And we were off.

Two kinds of science

Digging in the fossil fields of Colorado or North Dakota, scientists regularly uncover the bones of ancient creatures. No one doubts the existence of the bones themselves; they lie on the ground for anyone to observe or weigh or photograph. But in which animal did the bones originate? How long ago did that animal live? What did it look like? One of Ham’s favorite lines is that the past “doesn’t come with tags”—so the prehistory of a stegosaurus thigh bone has to be interpreted by scientists, who use their positions in the present to reconstruct the past.

For mainstream scientists, this is simply an obvious statement of our existential position. Until a real-life Dr. Emmett “Doc” Brown finds a way to power a DeLorean with a 1.21-gigawatt flux capacitor and shoot someone back through time to observe the flaring-forth of the Universe, the formation of the Earth, or the origins of life, the prehistoric past can’t be known except by interpretation. Indeed, this isn’t true only of prehistory; as Nye tried to emphasize, forensic scientists routinely use what they know of nature’s laws to reconstruct past events like murders.

For Ham, though, science is broken into two categories, “observational” and “historical,” and only observational science is trustworthy. In the initial 30-minute presentation of his position, Ham hammered the point home.

“You don’t observe the past directly,” he said. “You weren’t there.”

Ham spoke with the polish of a man who has covered this ground a hundred times before, has heard every objection, and has a smooth answer ready for each one.

When Bill Nye talks about evolution, Ham said, that’s “Bill Nye the Historical Science Guy” speaking—with “historical” being a pejorative term.

In Ham’s world, only changes that we can observe directly are the proper domain of science. Thus, when confronted with the issue of speciation, Ham readily admits that contemporary lab experiments on fast-breeding creatures like mosquitoes can produce new species. But he says that’s simply “micro-evolution” below the family level. He doesn’t believe that scientists can observe “macro-evolution,” such as the alteration of a lobe-finned fish into a tiger over millions of years.

Because they can’t see historical events unfold, scientists must rely on reconstructions of the past. Those might be accurate, but they simply rely on too many “assumptions” for Ham to trust them. When confronted during the debate with evidence from ancient trees that have more rings than there are years on the Adams Synchronological Chart, Ham simply shrugged.

“We didn’t see those layers laid down,” he said.

To him, the calculus of “one ring, one year” is merely an assumption when it comes to the past—an assumption possibly altered by cataclysmic events such as Noah’s flood.

In other words, “historical science” is dubious; we should defer instead to the “observational” account of someone who witnessed all past events: God, said to have left humanity an eyewitness account of the world’s creation in the book of Genesis. All historical reconstructions should thus comport with this more accurate observational account.

Mainstream scientists don’t recognize this divide between observational and historical ways of knowing (much as they reject Ham’s distinction between “micro” and “macro” evolution). Dinosaur bones may not come with tags, but neither does observed contemporary reality—think of a doctor presented with a set of patient symptoms, who then has to interpret what she sees in order to arrive at a diagnosis.

Given that the distinction between two kinds of science provides Ham’s key reason for accepting the “eyewitness account” of Genesis as a starting point, it was unsurprising to see Nye take generous whacks at the idea. You can’t observe the past? “That’s what we do in astronomy,” said Nye in his opening presentation. Since light takes time to get here, “All we can do in astronomy is look at the past. By the way, you’re looking at the past right now.”
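
Nye’s point is easy to check with a quick back-of-the-envelope computation in Python (the constants below are standard physical values):

# Light's finite speed means every astronomical observation is of the past.
distance_to_sun = 1.496e11   # metres (one astronomical unit)
speed_of_light = 2.998e8     # metres per second

print(distance_to_sun / speed_of_light / 60)   # ~8.3: sunlight is 8 minutes old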

Those in the present can study the past with confidence, Nye said, because natural laws are generally constant and can be used to extrapolate into the past.

“This idea that you can separate the natural laws of the past from the natural laws you have now is at the heart of our disagreement,” Nye said. “For lack of a better word, it’s magical. I’ve appreciated magic since I was a kid, but it’s not what we want in mainstream science.”

How do scientists know that these natural laws are correctly understood in all their complexity and interplay? What operates as a check on their reconstructions? That’s where the predictive power of evolutionary models becomes crucial, Nye said. Those models of the past should generate predictions which can then be verified—or disproved—through observations in the present.

Read the entire article here.

A Kid’s Book For Adults

One of the most engaging new books for young children is a picture book that explains evolution. By way of whimsical illustrations and comparisons of animal skeletons, the book — Bone By Bone — is able to deliver the story of evolutionary theory in an entertaining and compelling way.

Perhaps it could be used just as well for those adults who have trouble grappling with the fruits of the scientific method. The Texas State Board of Education would make an ideal place to begin.

Bone By Bone is written by veterinarian Sara Levine.

From Slate:

In some of the best children’s books, dandelions turn into stars, sharks and radishes merge, and pancakes fall from the sky. No one would confuse these magical tales for descriptions of nature. Small children can differentiate between “the real world and the imaginary world,” as psychologist Alison Gopnik has written. They just “don’t see any particular reason for preferring to live in the real one.”

Children’s nuanced understanding of the not-real surely extends to the towering heap of books that feature dinosaurs as playmates who fill buckets of sand or bake chocolate-chip cookies. The imaginative play of these books may be no different to kids than radishsharks and llama dramas.

But as a parent, friendly dinos never steal my heart. I associate them, just a little, with old creationist images of animals frolicking near the Garden of Eden, which carried the message that dinosaurs and man, both created by God on the sixth day, co-existed on the Earth until after the flood. (Never mind the evidence that dinosaurs went extinct millions of years before humans appeared.) The founder of the Creation Museum in Kentucky calls dinosaurs “missionary lizards,” and that phrase echoes in my head when I see all those goofy illustrations of dinosaurs in sunglasses and hats.

I’ve been longing for another kind of picture book: one that appeals to young children’s wildest imagination in service of real evolutionary thinking. Such a book could certainly include dinosaur skeletons or fossils. But Bone by Bone, by veterinarian and professor Sara Levine, fills the niche to near perfection by relying on dogs, rabbits, bats, whales, and humans. Levine plays with differences in their skeletons to groom kids for grand scientific concepts.

Bone by Bone asks kids to imagine what their bodies would look like if they had different configurations of bones, like extra vertebrae, longer limbs, or fewer fingers. “What if your vertebrae didn’t stop at your rear end? What if they kept going?” Levine writes, as a boy peers over his shoulder at the spinal column. “You’d have a tail!”

“What kind of animal would you be if your leg bones were much, much longer than your arm bones?” she wonders, as a girl in pink sneakers rises so tall her face disappears from the page. “A rabbit or a kangaroo!” she says, later adding a pika and a hare. “These animals need strong hind leg bones for jumping.” Levine’s questions and answers are delightfully simple for the scientific heft they carry.

With the lightest possible touch, Levine introduces the idea that bones in different vertebrates are related and that they morph over time. She starts with vertebrae, skulls and ribs. But other structures bear strong kinships in these animals, too. The bone in the center of a horse’s hoof, for instance, is related to a human finger. (“What would happen if your middle fingers and the middle toes were so thick that they supported your whole body?”) The bones that radiate out through a bat’s wing are linked to those in a human hand. (“A web of skin connects the bones to make wings so that a bat can fly.”) This is different from the wings of a bird or an insect; with bats, it’s almost as if they’re swimming through air.

Of course, human hands did not shape-shift into bats’ wings, or vice versa. Both derive from a common ancestral structure, which means they share an evolutionary past. Homology, as this kind of relatedness is called, is among “the first and in many ways the best evidence for evolution,” says Josh Rosenau of the National Center for Science Education. Comparing bones also paves the way for comparing genes and molecules, for grasping evolution at the next level of sophistication. Indeed, it’s hard to look at the bat wings and human hands as presented here without lighting up, at least a little, with these ideas. So many smart writers focus on preparing young kids to read or understand numbers. Why not do more to ready them for the big ideas of science? Why not pave the way for evolution? (This is easier to do with older kids, with books like The Evolution of Calpurnia Tate and Why Don’t Your Eyelashes Grow?)

Read the entire story here.

Image: Bone By Bone, book cover. Courtesy: Lerner Publishing Group

The Global Detective Story of Little Red Riding Hood

Intrepid literary detective work spanning Europe, China, Japan and Africa uncovers the roots of a famous children’s tale.

From the Independent:

Little Red Riding Hood’s closest relative may have been her ill-fated grandmother, but academics have discovered she has long-lost cousins as far away as China and Japan.

Employing scientific analysis commonly used by biologists, anthropologist Jamshid Tehrani has mapped the emergence of the story to an earlier tale from the first century AD – and found it has numerous links to similar stories across the globe.

The Durham University academic traced the roots of Little Red Riding Hood to a folk tale called The Wolf and the Kids, which subsequently “evolved twice”, he claims in his paper, published this week in the scientific journal PLOS ONE.

Dr Tehrani, who has previously studied cultural change over generations in areas such as textiles, debunked theories that the tale emerged in China, arriving via the Silk Route. Instead, he traced the origins to European oral traditions, which then spread east.

“The Chinese version is derived from European oral traditions and not vice versa,” he said.

The Chinese took Little Red Riding Hood and The Wolf and the Kids and blended them with local tales, he argued. Often the wolf is replaced with an ogre or a tiger.

The research analysed 58 variants of the tales and looked at 72 plot variables.

The scientific process used was called phylogenetic analysis, used by biologists to group closely-related organisms to map out branches of evolution. Dr Tehrani used maths to model the similarities of the plots and score them on the probability that they have the same origin.
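
The approach can be sketched in a few lines of code. The toy example below is not Dr Tehrani’s actual pipeline (the tales and binary plot-variable encodings are invented for illustration), but it shows the basic move: encode each tale as a vector of plot variables, measure pairwise distances, and build a tree of relatedness.

# Toy phylogenetic-style clustering of folk tales. Each tale is encoded
# as presence/absence of plot variables (e.g. "victim is a child",
# "villain disguises its voice", "victim escapes by trickery").
# The tales and encodings here are invented for illustration.
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

tales = {
    "Little Red Riding Hood": [1, 1, 1, 0, 1],
    "The Wolf and the Kids":  [0, 1, 1, 1, 0],
    "Tiger Grandmother":      [1, 1, 1, 0, 0],
}
labels = list(tales)
vectors = [tales[name] for name in labels]

# Hamming distance: the fraction of plot variables on which two tales
# differ. Tales sharing an origin should end up on nearby branches.
distances = pdist(vectors, metric="hamming")
tree = linkage(distances, method="average")
print(dendrogram(tree, labels=labels, no_plot=True)["ivl"])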

Little Red Riding Hood and The Wolf and the Kids, which concerns a wolf impersonating a goat to trick her kids and eat them, remain distinct stories. Dr Tehrani described it as “like a biologist showing that humans and other apes share a common ancestor but have evolved into distinct species”.

The Wolf and the Kids originated in the 1st century AD, with Little Red Riding Hood branching off 1,000 years later.

The story was immortalised by the Brothers Grimm in the 19th century, based on a tale written by Charles Perrault 200 years earlier. That derived from oral storytelling in France, Austria and northern Italy. Variants of Little Red Riding Hood can be found across Africa and Asia, including The Tiger Grandmother in Japan, China and Korea.

Dr Tehrani said: “My research cracks a long-standing mystery. The African tales turn out to be descended from The Wolf and the Kids but over time, they have evolved to become like Little Red Riding Hood, which is also likely to be descended from The Wolf and the Kids.”

The academic, who is now studying a range of other fairy tales, said: “This exemplifies a process biologists call convergent evolution, in which species independently evolve similar adaptations.”

Read the entire article here.

Image: Old father Wolf eyes up Little Red Riding Hood. Illustration courtesy of Tyler Garrison / Guardian.

Ultra-Conservation of Words

Linguists have traditionally held that words in a language have an average lifespan of around 8,000 years. Words change and are often discarded or replaced over time as the language evolves and co-opts other words from other tongues. English has been particularly adept at collecting many new words from different languages, which partly explains its global popularity.

Recently, however, linguists have found that a small group of words have a lifespan that far exceeds the usual understanding. These 15,000- to 20,000-year-old ultra-conserved words may be the linguistic precursors to common cognates — words with similar sound and meaning — that now span many different language families containing hundreds of languages.

From the Washington Post:

You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!

It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying.

A team of researchers has come up with a list of two dozen “ultraconserved words” that have survived 150 centuries. It includes some predictable entries: “mother,” “not,” “what,” “to hear” and “man.” It also contains surprises: “to flow,” “ashes” and “worm.”

The existence of the long-lived words suggests there was a “proto-Eurasiatic” language that was the common ancestor to about 700 contemporary languages that are the native tongues of more than half the world’s people.

“We’ve never heard this language, and it’s not written down anywhere,” said Mark Pagel, an evolutionary theorist at the University of Reading in England who headed the study published Monday in the Proceedings of the National Academy of Sciences. “But this ancestral language was spoken and heard. People sitting around campfires used it to talk to each other.”

In all, “proto-Eurasiatic” gave birth to seven language families. Several of the world’s important language families, however, fall outside that lineage, such as the one that includes Chinese and Tibetan, several African language families, and those of American Indians and Australian aborigines.

That a spoken sound carrying a specific meaning could remain unchanged over 15,000 years is a controversial idea for most historical linguists.

“Their general view is pessimistic,” said William Croft, a professor of linguistics at the University of New Mexico who studies the evolution of language and was not involved in the study. “They basically think there’s too little evidence to even propose a family like Eurasiatic.” In Croft’s view, however, the new study supports the plausibility of an ancestral language whose audible relics cross tongues today.

Pagel and three collaborators studied “cognates,” which are words that have the same meaning and a similar sound in different languages. Father (English), padre (Italian), pere (French), pater (Latin) and pitar (Sanskrit) are cognates. Those words, however, are from languages in one family, the Indo-European. The researchers looked much further afield, examining seven language families in all.
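
For a flavour of how cognate similarity can be scored mechanically, a few lines of Python compare the “father” words directly. This is a crude illustration only, not the study’s method; real cognate work relies on systematic sound correspondences and statistical models of replacement rates, not raw string similarity.

# Crude string-similarity scores between candidate cognates of "father".
from difflib import SequenceMatcher

for word in ["padre", "pere", "pater", "pitar"]:
    score = SequenceMatcher(None, "father", word).ratio()
    print(word, round(score, 2))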

Read the entire article here and be sure to check out the interactive audio.

Pseudo-Science in Missouri and 2+2=5

Hot on the heels of recent successes by the Texas State Board of Education (SBOE) in revising history and science curricula, legislators in Missouri are planning to redefine commonly accepted scientific principles. Much like the situation in Texas, a bill in the Missouri House would mandate that intelligent design be taught alongside evolution, in equal measure, in all the state’s schools. But, in a bid to take the lead in reversing thousands of years of scientific progress, Missouri plans to redefine the actual scientific framework. So, if you can’t make “intelligent design” fit the principles of accepted science, then just change the principles themselves — first up, change the meanings of the terms “scientific hypothesis” and “scientific theory”.

We suspect that a couple of years from now, in Missouri, 2+2 will be redefined to equal 5, and that logic, deductive reasoning and experimentation will be replaced with mushy green peas.

From ars technica:

Each year, state legislatures play host to a variety of bills that would interfere with science education. Most of these are variations on a boilerplate intended to get supplementary materials into classrooms criticizing evolution and climate change (or to protect teachers who do). They generally don’t mention creationism, but the clear intent is to sneak religious content into the science classrooms, as evidenced by previous bills introduced by the same lawmakers. Most of them die in the legislature (although the opponents of evolution have seen two successes).

The efforts are common enough that we don’t generally report on them. But, every now and then, a bill comes along that veers off this script. And late last month, the Missouri House started considering one that deviates in staggering ways. Instead of being quiet about its intent, it redefines science, provides a clearer definition of intelligent design than any of the idea’s advocates ever have, and mandates equal treatment of the two. In the process, it mangles things so badly that teachers would be prohibited from discussing Mendel’s laws.

Although even the Wikipedia entry for scientific theory includes definitions provided by the world’s most prestigious organizations of scientists, the bill’s sponsor Rick Brattin has seen fit to invent his own definition. And it’s a head-scratcher: “‘Scientific theory,’ an inferred explanation of incompletely understood phenomena about the physical universe based on limited knowledge, whose components are data, logic, and faith-based philosophy.” The faith or philosophy involved remains unspecified.

Brattin also mentions philosophy when he redefines hypothesis as, “a scientific theory reflecting a minority of scientific opinion which may lack acceptance because it is a new idea, contains faulty logic, lacks supporting data, has significant amounts of conflicting data, or is philosophically unpopular.” The reason for that becomes obvious when he turns to intelligent design, which he defines as a hypothesis. Presumably, he thinks it’s only a hypothesis because it’s philosophically unpopular, since his bill would ensure it ends up in the classrooms.

Intelligent design is roughly the concept that life is so complex that it requires a designer, but even its most prominent advocates have often been a bit wary about defining its arguments all that precisely. Not so with Brattin—he lists 11 concepts that are part of ID. Some of these are old-fashioned creationist claims, like the suggestion that mutations lead to “species degradation” and a lack of transitional fossils. But it also has some distinctive twists like the claim that common features, usually used to infer evolutionary relatedness, are actually a sign of parts re-use by a designer.

Eventually, the bill defines “standard science” as “knowledge disclosed in a truthful and objective manner and the physical universe without any preconceived philosophical demands concerning origin or destiny.” It then demands that all science taught in Missouri classrooms be standard science. But there are some problems with this that become apparent immediately. The bill demands anything taught as scientific law have “no known exceptions.” That would rule out teaching Mendel’s laws, which have a huge variety of exceptions, such as when two genes are linked together on the same chromosome.

Read the entire article following the jump.

Image: Seal of Missouri. Courtesy of Wikipedia.

Evolution and Autocatalysis

A clever idea about the process of emergence from mathematicians at the University of Vermont has some evolutionary biologists thinking.

From MIT Technology Review:

One of the most puzzling questions about the origin of life is how the rich chemical landscape that makes life possible came into existence.

This landscape would have consisted among other things of amino acids, proteins and complex RNA molecules. What’s more, these molecules must have been part of a rich network of interrelated chemical reactions which generated them in a reliable way.

Clearly, all that must have happened before life itself emerged. But how?

One idea is that groups of molecules can form autocatalytic sets. These are self-sustaining chemical factories, in which the product of one reaction is the feedstock or catalyst for another. The result is a virtuous, self-contained cycle of chemical creation.
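
The idea can be made concrete in code. Below is a minimal sketch (the reaction network is invented, and it captures only one ingredient of the formalism): repeatedly prune reactions whose reactants or catalyst cannot be produced from the “food” chemicals, and whatever survives is a self-sustaining core.

# Minimal sketch of extracting a self-sustaining reaction core.
# Each reaction is (reactants, catalyst, products); the network and
# food set are invented for illustration.
food = {"A", "B"}
reactions = {
    "r1": ({"A", "B"}, "X", {"X"}),  # A + B -> X, catalysed by its own product
    "r2": ({"A", "X"}, "Y", {"Y"}),  # feeds on r1's output
    "r3": ({"Z"},      "Y", {"W"}),  # Z is never producible, so r3 is pruned
}

core = dict(reactions)
while True:
    # Everything producible from food plus the surviving reactions' products.
    available = food | {p for _, _, prods in core.values() for p in prods}
    keep = {name: r for name, r in core.items()
            if r[0] <= available and r[1] in available}
    if keep == core:
        break
    core = keep

print(sorted(core))  # ['r1', 'r2']: a self-contained cycle of creation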

Today, Stuart Kauffman at the University of Vermont in Burlington and a couple of pals take a look at the broader mathematical properties of autocatalytic sets. In examining this bigger picture, they come to an astonishing conclusion that could have remarkable consequences for our understanding of complexity, evolution and the phenomenon of emergence.

They begin by deriving some general mathematical properties of autocatalytic sets, showing that such a set can be made up of many autocatalytic subsets of different types, some of which can overlap.

In other words, autocatalytic sets can have a rich complex structure of their own.

They go on to show how evolution can work on a single autocatalytic set, producing new subsets within it that are mutually dependent. This process sets up an environment in which newer subsets can evolve.

“In other words, self-sustaining, functionally closed structures can arise at a higher level (an autocatalytic set of autocatalytic sets), i.e., true emergence,” they say.

That’s an interesting view of emergence and certainly seems a sensible approach to the problem of the origin of life. It’s not hard to imagine groups of molecules operating together like this. And indeed, biochemists have recently discovered simple autocatalytic sets that behave in exactly this way.

But what makes the approach so powerful is that the mathematics does not depend on the nature of chemistry–it is substrate independent. So the building blocks in an autocatalytic set need not be molecules at all but any units that can manipulate other units in the required way.

These units can be complex entities in themselves. “Perhaps it is not too far-fetched to think, for example, of the collection of bacterial species in your gut (several hundreds of them) as one big autocatalytic set,” say Kauffman and co.

And they go even further. They point out that the economy is essentially the process of transforming raw materials into products such as hammers and spades that themselves facilitate further transformation of raw materials and so on. “Perhaps we can also view the economy as an (emergent) autocatalytic set, exhibiting some sort of functional closure,” they speculate.

Read the entire article after the jump.

The Missing Linc

LincRNA that is. Recent discoveries hint at the potentially crucial role of this new class of genetic material in embryonic development, cell and tissue differentiation and even speciation and evolution.

From the Economist:

THE old saying that where there’s muck, there’s brass has never proved more true than in genetics. Once, and not so long ago, received wisdom was that most of the human genome—perhaps as much as 99% of it—was “junk”. If this junk had a role, it was just to space out the remaining 1%, the genes in which instructions about how to make proteins are encoded, in a useful way in the cell nucleus.

That, it now seems, was about as far from the truth as it is possible to be. The decade or so since the completion of the Human Genome Project has shown that lots of the junk must indeed have a function. The culmination of that demonstration was the publication, in September, of the results of the ENCODE project. This suggested that almost two-thirds of human DNA, rather than just 1% of it, is being copied into molecules of RNA, the chemical that carries protein-making instructions to the sub-cellular factories which turn those proteins out, and that as a consequence, rather than there being just 23,000 genes (namely, the bits of DNA that encode proteins), there may be millions of them.

The task now is to work out what all these extra genes are up to. And a study just published in Genome Biology, by David Kelley and John Rinn of Harvard University, helps do that for one new genetic class, a type known as lincRNAs. In doing so, moreover, Dr Kelley and Dr Rinn show just how complicated the modern science of genetics has become, and hint also at how animal species split from one another.

Lincs in the chain

Molecules of lincRNA are similar to the messenger-RNA molecules which carry protein blueprints. They do not, however, encode proteins. More than 9,000 sorts are known, and most of those whose job has been tracked down are involved in the regulation of other genes, for example by attaching themselves to the DNA switches that control those genes.

LincRNA is rather odd, though. It often contains members of a second class of weird genetic object. These are called transposable elements (or, colloquially, “jumping genes”, because their DNA can hop from one place to another within the genome). Transposable elements come in several varieties, but one group of particular interest are known as endogenous retroviruses. These are the descendants of ancient infections that have managed to hide away in the genome and get themselves passed from generation to generation along with the rest of the genes.

Dr Kelley and Dr Rinn realised that the movement within the genome of transposable elements is a sort of mutation, and wondered if it has evolutionary consequences. Their conclusion is that it does, for when they looked at the relation between such elements and lincRNA genes, they found some intriguing patterns.

In the first place, lincRNAs are much more likely to contain transposable elements than protein-coding genes are. More than 83% do so, in contrast to only 6% of protein-coding genes.

Second, those transposable elements are particularly likely to be endogenous retroviruses, rather than any of the other sorts of element.

Third, the interlopers are usually found in the bit of the gene where the process of copying RNA from the DNA template begins, suggesting they are involved in switching genes on or off.

And fourth, lincRNAs containing one particular type of endogenous retrovirus are especially active in pluripotent stem cells, the embryonic cells that are the precursors of all other cell types. That indicates these lincRNAs have a role in the early development of the embryo.

Previous work suggests lincRNAs are also involved in creating the differences between various sorts of tissue, since many lincRNA genes are active in only one or a few cell types. Given that their principal job is regulating the activities of other genes, this makes sense.

Even more intriguingly, studies of lincRNA genes from species as diverse as people, fruit flies and nematode worms, have found they differ far more from one species to another than do protein-coding genes. They are, in other words, more species specific. And that suggests they may be more important than protein-coding genes in determining the differences between those species.

Read the entire article after the jump.

Image: Darwin’s finches or Galapagos finches. Darwin, 1845. Courtesy of Wikipedia.

Us: Perhaps It’s All Due to Gene miR-941

Geneticists have discovered a gene that helps explain how humans and apes diverged from their common ancestor around 6 million years ago.

From the Guardian:

Researchers have discovered a new gene they say helps explain how humans evolved from chimpanzees.

The gene, called miR-941, appears to have played a crucial role in human brain development and could shed light on how we learned to use tools and language, according to scientists.

A team at the University of Edinburgh compared it to 11 other species of mammals, including chimpanzees, gorillas, mice and rats.

The results, published in Nature Communications, showed that the gene is unique to humans.

The team believe it emerged between six million and one million years ago, after humans evolved from apes.

Researchers said it is the first time a new gene carried by humans and not by apes has been shown to have a specific function in the human body.

Martin Taylor, who led the study at the Institute of Genetics and Molecular Medicine at the University of Edinburgh, said: “As a species, humans are wonderfully inventive – we are socially and technologically evolving all the time.

“But this research shows that we are innovating at a genetic level too.

“This new molecule sprang from nowhere at a time when our species was undergoing dramatic changes: living longer, walking upright, learning how to use tools and how to communicate.

“We’re now hopeful that we will find more new genes that help show what makes us human.”

The gene is highly active in two areas of the brain, controlling decision-making and language abilities, with the study suggesting it could have a role in the advanced brain functions that make us human.

Read the entire article following the jump.

Image courtesy of ABCNews.

Charles Darwin Runs for Office

British voters may recall Screaming Lord Sutch, 3rd Earl of Harrow, of the Official Monster Raving Loony Party, who ran in over 40 parliamentary elections during the 1980s and 90s. He never won, but garnered a respectable number of votes and many fans (he was also a musician).

The United States followed a more dignified path in the 2012 elections, when Charles Darwin ran for a Congressional seat in Georgia. Darwin failed to win, but collected a respectable 4,000 votes. His opponent, Paul Broun, believes that the Earth “is but about 9,000 years old”. Interestingly, Representative Broun serves on the United States House Committee on Science, Space and Technology.

From Slate:

Anti-evolution Congressman Paul Broun (R-Ga.) ran unopposed in Tuesday’s election, but nearly 4,000 voters wrote in Charles Darwin to protest their representative’s views. (Broun called evolution “lies straight from the pit of hell.”) Darwin fell more than 205,000 votes short of victory, but what would have happened if the father of evolution had out-polled Broun?

Broun still would have won. Georgia, like many other states, doesn’t count votes for write-in candidates who have not filed a notice of intent to stand for election. Even if the final tally had been reversed, with Charles Darwin winning 209,000 votes and Paul Broun 4,000, Broun would have kept his job.

That’s not to say dead candidates can’t win elections. It happens all the time, but only when the candidate dies after being placed on the ballot. In Tuesday’s election, Orange County, Fla., tax collector Earl Wood won more than 56 percent of the vote, even though he died in October at the age of 96 after holding the office for more than 40 years. Florida law allowed the Democratic Party, of which Wood was a member, to choose a candidate to receive Wood’s votes. In Alabama, Charles Beasley won a seat on the Bibb County Commission despite dying on Oct. 12. (Beasley’s opponent lamented the challenge of running a negative campaign against a dead man.) The governor will appoint a replacement.

Read the entire article after the jump.

The Beauty of Ugliness

The endless pursuit of beauty in human affairs probably pre-dates our historical record. We certainly know that ancient Egyptians used cosmetics believing them to offer magical and religious powers, in addition to aesthetic value.

Yet paradoxically beauty is rather subjective and often fleeting. The French singer, songwriter, composer and bon viveur Serge Gainsbourg once said that “ugliness is superior to beauty because it lasts longer”. Author Stephen Bayley argues in his new book, “Ugly: The Aesthetics of Everything”, that beauty is downright boring.

From the Telegraph:

Beauty is boring. And the evidence is piling up. An article in the journal Psychological Science now confirms what partygoers have known forever: that beauty and charm are no more directly linked than a high IQ and a talent for whistling.

A group of scientists set out to discover whether physically attractive people also have appealing character traits and values, and found, according to Lihi Segal-Caspi, who carried out part of the research, that “beautiful people tend to focus more on conformity and self-promotion than independence and tolerance”.

Certainly, while a room full of beautiful people might be impressively stiff with the whiff of Chanel No 5, the intellectual atmosphere will be carrying a very low charge. If positive at all.

The grizzled and gargoyle-like Parisian chanteur, and legendary lover, Serge Gainsbourg always used to pick up the ugliest girls at parties. This was not simply because predatory male folklore insists that ill-favoured women will be more “grateful”, but because Gainsbourg, a stylish contrarian, knew that the conversation would be better, the uglier the girl.

Beauty is a conformist conspiracy. And the conspirators include the fashion, cosmetics and movie businesses: a terrible Greek chorus of brainless idolatry towards abstract form. The conspirators insist that women – and, nowadays, men, too – should be un-creased, smooth, fat-free, tanned and, with the exception of the skull, hairless. Flawlessly dull. Even Hollywood once acknowledged the weakness of this proposition: Marilyn Monroe was made more attractive still by the addition of a “beauty spot”, a blemish turned into an asset.

The red carpet version of beauty is a feeble, temporary construction. Bodies corrode and erode, sag and bulge, just as cars rust and buildings develop a fine patina over time. This is not to be feared, rather to be understood and enjoyed. Anyone wishing to arrest these processes with the aid of surgery, aerosols, paint, glue, drugs, tape and Lycra must be both very stupid and very vain. Hence the problems encountered in conversation with beautiful people: stupidity and vanity rarely contribute much to wit and creativity.

Fine features may be all very well, but the great tragedy of beauty is that it is so ephemeral. Albert Camus said it “drives us to despair, offering for a minute the glimpse of an eternity that we should like to stretch out over the whole of time”. And Gainsbourg agreed when he said: “Ugliness is superior to beauty because it lasts longer.” A hegemony of beautiful perfection would be intolerable: we need a good measure of ugliness to keep our senses keen. If everything were beautiful, nothing would be.

And yet, despite the evidence against, there has been a conviction that beauty and goodness are somehow inextricably and permanently linked. Political propaganda exploited our primitive fear of ugliness, so we had Second World War American posters of Japanese looking like vampire bats. The Greeks believed that beauty had a moral character: beautiful people – discus-throwers and so on – were necessarily good people. Darwin explained our need for “beauty” in saying that breeding attractive children is a survival characteristic: I may feel the need to fuse my premium genetic material with yours, so that humanity continues in the same fine style.

This became a lazy consensus, described as the “beauty premium” by US economists Markus M Mobius and Tanya S Rosenblat. The “beauty premium” insists that as attractive children grow into attractive adults, they may find it easier to develop agreeable interpersonal communications skills because their audience reacts more favourably to them. In this beauty-related employment theory, short people are less likely to get a good job. As Randy Newman sang: “Short people got no reason to live.” So Darwin’s argument that evolutionary forces favour a certain physical type may be proven in the job market as well as the wider world.

But as soon as you try to grasp the concept of beauty, it disappears.

Read the entire article following the jump.

Image courtesy of Google search.

Extending Moore’s Law Through Evolution

From Smithsonian:

In 1965, Intel co-founder Gordon Moore made a prediction about computing that has held true to this day. Moore’s law, as it came to be known, forecast that the number of transistors we’d be able to cram onto a circuit—and thereby, the effective processing speed of our computers—would double roughly every two years. Remarkably enough, this rule has been accurate for nearly 50 years, but most experts now predict that this growth will slow by the end of the decade.

Someday, though, a radical new approach to creating silicon semiconductors might enable this rate to continue—and could even accelerate it. As detailed in a study published in this month’s Proceedings of the National Academy of Sciences, a team of researchers from the University of California at Santa Barbara and elsewhere have harnessed the process of evolution to produce enzymes that create novel semiconductor structures.

“It’s like natural selection, but here, it’s artificial selection,” Daniel Morse, professor emeritus at UCSB and a co-author of the study, said in an interview. After taking an enzyme found in marine sponges and mutating it into many various forms, “we’ve selected the one in a million mutant DNAs capable of making a semiconductor.”

In an earlier study, Morse and other members of the research team had discovered silicatein—a natural enzyme used by marine sponges to construct their silica skeletons. The mineral, as it happens, also serves as the building block of semiconductor computer chips. “We then asked the question—could we genetically engineer the structure of the enzyme to make it possible to produce other minerals and semiconductors not normally produced by living organisms?” Morse said.

To make this possible, the researchers isolated and made many copies of the part of the sponge’s DNA that codes for silicatein, then intentionally introduced millions of different mutations in the DNA. By chance, some of these would likely lead to mutant forms of silicatein that would produce different semiconductors, rather than silica—a process that mirrors natural selection, albeit on a much shorter time scale, and directed by human choice rather than survival of the fittest.
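
The logic of such directed evolution is simple to simulate. The sketch below is a toy (the “gene”, fitness function and parameters are invented; the real experiment screened mutant silicatein enzymes for semiconductor-forming ability), but it shows the mutate-and-select loop at work:

import random

# Toy directed-evolution loop: mutate a gene at random, keep the fittest.
# Fitness here is just similarity to an arbitrary target string; in the
# real experiment, fitness was the ability to form a semiconductor.
ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"

def fitness(gene):
    return sum(a == b for a, b in zip(gene, TARGET))

def mutate(gene, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in gene)

population = ["A" * len(TARGET)] * 50
for _ in range(100):
    offspring = [mutate(g) for g in population for _ in range(10)]
    population = sorted(offspring, key=fitness, reverse=True)[:50]

best = population[0]
print(best, fitness(best))  # converges towards the target after enough rounds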

Read the entire article after the jump.

The Inevitability of Life: A Tale of Protons and Mitochondria

A fascinating article by Nick Lane, a leading researcher into the origins of life. Lane is a Research Fellow at University College London.

He suggests that it would be surprising if simple, bacteria-like life were not common throughout the universe. However, the acquisition of one cell by another — an event that led to all higher organisms on planet Earth — is an altogether much rarer occurrence. So are we alone in the universe?

From the New Scientist:

UNDER the intense stare of the Kepler space telescope, more and more planets similar to our own are revealing themselves to us. We haven’t found one exactly like Earth yet, but so many are being discovered that it appears the galaxy must be teeming with habitable planets.

These discoveries are bringing an old paradox back into focus. As physicist Enrico Fermi asked in 1950, if there are many suitable homes for life out there and alien life forms are common, where are they all? More than half a century of searching for extraterrestrial intelligence has so far come up empty-handed.

Of course, the universe is a very big place. Even Frank Drake’s famously optimistic “equation” for life’s probability suggests that we will be lucky to stumble across intelligent aliens: they may be out there, but we’ll never know it. That answer satisfies no one, however.

There are deeper explanations. Perhaps alien civilisations appear and disappear in a galactic blink of an eye, destroying themselves long before they become capable of colonising new planets. Or maybe life very rarely gets started even when conditions are perfect.

If we cannot answer these kinds of questions by looking out, might it be possible to get some clues by looking in? Life arose only once on Earth, and if a sample of one were all we had to go on, no grand conclusions could be drawn. But there is more than that. Looking at a vital ingredient for life – energy – suggests that simple life is common throughout the universe, but it does not inevitably evolve into more complex forms such as animals. I might be wrong, but if I’m right, the immense delay between life first appearing on Earth and the emergence of complex life points to another, very different explanation for why we have yet to discover aliens.

Living things consume an extraordinary amount of energy, just to go on living. The food we eat gets turned into the fuel that powers all living cells, called ATP. This fuel is continually recycled: over the course of a day, humans each churn through 70 to 100 kilograms of the stuff. This huge quantity of fuel is made by enzymes, biological catalysts fine-tuned over aeons to extract every last joule of usable energy from reactions.

The enzymes that powered the first life cannot have been as efficient, and the first cells must have needed a lot more energy to grow and divide – probably thousands or millions of times as much energy as modern cells. The same must be true throughout the universe.

This phenomenal energy requirement is often left out of considerations of life’s origin. What could the primordial energy source have been here on Earth? Old ideas of lightning or ultraviolet radiation just don’t pass muster. Aside from the fact that no living cells obtain their energy this way, there is nothing to focus the energy in one place. The first life could not go looking for energy, so it must have arisen where energy was plentiful.

Today, most life ultimately gets its energy from the sun, but photosynthesis is complex and probably didn’t power the first life. So what did? Reconstructing the history of life by comparing the genomes of simple cells is fraught with problems. Nevertheless, such studies all point in the same direction. The earliest cells seem to have gained their energy and carbon from the gases hydrogen and carbon dioxide. The reaction of H2 with CO2 produces organic molecules directly, and releases energy. That is important, because it is not enough to form simple molecules: it takes buckets of energy to join them up into the long chains that are the building blocks of life.

A second clue to how the first life got its energy comes from the energy-harvesting mechanism found in all known life forms. This mechanism was so unexpected that there were two decades of heated altercations after it was proposed by British biochemist Peter Mitchell in 1961.

Universal force field

Mitchell suggested that cells are powered not by chemical reactions, but by a kind of electricity, specifically by a difference in the concentration of protons (the charged nuclei of hydrogen atoms) across a membrane. Because protons have a positive charge, the concentration difference produces an electrical potential difference between the two sides of the membrane of about 150 millivolts. It might not sound like much, but because it operates over only 5 millionths of a millimetre, the field strength over that tiny distance is enormous, around 30 million volts per metre. That’s equivalent to a bolt of lightning.
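
Those numbers check out; in Python:

# Field strength is voltage divided by distance.
voltage = 0.15      # 150 millivolts, in volts
thickness = 5e-9    # 5 millionths of a millimetre, in metres

print(voltage / thickness)  # 3e7 V/m, i.e. 30 million volts per metre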

Mitchell called this electrical driving force the proton-motive force. It sounds like a term from Star Wars, and that’s not inappropriate. Essentially, all cells are powered by a force field as universal to life on Earth as the genetic code. This tremendous electrical potential can be tapped directly, to drive the motion of flagella, for instance, or harnessed to make the energy-rich fuel ATP.

However, the way in which this force field is generated and tapped is extremely complex. The enzyme that makes ATP is a rotating motor powered by the inward flow of protons. Another protein that helps to generate the membrane potential, NADH dehydrogenase, is like a steam engine, with a moving piston for pumping out protons. These amazing nanoscopic machines must be the product of prolonged natural selection. They could not have powered life from the beginning, which leaves us with a paradox.

Read the entire article following the jump.

Image: Transmission electron microscope image of a thin section cut through an area of mammalian lung tissue. The high magnification image shows a mitochondrion. Courtesy of Wikipedia.

Human Evolution: Stalled

It takes no expert neuroscientist, anthropologist or evolutionary biologist to recognize that human evolution has probably stalled. After all, one only needs to observe our obsession with reality TV. Yes, evolution screeched to a halt around 1999, when reality TV hit critical mass in the mainstream public consciousness. So, what of evolution?

From the Wall Street Journal:

If you write about genetics and evolution, one of the commonest questions you are likely to be asked at public events is whether human evolution has stopped. It is a surprisingly hard question to answer.

I’m tempted to give a flippant response, borrowed from the biologist Richard Dawkins: Since any human trait that increases the number of babies is likely to gain ground through natural selection, we can say with some confidence that incompetence in the use of contraceptives is probably on the rise (though only if those unintended babies themselves thrive enough to breed in turn).

More seriously, infertility treatment is almost certainly leading to an increase in some kinds of infertility. For example, a procedure called “intra-cytoplasmic sperm injection” allows men with immobile sperm to father children. This is an example of the “relaxation” of selection pressures caused by modern medicine. You can now inherit traits that previously prevented human beings from surviving to adulthood, procreating when they got there or caring for children thereafter. So the genetic diversity of the human genome is undoubtedly increasing.

Or it was until recently. Now, thanks to pre-implantation genetic diagnosis, parents can deliberately choose to implant embryos that lack certain deleterious mutations carried in their families, with the result that genes for Tay-Sachs, Huntington’s and other diseases are retreating in frequency. The old and overblown worry of the early eugenicists—that “bad” mutations were progressively accumulating in the species—is beginning to be addressed not by stopping people from breeding, but by allowing them to breed, safe in the knowledge that they won’t pass on painful conditions.

Still, recent analyses of the human genome reveal a huge number of rare—and thus probably fairly new—mutations. One study, by John Novembre of the University of California, Los Angeles, and his colleagues, looked at 202 genes in 14,002 people and found one genetic variant in somebody every 17 letters of DNA code, much more than expected. “Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different,” said Dr. Novembre.

Another team, led by Joshua Akey of the University of Washington, studied 1,351 people of European and 1,088 of African ancestry, sequencing 15,585 genes and locating more than a half million single-letter DNA variations. People of African descent had twice as many new mutations as people of European descent, or 762 versus 382. Dr. Akey blames the population explosion of the past 5,000 years for this increase. Not only does a larger population allow more variants; it also implies less severe selection against mildly disadvantageous genes.

So we’re evolving as a species toward greater individual (rather than racial) genetic diversity. But this isn’t what most people mean when they ask if evolution has stopped. Mainly they seem to mean: “Has brain size stopped increasing?” For a process that takes millions of years, any answer about a particular instant in time is close to meaningless. Nonetheless, the short answer is probably “yes.”

Read the entire article after the jump.

Image: The “Robot Evolution”. Courtesy of STRK3.

The Evolutionary Benefits of Middle Age

David Bainbridge, author of “Middle Age: A Natural History”, examines the benefits of middle age. Yes, really. For those of us in “middle age” it’s reassuring to see that this period is not limited to decline, disease and senility. Rather, it’s a pre-programmed redistribution of physical and mental resources designed to cope with our ever-increasing life spans.

From David Bainbridge over at New Scientist:

As a 42-year-old man born in England, I can expect to live for about another 38 years. In other words, I can no longer claim to be young. I am, without doubt, middle-aged.

To some people that is a depressing realization. We are used to dismissing our fifth and sixth decades as a negative chapter in our lives, perhaps even a cause for crisis. But recent scientific findings have shown just how important middle age is for every one of us, and how crucial it has been to the success of our species. Middle age is not just about wrinkles and worry. It is not about getting old. It is an ancient, pivotal episode in the human life span, preprogrammed into us by natural selection, an exceptional characteristic of an exceptional species.

Compared with other animals, humans have a very unusual pattern to our lives. We take a very long time to grow up, we are long-lived, and most of us stop reproducing halfway through our life span. A few other species have some elements of this pattern, but only humans have distorted the course of their lives in such a dramatic way. Most of that distortion is caused by the evolution of middle age, which adds two decades that most other animals simply do not get.

An important clue that middle age isn’t just the start of a downward spiral is that it does not bear the hallmarks of general, passive decline. Most body systems deteriorate very little during this stage of life. Those that do, deteriorate in ways that are very distinctive, are rarely seen in other species and are often abrupt.

For example, our ability to focus on nearby objects declines in a predictable way: Farsightedness is rare at 35 but universal at 50. Skin elasticity also decreases reliably and often surprisingly abruptly in early middle age. Patterns of fat deposition change in predictable, stereotyped ways. Other systems, notably cognition, barely change.

Each of these changes can be explained in evolutionary terms. In general, it makes sense to invest in the repair and maintenance only of body systems that deliver an immediate fitness benefit — that is, those that help to propagate your genes. As people get older, they no longer need spectacular visual acuity or mate-attracting, unblemished skin. Yet they do need their brains, and that is why we still invest heavily in them during middle age.

As for fat — that wonderfully efficient energy store that saved the lives of many of our hard-pressed ancestors — its role changes when we are no longer gearing up to produce offspring, especially in women. As the years pass, less fat is stored in depots ready to meet the demands of reproduction — the breasts, hips and thighs — or under the skin, where it gives a smooth, youthful appearance. Once our babymaking days are over, fat is stored in larger quantities and also stored more centrally, where it is easiest to carry about. That way, if times get tough we can use it for our own survival, thus freeing up food for our younger relatives.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Middle Age Couple Laughing. Courtesy of Cindi Matthews / Flickr.[end-div]

Culturomics

[div class=attrib]From the Wall Street Journal:[end-div]

Can physicists produce insights about language that have eluded linguists and English professors? That possibility was put to the test this week when a team of physicists published a paper drawing on Google’s massive collection of scanned books. They claim to have identified universal laws governing the birth, life course and death of words.

The paper marks an advance in a new field dubbed “Culturomics”: the application of data-crunching to subjects typically considered part of the humanities. Last year a group of social scientists and evolutionary theorists, plus the Google Books team, showed off the kinds of things that could be done with Google’s data, which include the contents of five-million-plus books, dating back to 1800.

Published in Science, that paper gave the best-yet estimate of the true number of words in English—a million, far more than any dictionary has recorded (the 2002 Webster’s Third New International Dictionary has 348,000). More than half of the language, the authors wrote, is “dark matter” that has evaded standard dictionaries.

The paper also tracked word usage through time (each year, for instance, 1% of the world’s English-speaking population switches from “sneaked” to “snuck”). It showed, too, that we seem to be putting history behind us more quickly, judging by the speed with which terms fall out of use. References to the year “1880” dropped by half in the 32 years after that date, while the half-life of “1973” was a mere decade.

In the new paper, Alexander Petersen, Joel Tenenbaum and their co-authors looked at the ebb and flow of word usage across various fields. “All these different words are battling it out against synonyms, variant spellings and related words,” says Mr. Tenenbaum. “It’s an inherently competitive, evolutionary environment.”

When the scientists analyzed the data, they found striking patterns not just in English but also in Spanish and Hebrew. There has been, the authors say, a “dramatic shift in the birth rate and death rates of words”: Deaths have increased and births have slowed.

English continues to grow—the 2011 Culturomics paper suggested a rate of 8,500 new words a year. The new paper, however, says that the growth rate is slowing. Partly because the language is already so rich, the “marginal utility” of new words is declining: Existing things are already well described. This led them to a related finding: The words that do manage to be born now become more popular than new words once did, possibly because they describe something genuinely new (think “iPod,” “Internet,” “Twitter”).

Higher death rates for words, the authors say, are largely a matter of homogenization. The explorer William Clark (of Lewis & Clark) spelled “Sioux” 27 different ways in his journals (“Sieoux,” “Seaux,” “Souixx,” etc.), and several of those variants would have made it into 19th-century books. Today spell-checking programs and vigilant copy editors choke off such chaotic variety much more quickly, in effect speeding up the natural selection of words. (The database does not include the world of text- and Twitter-speak, so some of the verbal chaos may just have shifted online.)
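The half-life arithmetic quoted above is easy to make concrete. Here is a minimal sketch in Python of one way to compute a term’s half-life from yearly usage counts; the frequency figures are invented for illustration and are not from the paper or the Google Books corpus.

    # Hypothetical relative frequencies of the term "1880" in books, by year.
    # Invented numbers for illustration only; not data from the paper.
    usage = {1880: 1.00, 1890: 0.82, 1900: 0.65, 1912: 0.50, 1930: 0.31}

    def half_life(usage):
        """Years from a term's peak usage until usage first falls to half the peak."""
        peak_year = max(usage, key=usage.get)
        half = usage[peak_year] / 2
        for year in sorted(y for y in usage if y > peak_year):
            if usage[year] <= half:
                return year - peak_year
        return None  # usage never fell to half its peak in the data

    print(half_life(usage))  # prints 32, matching the figure quoted above

On the real corpus the same computation would simply run over the yearly n-gram counts for every term of interest.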

[div class=attrib]Read the entire article here.[end-div]

Culture, Language and Genes

In the early 19th century Noah Webster set about redefining written English. His aim was to standardize the language of the fledgling nation and to distinguish American from British usage. In his own words, “as an independent nation, our honor requires us to have a system of our own, in language as well as government.”

He used his dictionary, which still bears his name today, as a tool to cleanse English of its stubborn reliance on aristocratic pedantry and its over-reliance on Latin and Greek. He “simplified” the spelling of numerous words that he believed were constructed according to rules that were all too complicated. Thus “colour” became “color”, “honour” became “honor”, “centre” became “center”, “behaviour” became “behavior”, and “traveller” became “traveler”.

Webster offers a perfect example of why humanity seems so adept at fragmenting into diverse cultural groups that thrive through mutual incomprehension. In “Wired for Culture”, evolutionary biologist Mark Pagel provides a compelling explanation based on that small yet very selfish biological building block, the gene.

[div class=attrib]From the Wall Street Journal:[end-div]

The island of Gaua, part of Vanuatu in the Pacific, is just 13 miles across, yet it has five distinct native languages. Papua New Guinea, an area only slightly bigger than Texas, has 800 languages, some spoken by just a few thousand people.

Evolutionary biologists have long gotten used to the idea that bodies are just genes’ ways of making more genes, survival machines that carry genes to the next generation. Think of a salmon struggling upstream just to expend its body (now expendable) in spawning. Dr. Pagel’s idea is that cultures are an extension of this: that the way we use culture is to promote the long-term interests of our genes.

It need not be this way. When human beings’ lives became dominated by culture, they could have adopted habits that did not lead to having more descendants. But on the whole we did not; we set about using culture to favor survival of those like us at the expense of other groups, using religion, warfare, cooperation and social allegiance. As Dr. Pagel comments: “Our genes’ gamble at handing over control to…ideas paid off handsomely” in the conquest of the world.

What this means, he argues, is that if our “cultures have promoted our genetic interests throughout our history,” then our “particular culture is not for us, but for our genes.”

We’re expendable. The allegiance we feel to one tribe—religious, sporting, political, linguistic, even racial—is a peculiar mixture of altruism toward the group and hostility to other groups. Throughout history, united groups have stood while divided ones have fallen.

Language is the most striking exemplar of Dr. Pagel’s thesis. He calls language “one of the most powerful, dangerous and subversive traits that natural selection has ever devised.” He draws attention to the curious parallels between genetics and linguistics. Both are digital systems, in which words or base pairs are recombined to make an infinite possibility of messages. (Elsewhere I once noted the numerical similarity between Shakespeare’s vocabulary of about 20,000 distinct words and his genome of about 21,000 genes).

Dr. Pagel points out that language is a “technology for rewiring other people’s minds…without either of you having to perform surgery.” But natural selection was unlikely to favor such a technology if it helped just the speaker, or just the listener, at the expense of the other. Rather, he says that, just as the language of the genes promotes its own survival via a larger cooperative entity called the body, so language itself endures via the survival of the individual and the tribe.
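The “digital systems” parallel has a simple combinatorial core that a toy calculation makes vivid. The figures below are my own illustration, not Dr. Pagel’s: a small alphabet, recombined, yields an effectively inexhaustible space of messages.

    import math

    vocabulary = 20_000   # roughly Shakespeare's stock of distinct words
    sentence_len = 10     # a modest ten-word utterance
    bases = 4             # the DNA alphabet: A, C, G, T
    gene_len = 1_000      # a gene-sized stretch of DNA

    # Recombination turns a small alphabet into an astronomical message space.
    sentences = vocabulary ** sentence_len
    dna_digits = int(gene_len * math.log10(bases)) + 1

    print(f"possible ten-word sentences: about 10^{len(str(sentences)) - 1}")
    print(f"possible gene-length DNA sequences: a {dna_digits}-digit number")

Neither space could be exhausted in the lifetime of the universe, which is all the “infinite possibility of messages” requires.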

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of PA / Daily Mail.[end-div]

Woman and Man, and Fish?

A widely held aphorism states that owners often come to look like their pets, or vice versa. So, might it apply to humans and fish? Well, Ted Sabarese, a photographer based in New York, provides an answer in a series of fascinating portraits.

[div class=attrib]From Kalliopi Monoyios over at Scientific American:[end-div]

I can’t say for certain whether New York-based photographer Ted Sabarese had science or evolution in mind when he conceived of this series. But I’m almost glad he never responded to my follow-up questions about his inspiration behind these. Part of the fun of art is its mirror-like quality: everyone sees something different when faced with it because everyone brings a different set of experiences and expectations to the table. When I look at these I see equal parts “you are what you eat,” “your inner fish,” and “United Colors of Benetton.”

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Discover more of Ted Sabarese’s work here.[end-div]