Category Archives: Biosciences

Zebra Stripes


Why do zebras have stripes? Well, we’ve all learned from an early age that their peculiar and unique black and white stripes are an adaptation to combat predators. One theory suggests that the stripes are camouflage. Another theory suggests that the stripes are there to confuse predators. Yet another proposes that the stripes are a vivid warning signal.

But Tim Caro, professor of wildlife biology at the University of California, has a thoroughly different idea, conveyed in his new book, Zebra Stripes. After twenty years of study he’s convinced that the zebra’s stripes have a more mundane purpose — a deterrent to pesky biting flies.

From Wired:

At four in the morning, Tim Caro roused his colleagues. Bleary-eyed and grumbling, they followed him to the edge of the village, where the beasts were hiding. He sat them down in chairs, and after letting their eyes adjust for a minute, he asked them if they saw anything. And if so, would they please point where?

Not real beasts. Despite being camped in Tanzania’s Katavi National Park, Caro was asking his colleagues to identify pelts—from a wildebeest, an impala, and a zebra—that he had draped over chairs or clotheslines. Caro wanted to know if the zebra’s stripes gave it any sort of camouflage in the pre-dawn, when many predators hunt, and he needed the sort of replicability he could not count on from the animals roaming the savannah. “I lost a lot of social capital on that experiment,” says Caro. “If you’re going to be woken up at all, it’s important to be woken up for something exciting or unpredictable, and this was neither.”

The experiment was one of hundreds Caro performed over a twenty-year scientific odyssey to discover why zebras have stripes—a question that nearly every major biologist since Alfred Russel Wallace has tried to answer. “It became sort of a challenge to me to try and investigate all the existing hypotheses so I could not only identify the right one,” he says, “but just as importantly kill all those remaining.” His new book, Zebra Stripes, chronicles every detail.

Read the entire story here.

Image: Zebras, Botswana. Courtesy: Paul Maritz, 2002. Creative Commons Attribution-Share Alike 3.0.


Are You Smarter Than My Octopus?


My pet octopus has moods. It can change the color of its skin on demand. It watches me with its huge eyes. It’s inquisitive and can manipulate objects. Importantly, my octopus has around half a billion neurons in its brain, compared with around 100 billion in mine, and around 50 million in your pet gerbil.

Ok, let me stop for a moment. I don’t actually have a pet octopus. But the rest is true — about the octopus’ remarkable abilities. So, does it have a mind and is it sentient?

From the Atlantic:

Drawing on the work of other researchers, from primatologists to fellow octopologists and philosophers, Godfrey-Smith suggests two reasons for the large nervous system of the octopus. One has to do with its body. For an animal like a cat or a human, details of the skeleton dictate many of the motions the animal can make. You can’t roll your arm into a neat spiral from wrist to shoulder— your bones and joints get in the way. An octopus, having no skeleton, has no such constraint. It can, and frequently does, roll up some of its arms; or it can choose to make one (or several) of them stiff, creating an elbow. Surely the animal needs a huge number of neurons merely to be well coordinated when roaming about the reef.

At the same time, octopuses are versatile predators, eating a wide variety of food, from lobsters and shrimps to clams and fish. Octopuses that live in tide pools will occasionally leap out of the water to catch passing crabs; some even prey on incautious birds, grabbing them by the legs, pulling them underwater, and drowning them. Animals that evolve to tackle diverse kinds of food may tend to evolve larger brains than animals that always handle food in the same way (think of a frog catching insects).

Like humans, octopuses learn new skills. In some species, individuals inhabit a den for only a week or so before moving on, so they are constantly learning routes through new environments. Similarly, the first time an octopus tackles a clam, say, it has to figure out how to open it—can it pull it apart, or would it be more effective to drill a hole? If consciousness is necessary for such tasks, then perhaps the octopus does have an awareness that in some ways resembles our own.

Perhaps, indeed, we should take the “mammalian” behaviors of octopuses at face value. If evolution can produce similar eyes through different routes, why not similar minds? Or perhaps, in wishing to find these animals like ourselves, what we are really revealing is our deep desire not to be alone.

Read the entire article here.

Image: Common octopus. Courtesy: Wikipedia. CC BY-SA 3.0.


Wound Man


No, the image is not a still from a forthcoming episode of Law & Order or Criminal Minds. Nor is it a nightmarish Hieronymus Bosch artwork.

Rather, “Wound Man”, as he was known, is a visual table of contents to a medieval manuscript of medical cures, treatments and surgeries. Wound Man first appeared in German surgical texts in the early 15th century. Arranged around each of his various wounds and ailments are references to further details on appropriate treatments. For instance, reference number 38 alongside an arrow penetrating Wound Man’s thigh, “An arrow whose shaft is still in place”, leads to details on how to address the wound — presumably a relatively common occurrence in the Middle Ages.

From Public Domain Review:

Staring impassively out of the page, he bears a multitude of graphic wounds. His skin is covered in bleeding cuts and lesions, stabbed and sliced by knives, spears and swords of varying sizes, many of which remain in the skin, protruding porcupine-like from his body. Another dagger pierces his side, and through his strangely transparent chest we see its tip puncture his heart. His thighs are pierced with arrows, some intact, some snapped down to just their heads or shafts. A club slams into his shoulder, another into the side of his face.

His neck, armpits and groin sport rounded blue buboes, swollen glands suggesting that the figure has contracted plague. His shins and feet are pockmarked with clustered lacerations and thorn scratches, and he is beset by rabid animals. A dog, snake and scorpion bite at his ankles, a bee stings his elbow, and even inside the cavity of his stomach a toad aggravates his innards.

Despite this horrendous cumulative barrage of injuries, however, the Wound Man is very much alive. For the purpose of this image was not to threaten or inspire fear, but to herald potential cures for all of the depicted maladies. He contrarily represented something altogether more hopeful than his battered body: an arresting reminder of the powerful knowledge that could be channelled and dispensed in the practice of late medieval medicine.

The earliest known versions of the Wound Man appeared at the turn of the fifteenth century in books on the surgical craft, particularly works from southern Germany associated with the renowned Würzburg surgeon Ortolf von Baierland (died before 1339). Accompanying a text known as the “Wundarznei” (The Surgery), these first Wound Men effectively functioned as a human table of contents for the cures contained within the relevant treatise. Look closely at the remarkable Wound Man shown above from the Wellcome Library’s MS. 49 – a miscellany including medical material produced in Germany in about 1420 – and you see that the figure is penetrated not only by weapons but also by text.

Read the entire article here.

Image: The Wound Man. Courtesy: Wellcome Library’s MS. 49 — Source (CC BY 4.0). Public Domain Review.


How and Why Did Metamorphosis Evolve?


Evolution is a truly wondrous thing. It has given us eyes and lots of grey matter [which we still don’t use very well]. It has given us the beautiful tiger and shimmering hues and soaring songs of our birds. It has given us the towering Sequoias, creepy insects, gorgeous ocean-bound creatures and invisible bacteria and viruses. Yet for all its wondrous adaptations one evolutionary invention still seems mysteriously supernatural — metamorphosis.

So, how and why did it evolve? A compelling new theory on the origins of insect metamorphosis by James W. Truman and Lynn M. Riddiford is excerpted below (from a detailed article in Scientific American).

The theory posits that a beneficial mutation around 300 million years ago led to the emergence of metamorphosis in insects:

By combining evidence from the fossil record with studies on insect anatomy and development, biologists have established a plausible narrative about the origin of insect metamorphosis, which they continue to revise as new information surfaces. The earliest insects in Earth’s history did not metamorphose; they hatched from eggs, essentially as miniature adults. Between 280 million and 300 million years ago, however, some insects began to mature a little differently—they hatched in forms that neither looked nor behaved like their adult versions. This shift proved remarkably beneficial: young and old insects were no longer competing for the same resources. Metamorphosis was so successful that, today, as many as 65 percent of all animal species on the planet are metamorphosing insects.

And, there are essentially three types of metamorphosis:

Wingless ametabolous insects, such as silverfish and bristletails, undergo little or no metamorphosis. When they hatch from eggs, they already look like adults, albeit tiny ones, and simply grow larger over time through a series of molts in which they shed their exoskeletons. Hemimetaboly, or incomplete metamorphosis, describes insects such as cockroaches, grasshoppers and dragonflies that hatch as nymphs—miniature versions of their adult forms that gradually develop wings and functional genitals as they molt and grow. Holometaboly, or complete metamorphosis, refers to insects such as beetles, flies, butterflies, moths and bees, which hatch as wormlike larvae that eventually enter a quiescent pupal stage before emerging as adults that look nothing like the larvae.

And, it’s backed by a concrete survival and reproductive advantage:

[T]he enormous numbers of metamorphosing insects on the planet speak for its success as a reproductive strategy. The primary advantage of complete metamorphosis is eliminating competition between the young and old. Larval insects and adult insects occupy very different ecological niches. Whereas caterpillars are busy gorging themselves on leaves, completely disinterested in reproduction, butterflies are flitting from flower to flower in search of nectar and mates. Because larvas and adults do not compete with one another for space or resources, more of each can coexist relative to species in which the young and old live in the same places and eat the same things.

Read the entire article here.

Image: Old World Swallowtail (Papilio machaon). Courtesy: fesoj – Otakárek fenyklový [Papilio machaon]. CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=7263187


Of Zebrafish and Men


A novel experiment in gene-editing shows how limbs of Earth’s land-dwelling creatures may have evolved from their fishy ancestors.

From University of Chicago:

One of the great transformations required for the descendants of fish to become creatures that could walk on land was the replacement of long, elegant fin rays by fingers and toes. In the Aug. 17, 2016 issue of Nature, scientists from the University of Chicago show that the same cells that make fin rays in fish play a central role in forming the fingers and toes of four-legged creatures.

After three years of painstaking experiments using novel gene-editing techniques and sensitive fate mapping to label and track developing cells in fish, the researchers describe how the small flexible bones found at the ends of fins are related to fingers and toes, which are more suitable for life on land.

“When I first saw these results you could have knocked me over with a feather,” said the study’s senior author, Neil Shubin, PhD, the Robert R. Bensley Distinguished Service Professor of Organismal Biology and Anatomy at the University of Chicago. Shubin is an authority on the transition from fins to limbs.

The team focused on Hox genes, which control the body plan of a growing embryo along the head-to-tail, or shoulder-to-fingertip, axis. Many of these genes are crucial for limb development.

They studied the development of cells, beginning, in some experiments, soon after fertilization, and followed them as they became part of an adult fin. Previous work has shown that when Hox genes, specifically those related to the wrists and digits of mice (HoxD and HoxA), were deleted, the mice did not develop those structures. When Tetsuya Nakamura, the study’s lead author, deleted those same genes in zebrafish, the long fin rays were greatly reduced.

“What matters is not what happens when you knock out a single gene but when you do it in combination,” Nakamura explained. “That’s where the magic happens.”

The researchers also used a high-energy CT scanner to see the minute structures within the adult zebrafish fin. These can be invisible, even to most traditional microscopes. The scans revealed that fish lacking certain genes lost fin rays, but the small bones made of cartilage increased in number.

The authors suspect that the mutants that Nakamura made caused cells to stop migrating from the base of the fin to their usual position near the tip. This inability to migrate meant that there were fewer cells to make fin rays, leaving more cells at the fin base to produce cartilage elements.

Read more here.

Image: A female specimen of a zebrafish (Danio rerio) breed with fantails. Courtesy: Wikipedia / Azul.


Psychopath Versus Sociopath


I’ve been writing for a while now about a certain person who wishes to become the next President of the United States. His name is Donald Trump. He carries with him an entire encyclopedia — no, bookshelves of encyclopedias — of negative character traits. But chief among these: he lacks empathy, tends to feel no guilt or remorse, and disregards the needs and rights of others. These are traits common to both psychopaths and sociopaths.

Over the last few years I’ve been describing Mr. Trump as a psychopath. Others, particularly recently (here, here, here), characterize him as a sociopath. Who’s right?

I’m turning to some psychological resources, excerpted and paraphrased below — American Psychological Association, Psychology Today, WebMD — to help me clarify the differences.

On first analysis it looks like Mr. Trump straddles both! Though I must say that, regardless, I don’t want either a sociopath or a psychopath, or a psycho-sociopath or a socio-psychopath in the White House with fingers anywhere close to the nuclear codes.

Sociopath:

Sociopaths tend to be volatile: nervous, easily agitated or angered, and prone to emotional outbursts, including fits of rage. In addition, they may be uneducated and live on the fringes of traditional society, unable to hold down a steady job or stay in one place for very long. They are frequently transients and drifters.

It is difficult but not impossible for sociopaths to form attachments with others. They are capable of bonding emotionally and demonstrating empathy with certain people in certain situations but not others. Many sociopaths have no regard for society in general or its rules. Sociopathy, on the other hand, is more likely the product of environmental influences (“nurture”), such as childhood trauma and physical/emotional abuse.

Psychopath:

Psychopaths are unable to form emotional attachments or feel real empathy with others, although they often have disarming or even charming personalities. Psychopaths are very manipulative and can easily gain people’s trust. They learn to mimic emotions, despite their inability to actually feel them, and will appear normal to unsuspecting people. Psychopaths are often well educated and hold steady jobs. Some are so good at manipulation and mimicry that they have families and other long-term relationships without those around them ever suspecting their true nature.

It is believed that psychopathy is largely the result of “nature” (genetics) and is related to a physiological defect that results in the underdevelopment of the part of the brain responsible for impulse control and emotions.

Infographic courtesy of Psychologia.


Thoughts As Shapes

Jonathan Jackson has a very rare form of a rare neurological condition. He has synesthesia, which is a cross-connection of two (or more) unrelated senses, in which a perception in one sense causes an automatic experience in another. Some synesthetes, for instance, see various sounds or musical notes as distinct colors (chromesthesia); others perceive different words as distinct tastes (lexical-gustatory synesthesia).

Jackson, on the other hand, experiences his thoughts as shapes in a visual mindmap. This is so fascinating I’ve excerpted a short piece of his story below.

Also, if you are further intrigued, I recommend three great reads on the subject: Wednesday Is Indigo Blue: Discovering the Brain of Synesthesia by Richard E. Cytowic and David M. Eagleman; Musicophilia: Tales of Music and the Brain by Oliver Sacks; and The Man Who Tasted Shapes by Richard E. Cytowic.

From the Atlantic:

One spring evening in the mid 2000s, Jonathan Jackson and Andy Linscott sat on some seaside rocks near their college campus, smoking the kind of cigarettes reserved for heartbreak. Linscott was, by his own admission, “emotionally spewing” over a girl, and Jackson was consoling him.

Jackson had always been a particularly good listener. But in the middle of their talk, he did something Linscott found deeply odd.

“He got up and jumped over to this much higher rock,” Linscott says. “He was like, ‘Andy, I’m listening, I just want to get a different angle. I want to see what you’re saying and the shape of your words from a different perspective.’ I was baffled.”

For Jackson, moving physically to think differently about an idea seemed totally natural. “People say, ‘Okay, we need to think about this from a new angle’ all the time!” he says. “But for me that’s literal.”

Jackson has synesthesia, a neurological phenomenon that has long been defined as the co-activation of two or more conventionally unrelated senses. Some synesthetes see music (known as auditory-visual synesthesia) or read letters and numbers in specific hues (grapheme-color synesthesia). But recent research has complicated that definition, exploring where in the sensory process those overlaps start and opening up the term to include types of synesthesia in which senses interact in a much more complex manner.

Read the entire story here.

Image: Wednesday Is Indigo Blue book cover. Courtesy: Richard E. Cytowic and David M. Eagleman, MIT Press.


Pessimism About Positive Thinking

Many of us have grown up in a world that teaches and values the power of positive thinking. The mantra of positive thinkers goes something like this: think positively about yourself, your situation, your goals and you will be much more motivated and energized to fulfill your dreams.

By some accounts the self-improvement industry in the US alone weighs in with annual revenues of around $10 billion. So, positive thinking must work, right? Psychologists suggest that it’s really not that simple; singular focus on positivity may help us in the short-term, but over the longer-term it frustrates our motivations and hinders progress towards our goals.

In short, it pays to be in touch with the negatives as well, to embrace and understand obstacles, to learn from and challenge our setbacks. It is to our advantage to be a pragmatic dreamer, grounded in both the beauty and ugliness that surround us.

From aeon:

In her book The Secret Daily Teachings (2008), the self-help author Rhonda Byrne suggested that: ‘Whatever big thing you are asking for, consider having the celebration now as though you have received it.’

Yet research in psychology reveals a more complicated picture. Indulging in undirected positive flights of fancy isn’t always in our interest. Positive thinking can make us feel better in the short term, but over the long term it saps our motivation, preventing us from achieving our wishes and goals, and leaving us feeling frustrated, stymied and stuck. If we really want to move ahead in our lives, engage with the world and feel energised, we need to go beyond positive thinking and connect as well with the obstacles that stand in our way. By bringing our dreams into contact with reality, we can unleash our greatest energies and make the most progress in our lives.

Now, you might wonder if positive thinking is really as harmful as I’m suggesting. In fact, it is. In a number of studies over two decades, my colleagues and I have discovered a powerful link between positive thinking and poor performance. In one study, we asked college students who had a crush on someone from afar to tell us how likely they would be to strike up a relationship with that person. Then we asked them to complete some open-ended scenarios related to dating. ‘You are at a party,’ one scenario read. ‘While you are talking to [your crush], you see a girl/boy, whom you believe [your crush] might like, come into the room. As she/he approaches the two of you, you imagine…’

Some of the students completed the scenarios by spinning a tale of romantic success. ‘The two of us leave the party, everyone watches, especially the other girl.’ Others offered negative fantasies about love thwarted: ‘My crush and the other girl begin to converse about things which I know nothing. They seem to be much more comfortable with each other than he and I….’

We checked back with the students after five months to see if they had initiated a relationship with their crush. The more students had engaged in positive fantasies about the future, the less likely they were actually to have started up a romantic relationship.

My colleagues and I performed such studies with participants in a number of demographic groups, in different countries, and with a range of personal wishes, including health goals, academic and professional goals, and relationship goals. Consistently, we found a correlation between positive fantasies and poor performance. The more that people ‘think positive’ and imagine themselves achieving their goals, the less they actually achieve.

Positive thinking impedes performance because it relaxes us and drains the energy we need to take action. After having participants in one study positively fantasise about the future for as little as a few minutes, we observed declines in systolic blood pressure, a standard measure of a person’s energy level. These declines were significant: whereas smoking a cigarette will typically raise a person’s blood pressure by five or 10 points, engaging in positive fantasies lowers it by about half as much.

Read the entire article here.


Comfort, Texas, the Timeship and Technological Immortality


There’s a small town deep in the heart of Texas’ Hill Country called Comfort. It was founded in the mid-19th century by German immigrants. Its downtown area is held to be one of the best-preserved historic business districts in Texas. Now, just over 160 years on, there’s another preservation effort underway in Comfort.

This time, however, the work goes well beyond preserving buildings; Comfort may soon be the global hub for life-extension research and human cryopreservation. The ambitious, and not uncontroversial, project is known as the Timeship, and is the brainchild of architect Stephen Valentine and the Stasis Foundation.

Since one of the key aims of the Timeship is to preserve biological material — DNA, tissue and organ samples, and even cryopreserved humans — the building design presents some rather unique and stringent challenges. The building must withstand a nuclear blast or other attack; its electrical and mechanical systems must remain functional and stable for hundreds of years; it must be self-sustaining and highly secure.

Read more about the building and much more about the Timeship here.

Image: Timeship screenshot. Courtesy of Timeship.


Towards an Understanding of Consciousness


The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite the intricate philosophical essays of John Locke in 1690 that laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But, ideas such as these inch us towards a deeper understanding.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.


Your Brain on LSD


For the first time, researchers have peered inside the brain to study the real-time effect of the psychedelic drug LSD (lysergic acid diethylamide). Yes, neuroscientists scanned the brains of subjects who volunteered to take a trip inside an MRI scanner, all in the name of science.

While the researchers did not seem to document the detailed subjective experiences of their volunteers, the findings suggest that they were experiencing intense dreamlike visions, effectively “seeing with their eyes shut”. Under the influence of LSD many areas of the brain that are usually compartmentalized showed far greater interconnection and intense activity.

LSD was first synthesized in 1938. Its profound psychological properties were studied from the mid-1940s to the early sixties. The substance was later banned — worldwide — after its adoption as a recreational drug.

This new study was conducted by researchers from Imperial College London and The Beckley Foundation, which researches psychoactive substances.

From Guardian:

The profound impact of LSD on the brain has been laid bare by the first modern scans of people high on the drug.

The images, taken from volunteers who agreed to take a trip in the name of science, have given researchers an unprecedented insight into the neural basis for effects produced by one of the most powerful drugs ever created.

A dose of the psychedelic substance – injected rather than dropped – unleashed a wave of changes that altered activity and connectivity across the brain. This has led scientists to new theories of visual hallucinations and the sense of oneness with the universe some users report.

The brain scans revealed that trippers experienced images through information drawn from many parts of their brains, and not just the visual cortex at the back of the head that normally processes visual information. Under the drug, regions once segregated spoke to one another.

Further images showed that other brain regions that usually form a network became more separated in a change that accompanied users’ feelings of oneness with the world, a loss of personal identity called “ego dissolution”.

David Nutt, the government’s former drugs advisor, professor of neuropsychopharmacology at Imperial College London, and senior researcher on the study, said neuroscientists had waited 50 years for this moment. “This is to neuroscience what the Higgs boson was to particle physics,” he said. “We didn’t know how these profound effects were produced. It was too difficult to do. Scientists were either scared or couldn’t be bothered to overcome the enormous hurdles to get this done.”

Read the entire story here.

Image: Different sections of the brain, either on placebo, or under the influence of LSD (lots of orange). Courtesy: Imperial College/Beckley Foundation.


Searching for Signs of Life


Surely there is intelligent life somewhere in the universe. Cosmologists estimate that the observable universe contains around 1,000,000,000,000,000,000,000,000 planets. And, they calculate that our Milky Way galaxy alone contains around 100 billion planets that are hospitable to life (as we currently know it).
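For a sense of where those figures come from, here is a back-of-envelope multiplication. The inputs below are loose, factor-of-ten assumptions chosen for illustration (roughly two trillion galaxies, each with hundreds of billions of planets), not measurements:

```python
import math

# Order-of-magnitude sketch of the planet counts quoted above.
# Both inputs are rough assumptions, good to a factor of ten at best.
galaxies = 2e12             # galaxies in the observable universe (recent estimates)
planets_per_galaxy = 5e11   # planets per galaxy, counting all sizes

total_planets = galaxies * planets_per_galaxy
print(f"total planets ~ 10^{round(math.log10(total_planets))}")   # ~ 10^24

# The Milky Way alone, per the figure quoted in the text:
milky_way_hospitable = 100e9  # ~100 billion potentially hospitable planets
print(f"hospitable planets here ~ 10^{round(math.log10(milky_way_hospitable))}")   # ~ 10^11
```

The point of the exercise is only that any plausible choice of inputs lands within a few powers of ten of the same staggering total.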

These numbers boggle the mind and beg a question: how do we find evidence for life beyond our shores? The decades-long search for extraterrestrial intelligence (SETI) pioneered the use of radio telescope observations to look for alien signals from deep space. But, the process has remained rather rudimentary and narrowly focused. The good news now is that astronomers and astrobiologists have a growing toolkit of techniques that allow for much more sophisticated detection and analysis of the broader signals of life — not just potential radio transmissions from an advanced alien culture.

From Quanta:

Huddled in a coffee shop one drizzly Seattle morning six years ago, the astrobiologist Shawn Domagal-Goldman stared blankly at his laptop screen, paralyzed. He had been running a simulation of an evolving planet, when suddenly oxygen started accumulating in the virtual planet’s atmosphere. Up the concentration ticked, from 0 to 5 to 10 percent.

“Is something wrong?” his wife asked.

“Yeah.”

The rise of oxygen was bad news for the search for extraterrestrial life.

After millennia of wondering whether we’re alone in the universe — one of “mankind’s most profound and probably earliest questions beyond, ‘What are you going to have for dinner?’” as the NASA astrobiologist Lynn Rothschild put it — the hunt for life on other planets is now ramping up in a serious way. Thousands of exoplanets, or planets orbiting stars other than the sun, have been discovered in the past decade. Among them are potential super-Earths, sub-Neptunes, hot Jupiters and worlds such as Kepler-452b, a possibly rocky, watery “Earth cousin” located 1,400 light-years from here. Starting in 2018 with the expected launch of NASA’s James Webb Space Telescope, astronomers will be able to peer across the light-years and scope out the atmospheres of the most promising exoplanets. They will look for the presence of “biosignature gases,” vapors that could only be produced by alien life.

They’ll do this by observing the thin ring of starlight around an exoplanet while it is positioned in front of its parent star. Gases in the exoplanet’s atmosphere will absorb certain frequencies of the starlight, leaving telltale dips in the spectrum.
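
The spectral technique described here can be sketched with a toy model. The O2 band near 0.76 µm and the H2O band near 1.4 µm are real features, but the line widths and dip depths below are invented purely for illustration; a real retrieval involves radiative transfer, not Gaussian dips.

```python
# Toy transmission-spectroscopy model: during a transit, gases in the
# planet's atmosphere absorb starlight at characteristic wavelengths,
# leaving dips in the observed spectrum. Line centers are realistic
# (O2 near 0.76 um, H2O near 1.4 um); widths and depths are invented.
import math

def transmission_spectrum(wavelengths_um, lines):
    """Relative transmitted flux (1.0 = no absorption) at each
    wavelength, for absorbing lines given as (center, width, depth)."""
    spectrum = []
    for w in wavelengths_um:
        flux = 1.0
        for center, width, depth in lines:
            # Gaussian absorption profile for each line
            flux -= depth * math.exp(-((w - center) / width) ** 2)
        spectrum.append(flux)
    return spectrum

lines = [(0.76, 0.02, 0.3), (1.4, 0.10, 0.2)]       # O2-like, H2O-like
wavelengths = [0.5 + 0.01 * i for i in range(150)]  # 0.5-1.99 um grid
spectrum = transmission_spectrum(wavelengths, lines)

# The deepest dip sits at the strongest line's center
dip_wavelength = wavelengths[spectrum.index(min(spectrum))]
```

Spotting which wavelengths show dips, and how deep they are, is what lets astronomers infer which gases the atmosphere contains.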

As Domagal-Goldman, then a researcher at the University of Washington’s Virtual Planetary Laboratory (VPL), well knew, the gold standard in biosignature gases is oxygen. Not only is oxygen produced in abundance by Earth’s flora — and thus, possibly, other planets’ — but 50 years of conventional wisdom held that it could not be produced at detectable levels by geology or photochemistry alone, making it a forgery-proof signature of life. Oxygen filled the sky on Domagal-Goldman’s simulated world, however, not as a result of biological activity there, but because extreme solar radiation was stripping oxygen atoms off carbon dioxide molecules in the air faster than they could recombine. This biosignature could be forged after all.
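
Domagal-Goldman's false positive boils down to a rate imbalance: photolysis liberates oxygen faster than recombination removes it. A minimal sketch, with rates and units invented for illustration (his actual work used a full photochemical model), shows how O2 can then accumulate abiotically toward a steady state:

```python
# Toy rate model of an abiotic "oxygen false positive": stellar UV
# splits CO2 faster than the products recombine, so O2 accumulates
# without any biology. All rates and units are invented.

def simulate_o2(photolysis_rate, recombination_rate, steps=1000, dt=1.0):
    """Integrate dO2/dt = photolysis - recombination * O2 (simple
    Euler steps) and return the O2 fraction over time, starting at 0."""
    o2 = 0.0
    history = []
    for _ in range(steps):
        o2 += (photolysis_rate - recombination_rate * o2) * dt
        history.append(o2)
    return history

# Strong UV flux vs. weak UV flux, same recombination rate
strong = simulate_o2(photolysis_rate=0.01, recombination_rate=0.1)
weak = simulate_o2(photolysis_rate=0.001, recombination_rate=0.1)

# O2 approaches the steady state photolysis_rate / recombination_rate:
# 10% for the strongly irradiated world, 1% for the weakly irradiated one
```

The punchline is that the steady-state oxygen level depends only on the ratio of the two rates, so a planet around a UV-bright star can show substantial oxygen with no life at all.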

The search for biosignature gases around faraway exoplanets “is an inherently messy problem,” said Victoria Meadows, an Australian powerhouse who heads VPL. In the years since Domagal-Goldman’s discovery, Meadows has charged her team of 75 with identifying the major “oxygen false positives” that can arise on exoplanets, as well as ways to distinguish these false alarms from true oxygenic signs of biological activity. Meadows still thinks oxygen is the best biosignature gas. But, she said, “if I’m going to look for this, I want to make sure that when I see it, I know what I’m seeing.”

Meanwhile, Sara Seager, a dogged hunter of “twin Earths” at the Massachusetts Institute of Technology who is widely credited with inventing the spectral technique for analyzing exoplanet atmospheres, is pushing research on biosignature gases in a different direction. Seager acknowledges that oxygen is promising, but she urges the astrobiology community to be less terra-centric in its view of how alien life might operate — to think beyond Earth’s geochemistry and the particular air we breathe. “My view is that we do not want to leave a single stone unturned; we need to consider everything,” she said.

As future telescopes widen the survey of Earth-like worlds, it’s only a matter of time before a potential biosignature gas is detected in a faraway sky. It will look like the discovery of all time: evidence that we are not alone. But how will we know for sure?

Read the entire article here.

Image: Artist’s Impression of Gliese 581 c, the first terrestrial extrasolar planet discovered within its star’s habitable zone. Courtesy: Hervé Piraud, Latitude0116, Xhienne. Creative Commons Attribution 2.5.

Deconstructing Schizophrenia

Genetic and biomedical researchers have made yet another tremendous breakthrough by analyzing the human genome. This time a group of scientists from Harvard Medical School, Boston Children’s Hospital and the Broad Institute has identified key genetic markers and biological pathways that underlie schizophrenia.

In the US alone the psychiatric disorder affects around 2 million people. Its symptoms usually include hallucinations, delusional thinking and paranoia. While a number of drugs treat its symptoms, and psychotherapy can address milder forms, nothing yet addresses its underlying cause(s). Hence the excitement.

From NYT:

Scientists reported on Wednesday that they had taken a significant step toward understanding the cause of schizophrenia, in a landmark study that provides the first rigorously tested insight into the biology behind any common psychiatric disorder.

More than two million Americans have a diagnosis of schizophrenia, which is characterized by delusional thinking and hallucinations. The drugs available to treat it blunt some of its symptoms but do not touch the underlying cause.

The finding, published in the journal Nature, will not lead to new treatments soon, experts said, nor to widely available testing for individual risk. But the results provide researchers with their first biological handle on an ancient disorder whose cause has confounded modern science for generations. The finding also helps explain some other mysteries, including why the disorder often begins in adolescence or young adulthood.

“They did a phenomenal job,” said David B. Goldstein, a professor of genetics at Columbia University who has been critical of previous large-scale projects focused on the genetics of psychiatric disorders. “This paper gives us a foothold, something we can work on, and that’s what we’ve been looking for now, for a long, long time.”

The researchers pieced together the steps by which genes can increase a person’s risk of developing schizophrenia. That risk, they found, is tied to a natural process called synaptic pruning, in which the brain sheds weak or redundant connections between neurons as it matures. During adolescence and early adulthood, this activity takes place primarily in the section of the brain where thinking and planning skills are centered, known as the prefrontal cortex. People who carry genes that accelerate or intensify that pruning are at higher risk of developing schizophrenia than those who do not, the new study suggests.

Some researchers had suspected that the pruning must somehow go awry in people with schizophrenia, because previous studies showed that their prefrontal areas tended to have a diminished number of neural connections, compared with those of unaffected people. The new paper not only strongly supports that this is the case, but also describes how the pruning probably goes wrong and why, and identifies the genes responsible: People with schizophrenia have a gene variant that apparently facilitates aggressive “tagging” of connections for pruning, in effect accelerating the process.

The research team began by focusing on a location on the human genome, the MHC, which was most strongly associated with schizophrenia in previous genetic studies. On a bar graph — called a Manhattan plot because it looks like a cluster of skyscrapers — the MHC looms highest.

Using advanced statistical methods, the team found that the MHC locus contained four common variants of a gene called C4, and that those variants produced two kinds of proteins, C4-A and C4-B.

The team analyzed the genomes of more than 64,000 people and found that people with schizophrenia were more likely to have the overactive forms of C4-A than control subjects. “C4-A seemed to be the gene driving risk for schizophrenia,” Dr. McCarroll said, “but we had to be sure.”
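
The case-control logic behind such a finding can be illustrated with a back-of-the-envelope odds ratio. All counts below are invented; the actual C4 analysis fit far more sophisticated statistical models to the 64,000+ genomes:

```python
# Toy case-control association: is a gene variant more common among
# people with a diagnosis than among controls? Counts are invented;
# real genome-wide studies model dosage, ancestry, and much more.
import math

def odds_ratio(case_carriers, case_total, control_carriers, control_total):
    """Odds of carrying the variant among cases divided by the odds
    among controls; values above 1.0 suggest the variant raises risk."""
    case_odds = case_carriers / (case_total - case_carriers)
    control_odds = control_carriers / (control_total - control_carriers)
    return case_odds / control_odds

# Hypothetical counts: variant in 300/1000 cases vs. 200/1000 controls
or_value = odds_ratio(300, 1000, 200, 1000)

# Manhattan plots chart -log10(p) per locus, so smaller p-values tower
# higher; the conventional genome-wide significance threshold is 5e-8
threshold_height = -math.log10(5e-8)
```

An odds ratio above 1.0, replicated across tens of thousands of genomes with a p-value clearing that threshold, is what makes a locus like the MHC "loom highest" on the plot.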

Read the entire article here.

Fictionalism of Free Will and Morality

In a recent opinion column, William Irwin, professor of philosophy at King’s College, summarizes an approach of accepting the notion of free will rather than believing in it. While I’d eventually like to see free will and morality explained in biological and chemical terms, beyond metaphysics, for the time being I will (or may, if free will does not exist) have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being. Rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious mind, which our higher reasoning functions then rationalize post factum in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.

From the NYT:

Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.

Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.

The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher, Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.

Following a fictionalist account of morality would mean that we accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.

Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.

Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.

William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.

Read the entire article here.

Human Bloatware

Most software engineers and IT people are familiar with the term “bloatware”. The word is usually applied to a software application that takes up so much disk space and/or memory that its functional benefits are greatly diminished or rendered useless. Operating systems such as Windows and OS X are often characterized as bloatware — each new version seems to demand ever more disk space (and memory) to accommodate an expanding array of new (often trivial) features of marginal added benefit.

DNA_Structure

But it seems that we did not invent such bloat with our technology. Rather, a new genetic analysis suggests that humans (and other animals) are themselves built from biological bloatware, through a process that began when molecules of DNA first assembled the genes of the earliest living organisms.

From ars technica:

Eukaryotes like us are more complex than prokaryotes. We have cells with lots of internal structures, larger genomes with more genes, and our genes are more complex. Since there seems to be no apparent evolutionary advantage to this complexity—evolutionary advantage being defined as fitness, not as things like consciousness or sex—evolutionary biologists have spent much time and energy puzzling over how it came to be.

In 2010, Nick Lane and William Martin suggested that because they don’t have mitochondria, prokaryotes just can’t generate enough energy to maintain large genomes. Thus it was the acquisition of mitochondria and their ability to generate cellular energy that allowed eukaryotic genomes to expand. And with the expansion came the many different types of genes that render us so complex and diverse.

Michael Lynch and Georgi Marinov are now proposing a counter offer. They analyzed the bioenergetic costs of a gene and concluded that there is in fact no energetic barrier to genetic complexity. Rather, eukaryotes can afford bigger genomes simply because they have bigger cells.

First they looked at the lifetime energetic requirements of a cell, defined as the number of times that cell hydrolyzes ATP into ADP, a reaction that powers most cellular processes. This energy requirement rose linearly and smoothly with cell size from bacteria to eukaryotes with no break between them, suggesting that complexity alone, independently of cell volume, requires no more energy.

Then they calculated the cumulative cost of a gene—how much energy it takes to replicate it once per cell cycle, how much energy it takes to transcribe it into mRNA, and how much energy it takes to then translate that mRNA transcript into a functional protein. Genes may provide selective advantages, but those must be sufficient to overcome and justify these energetic costs.

At the levels of replication (copying the DNA) and transcription (making an RNA copy), eukaryotic genes are more costly than prokaryotic genes because they’re bigger and require more processing. But even though these costs are higher, they take up proportionally less of the total energy budget of the cell. That’s because bigger cells take more energy to operate in general (as we saw just above), while things like copying DNA happen only once per cell division. Bigger cells help here, too, as they divide less often.
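
Lynch and Marinov's fractional-cost argument can be sketched numerically. Every number below is invented for illustration; only the scaling matters, given the paper's observation that a cell's lifetime energy budget grows roughly linearly with its volume:

```python
# Toy version of the per-gene cost argument: a eukaryotic gene costs
# more ATP in absolute terms, but a bigger cell has a proportionally
# bigger total energy budget, so the gene's *fractional* cost shrinks.
# All numbers are invented for illustration.

def fractional_gene_cost(gene_cost_atp, cell_volume_um3, budget_per_um3=1e9):
    """Gene's share of the cell's lifetime ATP budget, assuming the
    budget scales linearly with cell volume (per the paper's data)."""
    return gene_cost_atp / (cell_volume_um3 * budget_per_um3)

# Small prokaryotic cell, cheap gene
prokaryote_share = fractional_gene_cost(gene_cost_atp=1e6, cell_volume_um3=1.0)

# Eukaryotic cell 1000x larger; its gene costs 10x more in absolute ATP...
eukaryote_share = fractional_gene_cost(gene_cost_atp=1e7, cell_volume_um3=1000.0)

# ...yet consumes a ~100x smaller fraction of the (much larger) budget
```

Under these made-up numbers the eukaryotic gene is ten times costlier in absolute terms yet a hundred times cheaper as a fraction of the budget, which is the sense in which bigger cells "can afford" bigger genomes.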

Read the entire article here.
