Tag Archives: DNA

Deconstructing Schizophrenia

Genetic and biomedical researchers have made yet another tremendous breakthrough in analyzing the human genome. This time a group of scientists from Harvard Medical School, Boston Children’s Hospital and the Broad Institute has identified key genetic markers and biological pathways that underlie schizophrenia.

In the US alone the psychiatric disorder affects around 2 million people. Symptoms of schizophrenia usually include hallucinations, delusional thinking and paranoia. While a number of drugs are used to treat its symptoms, and psychotherapy can address milder forms, nothing as yet has been able to address its underlying cause(s). Hence the excitement.

From NYT:

Scientists reported on Wednesday that they had taken a significant step toward understanding the cause of schizophrenia, in a landmark study that provides the first rigorously tested insight into the biology behind any common psychiatric disorder.

More than two million Americans have a diagnosis of schizophrenia, which is characterized by delusional thinking and hallucinations. The drugs available to treat it blunt some of its symptoms but do not touch the underlying cause.

The finding, published in the journal Nature, will not lead to new treatments soon, experts said, nor to widely available testing for individual risk. But the results provide researchers with their first biological handle on an ancient disorder whose cause has confounded modern science for generations. The finding also helps explain some other mysteries, including why the disorder often begins in adolescence or young adulthood.

“They did a phenomenal job,” said David B. Goldstein, a professor of genetics at Columbia University who has been critical of previous large-scale projects focused on the genetics of psychiatric disorders. “This paper gives us a foothold, something we can work on, and that’s what we’ve been looking for now, for a long, long time.”

The researchers pieced together the steps by which genes can increase a person’s risk of developing schizophrenia. That risk, they found, is tied to a natural process called synaptic pruning, in which the brain sheds weak or redundant connections between neurons as it matures. During adolescence and early adulthood, this activity takes place primarily in the section of the brain where thinking and planning skills are centered, known as the prefrontal cortex. People who carry genes that accelerate or intensify that pruning are at higher risk of developing schizophrenia than those who do not, the new study suggests.

Some researchers had suspected that the pruning must somehow go awry in people with schizophrenia, because previous studies showed that their prefrontal areas tended to have a diminished number of neural connections, compared with those of unaffected people. The new paper not only strongly supports that this is the case, but also describes how the pruning probably goes wrong and why, and identifies the genes responsible: People with schizophrenia have a gene variant that apparently facilitates aggressive “tagging” of connections for pruning, in effect accelerating the process.

The research team began by focusing on a location on the human genome, the MHC, which was most strongly associated with schizophrenia in previous genetic studies. On a bar graph — called a Manhattan plot because it looks like a cluster of skyscrapers — the MHC looms highest.
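
As an aside for the data-minded: a Manhattan plot is nothing exotic. Here is a minimal sketch of how one is drawn from GWAS summary statistics, assuming a table with chrom, pos and pval columns (hypothetical names and data, not from the study itself).

```python
# Minimal sketch: drawing a Manhattan plot from GWAS summary statistics.
# Column names and the demo data are illustrative placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def manhattan_plot(df: pd.DataFrame) -> None:
    df = df.sort_values(["chrom", "pos"])
    offset = 0
    for chrom, grp in df.groupby("chrom", sort=False):
        x = grp["pos"] + offset
        y = -np.log10(grp["pval"])          # tall "skyscrapers" = strongest associations
        plt.scatter(x, y, s=2)
        offset += grp["pos"].max()
    plt.axhline(-np.log10(5e-8), ls="--")   # conventional genome-wide threshold
    plt.xlabel("genomic position")
    plt.ylabel("-log10(p)")
    plt.show()

demo = pd.DataFrame({"chrom": [1] * 500 + [2] * 500,
                     "pos": list(range(500)) * 2,
                     "pval": np.random.default_rng(1).uniform(size=1000)})
manhattan_plot(demo)
```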

Using advanced statistical methods, the team found that the MHC locus contained four common variants of a gene called C4, and that those variants produced two kinds of proteins, C4-A and C4-B.

The team analyzed the genomes of more than 64,000 people and found that people with schizophrenia were more likely to have the overactive forms of C4-A than control subjects. “C4-A seemed to be the gene driving risk for schizophrenia,” Dr. McCarroll said, “but we had to be sure.”
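
The underlying comparison is a classic case/control association test. A hedged sketch, with placeholder counts rather than the paper's actual figures:

```python
# Sketch of the kind of case/control comparison described above: are
# carriers of a C4-A-like variant over-represented among cases?
# The counts below are hypothetical placeholders, not figures from the paper.
from scipy.stats import chi2_contingency

#               carrier  non-carrier
table = [[12000, 18000],    # cases    (hypothetical)
         [ 9000, 25000]]    # controls (hypothetical)

chi2, pval, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"odds ratio = {odds_ratio:.2f}, p = {pval:.2e}")
```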

Read the entire article here.

Human Bloatware

Most software engineers and IT people are familiar with the term “bloatware”. The word is usually applied to a software application that takes up so much disk space and/or memory that its functional benefits are greatly diminished or rendered useless. Operating systems such as Windows and OS X are often characterized as bloatware — a newer version always seems to demand ever more disk space (and memory) to accommodate an expanding array of new (often trivial) features of marginal added benefit.


But it seems that humans did not invent such bloat with our technology. Rather, a new genetic analysis shows that humans (and other animals) are themselves built from biological bloatware, through a process that began when molecules of DNA first assembled the genes of the earliest living organisms.

From ars technica:

Eukaryotes like us are more complex than prokaryotes. We have cells with lots of internal structures, larger genomes with more genes, and our genes are more complex. Since there seems to be no apparent evolutionary advantage to this complexity—evolutionary advantage being defined as fitness, not as things like consciousness or sex—evolutionary biologists have spent much time and energy puzzling over how it came to be.

In 2010, Nick Lane and William Martin suggested that because they don’t have mitochondria, prokaryotes just can’t generate enough energy to maintain large genomes. Thus it was the acquisition of mitochondria and their ability to generate cellular energy that allowed eukaryotic genomes to expand. And with the expansion came the many different types of genes that render us so complex and diverse.

Michael Lynch and Georgi Marinov are now proposing a counter-offer. They analyzed the bioenergetic costs of a gene and concluded that there is in fact no energetic barrier to genetic complexity. Rather, eukaryotes can afford bigger genomes simply because they have bigger cells.

First they looked at the lifetime energetic requirements of a cell, defined as the number of times that cell hydrolyzes ATP into ADP, a reaction that powers most cellular processes. This energy requirement rose linearly and smoothly with cell size from bacteria to eukaryotes with no break between them, suggesting that complexity alone, independently of cell volume, requires no more energy.

Then they calculated the cumulative cost of a gene—how much energy it takes to replicate it once per cell cycle, how much energy it takes to transcribe it into mRNA, and how much energy it takes to then translate that mRNA transcript into a functional protein. Genes may provide selective advantages, but those must be sufficient to overcome and justify these energetic costs.
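
That cost decomposition is easy to mirror in code. This is a toy sketch with illustrative ATP figures, not Lynch and Marinov's actual parameter values:

```python
# Toy version of the decomposition described above: the lifetime ATP
# cost of one gene is the sum of replication, transcription, and
# translation terms. All numeric inputs are illustrative placeholders.
def gene_cost_atp(gene_length_bp: int,
                  mrna_copies: int,
                  protein_copies: int,
                  atp_per_bp_replicated: float = 100.0,
                  atp_per_nt_transcribed: float = 2.0,
                  atp_per_aa_translated: float = 4.0) -> float:
    replication = gene_length_bp * atp_per_bp_replicated        # once per division
    transcription = mrna_copies * gene_length_bp * atp_per_nt_transcribed
    translation = protein_copies * (gene_length_bp / 3) * atp_per_aa_translated
    return replication + transcription + translation

print(f"{gene_cost_atp(1500, mrna_copies=10, protein_copies=1000):,.0f} ATP")
```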

At the levels of replication (copying the DNA) and transcription (making an RNA copy), eukaryotic genes are more costly than prokaryotic genes because they’re bigger and require more processing. But even though these costs are higher, they take up proportionally less of the total energy budget of the cell. That’s because bigger cells take more energy to operate in general (as we saw just above), while things like copying DNA only happen once per cell division. Bigger cells help here, too, as they divide less often.

Read the entire article here.

Crispr – Designer DNA

The world welcomed basic genetic engineering in the mid-1970s, when biotech pioneers Herbert Boyer and Stanley Cohen transferred DNA from one organism to another (bacteria). In so doing they created the first genetically modified organism (GMO). A mere forty years later we now have extremely powerful and accessible (cheap) biochemical tools for tinkering with the molecules of heredity. One of these tools, known as Crispr-Cas9, makes it easy and fast to move any genes around, within and across any species.

The technique promises immense progress in the fight against inherited illness, cancer and viral infection. It also opens the door to untold manipulation of DNA in lower organisms and plants to develop an infection-resistant and faster-growing food supply, and to reimagine a whole host of biochemical and industrial processes (such as ethanol production).

Yet as is the case with many technological advances that hold great promise, tremendous peril lies ahead from this next revolution. Our bioengineering prowess has yet to be matched with a sound and pervasive ethical framework. Can humans reach a consensus on how to shape, focus and limit the application of such techniques? And, equally importantly, can we enforce these bioethical constraints before it’s too late to “uninvent” designer babies and bioweapons?

From Wired:

Spiny grass and scraggly pines creep amid the arts-and-crafts buildings of the Asilomar Conference Grounds, 100 acres of dune where California’s Monterey Peninsula hammerheads into the Pacific. It’s a rugged landscape, designed to inspire people to contemplate their evolving place on Earth. So it was natural that 140 scientists gathered here in 1975 for an unprecedented conference.

They were worried about what people called “recombinant DNA,” the manipulation of the source code of life. It had been just 22 years since James Watson, Francis Crick, and Rosalind Franklin described what DNA was—deoxyribonucleic acid, four different structures called bases stuck to a backbone of sugar and phosphate, in sequences thousands of bases long. DNA is what genes are made of, and genes are the basis of heredity.

Preeminent genetic researchers like David Baltimore, then at MIT, went to Asilomar to grapple with the implications of being able to decrypt and reorder genes. It was a God-like power—to plug genes from one living thing into another. Used wisely, it had the potential to save millions of lives. But the scientists also knew their creations might slip out of their control. They wanted to consider what ought to be off-limits.

By 1975, other fields of science—like physics—were subject to broad restrictions. Hardly anyone was allowed to work on atomic bombs, say. But biology was different. Biologists still let the winding road of research guide their steps. On occasion, regulatory bodies had acted retrospectively—after Nuremberg, Tuskegee, and the human radiation experiments, external enforcement entities had told biologists they weren’t allowed to do that bad thing again. Asilomar, though, was about establishing prospective guidelines, a remarkably open and forward-thinking move.

At the end of the meeting, Baltimore and four other molecular biologists stayed up all night writing a consensus statement. They laid out ways to isolate potentially dangerous experiments and determined that cloning or otherwise messing with dangerous pathogens should be off-limits. A few attendees fretted about the idea of modifications of the human “germ line”—changes that would be passed on from one generation to the next—but most thought that was so far off as to be unrealistic. Engineering microbes was hard enough. The rules the Asilomar scientists hoped biology would follow didn’t look much further ahead than ideas and proposals already on their desks.

Earlier this year, Baltimore joined 17 other researchers for another California conference, this one at the Carneros Inn in Napa Valley. “It was a feeling of déjà vu,” Baltimore says. There he was again, gathered with some of the smartest scientists on earth to talk about the implications of genome engineering.

The stakes, however, have changed. Everyone at the Napa meeting had access to a gene-editing technique called Crispr-Cas9. The first term is an acronym for “clustered regularly interspaced short palindromic repeats,” a description of the genetic basis of the method; Cas9 is the name of a protein that makes it work. Technical details aside, Crispr-Cas9 makes it easy, cheap, and fast to move genes around—any genes, in any living thing, from bacteria to people. “These are monumental moments in the history of biomedical research,” Baltimore says. “They don’t happen every day.”

Using the three-year-old technique, researchers have already reversed mutations that cause blindness, stopped cancer cells from multiplying, and made cells impervious to the virus that causes AIDS. Agronomists have rendered wheat invulnerable to killer fungi like powdery mildew, hinting at engineered staple crops that can feed a population of 9 billion on an ever-warmer planet. Bioengineers have used Crispr to alter the DNA of yeast so that it consumes plant matter and excretes ethanol, promising an end to reliance on petrochemicals. Startups devoted to Crispr have launched. International pharmaceutical and agricultural companies have spun up Crispr R&D. Two of the most powerful universities in the US are engaged in a vicious war over the basic patent. Depending on what kind of person you are, Crispr makes you see a gleaming world of the future, a Nobel medallion, or dollar signs.

The technique is revolutionary, and like all revolutions, it’s perilous. Crispr goes well beyond anything the Asilomar conference discussed. It could at last allow genetics researchers to conjure everything anyone has ever worried they would—designer babies, invasive mutants, species-specific bioweapons, and a dozen other apocalyptic sci-fi tropes. It brings with it all-new rules for the practice of research in the life sciences. But no one knows what the rules are—or who will be the first to break them.

In a way, humans were genetic engineers long before anyone knew what a gene was. They could give living things new traits—sweeter kernels of corn, flatter bulldog faces—through selective breeding. But it took time, and it didn’t always pan out. By the 1930s refining nature got faster. Scientists bombarded seeds and insect eggs with x-rays, causing mutations to scatter through genomes like shrapnel. If one of hundreds of irradiated plants or insects grew up with the traits scientists desired, they bred it and tossed the rest. That’s where red grapefruits came from, and most barley for modern beer.

Genome modification has become less of a crapshoot. In 2002, molecular biologists learned to delete or replace specific genes using enzymes called zinc-finger nucleases; the next-generation technique used enzymes named TALENs.

Yet the procedures were expensive and complicated. They only worked on organisms whose molecular innards had been thoroughly dissected—like mice or fruit flies. Genome engineers went on the hunt for something better.

As it happened, the people who found it weren’t genome engineers at all. They were basic researchers, trying to unravel the origin of life by sequencing the genomes of ancient bacteria and microbes called Archaea (as in archaic), descendants of the first life on Earth. Deep amid the bases, the As, Ts, Gs, and Cs that made up those DNA sequences, microbiologists noticed recurring segments that were the same back to front and front to back—palindromes. The researchers didn’t know what these segments did, but they knew they were weird. In a branding exercise only scientists could love, they named these clusters of repeating palindromes Crispr.
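
For the curious, spotting such palindromes is a standard bioinformatics exercise. In molecular biology a palindrome is a sequence equal to its own reverse complement, which is what the loose "back to front and front to back" description refers to. A minimal sketch:

```python
# Sketch: scan a DNA string for palindromic segments, in the
# molecular-biology sense (a sequence equal to its own reverse complement).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def find_palindromes(genome: str, length: int = 8) -> list[tuple[int, str]]:
    hits = []
    for i in range(len(genome) - length + 1):
        window = genome[i:i + length]
        if window == reverse_complement(window):
            hits.append((i, window))
    return hits

print(find_palindromes("TTGAATTCTTACGCGTAT"))   # -> [(9, 'TACGCGTA')]
```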

Then, in 2005, a microbiologist named Rodolphe Barrangou, working at a Danish food company called Danisco, spotted some of those same palindromic repeats in Streptococcus thermophilus, the bacteria that the company uses to make yogurt and cheese. Barrangou and his colleagues discovered that the unidentified stretches of DNA between Crispr’s palindromes matched sequences from viruses that had infected their S. thermophilus colonies. Like most living things, bacteria get attacked by viruses—in this case they’re called bacteriophages, or phages for short. Barrangou’s team went on to show that the segments served an important role in the bacteria’s defense against the phages, a sort of immunological memory. If a phage infected a microbe whose Crispr carried its fingerprint, the bacteria could recognize the phage and fight back. Barrangou and his colleagues realized they could save their company some money by selecting S. thermophilus species with Crispr sequences that resisted common dairy viruses.

As more researchers sequenced more bacteria, they found Crisprs again and again—half of all bacteria had them. Most Archaea did too. And even stranger, some of Crispr’s sequences didn’t encode the eventual manufacture of a protein, as is typical of a gene, but instead led to RNA—single-stranded genetic material. (DNA, of course, is double-stranded.)

That pointed to a new hypothesis. Most present-day animals and plants defend themselves against viruses with structures made out of RNA. So a few researchers started to wonder if Crispr was a primordial immune system. Among the people working on that idea was Jill Banfield, a geomicrobiologist at UC Berkeley, who had found Crispr sequences in microbes she collected from acidic, 110-degree water from the defunct Iron Mountain Mine in Shasta County, California. But to figure out if she was right, she needed help.

Luckily, one of the country’s best-known RNA experts, a biochemist named Jennifer Doudna, worked on the other side of campus in an office with a view of the Bay and San Francisco’s skyline. It certainly wasn’t what Doudna had imagined for herself as a girl growing up on the Big Island of Hawaii. She simply liked math and chemistry—an affinity that took her to Harvard and then to a postdoc at the University of Colorado. That’s where she made her initial important discoveries, revealing the three-dimensional structure of complex RNA molecules that could, like enzymes, catalyze chemical reactions.

The mine bacteria piqued Doudna’s curiosity, but when Doudna pried Crispr apart, she didn’t see anything to suggest the bacterial immune system was related to the one plants and animals use. Still, she thought the system might be adapted for diagnostic tests.

Banfield wasn’t the only person to ask Doudna for help with a Crispr project. In 2011, Doudna was at an American Society for Microbiology meeting in San Juan, Puerto Rico, when an intense, dark-haired French scientist asked her if she wouldn’t mind stepping outside the conference hall for a chat. This was Emmanuelle Charpentier, a microbiologist at Umeå University in Sweden.

As they wandered through the alleyways of old San Juan, Charpentier explained that one of Crispr’s associated proteins, named Csn1, appeared to be extraordinary. It seemed to search for specific DNA sequences in viruses and cut them apart like a microscopic multitool. Charpentier asked Doudna to help her figure out how it worked. “Somehow the way she said it, I literally—I can almost feel it now—I had this chill down my back,” Doudna says. “When she said ‘the mysterious Csn1’ I just had this feeling, there is going to be something good here.”

Read the whole story here.

Chromosomal Chronometer

Researchers have found possible evidence of a DNA-based mechanism that keeps track of age. It is too early to tell whether changes over time in specific elements of our chromosomes cause aging or are a consequence of it. Yet this is a tantalizing discovery that bodes well for a better understanding of the genetic and biological systems that underlie the aging process.

From the Guardian:

A US scientist has discovered an internal body clock based on DNA that measures the biological age of our tissues and organs.

The clock shows that while many healthy tissues age at the same rate as the body as a whole, some of them age much faster or slower. The age of diseased organs varied hugely, with some many tens of years “older” than healthy tissue in the same person, according to the clock.

Researchers say that unravelling the mechanisms behind the clock will help them understand the ageing process and hopefully lead to drugs and other interventions that slow it down.

Therapies that counteract natural ageing are attracting huge interest from scientists because they target the single most important risk factor for scores of incurable diseases that strike in old age.

“Ultimately, it would be very exciting to develop therapy interventions to reset the clock and hopefully keep us young,” said Steve Horvath, professor of genetics and biostatistics at the University of California in Los Angeles.

Horvath looked at the DNA of nearly 8,000 samples of 51 different healthy and cancerous cells and tissues. Specifically, he looked at how methylation, a natural process that chemically modifies DNA, varied with age.

Horvath found that the methylation of 353 DNA markers varied consistently with age and could be used as a biological clock. The clock ticked fastest in the years up to around age 20, then slowed down to a steadier rate. Whether the DNA changes cause ageing or are caused by ageing is an unknown that scientists are now keen to work out.
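
Clocks of this kind are typically fitted by penalized regression of methylation levels against chronological age, which is how a sparse set of markers like Horvath's 353 falls out of hundreds of thousands of candidate sites. A sketch with stand-in data (the matrix and ages below are random placeholders, not the study's samples):

```python
# Sketch of how an epigenetic clock of this kind is typically fitted:
# penalized regression of methylation against chronological age.
# X is an (n_samples x n_CpG_sites) matrix of methylation beta values;
# both X and `ages` are random stand-ins for real data.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 5000))     # stand-in methylation data
ages = rng.uniform(0, 90, size=200)         # stand-in chronological ages

clock = ElasticNetCV(cv=5)                  # picks the penalty by cross-validation
clock.fit(X, ages)
n_markers = np.count_nonzero(clock.coef_)   # with real data, a few hundred CpG
print(f"clock uses {n_markers} markers;",   # sites survive (Horvath found 353)
      "predicted age of sample 0:", clock.predict(X[:1])[0])
```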

“Does this relate to something that keeps track of age, or is a consequence of age? I really don’t know,” Horvath told the Guardian. “The development of grey hair is a marker of ageing, but nobody would say it causes ageing,” he said.

The clock has already revealed some intriguing results. Tests on healthy heart tissue showed that its biological age – how worn out it appears to be – was around nine years younger than expected. Female breast tissue aged faster than the rest of the body, on average appearing two years older.

Diseased tissues also aged at different rates, with cancers speeding up the clock by an average of 36 years. Some brain cancer tissues taken from children had a biological age of more than 80 years.

“Female breast tissue, even healthy tissue, seems to be older than other tissues of the human body. That’s interesting in the light that breast cancer is the most common cancer in women. Also, age is one of the primary risk factors of cancer, so these types of results could explain why cancer of the breast is so common,” Horvath said.

Healthy tissue surrounding a breast tumour was on average 12 years older than the rest of the woman’s body, the scientist’s tests revealed.

Writing in the journal Genome Biology, Horvath showed that the biological clock was reset to zero when cells plucked from an adult were reprogrammed back to a stem-cell-like state. The process for converting adult cells into stem cells, which can grow into any tissue in the body, won the Nobel prize in 2012 for Sir John Gurdon at Cambridge University and Shinya Yamanaka at Kyoto University.

“It provides a proof of concept that one can reset the clock,” said Horvath. The scientist now wants to run tests to see how neurodegenerative and infectious diseases affect, or are affected by, the biological clock.

Read the entire article here.

Image: Artist’s rendition of a DNA fragment. Courtesy of Zoonar GmbH/Alamy.

Of Mice and Men

Biomolecular and genetic engineering continue apace. This time researchers have inserted artificially constructed human chromosomes into the cells of living mice.

From the Independent:

Scientists have created genetically-engineered mice with artificial human chromosomes in every cell of their bodies, as part of a series of studies showing that it may be possible to treat genetic diseases with a radically new form of gene therapy.

In one of the unpublished studies, researchers made a human artificial chromosome in the laboratory from chemical building blocks rather than chipping away at an existing human chromosome, indicating the increasingly powerful technology behind the new field of synthetic biology.

The development comes as the Government announces today that it will invest tens of millions of pounds in synthetic biology research in Britain, including an international project to construct all the 16 individual chromosomes of the yeast fungus in order to produce the first synthetic organism with a complex genome.

A synthetic yeast with man-made chromosomes could eventually be used as a platform for making new kinds of biological materials, such as antibiotics or vaccines, while human artificial chromosomes could be used to introduce healthy copies of genes into the diseased organs or tissues of people with genetic illnesses, scientists said.

Researchers involved in the synthetic yeast project emphasised at a briefing in London earlier this week that there are no plans to build human chromosomes and create synthetic human cells in the same way as the artificial yeast project. A project to build human artificial chromosomes is unlikely to win ethical approval in the UK, they said.

However, researchers in the US and Japan are already well advanced in making “mini” human chromosomes called HACs (human artificial chromosomes), by either paring down an existing human chromosome or making them “de novo” in the lab from smaller chemical building blocks.

Natalay Kouprina of the US National Cancer Institute in Bethesda, Maryland, is part of the team that has successfully produced genetically engineered mice with an extra human artificial chromosome in their cells. It is the first time such an advanced form of a synthetic human chromosome made “from scratch” has been shown to work in an animal model, Dr Kouprina said.

“The purpose of developing the human artificial chromosome project is to create a shuttle vector for gene delivery into human cells to study gene function in human cells,” she told The Independent. “Potentially it has applications for gene therapy, for correction of gene deficiency in humans. It is known that there are lots of hereditary diseases due to the mutation of certain genes.”

Read the entire article here.

Image courtesy of Science Daily.

Law, Common Sense and Your DNA

Paradoxically, the law and common sense often seem to be at odds. Justice may still be blind, at least in most open democracies, but there is no question as to the stupidity of much of our law.

Some examples: in Missouri it’s illegal to drive with an uncaged bear in the car; in Maine, it’s illegal to keep Christmas decorations up after January 14th; in New Jersey, it’s illegal to wear a bulletproof vest while committing murder; in Connecticut, a pickle is not an official, legal pickle unless it can bounce; in Louisiana, you can be fined $500 for having a pizza delivered to a friend without their knowledge.

So, today we celebrate a victory for common sense and justice over thoroughly ill-conceived and badly written law: the U.S. Supreme Court has unanimously ruled that corporations cannot patent human genes.

Unfortunately, given the extremely high financial stakes, this is not likely to be the last we hear of big business seeking to patent or control the building blocks of life.

From the WSJ:

The Supreme Court unanimously ruled Thursday that human genes isolated from the body can’t be patented, a victory for doctors and patients who argued that such patents interfere with scientific research and the practice of medicine.

The court was handing down one of its most significant rulings in the age of molecular medicine, deciding who may own the fundamental building blocks of life.

The case involved Myriad Genetics Inc., which holds patents related to two genes, known as BRCA1 and BRCA2, that can indicate whether a woman has a heightened risk of developing breast cancer or ovarian cancer.

Justice Clarence Thomas, writing for the court, said the genes Myriad isolated are products of nature, which aren’t eligible for patents.

“Myriad did not create anything,” Justice Thomas wrote in an 18-page opinion. “To be sure, it found an important and useful gene, but separating that gene from its surrounding genetic material is not an act of invention.”

Even if a discovery is brilliant or groundbreaking, that doesn’t necessarily mean it’s patentable, the court said.

However, the ruling wasn’t a complete loss for Myriad. The court said that DNA molecules synthesized in a laboratory were eligible for patent protection. Myriad’s shares soared after the court’s ruling.

The court adopted the position advanced by the Obama administration, which argued that isolated forms of naturally occurring DNA weren’t patentable, but artificial DNA molecules were.

Myriad also has patent claims on artificial genes, known as cDNA.

The high court’s ruling was a win for a coalition of cancer patients, medical groups and geneticists who filed a lawsuit in 2009 challenging Myriad’s patents. Thanks to those patents, the Salt Lake City company has been the exclusive U.S. commercial provider of genetic tests for breast cancer and ovarian cancer.

“Today, the court struck down a major barrier to patient care and medical innovation,” said Sandra Park of the American Civil Liberties Union, which represented the groups challenging the patents. “Because of this ruling, patients will have greater access to genetic testing and scientists can engage in research on these genes without fear of being sued.”

Myriad didn’t immediately respond to a request for comment.

The challengers argued the patents have allowed Myriad to dictate the type and terms of genetic screening available for the diseases, while also dissuading research by other laboratories.

Read the entire article here.

Image: Gene showing the coding region in a segment of eukaryotic DNA. Courtesy of Wikipedia.

From RNA Chemistry to Cell Biology

Each day we inch towards a better scientific understanding of how life is thought to have begun on our planet. Over the last decade researchers have shown how molecules like the nucleotides that make up complex chains of RNA (ribonucleic acid) and DNA (deoxyribonucleic acid) may have formed in the primeval chemical soup of the early Earth. But it is altogether a much greater leap to get from RNA (or DNA) to even a simple biological cell. Some recent work sheds more light, suggesting that the chemical-to-biological chasm between long strands of RNA and a complex cell may not be as wide as once thought.

From ars technica:

Origin of life researchers have made impressive progress in recent years, showing that simple chemicals can combine to make nucleotides, the building blocks of DNA and RNA. Given the right conditions, these nucleotides can combine into ever-longer stretches of RNA. A lot of work has demonstrated that RNAs can perform all sorts of interesting chemistry, specifically binding other molecules and catalyzing reactions.

So the case for life getting its start in an RNA world has gotten very strong in the past decade, but the difference between a collection of interesting RNAs and anything like a primitive cell—surrounded by membranes, filled with both RNA and proteins, and running a simple metabolism—remains a very wide chasm. Or so it seems. A set of papers that came out in the past several days suggest that the chasm might not be as large as we’d tend to think.

Ironing out metabolism

A lot of the basic chemistry that drives the cell is based on electron transport, typically involving proteins that contain an iron atom. These reactions not only create some of the basic chemicals that are necessary for life, they’re also essential to powering the cell. Both photosynthesis and the breakdown of sugars involve the transfer of electrons to and from proteins that contain an iron atom.

DNA and RNA tend to have nothing to do with iron, interacting with magnesium instead. But some researchers at Georgia Tech have considered that fact a historical accident. Since photosynthesis put so much oxygen into the atmosphere, most of the iron has been oxidized into a state where it’s not soluble in water. If you go back to before photosynthesis was around, the oceans were filled with dissolved iron. Previously, the group had shown that, in oxygen-free and iron-rich conditions, RNAs would happily work with iron instead and that its presence could speed up their catalytic activity.

Now the group is back with a new paper showing that if you put a bunch of random RNAs into the same conditions, some of them can catalyze electron transfer reactions. By “random,” I mean RNAs that are currently used by cells to do completely unrelated things (specifically, ribosomal and transfer RNAs). The reactions they catalyze are very simple, but remember: these RNAs don’t normally function as a catalyst at all. It wouldn’t surprise me if, after a number of rounds of evolutionary selection, an iron-RNA combination could be found that catalyzes a reaction that’s a lot closer to modern metabolism.

All of which suggests that the basics of a metabolism could have gotten started without proteins around.

Proteins build membranes

Clearly, proteins showed up at some point. They certainly didn’t look much like the proteins we see today, which may have hundreds or thousands of amino acids linked together. In fact, they may not have looked much like proteins at all, if a paper from Jack Szostak’s group is any indication. Szostak’s team found that just two amino acids linked together may have catalytic activity. Some of that activity can help them engage in competition over another key element of the first cells: membrane material.

The work starts with a chemical called a peptide, just two amino acids long. If that peptide happens to be serine linked to histidine (two amino acids in use by life today), it has an interesting chemical activity: very slowly and poorly, it links other amino acids together to form more peptides. This weak activity is especially pronounced if the amino acids are phenylalanine and leucine, two water-hating chemicals. Once they’re linked, they will precipitate out of a water solution.

The authors added a fatty acid membrane, figuring that it would soak up the reaction product. That definitely worked, with the catalytic efficiency of serine-histidine going up as a result. But something else happened as well: membranes that incorporated the reaction product started growing. It turns out that its presence in the membrane made it an efficient scrounger of other membrane material. As they grew, these membranes extended as long filaments that would break up into smaller parts with a gentle agitation and then start growing all over again.

In fact, the authors could set up a bit of a Darwinian competition between membranes based on how much starting catalyst each had. All of which suggests that proteins might have found their way into the cell as very simple chemicals that, at least initially, weren’t in any way connected to genetic and biochemical functions performed by RNA. But any cell-like things that evolved an RNA that made short proteins could have a big advantage over its competition.

Read the entire article here.

Intelligenetics

Intelligenetics isn’t recognized as a real word by Webster’s or the Oxford English Dictionary. We just coined the term as one that might best represent the growing field of research examining the genetic basis for human intelligence. Of course, it’s not a new subject and it comes with many cautionary tales. Past research into the genetic foundations of intelligence has often been misused by one group seeking racial, ethnic or political power over another. However, with strong and appropriate safeguards in place, science does have a legitimate role in uncovering what makes some brains excel while others do not.

From the Wall Street Journal:

At a former paper-printing factory in Hong Kong, a 20-year-old wunderkind named Zhao Bowen has embarked on a challenging and potentially controversial quest: uncovering the genetics of intelligence.

Mr. Zhao is a high-school dropout who has been described as China’s Bill Gates. He oversees the cognitive genomics lab at BGI, a private company that is partly funded by the Chinese government.

At the Hong Kong facility, more than 100 powerful gene-sequencing machines are deciphering about 2,200 DNA samples, reading off their 3.2 billion chemical base pairs one letter at a time. These are no ordinary DNA samples. Most come from some of America’s brightest people—extreme outliers in the intelligence sweepstakes.

The majority of the DNA samples come from people with IQs of 160 or higher. By comparison, average IQ in any population is set at 100. The average Nobel laureate registers at around 145. Only one in every 30,000 people is as smart as most of the participants in the Hong Kong project—and finding them was a quest of its own.
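
The "one in every 30,000" figure follows directly from the normal model of IQ on the conventional scale: with a mean of 100 and (our assumption, since the article doesn't state it) a standard deviation of 15, an IQ of 160 sits four standard deviations above the mean.

```python
# Where "one in every 30,000" comes from: on the conventional IQ scale
# (mean 100, SD 15 -- the SD is our assumption, not stated in the article),
# an IQ of 160 is 4 standard deviations above the mean.
from scipy.stats import norm

z = (160 - 100) / 15             # = 4.0
tail = norm.sf(z)                # P(IQ >= 160) under the normal model
print(f"1 in {1 / tail:,.0f}")   # 1 in 31,574 -- the article's "one in 30,000"
```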

“People have chosen to ignore the genetics of intelligence for a long time,” said Mr. Zhao, who hopes to publish his team’s initial findings this summer. “People believe it’s a controversial topic, especially in the West. That’s not the case in China,” where IQ studies are regarded more as a scientific challenge and therefore are easier to fund.

The roots of intelligence are a mystery. Studies show that at least half of the variation in intelligence quotient, or IQ, is inherited. But while scientists have identified some genes that can significantly lower IQ—in people afflicted with mental retardation, for example—truly important genes that affect normal IQ variation have yet to be pinned down.

The Hong Kong researchers hope to crack the problem by comparing the genomes of super-high-IQ individuals with the genomes of people drawn from the general population. By studying the variation in the two groups, they hope to isolate some of the hereditary factors behind IQ.

Their conclusions could lay the groundwork for a genetic test to predict a person’s inherited cognitive ability. Such a tool could be useful, but it also might be divisive.

“If you can identify kids who are going to have trouble learning, you can intervene” early on in their lives, through special schooling or other programs, says Robert Plomin, a professor of behavioral genetics at King’s College, London, who is involved in the BGI project.

Read the entire article here.

Orphan Genes

DNA is a remarkable substance. It is the fundamental blueprint for biological systems, the basis for all complex life on our planet, and the means by which parents share characteristics, both good and bad, with their children. Yet the more geneticists learn about the functions of DNA, the more mysteries it presents. One such conundrum is posed by so-called junk DNA and orphan genes — seemingly useless sequences of DNA that perform no function. Or so researchers previously believed.

From New Scientist:

NOT having any family is tough. Often unappreciated and uncomfortably different, orphans have to fight to fit in and battle against the odds to realise their potential. Those who succeed, from Aristotle to Steve Jobs, sometimes change the world.

Who would have thought that our DNA plays host to a similar cast of foundlings? When biologists began sequencing genomes, they discovered that up to a third of genes in each species seemed to have no parents or family of any kind. Nevertheless, some of these “orphan genes” are high achievers, and a few even seem to have played a part in the evolution of the human brain.

But where do they come from? With no obvious ancestry, it was as if these genes had appeared from nowhere, but that couldn’t be true. Everyone assumed that as we learned more, we would discover what had happened to their families. But we haven’t – quite the opposite, in fact.

Ever since we discovered genes, biologists have been pondering their origins. At the dawn of life, the very first genes must have been thrown up by chance. But life almost certainly began in an RNA world, so back then, genes weren’t just blueprints for making enzymes that guide chemical reactions – they themselves were the enzymes. If random processes threw up a piece of RNA that could help make more copies of itself, natural selection would have kicked in straight away.

As living cells evolved, though, things became much more complex. A gene became a piece of DNA coding for a protein. For a protein to be made, an RNA copy of the DNA has to be created. This cannot happen without “DNA switches”, which are actually just extra bits of DNA alongside the protein-coding bits saying “copy this DNA into RNA”. Next, the RNA has to get to the protein-making factories. In complex cells, this requires the presence of yet more extra sequences, which act as labels saying “export me” and “start making the protein from here”.
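
The chain of steps described here, copy DNA into RNA, then read the RNA to build a protein, can be caricatured in a few lines. A toy sketch with a deliberately abridged codon table:

```python
# Toy illustration of the two steps described above: transcribe the
# coding strand into mRNA, then translate codons into amino acids.
# The codon table is deliberately abridged.
CODONS = {"AUG": "Met", "UUU": "Phe", "CUG": "Leu",
          "AAA": "Lys", "UAA": "STOP", "UAG": "STOP", "UGA": "STOP"}

def transcribe(dna: str) -> str:
    return dna.replace("T", "U")      # coding strand -> mRNA

def translate(mrna: str) -> list[str]:
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS.get(mrna[i:i + 3], "???")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate(transcribe("ATGTTTAAACTGTAA")))  # ['Met', 'Phe', 'Lys', 'Leu']
```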

The upshot is that the chances of random mutations turning a bit of junk DNA into a new gene seem infinitesimally small. As the French biologist François Jacob famously wrote 35 years ago, “the probability that a functional protein would appear de novo by random association of amino acids is practically zero”.
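
Jacob's "practically zero" is easy to put a number on. Assuming, purely for illustration, one specific protein 100 amino acids long drawn from the 20 standard amino acids:

```python
# Illustrative only: the chance of randomly assembling one specific
# 100-amino-acid sequence from the 20 standard amino acids.
p = 20.0 ** -100
print(f"p = {p:.1e}")   # 7.9e-131 -- practically zero indeed
```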

Instead, back in the 1970s it was suggested that the accidental copying of genes can result in a single gene giving rise to a whole family of genes, rather like the way animals branch into families of related species over time. It’s common for entire genes to be inadvertently duplicated. Spare copies are usually lost, but sometimes the duplicates come to share the function of the original gene between them, or one can diverge and take on a new function.

Take the light-sensing pigments known as opsins. The various opsins in our eyes are not just related to each other, they are also related to the opsins found in all other animals, from jellyfish to insects. The thousands of different opsin genes found across the animal kingdom all evolved by duplication, starting with a single gene in a common ancestor living around 700 million years ago (see diagram).

Most genes belong to similar families, and their ancestry can be traced back many millions of years. But when the yeast genome was sequenced around 15 years ago, it was discovered that around a third of yeast genes appeared to have no family. The term orphans (sometimes spelt ORFans) was used to describe individual genes, or small groups of very similar genes, with no known relatives.

“If you see a gene and you can’t find a relative you get suspicious,” says Ken Weiss, who studies the evolution of complex traits at Penn State University. Some suggested orphans were the genetic equivalent of living fossils like the coelacanth, the last surviving members of an ancient family. Others thought they were nothing special, just normal genes whose family hadn’t been found yet. After all, the sequencing of entire genomes had only just begun.

Read the entire article here.

Image: DNA structure. Courtesy of Wikipedia.

Shakespearian Sonnets Now Available on DNA

Shakespeare, meet thy DNA. The most famous literary figure in the English language recently had a rendezvous with that most famous and studied of molecules. Together, chemists, cell biologists, geneticists and computer scientists are doing some amazing things: storing information using sequences of bases on the DNA molecule.

From ars technica:

It’s easy to get excited about the idea of encoding information in single molecules, which seems to be the ultimate end of the miniaturization that has been driving the electronics industry. But it’s also easy to forget that we’ve been beaten there—by a few billion years. The chemical information present in biomolecules was critical to the origin of life and probably dates back to whatever interesting chemical reactions preceded it.

It’s only within the past few decades, however, that humans have learned to speak DNA. Even then, it took a while to develop the technology needed to synthesize and determine the sequence of large populations of molecules. But we’re there now, and people have started experimenting with putting binary data in biological form. Now, a new study has confirmed the flexibility of the approach by encoding everything from an MP3 to the decoding algorithm into fragments of DNA. The cost analysis done by the authors suggests that the technology may soon be suitable for decade-scale storage, provided current trends continue.

Trinary encoding

Computer data is in binary, while each location in a DNA molecule can hold any one of four bases (A, T, C, and G). Rather than using all that extra information capacity, however, the authors used it to avoid a technical problem. Stretches of a single type of base (say, TTTTT) are often not sequenced properly by current techniques—in fact, this was the biggest source of errors in the previous DNA data storage effort. So for this new encoding, they used one of the bases to break up long runs of any of the other three.

(To explain how this works practically, let’s say the A, T, and C encoded information, while G represents “more of the same.” If you had a run of four A’s, you could represent it as AAGA. But since the G doesn’t encode for anything in particular, TTGT can be used to represent four T’s. The only thing that matters is that there are no more than two identical bases in a row.)

That leaves three bases to encode information, so the authors converted their information into trinary. In all, they encoded a large number of works: all 154 Shakespeare sonnets, a PDF of a scientific paper, a photograph of the lab some of them work in, and an MP3 of part of Martin Luther King’s “I have a dream” speech. For good measure, they also threw in the algorithm they use for converting binary data into trinary.
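
One concrete way to realize such a run-avoiding code (the parenthetical above sketches one informal variant) is to let each trit pick one of the three bases that differ from the previously written base, so no base can ever repeat. A hedged sketch of that rotating scheme:

```python
# Sketch of a run-avoiding trinary code: each trit (0, 1, 2) selects one
# of the three bases that differ from the previous base, so identical
# bases never appear twice in a row.
NEXT = {"A": "CGT", "C": "GTA", "G": "TAC", "T": "ACG"}

def trits_to_dna(trits: list[int], prev: str = "A") -> str:
    out = []
    for t in trits:
        prev = NEXT[prev][t]
        out.append(prev)
    return "".join(out)

def dna_to_trits(dna: str, prev: str = "A") -> list[int]:
    trits = []
    for base in dna:
        trits.append(NEXT[prev].index(base))
        prev = base
    return trits

seq = trits_to_dna([0, 2, 2, 1, 0, 1])
assert dna_to_trits(seq) == [0, 2, 2, 1, 0, 1]
print(seq)   # "CATCGA" -- never two identical bases in a row
```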

Once in trinary, the results were encoded into the error-avoiding DNA code described above. The resulting sequence was then broken into chunks that were easy to synthesize. Each chunk came with parity information (for error correction), a short file ID, and some data that indicates the offset within the file (so, for example, that the sequence holds digits 500-600). To provide an added level of data security, 100-base-long DNA inserts were staggered by 25 bases so that consecutive fragments had a 75-base overlap. Thus, many sections of the file were carried by four different DNA molecules.
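
The fragmentation step can be sketched too. The header fields below (file ID, offset, parity) are simplified stand-ins for the paper's actual format:

```python
# Sketch of the chunking scheme described above: 100-base fragments
# staggered by 25 bases, so consecutive fragments overlap by 75 bases
# and most positions are covered four times. Header fields are toy
# stand-ins, not the study's real layout.
def make_fragments(dna: str, file_id: str, size: int = 100, step: int = 25):
    fragments = []
    for start in range(0, max(len(dna) - size, 0) + 1, step):
        chunk = dna[start:start + size]
        parity = sum(ord(b) for b in chunk) % 4      # toy parity stand-in
        fragments.append({"file_id": file_id,
                          "offset": start,           # position within the file
                          "parity": parity,
                          "seq": chunk})
    return fragments

frags = make_fragments("ACGT" * 100, file_id="sonnets")
print(len(frags), frags[0]["seq"][:10], frags[1]["offset"])   # 13 ACGTACGTAC 25
```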

And it all worked brilliantly—mostly. For most of the files, the authors’ sequencing and analysis protocol could reconstruct an error-free version of the file without any intervention. One, however, ended up with two 25-base-long gaps, presumably resulting from a particular sequence that is very difficult to synthesize. Based on parity and other data, they were able to reconstruct the contents of the gaps, but understanding why things went wrong in the first place would be critical to understanding how well suited this method is to long-term archiving of data.

Read the entire article here.

Image: Title page of Shakespeare’s Sonnets (1609). Courtesy of Wikipedia / Public Domain.

The Missing Linc

LincRNA, that is. Recent discoveries hint at the potentially crucial role of this new class of genetic material in embryonic development, cell and tissue differentiation, and even speciation and evolution.

From the Economist:

THE old saying that where there’s muck, there’s brass has never proved more true than in genetics. Once, and not so long ago, received wisdom was that most of the human genome—perhaps as much as 99% of it—was “junk”. If this junk had a role, it was just to space out the remaining 1%, the genes in which instructions about how to make proteins are encoded, in a useful way in the cell nucleus.

That, it now seems, was about as far from the truth as it is possible to be. The decade or so since the completion of the Human Genome Project has shown that lots of the junk must indeed have a function. The culmination of that demonstration was the publication, in September, of the results of the ENCODE project. This suggested that almost two-thirds of human DNA, rather than just 1% of it, is being copied into molecules of RNA, the chemical that carries protein-making instructions to the sub-cellular factories which turn those proteins out, and that as a consequence, rather than there being just 23,000 genes (namely, the bits of DNA that encode proteins), there may be millions of them.

The task now is to work out what all these extra genes are up to. And a study just published in Genome Biology, by David Kelley and John Rinn of Harvard University, helps do that for one new genetic class, a type known as lincRNAs. In doing so, moreover, Dr Kelley and Dr Rinn show just how complicated the modern science of genetics has become, and hint also at how animal species split from one another.

Lincs in the chain

Molecules of lincRNA are similar to the messenger-RNA molecules which carry protein blueprints. They do not, however, encode proteins. More than 9,000 sorts are known, and most of those whose job has been tracked down are involved in the regulation of other genes, for example by attaching themselves to the DNA switches that control those genes.

LincRNA is rather odd, though. It often contains members of a second class of weird genetic object. These are called transposable elements (or, colloquially, “jumping genes”, because their DNA can hop from one place to another within the genome). Transposable elements come in several varieties, but one group of particular interest are known as endogenous retroviruses. These are the descendants of ancient infections that have managed to hide away in the genome and get themselves passed from generation to generation along with the rest of the genes.

Dr Kelley and Dr Rinn realised that the movement within the genome of transposable elements is a sort of mutation, and wondered if it has evolutionary consequences. Their conclusion is that it does, for when they looked at the relation between such elements and lincRNA genes, they found some intriguing patterns.

In the first place, lincRNAs are much more likely to contain transposable elements than protein-coding genes are. More than 83% do so, in contrast to only 6% of protein-coding genes (an enrichment quantified in the short sketch after this list of findings).

Second, those transposable elements are particularly likely to be endogenous retroviruses, rather than any of the other sorts of element.

Third, the interlopers are usually found in the bit of the gene where the process of copying RNA from the DNA template begins, suggesting they are involved in switching genes on or off.

And fourth, lincRNAs containing one particular type of endogenous retrovirus are especially active in pluripotent stem cells, the embryonic cells that are the precursors of all other cell types. That indicates these lincRNAs have a role in the early development of the embryo.
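
The enrichment in the first finding can be quantified directly from the two percentages quoted above:

```python
# Odds ratio computed straight from the quoted figures: 83% of lincRNAs
# vs 6% of protein-coding genes contain transposable elements.
p_linc, p_coding = 0.83, 0.06
odds_ratio = (p_linc / (1 - p_linc)) / (p_coding / (1 - p_coding))
print(f"odds ratio ~ {odds_ratio:.0f}")   # ~ 76: a very strong enrichment
```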

Previous work suggests lincRNAs are also involved in creating the differences between various sorts of tissue, since many lincRNA genes are active in only one or a few cell types. Given that their principal job is regulating the activities of other genes, this makes sense.

Even more intriguingly, studies of lincRNA genes from species as diverse as people, fruit flies and nematode worms, have found they differ far more from one species to another than do protein-coding genes. They are, in other words, more species specific. And that suggests they may be more important than protein-coding genes in determining the differences between those species.

Read the entire article here.

Image: Darwin’s finches, or Galapagos finches. Darwin, 1845. Courtesy of Wikipedia.

Your Molecular Ancestors

From Scientific American:

Was one of your ancestors an RNA molecule? Well, perhaps your great-to-the-hundred-millionth-grandmother was.

Understanding the origins of life and the mechanics of the earliest beginnings of life is as important for the quest to unravel the Earth’s biological history as it is for the quest to seek out other life in the universe. We’re pretty confident that single-celled organisms – bacteria and archaea – were the first ‘creatures’ to slither around on this planet, but what happened before that is a matter of intense and often controversial debate.

One possibility for a precursor to these organisms was a world without DNA, but with the bare bone molecular pieces that would eventually result in the evolutionary move to DNA and its associated machinery. This idea was put forward by an influential paper in the journal Nature in 1986 by Walter Gilbert (winner of a Nobel in Chemistry), who fleshed out an idea by Carl Woese – who had earlier identified the Archaea as a distinct branch of life. This ancient biomolecular system was called the RNA-world, since it consists of ribonucleic acid sequences (RNA) but lacks the permanent storage mechanisms of deoxyribonucleic acids (DNA).

A key part of the RNA-world hypothesis is that in addition to carrying reproducible information in their sequences, RNA molecules can also perform the duties of enzymes in catalyzing reactions – sustaining a busy, self-replicating, evolving ecosystem. In this picture RNA evolves away until eventually items like proteins come onto the scene, at which point things can really gear up towards more complex and familiar life. It’s an appealing picture for the stepping-stones to life as we know it.

In modern organisms a very complex molecular structure called the ribosome is the critical machine that reads the information in a piece of messenger-RNA (that has spawned off the original DNA) and then assembles proteins according to this blueprint by snatching amino acids out of a cell’s environment and putting them together. Ribosomes are amazing; they’re also composed of a mix of large numbers of RNA molecules and protein molecules.

But there’s a possible catch to all this, and it relates to the idea of a protein-free RNA-world some 4 billion years ago.

Read more here.

Image: RNA molecule. Courtesy of Wired / Universitat Pompeu Fabra.

A Simpler Origin for Life

From Scientific American:

Extraordinary discoveries inspire extraordinary claims. Thus, James Watson reported that immediately after he and Francis Crick uncovered the structure of DNA, Crick “winged into the Eagle (pub) to tell everyone within hearing that we had discovered the secret of life.” Their structure–an elegant double helix–almost merited such enthusiasm. Its proportions permitted information storage in a language in which four chemicals, called bases, played the same role as 26 letters do in the English language.

Further, the information was stored in two long chains, each of which specified the contents of its partner. This arrangement suggested a mechanism for reproduction: The two strands of the DNA double helix parted company, and new DNA building blocks that carry the bases, called nucleotides, lined up along the separated strands and linked up. Two double helices now existed in place of one, each a replica of the original.
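
That replication rule is just Watson-Crick base pairing, small enough to fit in a few lines (strand orientation is ignored here for simplicity):

```python
# The replication rule described above in miniature: each strand fully
# determines its partner through Watson-Crick pairing (A-T, G-C).
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(strand: str) -> str:
    return "".join(PAIR[base] for base in strand)

original = "ATGCGTTA"
copy = partner_strand(original)
print(copy)                                  # TACGCAAT
assert partner_strand(copy) == original      # two helices, each a replica
```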

More from the source here.