Tag Archives: science

A Kid’s Book For Adults

One of the most engaging new books for young children is a picture book that explains evolution. By way of whimsical illustrations and comparisons of animal skeletons, the book — Bone By Bone — delivers the story of evolutionary theory in an entertaining and compelling way.

Perhaps it could be used just as well for those adults who have trouble grappling with the fruits of the scientific method. The Texas State Board of Education would be an ideal place to begin.

Bone By Bone is written by veterinarian Sara Levine.

From Slate:

In some of the best children’s books, dandelions turn into stars, sharks and radishes merge, and pancakes fall from the sky. No one would confuse these magical tales for descriptions of nature. Small children can differentiate between “the real world and the imaginary world,” as psychologist Alison Gopnik has written. They just “don’t see any particular reason for preferring to live in the real one.”

Children’s nuanced understanding of the not-real surely extends to the towering heap of books that feature dinosaurs as playmates who fill buckets of sand or bake chocolate-chip cookies. The imaginative play of these books may be no different to kids than radish-sharks and llama dramas.

But as a parent, friendly dinos never steal my heart. I associate them, just a little, with old creationist images of animals frolicking near the Garden of Eden, which carried the message that dinosaurs and man, both created by God on the sixth day, co-existed on the Earth until after the flood. (Never mind the evidence that dinosaurs went extinct millions of years before humans appeared.) The founder of the Creation Museum in Kentucky calls dinosaurs “missionary lizards,” and that phrase echoes in my head when I see all those goofy illustrations of dinosaurs in sunglasses and hats.

I’ve been longing for another kind of picture book: one that appeals to young children’s wildest imagination in service of real evolutionary thinking. Such a book could certainly include dinosaur skeletons or fossils. But Bone by Bone, by veterinarian and professor Sara Levine, fills the niche to near perfection by relying on dogs, rabbits, bats, whales, and humans. Levine plays with differences in their skeletons to groom kids for grand scientific concepts.

Bone by Bone asks kids to imagine what their bodies would look like if they had different configurations of bones, like extra vertebrae, longer limbs, or fewer fingers. “What if your vertebrae didn’t stop at your rear end? What if they kept going?” Levine writes, as a boy peers over his shoulder at the spinal column. “You’d have a tail!”

“What kind of animal would you be if your leg bones were much, much longer than your arm bones?” she wonders, as a girl in pink sneakers rises so tall her face disappears from the page. “A rabbit or a kangaroo!” she says, later adding a pika and a hare. “These animals need strong hind leg bones for jumping.” Levine’s questions and answers are delightfully simple for the scientific heft they carry.

With the lightest possible touch, Levine introduces the idea that bones in different vertebrates are related and that they morph over time. She starts with vertebrae, skulls and ribs. But other structures bear strong kinships in these animals, too. The bone in the center of a horse’s hoof, for instance, is related to a human finger. (“What would happen if your middle fingers and the middle toes were so thick that they supported your whole body?”) The bones that radiate out through a bat’s wing are linked to those in a human hand. (“A web of skin connects the bones to make wings so that a bat can fly.”) This is different from the wings of a bird or an insect; with bats, it’s almost as if they’re swimming through air.

Of course, human hands did not shape-shift into bats’ wings, or vice versa. Both derive from a common ancestral structure, which means they share an evolutionary past. Homology, as this kind of relatedness is called, is among “the first and in many ways the best evidence for evolution,” says Josh Rosenau of the National Center for Science Education. Comparing bones also paves the way for comparing genes and molecules, for grasping evolution at the next level of sophistication. Indeed, it’s hard to look at the bat wings and human hands as presented here without lighting up, at least a little, with these ideas. So many smart writers focus on preparing young kids to read or understand numbers. Why not do more to ready them for the big ideas of science? Why not pave the way for evolution? (This is easier to do with older kids, with books like The Evolution of Calpurnia Tate and Why Don’t Your Eyelashes Grow?)

Read the entire story here.

Image: Bone By Bone, book cover. Courtesy: Lerner Publishing Group

The Universe of Numbers

There is no doubt that mathematics — some very complex — has been able to explain much of what we consider the universe. In reality, and perhaps surprisingly, only a small subset of equations is required to explain everything around us from the atoms and their constituents to the vast cosmos. Why is that? And, what is the fundamental relationship between mathematics and our current physical understanding of all things great and small?

From the New Scientist:

When Albert Einstein finally completed his general theory of relativity in 1916, he looked down at the equations and discovered an unexpected message: the universe is expanding.

Einstein didn’t believe the physical universe could shrink or grow, so he ignored what the equations were telling him. Thirteen years later, Edwin Hubble found clear evidence of the universe’s expansion. Einstein had missed the opportunity to make the most dramatic scientific prediction in history.

How did Einstein’s equations “know” that the universe was expanding when he did not? If mathematics is nothing more than a language we use to describe the world, an invention of the human brain, how can it possibly churn out anything beyond what we put in? “It is difficult to avoid the impression that a miracle confronts us here,” wrote physicist Eugene Wigner in his classic 1960 paper “The unreasonable effectiveness of mathematics in the natural sciences” (Communications on Pure and Applied Mathematics, vol 13, p 1).

The prescience of mathematics seems no less miraculous today. At the Large Hadron Collider at CERN, near Geneva, Switzerland, physicists recently observed the fingerprints of a particle that was arguably discovered 48 years ago lurking in the equations of particle physics.

How is it possible that mathematics “knows” about Higgs particles or any other feature of physical reality? “Maybe it’s because math is reality,” says physicist Brian Greene of Columbia University, New York. Perhaps if we dig deep enough, we would find that physical objects like tables and chairs are ultimately not made of particles or strings, but of numbers.

“These are very difficult issues,” says philosopher of science James Ladyman of the University of Bristol, UK, “but it might be less misleading to say that the universe is made of maths than to say it is made of matter.”

Difficult indeed. What does it mean to say that the universe is “made of mathematics”? An obvious starting point is to ask what mathematics is made of. The late physicist John Wheeler said that the “basis of all mathematics is 0 = 0”. All mathematical structures can be derived from something called “the empty set”, the set that contains no elements. Say this set corresponds to zero; you can then define the number 1 as the set that contains only the empty set, 2 as the set containing the sets corresponding to 0 and 1, and so on. Keep nesting the nothingness like invisible Russian dolls and eventually all of mathematics appears. Mathematician Ian Stewart of the University of Warwick, UK, calls this “the dreadful secret of mathematics: it’s all based on nothing” (New Scientist, 19 November 2011, p 44). Reality may come down to mathematics, but mathematics comes down to nothing at all.
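To make Wheeler’s point concrete, here is a minimal sketch, in Python and purely for illustration, of the set-theoretic construction the article describes: starting from the empty set, each natural number is defined as the set of all the numbers that came before it.

```python
# A toy illustration of building the natural numbers from the empty set
# (the von Neumann construction the article alludes to). Frozensets are
# used because ordinary Python sets cannot contain mutable sets.

def von_neumann_naturals(n):
    """Return the 'numbers' 0..n, each built only from nested empty sets."""
    numbers = [frozenset()]                 # 0 is the empty set
    for _ in range(n):
        numbers.append(frozenset(numbers))  # k+1 is the set {0, 1, ..., k}
    return numbers

nats = von_neumann_naturals(3)
for i, s in enumerate(nats):
    print(i, "is a set with", len(s), "elements")
# 0 is a set with 0 elements
# 1 is a set with 1 elements
# 2 is a set with 2 elements
# 3 is a set with 3 elements
```

Every “number” here is ultimately nothing but nested empty sets, which is exactly Stewart’s “dreadful secret” in miniature.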

That may be the ultimate clue to existence – after all, a universe made of nothing doesn’t require an explanation. Indeed, mathematical structures don’t seem to require a physical origin at all. “A dodecahedron was never created,” says Max Tegmark of the Massachusetts Institute of Technology. “To be created, something first has to not exist in space or time and then exist.” A dodecahedron doesn’t exist in space or time at all, he says – it exists independently of them. “Space and time themselves are contained within larger mathematical structures,” he adds. These structures just exist; they can’t be created or destroyed.

That raises a big question: why is the universe only made of some of the available mathematics? “There’s a lot of math out there,” Greene says. “Today only a tiny sliver of it has a realisation in the physical world. Pull any math book off the shelf and most of the equations in it don’t correspond to any physical object or physical process.”

It is true that seemingly arcane and unphysical mathematics does, sometimes, turn out to correspond to the real world. Imaginary numbers, for instance, were once considered totally deserving of their name, but are now used to describe the behaviour of elementary particles; non-Euclidean geometry eventually showed up as gravity. Even so, these phenomena represent a tiny slice of all the mathematics out there.

Not so fast, says Tegmark. “I believe that physical existence and mathematical existence are the same, so any structure that exists mathematically is also real,” he says.

So what about the mathematics our universe doesn’t use? “Other mathematical structures correspond to other universes,” Tegmark says. He calls this the “level 4 multiverse”, and it is far stranger than the multiverses that cosmologists often discuss. Their common-or-garden multiverses are governed by the same basic mathematical rules as our universe, but Tegmark’s level 4 multiverse operates with completely different mathematics.

All of this sounds bizarre, but the hypothesis that physical reality is fundamentally mathematical has passed every test. “If physics hits a roadblock at which point it turns out that it’s impossible to proceed, we might find that nature can’t be captured mathematically,” Tegmark says. “But it’s really remarkable that that hasn’t happened. Galileo said that the book of nature was written in the language of mathematics – and that was 400 years ago.”

Read the entire article here.

Elite Mediocrity

Yet another survey of global education attainment puts the United States firmly in another unenviable position. US students ranked a mere 28th in science and came further down the scale in math, at 36th, out of 65 nations. So, it’s time for another well-earned attack on the system that is increasingly nurturing mainstream mediocrity and dumbing down education to mush. In fact, some nameless states seem to celebrate this by re-working textbooks and curricula to ensure historical fact and scientific principles are distorted to promote a religious agenda. And, for those who point to the US as a guiding light in all things innovative, please don’t forget that a significant proportion of its innovators gained their educational credentials outside the US.

As Comedy Central’s faux-news anchor and satirist Stephen Colbert recently put it:

“Like all great theologies, Bill [O’Reilly]’s can be boiled down to one sentence: there must be a God, because I don’t know how things work.”

From the Huffington Post:

The 2012 Programme for International Student Assessment, or PISA, results are in, and there’s some really good news for those that worry about the U.S. becoming a nation of brainy elitists. Of the 65 countries that participated in the PISA assessment, U.S. students ranked 36th in math, and 28th in science. When it comes to elitism, the U.S. truly has nothing to worry about.

For those relative few Americans who were already elite back when the 2009 PISA assessment was conducted, there’s good news for them too: they’re even more elite than they were in 2009, when the US ranked 30th in math and 23rd in science. Educated Americans are so elite, they’re practically an endangered species.

The only nagging possible shred of bad news from these test scores comes in the form of a question: where will the next Internet come from? Which country will deliver the next great big, landscape-changing, technological innovation that will propel its economy upward? The country of bold, transformative firsts, the one that created the world’s first nuclear reactor and landed humans on the moon seems very different than the one we live in today.

Mediocrity in science education has metastasized throughout the American mindset, dumbing down everything in its path, including the choices made by our elected officials. A stinging byproduct of America’s war on excellence in science education was the loss of its leadership position in particle physics research. On March 14 of this year, CERN, the European Organization for Nuclear Research, announced that the Higgs Boson, aka the “God particle,” had been discovered at the EU’s Large Hadron Collider. CERN describes itself as “the world’s leading laboratory for particle physics” — a title previously held by America’s Fermilab. Fermilab’s Tevatron particle accelerator was the world’s largest and most powerful until eclipsed by CERN’s Large Hadron Collider. The Tevatron was shut down on September 30th, 2011.

The Tevatron’s planned replacement, Texas’ Superconducting Super Collider (SSC), would have been three times the size of the EU’s Large Hadron Collider. Over one third of the SSC’s underground tunnel had been bored at the time of its cancellation by congress in 1993. As Texas Monthly reported in “How Texas Lost the World’s Largest Super Collider,” “Nobody doubts that the 40 TeV Superconducting Super Collider (SSC) in Texas would have discovered the Higgs boson a decade before CERN.” Fighting to save the SSC in 1993, its director, Dr. Roy Schwitters, said in a New York Times interview, “The SSC is becoming a victim of the revenge of the C students.”

Ever wonder about the practical benefits of theoretical physics? Consider this: without Einstein’s theory of general relativity, GPS doesn’t work. That’s because time in those GPS satellites whizzing above us in space is slightly different than time for us terrestrials. Without compensating for the difference, our cars would end up in a ditch instead of Starbucks. GPS would also not have happened without advances in US space technology. Consider that, in 2013, there are two manned spacefaring nations on Earth – the US isn’t one of them. GPS alone is estimated to generate $122.4 billion annually in direct and related benefits according to an NDP Consulting Group report. The Superconducting Super Collider would have cost $8.4 billion.
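As a rough illustration of the relativity point the excerpt makes, here is a back-of-the-envelope sketch in Python. It assumes a circular GPS orbit at a radius of roughly 26,561 km and textbook values for the physical constants; real GPS corrections use more careful models.

```python
import math

# Back-of-the-envelope estimate of the relativistic clock offset for GPS
# satellites, assuming a circular orbit and standard constant values.

C = 2.99792458e8        # speed of light, m/s
GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6       # mean Earth radius, m
R_ORBIT = 2.6561e7      # assumed GPS orbital radius, m
DAY = 86400.0           # seconds per day

v = math.sqrt(GM / R_ORBIT)                       # orbital speed, ~3.9 km/s

special = -0.5 * v**2 / C**2                      # moving clock runs slow
general = (GM / C**2) * (1/R_EARTH - 1/R_ORBIT)   # higher altitude runs fast

net_per_day = (special + general) * DAY           # ~ +38 microseconds per day
range_error = C * abs(net_per_day)                # ~ 11 km of drift per day

print(f"net clock offset: {net_per_day*1e6:+.1f} microseconds per day")
print(f"uncorrected range error: {range_error/1000:.1f} km per day")
```

The sign matters: left uncorrected, the satellite clocks would run fast by tens of microseconds a day and the position fix would drift by roughly ten kilometers per day — the “ditch instead of Starbucks” scenario.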

‘C’ students’ revenge doesn’t stop with crushing super colliders or grounding our space program. Fox News’ Bill O’Reilly famously translated his inability to explain 9th grade astronomy into justification for teaching creationism in public schools, stating that we don’t know how tides work, or where the sun or moon comes from, or why the Earth has a moon and Mars doesn’t (Mars actually has two moons).

Read the entire article here.

Meta-Research: Discoveries From Research on Discoveries

Discoveries through scientific research don’t just happen in the lab. Many, of course, still do. But some discoveries now come through data analysis of the research literature itself. Here, sophisticated data mining tools and semantic software sift through hundreds of thousands of research papers looking for patterns and links that would otherwise escape the eye of human researchers.

From Technology Review:

Software that read tens of thousands of research papers and then predicted new discoveries about the workings of a protein that’s key to cancer could herald a faster approach to developing new drugs.

The software, developed in a collaboration between IBM and Baylor College of Medicine, was set loose on more than 60,000 research papers that focused on p53, a protein involved in cell growth, which is implicated in most cancers. By parsing sentences in the documents, the software could build an understanding of what is known about enzymes called kinases that act on p53 and regulate its behavior; these enzymes are common targets for cancer treatments. It then generated a list of other proteins mentioned in the literature that were probably undiscovered kinases, based on what it knew about those already identified. Most of its predictions tested so far have turned out to be correct.
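The excerpt doesn’t describe IBM’s actual algorithms, but the general idea — read sentences, and rank proteins by how often they appear in kinase-like contexts around p53 — can be sketched in a few lines. This is a toy sketch only; the sentence patterns and example abstracts below are invented for illustration and are not IBM’s system.

```python
import re
from collections import Counter

# Toy literature-based candidate ranking: count how often each protein
# name co-occurs in a sentence with p53 and a kinase-like verb.
# The abstracts and the crude patterns are made up for illustration.

KINASE_VERBS = re.compile(r"\bphosphorylat\w*\b", re.IGNORECASE)
PROTEIN_NAME = re.compile(r"\b[A-Z][A-Z0-9]{2,6}\b")   # crude gene-symbol pattern

abstracts = [
    "CHK2 phosphorylates p53 at Ser20 in response to damage.",
    "We show that AURKA interacts with p53 but does not phosphorylate it.",
    "ATM-mediated phosphorylation of p53 stabilizes the protein.",
]

scores = Counter()
for text in abstracts:
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        if "p53" in sentence.lower() and KINASE_VERBS.search(sentence):
            for name in PROTEIN_NAME.findall(sentence):
                if name.lower() != "p53":
                    scores[name] += 1

# Highest-scoring names are the candidate kinases to test in the lab.
print(scores.most_common())
```

A real system would add proper sentence parsing, entity normalization and negation handling, but the workflow — extract statements from text, score candidates, hand the top of the list to experimentalists — is the same shape as what the article describes.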

“We have tested 10,” Olivier Lichtarge of Baylor said Tuesday. “Seven seem to be true kinases.” He presented preliminary results of his collaboration with IBM at a meeting on the topic of Cognitive Computing held at IBM’s Almaden research lab.

Lichtarge also described an earlier test of the software in which it was given access to research literature published prior to 2003 to see if it could predict p53 kinases that have been discovered since. The software found seven of the nine kinases discovered after 2003.

“P53 biology is central to all kinds of disease,” says Lichtarge, and so it seemed to be the perfect way to show that software-generated discoveries might speed up research that leads to new treatments. He believes the results so far show that to be true, although the kinase-hunting experiments are yet to be reviewed and published in a scientific journal, and more lab tests are still planned to confirm the findings so far. “Kinases are typically discovered at a rate of one per year,” says Lichtarge. “The rate of discovery can be vastly accelerated.”

Lichtarge said that although the software was configured to look only for kinases, it also seems capable of identifying previously unidentified phosphatases, which are enzymes that reverse the action of kinases. It can also identify other types of protein that may interact with p53.

The Baylor collaboration is intended to test a way of extending a set of tools that IBM researchers already offer to pharmaceutical companies. Under the banner of accelerated discovery, text-analyzing tools are used to mine publications, patents, and molecular databases. For example, a company in search of a new malaria drug might use IBM’s tools to find molecules with characteristics that are similar to existing treatments. Because software can search more widely, it might turn up molecules in overlooked publications or patents that no human would otherwise find.

“We started working with Baylor to adapt those capabilities, and extend it to show this process can be leveraged to discover new things about p53 biology,” says Ying Chen, a researcher at IBM Research Almaden.

It typically takes between $500 million and $1 billion to develop a new drug, and 90 percent of candidates that begin the journey don’t make it to market, says Chen. The cost of failed drugs is cited as one reason that some drugs command such high prices (see “A Tale of Two Drugs”).

Lawrence Hunter, director of the Center for Computational Pharmacology at the University of Colorado Denver, says that careful empirical confirmation is needed for claims that the software has made new discoveries. But he says that progress in this area is important, and that such tools are desperately needed.

The volume of research literature both old and new is now so large that even specialists can’t hope to read everything that might help them, says Hunter. Last year over one million new articles were added to the U.S. National Library of Medicine’s Medline database of biomedical research papers, which now contains 23 million items. Software can crunch through massive amounts of information and find vital clues in unexpected places. “Crucial bits of information are sometimes isolated facts that are only a minor point in an article but would be really important if you can find it,” he says.

Read the entire article here.

The Mother of All Storms

Some regions of our planet are home to violent and destructive storms. However, one look at a recent mega-storm on Saturn may put it all in perspective — it could be much, much worse.

From ars technica:

Jupiter’s Great Red Spot may get most of the attention, but it’s hardly the only big weather event in the Solar System. Saturn, for example, has an odd hexagonal pattern in the clouds at its north pole, and when the planet tilted enough to illuminate it, the light revealed a giant hurricane embedded in the center of the hexagon. Scientists think the immense storm may have been there for years.

But Saturn is also home to transient storms that show up sporadically. The most notable of these are the Great White Spots, which can persist for months and alter the weather on a planetary scale. Great White Spots are rare, with only six having been observed since 1876. When one formed in 2010, we were lucky enough to have the Cassini orbiter in place to watch it from close up. Even though the head of the storm was roughly 7,000 km across, Cassini’s cameras were able to image it at resolutions where each pixel was only 14 km across, allowing an unprecedented view into the storm’s dynamics.

The storm turned out to be very violent, with convective features as big as 3,000 km across that could form and dissipate in as little as 10 hours. Winds of over 400 km/hour were detected, and the pressure gradient between the storm and the unaffected areas nearby was twice that of the one observed in the Great Red Spot of Jupiter. By carefully mapping the direction of the winds, the authors were able to conclude that the head of the White Spot was an anti-cyclone, with winds orbiting around a central feature.

Convection that brings warm material up from the depths of Saturn’s atmosphere appears to be key to driving these storms. The authors built an atmospheric model that could reproduce the White Spot and found that shutting down the energy injection from the lower atmosphere was enough to kill the storm. In addition, observations suggest that many areas of the storm contain freshly condensed particles, which may represent material that was brought up from the lower atmosphere and then condensed when it reached the cooler upper layers.

The Great White Spot was an anticyclone, and the authors’ model suggests that there’s only a very narrow band of winds on Saturn that enables the formation of a Great White Spot. The convective activity won’t trigger a White Spot anywhere outside the range of 31.5°N to 32.4°N, which probably goes a long way toward explaining why the storms are so rare.

Read the entire article here.

Image: The huge storm churning through the atmosphere in Saturn’s northern hemisphere overtakes itself as it encircles the planet in this true-color view from NASA’s Cassini spacecraft. Courtesy of NASA/JPL.

Science and Art of the Brain

Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.

From the New York Times:

This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.

Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.

As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.

The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.

This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.

Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.

Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?

Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”

Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.

This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.

Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.

Read the entire article following the jump.

The Dangerous World of Pseudo-Academia

Pseudoscience can be fun — for comedic purposes only, of course. But when it is taken seriously and dogmatically, as it often is by a significant number of people, it imperils rational dialogue and threatens real scientific and cultural progress. There is no end to the lengthy list of fake scientific claims and theories — some of our favorites include the moon “landing” conspiracy, hollow Earth, the Bermuda Triangle, crop circles, psychic surgery, body earthing, room-temperature fusion, and perpetual motion machines.

Fun aside, pseudoscience can also be harmful and dangerous particularly when those duped by the dubious practice are harmed physically, medically or financially. Which brings us to a recent, related development aimed at duping academics. Welcome to the world of pseudo-academia.

From the New York Times:

The scientists who were recruited to appear at a conference called Entomology-2013 thought they had been selected to make a presentation to the leading professional association of scientists who study insects.

But they found out the hard way that they were wrong. The prestigious, academically sanctioned conference they had in mind has a slightly different name: Entomology 2013 (without the hyphen). The one they had signed up for featured speakers who were recruited by e-mail, not vetted by leading academics. Those who agreed to appear were later charged a hefty fee for the privilege, and pretty much anyone who paid got a spot on the podium that could be used to pad a résumé.

“I think we were duped,” one of the scientists wrote in an e-mail to the Entomological Society.

Those scientists had stumbled into a parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them. Many of the journals and meetings have names that are nearly identical to those of established, well-known publications and events.

Steven Goodman, a dean and professor of medicine at Stanford and the editor of the journal Clinical Trials, which has its own imitators, called this phenomenon “the dark side of open access,” the movement to make scholarly publications freely available.

The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.

Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”

Researchers also say that universities are facing new challenges in assessing the résumés of academics. Are the publications they list in highly competitive journals or ones masquerading as such? And some academics themselves say they have found it difficult to disentangle themselves from these journals once they mistakenly agree to serve on their editorial boards.

The phenomenon has caught the attention of Nature, one of the most competitive and well-regarded scientific journals. In a news report published recently, the journal noted “the rise of questionable operators” and explored whether it was better to blacklist them or to create a “white list” of those open-access journals that meet certain standards. Nature included a checklist on “how to perform due diligence before submitting to a journal or a publisher.”

Jeffrey Beall, a research librarian at the University of Colorado in Denver, has developed his own blacklist of what he calls “predatory open-access journals.” There were 20 publishers on his list in 2010, and now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 percent of the total number of open-access journals.

“It’s almost like the word is out,” he said. “This is easy money, very little work, a low barrier start-up.”

Journals on what has become known as “Beall’s list” generally do not post the fees they charge on their Web sites and may not even inform authors of them until after an article is submitted. They barrage academics with e-mail invitations to submit articles and to be on editorial boards.

One publisher on Beall’s list, Avens Publishing Group, even sweetened the pot for those who agreed to be on the editorial board of The Journal of Clinical Trails & Patenting, offering 20 percent of its revenues to each editor.

One of the most prolific publishers on Beall’s list, Srinubabu Gedela, the director of the Omics Group, has about 250 journals and charges authors as much as $2,700 per paper. Dr. Gedela, who lists a Ph.D. from Andhra University in India, says on his Web site that he “learnt to devise wonders in biotechnology.”

Read the entire article following the jump.

Image courtesy of University of Texas.

Exoplanet Exploration

It wasn’t too long ago that astronomers found the first indirect evidence of a planet beyond our solar system. They inferred the presence of an exoplanet (extrasolar planet) from the periodic dimming or wobble of its parent star, rather than from much more difficult direct observation. Since the first confirmed exoplanet around a Sun-like star, 51 Pegasi b, was discovered in 1995, researchers have definitively catalogued around 800 exoplanets and identified another 18,000 candidates. And the list now seems to grow daily.

If that wasn’t amazing enough, researchers have now directly observed several exoplanets and even measured their atmospheric composition.

From ars technica:

The star system HR 8799 is a sort of Solar System on steroids: a beefier star, four possible planets that are much bigger than Jupiter, and signs of asteroids and cometary bodies, all spread over a bigger region. Additionally, the whole system is younger and hotter, making it one of only a few cases where astronomers can image the planets themselves. However, HR 8799 is very different from our Solar System, as astronomers are realizing thanks to two detailed studies released this week.

The first study was an overview of the four exoplanet candidates, covered by John Timmer. The second set of observations focused on one of the four planet candidates, HR 8799c. Quinn Konopacky, Travis Barman, Bruce Macintosh, and Christian Marois performed a detailed spectral analysis of the atmosphere of the possible exoplanet. They compared their findings to the known properties of a brown dwarf and concluded that they don’t match—it is indeed a young planet. Chemical differences between HR 8799c and its host star led the researchers to conclude the system likely formed in the same way the Solar System did.

The HR 8799 system was one of the first where direct imaging of the exoplanets was possible; in most cases, the evidence for a planet’s presence is indirect. (See the Ars overview of exoplanet science for more.) This serendipity is possible for two major reasons: the system is very young, and the planet candidates orbit far from their host star.

The young age means the bodies orbiting the system still retain heat from their formation and so are glowing in the infrared; older planets emit much less light. That makes it possible to image these planets at these wavelengths. (We mostly image planets in the Solar System using reflected sunlight, but that’s not a viable detection strategy at these distances). A large planet-star separation means that the star’s light doesn’t overwhelm the planets’ warm glow. Astronomers are also assisted by HR 8799’s relative closeness to us—it’s only about 130 light-years away.
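Two quick numbers help show why direct imaging works here: a young, hot planet glows brightest in the near-infrared, and at about 130 light-years a wide orbit still subtends an angle large telescopes can resolve. The sketch below uses assumed round figures — a roughly 1,000 K planet and an orbital separation of about 40 AU — neither of which is stated in the excerpt.

```python
# Rough numbers behind direct imaging of HR 8799's planets.
# Assumed inputs (not from the excerpt): planet temperature ~1000 K,
# planet-star separation ~40 AU, distance ~130 light-years.

WIEN_B = 2.898e-3        # Wien's displacement constant, m*K
LY_PER_PARSEC = 3.262    # light-years per parsec

t_planet = 1000.0                       # K, young self-luminous giant planet
peak_wavelength = WIEN_B / t_planet     # ~2.9e-6 m, i.e. near-infrared

distance_pc = 130.0 / LY_PER_PARSEC     # ~40 parsecs
separation_au = 40.0
# Small-angle definition of the parsec: theta[arcsec] = separation[AU] / distance[pc]
angle_arcsec = separation_au / distance_pc

print(f"thermal emission peaks near {peak_wavelength*1e6:.1f} micrometers")
print(f"apparent star-planet separation ~{angle_arcsec:.1f} arcseconds")
```

An arcsecond-scale separation and a near-infrared glow are exactly the regime where adaptive-optics instruments like those on Keck can pick a warm planet out from under its star.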

However, the brightness of the exoplanet candidates also obscures their identity. They are all much larger than Jupiter—each is more than 5 times Jupiter’s mass, and the largest could be 35 times greater. That, combined with their large infrared emission, could mean that they are not planets but brown dwarfs: star-like objects with insufficient mass to engage in hydrogen fusion. Since brown dwarfs can overlap in size and mass with the largest planets, we haven’t been certain that the objects observed in the HR 8799 system are planets.

For this reason, the two recent studies aimed at measuring the chemistry of these bodies using their spectral emissions. The Palomar study described yesterday provided a broad, big-picture view of the whole HR 8799 system. By contrast, the second study used one of the 10-meter Keck telescopes for a focused, in-depth view of one object: HR 8799c, the second-farthest out of the four.

The researchers measured relatively high levels of carbon monoxide (CO) and water (H2O, just in case you forgot the formula), which were present at levels well above the abundance measured in the spectrum of the host star. According to the researchers, this difference in chemical composition indicated that the planet likely formed via “core accretion”— the gradual, bottom-up accumulation of materials to make a planet—rather than a top-down fragmentation of the disk surrounding the newborn star. The original disk in this scenario would have contained a lot of ice fragments, which merged to make a world relatively high in water content.

In many respects, HR 8799c seemed to have properties between brown dwarfs and other exoplanets, but the chemical and gravitational analyses pushed the object more toward the planet side. In particular, the size and chemistry of HR 8799c placed its surface gravity lower than expected for a brown dwarf, especially when considered with the estimated age of the star system. While this analysis says nothing about whether the other bodies in the system are planets, it does provide further hints about the way the system formed.

One final surprise was the lack of methane (CH4) in HR 8799c’s atmosphere. Methane is a chemical component present in all the Jupiter-like planets in our Solar System. The authors argued that this could be due to vigorous mixing of the atmosphere, which is expected because the exoplanet has higher temperatures and pressures than seen on Jupiter or Neptune. This mixing could enable reactions that limit methane formation. Since the HR 8799 system is much younger than the Solar System—roughly 30 million years compared with 4.5 billion years—it’s uncertain how much this chemical balance may change over time.

Read the entire article after the jump.

Image: One of the discovery images of the system obtained at the Keck II telescope using the adaptive optics system and NIRC2 Near-Infrared Imager. The rectangle indicates the field-of-view of the OSIRIS instrument for planet C. Courtesy of NRC-HIA, C. Marois and Keck Observatory.

Your Tax Dollars at Work

Naysayers would say that government, and hence taxpayer dollars, should not be used to fund science initiatives. After all, academia and business seem to do a fairly good job of discovery and innovation without a helping hand pilfering from the public purse. And, money aside, government-funded projects undoubtedly raise a number of thorny questions: On what should our hard-earned income tax be spent? Who decides on the priorities? How is progress to be measured? Do taxpayers get any benefit in return? After all, many of us cringe at the thought of an unelected bureaucrat, or a committee of them, spending millions if not billions of our dollars. Why not just spend the money on fixing our national potholes?

But despite our many human flaws and foibles, we are at heart explorers. We seek to know more about ourselves, our world and our universe. Those who seek answers to fundamental questions of consciousness, aging, and life are pioneers in this quest to expand our domain of understanding and knowledge. These answers increasingly aid our daily lives through continuous improvement in medical science and innovation in materials science. And our collective lives are enriched as we learn more about the how and the why of our own existence and that of our universe.

So, some of our dollars have gone towards big science at the Large Hadron Collider (LHC) beneath the Franco-Swiss border looking for the constituents of matter, the wild laser experiment at the National Ignition Facility designed to enable controlled fusion reactions, and the Curiosity rover exploring Mars. Yet more of our dollars have gone to research and development into enhanced radar, graphene for next-generation circuitry, online courseware, stress in coral reefs, sensors to aid the elderly, ultra-high-speed internet for emergency response, erosion mitigation, self-cleaning surfaces, and flexible solar panels.

Now comes word that the U.S. government wants to spend $3 billion — over 10 years — on building a comprehensive map of the human brain. The media has dubbed this the “connectome”, following similar efforts to map our human DNA, the genome. While this is the type of big science that may yield tangible results and benefits only decades from now, it ignites the passion and curiosity of our children to continue to seek and to find answers. So, this is good news for science and for the explorer who lurks within us all.

From ars technica:

Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.

But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University’s John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn’t speak for the administration’s plans, he did describe the outlines of what’s being proposed and why, and he provided a glimpse into what he sees as the project’s benefits.

What are we talking about doing?

We’ve already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. “There’s a big gap in our knowledge because we don’t know the intermediate scale,” Donoghue told Ars. The goal, he said, “is not a wiring diagram—it’s a functional map, an understanding.”

This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donoghue answered that question with one of his own: “At what point does the emergent property come out?” Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don’t really know what that level is. It’s certainly “above 10,” according to Donoghue. “I don’t think we need to study every neuron,” he said. Beyond that, part of the project will focus on what Donoghue called “the big question”—“what emerges in the brain at these various scales?”

While he may have called emergence “the big question,” it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don’t always understand the code well enough to understand the meaning of our recordings. When I asked Donoghue about this, he said, “This is it! One of the big goals is cracking the code.”

Donoghue was enthused about the idea that the different aspects of the project would feed into each other. “They go hand in hand,” he said. “As we gain more functional information, it’ll inform the connectional map and vice versa.” In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.

As we build on these feedbacks to understand more complex examples of the brain’s emergent behaviors, the big picture will emerge. Donoghue hoped that the work will ultimately provide “a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition.”

How will we actually do this?

Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donoghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We’ve now reached the point where, thanks to advances in nanotechnology, we’re able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donoghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell’s activity (perhaps stored in DNA itself) for later retrieval.

Right now, in Donoghue’s view, the problem is that the people developing these technologies and the neuroscience community aren’t talking enough. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.

Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we’ll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.
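The kind of pattern recognition described above can be illustrated with a minimal sketch: given firing-rate time series for many recorded neurons, compute the pairwise correlation matrix and flag strongly co-active pairs. The data here is synthetic and purely for illustration.

```python
import numpy as np

# Toy illustration of finding correlated activity across a recorded
# population: synthetic firing rates for 6 "neurons", two of which share
# a common driving signal and should stand out in the correlation matrix.

rng = np.random.default_rng(0)
timepoints, n_neurons = 1000, 6

rates = rng.poisson(lam=5.0, size=(timepoints, n_neurons)).astype(float)
shared_drive = rng.normal(size=timepoints)
rates[:, 2] += 3.0 * shared_drive      # neurons 2 and 4 are co-modulated
rates[:, 4] += 3.0 * shared_drive

corr = np.corrcoef(rates.T)            # n_neurons x n_neurons matrix

# Report pairs whose activity is strongly correlated.
for i in range(n_neurons):
    for j in range(i + 1, n_neurons):
        if abs(corr[i, j]) > 0.5:
            print(f"neurons {i} and {j}: correlation {corr[i, j]:.2f}")
```

At the scale Donoghue describes, the same basic idea has to run over recordings from thousands of electrodes and optical channels at once, which is where the heavy computational component comes in.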

Read the entire article following the jump.

Image: White matter fiber architecture of the human brain. Courtesy of the Human Connectome Project.

Pseudo-Science in Missouri and 2+2=5

Hot on the heels of recent successes by the Texas State Board of Education (SBOE) in revising history and science curricula, legislators in Missouri are planning to redefine commonly accepted scientific principles. Much like the situation in Texas, a bill in the Missouri House would mandate that intelligent design be taught alongside evolution, in equal measure, in all the state’s schools. But, in a bid to take the lead in reversing thousands of years of scientific progress, Missouri also plans to redefine the actual scientific framework. So, if you can’t make “intelligent design” fit the principles of accepted science, then just change the principles themselves — first up, change the meanings of the terms “scientific hypothesis” and “scientific theory”.

We suspect that a couple of years from now, in Missouri, 2+2 will be redefined to equal 5, and that logic, deductive reasoning and experimentation will be replaced with mushy green peas.

From ars technica:

Each year, state legislatures play host to a variety of bills that would interfere with science education. Most of these are variations on a boilerplate intended to get supplementary materials into classrooms criticizing evolution and climate change (or to protect teachers who do). They generally don’t mention creationism, but the clear intent is to sneak religious content into the science classrooms, as evidenced by previous bills introduced by the same lawmakers. Most of them die in the legislature (although the opponents of evolution have seen two successes).

The efforts are common enough that we don’t generally report on them. But, every now and then, a bill comes along that veers off this script. And late last month, the Missouri House started considering one that deviates in staggering ways. Instead of being quiet about its intent, it redefines science, provides a clearer definition of intelligent design than any of the idea’s advocates ever have, and mandates equal treatment of the two. In the process, it mangles things so badly that teachers would be prohibited from discussing Mendel’s Laws.

Although even the Wikipedia entry for scientific theory includes definitions provided by the world’s most prestigious organizations of scientists, the bill’s sponsor Rick Brattin has seen fit to invent his own definition. And it’s a head-scratcher: “‘Scientific theory,’ an inferred explanation of incompletely understood phenomena about the physical universe based on limited knowledge, whose components are data, logic, and faith-based philosophy.” The faith or philosophy involved remain unspecified.

Brattin also mentions philosophy when he redefines hypothesis as, “a scientific theory reflecting a minority of scientific opinion which may lack acceptance because it is a new idea, contains faulty logic, lacks supporting data, has significant amounts of conflicting data, or is philosophically unpopular.” The reason for that becomes obvious when he turns to intelligent design, which he defines as a hypothesis. Presumably, he thinks it’s only a hypothesis because it’s philosophically unpopular, since his bill would ensure it ends up in the classrooms.

Intelligent design is roughly the concept that life is so complex that it requires a designer, but even its most prominent advocates have often been a bit wary about defining its arguments all that precisely. Not so with Brattin—he lists 11 concepts that are part of ID. Some of these are old-fashioned creationist claims, like the suggestion that mutations lead to “species degradation” and a lack of transitional fossils. But it also has some distinctive twists like the claim that common features, usually used to infer evolutionary relatedness, are actually a sign of parts re-use by a designer.

Eventually, the bill defines “standard science” as “knowledge disclosed in a truthful and objective manner and the physical universe without any preconceived philosophical demands concerning origin or destiny.” It then demands that all science taught in Missouri classrooms be standard science. But there are some problems with this that become apparent immediately. The bill demands anything taught as scientific law have “no known exceptions.” That would rule out teaching Mendel’s laws, which have a huge variety of exceptions, such as when two genes are linked together on the same chromosome.

Read the entire article following the jump.

Image: Seal of Missouri. Courtesy of Wikipedia.

The Death of Scientific Genius

There is a certain school of thought that asserts that scientific genius is a thing of the past. After all, we haven’t seen the recent emergence of pivotal talents such as Galileo, Newton, Darwin or Einstein. Is it possible that fundamentally new ways of looking at our world, a new mathematics or a new physics, are no longer possible?

In a recent essay in Nature, Dean Keith Simonton, professor of psychology at UC Davis, argues that such fundamental and singular originality is a thing of the past.

From ars technica:

Einstein, Darwin, Galileo, Mendeleev: the names of the great scientific minds throughout history inspire awe in those of us who love science. However, according to Dean Keith Simonton, a psychology professor at UC Davis, the era of the scientific genius may be over. In a comment paper published in Nature last week, he explains why.

The “scientific genius” Simonton refers to is a particular type of scientist; their contributions “are not just extensions of already-established, domain-specific expertise.” Instead, “the scientific genius conceives of a novel expertise.” Simonton uses words like “groundbreaking” and “overthrow” to illustrate the work of these individuals, explaining that they each contributed to science in one of two major ways: either by founding an entirely new field or by revolutionizing an already-existing discipline.

Today, according to Simonton, there just isn’t room to create new disciplines or overthrow the old ones. “It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline,” he writes. Furthermore, most scientific fields aren’t in the type of crisis that would enable paradigm shifts, according to Thomas Kuhn’s classic view of scientific revolutions. Simonton argues that instead of finding big new ideas, scientists currently work on the details in increasingly specialized and precise ways.

And to some extent, this argument is demonstrably correct. Science is becoming more and more specialized. The largest scientific fields are currently being split into smaller sub-disciplines: microbiology, astrophysics, neuroscience, and paleogeography, to name a few. Furthermore, researchers have more tools and knowledge to home in on increasingly precise issues and questions than they did a century—or even a decade—ago.

But other aspects of Simonton’s argument are a matter of opinion. To me, separating scientists who “build on what’s already known” from those who “alter the foundations of knowledge” is a false dichotomy. Not only is it possible to do both, but it’s impossible to establish—or even make a novel contribution to—a scientific field without piggybacking on the work of others to some extent. After all, it’s really hard to solve the problems that require new solutions if other people haven’t done the work to identify them. Plate tectonics, for example, was built on observations that were already widely known.

And scientists aren’t done altering the foundations of knowledge, either. In science, as in many other walks of life, we don’t yet know everything we don’t know. Twenty years ago, exoplanets were hypothetical. Dark energy, as far as we knew, didn’t exist.

Simonton points out that “cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors” rather than a single great mind. This is almost certainly true, especially in genomics and physics. However, it’s this collaboration and cooperation between scientists, and between fields, that has helped science progress further than we ever thought possible. While Simonton uses “hybrid” fields like astrophysics and biochemistry to illustrate his argument that there is no room for completely new scientific disciplines, I see these fields as having room for growth. Here, diverse sets of ideas and methodologies can mix and lead to innovation.

Simonton is quick to assert that the end of scientific genius doesn’t mean science is at a standstill or that scientists are no longer smart. In fact, he argues the opposite: scientists are probably more intelligent now, since they must master more theoretical work, more complicated methods, and more diverse disciplines. In fact, Simonton himself would like to be wrong; “I hope that my thesis is incorrect. I would hate to think that genius in science has become extinct,” he writes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Einstein 1921 by F. Schmutzer. Courtesy of Wikipedia.[end-div]

Someone Has to Stand Up to Experts

[tube]pzrUt9CHtpY[/tube]

“Someone has to stand up to experts!” This is what Don McLeroy would have you believe about scientists. We all lapse into senseless rants once in a while, so we should give McLeroy the benefit of the doubt – perhaps he had slept poorly the night before this impassioned, irrational plea. On the other hand, when you learn that McLeroy made this statement as chairman of the Texas State Board of Education (SBOE) in 2010, you may wish to think again, especially if you have children in the school system of the Lone Star State.

McLeroy and his fellow young-Earth creationists, including Cynthia Dunbar, are the subject of a documentary out this week titled The Revisionaries. It looks at the messy yet successful efforts of the SBOE to revise the curriculum standards and the contents of science and social studies textbooks in their favor. Among a list of over 100 significant amendments, the non-experts did the following: marginalized Thomas Jefferson for being a secular humanist; watered down the historically accepted rationale for the separation of church and state; stressed the positive side of the McCarthyite witch hunts; removed references to Hispanics having fought against Santa Anna in the Battle of the Alamo; added the National Rifle Association as a key element in the recent conservative resurgence; and, of course, re-opened the entire debate over the validity of evolutionary theory.

While McLeroy and some of his fellow non-experts lost re-election bids, their influence on young minds is likely to be far-reaching — textbooks in Texas are next revised in 2020, and because of Texas’ market power many publishers across the nation tend to follow Texas standards.

[div class=attrib]Video clip courtesy of The Revisionaries, PBS.[end-div]

Your City as an Information Warehouse

Big data keeps getting bigger and computers keep getting faster. Some theorists believe that the universe is a giant computer or a computer simulation; that principles of information science govern the cosmos. While this notion is one of the most recent radical ideas to explain our existence, there is no doubt that information is our future. Data surrounds us, we are becoming data points, and our cities are becoming information-rich databases.

[div class=attrib]From the Economist:[end-div]

In 1995 George Gilder, an American writer, declared that “cities are leftover baggage from the industrial era.” Electronic communications would become so easy and universal that people and businesses would have no need to be near one another. Humanity, Mr Gilder thought, was “headed for the death of cities”.

It hasn’t turned out that way. People are still flocking to cities, especially in developing countries. Cisco’s Mr Elfrink reckons that in the next decade 100 cities, mainly in Asia, will reach a population of more than 1m. In rich countries, to be sure, some cities are sad shadows of their old selves (Detroit, New Orleans), but plenty are thriving. In Silicon Valley and the newer tech hubs what Edward Glaeser, a Harvard economist, calls “the urban ability to create collaborative brilliance” is alive and well.

Cheap and easy electronic communication has probably helped rather than hindered this. First, connectivity is usually better in cities than in the countryside, because it is more lucrative to build telecoms networks for dense populations than for sparse ones. Second, electronic chatter may reinforce rather than replace the face-to-face kind. In his 2011 book, “Triumph of the City”, Mr Glaeser theorises that this may be an example of what economists call “Jevons’s paradox”. In the 19th century the invention of more efficient steam engines boosted rather than cut the consumption of coal, because they made energy cheaper across the board. In the same way, cheap electronic communication may have made modern economies more “relationship-intensive”, requiring more contact of all kinds.

Recent research by Carlo Ratti, director of the SENSEable City Laboratory at the Massachusetts Institute of Technology, and colleagues, suggests there is something to this. The study, based on the geographical pattern of 1m mobile-phone calls in Portugal, found that calls between phones far apart (a first contact, perhaps) are often followed by a flurry within a small area (just before a meeting).

Data deluge

A third factor is becoming increasingly important: the production of huge quantities of data by connected devices, including smartphones. These are densely concentrated in cities, because that is where the people, machines, buildings and infrastructures that carry and contain them are packed together. They are turning cities into vast data factories. “That kind of merger between physical and digital environments presents an opportunity for us to think about the city almost like a computer in the open air,” says Assaf Biderman of the SENSEable lab. As those data are collected and analysed, and the results are recycled into urban life, they may turn cities into even more productive and attractive places.

Some of these “open-air computers” are being designed from scratch, most of them in Asia. At Songdo, a South Korean city built on reclaimed land, Cisco has fitted every home and business with video screens and supplied clever systems to manage transport and the use of energy and water. But most cities are stuck with the infrastructure they have, at least in the short term. Exploiting the data they generate gives them a chance to upgrade it. Potholes in Boston, for instance, are reported automatically if the drivers of the cars that hit them have an app called Street Bump on their smartphones. And, particularly in poorer countries, places without a well-planned infrastructure have the chance of a leap forward. Researchers from the SENSEable lab have been working with informal waste-collecting co-operatives in São Paulo whose members sift the city’s rubbish for things to sell or recycle. By attaching tags to the trash, the researchers have been able to help the co-operatives work out the best routes through the city so they can raise more money and save time and expense.

Exploiting data may also mean fewer traffic jams. A few years ago Alexandre Bayen, of the University of California, Berkeley, and his colleagues ran a project (with Nokia, then the leader of the mobile-phone world) to collect signals from participating drivers’ smartphones, showing where the busiest roads were, and feed the information back to the phones, with congested routes glowing red. These days this feature is common on smartphones. Mr Bayen’s group and IBM Research are now moving on to controlling traffic and thus easing jams rather than just telling drivers about them. Within the next three years the team is due to build a prototype traffic-management system for California’s Department of Transportation.

Cleverer cars should help, too, by communicating with each other and warning drivers of unexpected changes in road conditions. Eventually they may not even have drivers at all. And thanks to all those data they may be cleaner, too. At the Fraunhofer FOKUS Institute in Berlin, Ilja Radusch and his colleagues show how hybrid cars can be automatically instructed to switch from petrol to electric power if local air quality is poor, say, or if they are going past a school.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Images of cities courtesy of Google search.[end-div]

Politics Driven by Science

Imagine a nation, or even a world, where political decisions and policy are driven by science rather than emotion. Well, small experiments are underway, so this may not be as far off as many would believe, or even dare to hope.

[div class=attrib]From the New Scientist:[end-div]

In your wildest dreams, could you imagine a government that builds its policies on carefully gathered scientific evidence? One that publishes the rationale behind its decisions, complete with data, analysis and supporting arguments? Well, dream no longer: that’s where the UK is heading.

It has been a long time coming, according to Chris Wormald, permanent secretary at the Department for Education. The civil service is not short of clever people, he points out, and there is no lack of desire to use evidence properly. More than 20 years as a serving politician has convinced him that they are as keen as anyone to create effective policies. “I’ve never met a minister who didn’t want to know what worked,” he says. What has changed now is that informed policy-making is at last becoming a practical possibility.

That is largely thanks to the abundance of accessible data and the ease with which new, relevant data can be created. This has supported a desire to move away from hunch-based politics.

Last week, for instance, Rebecca Endean, chief scientific advisor and director of analytical services at the Ministry of Justice, announced that the UK government is planning to open up its data for analysis by academics, accelerating the potential for use in policy planning.

At the same meeting, hosted by innovation-promoting charity NESTA, Wormald announced a plan to create teaching schools based on the model of teaching hospitals. In education, he said, the biggest single problem is a culture that often relies on anecdotal experience rather than systematically reported data from practitioners, as happens in medicine. “We want to move teacher training and research and practice much more onto the health model,” Wormald said.

Test, learn, adapt

In June last year the Cabinet Office published a paper called “Test, Learn, Adapt: Developing public policy with randomised controlled trials”. One of its authors, the doctor and campaigning health journalist Ben Goldacre, has also been working with the Department for Education to compile a comparison of education and health research practices, to be published in the BMJ.

In education, the evidence-based revolution has already begun. A charity called the Education Endowment Foundation is spending £1.4 million on a randomised controlled trial of reading programmes in 50 British schools.
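
For readers unfamiliar with the mechanics, here is a minimal sketch in Python — with entirely invented numbers, not the foundation’s data — of the kind of randomised controlled trial described above: pupils are randomly split between the programme and business as usual, and the gap in average outcomes estimates the programme’s effect.

import random
from statistics import mean

random.seed(42)

# Invented baseline reading scores for 500 pupils.
scores = [random.gauss(100, 15) for _ in range(500)]
random.shuffle(scores)

# Random assignment: half get the reading programme, half carry on as usual.
programme, control = scores[:250], scores[250:]

# Assume, purely for this sketch, that the programme adds 3 points on average.
programme = [s + 3 for s in programme]

print(round(mean(programme) - mean(control), 1))   # close to 3, up to sampling noise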

There are reservations though. The Ministry of Justice is more circumspect about the role of such trials. Where it has carried out randomised controlled trials, they often failed to change policy, or even irked politicians with conclusions that were obvious. “It is not a panacea,” Endean says.

Power of prediction

The biggest need is perhaps foresight. Ministers often need instant answers, and sometimes the data are simply not available. Bang goes any hope of evidence-based policy.

“The timescales of policy-making and evidence-gathering don’t match,” says Paul Wiles, a criminologist at the University of Oxford and a former chief scientific adviser to the Home Office. Wiles believes that to get round this we need to predict the issues that the government is likely to face over the next decade. “We can probably come up with 90 per cent of them now,” he says.

Crucial to the process will be convincing the public about the value and use of data, so that everyone is on board. This is not going to be easy. When the government launched its Administrative Data Taskforce, which set out to look at data in all departments and open it up so that it could be used for evidence-based policy, it attracted minimal media interest.

The taskforce’s remit includes finding ways to increase trust in data security. Then there is the problem of whether different departments are legally allowed to exchange data. There are other practical issues: many departments format data in incompatible ways. “At the moment it’s incredibly difficult,” says Jonathan Breckon, manager of the Alliance for Useful Evidence, a collaboration between NESTA and the Economic and Social Research Council.

[div class=attrib]Read the entire article after the jump.[end-div]

Best Science Stories of 2012

As the year comes to a close it’s fascinating to look back at some of the most breathtaking science of 2012.

The image above is of Saturn’s moon Enceladus. Evidence from the Cassini spacecraft, which took this remarkable image, suggests a deep, salty ocean beneath the frozen surface that periodically spews icy particles out into space. Many scientists believe that Enceladus is the best place to look for signs of life beyond Earth within our Solar System.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Cassini Imaging Team/SSI/JPL/ESA/NASA.[end-div]

Lead a Congressional Committee on Science: No Grasp of Science Required

[div class=attrib]From ars technica:[end-div]

[div class=attrib]Image: The House Committee on Space, Science, and Technology hears testimony on climate change in March 2011.[end-div]

If you had the chance to ask questions of one of the world’s leading climatologists, would you select a set of topics that would be at home in the heated discussions that take place in the Ars forums? If you watch the video below, you’d find that’s precisely what Dana Rohrabacher (R-CA) chose to do when Penn State’s Richard Alley (a fellow Republican) was called before the House Science Committee, which has already had issues with its grasp of science. Rohrabacher took Alley on a tour of some of the least convincing arguments about climate change, all trying to convince him changes in the Sun were to blame for a changing climate. (Alley, for his part, noted that we have actually measured the Sun, and we’ve seen no such changes.)

Now, if he has his way, Rohrabacher will be chairing the committee once the next Congress is seated. Even if he doesn’t get the job, the alternatives aren’t much better.

There has been some good news for the Science Committee to come out of the last election. Representative Todd Akin (R-MO), whose lack of understanding of biology was made clear by his comments on “legitimate rape,” had to give up his seat to run for the Senate, a race he lost. Meanwhile, Paul Broun (R-GA), who said that evolution and cosmology are “lies straight from the pit of Hell,” won reelection, but he received a bit of a warning in the process: dead English naturalist Charles Darwin, who is ineligible to serve in Congress, managed to draw thousands of write-in votes. And, thanks to limits on chairmanships, Ralph Hall (R-TX), who accused climate scientists of being in it for the money (if so, they’re doing it wrong), will have to step down.

In addition to Rohrabacher, the other Representatives that are vying to lead the Committee are Wisconsin’s James Sensenbrenner and Texas’ Lamar Smith. They all suggest that they will focus on topics like NASA’s budget and the Department of Energy’s plans for future energy tech. But all of them have been embroiled in the controversy over climate change in the past.

In an interview with Science Insider about his candidacy, Rohrabacher engaged in a bit of triumphalism and suggested that his beliefs were winning out. “There were a lot of scientists who were just going along with the flow on the idea that mankind was causing a change in the world’s climate,” he said. “I think that after 10 years of debate, we can show that there are hundreds if not thousands of scientists who have come over to being skeptics, and I don’t know anyone [who was a skeptic] who became a believer in global warming.”

[div class=attrib]Read the entire article following the jump.[end-div]

The Half Life of Facts

There is no doubting the ever expanding reach of science and the acceleration of scientific discovery. Yet the accumulation, and for that matter the acceleration in the accumulation, of ever more knowledge does come with a price — many historical facts that we learned as kids are no longer true. This is especially important in areas such as medical research where new discoveries are constantly making obsolete our previous notions of disease and treatment.

Author Samuel Arbesman tells us why facts should have an expiration date in his new book, The Half-Life of Facts. A review from Reason follows.

[div class=attrib]From Reason:[end-div]

Dinosaurs were cold-blooded. Vast increases in the money supply produce inflation. Increased K-12 spending and lower pupil/teacher ratios boost public school student outcomes. Most of the DNA in the human genome is junk. Saccharin causes cancer and a high fiber diet prevents it. Stars cannot be bigger than 150 solar masses. And by the way, what are the ten most populous cities in the United States?

In the past half century, all of the foregoing facts have turned out to be wrong (except perhaps the one about inflation rates). We’ll revisit the ten biggest cities question below. In the modern world facts change all of the time, according to Samuel Arbesman, author of The Half-Life of Facts: Why Everything We Know Has an Expiration Date.

Arbesman, a senior scholar at the Kauffman Foundation and an expert in scientometrics, looks at how facts are made and remade in the modern world. And since fact-making is speeding up, he worries that most of us don’t keep up to date and base our decisions on facts we dimly remember from school and university classes that turn out to be wrong.

The field of scientometrics – the science of measuring and analyzing science – took off in 1947 when mathematician Derek J. de Solla Price was asked to store a complete set of the Philosophical Transactions of the Royal Society temporarily in his house. He stacked them in order and he noticed that the height of the stacks fit an exponential curve. Price started to analyze all sorts of other kinds of scientific data and concluded in 1960 that scientific knowledge had been growing steadily at a rate of 4.7 percent annually since the 17th century. The upshot was that scientific data was doubling every 15 years.

In 1965, Price exuberantly observed, “All crude measures, however arrived at, show to a first approximation that science increases exponentially, at a compound interest of about 7 percent  per annum, thus doubling in size every 10–15 years, growing by a factor of 10 every half century, and by something like a factor of a million in the 300 years which separate us from the seventeenth-century invention of the scientific paper when the process began.” A 2010 study in the journal Scientometrics looked at data between 1907 and 2007 and concluded that so far the “overall growth rate for science still has been at least 4.7 percent per year.”
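
Price’s figures are easy to sanity-check; here is a quick arithmetic sketch, assuming steady compound growth as he did:

import math

# Steady compound growth at rate r doubles in log(2)/log(1+r) years
# and grows tenfold in log(10)/log(1+r) years.
for rate in (0.047, 0.07):
    doubling = math.log(2) / math.log(1 + rate)
    tenfold = math.log(10) / math.log(1 + rate)
    print(f"{rate:.1%} growth: doubles in {doubling:.1f} years, tenfold in {tenfold:.1f} years")
# 4.7% gives roughly 15 years to double and about 50 years for a tenfold rise;
# 7% gives just over 10 years to double, matching the "10-15 years" range quoted above.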

Since scientific knowledge is still growing by a factor of ten every 50 years, it should not be surprising that lots of facts people learned in school and universities have been overturned and are now out of date.  But at what rate do former facts disappear? Arbesman applies the concept of half-life, the time required for half the atoms of a given amount of a radioactive substance to disintegrate, to the dissolution of facts. For example, the half-life of the radioactive isotope strontium-90 is just over 29 years. Applying the concept of half-life to facts, Arbesman cites research that looked into the decay in the truth of clinical knowledge about cirrhosis and hepatitis. “The half-life of truth was 45 years,” reported the researchers.

In other words, half of what physicians thought they knew about liver diseases was wrong or obsolete 45 years later. As interesting and persuasive as this example is, Arbesman’s book would have been strengthened by more instances drawn from the scientific literature.
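
The half-life analogy maps directly onto the standard exponential-decay formula; here is a small sketch using the 45-year figure cited above:

# Fraction of a body of knowledge still standing after t years, given a 45-year
# "half-life of truth" (the cirrhosis/hepatitis figure cited above).
HALF_LIFE_YEARS = 45.0

def surviving_fraction(t_years, half_life=HALF_LIFE_YEARS):
    return 0.5 ** (t_years / half_life)

for t in (15, 45, 90):
    print(f"after {t} years: {surviving_fraction(t):.0%} still holds")
# after 15 years: ~79% still holds; after 45 years: 50%; after 90 years: 25%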

Facts are being manufactured all of the time, and, as Arbesman shows, many of them turn out to be wrong. Checking each one is how the scientific process is supposed to work, i.e., experimental results need to be replicated by other researchers. How many of the findings in 845,175 articles published in 2009 and recorded in PubMed, the free online medical database, were actually replicated? Not all that many. In 2011, a disheartening study in Nature reported that a team of researchers over ten years was able to reproduce the results of only six out of 53 landmark papers in preclinical cancer research.

[div class=attrib]Read the entire article after the jump.[end-div]

Old Concepts Die Hard

Regardless of how flawed old scientific concepts may be, researchers have found that it is remarkably difficult for people to give them up and accept sound, new reasoning. Even scientists are creatures of habit.

[div class=attrib]From Scientific American:[end-div]

In one sense, science educators have it easy. The things they describe are so intrinsically odd and interesting — invisible fields, molecular machines, principles explaining the unity of life and origins of the cosmos — that much of the pedagogical attention-getting is built right in.  Where they have it tough, though, is in having to combat an especially resilient form of higher ed’s nemesis: the aptly named (if irredeemably clichéd) ‘preconceived idea.’ Worse than simple ignorance, naïve ideas about science lead people to make bad decisions with confidence. And in a world where many high-stakes issues fundamentally boil down to science, this is clearly a problem.

Naturally, the solution to the problem lies in good schooling — emptying minds of their youthful hunches and intuitions about how the world works, and repopulating them with sound scientific principles that have been repeatedly tested and verified. Wipe out the old operating system, and install the new. According to a recent paper by Andrew Shtulman and Joshua Valcarcel, however, we may not be able to replace old ideas with new ones so cleanly. Although science as a field discards theories that are wrong or lacking, Shtulman and Valcarcel’s work suggests that individuals —even scientifically literate ones — tend to hang on to their early, unschooled, and often wrong theories about the natural world. Even long after we learn that these intuitions have no scientific support, they can still subtly persist and influence our thought process. Like old habits, old concepts seem to die hard.

Testing for the persistence of old concepts can’t be done directly. Instead, one has to set up a situation in which old concepts, if present, measurably interfere with mental performance. To do this, Shtulman and Valcarcel designed a task that tested how quickly and accurately subjects verified short scientific statements (for example: “air is composed of matter.”). In a clever twist, the authors interleaved two kinds of statements — “consistent” ones that had the same truth-value under a naive theory and a proper scientific theory, and “inconsistent” ones. For example, the statement “air is composed of matter”  is inconsistent: it’s false under a naive theory (air just seems like empty space, right?), but is scientifically true. By contrast, the statement “people turn food into energy” is consistent: anyone who’s ever eaten a meal knows it’s true, and science affirms this by filling in the details about digestion, respiration and metabolism.

Shtulman and Valcarcel tested 150 college students on a battery of 200 such statements that included an equal and random mix of consistent and inconsistent statements from several domains, including astronomy, evolution, physiology, genetics, waves, and others. The scientists measured participants’ response speed and accuracy, and looked for systematic differences in how consistent vs. inconsistent statements were evaluated.

If scientific concepts, once learned, are fully internalized and don’t conflict with our earlier naive concepts, one would expect consistent and inconsistent statements to be processed similarly. On the other hand, if naive concepts are never fully supplanted, and are quietly threaded into our thought process, it should take longer to evaluate inconsistent statements. In other words, it should take a bit of extra mental work (and time) to go against the grain of a naive theory we once held.

This is exactly what Shtulman and Valcarcel found. While there was some variability between the different domains tested, inconsistent statements took almost a half second longer to verify, on average. Granted, there’s a significant wrinkle in interpreting this result. Specifically, it may simply be the case that scientific concepts that conflict with naive intuition are simply learned more tenuously than concepts that are consistent with our intuition. Under this view, differences in response times aren’t necessarily evidence of ongoing inner conflict between old and new concepts in our brains — it’s just a matter of some concepts being more accessible than others, depending on how well they were learned.
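
Here is a minimal sketch of the comparison at the heart of the study, using invented response times rather than the authors’ data:

from statistics import mean

# Hypothetical verification times in seconds, invented purely for illustration:
# "consistent" statements agree with naive intuition, "inconsistent" ones do not.
consistent = [1.21, 1.05, 1.33, 1.18, 1.27]
inconsistent = [1.64, 1.71, 1.55, 1.80, 1.60]

print(round(mean(inconsistent) - mean(consistent), 2))
# A gap on the order of half a second would mirror the pattern the study reports.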

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of New Scientist.[end-div]

Curiosity in Flight

NASA pulled off another tremendous and daring feat of engineering when it successfully landed the Mars Science Laboratory (MSL) on the surface of Mars on August 5, 2012, at 10:32 PM Pacific Time.

The centerpiece of the MSL mission is the Curiosity rover, a 2,000-pound, car-size robot. Not only did NASA land Curiosity a mere 1 second behind schedule following a journey of over 576 million kilometers (358 million miles) lasting around 8 months, it went one better. NASA had one of its Mars orbiters — the Mars Reconnaissance Orbiter — snap an image of MSL from around 300 miles away as it descended through the Martian atmosphere, with its supersonic parachute unfurled.

Another historic day for science, engineering and exploration.

[div class=attrib]From NASA / JPL:[end-div]

NASA’s Curiosity rover and its parachute were spotted by NASA’s Mars Reconnaissance Orbiter as Curiosity descended to the surface on Aug. 5 PDT (Aug. 6 EDT). The High-Resolution Imaging Science Experiment (HiRISE) camera captured this image of Curiosity while the orbiter was listening to transmissions from the rover. Curiosity and its parachute are in the center of the white box; the inset image is a cutout of the rover stretched to avoid saturation. The rover is descending toward the etched plains just north of the sand dunes that fringe “Mt. Sharp.” From the perspective of the orbiter, the parachute and Curiosity are flying at an angle relative to the surface, so the landing site does not appear directly below the rover.

The parachute appears fully inflated and performing perfectly. Details in the parachute, such as the band gap at the edges and the central hole, are clearly seen. The cords connecting the parachute to the back shell cannot be seen, although they were seen in the image of NASA’s Phoenix lander descending, perhaps due to the difference in lighting angles. The bright spot on the back shell containing Curiosity might be a specular reflection off of a shiny area. Curiosity was released from the back shell sometime after this image was acquired.

This view is one product from an observation made by HiRISE targeted to the expected location of Curiosity about one minute prior to landing. It was captured in HiRISE CCD RED1, near the eastern edge of the swath width (there is a RED0 at the very edge). This means that the rover was a bit further east or downrange than predicted.

[div class=attrib]Follow the mission after the jump.[end-div]

[div class=attrib]Image courtesy of NASA/JPL-Caltech/Univ. of Arizona.[end-div]

National Education Rankings: C-

One would believe that the most affluent and open country on the planet would have one of the best, if not the best, education systems. Yet, the United States of America distinguishes itself by being thoroughly mediocre in a ranking of developed nations in science, mathematics and reading. How can we make amends for our children?

[div class=attrib]From Slate:[end-div]

Take the 2009 PISA test, which assessed the knowledge of students from 65 countries and economies—34 of which are members of the development organization the OECD, including the United States—in math, science, and reading. Of the OECD countries, the United States came in 17th place in science literacy; of all countries and economies surveyed, it came in 23rd place. The U.S. score of 502 practically matched the OECD average of 501. That puts us firmly in the middle. Where we don’t want to be.

What do the leading countries do differently? To find out, Slate asked science teachers from five countries that are among the world’s best in science education—Finland, Singapore, South Korea, New Zealand, and Canada—how they approach their subject and the classroom. Their recommendations: Keep students engaged and make the science seem relevant.

Finland: “To Make Students Enjoy Chemistry Is Hard Work”

Finland was first among the 34 OECD countries in the 2009 PISA science rankings and second—behind mainland China—among all 65 nations and economies that took the test. Ari Myllyviita teaches chemistry and works with future science educators at the Viikki Teacher Training School of Helsinki University.

Finland’s National Core Curriculum is premised on the idea “that learning is a result of a student’s active and focused actions aimed to process and interpret received information in interaction with other students, teachers and the environment and on the basis of his or her existing knowledge structures.”

My conception of learning rests strongly on this citation from our curriculum. My aim is to support knowledge-building, socioculturally: to create socially supported activity in a student’s zone of proximal development (the area where a student needs some support to achieve the next level of understanding or skill). The student’s previous knowledge is the starting point, and then the learning is bound to the activity during lessons—experiments, simulations, and observing phenomena.

The National Core Curriculum also states, “The purpose of instruction in chemistry is to support development of students’ scientific thinking and modern worldview.” Our teaching is based on examination and observations of substances and chemical phenomena, their structures and properties, and reactions between substances. Through experiments and theoretical models, students are taught to understand everyday life and nature. In my classroom, I use discussion, lectures, demonstrations, and experimental work—quite often based on group work. Between lessons, I use social media and other information communication technologies to stay in touch with students.

In addition to the National Core Curriculum, my school has its own. They have the same bases, but our own curriculum is more concrete. Based on these, I write my course and lesson plans. Because of different learning styles, I use different kinds of approaches, sometimes theoretical and sometimes experimental. Always there are new concepts and perhaps new models to explain the phenomena or results.

To make students enjoy learning chemistry is hard work. I think that as a teacher, you have to love your subject and enjoy teaching even when there are sometimes students who don’t pay attention to you. But I get satisfaction when I can give a purpose for the future by being a supportive teacher.

New Zealand: “Students Disengage When a Teacher Is Simply Repeating Facts or Ideas”

New Zealand came in seventh place out of 65 in the 2009 PISA assessment. Steve Martin is head of junior science at Howick College. In 2010, he received the prime minister’s award for science teaching.

Science education is an important part of preparing students for their role in the community. Scientific understanding will allow them to engage in issues that concern them now and in the future, such as genetically modified crops. In New Zealand, science is also viewed as having a crucial role to play in the future of the economic health of the country. This can be seen in the creation of the “Prime Minister’s Science Prizes,” a program that identifies the nation’s leading scientists, emerging and future scientists, and science teachers.

The New Zealand Science Curriculum allows for flexibility depending on contextual factors such as school location, interests of students, and teachers’ specialization. The curriculum has the “Nature of Science” as its foundation, which supports students learning the skills essential to a scientist, such as problem-solving and effective communication. The Nature of Science refers to the skills required to work as a scientist, how to communicate science effectively through science-specific vocabulary, and how to participate in debates and issues with a scientific perspective.

School administrators support innovation and risk-taking by teachers, which fosters the “let’s have a go” attitude. In my own classroom, I utilize computer technology to create virtual science lessons that support and encourage students to think for themselves and learn at their own pace. Virtual Lessons are Web-based documents that support learning in and outside the classroom. They include support for students of all abilities by providing digital resources targeted at different levels of thinking. These could include digital flashcards that support vocabulary development, videos that explain the relationships between ideas or facts, and links to websites that allow students to create cartoon animations. The students are then supported by the use of instant messaging, online collaborative documents, and email so they can get support from their peers and myself at anytime. I provide students with various levels of success criteria, which are statements that students and teachers use to evaluate performance. In every lesson I provide the students with three different levels of success criteria, each providing an increase in cognitive demand. The following is an example based on the topic of the carbon cycle:
I can identify the different parts of the carbon cycle.
I can explain how all the parts interact with each other to form the carbon cycle.
I can predict the effect that removing one part of the carbon cycle has on the environment.
These provide challenge for all abilities and at the same time make it clear what students need to do to be successful. I value creativity and innovation, and this greatly influences the opportunities I provide for students.

My students learn to love to be challenged and to see that all ideas help develop greater understanding. Students value the opportunity to contribute to others’ understanding, and they disengage when a teacher is simply repeating facts or ideas.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Coloma 1914 Classroom. Courtesy of Coloma Convent School, Croydon UK.[end-div]

Persecution of Scientists: Old and New

The debate over the theory of evolution continues into the 21st century, particularly in societies with a religious bent, including the United States of America. Yet, while the theory and its corresponding evidence come under continuous attack, mostly from religious apologists, we generally do not see scientists themselves persecuted for their positions on evolution.

This cannot be said for climate scientists in Western countries, who, while not physically abused, tortured, or imprisoned, do continue to be targets of verbal abuse and threats from corporate interests or dogmatic politicians and their followers. But, as we know, persecution of scientists for embodying new, and thus threatening, ideas has been with us since the dawn of the scientific age. In fact, this behavior probably has been with us since our tribal ancestors moved out of Africa.

So, it is useful to remind ourselves how far we have come and of the distance we still have to travel.

[div class=attrib]From Wired:[end-div]

Turing was famously chemically-castrated after admitting to homosexual acts in the 1950s. He is one of a long line of scientists who have been persecuted for their beliefs or practices.

After admitting to “homosexual acts” in early 1952, Alan Turing was prosecuted and had to make the choice between a custodial sentence or chemical castration through hormone injections. Injections of oestrogen were intended to deal with “abnormal and uncontrollable” sexual urges, according to literature at the time.
He chose this option so that he could stay out of jail and continue his research, although his security clearance was revoked, meaning he could not continue with his cryptographic work. Turing experienced some disturbing side effects, including impotence, from the hormone treatment. Other known side effects include breast swelling, mood changes and an overall “feminization”. Turing completed his year of treatment without major incident. His medication was discontinued in April 1953 and the University of Manchester created a five-year readership position just for him, so it came as a shock when he committed suicide on 7 June, 1954.

Turing isn’t the only scientist to have been persecuted for his personal or professional beliefs or lifestyle. Here’s a list of other prominent scientific luminaries who have been punished throughout history.

Rhazes (865-925)
Muhammad ibn Zakariyā Rāzī, or Rhazes, was a medical pioneer from Baghdad who lived between 860 and 932 AD. He was responsible for introducing western teachings, rational thought and the works of Hippocrates and Galen to the Arabic world. One of his books, Continens Liber, was a compendium of everything known about medicine. The book made him famous, but offended a Muslim priest who ordered the doctor to be beaten over the head with his own manuscript, which caused him to go blind, preventing him from future practice.

Michael Servetus (1511-1553)
Servetus was a Spanish physician credited with discovering pulmonary circulation. He wrote a book, which outlined his discovery along with his ideas about reforming Christianity — it was deemed to be heretical. He escaped from Spain and the Catholic Inquisition but came up against the Protestant Inquisition in Switzerland, which held him in equal disregard. Under orders from John Calvin, Servetus was arrested, tortured and burned at the stake on the shores of Lake Geneva – copies of his book accompanied him for good measure.

Galileo Galilei (1564-1642)
The Italian astronomer and physicist Galileo Galilei was tried and convicted in 1633 for publishing his evidence that supported the Copernican theory that the Earth revolves around the Sun. His research was instantly criticized by the Catholic Church for going against the established scripture that places Earth and not the Sun at the center of the universe. Galileo was found “vehemently suspect of heresy” for his heliocentric views and was required to “abjure, curse and detest” his opinions. He was sentenced to house arrest, where he remained for the rest of his life, and his offending texts were banned.

Henry Oldenburg (1619-1677)
Oldenburg founded the Royal Society in London in 1662. He sought high quality scientific papers to publish. In order to do this he had to correspond with many foreigners across Europe, including the Netherlands and Italy. The sheer volume of his correspondence caught the attention of authorities, who arrested him as a spy. He was held in the Tower of London for several months.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Engraving of Galileo Galilei offering his telescope to three women (possibly Urania and attendants) seated on a throne; he is pointing toward the sky where some of his astronomical discoveries are depicted, 1655. Courtesy of Library of Congress.[end-div]

King Canute or Mother Nature in North Carolina, Virginia, Texas?

Legislators in North Carolina recently went one better than King C’Nut (Canute). The king of Denmark, England, Norway and parts of Sweden during various periods between 1018 and 1035, famously and unsuccessfully tried to hold back the incoming tide. The now mythic story tells of Canute’s arrogance. Not to be outdone, North Carolina’s state legislature recently passed a law that bans state agencies from reporting that sea-level rise is accelerating.

The bill from North Carolina states:

“… rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly to estimate future rates of rise but shall not include scenarios of accelerated rates of sea-level rise.”

This comes hot on the heels of the recent revisionist push in Virginia, where references to phrases such as “sea level rise” and “climate change” are forbidden in official state communications. Last year, of course, Texas led the way for other states following the climate science denial program when the Texas Commission on Environmental Quality, which had commissioned a scientific study of Galveston Bay, removed all references to “rising sea levels”.

For more detailed reporting on this unsurprising and laughable state of affairs check out this article at Skeptical Science.

[div class=attrib]From Scientific American:[end-div]

Less than two weeks after the state’s senate passed a climate science-squelching bill, research shows that sea level along the coast between N.C. and Massachusetts is rising faster than anywhere on Earth.

Could nature be mocking North Carolina’s law-makers? Less than two weeks after the state’s senate passed a bill banning state agencies from reporting that sea-level rise is accelerating, research has shown that the coast between North Carolina and Massachusetts is experiencing the fastest sea-level rise in the world.

Asbury Sallenger, an oceanographer at the US Geological Survey in St Petersburg, Florida, and his colleagues analysed tide-gauge records from around North America. On 24 June, they reported in Nature Climate Change that since 1980, sea-level rise between Cape Hatteras, North Carolina, and Boston, Massachusetts, has accelerated to between 2 and 3.7 millimetres per year. That is three to four times the global average, and it means the coast could see 20–29 centimetres of sea-level rise on top of the metre predicted for the world as a whole by 2100 ( A. H. Sallenger Jr et al. Nature Clim. Change http://doi.org/hz4; 2012).

“Many people mistakenly think that the rate of sea-level rise is the same everywhere as glaciers and ice caps melt,” says Marcia McNutt, director of the US Geological Survey. But variations in currents and land movements can cause large regional differences. The hotspot is consistent with the slowing measured in Atlantic Ocean circulation, which may be tied to changes in water temperature, salinity and density.

North Carolina’s senators, however, have tried to stop state-funded researchers from releasing similar reports. The law approved by the senate on 12 June banned scientists in state agencies from using exponential extrapolation to predict sea-level rise, requiring instead that they stick to linear projections based on historical data.
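
To see why the choice of model matters, here is a minimal sketch using a made-up tide-gauge record (not the Sallenger data): the same mildly accelerating series projected to 2100 with a straight line and with a fit that allows acceleration.

import numpy as np

years = np.arange(1950, 2021)
x = years - 1950.0
# Synthetic record in millimetres: a modest linear trend plus mild acceleration.
rise_mm = 1.5 * x + 0.02 * x**2

linear_fit = np.polyfit(x, rise_mm, 1)   # the only kind of model the bill would allow
accel_fit = np.polyfit(x, rise_mm, 2)    # a model that permits acceleration

for name, fit in (("linear", linear_fit), ("accelerating", accel_fit)):
    print(f"{name} projection for 2100: {np.polyval(fit, 2100 - 1950):.0f} mm above 1950")
# On this toy record the two projections differ by roughly a quarter of a metre.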

Following international opprobrium, the state’s House of Representatives rejected the bill on 19 June. However, a compromise between the house and the senate forbids state agencies from basing any laws or plans on exponential extrapolations for the next three to four years, while the state conducts a new sea-level study.

According to local media, the bill was the handiwork of industry lobbyists and coastal municipalities who feared that investors and property developers would be scared off by predictions of high sea-level rises. The lobbyists invoked a paper published in the Journal of Coastal Research last year by James Houston, retired director of the US Army Corps of Engineers’ research centre in Vicksburg, Mississippi, and Robert Dean, emeritus professor of coastal engineering at the University of Florida in Gainesville. They reported that global sea-level rise has slowed since 1930 (J. R. Houston and R. G. Dean, J. Coastal Res. 27, 409–417; 2011) — a contention that climate sceptics around the world have seized on.

Speaking to Nature, Dean accused the oceanographic community of ideological bias. “In the United States, there is an overemphasis on unrealistically high sea-level rise,” he says. “The reason is budgets. I am retired, so I have the freedom to report what I find without any bias or need to chase funding.” But Sallenger says that Houston and Dean’s choice of data sets masks acceleration in the sea-level-rise hotspot.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Policymic.[end-div]

Science and Politics

The tension between science, religion and politics that began several millennia ago continues unabated.

[div class=attrib]From ars technica:[end-div]

In the US, science has become a bit of a political punching bag, with a number of presidential candidates accusing climatologists of fraud, even as state legislators seek to inject phony controversies into science classrooms. It’s enough to make one long for the good old days when science was universally respected. But did those days ever actually exist?

A new look at decades of survey data suggests that there was never a time when science was universally respected, but one political group in particular—conservative voters—has seen its confidence in science decline dramatically over the last 30 years.

The researcher behind the new work, North Carolina’s Gordon Gauchat, figures there are three potential trajectories for the public’s view of science. One possibility is that the public, appreciating the benefits of the technological advances that science has helped to provide, would show a general increase in its affinity for science. An alternative prospect is that this process will inevitably peak, either because there are limits to how admired a field can be, or because a more general discomfort with modernity spills over to a field that helped bring it about.

The last prospect Gauchat considers is that there has been a change in views about science among a subset of the population. He cites previous research that suggests some view the role of science as having changed from one where it enhances productivity and living standards to one where it’s the primary justification for regulatory policies. “Science has always been politicized,” Gauchat writes. “What remains unclear is how political orientations shape public trust in science.”

To figure out which of these trends might apply, he turned to the General Social Survey, which has been gathering information on the US public’s views since 1972. During that time, the survey consistently contained a series of questions about confidence in US institutions, including the scientific community. The answers are divided pretty crudely—”a great deal,” “only some,” and “hardly any”—but they do provide a window into the public’s views on science. (In fact, “hardly any” was the choice of less than 7 percent of the respondents, so Gauchat simply lumped it in with “only some” for his analysis.)

The data showed a few general trends. For much of the study period, moderates actually had the lowest levels of confidence in science, with liberals typically having the highest; the levels of trust for both these groups were fairly steady across the 34 years of data. Conservatives were the odd one out. At the very start of the survey in 1974, they actually had the highest confidence in scientific institutions. By the 1980s, however, they had dropped so that they had significantly less trust than liberals did; in recent years, they’ve become the least trusting of science of any political affiliation.

Examining other demographic trends, Gauchat noted that the only other group to see a significant decline over time is regular churchgoers. Crunching the data, he states, indicates that “The growing force of the religious right in the conservative movement is a chief factor contributing to conservatives’ distrust in science.” This decline in trust occurred even among those who had college or graduate degrees, despite the fact that advanced education typically correlated with enhanced trust in science.

[div class=attrib]Read the entire article after the jump:[end-div]

Beautiful Explanations

Each year for the past 15 years Edge has posed a weighty question to a group of scientists, researchers, philosophers, mathematicians and thinkers. For 2012, Edge asked the question, “What Is Your Favorite Deep, Elegant, or Beautiful Explanation?”, to 192 of our best and brightest. Back came 192 different and no less wonderful answers. We can post but a snippet here, so please visit the Edge, and then make a note to buy the book (it’s not available yet).

Read the entire article here.

The Mysterious Coherence Between Fundamental Physics and Mathematics
Peter Woit, Mathematical Physicist, Columbia University; Author, Not Even Wrong

Any first course in physics teaches students that the basic quantities one uses to describe a physical system include energy, momentum, angular momentum and charge. What isn’t explained in such a course is the deep, elegant and beautiful reason why these are important quantities to consider, and why they satisfy conservation laws. It turns out that there’s a general principle at work: for any symmetry of a physical system, you can define an associated observable quantity that comes with a conservation law:

1. The symmetry of time translation gives energy
2. The symmetries of spatial translation give momentum
3. Rotational symmetry gives angular momentum
4. Phase transformation symmetry gives charge
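
Here is a minimal symbolic sketch of the first item on that list, using nothing more than a mass on a spring: when the Lagrangian has no explicit time dependence, the associated Noether charge is the familiar total energy, and it stays constant whenever the equation of motion holds.

import sympy as sp

m, k, q, v, a = sp.symbols('m k q v a', real=True)

# A Lagrangian with no explicit time dependence (time-translation symmetry):
# a mass on a spring, kinetic energy minus potential energy.
L = m * v**2 / 2 - k * q**2 / 2

# The Noether charge for time translation is E = v * dL/dv - L.
E = v * sp.diff(L, v) - L
print(sp.simplify(E))        # k*q**2/2 + m*v**2/2 -- the familiar total energy

# Its time derivative, by the chain rule with dq/dt = v and dv/dt = a:
dE_dt = sp.diff(E, q) * v + sp.diff(E, v) * a
print(sp.factor(dE_dt))      # proportional to (m*a + k*q): zero when the equation of motion m*a = -k*q holds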

 

Einstein Explains Why Gravity Is Universal
Sean Carroll, Theoretical Physicist, Caltech; Author, From Eternity to Here: The Quest for the Ultimate Theory of Time

The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the ground. But a thought experiment by Galileo pointed out a flaw. Imagine taking the piece of paper and tying it to the stone. Together, the new system is heavier than either of its components, and should fall faster. But in reality, the piece of paper slows down the descent of the stone.

Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren’t for the interference of air resistance. Apollo 15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.
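
Here is a small numerical sketch of that lunar demonstration, using standard values for the gravitational constant and the Moon’s mass and radius: the mass of the falling object cancels out of its acceleration.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22    # mass of the Moon, kg
R_MOON = 1.737e6     # mean radius of the Moon, m

for mass in (0.03, 1.3):                     # a feather-ish and a hammer-ish mass, in kg
    force = G * M_MOON * mass / R_MOON**2    # Newtonian gravity at the lunar surface
    acceleration = force / mass              # Newton's second law: the mass cancels
    print(f"{mass:4.2f} kg object falls at {acceleration:.2f} m/s^2")
# Both lines print about 1.62 m/s^2, which is why the hammer and feather land together.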

Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way, negative charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.

Thinking about this problem led Albert Einstein to what he called “the happiest thought of my life.” Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside world. If the ship were far away from any stars or planets, everything inside would be in free fall; there would be no gravitational field to push anything around. But put the ship in orbit around a massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away from any other one. Sticking just to what is observed inside the spaceship, there’s no way we could detect the existence of gravity.

 

True or False: Beauty Is Truth
Judith Rich Harris, Independent Investigator and Theoretician; Author, The Nurture Assumption; No Two Alike: Human Nature and Human Individuality

“Beauty is truth, truth beauty,” said John Keats. But what did he know? Keats was a poet, not a scientist.

In the world that scientists inhabit, truth is not always beautiful or elegant, though it may be deep. In fact, it’s my impression that the deeper an explanation goes, the less likely it is to be beautiful or elegant.

Some years ago, the psychologist B. F. Skinner proposed an elegant explanation of “the behavior of organisms,” based on the idea that rewarding a response—he called it reinforcement—increases the probability that the same response will occur again in the future. The theory failed, not because it was false (reinforcement generally does increase the probability of a response) but because it was too simple. It ignored innate components of behavior. It couldn’t even handle all learned behavior. Much behavior is acquired or shaped through experience, but not necessarily by means of reinforcement. Organisms learn different things in different ways.

 

The Power Of One, Two, Three
Charles Seife, Professor of Journalism, New York University; formerly journalist, Science Magazine; Author, Proofiness: The Dark Arts of Mathematical Deception

Sometimes even the simple act of counting can tell you something profound.

One day, back in the late 1990s, when I was a correspondent for New Scientist magazine, I got an e-mail from a flack waxing rhapsodic about an extraordinary piece of software. It was a revolutionary data-compression program so efficient that it would squash every digital file by 95% or more without losing a single bit of data. Wouldn’t my magazine jump at the chance to tell the world about the computer program that would make their hard drives hold 20 times more information than ever before?

No, my magazine wouldn’t.

No such compression algorithm could possibly exist; it was the algorithmic equivalent of a perpetual motion machine. The software was a fraud.

The reason: the pigeonhole principle.
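
Here is a minimal counting sketch of that principle applied to compression: there are more n-bit files than there are strictly shorter files, so no lossless scheme can shrink every input.

n = 16
n_bit_files = 2**n
shorter_files = sum(2**k for k in range(n))   # files of length 0, 1, ..., n-1 bits
print(n_bit_files, shorter_files)             # 65536 possible inputs, 65535 shorter outputs
# By the pigeonhole principle, at least two distinct inputs would have to share a
# compressed output, so lossless decompression could not work -- let alone a universal
# 95% squash.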

 

Watson and Crick Explain How DNA Carries Genetic Information
Gary Klein, Cognitive Psychologist; Author, Sources of Power; Streetlights and Shadows: Searching for Keys to Adaptive Decision Making

In 1953, when James Watson pushed around some two-dimensional cut-outs and was startled to find that an adenine-thymine pair had a shape isomorphic to the guanine-cytosine pair, he solved eight mysteries simultaneously. In that instant he knew the structure of DNA: a helix. He knew how many strands: two. It was a double helix. He knew what carried the information: the nucleic acids in the gene, not the protein. He knew what maintained the attraction: hydrogen bonds. He knew the arrangement: the sugar-phosphate backbone was on the outside and the nucleic acids were on the inside. He knew how the strands matched up: through the base pairs. He knew the orientation: the two chains ran in opposite directions. And he knew how genes replicated: through a zipper-like process.
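
Two of those insights, complementary base pairing and antiparallel strands, are simple enough to capture in a few lines of code. A minimal Python sketch; the example sequence is made up.

```python
# Bases pair A-T and G-C via hydrogen bonds, and the two strands run in
# opposite directions, so the partner strand is the reverse complement.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def partner_strand(strand_5_to_3):
    """Return the complementary strand, also read 5'->3' (hence the reversal)."""
    return "".join(PAIR[base] for base in reversed(strand_5_to_3))

template = "ATGCGTAC"
print(partner_strand(template))                               # GTACGCAT
# Replication is "zipper-like": each original strand templates a new partner,
# and pairing the partner again recovers the original.
print(partner_strand(partner_strand(template)) == template)   # True
```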

The discovery that Watson and Crick made is truly impressive, but I am also interested in what we can learn from the process by which they arrived at their discovery. On the surface, the Watson-Crick story fits in with five popular claims about innovation, as presented below. However, the actual story of their collaboration is more nuanced than these popular claims suggest.

It is important to have clear research goals. Watson and Crick had a clear goal, to describe the structure of DNA, and they succeeded.

But only the first two of their eight discoveries had to do with this goal. The others, arguably the most significant, were unexpected byproducts.

A Theory of Everything? Nah

A peer-reviewed journal recently published a 100-page scientific paper describing a theory of everything that unifies quantum theory and relativity (a long sought-after goal) with the origin of life, evolution and cosmology. And, best of all, the paper contains no mathematics.

The paper, written by a faculty member at Case Western Reserve University, raises interesting issues about the peer-review process and the viral spread of information, whether it’s correct or not.

[div class=attrib]From Ars Technica:[end-div]

Physicists have been working for decades on a “theory of everything,” one that unites quantum mechanics and relativity. Apparently, they were being too modest. Yesterday saw publication of a press release claiming a biologist had just published a theory accounting for all of that—and handling the origin of life and the creation of the Moon in the bargain. Better yet, no math!

Where did such a crazy theory originate? In the mind of a biologist at a respected research institution, Case Western Reserve University Medical School. Amazingly, he managed to get his ideas published, then amplified by an official press release. At least two sites with poor editorial control then reposted the press release—verbatim—as a news story.

Gyres all the way down

The theory in question springs from the brain of one Erik Andrulis, a CWRU faculty member who has a number of earlier papers on fairly standard biochemistry. The new paper was accepted by an open access journal called Life, meaning that you can freely download a copy of its 105 pages if you’re so inclined. Apparently, the journal is peer-reviewed, which is a bit of a surprise; even accepting that the paper makes a purely theoretical proposal, it is nothing like science as I’ve ever seen it practiced.

The basic idea is that everything, from subatomic particles to living systems, is based on helical systems the author calls “gyres,” which transform matter, energy, and information. These transformations then determine the properties of various natural systems, living and otherwise. What are these gyres? It’s really hard to say; even Andrulis admits that they’re just “a straightforward and non-mathematical core model” (although he seems to think that’s a good thing). Just about everything can be derived from this core model; the author cites “major phenomena including, but not limited to, quantum gravity, phase transitions of water, why living systems are predominantly CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), homochirality of sugars and amino acids, homeoviscous adaptation, triplet code, and DNA mutations.”

He’s serious about the “not limited to” part; one of the sections describes how gyres could cause the Moon to form.

Is this a viable theory of everything? The word “boson,” the particle that carries forces, isn’t in the text at all. “Quark” appears once—in the title of one of the 800 references. The only subatomic particle Andrulis describes is the electron; he skips from there straight up to oxygen. Enormous gaps exist everywhere one looks.

[div class=attrib]Read more here.[end-div]

How the World May End: Science Versus Brimstone

Every couple of years a (hell)fire and brimstone preacher floats into the national consciousness and makes headlines with predictions from the good book about the imminent destruction of our species and our home. Most recently Harold Camping, the radio evangelist, predicted the apocalypse would begin on Saturday, May 21, 2011. His subsequent revision placed the “correct date” at October 21, 2011. Well, we’re still here, so the next apocalyptic date to prepare for, according to watchers of all things Mayan, is December 21, 2012.

So as not to be outdone by prophecy from one particular religion or another, science has come out swinging with its own list of potential apocalyptic end-of-days scenarios. No surprise: many of them may well be of our own making.

[div class=attrib]From the Guardian:[end-div]

Stories of brimstone, fire and gods make good tales and do a decent job of stirring up the requisite fear and jeopardy. But made-up doomsday tales pale into nothing, creatively speaking, when contrasted with what is actually possible. Look through the lens of science and “the end” becomes much more interesting.

Since the beginning of life on Earth, around 3.5 billion years ago, the fragile existence has lived in the shadow of annihilation. On this planet, extinction is the norm – of the 4 billion species ever thought to have evolved, 99% have become extinct. In particular, five times in this past 500 million years the steady background rate of extinction has shot up for a period of time. Something – no one knows for sure what – turned the Earth into exactly the wrong planet for life at these points and during each mass extinction, more than 75% of the existing species died off in a period of time that was, geologically speaking, a blink of the eye.

One or more of these mass extinctions occurred because of what we could call the big, Hollywood-style, potential doomsday scenarios. If a big enough asteroid hit the Earth, for example, the impact would cause huge earthquakes and tsunamis that could cross the globe. There would be enough dust thrown into the air to block out the sun for several years. As a result, the world’s food resources would be destroyed, leading to famine. It has happened before: the dinosaurs (along with more than half the other species on Earth) were wiped out 65 million years ago by a 10km-wide asteroid that smashed into the area around Mexico.

Other natural disasters include sudden changes in climate or immense volcanic eruptions. All of these could cause global catastrophes that would wipe out large portions of the planet’s life, but, given we have survived for several hundreds of thousands of years while at risk of these, it is unlikely that a natural disaster such as that will cause catastrophe in the next few centuries.

In addition, cosmic threats to our existence have always been with us, even though it has taken us some time to notice: the collision of our galaxy, the Milky Way, with our nearest neighbour, Andromeda, for example, or the arrival of a black hole. Common to all of these threats is that there is very little we can do about them even when we know the danger exists, except trying to work out how to survive the aftermath.

But in reality, the most serious risks for humans might come from our own activities. Our species has the unique ability in the history of life on Earth to be the first capable of remaking our world. But we can also destroy it.

All too real are the human-caused threats born of climate change, excess pollution, depletion of natural resources and the madness of nuclear weapons. We tinker with our genes and atoms at our own peril. Nanotechnology, synthetic biology and genetic modification offer much potential in giving us better food to eat, safer drugs and a cleaner world, but they could also go wrong if misapplied or if we charge on without due care.

Some strange ways to go, with their corresponding danger signs, are listed below:

DEATH BY EUPHORIA

Many of us use drugs such as caffeine or nicotine every day. Our increased understanding of physiology brings new drugs that can lift mood, improve alertness or keep you awake for days. How long before we use so many drugs we are no longer in control? Perhaps the end of society will not come with a bang, but fade away in a haze.

Danger sign: Drugs would get too cheap to meter, but you might be too doped up to notice.

VACUUM DECAY

If the Earth exists in a region of space known as a false vacuum, it could collapse into a lower-energy state at any point. This collapse would grow at the speed of light and our atoms would not hold together in the ensuing wave of intense energy – everything would be torn apart.

Danger sign: There would be no signs. It could happen halfway through this…

STRANGELETS

Quantum mechanics contains lots of frightening possibilities. Among them is a particle called a strangelet that can transform any other particle into a copy of itself. In just a few hours, a small chunk of these could turn a planet into a featureless mass of strangelets. Everything that planet was would be no more.

Danger sign: Everything around you starts cooking, releasing heat.

END OF TIME

What if time itself somehow came to a finish because of the laws of physics? In 2007, Spanish scientists proposed an alternative explanation for the mysterious dark energy that accounts for 75% of the mass of the universe and acts as a sort of anti-gravity, pushing galaxies apart. They proposed that the effects we observe are due to time slowing down as it leaked away from our universe.

Danger sign: It could be happening right now. We would never know.

MEGA TSUNAMI

Geologists worry that a future volcanic eruption at La Palma in the Canary Islands might dislodge a chunk of rock twice the volume of the Isle of Man into the Atlantic Ocean, triggering waves a kilometre high that would move at the speed of a jumbo jet with catastrophic effects for the shores of the US, Europe, South America and Africa.

Danger sign: Half the world’s major cities are under water. All at once.

GEOMAGNETIC REVERSAL

The Earth’s magnetic field provides a shield against harmful radiation from our sun that could rip through DNA and overload the world’s electrical systems. Every so often, Earth’s north and south poles switch positions and, during the transition, the magnetic field will weaken or disappear for many years. The last known transition happened almost 780,000 years ago and it is likely to happen again.

Danger sign: Electronics stop working.

GAMMA RAYS FROM SPACE

When a supermassive star is in its dying moments, it shoots out two beams of high-energy gamma rays into space. If these were to hit Earth, the immense energy would tear apart the atmosphere’s air molecules and disintegrate the protective ozone layer.

Danger sign: The sky turns brown and all life on the surface slowly dies.

RUNAWAY BLACK HOLE

Black holes are the most powerful gravitational objects in the universe, capable of tearing Earth into its constituent atoms. Even within a billion miles, a black hole could knock Earth out of the solar system, leaving our planet wandering through deep space without a source of energy.

Danger sign: Increased asteroid activity; the seasons get really extreme.

INVASIVE SPECIES

Invasive species are plants, animals or microbes that turn up in an ecosystem that has no protection against them. The invader’s population surges and the ecosystem quickly destabilises towards collapse. Invasive species are already an expensive global problem: they disrupt local ecosystems, transfer viruses, poison soils and damage agriculture.

Danger sign: Your local species disappear.

TRANSHUMANISM

What if biological and technological enhancements took humans to a level where they radically surpassed anything we know today? “Posthumans” might consist of artificial intelligences based on the thoughts and memories of ancient humans, who uploaded themselves into a computer and exist only as digital information on superfast computer networks. Their physical bodies might be gone but they could access and store endless information and share their thoughts and feelings immediately and unambiguously with other digital humans.

Danger sign: You are outcompeted, mentally and physically, by a cyborg.

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]End is Nigh Sign. Courtesy of frontporchrepublic.com.[end-div]

The Battle of Evidence and Science versus Belief and Magic

An insightful article over at the Smithsonian ponders the national (U.S.) decline in the trust of science. Regardless of the topic in question — climate change, health supplements, vaccinations, air pollution, “fracking”, evolution — and regardless of the specific position on a particular topic, scientific evidence continues to be questioned, ignored, revised, and politicized. And perhaps it is in this last issue, that of politics, that we may see a possible cause for a growing national epidemic of denialism. The increasingly fractured, fractious and rancorous nature of the U.S. political system threatens to undermine all debate and true skepticism, whether based on personal opinion or scientific fact.

[div class=attrib]From the Smithsonian:[end-div]

A group of scientists and statisticians led by the University of California at Berkeley set out recently to conduct an independent assessment of climate data and determine once and for all whether the planet has warmed in the last century and by how much. The study was designed to address concerns brought up by prominent climate change skeptics, and it was funded by several groups known for climate skepticism. Last week, the group released its conclusions: Average land temperatures have risen by about 1.8 degrees Fahrenheit since the middle of the 20th century. The result matched the previous research.

The skeptics were not happy and immediately claimed that the study was flawed.

Also in the news last week were the results of yet another study that found no link between cell phones and brain cancer. Researchers at the Institute of Cancer Epidemiology in Denmark looked at data from 350,000 cell phone users over an 18-year period and found they were no more likely to develop brain cancer than people who didn’t use the technology.

But those results still haven’t killed the calls for more monitoring of any potential link.

Study after study finds no link between autism and vaccines (and plenty of reason to worry about non-vaccinated children dying from preventable diseases such as measles). But a quarter of parents in a poll released last year said that they believed that “some vaccines cause autism in healthy children” and 11.5 percent had refused at least one vaccination for their child.

Polls say that Americans trust scientists more than, say, politicians, but that trust is on the decline. If we’re losing faith in science, we’ve gone down the wrong path. Science is no more than a process (as recent contributors to our “Why I Like Science” series have noted), and skepticism can be a good thing. But for many people that skepticism has grown to the point that they can no longer accept good evidence when they get it, with the result that “we’re now in an epidemic of fear like one I’ve never seen and hope never to see again,” says Michael Specter, author of Denialism, in his TEDTalk below.

If you’re reading this, there’s a good chance that you think I’m not talking about you. But here’s a quick question: Do you take vitamins? There’s a growing body of evidence that vitamins and dietary supplements are no more than a placebo at best and, in some cases, can actually increase the risk of disease or death. For example, a study earlier this month in the Archives of Internal Medicine found that consumption of supplements, such as iron and copper, was associated with an increased risk of death among older women. In a related commentary, several doctors note that the concept of dietary supplementation has shifted from preventing deficiency (there’s a good deal of evidence for harm if you’re low in, say, folic acid) to one of trying to promote wellness and prevent disease, and many studies are showing that more supplements do not equal better health.

But I bet you’ll still take your pills tomorrow morning. Just in case.

[div class=attrib]Read the entire article here.[end-div]

Science at its Best: The Universe is Expanding AND Accelerating

The 2011 Nobel Prize in Physics was recently awarded to three scientists: Adam Riess, Saul Perlmutter and Brian Schmidt. Their computations and observations of a very specific type of exploding star upended decades of commonly accepted beliefs about our universe by showing that its expansion is accelerating.

Prior to their observations, first publicly articulated in 1998, the general scientific consensus held that the expansion of the universe would either continue forever at a steady or slowing rate, or halt and reverse, with the universe eventually folding back in on itself in a cosmic Big Crunch.

The discovery by Riess, Perlmutter and Schmidt laid the groundwork for the idea that a mysterious force called “dark energy” is fueling the acceleration. This dark energy is now believed to make up 75 percent of the universe. Direct evidence of dark energy is lacking, but most cosmologists now accept that universal expansion is indeed accelerating.

Re-published here are the notes and a page scan from Riess’s logbook that led to this year’s Nobel Prize, which show the value of the scientific process:

[div class=attrib]The original article is courtesy of Symmetry Breaking:[end-div]

In the fall of 1997, I was leading the calibration and analysis of data gathered by the High-z Supernova Search Team, one of two teams of scientists—the other was the Supernova Cosmology Project—trying to determine the fate of our universe: Will it expand forever, or will it halt and contract, resulting in the Big Crunch?

To find the answer, we had to determine the mass of the universe. It can be calculated by measuring how much the expansion of the universe is slowing.

First, we had to find cosmic candles—distant objects of known brightness—and use them as yardsticks. On this page, I checked the reliability of the supernovae, or exploding stars, that we had collected to serve as our candles. I found that the results they yielded for the present expansion rate of the universe (known as the Hubble constant) did not appear to be affected by the age or dustiness of their host galaxies.

Next, I used the data to calculate Ω_M, the relative mass of the universe.

It was significantly negative!

The result, if correct, meant that the assumption of my analysis was wrong. The expansion of the universe was not slowing. It was speeding up! How could that be?

I spent the next few days checking my calculation. I found one could explain the acceleration by introducing a vacuum energy, also called the cosmological constant, that pushes the universe apart. In March 1998, we submitted these results, which were published in September 1998.

Today, we know that 74 percent of the universe consists of this dark energy. Understanding its nature remains one of the most pressing tasks for physicists and astronomers alike.

Adam Riess, Johns Hopkins University
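
The yardstick Riess describes rests on the distance modulus: for a “candle” of known absolute magnitude M observed at apparent magnitude m, the distance in parsecs is d = 10^((m - M + 5)/5). A minimal Python sketch with illustrative numbers, not taken from the logbook:

```python
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Standard-candle distance from the distance modulus:
    m - M = 5*log10(d_pc) - 5  =>  d_pc = 10**((m - M + 5) / 5)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative example: Type Ia supernovae peak near absolute magnitude
# M ~ -19.3. One observed at apparent magnitude m = 22.5 would lie at roughly:
d_pc = luminosity_distance_pc(22.5, -19.3)
print(f"{d_pc / 1e6:.0f} megaparsecs")   # ~2,300 Mpc
```

Comparing distances like this with the redshifts of the supernovae’s host galaxies is what allowed the two teams to trace how the expansion rate has changed over cosmic time.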

The discovery, and many others like it both great and small, show the true power of the scientific process. Scientific results are open to constant refinement, re-evaluation, refutation and re-interpretation. The process leads to inexorable progress toward greater and greater knowledge and understanding, and eventually to truths that most skeptics can embrace. That is, until the next and better theory, with its corresponding results, comes along.

[div class=attrib]Image courtesy of Symmetry Breaking, Adam Riess.[end-div]

Once Not So Crazy Ideas About Our Sun

Some wacky ideas about our sun from not so long ago remind us of the importance of a healthy dose of skepticism combined with good science. In fact, as you’ll see from the timestamp on the image from NASA’s Solar and Heliospheric Observatory (SOHO), science can now bring us – the public – near-real-time images of our nearest star.

[div class=attrib]From Slate:[end-div]

The sun is hell.

The 18th-century English clergyman Tobias Swinden argued that hell couldn’t lie below Earth’s surface: The fires would soon go out, he reasoned, due to lack of air. Not to mention that the Earth’s interior would be too small to accommodate all the damned, especially after making allowances for future generations of the damned-to-be. Instead, wrote Swinden, it’s obvious that hell stares us in the face every day: It’s the sun.

The sun is made of ice.

In 1798, Charles Palmer—who was not an astronomer, but an accountant—argued that the sun can’t be a source of heat, since Genesis says that light already existed before the day that God created the sun. Therefore, he reasoned, the sun must merely focus light upon Earth—light that exists elsewhere in the universe. Isn’t the sun even shaped like a giant lens? The only natural, transparent substance that it could be made of, Palmer figured, is ice. Palmer’s theory was published in a widely read treatise that, its title crowed, “overturn[ed] all the received systems of the universe hitherto extant, proving the celebrated and indefatigable Sir Isaac Newton, in his theory of the solar system, to be as far distant from the truth, as any of the heathen authors of Greece or Rome.”

Earth is a sunspot.

Sunspots are magnetic regions on the sun’s surface. But in 1775, mathematician and theologian J. Wiedeberg said that the sun’s spots are created by the clumping together of countless solid “heat particles,” which he speculated were constantly being emitted by the sun. Sometimes, he theorized, these heat particles stick together even at vast distances from the sun—and this is how planets form. In other words, he believed that Earth is a sunspot.

The sun’s surface is liquid.

Throughout the 18th and 19th centuries, textbooks and astronomers were torn between two competing ideas about the sun’s nature. Some believed that its dazzling brightness was caused by luminous clouds and that small holes in the clouds, which revealed the cool, dark solar surface below, were the sunspots. But the majority view was that the sun’s body was a hot, glowing liquid, and that the sunspots were solar mountains sticking up through this lava-like substance.

The sun is inhabited.

No less distinguished an astronomer than William Herschel, who discovered the planet Uranus in 1781, often stated that the sun has a cool, solid surface on which human-like creatures live and play. According to him, these solar citizens are shielded from the heat given off by the sun’s “dazzling outer clouds” by an inner protective cloud layer—like a layer of haz-mat material—that perfectly blocks the solar emissions and allows for pleasant grassy solar meadows and idyllic lakes.