Intelligenetics

Intelligenetics isn’t recognized as a real word by Webster’s or the Oxford English Dictionary; it’s a term we coined to describe the growing field of research into the genetic basis of human intelligence. Of course, the subject is not new, and it comes with many cautionary tales: past research into the genetic foundations of intelligence has often been misused by one group seeking racial, ethnic or political power over another. With strong and appropriate safeguards in place, however, science does have a legitimate role in uncovering why some brains excel while others do not.

[div class=attrib]From the Wall Street Journal:[end-div]

At a former paper-printing factory in Hong Kong, a 20-year-old wunderkind named Zhao Bowen has embarked on a challenging and potentially controversial quest: uncovering the genetics of intelligence.

Mr. Zhao is a high-school dropout who has been described as China’s Bill Gates. He oversees the cognitive genomics lab at BGI, a private company that is partly funded by the Chinese government.

At the Hong Kong facility, more than 100 powerful gene-sequencing machines are deciphering about 2,200 DNA samples, reading off their 3.2 billion chemical base pairs one letter at a time. These are no ordinary DNA samples. Most come from some of America’s brightest people—extreme outliers in the intelligence sweepstakes.

The majority of the DNA samples come from people with IQs of 160 or higher. By comparison, average IQ in any population is set at 100. The average Nobel laureate registers at around 145. Only one in every 30,000 people is as smart as most of the participants in the Hong Kong project—and finding them was a quest of its own.
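The "one in every 30,000" figure follows from the usual convention (not stated in the article) that IQ scores are normally distributed with mean 100 and standard deviation 15, putting an IQ of 160 four standard deviations above the mean. A few lines of Python confirm the arithmetic:

```python
import math

def iq_rarity(iq, mean=100.0, sd=15.0):
    """Return the fraction of the population scoring at or above `iq`,
    assuming IQ is normally distributed (the standard convention)."""
    z = (iq - mean) / sd
    # Upper tail of the standard normal via the complementary
    # error function: P(Z > z) = erfc(z / sqrt(2)) / 2
    return math.erfc(z / math.sqrt(2)) / 2

p = iq_rarity(160)           # z = 4 standard deviations above the mean
print(f"1 in {1 / p:,.0f}")  # a figure close to the article's one-in-30,000
```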

“People have chosen to ignore the genetics of intelligence for a long time,” said Mr. Zhao, who hopes to publish his team’s initial findings this summer. “People believe it’s a controversial topic, especially in the West. That’s not the case in China,” where IQ studies are regarded more as a scientific challenge and therefore are easier to fund.

The roots of intelligence are a mystery. Studies show that at least half of the variation in intelligence quotient, or IQ, is inherited. But while scientists have identified some genes that can significantly lower IQ—in people afflicted with mental retardation, for example—truly important genes that affect normal IQ variation have yet to be pinned down.

The Hong Kong researchers hope to crack the problem by comparing the genomes of super-high-IQ individuals with the genomes of people drawn from the general population. By studying the variation in the two groups, they hope to isolate some of the hereditary factors behind IQ.

Their conclusions could lay the groundwork for a genetic test to predict a person’s inherited cognitive ability. Such a tool could be useful, but it also might be divisive.

“If you can identify kids who are going to have trouble learning, you can intervene” early on in their lives, through special schooling or other programs, says Robert Plomin, a professor of behavioral genetics at King’s College, London, who is involved in the BGI project.

[div class=attrib]Read the entire article following the jump.[end-div]

The Police Drones Next Door

You might expect to find police drones in the pages of a science fiction novel by Philip K. Dick or Iain M. Banks. But by 2015, citizens of the United States may well see these unmanned flying machines patrolling the skies over the homeland. The U.S. government recently pledged to loosen Federal Aviation Administration (FAA) restrictions, allowing local law enforcement agencies to fly drones in just a few short years. Soon the least of your worries will be traffic signal cameras and the local police officer armed with a radar gun. Our home-grown drones are likely to be deployed first for surveillance, but armaments will undoubtedly follow. Hellfire missiles over Helena, Montana, anyone?

[div class=attrib]From National Geographic:[end-div]

At the edge of a stubbly, dried-out alfalfa field outside Grand Junction, Colorado, Deputy Sheriff Derek Johnson, a stocky young man with a buzz cut, squints at a speck crawling across the brilliant, hazy sky. It’s not a vulture or crow but a Falcon—a new brand of unmanned aerial vehicle, or drone, and Johnson is flying it. The sheriff’s office here in Mesa County, a plateau of farms and ranches corralled by bone-hued mountains, is weighing the Falcon’s potential for spotting lost hikers and criminals on the lam. A laptop on a table in front of Johnson shows the drone’s flickering images of a nearby highway.

Standing behind Johnson, watching him watch the Falcon, is its designer, Chris Miser. Rock-jawed, arms crossed, sunglasses pushed atop his shaved head, Miser is a former Air Force captain who worked on military drones before quitting in 2007 to found his own company in Aurora, Colorado. The Falcon has an eight-foot wingspan but weighs just 9.5 pounds. Powered by an electric motor, it carries two swiveling cameras, visible and infrared, and a GPS-guided autopilot. Sophisticated enough that it can’t be exported without a U.S. government license, the Falcon is roughly comparable, Miser says, to the Raven, a hand-launched military drone—but much cheaper. He plans to sell two drones and support equipment for about the price of a squad car.

A law signed by President Barack Obama in February 2012 directs the Federal Aviation Administration (FAA) to throw American airspace wide open to drones by September 30, 2015. But for now Mesa County, with its empty skies, is one of only a few jurisdictions with an FAA permit to fly one. The sheriff’s office has a three-foot-wide helicopter drone called a Draganflyer, which stays aloft for just 20 minutes.

The Falcon can fly for an hour, and it’s easy to operate. “You just put in the coordinates, and it flies itself,” says Benjamin Miller, who manages the unmanned aircraft program for the sheriff’s office. To navigate, Johnson types the desired altitude and airspeed into the laptop and clicks targets on a digital map; the autopilot does the rest. To launch the Falcon, you simply hurl it into the air. An accelerometer switches on the propeller only after the bird has taken flight, so it won’t slice the hand that launches it.

The stench from a nearby chicken-processing plant wafts over the alfalfa field. “Let’s go ahead and tell it to land,” Miser says to Johnson. After the deputy sheriff clicks on the laptop, the Falcon swoops lower, releases a neon orange parachute, and drifts gently to the ground, just yards from the spot Johnson clicked on. “The Raven can’t do that,” Miser says proudly.

Offspring of 9/11

A dozen years ago only two communities cared much about drones. One was hobbyists who flew radio-controlled planes and choppers for fun. The other was the military, which carried out surveillance missions with unmanned aircraft like the General Atomics Predator.

Then came 9/11, followed by the U.S. invasions of Afghanistan and Iraq, and drones rapidly became an essential tool of the U.S. armed forces. The Pentagon armed the Predator and a larger unmanned surveillance plane, the Reaper, with missiles, so that their operators—sitting in offices in places like Nevada or New York—could destroy as well as spy on targets thousands of miles away. Aerospace firms churned out a host of smaller drones with increasingly clever computer chips and keen sensors—cameras but also instruments that measure airborne chemicals, pathogens, radioactive materials.

The U.S. has deployed more than 11,000 military drones, up from fewer than 200 in 2002. They carry out a wide variety of missions while saving money and American lives. Within a generation they could replace most manned military aircraft, says John Pike, a defense expert at the think tank GlobalSecurity.org. Pike suspects that the F-35 Lightning II, now under development by Lockheed Martin, might be “the last fighter with an ejector seat, and might get converted into a drone itself.”

At least 50 other countries have drones, and some, notably China, Israel, and Iran, have their own manufacturers. Aviation firms—as well as university and government researchers—are designing a flock of next-generation aircraft, ranging in size from robotic moths and hummingbirds to Boeing’s Phantom Eye, a hydrogen-fueled behemoth with a 150-foot wingspan that can cruise at 65,000 feet for up to four days.

More than a thousand companies, from tiny start-ups like Miser’s to major defense contractors, are now in the drone business—and some are trying to steer drones into the civilian world. Predators already help Customs and Border Protection agents spot smugglers and illegal immigrants sneaking into the U.S. NASA-operated Global Hawks record atmospheric data and peer into hurricanes. Drones have helped scientists gather data on volcanoes in Costa Rica, archaeological sites in Russia and Peru, and flooding in North Dakota.

So far only a dozen police departments, including ones in Miami and Seattle, have applied to the FAA for permits to fly drones. But drone advocates—who generally prefer the term UAV, for unmanned aerial vehicle—say all 18,000 law enforcement agencies in the U.S. are potential customers. They hope UAVs will soon become essential too for agriculture (checking and spraying crops, finding lost cattle), journalism (scoping out public events or celebrity backyards), weather forecasting, traffic control. “The sky’s the limit, pun intended,” says Bill Borgia, an engineer at Lockheed Martin. “Once we get UAVs in the hands of potential users, they’ll think of lots of cool applications.”

The biggest obstacle, advocates say, is current FAA rules, which tightly restrict drone flights by private companies and government agencies (though not by individual hobbyists). Even with an FAA permit, operators can’t fly UAVs above 400 feet or near airports or other zones with heavy air traffic, and they must maintain visual contact with the drones. All that may change, though, under the new law, which requires the FAA to allow the “safe integration” of UAVs into U.S. airspace.

If the FAA relaxes its rules, says Mark Brown, the civilian market for drones—and especially small, low-cost, tactical drones—could soon dwarf military sales, which in 2011 totaled more than three billion dollars. Brown, a former astronaut who is now an aerospace consultant in Dayton, Ohio, helps bring drone manufacturers and potential customers together. The success of military UAVs, he contends, has created “an appetite for more, more, more!” Brown’s PowerPoint presentation is called “On the Threshold of a Dream.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Unmanned drone used to patrol the U.S.-Canadian border. (U.S. Customs and Border Protection/AP).[end-div]

Measuring Antifragility

Nassim Nicholas Taleb, one of our favorite thinkers and writers here at theDiagonal, recently published Antifragile, the follow-up to his best-selling The Black Swan. In Antifragile Taleb argues that some things thrive when subjected to volatility, disorder and uncertainty; he labels this positive response to external stressors “antifragility.” (Ironically, the book was published by Random House.)

In his essay, excerpted below, Taleb summarizes the basic tenets of antifragility and the payoff we would gain from its empirical measurement. This would certainly represent a leap forward from our persistent and misguided focus on luck in research, relationships and business.

[div class=attrib]From Edge.org:[end-div]

Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, “aim”), that is, ones that rely on pre-set direction from formal science. This is a faux-debate: luck cannot lead to formal research policies; one cannot systematize, formalize, and program randomness. The driver is neither luck nor direction, but must be in the asymmetry (or convexity) of payoffs, a simple mathematical property that has lain hidden from the discourse, and the understanding of which can lead to precise research principles and protocols.

MISSING THE ASYMMETRY

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics—even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

The point we will be making here is that logically, neither trial and error nor “chance” and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress.

The beneficial properties have to reside in the type of exposure, that is, the payoff function and not in the “luck” part: there needs to be a significant asymmetry between the gains (as they need to be large) and the errors (small or harmless), and it is from such asymmetry that luck and trial and error can produce results. The general mathematical property of this asymmetry is convexity (which is explained in Figure 1); functions with larger gains than losses are nonlinear-convex and resemble financial options. Critically, convex payoffs benefit from uncertainty and disorder. The nonlinear properties of the payoff function, that is, convexity, allow us to formulate rational and rigorous research policies, and ones that allow the harvesting of randomness.
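Taleb's claim that convex payoffs benefit from uncertainty can be illustrated with a short Monte Carlo sketch (our toy model, not from the essay): take an option-like payoff max(x, 0), whose gains are unbounded but whose losses are capped at zero, and note that its expected value rises as volatility rises, even though the underlying variable's mean stays at zero.

```python
import random

random.seed(42)

def expected_payoff(sigma, trials=200_000):
    """Monte Carlo estimate of E[max(X, 0)] for X ~ Normal(0, sigma).
    max(x, 0) is convex, so more volatility raises the expectation."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, sigma), 0.0)
    return total / trials

calm = expected_payoff(1.0)  # low-volatility environment
wild = expected_payoff(3.0)  # high-volatility environment
# Analytically E[max(X, 0)] = sigma / sqrt(2*pi), so tripling the
# volatility roughly triples the expected payoff.
```

Replace max(x, 0) with a concave payoff such as min(x, 0) and the effect reverses: volatility then hurts, which is Taleb's picture of fragility.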

OPAQUE SYSTEMS AND OPTIONALITY

Further, it is in complex systems, ones in which we have little visibility of the chains of cause-consequences, that tinkering, bricolage, or similar variations of trial and error have been shown to vastly outperform the teleological—it is nature’s modus operandi. But tinkering needs to be convex; it is imperative. Take the most opaque of all, cooking, which relies entirely on the heuristics of trial and error, as it has not been possible for us to design a dish directly from chemical equations or reverse-engineer a taste from nutritional labels. We take hummus, add an ingredient, say a spice, taste to see if there is an improvement from the complex interaction, and retain if we like the addition or discard the rest. Critically we have the option, not the obligation to keep the result, which allows us to retain the upper bound and be unaffected by adverse outcomes.

This “optionality” is what is behind the convexity of research outcomes. An option allows its user to get more upside than downside as he can select among the results what fits him and forget about the rest (he has the option, not the obligation). Hence our understanding of optionality can be extended to research programs — this discussion is motivated by the fact that the author spent most of his adult life as an option trader. If we translate François Jacob’s idea into these terms, evolution is a convex function of stressors and errors —genetic mutations come at no cost and are retained only if they are an improvement. So are the ancestral heuristics and rules of thumb embedded in society; formed like recipes by continuously taking the upper-bound of “what works”. But unlike nature where choices are made in an automatic way via survival, human optionality requires the exercise of rational choice to ratchet up to something better than what precedes it —and, alas, humans have mental biases and cultural hindrances that nature doesn’t have. Optionality frees us from the straitjacket of direction, predictions, plans, and narratives. (To use a metaphor from information theory, if you are going to a vacation resort offering you more options, you can predict your activities by asking a smaller number of questions ahead of time.)
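The "option, not obligation" ratchet Taleb describes — propose a random variation, keep it only if it improves things, discard it otherwise — can be sketched in a few lines of Python (a toy illustration of ours, with a made-up "taste" function standing in for the hummus):

```python
import random

random.seed(7)

def tinker(quality, start, steps=1000, spread=0.5):
    """Trial and error with optionality: propose a random tweak,
    keep it only if quality improves, otherwise discard it.
    The result can only ratchet upward, retaining the upper bound."""
    best = start
    for _ in range(steps):
        candidate = best + random.uniform(-spread, spread)
        if quality(candidate) > quality(best):  # the option: keep or discard
            best = candidate
    return best

# A hypothetical "taste" function peaking at x = 2
taste = lambda x: -(x - 2.0) ** 2
result = tinker(taste, start=0.0)
# result converges near 2 with no model of *why* 2 tastes best
```

The key property is exactly the asymmetry Taleb names: each trial's downside is bounded (a rejected tweak costs one step), while its upside is kept forever.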

While getting a better recipe for hummus will not change the world, some results offer abnormally large benefits from discovery; consider penicillin or chemotherapy or potential clean technologies and similar high impact events (“Black Swans”). The discovery of the first antimicrobial drugs came on the heels of hundreds of systematic (convex) trials in the 1920s by such people as Domagk, whose research program consisted in trying out dyes without much understanding of the biological process behind the results. And unlike an explicit financial option, for which the buyer pays a fee to a seller and which hence tends to trade in a way that prevents undue profits, benefits from research are not zero-sum.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Antifragile by Nassim Nicholas Taleb, book cover. Courtesy of the author / Random House / Barnes & Noble.[end-div]

Distance to Europa: $2 billion and 14 years

Europa is Jupiter’s gravitationally tortured moon. Beneath its icy surface lie oceans of liquid water, which makes Europa a very interesting target for future missions looking for life beyond our planet. Unfortunately, NASA’s planned mission has yet to be funded. But should the agency (and taxpayers) come up with the estimated $2 billion to fund a spacecraft, we could well have a probe circling Europa by 2027.

[div class=attrib]From the Guardian:[end-div]

Nasa scientists have drawn up plans for a mission that could look for life on Europa, a moon of Jupiter that is covered in vast oceans of water under a thick layer of ice.

The Europa Clipper would be the first dedicated mission to the waterworld moon, if it gets approval for funding from Nasa. The project is set to cost $2bn.

“On Earth, everywhere where there’s liquid water, we find life,” said Robert Pappalardo, a senior research scientist at Nasa’s jet propulsion laboratory in California, who led the design of the Europa Clipper.

“The search for life in our solar system somewhat equates to the search for liquid water. When we ask the question where are the water worlds, we have to look to the outer solar system because there are oceans beneath the icy shells of the moons.”

Jupiter’s biggest moons such as Ganymede, Callisto and Europa are too far from the sun to gain much warmth from it, but have liquid oceans beneath their blankets of ice because the moons are squeezed and warmed up as they orbit the planet.

“We generally focus down on Europa as the most promising in terms of potential habitability because of its relatively thick ice shell, an ocean that is in contact with rock below, and that it’s probably geologically active today,” Pappalardo said at the annual meeting of the American Association for the Advancement of Science in Boston.

In addition, because Europa is bombarded by extreme levels of radiation, the moon is likely to be covered in oxidants at its surface. These molecules are created when water is ripped apart by energetic radiation and could be used by lifeforms as a type of fuel.

For several years scientists have been considering plans for a spacecraft that could orbit Europa, but this turned out to be too expensive for Nasa’s budgets. Over the past year Pappalardo has worked with colleagues at the applied physics lab at Johns Hopkins University to come up with the Europa Clipper.

The spacecraft would orbit Jupiter and make several flybys of Europa, in the same way that the successful Cassini probe did for Saturn’s moon Titan.

“That way we can get effectively global coverage of Europa – not quite as good as an orbiter but not bad for half the cost. We have a validated cost of $2bn over the lifetime of the mission, excluding the launch,” Pappalardo said.

A probe could be readied in time for launch around 2021 and would take between three and six years to arrive at Europa, depending on the rockets used.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Complex and beautiful patterns adorn the icy surface of Jupiter’s moon Europa, as seen in this color image intended to approximate how the satellite might appear to the human eye. Image Credit: NASA/JPL/Ted Stryk.[end-div]

RIP: Chief Innovation Officer

“Innovate or die” goes the business mantra. Embrace creativity or you and your company will fall by the wayside and wither into insignificance.

A leisurely skim through a couple of dozen TV commercials, print ads and online banners will reinforce the notion — we are surrounded by innovators.

Absolutely everyone is innovating: Subway innovates with a new type of sandwich; Campbell Soup innovates by bringing a new blend to market more quickly; Skyy vodka innovates by adding a splash of lemon flavoring; Mercedes innovates by adding blind spot technology in its car door mirrors; Delta Airlines innovates by adding an inch more legroom for weary fliers; Bank of America innovates by communicating with customers via Twitter; L’Oreal innovates by boosting lashes. Innovation, it seems, is everywhere, all the time.

Or is it?

There was a time when innovation meant radical, disruptive change: think movable type, printing, telegraphy, light bulb, mass production, photographic film, transistor, frozen food processing, television.

Now the word innovation is applied liberally to just about anything. Marketers and advertisers have co-opted it in service of coolness and an entrepreneurial halo. But overuse of the label, and its attachment to most new products and services, has greatly diminished its value. Rather than connoting disruptive change, innovation in business is now little more than a corporate cliché designed to market the coolness of an incremental improvement. So who needs a Chief Innovation Officer anymore? After all, we are all innovators now.

[div class=attrib]From the Wall Street Journal:[end-div]

Got innovation? Just about every company says it does.

Businesses throw around the term to show they’re on the cutting edge of everything from technology and medicine to snacks and cosmetics. Companies are touting chief innovation officers, innovation teams, innovation strategies and even innovation days.

But that doesn’t mean the companies are actually doing any innovating. Instead they are using the word to convey monumental change when the progress they’re describing is quite ordinary.

Like the once ubiquitous buzzwords “synergy” and “optimization,” innovation is in danger of becoming a cliché—if it isn’t one already.

“Most companies say they’re innovative in the hope they can somehow con investors into thinking there is growth when there isn’t,” says Clayton Christensen, a professor at Harvard Business School and the author of the 1997 book, “The Innovator’s Dilemma.”

A search of annual and quarterly reports filed with the Securities and Exchange Commission shows companies mentioned some form of the word “innovation” 33,528 times last year, which was a 64% increase from five years before that.

More than 250 books with “innovation” in the title have been published in the last three months, most of them dealing with business, according to a search of Amazon.com.

The definition of the term varies widely depending on whom you ask. To Bill Hickey, chief executive of Bubble Wrap’s maker, Sealed Air Corp., it means inventing a product that has never existed, such as packing material that inflates on delivery.

To Ocean Spray Cranberries Inc. CEO Randy Papadellis, it is turning an overlooked commodity, such as leftover cranberry skins, into a consumer snack like Craisins.

To Pfizer Inc.’s research and development head, Mikael Dolsten, it is extending a product’s scope and application, such as expanding the use of a vaccine for infants that is also effective in older adults.

Scott Berkun, the author of the 2007 book “The Myths of Innovation,” which warns about the dilution of the word, says that what most people call an innovation is usually just a “very good product.”

He prefers to reserve the word for civilization-changing inventions like electricity, the printing press and the telephone—and, more recently, perhaps the iPhone.

Mr. Berkun, now an innovation consultant, advises clients to ban the word at their companies.

“It is a chameleon-like word to hide the lack of substance,” he says.

Mr. Berkun tracks innovation’s popularity as a buzzword back to the 1990s, amid the dot-com bubble and the release of James M. Utterback’s “Mastering the Dynamics of Innovation” and Mr. Christensen’s “Dilemma.”

The word appeals to large companies because it has connotations of being agile and “cool,” like start-ups and entrepreneurs, he says.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Draisine, also called Laufmaschine (“running machine”), from around 1820. The Laufmaschine was invented by the German Baron Karl von Drais in Mannheim in 1817. Being the first means of transport to make use of the two-wheeler principle, the Laufmaschine is regarded as the archetype of the bicycle. Courtesy of Wikipedia.[end-div]

Anxiety, Fear and Wisdom

In a recent essay, author Jana Richman weaves her personal stories about anxiety together with Bertrand Russell’s salient observations on fear, with the desert Southwest as her colorful backdrop.

[div class=attrib]From the New York Times:[end-div]

On a cold, sunny day in early March, my husband, Steve, and I layered up and took ourselves out to our backyard: Grand Staircase Escalante National Monument. For a few days we had been spiraling downward through a series of miscommunications and tensions — the culmination of my rigorous dedication to fear, or what Bertrand Russell aptly coined “the tyranny of the habit of fear.” A fresh storm had dropped 10 inches of snow with little moisture, giving it an airy, crystallized texture that sprayed out in an arc with each footstep and made a shushing sound, as if it were speaking directly to me. Shush. Shush. Shush.

Moving into the elegant world of white-draped red rock is usually enough to strip our minds of the qualms that harass us, but on this particular day, Steve and I both stomped into the desert bearing a commitment to hang onto the somber roles we had adopted. Solemnity is difficult, however, when one is tumbling down hills of snow-covered, deep sand and slipping off steep angles of slickrock on one’s backside. Still, it took a good half-mile before we were convinced of our absurdity.

Such is the nature of the desert. If you persist in your gravity, the desert will take full advantage — it will have you falling over yourself as you trudge along carrying your blame and angst and fear; it will mock you until you literally and figuratively lighten up and conform to the place. The place will never conform to you. We knew that; that’s why we went. That’s why we always go to the desert when we’re stuck in a cycle of self-induced wretchedness.

“Fear,” Russell writes, “makes man unwise in the three great departments of human conduct: his dealings with nature, his dealings with other men, and his dealings with himself.”

I can attest to the truth of Russell’s words. I’ve spent many lifetime hours processing fear, and I’ve brought fear’s oppression into my marriage. Because fear is the natural state of my mind, I often don’t realize I’m spewing it into the atmosphere with my words and actions. The incident that drove us into the desert on that particular day was, in my mind, a simple expression of concern, a few “what will happen ifs”; in Steve’s mind, a paranoid rant. Upon reflection, I have to agree with his version.

A few months prior, Steve and I had decided upon a change in our lives: certainty in the form of a bi-weekly paycheck was traded for joy in the form of writing time. It wasn’t a rash decision; it was five years in the making. Yet, from the moment the last check was cashed, my fear began roiling, slowly at first, but soon popping and splashing out of its shallow container. My voiced concerns regarding homelessness and insolvency went considerably beyond probable, falling to the far side of remotely possible. In my world, that’s enough for worry, discussion, obsession, more discussion, and several nights of insomnia.

We had parked the truck at the “head of the rocks,” an understated description of a spot that allows a 360-degree view of red and white slickrock cut with deep gulches and painted with the sweeping wear of wind and water. The Grand Staircase Escalante National Monument is 1.9 million acres of land, much of it devoid of human intrusion on any given day. Before we moved to the small town of Escalante on the Monument’s border, we came here from our city home five hours away — alone or together — whenever life threatened to shut us down.

From the head of the rocks, we followed the old cream cellar road, a wagon trail of switchbacks carved into stone in the early 1900s. We could see our destination about two miles out — a smooth, jutting wall with a level run of sand at its base that would allow us to sit with our faces to the sun and our backs against the wall — a fitting spot.

Steve walked behind me in silence, but I knew his thoughts. My fear perplexes and disparages him. His acts of heroism should dispel my anxiety, but it persists beyond the reach of his love.  Yet, his love, too, persists.

Knowing I’ll pick up and read anything placed in my path, Steve had left Russell’s timeless collection of essays, “New Hopes for a Changing World,” published in 1951, five years before I was born, on the butcher block where I eat breakfast. I skimmed the table of contents until I reached three essays entitled “Fear,” “Fortitude,” and “Life Without Fear,” in which Russell writes about the pervasive and destructive nature of fear. One of the significant fears Russell writes about — a fear close to his own heart — is the fear of being unlovable, which, he writes, is self-fulfilling unless one gets out from under fear’s dominion. I’ve been testing Russell’s theory for the past eight years.

I’ve heard it said that all fear stems from the knowledge of our own mortality, and indeed, many of our social systems thrive by exploiting our fear of death and our desire to thwart it. But fear of death has never been my problem. To me, life, not death, holds the promise of misery.  When life is lived as a problem to be solved, death offers the ultimate resolution, the release of all fears, the moment of pure peace.

[div class=attrib]Read the entire article following the jump.[end-div]

Cluttered Desk, Cluttered Mind

Life coach Jayne Morris suggests that de-cluttering your desk, attic or garage can add positive energy to your personal and business life. Morris has coached numerous business leaders and celebrities in the art of clearing clutter.

[div class=attrib]From the Telegraph:[end-div]

According to a leading expert, having a cluttered environment reflects a cluttered mind and the act of tidying up can help you be more successful.

The advice comes from Jayne Morris, the resident “life coach” for NHS Online, who said it is no good just moving the mess around.

In order to clear the mind, unwanted items must be thrown away to free your “internal world”, she said.

Ms Morris, who claims to have coached everyone from celebrities to major business figures, said: “Clearing clutter from your desk has the power to transform your business.

“How? Because clutter in your outer environment is the physical manifestation of all the clutter going on inside of you.

“Clearing clutter has a ripple effect across your entire life, including your work.

“Having an untidy desk covered in clutter could be stopping you achieving the business success you want.”

She is adamant cleaning up will be a boon even though some of history’s biggest achievers lived and worked in notoriously messy conditions.

Churchill was considered untidy from boyhood throughout his life, from his office to his artist’s studio, and the lab where Alexander Fleming discovered penicillin was famously dishevelled.

Among the recommendations is that simply tidying a desk at work and clearing an overflowing filing cabinet will instantly have a positive impact on “your internal world.”

Anything that is no longer used should not be put into storage but thrown away completely.

Keeping something in the loft, garage or another part of the house does not help because it is still connected to the person “by tiny energetic cords”, she claims.

She said: “The things in your life that are useful to you, that add value to your life, that serve a current purpose are charged with positive energy that replenishes you and enriches your life.

“But the things that you are holding on to that you don’t really like, don’t ever use and don’t need anymore have the opposite effect on your energy. Things that no longer fit or serve you, drain your energy.”

Britain has long been a nation of hoarders, and a survey showed that more than a million people are compulsive about keeping their stuff.

Brain scans have also confirmed that victims of hoarding disorder have abnormal activity in regions of the brain involved in decision making – particularly in deciding what to do with objects that belong to them.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Still from Buried Alive Season 3, TLC.[end-div]

Psst! AIDS Was Created by the U.S. Government

Some believe that AIDS was created by the U.S. Government or bestowed by a malevolent god. Some believe that Neil Armstrong never set foot on the moon, while others believe that Nazis first established a moon base in 1942. Some believe that recent tsunamis were caused by the U.S. military, and that said military is hiding evidence of alien visits in Area 51, Nevada. The latest, of course, is the great climate-change conspiracy, which is apparently the creation of socialists seeking to destroy the United States. This conspiratorial thinking makes for good reality TV and presents wonderful opportunities for psychological research. Why, after all, in the face of seemingly insurmountable evidence, widespread consensus and fundamental scientific reasoning, do such ideas and their believers persist?

[div class=attrib]From Skeptical Science:[end-div]

There is growing evidence that conspiratorial thinking, also known as conspiracist ideation, is often involved in the rejection of scientific propositions. Conspiracist ideations tend to invoke alternative explanations for the nature or source of the scientific evidence. For example, among people who reject the link between HIV and AIDS, common ideations involve the belief that AIDS was created by the U.S. Government.

My colleagues and I published a paper recently that found evidence for the involvement of conspiracist ideation in the rejection of scientific propositions—from climate change to the link between tobacco and lung cancer, and between HIV and AIDS—among visitors to climate blogs. This was a fairly unsurprising result because it meshed well with previous research and the existing literature on the rejection of science. Indeed, it would have been far more surprising, from a scientific perspective, if the article had not found a link between conspiracist ideation and rejection of science.

Nonetheless, as some readers of this blog may remember, this article engendered considerable controversy.

The article also generated data.

Data, because for social scientists, public statements and publicly expressed ideas constitute data for further research. Cognitive scientists sometimes apply something called “narrative analysis” to understand how people, groups, or societies are organized and how they think.

In the case of the response to our earlier paper, we were struck by the way in which some of the accusations leveled against our paper were, well, somewhat conspiratorial in nature. We therefore decided to analyze the public response to our first paper with the hypothesis in mind that this response might also involve conspiracist ideation. We systematically collected utterances by bloggers and commenters, and we sought to classify them into various hypotheses leveled against our earlier paper. For each hypothesis, we then compared the public statements against a list of criteria for conspiracist ideation that was taken from the previous literature.

This follow-up paper was accepted a few days ago by Frontiers in Psychology, and a preliminary version of the paper is already available, for open access, here.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Area 51 – Warning sign near secret Area 51 base in Nevada. Courtesy of Wikipedia.[end-div]

First, Build A Blue Box; Second, Build Apple

Edward Tufte built the first little blue box in 1962. The blue box contained home-made circuitry and a tone generator that could place free calls over the phone network to anywhere in the world.

This electronic revelation spawned groups of “phone phreaks” (hackers) who would build their own blue boxes to fight Ma Bell (AT&T), illegally of course. The phreaks assumed suitably disguised names, such as Captain Crunch and Cheshire Cat, to hide from the long arm of the FBI.

This later caught the attention of a pair of new recruits to the subversive cause, Berkeley Blue and Oaf Tobar, who would go on to found Apple under their more common pseudonyms, Steve Wozniak and Steve Jobs. The rest, as the saying goes, is history.

Put it down to curiosity, an anti-authoritarian streak and a quest for constant improvement.

[div class=attrib]From Slate:[end-div]

One of the most heartfelt—and unexpected—remembrances of Aaron Swartz, who committed suicide last month at the age of 26, came from Yale professor Edward Tufte. During a speech at a recent memorial service for Swartz in New York City, Tufte reflected on his secret past as a hacker—50 years ago.

“In 1962, my housemate and I invented the first blue box,” Tufte said to the crowd. “That’s a device that allows for undetectable, unbillable long distance telephone calls. We played around with it and the end of our research came when we completed what we thought was the longest long-distance phone call ever made, which was from Palo Alto to New York … via Hawaii.”

Tufte was never busted for his youthful forays into phone hacking, also known as phone phreaking. He rose to become one of Yale’s most famous professors, a world authority on data visualization and information design. One can’t help but think that Swartz might have followed in the distinguished footsteps of a professor like Tufte, had he lived.

Swartz faced 13 felony charges and up to 35 years in prison for downloading 4.8 million academic articles from the digital repository JSTOR, using MIT’s network. In the face of the impending trial, Swartz—a brilliant young hacker and activist who was a key force behind many worthy projects, including the RSS 1.0 specification and Creative Commons—killed himself on Jan. 11.

“Aaron’s unique quality was that he was marvelously and vigorously different,” Tufte said, a tear in his eye, as he closed his speech. “There is a scarcity of that. Perhaps we can all be a little more different, too.”

Swartz was too young to be a phone phreak like Tufte. In our present era of Skype and smartphones, the old days of outsmarting Ma Bell with 2600 Hertz sine wave tones and homemade “blue boxes” seem quaint, charmingly retro. But there is a thread that connects these old-school phone hackers to Swartz—common traits that Tufte recognized. It’s not just that, like Swartz, many phone phreaks faced trumped-up charges (wire fraud, in their cases). The best of these proto-computer hackers possessed Swartz’s enterprising spirit, his penchant for questioning authority, and his drive to figure out how a complicated system works from the inside. They were nerds, they were misfits; like Swartz, they were a little more different.

In his new history of phone phreaking, Exploding the Phone, engineer and consultant Phil Lapsley details the story of the 1960s and 1970s culture of hackers who, like Tufte, devised numerous ways to outwit the phone system. The foreword of the book is by Steve Wozniak, co-founder of Apple—and, as it happens, an old-school hacker himself. Before Wozniak and Steve Jobs built Apple in the 1970s, they were phone phreaks. (Wozniak’s hacker name was Berkeley Blue; Jobs’ handle was Oaf Tobar.)

In 1971, Esquire published an article about phone phreaking called “Secrets of the Little Blue Box,” by Ron Rosenbaum (a Slate columnist). It chronicled a ragtag crew sporting names like Captain Crunch and the Cheshire Cat, who prided themselves on using ingenuity and rudimentary electronics to outsmart the many-tentacled monstrosities of Ma Bell and the FBI. A blind 22-year-old named Joe Engressia was one of the scene’s heroes; according to Rosenbaum, Engressia could whistle at exactly the right frequency to place a free phone call.

Wozniak, age 20 in ’71, devoured the now-legendary article. “You know how some articles just grab you from the first paragraph?” he wrote in his 2006 memoir, iWoz, quoted in Lapsley’s book. “Well, it was one of those articles. It was the most amazing article I’d ever read!” Wozniak was entranced by the way these hackers seemed so much like himself. “I could tell that the characters being described were really tech people, much like me, people who liked to design things just to see what was possible, and for no other reason, really.” Building a blue box—a device that could generate the same tones that the phone system used to route phone calls, in a certain sequence—required technical smarts, and Wozniak loved nerdy challenges. Plus, the payoff—and the potential for epic pranks—was irresistible. (Wozniak once used a blue box to call the Vatican; impersonating Henry Kissinger, he asked to talk to the pope.)

Wozniak immediately called Jobs, who was then a 17-year-old senior in high school. The friends drove to the technical library at Stanford’s Linear Accelerator Center to find a phone manual that listed tone frequencies. That same day, as Lapsley details in the book, Wozniak and Jobs bought analog tone generator kits, but were soon frustrated that the generators weren’t good enough for really high-quality phone phreaking.

Wozniak had a better, geekier idea: They needed to build their own blue boxes, but make them with digital circuits, which were more precise and easier to control than the usual analog ones. Wozniak and Jobs didn’t just build one blue box—they went on to build dozens of them, which they sold for about $170 apiece. In a way, their sophisticated, compact design foreshadowed the Apple products to come. Their digital circuitry incorporated several smart tricks, including a method to make the battery last longer. “I have never designed a circuit I was prouder of,” Wozniak says.
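The device Lapsley and Wozniak describe is, at bottom, a precision tone generator: a 2600 Hz tone to seize a long-distance trunk, followed by multi-frequency (MF) tone pairs spelling out the digits. As a rough illustration of the signals a blue box had to produce, here is a minimal Python sketch. The MF frequency pairs are the widely documented Bell System values; the sample rate, tone durations, and function names are illustrative assumptions, not anything taken from the article or from Wozniak’s actual design.

```python
import math

SAMPLE_RATE = 8000  # samples per second (telephone-grade audio)

# Bell System MF signaling: each symbol is a pair of tones drawn from
# {700, 900, 1100, 1300, 1500, 1700} Hz. KP starts a number, ST ends it.
MF_PAIRS = {
    "KP": (1100, 1700), "ST": (1500, 1700),
    "1": (700, 900),   "2": (700, 1100),  "3": (900, 1100),
    "4": (700, 1300),  "5": (900, 1300),  "6": (1100, 1300),
    "7": (700, 1500),  "8": (900, 1500),  "9": (1100, 1500),
    "0": (1300, 1500),
}

def tone(freqs, seconds):
    """Return float samples in [-1, 1]: the normalized sum of the given sine tones."""
    n = int(SAMPLE_RATE * seconds)
    return [
        sum(math.sin(2 * math.pi * f * i / SAMPLE_RATE) for f in freqs) / len(freqs)
        for i in range(n)
    ]

def dial(number):
    """Samples for a full blue-box sequence: 2600 Hz seize, then KP, digits, ST."""
    samples = tone((2600,), 1.0)              # seize the trunk
    samples += [0.0] * (SAMPLE_RATE // 2)     # pause while the trunk resets
    for symbol in ["KP", *number, "ST"]:
        samples += tone(MF_PAIRS[symbol], 0.075)            # 75 ms tone burst
        samples += [0.0] * int(SAMPLE_RATE * 0.05)          # 50 ms gap
    return samples

signal = dial("2025551234")  # a hypothetical number, for illustration only
```

The sketch hints at why Wozniak insisted on digital circuitry: generating two sine tones whose frequencies must each land within the switch’s tolerance is exactly the kind of precision that analog oscillators of the day drifted away from.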

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Exploding the Phone by Phil Lapsley, book cover. Courtesy of Barnes & Noble.[end-div]

Nordic Noir and Scandinavian Cool

Apparently the world once thought of the countries that make up the Scandinavian region as dull and boring. Nothing much happened in Norway, Sweden, Finland and Denmark besides endless winters, ABBA, Volvo and utopian socialist experiments. Not any longer. Over the last couple of decades this region has become a hotbed of artistic, literary and business creativity.

[div class=attrib]From the Economist:[end-div]

TWENTY YEARS AGO the Nordic region was a cultural backwater. Even the biggest cities were dead after 8pm. The restaurants offered meatballs or pale versions of Italian or French favourites. The region did come up with a few cultural icons such as Ingmar Bergman and Abba, and managed to produce world-class architects and designers even at the height of post-war brutalism. But the few successes served only to emphasise the general dullness.

The backwater has now turned into an entrepôt. Stockholm relishes its reputation as one of the liveliest cities in Europe (and infuriates its neighbours by billing itself as “the capital of Scandinavia”). Scandinavian crime novels have become a genre in their own right. Danish television shows such as “The Killing” and “Borgen” are syndicated across the world. Swedish music producers are fixtures in Hollywood. Copenhagen’s Noma is one of the world’s most highly rated restaurants and has brought about a food renaissance across the region.

Why has the land of the bland become a cultural powerhouse? Jonas Bonnier, CEO of the Bonnier Group, Sweden’s largest media company, thinks that it is partly because new technologies are levelling the playing field. Popular music was once dominated by British and American artists who were able to use all sorts of informal barriers to protect their position. Today, thanks to the internet, somebody sitting in a Stockholm attic can reach the world. Rovio’s Mikael Hed suggests that network effects are much more powerful in small countries: as soon as one writer cracks the global detective market, dozens of others quickly follow.

All true. But there is no point in giving people microphones if they have nothing to say. The bigger reason why the region’s writers and artists—and indeed chefs and game designers—are catching the world’s attention is that they are so full of vim. They are reinventing old forms such as the detective story or the evening meal but also coming up with entirely new forms such as video games for iPads.

The cultural renaissance is thus part of the other changes that have taken place in the region. A closed society that was dominated by a single political orthodoxy (social democracy) and by a narrow definition of national identity (say, Swedishness or Finnishness) is being shaken up by powerful forces such as globalisation and immigration. All the Nordics are engaged in a huge debate about their identity in a post-social democratic world. Think-tanks such as Denmark’s Cepos flaunt pictures of Milton Friedman in the same way that student radicals once flaunted pictures of Che Guevara. Writers expose the dark underbelly of the old social democratic regime. Chefs will prepare anything under the sun as long as it is not meatballs.

The region’s identity crisis is creating a multicultural explosion. The Nordics are scavenging the world for ideas. They continue to enjoy a love-hate relationship with America. They are discovering inspiration from their growing ethnic minorities but are also reaching back into their own cultural traditions. Swedish crime writers revel in the peculiarities of their culture. Danish chefs refuse to use foreign ingredients. A region that has often felt the need to apologise for its culture—those bloodthirsty Vikings! Those toe-curling Abba lyrics! Those naff fishermen’s jumpers!—is enjoying a surge of regional pride.

Blood and snow

Over the past decade Scandinavia has become the world’s leading producer of crime novels. The two Swedes who did more than anyone else to establish Nordic noir—Stieg Larsson and Henning Mankell—have both left the scene of crime. Larsson died of a heart attack in 2004 before his three books about a girl with a dragon tattoo became a global sensation. Mr Mankell consigned his hero, Kurt Wallander, to Alzheimer’s after a dozen bestsellers. But their books continue to be bought in their millions: “Dragon Tattoo” has sold more than 50m, and the Wallander books collectively even more.

A group of new writers, such as Jo Nesbo in Norway and Camilla Lackberg in Sweden, are determined to keep the flame burning. And the crime wave is spreading beyond adult fiction and the written word. Sweden’s Martin Widmark writes detective stories for children. Swedish and British television producers compete to make the best version of Wallander. “The Killing” established a new standard for televised crime drama.

The region has a long tradition of crime writing. Per Wahloo and Maj Sjowall, a Swedish husband-and-wife team, earned a dedicated following among aficionados with their police novels in the 1960s and 1970s. They also established two of Nordic noir’s most appealing memes. Their detective, Martin Beck, is an illness-prone depressive who gets to the truth by dint of relentless plodding. The ten Martin Beck novels present Sweden as a capitalist hellhole that can be saved only by embracing Soviet-style communism (the crime at the heart of the novels is the social democratic system’s betrayal of its promise).

Today’s crime writers continue to profit from these conventions. Larsson’s Sweden, for example, is a crypto-fascist state run by a conspiracy of psychopathic businessmen and secret-service agents. But today’s Nordic crime writers have two advantages over their predecessors. The first is that their hitherto homogenous culture is becoming more variegated and their peaceful society has suffered inexplicable bouts of violence (such as the assassination in 1986 of Sweden’s prime minister, Olof Palme, and in 2003 of its foreign minister, Anna Lindh, and Anders Breivik’s murderous rampage in Norway in 2011). Nordic noir is in part an extended meditation on the tension between the old Scandinavia, with its low crime rate and monochrome culture, and the new one, with all its threats and possibilities. Mr Mankell is obsessed by the disruption of small-town life by global forces such as immigration and foreign criminal gangs. Each series of “The Killing” focuses as much on the fears—particularly of immigrant minorities—that the killing exposes as it does on the crime itself.

The second advantage is something that Wahloo and Sjowall would have found repulsive: a huge industry complete with support systems and the promise of big prizes. Ms Lackberg began her career in an all-female crime-writing class. Mr Mankell wrote unremunerative novels and plays before turning to a life of crime. Thanks in part to Larsson, crime fiction is one of the region’s biggest exports: a brand that comes with a guarantee of quality and a distribution system that stretches from Stockholm to Hollywood.

Dinner in Copenhagen can come as a surprise to even the most jaded foodie. The dishes are more likely to be served on slabs of rock or pieces of wood than on plates. The garnish often takes the form of leaves or twigs. Many ingredients, such as sea cabbage or wild flowers, are unfamiliar, and the more familiar sort, such as pike, are often teamed with less familiar ones, such as unripe elderberries.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: ABBA, Eurovision, 1974. Courtesy of Time.[end-div]

MondayPoem: Wild Nights – Wild Nights!

Emily Dickinson has been much written about, but still remains enigmatic. Many of her peers thought her to be eccentric and withdrawn. Only after her death did the full extent of her prolific writing become apparent. To this day, her unique poetry is regarded as having ushered in a new era of personal observation and expression.

By Emily Dickinson

Wild Nights – Wild Nights!

Wild nights – Wild nights!
Were I with thee
Wild nights should be
Our luxury!

Futile – the winds –
To a Heart in port –
Done with the Compass –
Done with the Chart!

Rowing in Eden –
Ah – the Sea!
Might I but moor – tonight –
In thee!

[div class=attrib]Image: Daguerreotype of the poet Emily Dickinson, taken circa 1848. Courtesy of the Todd-Bingham Picture Collection and Family Papers, Yale University.[end-div]

Your Brain and Politics

New research out of the University of Exeter in Britain and the University of California, San Diego, shows that liberals and conservatives really do have different brains. In fact, activity in specific areas of the brain can be used to predict whether a person leans to the left or to the right with an accuracy of just under 83 percent. This means that a brain scan could more accurately predict your politics than the political persuasions of your parents (accurate around 70 percent of the time).

[div class=attrib]From Smithsonian:[end-div]

If you want to know people’s politics, tradition said to study their parents. In fact, the party affiliation of someone’s parents can predict the child’s political leanings around 70 percent of the time.

But new research, published yesterday in the journal PLOS ONE, suggests what mom and dad think isn’t the endgame when it comes to shaping a person’s political identity. Ideological differences between partisans may reflect distinct neural processes, and they can predict who’s right and who’s left of center with 82.9 percent accuracy, outperforming the “your parents pick your party” model. It also out-predicts another neural model based on differences in brain structure, which distinguishes liberals from conservatives with 71.6 percent accuracy.

The study matched publicly available party registration records with the names of 82 American participants whose risk-taking behavior during a gambling experiment was monitored by brain scans. The researchers found that liberals and conservatives don’t differ in the risks they do or don’t take, but their brain activity does vary while they’re making decisions.

The idea that the brains of Democrats and Republicans may be hard-wired to their beliefs is not new. Previous research has shown that during MRI scans, areas linked to broad social connectedness, which involves friends and the world at large, light up in Democrats’ brains. Republicans, on the other hand, show more neural activity in parts of the brain associated with tight social connectedness, which focuses on family and country.

Other scans have shown that brain regions associated with risk and uncertainty, such as the fear-processing amygdala, differ in structure in liberals and conservatives. And different architecture means different behavior. Liberals tend to seek out novelty and uncertainty, while conservatives exhibit strong changes in attitude to threatening situations. The former are more willing to accept risk, while the latter tends to have more intense physical reactions to threatening stimuli.

Building on this, the new research shows that Democrats exhibited significantly greater activity in the left insula, a region associated with social and self-awareness, during the task. Republicans, however, showed significantly greater activity in the right amygdala, a region involved in our fight-or-flight response system.

“If you went to Vegas, you won’t be able to tell who’s a Democrat or who’s a Republican, but the fact that being a Republican changes how your brain processes risk and gambling is really fascinating,” says lead researcher Darren Schreiber, a University of Exeter professor who’s currently teaching at Central European University in Budapest. “It suggests that politics alters our worldview and alters the way our brains process.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Sagittal brain MRI. Courtesy of Wikipedia.[end-div]

Pseudo-Science in Missouri and 2+2=5

Hot on the heels of recent successes by the Texas State Board of Education (SBOE) in revising history and science curricula, legislators in Missouri are planning to redefine commonly accepted scientific principles. Much like in Texas, the Missouri House is mandating that intelligent design be taught alongside evolution, in equal measure, in all the state’s schools. But, in a bid to take the lead in reversing thousands of years of scientific progress, Missouri plans to redefine the scientific framework itself. So, if you can’t make “intelligent design” fit the principles of accepted science, just change the principles themselves — first up, change the meanings of the terms “scientific hypothesis” and “scientific theory”.

We suspect that a couple of years from now, in Missouri, 2+2 will be redefined to equal 5, and that logic, deductive reasoning and experimentation will be replaced with mushy green peas.

[div class=attrib]From ars technica:[end-div]

Each year, state legislatures play host to a variety of bills that would interfere with science education. Most of these are variations on a boilerplate intended to get supplementary materials into classrooms criticizing evolution and climate change (or to protect teachers who do). They generally don’t mention creationism, but the clear intent is to sneak religious content into the science classrooms, as evidenced by previous bills introduced by the same lawmakers. Most of them die in the legislature (although the opponents of evolution have seen two successes).

The efforts are common enough that we don’t generally report on them. But every now and then a bill comes along that veers off this script. And late last month, the Missouri House started considering one that deviates in staggering ways. Instead of being quiet about its intent, it redefines science, provides a clearer definition of intelligent design than any of the idea’s advocates ever have, and mandates equal treatment of the two. In the process, it mangles things so badly that teachers would be prohibited from discussing Mendel’s laws.

Although even the Wikipedia entry for scientific theory includes definitions provided by the world’s most prestigious organizations of scientists, the bill’s sponsor Rick Brattin has seen fit to invent his own definition. And it’s a head-scratcher: “‘Scientific theory,’ an inferred explanation of incompletely understood phenomena about the physical universe based on limited knowledge, whose components are data, logic, and faith-based philosophy.” The faith or philosophy involved remain unspecified.

Brattin also mentions philosophy when he redefines hypothesis as, “a scientific theory reflecting a minority of scientific opinion which may lack acceptance because it is a new idea, contains faulty logic, lacks supporting data, has significant amounts of conflicting data, or is philosophically unpopular.” The reason for that becomes obvious when he turns to intelligent design, which he defines as a hypothesis. Presumably, he thinks it’s only a hypothesis because it’s philosophically unpopular, since his bill would ensure it ends up in the classrooms.

Intelligent design is roughly the concept that life is so complex that it requires a designer, but even its most prominent advocates have often been a bit wary about defining its arguments all that precisely. Not so with Brattin—he lists 11 concepts that are part of ID. Some of these are old-fashioned creationist claims, like the suggestion that mutations lead to “species degradation” and a lack of transitional fossils. But it also has some distinctive twists like the claim that common features, usually used to infer evolutionary relatedness, are actually a sign of parts re-use by a designer.

Eventually, the bill defines “standard science” as “knowledge disclosed in a truthful and objective manner and the physical universe without any preconceived philosophical demands concerning origin or destiny.” It then demands that all science taught in Missouri classrooms be standard science. But there are some problems with this that become apparent immediately. The bill demands that anything taught as scientific law have “no known exceptions.” That would rule out teaching Mendel’s laws, which have a huge variety of exceptions, such as when two genes are linked together on the same chromosome.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Seal of Missouri. Courtesy of Wikipedia.[end-div]

Grow Your Own… Heart

A timely article for Valentine’s Day. Researchers continue to make astonishing progress in cell biology and human genomics. So it should come as no surprise that growing a customized replacement heart in a lab from reprogrammed cells may one day be within reach.

[div class=attrib]From the Guardian:[end-div]

Every two minutes someone in the UK has a heart attack. Every six minutes, someone dies from heart failure. During an attack, the heart remodels itself and dilates around the site of the injury to try to compensate, but these repairs are rarely effective. If the attack does not kill you, heart failure later frequently will.

“No matter what other clinical interventions are available, heart transplantation is the only genuine cure for this,” says Paul Riley, professor of regenerative medicine at Oxford University. “The problem is there is a dearth of heart donors.”

Transplants have their own problems – successful operations require patients to remain on toxic, immune-suppressing drugs for life and their subsequent life expectancies are not usually longer than 20 years.

The solution, emerging from the laboratories of several groups of scientists around the world, is to work out how to rebuild damaged hearts. Their weapons of choice are reprogrammed stem cells.

These researchers have rejected the more traditional path of cell therapy that you may have read about over the past decade of hope around stem cells – the idea that stem cells could be used to create batches of functioning tissue (heart or brain or whatever else) for transplant into the damaged part of the body. Instead, these scientists are trying to understand what the chemical and genetic switches are that turn something into a heart cell or muscle cell. Using that information, they hope to programme cells at will, and help the body make repairs.

It is an exciting time for a technology that no one thought possible a few years ago. In 2007, Shinya Yamanaka showed it was possible to turn adult skin cells into embryonic-like stem cells, called induced pluripotent stem cells (iPSCs), using just a few chemical factors. His technique radically advanced stem cell biology, sweeping aside years of blockages due to the ethical objections about using stem cells from embryos. He won the Nobel prize in physiology or medicine for his work in October. Researchers have taken this a step further – directly turning one mature cell type to another without going through a stem cell phase.

And politicians are taking notice. At the Royal Society in November, in his first major speech on the Treasury’s ambitions for science and technology, the chancellor, George Osborne, identified regenerative medicine as one of eight areas of technology in which he wanted the UK to become a world leader. Earlier last year, the Lords science and technology committee launched an inquiry into the potential of regenerative medicine in the UK – not only the science but what regulatory obstacles there might be to turning the knowledge into medical applications.

At Oxford, Riley has spent almost a year setting up a £2.5m lab, funded as part of the British Heart Foundation’s Mending Broken Hearts appeal, to work out how to get heart muscle to repair itself. The idea is to expand the scope of the work that got Riley into the headlines last year after a high-profile paper published in the journal Nature in which he showed a means of repairing cells damaged during a heart attack in mice. That work involved in effect turning the clock back in a layer of cells on the outside of the heart, called the epicardium, making adult cells think they were embryos again and thereby restarting their ability to repair.

During the development of the embryo, the epicardium turns into the many types of cells seen in the heart and surrounding blood vessels. After the baby is born this layer of cells loses its ability to transform. By infusing the epicardium with the protein thymosin β4 (Tβ4), Riley’s team found the once-dormant layer of cells was able to produce new, functioning heart cells. Overall, the treatment led to a 25% improvement in the mouse heart’s ability to pump blood after a month compared with mice that had not received the treatment.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google Search.[end-div]

Vaccinia – Prototype Viral Cancer Killer

The illustrious Vaccinia virus may well have an Act Two in its future.

For Act One, over the last 150 years or so, it was used to vaccinate much of the world’s population against smallpox, helping to eliminate the disease in the United States by the early 1970s and worldwide by 1980.

Now, researchers are using it to target cancer.

First, take the Vaccinia virus — a relative of the smallpox virus. Second, re-engineer the virus to inhibit its growth in normal cells. Third, add a gene to the virus that stimulates the immune system. Fourth, set it to work on tumor cells and watch. While such research has been going on for a couple of decades, this enhanced approach of attacking cancer cells with a viral immune-system stimulant shows early promise.

[div class=attrib]From ars technica:[end-div]

For roughly 20 years, scientists have been working to engineer a virus that will attack cancer. The basic idea is sound, and every few years there have been some promising-looking results, with tumors shrinking dramatically in response to an infection. But the viruses never seem to go beyond small trials, and the companies making them always seem to focus on different things.

Over the weekend, Nature Medicine described some further promising results, this time with a somewhat different approach to ensuring that the virus leads to the death of cancer cells: if the virus doesn’t kill the cells directly, it revs up the immune system to attack them. It’s not clear this result will make it to a clinic, but it provides a good opportunity to review the general approach of treating cancer with viruses.

The basic idea is to leverage decades of work on some common viruses. This research has identified a variety of mutations that keep viruses from growing in normal cells, which means that if you inject the virus into a healthy individual, it won’t be able to infect any of their cells.

But cancer cells are different, as they carry a series of mutations of their own. In some cases, these mutations compensate for the problems in the virus. To give one example, the p53 protein normally induces aberrant cells to undergo an orderly death called apoptosis. It also helps shut down the growth of viruses in a cell, which is why some viruses encode a protein that inhibits p53. Cancer cells tend to damage or eliminate their copies of p53 so that it doesn’t cause them to undergo apoptosis.

So imagine a virus with its p53 inhibitor deleted. It can’t grow in normal cells since they have p53 around, but it can grow in cancer cells, which have eliminated their p53. The net result should be a cancer-killing virus. (A great idea, but this is one of the viruses that got dropped after preliminary trials.)

In the new trial, the virus in question takes a similar approach. The virus, vaccinia (a relative of smallpox used for vaccines), carries a gene that is essential for it to make copies of itself. Researchers have engineered a version without that gene, ensuring it can’t grow in normal cells (which have their equivalent of the gene shut down). Cancer cells need to reactivate the gene, meaning they present a hospitable environment for the mutant virus.

But the researchers added another trick by inserting a gene for a molecule that helps recruit immune cells (the awkwardly named granulocyte-macrophage colony-stimulating factor, or GM-CSF). The immune system plays an important role in controlling cancer, but it doesn’t always generate a full-scale response to cancer. By adding GM-CSF, the virus should help bring immune cells to the site of the cancer and activate them, creating a more aggressive immune response to any cells that survive viral infection.

The study here was simply checking the tolerance for two different doses of the virus. In general, the virus was tolerated well. Most subjects reported a short bout of flu-like symptoms, but only one subject out of 30 had a more severe response.

However, the tumors did respond. Based on placebo-controlled trials, the average survival time of patients like the ones in the trial would have been expected to be about two to four months. Instead, the low-dose group had a survival time of nearly seven months; for the higher dose group, that number went up to over a year. Two of those treated were still alive after more than two years. Imaging of tumors showed lots of dead cells, and tests of the immune system indicate the virus had generated a robust response.

[div class=attrib]Read the entire article after the leap.[end-div]

[div class=attrib]Image: An electron micrograph of a Vaccinia virus. Courtesy of Wikipedia.[end-div]

Do Corporations Go to Heaven When They Die?

Perhaps heaven is littered with the disembodied, collective consciousness of Woolworth, Circuit City, Borders and Blockbuster. Similarly, it may be possible that Enron and Lehman Brothers, a little less fortunate due to the indiscretions of their leaders, have found their corporate souls forever tormented in business hell. And what of the high-tech start-ups that come and go in the beat of a hummingbird’s wing? Where are Webvan, Flooz, Gowalla, Beenz, Loopt, Kozmo, eToys and Pets.com? Are they spinning endlessly somewhere between the gluttons (third circle) and the heretics (sixth circle) in Dante’s concentric hell? And where are the venture capitalists, and where will Burger King and Apple find themselves when they eventually pass to the other side?

This may all seem rather absurd. It is. Yet evangelical corporate crusaders such as Hobby Lobby and Chick-fil-A would have us treat their corporations just as we do mere (im)mortals. Where is all this nonsense heading? Well, to the Supreme Court of the United States, of course.

[div class=attrib]From the New York Times:[end-div]

David Green, who built a family picture-framing business into a 42-state chain of arts and crafts stores, prides himself on being the model of a conscientious Christian capitalist. His 525 Hobby Lobby stores forsake Sunday profits to give employees their biblical day of rest. The company donates to Christian counseling services and buys holiday ads that promote the faith in all its markets. Hobby Lobby has been known to stick decals over Botticelli’s naked Venus in art books it sells.

And the company’s in-house health insurance does not cover morning-after contraceptives, which Green, like many of his fellow evangelical Christians, regards as chemical abortions.

“We’re Christians,” he says, “and we run our business on Christian principles.”

This has put Hobby Lobby at the leading edge of a legal battle that poses the intriguing question: Can a corporation have a conscience? And if so, is it protected by the First Amendment?

The Affordable Care Act, a k a Obamacare, requires that companies with more than 50 full-time employees offer health insurance, including coverage for birth control. Churches and other purely religious organizations are exempt. The Obama administration, in an unrequited search for compromise, has also proposed to excuse nonprofit organizations such as hospitals and universities if they are affiliated with religions that preach the evil of contraception. You might ask why a clerk at Notre Dame or an orderly at a Catholic hospital should be denied the same birth control coverage provided to employees of secular institutions. You might ask why institutions that insist they are like everyone else when it comes to applying for federal grants get away with being special when it comes to federal health law. Good questions. You will find the unsatisfying answers in the Obama handbook of political expediency.

But these concessions are not enough to satisfy the religious lobbies. Evangelicals and Catholics, cheered on by anti-abortion groups and conservative Obamacare-haters, now want the First Amendment freedom of religion to be stretched to cover an array of for-profit commercial ventures, Hobby Lobby being the largest litigant. They are suing to be exempted on the grounds that corporations sometimes embody the faith of the individuals who own them.

“The legal case” for the religious freedom of corporations “does not start with, ‘Does the corporation pray?’ or ‘Does the corporation go to heaven?’ ” said Kyle Duncan, general counsel of the Becket Fund for Religious Liberty, which is representing Hobby Lobby. “It starts with the owner.” For owners who have woven religious practice into their operations, he told me, “an exercise of religion in the context of a business” is still an exercise of religion, and thus constitutionally protected.

The issue is almost certain to end up in the Supreme Court, where the betting is made a little more interesting by a couple of factors: six of the nine justices are Catholic, and this court has already ruled, in the Citizens United case, that corporations are protected by the First Amendment, at least when it comes to freedom of speech. Also, we know that at least four members of the court don’t think much of Obamacare.

In lower courts, advocates of the corporate religious exemption have won a few and lost a few. (Hobby Lobby has lost so far, and could eventually face fines of more than $1 million a day for defying the law. The company’s case is now before the Court of Appeals for the 10th Circuit.)

You can feel some sympathy for David Green’s moral dilemma, and even admire him for practicing what he preaches, without buying the idea that la corporation, c’est moi. Despite the Supreme Court’s expansive view of the First Amendment, Hobby Lobby has a high bar to get over — as it should.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Gluttony: The circle itself is a living abomination, a hellish digestive system revealing horrific faces with mouths ready to devour the gluttons over and over for eternity. Picture: Mihai Marius Mihu / Rex Features / Telegraph. To see more of the nine circles of hell from Dante’s Inferno recreated in Lego by artist Mihai Mihu jump here.[end-div]

Better Relaxation Equals Higher Productivity

A growing body of research shows that employees who are well rested and relaxed are generally more productive. Isn’t this just common sense? Yet the notion that employees who are happier and less stressed outside the workplace can be more effective within it still seems to elude most employers.

[div class=attrib]From the New York Times:[end-div]

THINK for a moment about your typical workday. Do you wake up tired? Check your e-mail before you get out of bed? Skip breakfast or grab something on the run that’s not particularly nutritious? Rarely get away from your desk for lunch? Run from meeting to meeting with no time in between? Find it nearly impossible to keep up with the volume of e-mail you receive? Leave work later than you’d like, and still feel compelled to check e-mail in the evenings?

More and more of us find ourselves unable to juggle overwhelming demands and maintain a seemingly unsustainable pace. Paradoxically, the best way to get more done may be to spend more time doing less. A new and growing body of multidisciplinary research shows that strategic renewal — including daytime workouts, short afternoon naps, longer sleep hours, more time away from the office and longer, more frequent vacations — boosts productivity, job performance and, of course, health.

“More, bigger, faster.” This, the ethos of the market economies since the Industrial Revolution, is grounded in a mythical and misguided assumption — that our resources are infinite.

Time is the resource on which we’ve relied to get more accomplished. When there’s more to do, we invest more hours. But time is finite, and many of us feel we’re running out, that we’re investing as many hours as we can while trying to retain some semblance of a life outside work.

Although many of us can’t increase the working hours in the day, we can measurably increase our energy. Science supplies a useful way to understand the forces at play here. Physicists understand energy as the capacity to do work. Like time, energy is finite; but unlike time, it is renewable. Taking more time off is counterintuitive for most of us. The idea is also at odds with the prevailing work ethic in most companies, where downtime is typically viewed as time wasted. More than one-third of employees, for example, eat lunch at their desks on a regular basis. More than 50 percent assume they’ll work during their vacations.

In most workplaces, rewards still accrue to those who push the hardest and most continuously over time. But that doesn’t mean they’re the most productive.

Spending more hours at work often leads to less time for sleep and insufficient sleep takes a substantial toll on performance. In a study of nearly 400 employees, published last year, researchers found that sleeping too little — defined as less than six hours each night — was one of the best predictors of on-the-job burn-out. A recent Harvard study estimated that sleep deprivation costs American companies $63.2 billion a year in lost productivity.

The Stanford researcher Cheri D. Mah found that when she got male basketball players to sleep 10 hours a night, their performances in practice dramatically improved: free-throw and three-point shooting each increased by an average of 9 percent.

Daytime naps have a similar effect on performance. When night shift air traffic controllers were given 40 minutes to nap — and slept an average of 19 minutes — they performed much better on tests that measured vigilance and reaction time.

Longer naps have an even more profound impact than shorter ones. Sara C. Mednick, a sleep researcher at the University of California, Riverside, found that a 60- to 90-minute nap improved memory test results as fully as did eight hours of sleep.

MORE vacations are similarly beneficial. In 2006, the accounting firm Ernst & Young did an internal study of its employees and found that for each additional 10 hours of vacation employees took, their year-end performance ratings from supervisors (on a scale of one to five) improved by 8 percent. Frequent vacationers were also significantly less likely to leave the firm.

As athletes understand especially well, the greater the performance demand, the greater the need for renewal. When we’re under pressure, however, most of us experience the opposite impulse: to push harder rather than rest. This may explain why a recent survey by Harris Interactive found that Americans left an average of 9.2 vacation days unused in 2012 — up from 6.2 days in 2011.

The importance of restoration is rooted in our physiology. Human beings aren’t designed to expend energy continuously. Rather, we’re meant to pulse between spending and recovering energy.

[div class=attrib]Read the entire article following the jump.[end-div]

Geoengineering As a Solution to Climate Change

Experimental physicist David Keith has a plan: dump hundreds of thousands of tons of atomized sulfuric acid into the upper atmosphere; watch the acid particles reflect additional sunlight; wait for global temperatures to drop. Many of Keith’s peers think this geoengineering scheme is crazy, not least because of its possible unknown and unmeasured side effects, but that hasn’t stopped a healthy debate. One thing is becoming increasingly clear — humans need to take collective action.

[div class=attrib]From Technology Review:[end-div]

Here is the plan. Customize several Gulfstream business jets with military engines and with equipment to produce and disperse fine droplets of sulfuric acid. Fly the jets up around 20 kilometers—significantly higher than the cruising altitude for a commercial jetliner but still well within their range. At that altitude in the tropics, the aircraft are in the lower stratosphere. The planes spray the sulfuric acid, carefully controlling the rate of its release. The sulfur combines with water vapor to form sulfate aerosols, fine particles less than a micrometer in diameter. These get swept upward by natural wind patterns and are dispersed over the globe, including the poles. Once spread across the stratosphere, the aerosols will reflect about 1 percent of the sunlight hitting Earth back into space. Increasing what scientists call the planet’s albedo, or reflective power, will partially offset the warming effects caused by rising levels of greenhouse gases.

The author of this so-called geoengineering scheme, David Keith, doesn’t want to implement it anytime soon, if ever. Much more research is needed to determine whether injecting sulfur into the stratosphere would have dangerous consequences such as disrupting precipitation patterns or further eating away the ozone layer that protects us from damaging ultraviolet radiation. Even thornier, in some ways, are the ethical and governance issues that surround geoengineering—questions about who should be allowed to do what and when. Still, Keith, a professor of applied physics at Harvard University and a leading expert on energy technology, has done enough analysis to suspect it could be a cheap and easy way to head off some of the worst effects of climate change.

According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft.

One of the startling things about Keith’s proposal is just how little sulfur would be required. A few grams of it in the stratosphere will offset the warming caused by a ton of carbon dioxide, according to his estimate. And even the amount that would be needed by 2070 is dwarfed by the roughly 50 million metric tons of sulfur emitted by the burning of fossil fuels every year. Most of that pollution stays in the lower atmosphere, and the sulfur molecules are washed out in a matter of days. In contrast, sulfate particles remain in the stratosphere for a few years, making them more effective at reflecting sunlight.
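The scaling in these figures can be checked with a little arithmetic. A rough sketch follows: the tonnages, jet count, and annual cost are taken from the article, while the 3 g value is an assumed midpoint for the article's "a few grams" of sulfur per ton of carbon dioxide.

```python
# Back-of-the-envelope check of the article's solar-geoengineering figures.
# All constants except GRAMS_PER_TON_CO2 come from the article itself.

TONS_2040 = 250_000        # metric tons of sulfuric acid per year by 2040
COST_2040 = 700e6          # annual cost in dollars by 2040
JETS_2040 = 11             # aircraft required by 2040

# Implied delivery cost per metric ton lifted to the stratosphere
cost_per_ton = COST_2040 / TONS_2040
print(f"~${cost_per_ton:,.0f} per metric ton delivered")  # ~$2,800

# Tonnage each jet must carry per year
tons_per_jet = TONS_2040 / JETS_2040
print(f"~{tons_per_jet:,.0f} tons per jet per year")

# If a few grams of sulfur offset the warming from one ton of CO2
# (assume 3 g/ton), the roughly one million tons injected per year by
# 2070 would offset warming equivalent to:
GRAMS_PER_TON_CO2 = 3.0            # assumption, not from the article
tons_sulfur_2070 = 1_000_000
co2_offset_tons = tons_sulfur_2070 * 1e6 / GRAMS_PER_TON_CO2
print(f"~{co2_offset_tons / 1e9:.0f} billion tons of CO2 offset")
```

Even under these crude assumptions, the numbers make Keith's point: the sulfur required is tiny compared with both current sulfur pollution and the cumulative CO2 whose warming it would mask.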

The idea of using sulfate aerosols to offset climate warming is not new. Crude versions of the concept have been around at least since a Russian climate scientist named Mikhail Budyko proposed the idea in the mid-1970s, and more refined descriptions of how it might work have been discussed for decades. These days the idea of using sulfur particles to counteract warming—often known as solar radiation management, or SRM—is the subject of hundreds of papers in academic journals by scientists who use computer models to try to predict its consequences.

But Keith, who has published on geoengineering since the early 1990s, has emerged as a leading figure in the field because of his aggressive public advocacy for more research on the technology—and his willingness to talk unflinchingly about how it might work. Add to that his impeccable academic credentials—last year Harvard lured him away from the University of Calgary with a joint appointment in the school of engineering and the Kennedy School of Government—and Keith is one of the world’s most influential voices on solar geoengineering. He is one of the few who have done detailed engineering studies and logistical calculations on just how SRM might be carried out. And if he and his collaborator James Anderson, a prominent atmospheric chemist at Harvard, gain public funding, they plan to conduct some of the first field experiments to assess the risks of the technique.

Leaning forward from the edge of his chair in a small, sparse Harvard office on an unusually warm day this winter, he explains his urgency. Whether or not greenhouse-gas emissions are cut sharply—and there is little evidence that such reductions are coming—”there is a realistic chance that [solar geoengineering] technologies could actually reduce climate risk significantly, and we would be negligent if we didn’t look at that,” he says. “I’m not saying it will work, and I’m not saying we should do it.” But “it would be reckless not to begin serious research on it,” he adds. “The sooner we find out whether it works or not, the better.”

The overriding reason why Keith and other scientists are exploring solar geoengineering is simple and well documented, though often overlooked: the warming caused by atmospheric carbon dioxide buildup is for all practical purposes irreversible, because the climate change is directly related to the total cumulative emissions. Even if we halt carbon dioxide emissions entirely, the elevated concentrations of the gas in the atmosphere will persist for decades. And according to recent studies, the warming itself will continue largely unabated for at least 1,000 years. If we find in, say, 2030 or 2040 that climate change has become intolerable, cutting emissions alone won’t solve the problem.

“That’s the key insight,” says Keith. While he strongly supports cutting carbon dioxide emissions as rapidly as possible, he says that if the climate “dice” roll against us, that won’t be enough: “The only thing that we think might actually help [reverse the warming] in our lifetime is in fact geoengineering.”

[div class=attrib]Read the entire article following the jump.[end-div]

From Sea to Shining Sea – By Rail

Now that air travel has become well and truly commoditized and, for most of us, a nightmare, it’s time, again, to revisit the romance of rail. After all, the elitist romance of air travel passed away 40-50 years ago. Now all we are left with is parking trauma at the airport; endless lines at check-in, security, the gate, and while boarding and disembarking; inane airport announcements and beeping golf carts; coughing, tweeting passengers crammed shoulder to shoulder in far-too-small seats; poor-quality air and poor-quality service in the cabin. It’s even dangerous to open the shade and look out of the aircraft window, for fear of waking a cranky neighbor or, more calamitous still, washing out the in-seat displays showing the latest reality TV videos.

Some of you, surely, still pine for a quiet and calming ride across the country, taking in the local sights at a more leisurely pace. Alfred Twu, who helped define the 2008 high-speed rail proposal for California, would have us zooming across the entire United States in trains again. So it may not be a leisurely ride — think more like 200-300 miles per hour — but it may well bring us closer to what we truly miss when suspended at 30,000 ft. We can’t wait.

[div class=attrib]From the Guardian:[end-div]

I created this US High Speed Rail Map as a composite of several proposed maps from 2009, when government agencies and advocacy groups were talking big about rebuilding America’s train system.

Having worked on getting California’s high speed rail approved in the 2008 elections, I’ve long sung the economic and environmental benefits of fast trains.

This latest map comes more from the heart. It speaks more to bridging regional and urban-rural divides than about reducing airport congestion or even creating jobs, although it would likely do that as well.

Instead of detailing construction phases and service speeds, I took a little artistic license and chose colors and linked lines to celebrate America’s many distinct but interwoven regional cultures.

The response to my map this week went above and beyond my wildest expectations, sparking vigorous political discussion between thousands of Americans ranging from off-color jokes about rival cities to poignant reflections on how this kind of rail network could change long-distance relationships and the lives of faraway family members.

Commenters from New York and Nebraska talked about “wanting to ride the red line”. Journalists from Chattanooga, Tennessee (population 167,000) asked to reprint the map because they were excited to be on the map. Hundreds more shouted “this should have been built yesterday”.

It’s clear that high speed rail is more than just a way to save energy or extend economic development to smaller cities.

More than mere steel wheels on tracks, high speed rail shrinks space and brings farflung families back together. It keeps couples in touch when distant career or educational opportunities beckon. It calls to adventure and travel. It is duct tape and string to reconnect politically divided regions. Its colorful threads weave new American Dreams.

That said, while trains still live large in the popular imagination, decades of limited service have left some blind spots in the collective consciousness. I’ll address a few here:

Myth: High speed rail is just for big city people.
Fact: Unlike airplanes or buses, which must make detours to drop off passengers at intermediate points, trains glide into and out of stations with little delay, pausing for under a minute to unload passengers from multiple doors. Trains can, and do, effectively serve small towns and suburbs, whereas bus service increasingly bypasses them.

I do hear the complaint: “But it doesn’t stop in my town!” In the words of one commenter, “the train doesn’t need to stop on your front porch.” Local transit, rental cars, taxis, biking, and walking provide access to and from stations.

Myth: High speed rail is only useful for short distances.
Fact: Express trains that skip stops allow lines to serve many intermediate cities while still providing some fast end-to-end service. Overnight sleepers with lie-flat beds where one boards around dinner and arrives after breakfast have been successful in the US before and are in use on China’s newest 2,300km high speed line.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: U.S. High Speed Rail System proposal. Alfred Twu created this map to showcase what could be possible.[end-div]

The Death of Scientific Genius

There is a certain school of thought that asserts that scientific genius is a thing of the past. After all, we haven’t seen the recent emergence of pivotal talents such as Galileo, Newton, Darwin or Einstein. Is it possible that fundamentally new ways of looking at our world — a new mathematics or a new physics — are no longer possible?

In a recent essay in Nature, Dean Keith Simonton, professor of psychology at UC Davis, argues that such fundamental and singular originality is a thing of the past.

[div class=attrib]From ars technica:[end-div]

Einstein, Darwin, Galileo, Mendeleev: the names of the great scientific minds throughout history inspire awe in those of us who love science. However, according to Dean Keith Simonton, a psychology professor at UC Davis, the era of the scientific genius may be over. In a comment paper published in Nature last week, he explains why.

The “scientific genius” Simonton refers to is a particular type of scientist; their contributions “are not just extensions of already-established, domain-specific expertise.” Instead, “the scientific genius conceives of a novel expertise.” Simonton uses words like “groundbreaking” and “overthrow” to illustrate the work of these individuals, explaining that they each contributed to science in one of two major ways: either by founding an entirely new field or by revolutionizing an already-existing discipline.

Today, according to Simonton, there just isn’t room to create new disciplines or overthrow the old ones. “It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline,” he writes. Furthermore, most scientific fields aren’t in the type of crisis that would enable paradigm shifts, according to Thomas Kuhn’s classic view of scientific revolutions. Simonton argues that instead of finding big new ideas, scientists currently work on the details in increasingly specialized and precise ways.

And to some extent, this argument is demonstrably correct. Science is becoming more and more specialized. The largest scientific fields are currently being split into smaller sub-disciplines: microbiology, astrophysics, neuroscience, and paleogeography, to name a few. Furthermore, researchers have more tools and knowledge with which to home in on increasingly precise issues and questions than they did a century—or even a decade—ago.

But other aspects of Simonton’s argument are a matter of opinion. To me, separating scientists who “build on what’s already known” from those who “alter the foundations of knowledge” is a false dichotomy. Not only is it possible to do both, but it’s impossible to establish—or even make a novel contribution to—a scientific field without piggybacking on the work of others to some extent. After all, it’s really hard to solve the problems that require new solutions if other people haven’t done the work to identify them. Plate tectonics, for example, was built on observations that were already widely known.

And scientists aren’t done altering the foundations of knowledge, either. In science, as in many other walks of life, we don’t yet know everything we don’t know. Twenty years ago, exoplanets were hypothetical. Dark energy, as far as we knew, didn’t exist.

Simonton points out that “cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors” rather than a single great mind. This is almost certainly true, especially in genomics and physics. However, it’s this collaboration and cooperation between scientists, and between fields, that has helped science progress past where we ever thought possible. While Simonton uses “hybrid” fields like astrophysics and biochemistry to illustrate his argument that there is no room for completely new scientific disciplines, I see these fields as having room for growth. Here, diverse sets of ideas and methodologies can mix and lead to innovation.

Simonton is quick to assert that the end of scientific genius doesn’t mean science is at a standstill or that scientists are no longer smart. In fact, he argues the opposite: scientists are probably more intelligent now, since they must master more theoretical work, more complicated methods, and more diverse disciplines. In fact, Simonton himself would like to be wrong; “I hope that my thesis is incorrect. I would hate to think that genius in science has become extinct,” he writes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Einstein 1921 by F. Schmutzer. Courtesy of Wikipedia.[end-div]

Printing Human Cells

The most fundamental innovation tends to happen at the intersection of disciplines. So, what do you get if you cross 3-D printing technology with embryonic stem cell research? Well, you get a device that can print lines of cells with similar functions, such as heart muscle or kidney cells. Welcome to the new world of biofabrication. The science-fiction future seems ever closer.

[div class=attrib]From Scientific American:[end-div]

Imagine if you could take living cells, load them into a printer, and squirt out a 3D tissue that could develop into a kidney or a heart. Scientists are one step closer to that reality, now that they have developed the first printer for embryonic human stem cells.

In a new study, researchers from the University of Edinburgh have created a cell printer that spits out living embryonic stem cells. The printer was capable of printing uniform-size droplets of cells gently enough to keep the cells alive and maintain their ability to develop into different cell types. The new printing method could be used to make 3D human tissues for testing new drugs, grow organs, or ultimately print cells directly inside the body.

Human embryonic stem cells (hESCs) are obtained from human embryos and can develop into any cell type in an adult person, from brain tissue to muscle to bone. This attribute makes them ideal for use in regenerative medicine — repairing, replacing and regenerating damaged cells, tissues or organs.

In a lab dish, hESCs can be placed in a solution that contains the biological cues that tell the cells to develop into specific tissue types, a process called differentiation. The process starts with the cells forming what are called “embryoid bodies.” Cell printers offer a means of producing embryoid bodies of a defined size and shape.

In the new study, the cell printer was made from a modified CNC machine (a computer-controlled machining tool) outfitted with two “bio-ink” dispensers: one containing stem cells in a nutrient-rich soup called cell medium and another containing just the medium. These embryonic stem cells were dispensed through computer-operated valves, while a microscope mounted to the printer provided a close-up view of what was being printed.

The two inks were dispensed in layers, one on top of the other to create cell droplets of varying concentration. The smallest droplets were only two nanoliters, containing roughly five cells.
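As a rough sanity check on those figures, one can back-calculate the suspension concentration implied by an average of about five cells in a two-nanoliter droplet. The helper below is purely illustrative and is not part of the study:

```python
# Back-of-the-envelope check: what suspension concentration puts
# roughly 5 cells into a 2-nanoliter droplet?

NL_PER_ML = 1e6  # 1 milliliter = 1,000,000 nanoliters

def cells_per_ml(cells_per_droplet, droplet_nl):
    """Concentration (cells/mL) implied by an average droplet load."""
    return cells_per_droplet / droplet_nl * NL_PER_ML

conc = cells_per_ml(cells_per_droplet=5, droplet_nl=2)
print(f"{conc:.1e} cells/mL")  # → 2.5e+06 cells/mL
```

In other words, the figures quoted are consistent with a suspension on the order of a few million cells per milliliter, a typical density for cell-culture work.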

The cells were printed onto a dish containing many small wells. The dish was then flipped over so the droplets hung from the wells, allowing the stem cells to form clumps inside each one. (The printer lays down the cells in precisely sized droplets and in a pattern that is optimal for differentiation.)

Tests revealed that more than 95 percent of the cells were still alive 24 hours after being printed, suggesting they had not been killed by the printing process. More than 89 percent of the cells were still alive three days later, and also tested positive for a marker of their pluripotency — their potential to develop into different cell types.

Biomedical engineer Utkan Demirci, of Harvard University Medical School and Brigham and Women’s Hospital, has done pioneering work in printing cells, and thinks the new study is taking it in an exciting direction. “This technology could be really good for high-throughput drug testing,” Demirci told LiveScience. One can build mini-tissues from the bottom up, using a repeatable, reliable method, he said. Building whole organs is the long-term goal, Demirci said, though he cautioned that it “may be quite far from where we are today.”

[div class=attrib]Read the entire article after the leap.[end-div]

[div class=attrib]Image: 3D printing with embryonic stem cells. Courtesy of Alan Faulkner-Jones et al./Heriot-Watt University.[end-div]

A Peek inside Lichtenstein’s Head

Residents and visitors to London are fortunate — they are bombarded by the rich sights, sounds and smells of one of the world’s great cities. One such sight is Tate Modern, an ex-power station, now an iconic home to some really good art. In fact, they’re hosting what promises to be a great exhibit soon — a retrospective of Roy Lichtenstein from February 21 to May 27.

[div class=attrib]From the Telegraph:[end-div]

Black paintwork, white brickwork, in tree-lined Greenwich Village. We’re spitting distance from Bleecker, whose elongated vowels once made music for Simon and Garfunkel and Steely Dan. When the floodwaters of the nearby Hudson inched upward and east during Hurricane Sandy, they ceased their creep yards from the steps outside.

Inside are the wood floors and fireplace of the area’s typical brownstone, but the cosy effect ends when an alcove ‘bookcase’ turns revolving door, stairway leading downwards. It’s straight from the pages of Agatha Christie, even Indiana Jones.

This is one of two entries (the other far less thrilling) to the cavernous room beneath that was once Roy Lichtenstein’s studio. The house above was used as a bolthole for visiting friends and family, ensuring he could work undisturbed, day in, day out. His schedule was rigorous: 10 to 6, with 90 minutes for lunch.

The building is now home to the Lichtenstein Foundation, where every reference to his work, even wrapping paper, is assiduously filed away alongside the artist’s sketchbooks, scrapbooks and working materials. The studio is set up as it was when he was alive. Charts by the sink show dots and lines in every size, colour and combination. The walls have wooden racks designed to tip forward, preventing paint drip. One of his vast murals still hangs there – an incongruous combination of Etruscan meets Henry Moore meets a slice of Swiss cheese.

Beside a scalpel-scored worktable stands the paint-splattered stool at which the artist sat while drafting and redrafting his compositions. And this is the thing about Lichtenstein. His finished works look so effortless, so free of their maker’s mark, that we rarely think of the hours, methods and materials that went into their production. He sought to erase all trace of the selective artist engaged in difficult work. He is as apt to slip through our pressing fingers, as one observer put it, as drops of liquid mercury.

Roy Fox Lichtenstein had a long, uncommonly successful career, even if he did spend most of it in his studio rather than out basking in its rewards. With a retrospective of his work – the first since his death from pneumonia in 1997 aged 73 – opening at the Tate this month comes the chance to assess the painterly approach behind the Pop-inspired sheen, and it isn’t so hands-off after all.

Lichtenstein, born and raised in 1930s Manhattan, began his creative career at a time when Abstract Expressionism reigned supreme, emotional work predicated on a belief that each work is impossible to repeat. Artists sought to impress upon their public a unique signature that would reveal their inner sensibility. Brushwork, the hand-drawn line – these were the lauded aims.

Now, exiting the woodwork, were artists like Claes Oldenburg and Andy Warhol, using banal subjects to skewer such bloated clichés. The Pop crew drew plugs, step-on trash cans, dollar bills and Don Draper’s fizzy saviour, Alka Seltzer. But while most still used a grainy, obviously hand-drawn hatching or line to convey realism, Lichtenstein went a step further.

“I’d always wanted to know the difference between a mark that was art and one that wasn’t,” he said, “so I chose among the crudest types of illustration – product packaging, mail order catalogues.” It provided the type of drawing that was most opposite to individual expression, and its lack of nuance appealed greatly. “I don’t care what a cup of coffee looks like,” he said. “I only care about how it’s drawn.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ohhh…Alright… by Roy Lichtenstein, 1964. Courtesy of Roy Lichtenstein Foundation / Wikipedia.[end-div]


Geeks As Guardians of (Some of) Our Civil Liberties

It’s interesting to ponder what would have been if the internet and social media had been around during those more fractious times in Seneca Falls, Selma and Stonewall. Perhaps these tools would have helped accelerate progress.

[div class=attrib]From Technology Review:[end-div]

A decade-plus of anthropological fieldwork among hackers and like-minded geeks has led me to the firm conviction that these people are building one of the most vibrant civil liberties movements we’ve ever seen. It is a culture committed to freeing information, insisting on privacy, and fighting censorship, which in turn propels wide-ranging political activity. In the last year alone, hackers have been behind some of the most powerful political currents out there.

Before I elaborate, a brief word on the term “hacker” is probably in order. Even among hackers, it provokes debate. For instance, on the technical front, a hacker might program, administer a network, or tinker with hardware. Ethically and politically, the variability is just as prominent. Some hackers are part of a transgressive, law-breaking tradition, their activities opaque and below the radar. Other hackers write open-source software and pride themselves on access and transparency. While many steer clear of political activity, an increasingly important subset rise up to defend their productive autonomy, or engage in broader social justice and human rights campaigns.

Despite their differences, there are certain websites and conferences that bring the various hacker clans together. Like any political movement, it is internally diverse but, under the right conditions, individuals with distinct abilities will work in unison toward a cause.

Take, for instance, the reaction to the Stop Online Piracy Act (SOPA), a far-reaching copyright bill meant to curtail piracy online. SOPA was unraveled before being codified into law due to a massive and elaborate outpouring of dissent driven by the hacker movement.

The linchpin was a “Blackout Day”—a Web-based protest of unprecedented scale. To voice their opposition to the bill, on January 17, 2012, nonprofits, some big Web companies, public interest groups, and thousands of individuals momentarily removed their websites from the Internet and thousands of other citizens called or e-mailed their representatives. Journalists eventually wrote a torrent of articles. Less than a week later, in response to these stunning events, SOPA and PIPA, its counterpart in the Senate, were tabled (see “SOPA Battle Won, but War Continues”).

The victory hinged on the broad base of support cultivated by hackers and geeks. The participation of corporate giants like Google, respected Internet personalities like Jimmy Wales, and the civil liberties organization EFF was crucial to its success. But the geek and hacker contingent was palpably present, and included, of course, Anonymous. Since 2008, activists have rallied under this banner to initiate targeted demonstrations, publicize various wrongdoings, leak sensitive data, engage in digital direct action, and provide technology assistance for revolutionary movements.

As part of the SOPA protests, Anonymous churned out videos and propaganda posters and provided constant updates on several prominent Twitter accounts, such as Your Anonymous News, which are brimming with followers. When the blackout ended, corporate players naturally receded from the limelight and went back to work. Anonymous and others, however, continue to fight for Internet freedoms.

In fact, just the next day, on January 18, 2012, federal authorities orchestrated the takedown of the popular file-sharing site MegaUpload. The company’s gregarious and controversial founder Kim Dotcom was also arrested in a dramatic early morning raid in New Zealand. The removal of this popular website was received ominously by Anonymous activists: it seemed to confirm that if bills like SOPA become law, censorship would become a far more common fixture on the Internet. Even though no court had yet found Kim Dotcom guilty of piracy, his property was still confiscated and his website knocked off the Internet.

As soon as the news broke, Anonymous coordinated its largest distributed denial of service campaign to date. It took down a slew of websites, including the homepage of Universal Music, the FBI, the U.S. Copyright Office, the Recording Industry Association of America, and the Motion Picture Association of America.

[div class=attrib]Read the entire article after the jump.[end-div]

Light Breeze Signals the Winds of Change

The gods of Norse legend are surely turning slowly in their graves. A Reykjavik, Iceland, court recently granted a 15-year-old the right to use her given name. Her first name, “Blaer,” means “light breeze” in Icelandic, and until the ruling she was not permitted to use it under Iceland’s strict cultural preservation laws. So, before you name your next child Shoniqua or Te’o or Cruise, pause for a few moments to think how lucky you are that you live elsewhere (with apologies to our readers in Iceland).

[div class=attrib]From the Guardian:[end-div]

A 15-year-old Icelandic girl has been granted the right to legally use the name given to her by her mother, despite the opposition of authorities and Iceland’s strict law on names.

Reykjavik District Court ruled Thursday that the name “Blaer” can be used. It means “light breeze.”

The decision overturns an earlier rejection by Icelandic authorities who declared it was not a proper feminine name. Until now, Blaer Bjarkardottir had been identified simply as “Girl” in communications with officials.

“I’m very happy,” she said after the ruling. “I’m glad this is over. Now I expect I’ll have to get new identity papers. Finally I’ll have the name Blaer in my passport.”

Like a handful of other countries, including Germany and Denmark, Iceland has official rules about what a baby can be named. Names are supposed to fit Icelandic grammar and pronunciation rules — choices like Carolina and Christa are not allowed because the letter “c” is not part of Iceland’s alphabet.

Blaer’s mother, Bjork Eidsdottir, had fought for the right for the name to be recognized. The court ruling means that other girls will also be allowed to use the name in Iceland.

In an interview earlier this year, Eidsdottir said she did not know the name “Blaer” was not on the list of accepted female names when she gave it to her daughter. The name was rejected because the panel viewed it as a masculine name that was inappropriate for a girl.

The court found, based on testimony and other evidence, that the name could be used by both males and females and that Blaer had a right to her own name under Iceland’s constitution and Europe’s human rights conventions. It rejected the government’s argument that her request should be denied to protect the Icelandic language.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Odin holds bracelets and leans on his spear while looking towards the völva in Völuspá. Gesturing, the völva holds a spoon and sits beside a steaming kettle. Published in Gjellerup, Karl (1895). Courtesy of Wikipedia.[end-div]

Beware North Korea, Google is Watching You

This week Google refreshed its maps of North Korea. What was previously a blank canvas with only the country’s capital — Pyongyang — visible now boasts roads, hotels, monuments and even some North Korean internment camps. While this is not the first detailed map of the secretive state, it is an important milestone in Google’s quest to map us all.

[div class=attrib]From the Washington Post:[end-div]

Until Tuesday, North Korea appeared on Google Maps as a near-total white space — no roads, no train lines, no parks and no restaurants. The only thing labeled was the capital city, Pyongyang.

This all changed when Google, on Tuesday, rolled out a detailed map of one of the world’s most secretive states. The new map labels everything from Pyongyang’s subway stops to the country’s several city-sized gulags, as well as its monuments, hotels, hospitals and department stores.

According to a Google blog post, the maps were created by a group of volunteer “citizen cartographers,” through an interface known as Google Map Maker. That program — much like Wikipedia — allows users to submit their own data, which is then fact-checked by other users, and sometimes altered many times over. Similar processes were used in other once-unmapped countries like Afghanistan and Burma.

In the case of North Korea, those volunteers worked from outside of the country, beginning in 2009. They used information that was already public, compiling details from existing analog maps, satellite images, or other Web-based materials. Much of the information was already available on the Internet, said Hwang Min-woo, 28, a volunteer mapmaker from Seoul who worked for two years on the project.

North Korea was the last country virtually unmapped by Google, but other — even more detailed — maps of the North existed before this. Most notable is a map created by Curtis Melvin, who runs the North Korea Economy Watch blog and spent years identifying thousands of landmarks in the North: tombs, textile factories, film studios, even rumored spy training locations. Melvin’s map is available as a downloadable Google Earth file.

Google’s map is important, though, because it is so readily accessible. The map is unlikely to have an immediate influence in the North, where Internet use is restricted to all but a handful of elites. But it could prove beneficial for outside analysts and scholars, providing an easy-to-access record of North Korea’s provinces, roads and landmarks, as well as hints about its many unseen horrors.

[div class=attrib]Read the entire article and check out more maps after the jump.[end-div]

Sales Performance and Extroversion

There is a common urban legend that to be successful in most endeavors one needs to be an extrovert. In business, many of us are led to believe that all successful CEOs and corporate titans are extroverts. We also tend to think that to be a top-flight salesperson one needs to be an out-and-out party animal. Well, it is a myth, now debunked by the most comprehensive meta-study (a study of studies) to date on extroversion and business performance.

[div class=attrib]From the Washington Post:[end-div]

Spend a day with any leader in any organization, and you’ll quickly discover that the person you’re shadowing, whatever his or her official title or formal position, is actually in sales. These leaders are often pitching customers and clients, of course. But they’re also persuading employees, convincing suppliers, sweet-talking funders or cajoling a board. At the core of their exalted work is a less glamorous truth: Leaders sell.

So what kind of personality makes the best salesperson — and therefore, presumably, the most effective leader?

Most of us would say extroverts. These wonderfully gregarious folks, we like to think, have the right stuff for the role. They’re at ease in social settings. They know how to strike up conversations. They don’t shrink from making requests. Little wonder, then, that scholars such as Michael Mount of the University of Iowa and others have shown that hiring managers select for this trait when assembling a sales force.

The conventional view that extroverts make the finest salespeople is so accepted that we’ve overlooked one teensy flaw: There’s almost no evidence it’s actually true.

When social scientists have examined the relationship between extroverted personalities and sales success — that is, how often the cash register rings — they’ve found the link to be, at best, flimsy. For instance, one of the most comprehensive investigations, a meta-analysis of 35 studies of nearly 4,000 salespeople, found that the correlation between extroversion and sales performance was essentially zero (0.07, to be exact).

Does this mean instead that introverts, the soft-spoken souls more at home in a study carrel than on a sales call, are more effective? Not at all.

The answer, in new research from Adam Grant, the youngest tenured professor at the University of Pennsylvania’s Wharton School of Management, is far more intriguing. In a study that will be published later this year in the journal Psychological Science, Grant collected data from sales representatives at a software company. He began by giving reps an often-used personality assessment that measures introversion and extroversion on a 1-to-7 scale, with 1 being most introverted and 7 being most extroverted.

Then he tracked their performance over the next three months. The introverts fared worst; they earned average revenue of $120 per hour. The extroverts performed slightly better, pulling in $125 per hour. But neither did nearly as well as a third group: the ambiverts.

Ambi-whats?

Ambiverts, a term coined by social scientists in the 1920s, are people who are neither extremely introverted nor extremely extroverted. Think back to that 1-to-7 scale that Grant used. Ambiverts aren’t 1s or 2s, but they’re not 6s or 7s either. They’re 3s, 4s and 5s. They’re not quiet, but they’re not loud. They know how to assert themselves, but they’re not pushy.

[div class=attrib]Read the entire article following the jump.[end-div]

International Art English

Yes, it’s official. There really is a subset of the Queen’s English for the contemporary art scene — dubbed International Art English (IAE). If you’ve visited a gallery over the last couple of decades you may be familiar with this type of language on press releases and wall tags. It uses multisyllabic words in breathless, flowery, billowy sentences; high-brow phraseology replete with pretentious insider nods and winks; it’s often enthusiastically festooned with adverbs and esoteric adjectives, in apparently random but clear juxtaposition. So, it’s rather like the preceding sentence. Will IAE become as pervasive as International Sport English – you know, that subset of language increasingly spoken, in the same accent, by international sports celebrities? Time will tell.

[div class=attrib]From the Guardian:[end-div]

The Simon Lee Gallery in Mayfair is currently showing work by the veteran American artist Sherrie Levine. A dozen small pink skulls in glass cases face the door. A dozen small bronze mirrors, blandly framed but precisely arranged, wink from the walls. In the deep, quiet space of the London gallery, shut away from Mayfair’s millionaire traffic jams, all is minimal, tasteful and oddly calming.

Until you read the exhibition hand-out. “The artist brings the viewer face to face with their own preconceived hierarchy of cultural values and assumptions of artistic worth,” it says. “Each mirror imaginatively propels its viewer forward into the seemingly infinite progression of possible reproductions that the artist’s practice engenders, whilst simultaneously pulling them backwards in a quest for the ‘original’ source or referent that underlines Levine’s oeuvre.”

If you’ve been to see contemporary art in the last three decades, you will probably be familiar with the feelings of bafflement, exhaustion or irritation that such gallery prose provokes. You may well have got used to ignoring it. As Polly Staple, art writer and director of the Chisenhale Gallery in London, puts it: “There are so many people who come to our shows who don’t even look at the programme sheet. They don’t want to look at any writing about art.”

With its pompous paradoxes and its plagues of adverbs, its endless sentences and its strained rebellious poses, much of this promotional writing serves mainly, it seems, as ammunition for those who still insist contemporary art is a fraud. Surely no one sensible takes this jargon seriously?

David Levine and Alix Rule do. “Art English is something that everyone in the art world bitches about all the time,” says Levine, a 42-year-old American artist based in New York and Berlin. “But we all use it.” Three years ago, Levine and his friend Rule, a 29-year-old critic and sociology PhD student at Columbia University in New York, decided to try to anatomise it. “We wanted to map it out,” says Levine, “to describe its contours, rather than just complain about it.”

They christened it International Art English, or IAE, and concluded that its purest form was the gallery press release, which – in today’s increasingly globalised, internet-widened art world – has a greater audience than ever. “We spent hours just printing them out and reading them to each other,” says Levine. “We’d find some super-outrageous sentence and crack up about it. Then we’d try to understand the reality conveyed by that sentence.”

Next, they collated thousands of exhibition announcements published since 1999 by e-flux, a powerful New York-based subscriber network for art-world professionals. Then they used some language-analysing software called Sketch Engine, developed by a company in Brighton, to discover what, if anything, lay behind IAE’s great clouds of verbiage.

Their findings were published last year as an essay in the voguish American art journal Triple Canopy; it has since become one of the most widely and excitedly circulated pieces of online cultural criticism. It is easy to see why. Levine and Rule write about IAE in a droll, largely jargon-free style. They call it “a unique language” that has “everything to do with English, but is emphatically not English. [It] is oddly pornographic: we know it when we see it.”

IAE always uses “more rather than fewer words”. Sometimes it uses them with absurd looseness: “Ordinary words take on non-specific alien functions. ‘Reality,’ writes artist Tania Bruguera, ‘functions as my field of action.'” And sometimes it deploys words with faddish precision: “Usage of the word speculative spiked unaccountably in 2009; 2011 saw a sudden rage for rupture; transversal now seems poised to have its best year ever.”

Through Sketch Engine, Rule and Levine found that “the real” – used as a portentous, would-be philosophical abstract noun – occurred “179 times more often” in IAE than in standard English. In fact, in its declarative, multi-clause sentences, and in its odd combination of stiffness and swagger, they argued that IAE “sounds like inexpertly translated French”. This was no coincidence, they claimed, having traced the origins of IAE back to French post-structuralism and the introduction of its slippery ideas and prose style into American art writing via October, the New York critical journal founded in 1976. Since then, IAE had spread across the world so thoroughly that there was even, wrote Rule and Levine, an “IAE of the French press release … written, we can only imagine, by French interns imitating American interns imitating American academics imitating French academics”.
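Sketch Engine does far more, but the core comparison behind a claim like “179 times more often” is just relative frequency: occurrences per million tokens in one corpus divided by occurrences per million in another. The toy corpora and counts below are invented for illustration only:

```python
import re
from collections import Counter

def per_million(word, corpus):
    """Frequency of `word` per million tokens of `corpus`."""
    tokens = re.findall(r"[a-z]+", corpus.lower())
    counts = Counter(tokens)
    return counts[word] / len(tokens) * 1_000_000

# Toy stand-ins for the e-flux press releases and a reference corpus.
iae = ("the work interrogates the real and the space of the real "
       "transversal dialogue ruptures the real")
reference = "the cat sat on the mat and looked at the real world outside"

ratio = per_million("real", iae) / per_million("real", reference)
print(f"'real' is {ratio:.1f}x more frequent in the IAE sample")
```

With real corpora of millions of press-release tokens on one side and a general-English corpus on the other, the same ratio is what yields figures like the 179-fold inflation of “the real.”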

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Monkeys as Judges of Art, 1889, by Gabriel Cornelius von Max. Courtesy of Wikipedia / Public Domain.[end-div]