All posts by Mike

Declining and Disparate Life Expectancy in the U.S.

Social scientists are not certain of the causes, but the sobering numbers speak for themselves: life expectancy for white women without a high school diploma is 74 years, while that for women with at least a college degree is 84 years; for white men the comparable life expectancies are 66 years versus 80 years.

[div class=attrib]From the New York Times:[end-div]

For generations of Americans, it was a given that children would live longer than their parents. But there is now mounting evidence that this enduring trend has reversed itself for the country’s least-educated whites, an increasingly troubled group whose life expectancy has fallen by four years since 1990.

Researchers have long documented that the most educated Americans were making the biggest gains in life expectancy, but now they say mortality data show that life spans for some of the least educated Americans are actually contracting. Four studies in recent years identified modest declines, but a new one that looks separately at Americans lacking a high school diploma found disturbingly sharp drops in life expectancy for whites in this group. Experts not involved in the new research said its findings were persuasive.

The reasons for the decline remain unclear, but researchers offered possible explanations, including a spike in prescription drug overdoses among young whites, higher rates of smoking among less educated white women, rising obesity, and a steady increase in the number of the least educated Americans who lack health insurance.

The steepest declines were for white women without a high school diploma, who lost five years of life between 1990 and 2008, said S. Jay Olshansky, a public health professor at the University of Illinois at Chicago and the lead investigator on the study, published last month in Health Affairs. By 2008, life expectancy for black women without a high school diploma had surpassed that of white women of the same education level, the study found.

White men lacking a high school diploma lost three years of life. Life expectancy for both blacks and Hispanics of the same education level rose, the data showed. But blacks over all do not live as long as whites, while Hispanics live longer than both whites and blacks.

“We’re used to looking at groups and complaining that their mortality rates haven’t improved fast enough, but to actually go backward is deeply troubling,” said John G. Haaga, head of the Population and Social Processes Branch of the National Institute on Aging, who was not involved in the new study.

The five-year decline for white women rivals the catastrophic seven-year drop for Russian men in the years after the collapse of the Soviet Union, said Michael Marmot, director of the Institute of Health Equity in London.

[div class=attrib]Read the entire article after the jump.[end-div]

Your Proximity to Fast Food

A striking map that shows how close or far you are from a McDonald’s. If you love fast food, then the Eastern U.S. is the place for you. On the other hand, if you crave McDistance, then you may want to move to the Nevada desert, the wilds of Idaho, the Rocky Mountains or the plains of the Dakotas. The map is based on 2009 data.

[div class=attrib]Read more details about this cool map after the jump.[end-div]

[div class=attrib]Map courtesy of Guardian / Stephen Von Worley, Data Pointed.[end-div]

Bicyclist Tribes

If you ride a bike (as in, bicycle) you will find that you probably belong to a specific tribe of bicyclists — and you’re being observed by bicyclist watchers! Read on to find out if you’re a Roadie or a Beach Cruiser, or if you belong to one of the other tribes. Of course, some are quite simply in an exclusive “maillot jaune” tribe of their own.

[div class=attrib]From Wall Street Journal:[end-div]

Bird watching is a fine hobby for those with the time and inclination to traipse into nature, but the thrill of spotting different species of bicyclists can be just as rewarding. Why travel to Argentina to find a black-breasted plovercrest when one can spy a similarly plumed “Commuter” at the neighborhood Starbucks? No need to squint into binoculars or get up at the crack of dawn, either—bicyclists are out and about at all hours.

Bicyclist-watching has become much more interesting in recent years as the number of two-wheeled riders has grown. High gas prices, better bicycles, concern about the environment, looking cool—they’re all contributing factors. And with proliferation has come specialization. People don’t just “ride” bikes anymore: They commute or race or cruise, with each activity spawning corresponding gear and attitudes. Those in the field categorize cyclists into groups known as “bike tribes.” Instead of ducks, hawks and water fowl, bicyclologists might speak of Roadies, Cyclocrossers and Beach Cruisers.

To identify a bike tribe, note distinguishing marks, patterns and habits. Start with the dominant color and materials of a cyclist’s clothing. For example, garish jerseys and Lycra shorts indicate a Roadie, while padded gloves, mud-spattered jackets and black cleats are the territory of Cyclocrossers. Migration patterns are revealing. Observe the speed of travel and the treatment of other cyclists. Does the cyclist insist on riding amid cars even when wide bicycle paths are available? Probably a Roadie. Is the cyclist out in the pouring rain? Sounds like a Commuter. The presence of juveniles is telling, too; only a few tribes travel with offspring.

The Roadie

No bike tribe is more common in the United States than the Roadie. Their mien is sportiness and “performance” their goal. Roadies love passing other bike riders; they get annoyed when they have to dodge pedestrians walking with dogs or small children; they often ride in the middle of the road. They tend to travel in packs and spend time in small bicycle shops.

The Commuter

Commuters view a bicycle first and foremost as a means of transportation. They don’t ride without a destination. It’s easy to confuse Commuters with other tribes because others will sometimes use their bicycles to get to work. Even more challenging, Commuters come in all shapes and sizes and ride all different types of bicycles. But there are some distinguishing behaviors. Commuters almost always travel alone. They tend to wear drabber clothing than other tribes. Some adopt a smug, I’m-saving-the-world attitude, which is apparent in the way they glare at motorists. Commuters are most visible during rush hour.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Bradley Wiggins, Winner 2012 Tour de France.[end-div]

Mr. Tesla, Meet Mr. Blaine

A contemporary showman puts the inventions of another to the test with electrifying results.

[tube]irAYUU_6VSc[/tube]

[div class=attrib]From the New York Times:[end-div]

David Blaine, the magician and endurance artist, is ready for more pain. With the help of the Liberty Science Center, a chain-mail suit and an enormous array of Tesla electrical coils, he plans to stand atop a 20-foot-high pillar for 72 straight hours, without sleep or food, while being subjected to a million volts of electricity.

When Mr. Blaine performs “Electrified” on a pier in Hudson River Park, the audience there as well as viewers in London, Beijing, Tokyo and Sydney, Australia, will take turns controlling which of the seven coils are turned on, and at what intensity. They will also be able to play music by producing different notes from the coils. The whole performance, on Pier 54 near West 13th Street, will be shown live at www.youtube.com/electrified.

[div class=attrib]Read more after the jump. Read more about Nikola Tesla here.[end-div]

Social Media and Vanishing History

Social media is great for notifying members of one’s circle about events in the here and now. Of course, most events turn out to be rather trivial, of the “what I ate for dinner” kind. However, social media also has a role in spreading word of more momentous social and political events; the Arab Spring comes to mind.

But, while Twitter and its peers may be a boon for those who live in the present moment and need to transmit their current status, it seems that our social networks are letting go of the past. Will history become lost and irrelevant to the Twitter generation?

A terrifying thought.

[div class=attrib]From Technology Review:[end-div]

On 25 January 2011, a popular uprising began in Egypt that led to the overthrow of the country’s brutal president and to the first truly free elections. One of the defining features of this uprising and of others in the Arab Spring was the way people used social media to organise protests and to spread news.

Several websites have since begun the task of curating this content, which is an important record of events and how they unfolded. That led Hany SalahEldeen and Michael Nelson at Old Dominion University in Norfolk, Virginia, to take a deeper look at the material to see how many of the shared links were still live.

What they found has serious implications. SalahEldeen and Nelson say a significant proportion of the websites that this social media points to has disappeared. And the same pattern occurs for other culturally significant events, such as the H1N1 virus outbreak, Michael Jackson’s death and the Syrian uprising.

In other words, our history, as recorded by social media, is slowly leaking away.

Their method is straightforward. SalahEldeen and Nelson looked for tweets on six culturally significant events that occurred between June 2009 and March 2012. They then filtered the URLs these tweets pointed to and checked to see whether the content was still available on the web, either in its original form or in an archived form.

They found that the older the social media, the more likely its content was to be missing. In fact, they found an almost linear relationship between time and the percentage lost.

The numbers are startling. They say that 11 per cent of the social media content had disappeared within a year and 27 per cent within 2 years. Beyond that, SalahEldeen and Nelson say the world loses 0.02 per cent of its culturally significant social media material every day.

That’s a sobering thought.
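The availability check SalahEldeen and Nelson describe (take the URLs shared in tweets about an event and test whether each still resolves, either live or in a public archive) is simple to sketch. Below is a minimal, hypothetical illustration in Python: the sample URLs are placeholders and this is not the authors’ actual pipeline, though the Wayback Machine availability endpoint it queries is a real public API.

```python
import json
import urllib.parse
import urllib.request

def is_live(url, timeout=10):
    """Return True if the URL still responds with a non-error HTTP status."""
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

def is_archived(url, timeout=10):
    """Return True if the Internet Archive's Wayback Machine holds a snapshot."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    try:
        with urllib.request.urlopen(api, timeout=timeout) as response:
            payload = json.loads(response.read().decode("utf-8"))
        return bool(payload.get("archived_snapshots"))
    except Exception:
        return False

# Hypothetical sample: URLs harvested from tweets about a single event.
shared_urls = [
    "http://example.com/eyewitness-report",
    "http://example.org/protest-photo",
]
lost = [u for u in shared_urls if not (is_live(u) or is_archived(u))]
print(f"{len(lost)} of {len(shared_urls)} shared resources appear to be lost")
```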

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Movie poster for the 2002 film “The Man Without a Past”. The Man Without a Past (Finnish: Mies vailla menneisyyttä) is a 2002 Finnish comedy-drama film directed by Aki Kaurismäki. Courtesy of Wikipedia.[end-div]

Engage the Warp Engines

According to Star Trek’s fictional history, warp engines were invented in 2063. That gives us just over 50 years. While very unlikely based on our current technological prowess and general lack of understanding of the cosmos, warp engines are perhaps becoming just a little closer to being realized. But, please, no photon torpedoes!

[div class=attrib]From Wired:[end-div]

NASA scientists now think that the famous warp drive concept is a realistic possibility, and that in the far future humans could regularly travel faster than the speed of light.

A warp drive would work by “warping” spacetime around any spaceship, which physicist Miguel Alcubierre showed was theoretically possible in 1994, albeit well beyond the current technical capabilities of humanity. However, any such Alcubierre drive was assumed to require more energy — equivalent to the mass-energy of the entire planet Jupiter — than could ever possibly be supplied, rendering it impossible to build.

But now scientists believe that those requirements might not be so vast, making warp travel a tangible possibility. Harold White, from NASA’s Johnson Space Center, revealed the news on Sept. 14 at the 100 Year Starship Symposium, a gathering to discuss the possibilities and challenges of interstellar space travel. Space.com reports that White and his team have calculated that the amount of energy required to create an Alcubierre drive may be smaller than first thought.

The drive works by using a wave to compress the spacetime in front of the spaceship while expanding the spacetime behind it. The ship itself would float in a “bubble” of normal spacetime that would float along the wave of compressed spacetime, like the way a surfer rides a break. The ship, inside the warp bubble, would be going faster than the speed of light relative to objects outside the bubble.

By changing the shape of the warp bubble from a sphere to more of a rounded doughnut, White claims that the energy requirements will be far, far smaller for any faster-than-light ship — merely equivalent to the mass-energy of an object the size of Voyager 1.
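To get a sense of the scale of that claimed reduction, compare the rest-mass energies via E = mc². The figures below are rough public estimates (Jupiter at about 1.9 × 10²⁷ kg, Voyager 1 at roughly 720 kg), so this is only an order-of-magnitude illustration, not White’s actual calculation.

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def mass_energy(mass_kg):
    """Rest-mass energy E = m * c^2, in joules."""
    return mass_kg * C ** 2

jupiter_kg = 1.9e27  # approximate mass of Jupiter
voyager_kg = 722.0   # approximate mass of Voyager 1

print(f"Jupiter-scale requirement: {mass_energy(jupiter_kg):.1e} J")  # ~1.7e44 J
print(f"Voyager-scale requirement: {mass_energy(voyager_kg):.1e} J")  # ~6.5e19 J
print(f"Reduction factor:          {jupiter_kg / voyager_kg:.1e}x")   # ~2.6e24
```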

Alas, before you start plotting which stars you want to visit first, don’t expect one to appear within our lifetimes. Any warp drive big enough to transport a ship would still require vast amounts of energy by today’s standards, which would probably necessitate exploiting dark energy — but we don’t know yet what, exactly, dark energy is, nor whether it’s something a spaceship could easily harness. There’s also the issue that we have no idea how to create or maintain a warp bubble, let alone what it would be made out of. It could even potentially, if not constructed properly, create unintended black holes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: U.S.S Enterprise D. Courtesy of Startrek.com.[end-div]

Uncertainty Strikes the Uncertainty Principle

Some recent experiments out of the University of Toronto show, for the first time, a deviation from the measurement disturbance predicted by Werner Heisenberg’s fundamental law of quantum mechanics, the Uncertainty Principle.

[div class=attrib]From io9:[end-div]

Heisenberg’s uncertainty principle is an integral component of quantum physics. At the quantum scale, standard physics starts to fall apart, replaced by a fuzzy, nebulous set of phenomena. Among all the weirdness observed at this microscopic scale, Heisenberg famously observed that the position and momentum of a particle cannot be simultaneously measured with any meaningful degree of precision. This led him to posit the uncertainty principle, the declaration that there’s only so much we can simultaneously know about a quantum system, namely a particle’s momentum and position.

Now, by definition, the uncertainty principle describes a two-pronged process. First, there’s the precision of a measurement that needs to be considered, and second, the degree of uncertainty, or disturbance, that it must create. It’s this second aspect that quantum physicists refer to as the “measurement-disturbance relationship,” and it’s an area that scientists have not sufficiently explored or proven.
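For reference, the textbook uncertainty relation bounds the intrinsic spreads of position and momentum, while the measurement-disturbance version at issue here relates the error of a position measurement to the momentum disturbance it causes. In standard notation (added here for clarity; these are the commonly quoted forms, not formulas taken from the article):

```latex
% Intrinsic (preparation) uncertainty: always holds
\[
  \sigma_x \,\sigma_p \;\ge\; \frac{\hbar}{2}
\]

% Naive "Heisenberg" measurement-disturbance relation: the error \(\varepsilon(x)\)
% of a position measurement times the momentum disturbance \(\eta(p)\) it induces;
% this is the form the Toronto experiment puts to the test.
\[
  \varepsilon(x)\,\eta(p) \;\ge\; \frac{\hbar}{2}
\]
```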

Up until this point, quantum physicists have been fairly confident in their ability to both predict and measure the degree of disturbances caused by a measurement. Conventional thinking is that a measurement will always cause a predictable and consistent disturbance — but as the study from Toronto suggests, this is not always the case. Not all measurements, it would seem, will cause the effect predicted by Heisenberg and the tidy equations that have followed his theory. Moreover, the resultant ambiguity is not always caused by the measurement itself.

The researchers, a team led by Lee Rozema and Aephraim Steinberg, experimentally observed a clear-cut violation of Heisenberg’s measurement-disturbance relationship. They did this by applying what they called a “weak measurement” to define a quantum system before and after it interacted with their measurement tools — not enough to disturb it, but enough to get a basic sense of a photon’s orientation.

Then, by establishing measurement deltas, and then applying stronger, more disruptive measurements, the team was able to determine that they were not disturbing the quantum system to the degree that the uncertainty principle predicted. And in fact, the disturbances were half of what would normally be expected.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Heisenberg, Werner Karl Prof. 1901-1976; Physicist, Nobel Prize for Physics 1933, Germany. Courtesy of Wikipedia.[end-div]

The How and Why of Supersized Sodas

Apparently the Great Depression in the United States is to blame for the mega-sized soda drinks that many now consume on a daily basis, except of course in New York City (on September 13, 2012, the city’s Board of Health voted to ban the sale of sugary drinks larger than 16 oz in restaurants).

[div class=attrib]From Wired:[end-div]

The New York City Board of Health voted Thursday to ban the sale of sugary soft drinks larger than 16 ounces at restaurants, a move that has sparked intense debate between public health advocates and beverage industry lobbyists. When did sodas get so big in the first place?

In the 1930s. At the beginning of the Great Depression, the 6-ounce Coca-Cola bottle was the undisputed king of soft drinks. The situation began to change in 1934, when the smallish Pepsi-Cola company began selling 12-ounce bottles for the same nickel price as 6 ounces of Coke. The move was brilliant. Distribution, bottling, and advertising accounted for most of the company’s costs, so adding six free ounces hardly mattered. In addition, the 12-ounce size enabled Pepsi-Cola to use the same bottles as beer-makers, cutting container costs. The company pursued a similar strategy at the nation’s soda fountains, selling enough syrup to make 10 ounces for the same price as 6 ounces’ worth of Coca-Cola. Pepsi sales soared, and the company soon produced a jingle about their supersize bottles: “Pepsi-Cola hits the spot, 12 full ounces, that’s a lot. Twice as much for a nickel, too. Pepsi-Cola is the drink for you.” Pepsi’s value-for-volume gambit kicked off a decades-long industry trend.

Coke was slow to respond at first, according to author Mark Pendergrast, who chronicled the company’s history in For God, Country, and Coca-Cola: The Definitive History of the Great American Soft Drink and the Company That Makes It. President Robert Woodruff held firm to the 6-ounce size, even as his subordinates warned him that Pepsi was onto something. By the 1950s, industry observers predicted that Coca-Cola might lose its dominant position, and top company executives were threatening to resign if Woodruff didn’t bend on bottle size. In 1955, 10- and 12-ounce “King Size” Coke bottles hit the market, along with a 26-ounce “Family Size.” Although the new flexibility helped Coca-Cola regain its footing, the brave new world of giant bottles was hard to accept for some. Company vice president Ed Forio noted that “bringing out another bottle was like being unfaithful to your wife.”

The trend toward larger sizes occurred in all sectors of the market. When Coca-Cola partnered with McDonald’s in the 1950s, the original fountain soda at the restaurant chain more closely approximated the classic Coke bottle at seven ounces. The largest cup size grew to 16 ounces in the 1960s and hit 21 ounces by 1974.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Big Gulp. Courtesy of Chicago Tribune.[end-div]

Fusion and the Z Machine

The quest to tap fusion as an energy source here on Earth continues to inch forward with some promising new developments. Of course, we mean nuclear fusion — the type that makes our parent star shine, not the now debunked “cold fusion” supposedly demonstrated in a test tube in the late 1980s.

[div class=attrib]From Wired:[end-div]

In the high-stakes race to realize fusion energy, a smaller lab may be putting the squeeze on the big boys. Worldwide efforts to harness fusion—the power source of the sun and stars—for energy on Earth currently focus on two multibillion dollar facilities: the ITER fusion reactor in France and the National Ignition Facility (NIF) in California. But other, cheaper approaches exist—and one of them may have a chance to be the first to reach “break-even,” a key milestone in which a process produces more energy than needed to trigger the fusion reaction.

Researchers at the Sandia National Laboratory in Albuquerque, New Mexico, will announce in a Physical Review Letters (PRL) paper accepted for publication that their process, known as magnetized liner inertial fusion (MagLIF) and first proposed 2 years ago, has passed the first of three tests, putting it on track for an attempt at the coveted break-even. Tests of the remaining components of the process will continue next year, and the team expects to take its first shot at fusion before the end of 2013.

Fusion reactors heat and squeeze a plasma—an ionized gas—composed of the hydrogen isotopes deuterium and tritium, compressing the isotopes until their nuclei overcome their mutual repulsion and fuse together. Out of this pressure-cooker emerge helium nuclei, neutrons, and a lot of energy. The temperature required for fusion is more than 100 million°C—so you have to put a lot of energy in before you start to get anything out. ITER and NIF are planning to attack this problem in different ways. ITER, which will be finished in 2019 or 2020, will attempt fusion by containing a plasma with enormous magnetic fields and heating it with particle beams and radio waves. NIF, in contrast, takes a tiny capsule filled with hydrogen fuel and crushes it with a powerful laser pulse. NIF has been operating for a few years but has yet to achieve break-even.

Sandia’s MagLIF technique is similar to NIF’s in that it rapidly crushes its fuel—a process known as inertial confinement fusion. But to do it, MagLIF uses a magnetic pulse rather than lasers. The target in MagLIF is a tiny cylinder about 7 millimeters in diameter; it’s made of beryllium and filled with deuterium and tritium. The cylinder, known as a liner, is connected to Sandia’s vast electrical pulse generator (called the Z machine), which can deliver 26 million amps in a pulse lasting milliseconds or less. That much current passing down the walls of the cylinder creates a magnetic field that exerts an inward force on the liner’s walls, instantly crushing it—and compressing and heating the fusion fuel.
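A back-of-the-envelope estimate, using only the figures quoted above, shows why that current crushes the liner. Treating the 7-millimeter liner as a long straight conductor carrying the full 26 million amps, Ampère’s law gives the surface field B = μ₀I/(2πr), and the corresponding inward magnetic pressure is B²/(2μ₀). This is a rough sketch, not Sandia’s modeling:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

current_a = 26e6   # peak Z machine current, amps
radius_m = 3.5e-3  # liner radius (7 mm diameter), metres

field_t = MU_0 * current_a / (2 * math.pi * radius_m)  # surface magnetic field, tesla
pressure_pa = field_t ** 2 / (2 * MU_0)                # magnetic pressure, pascals

print(f"Surface magnetic field: {field_t:,.0f} T")          # ~1,500 tesla
print(f"Inward pressure:        {pressure_pa:.1e} Pa")      # ~8.8e11 Pa
print(f"                        ~{pressure_pa / 101_325:.1e} atmospheres")
```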

Researchers have known about this technique of crushing a liner to heat the fusion fuel for some time. But the MagLIF-Z machine setup on its own didn’t produce quite enough heat; something extra was needed to make the process capable of reaching break-even. Sandia researcher Steve Slutz led a team that investigated various enhancements through computer simulations of the process. In a paper published in Physics of Plasmas in 2010, the team predicted that break-even could be reached with three enhancements.

First, they needed to apply the current pulse much more quickly, in just 100 nanoseconds, to increase the implosion velocity. They would also preheat the hydrogen fuel inside the liner with a laser pulse just before the Z machine kicks in. And finally, they would position two electrical coils around the liner, one at each end. These coils produce a magnetic field that links the two coils, wrapping the liner in a magnetic blanket. The magnetic blanket prevents charged particles, such as electrons and helium nuclei, from escaping and cooling the plasma—so the temperature stays hot.

Sandia plasma physicist Ryan McBride is leading the effort to see if the simulations are correct. The first item on the list is testing the rapid compression of the liner. One critical parameter is the thickness of the liner wall: The thinner the wall, the faster it will be accelerated by the magnetic pulse. But the wall material also starts to evaporate away during the pulse, and if it breaks up too early, it will spoil the compression. On the other hand, if the wall is too thick, it won’t reach a high enough velocity. “There’s a sweet spot in the middle where it stays intact and you still get a pretty good implosion velocity,” McBride says.

To test the predicted sweet spot, McBride and his team set up an elaborate imaging system that involved blasting a sample of manganese with a high-powered laser (actually a NIF prototype moved to Sandia) to produce x-rays. By shining the x-rays through the liner at various stages in its implosion, the researchers could image what was going on. They found that at the sweet-spot thickness, the liner held its shape right through the implosion. “It performed as predicted,” McBride says. The team aims to test the other two enhancements—the laser preheating and the magnetic blanket—in the coming year, and then put it all together to take a shot at break-even before the end of 2013.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Z Pulsed Power Facility produces tremendous energy when it fires. Courtesy of Sandia National Laboratory.[end-div]

GDP of States Versus Countries

A nifty or neat (depending upon your location) map courtesy of Frank Jacobs over at Strange Maps. This one shows countries in place of U.S. states where the GDP (Gross Domestic Product) is similar. For instance, Canada replaces Texas in the United States map since Canada’s entire GDP roughly matches the economy of Texas. The map is based on data for 2007.

[div class=attrib]Read the entire article after the jump.[end-div]

A Link Between BPA and Obesity

You have probably heard of BPA. It’s a compound used in the manufacture of many plastics, especially hard, polycarbonate plastics. Interestingly, it has hormone-like characteristics, mimicking estrogen. As a result, BPA crops up in many studies that show adverse health effects. As a precaution, the U.S. Food and Drug Administration (FDA) recently banned the use of BPA in products aimed at young children, such as baby bottles. But the evidence remains inconsistent, so BPA is still found in many products today. Now comes another study linking BPA to obesity.

[div class=attrib]From Smithsonian:[end-div]

Since the 1960s, manufacturers have widely used the chemical bisphenol-A (BPA) in plastics and food packaging. Only recently, though, have scientists begun thoroughly looking into how the compound might affect human health—and what they’ve found has been a cause for concern.

Starting in 2006, a series of studies, mostly in mice, indicated that the chemical might act as an endocrine disruptor (by mimicking the hormone estrogen), cause problems during development and potentially affect the reproductive system, reducing fertility. After a 2010 Food and Drug Administration report warned that the compound could pose an especially hazardous risk for fetuses, infants and young children, BPA-free water bottles and food containers started flying off the shelves. In July, the FDA banned the use of BPA in baby bottles and sippy cups, but the chemical is still present in aluminum cans, containers of baby formula and other packaging materials.

Now comes another piece of data on a potential risk from BPA but in an area of health in which it has largely been overlooked: obesity. A study by researchers from New York University, published today in the Journal of the American Medical Association, looked at a sample of nearly 3,000 children and teens across the country and found a “significant” link between the amount of BPA in their urine and the prevalence of obesity.

“This is the first association of an environmental chemical in childhood obesity in a large, nationally representative sample,” said lead investigator Leonardo Trasande, who studies the role of environmental factors in childhood disease at NYU. “We note the recent FDA ban of BPA in baby bottles and sippy cups, yet our findings raise questions about exposure to BPA in consumer products used by older children.”

The researchers pulled data from the 2003 to 2008 National Health and Nutrition Examination Surveys, and after controlling for differences in ethnicity, age, caregiver education, income level, sex, caloric intake, television viewing habits and other factors, they found that children and adolescents with the highest levels of BPA in their urine had a 2.6 times greater chance of being obese than those with the lowest levels. Overall, 22.3 percent of those in the quartile with the highest levels of BPA were obese, compared with just 10.3 percent of those in the quartile with the lowest levels of BPA.

The vast majority of BPA in our bodies comes from ingestion of contaminated food and water. The compound is often used as an internal barrier in food packaging, so that the product we eat or drink does not come into direct contact with a metal can or plastic container. When heated or washed, though, plastics containing BPA can break down and release the chemical into the food or liquid they hold. As a result, roughly 93 percent of the U.S. population has detectable levels of BPA in their urine.

The researchers point specifically to the continuing presence of BPA in aluminum cans as a major problem. “Most people agree the majority of BPA exposure in the United States comes from aluminum cans,” Trasande said. “Removing it from aluminum cans is probably one of the best ways we can limit exposure. There are alternatives that manufacturers can use to line aluminum cans.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bisphenol A. Courtesy of Wikipedia.[end-div]

As Simple as abc; As Difficult as ABC

As children we all learn our abc’s; as adults very few ponder the ABC Conjecture in mathematics. The first is often a simple task of rote memorization; the second is a troublesome mathematical problem with a fiendishly complex solution (maybe).

[div class=attrib]From the New Scientist:[end-div]

Whole numbers, addition and multiplication are among the first things schoolchildren learn, but a new mathematical proof shows that even the world’s best minds have plenty more to learn about these seemingly simple concepts.

Shinichi Mochizuki of Kyoto University in Japan has torn up these most basic of mathematical concepts and reconstructed them as never before. The result is a fiendishly complicated proof for the decades-old “ABC conjecture” – and an alternative mathematical universe that should prise open many other outstanding enigmas.

To boot, Mochizuki’s proof also offers an alternative explanation for Fermat’s last theorem, one of the most famous results in the history of mathematics but not proven until the mid-1990s (see “Fermat’s last theorem made easy”, below).

The ABC conjecture starts with the most basic equation in algebra, adding two whole numbers, or integers, to get another: a + b = c. First posed in 1985 by Joseph Oesterlé and David Masser, it places constraints on the interactions of the prime factors of these numbers, primes being the indivisible building blocks that can be multiplied together to produce all integers.

Dense logic

Take 81 + 64 = 145, which breaks down into the prime building blocks 3 × 3 × 3 × 3 + 2 × 2 × 2 × 2 × 2 × 2 = 5 × 29. Simplified, the conjecture says that the large number of smaller primes on the equation’s left-hand side is always balanced by a small number of larger primes on the right – the addition restricts the multiplication, and vice versa.
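That balance is usually made precise using the “radical” rad(abc), the product of the distinct primes dividing a, b and c; loosely, the conjecture says c can only rarely be much larger than rad(abc). A small Python sketch of that bookkeeping for the example above (an informal illustration of the statement, not anything from Mochizuki’s papers):

```python
def radical(n):
    """Product of the distinct prime factors of n."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            result *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:        # whatever remains is itself a prime factor
        result *= n
    return result

a, b = 81, 64
c = a + b                 # 145 = 5 x 29
rad = radical(a * b * c)  # distinct primes 2, 3, 5, 29 -> 870

print(f"{a} + {b} = {c}, rad(abc) = {rad}")
# The conjecture asserts that triples where c greatly exceeds rad(abc)
# (raised to any power just above 1) are extremely rare.
```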

“The ABC conjecture in some sense exposes the relationship between addition and multiplication,” says Jordan Ellenberg of the University of Wisconsin-Madison. “To learn something really new about them at this late date is quite startling.”

Though rumours of Mochizuki’s proof started spreading on mathematics blogs earlier this year, it was only last week that he posted a series of papers on his website detailing what he calls “inter-universal geometry”, one of which claims to prove the ABC conjecture. Only now are mathematicians attempting to decipher its dense logic, which spreads over 500 pages.

So far the responses are cautious, but positive. “It will be fabulously exciting if it pans out; experience suggests that that’s quite a big ‘if’,” wrote University of Cambridge mathematician Timothy Gowers on Google+.

Alien reasoning

“It is going to be a while before people have a clear idea of what Mochizuki has done,” Ellenberg told New Scientist. “Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” he added on his blog.

Mochizuki’s reasoning is alien even to other mathematicians because it probes deep philosophical questions about the foundations of mathematics, such as what we really mean by a number, says Minhyong Kim at the University of Oxford. The early 20th century saw a crisis emerge as mathematicians realised they actually had no formal way to define a number – we can talk about “three apples” or “three squares”, but what exactly is the mathematical object we call “three”? No one could say.

Eventually numbers were redefined in terms of sets, rigorously specified collections of objects, and mathematicians now know that the true essence of the number zero is a set which contains no objects – the empty set – while the number one is a set which contains one empty set. From there, it is possible to derive the rest of the integers.

But this was not the end of the story, says Kim. “People are aware that many natural mathematical constructions might not really fall into the universe of sets.”

Terrible deformation

Rather than using sets, Mochizuki has figured out how to translate fundamental mathematical ideas into objects that only exist in new, conceptual universes. This allowed him to “deform” basic whole numbers and push their innate relationships – such as multiplication and addition – to the limit. “He is literally taking apart conventional objects in terrible ways and reconstructing them in new universes,” says Kim.

These new insights led him to a proof of the ABC conjecture. “How he manages to come back to the usual universe in a way that yields concrete consequences for number theory, I really have no idea as yet,” says Kim.

Because of its fundamental nature, a verified proof of ABC would set off a chain reaction, in one swoop proving many other open problems and deepening our understanding of the relationships between integers, fractions, decimals, primes and more.

Ellenberg compares proving the conjecture to the discovery of the Higgs boson, which particle physicists hope will reveal a path to new physics. But while the Higgs emerged from the particle detritus of a machine specifically designed to find it, Mochizuki’s methods are completely unexpected, providing new tools for mathematical exploration.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Clare College Cambridge.[end-div]

What is the True Power of Photography?

Hint. The answer is not shameless self-promotion or exploitative voyeurism; images used in this way may scratch a personal itch, but they rarely influence fundamental societal or political behavior. Importantly, photography has given us a rich, nuanced and lasting medium for artistic expression since cameras and film were first invented. However, the principal answer lies in photography’s ability to tell the truth about, and to, power.

Michael Glover reminds us of this critical role through the works of a dozen of the most influential photographers from the 1960s and 1970s. Their collective works are on display at a new exhibit at the Barbican Art Gallery, London, which runs until mid-January 2013.

[div class=attrib]From the Independent:[end-div]

Photography has become so thoroughly prostituted as a means of visual exchange, available to all or none for every purpose under the sun (or none worthy of the name), that it is easy to forget that until relatively recently one of the most important consequences of fearless photographic practice was to tell the truth about power.

This group show at the Barbican focuses on the work of 12 photographers from around the world, including Vietnam, India, the US, Mexico, Japan, China, Ukraine, Germany, Mali and South Africa, examining their photographic practice in relation to the particular historical moments through which they lived. The covert eye of the camera often shows us what the authorities do not want us to see: the bleak injustice of life lived under apartheid; the scarring aftermath of the allied bombing and occupation of Japan; the brutish day-to-day realities of the Vietnam war.

Photography, it has often been said, documents the world. This suggests that the photographer might be a dispassionate observer of neutral spaces, more machine than emotive being. Nonsense. Using a camera is the photographer’s own way of discovering his or her own particular angle of view. It is a point of intersection between self and world. There is no such thing as a neutral landscape; there is only ever a personal landscape, cropped by the ever quizzical human eye. The good photographer, in the words of Bruce Davidson, the man (well represented in this show) who tirelessly and fearlessly chronicled the fight for civil rights in America in the early 1960s, seeks out the “emotional truth” of a situation.

For more than half a century, David Goldblatt, born in the mining town of Randfontein of Lithuanian Jewish parentage, has been chronicling the social divisions of South Africa. Goldblatt’s images are stark, forensic and pitiless, from the matchbox houses in the dusty, treeless streets of 1970s Soweto, to the lean man in the hat who is caught wearily and systematically butchering the coal-merchant’s dead horse for food in a bleak scrubland of wrecked cars. Goldblatt captures the day-to-day life of the Afrikaners: their narrowness of view; that tenacious conviction of rightness; the visceral bond with the soil. There is nothing demonstrative or rhetorical about his work. It is utterly, monochromatically sober, and quite subtly focused on the job in hand, as if he wishes to say to the onlooker that reality is quite stark enough.

Boris Mikhailov, wild, impish and contrarian in spirit, turns photography into a self-consciously subversive art form. Born in Kharkov in Ukraine under communism, his photographic montages represent a ferociously energetic fight-back against the grinding dullness, drabness and tedium of accepted notions of conformity. He frames a sugary image of a Kremlin tower in a circlet of slabs of raw meat. He reduces accepted ideas of beauty to kitsch. Underwear swings gaily in the air beside a receding railway track. He mercilessly lampoons the fact that the authorities forbade the photographing of nudity. This is the not-so-gentle art of blowing red raspberries.

Shomei Tomatsu has been preoccupied all his life by a single theme that he circles around obsessively: the American occupation of Japan in the aftermath of its humiliating military capitulation. Born in 1930, he still lives in Okinawa, the island from which the Americans launched their B52s during the Vietnam war. His angle of view suggests a mixture of abhorrence with the invasion of an utterly alien culture and a fascination with its practical consequences: a Japanese child blows a huge chewing gum bubble beside a street sign that reads “Bar Oasis”. The image of the child is distorted in the bubble.

But this show is not all about cocking a snook at authority. It is also about aesthetic issues: the use of colour as a way of shaping a different kind of reality, for example. William Eggleston made his series of photographic portraits of ordinary people from Memphis, Tennessee, often at night, in the 1970s. These are seemingly casual and immediate moments of intimate engagement between photographer and subject. Until this moment, colour had often been used by the camera (and especially the movie camera), not to particularise but to glamorise. Not so here. Eggleston is especially good at registering the lonely decrepitude of objects – a jukebox on a Memphis wall; the reptilian patina of a rusting street light; the resonance of an empty room in Las Vegas.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of “Everything Was Moving: Photography from the 60s and 70s”, Barbican Art Gallery. Copyright Bruce Davidson / Magnum Photos.[end-div]

Social Outcast = Creative Wunderkind

A recent study to be published in the Journal of Experimental Psychology correlates social ostracism and rejection with creativity. Businesses seeking creative individuals take note: perhaps your next great hire is a social misfit.

[div class=attrib]From Fast Company:[end-div]

Are you a recovering high school geek who still can’t get the girl? Are you always the last person picked for your company’s softball team? When you watched Office Space, did you feel a special kinship to the stapler-obsessed Milton Waddams? If you answered yes to any of these questions, do not despair. Researchers at Johns Hopkins and Cornell have recently found that the socially rejected might also be society’s most creatively powerful people.

The study, which is forthcoming in the Journal of Experimental Psychology, is called “Outside Advantage: Can Social Rejection Fuel Creative Thought?” It found that people who already have a strong “self-concept”–i.e. are independently minded–become creatively fecund in the face of rejection. “We were inspired by the stories of highly creative individuals like Steve Jobs and Lady Gaga,” says the study’s lead author, Hopkins professor Sharon Kim. “And we wanted to find a silver lining in all the popular press about bullying. There are benefits to being different.”

The study consisted of 200 Cornell students and set out to identify the relationship between the strength of an individual’s self-concept and their level of creativity. First, Kim tested the strength of each student’s self-concept by assessing his or her “need for uniqueness.” In other words, how important it is for each individual to feel separate from the crowd. Next, students were told that they’d either been included in or rejected from a hypothetical group project. Finally, they were given a simple, but creatively demanding, task: Draw an alien from a planet unlike earth.

If you’re curious about your own general creativity level (at least by the standards of Kim’s study), go ahead and sketch an alien right now…Okay, got your alien? Now give yourself a point for every non-human characteristic you’ve included in the drawing. If your alien has two eyes between the nose and forehead, you don’t get any points. If your alien has two eyes below the mouth, or three eyes that breathe fire, you get a point. If your alien doesn’t even have eyes or a mouth, give yourself a bunch of points. In short, the more dissimilar your alien is to a human, the higher your creativity score.

Kim found that people with a strong self-concept who were rejected produced more creative aliens than people from any other group, including people with a strong self-concept who were accepted. “If you’re in a mindset where you don’t care what others think,” she explained, “you’re open to ideas that you may not be open to if you’re concerned about what other people are thinking.”

This may seem like an obvious conclusion, but Kim pointed out that most companies don’t encourage the kind of freedom and independence that readers of Fast Company probably expect. “The benefits of being different is not a message everyone is getting,” she said.

But Kim also discovered something unexpected. People with a weak self-concept could be influenced toward a stronger one and, thus, toward a more creative mindset. In one part of the study, students were asked to read a short story in which all the pronouns were either singular (I/me) or plural (we/us) and then to circle all the pronouns. They were then “accepted” or “rejected” and asked to draw their aliens.

Kim found that all of the students who read stories with singular pronouns and were rejected produced more creative aliens. Even the students who originally had a weaker self-concept. Once these group-oriented individuals focused on individual-centric prose, they became more individualized themselves. And that made them more creative.

This finding doesn’t prove that you can teach someone to have a strong self-concept, but it suggests that you can create a professional environment that facilitates independent and creative thought.

[div class=attrib]Read the entire article after the jump.[end-div]

Work as Punishment (and For the Sake of Leisure)

Gary Gutting, professor of philosophy at the University of Notre Dame, reminds us that work is punishment for Adam’s sin, according to the Book of Genesis. No doubt many who hold other faiths, as well as those who don’t, may tend to agree with this basic notion.

So, what on earth is work for?

Gutting goes on to remind us that Aristotle and Bertrand Russell had it right: that work is for the sake of leisure.

[div class=attrib]From the New York Times:[end-div]

Is work good or bad?  A fatuous question, it may seem, with unemployment such a pressing national concern.  (Apart from the names of the two candidates, “jobs” was the politically relevant word most used by speakers at the Republican and Democratic conventions.) Even apart from current worries, the goodness of work is deep in our culture. We applaud people for their work ethic, judge our economy by its productivity and even honor work with a national holiday.

But there’s an underlying ambivalence: we celebrate Labor Day by not working, the Book of Genesis says work is punishment for Adam’s sin, and many of us count the days to the next vacation and see a contented retirement as the only reason for working.

We’re ambivalent about work because in our capitalist system it means work-for-pay (wage-labor), not for its own sake.  It is what philosophers call an instrumental good, something valuable not in itself but for what we can use it to achieve.  For most of us, a paying job is still utterly essential — as masses of unemployed people know all too well.  But in our economic system, most of us inevitably see our work as a means to something else: it makes a living, but it doesn’t make a life.

What, then, is work for? Aristotle has a striking answer: “we work to have leisure, on which happiness depends.” This may at first seem absurd. How can we be happy just doing nothing, however sweetly (dolce far niente)?  Doesn’t idleness lead to boredom, the life-destroying ennui portrayed in so many novels, at least since “Madame Bovary”?

Everything depends on how we understand leisure. Is it mere idleness, simply doing nothing?  Then a life of leisure is at best boring (a lesson of Voltaire’s “Candide”), and at worst terrifying (leaving us, as Pascal says, with nothing to distract from the thought of death).  No, the leisure Aristotle has in mind is productive activity enjoyed for its own sake, while work is done for something else.

We can pass by for now the question of just what activities are truly enjoyable for their own sake — perhaps eating and drinking, sports, love, adventure, art, contemplation? The point is that engaging in such activities — and sharing them with others — is what makes a good life. Leisure, not work, should be our primary goal.

Bertrand Russell, in his classic essay “In Praise of Idleness,” agrees. “A great deal of harm,” he says, “is being done in the modern world by belief in the virtuousness of work.” Instead, “the road to happiness and prosperity lies in an organized diminution of work.” Before the technological breakthroughs of the last two centuries, leisure could be only “the prerogative of small privileged classes,” supported by slave labor or a near equivalent. But this is no longer necessary: “The morality of work is the morality of slaves, and the modern world has no need of slavery.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bust of Aristotle. Marble, Roman copy after a Greek bronze original by Lysippos from 330 BC; the alabaster mantle is a modern addition. Courtesy of Wikipedia.[end-div]

Innovation Before Its Time

Product-driven companies, inventors from all backgrounds and market researchers have long studied how some innovations take off while others fizzle. So, why do some innovations gain traction? Given two similar but competing inventions, what factors lead to one eclipsing the other? Why do some pioneering ideas and inventions fail, only to succeed in the hands of a different instigator years, sometimes decades, later? Answers to these questions would undoubtedly make many inventors household names, but as is the case in most human endeavors, the process of innovation is murky and more of an art than a science.

Author and columnist Matt Ridley offers some possible answers to the conundrum.

[div class=attrib]From the Wall Street Journal:[end-div]

Bill Moggridge, who invented the laptop computer in 1982, died last week. His idea of using a hinge to attach a screen to a keyboard certainly caught on big, even if the first model was heavy, pricey and equipped with just 340 kilobytes of memory. But if Mr. Moggridge had never lived, there is little doubt that somebody else would have come up with the idea.

The phenomenon of multiple discovery is well known in science. Innovations famously occur to different people in different places at the same time. Whether it is calculus (Newton and Leibniz), or the planet Neptune (Adams and Le Verrier), or the theory of natural selection (Darwin and Wallace), or the light bulb (Edison, Swan and others), the history of science is littered with disputes over bragging rights caused by acts of simultaneous discovery.

As Kevin Kelly argues in his book “What Technology Wants,” there is an inexorability about technological evolution, expressed in multiple discovery, that makes it look as if technological innovation is an autonomous process with us as its victims rather than its directors.

Yet some inventions seem to have occurred to nobody until very late. The wheeled suitcase is arguably such a, well, case. Bernard Sadow applied for a patent on wheeled baggage in 1970, after a Eureka moment when he was lugging his heavy bags through an airport while a local worker effortlessly pushed a large cart past. You might conclude that Mr. Sadow was decades late. There was little to stop his father or grandfather from putting wheels on bags.

Mr. Sadow’s bags ran on four wheels, dragged on a lead like a dog. Seventeen years later a Northwest Airlines pilot, Robert Plath, invented the idea of two wheels on a suitcase held vertically, plus a telescopic handle to pull it with. This “Rollaboard,” now ubiquitous, also feels as if it could have been invented much earlier.

Or take the can opener, invented in the 1850s, eight decades after the can. Early 19th-century soldiers and explorers had to make do with stabbing bayonets into food cans. “Why doesn’t somebody come up with a wheeled cutter?” they must have muttered (or not) as they wrenched open the cans.

Perhaps there’s something that could be around today but hasn’t been invented and that will seem obvious to future generations. Or perhaps not. It’s highly unlikely that brilliant inventions are lying on the sidewalk ignored by the millions of entrepreneurs falling over each other to innovate. Plenty of terrible ideas are tried every day.

Understanding why inventions take so long may require mentally revisiting a long-ago time. For a poorly paid Napoleonic soldier who already carried a decent bayonet, adding a can opener to his limited kitbag was probably a waste of money and space. Indeed, going back to wheeled bags, if you consider the abundance of luggage porters with carts in the 1960s, the ease of curbside drop-offs at much smaller airports and the heavy iron casters then available, 1970 seems about the right date for the first invention of rolling luggage.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Joseph Swan, inventor of the incandescent light bulb, which was first publicly demonstrated on 18 December 1878. Courtesy of Wikipedia.[end-div]

An Answer is Blowing in the Wind

Two recent studies report that the world (i.e., humans) could meet its entire electrical energy needs from several million wind turbines.

[div class=attrib]From Ars Technica:[end-div]

Is there enough wind blowing across the planet to satiate our demands for electricity? If there is, would harnessing that much of it begin to actually affect the climate?

Two studies published this week tried to answer these questions. Long story short: we could supply all our power needs for the foreseeable future from wind, all without affecting the climate in a significant way.

The first study, published in this week’s Nature Climate Change, was performed by Kate Marvel of Lawrence Livermore National Laboratory with Ben Kravitz and Ken Caldeira of the Carnegie Institution for Science. Their goal was to determine a maximum geophysical limit to wind power—in other words, if we extracted all the kinetic energy from wind all over the world, how much power could we generate?

In order to calculate this power limit, the team used the Community Atmosphere Model (CAM), developed by the National Center for Atmospheric Research. Turbines were represented as drag forces removing momentum from the atmosphere, and the wind power was calculated as the rate of kinetic energy transferred from the wind to these momentum sinks. By increasing the drag forces, a power limit was reached where no more energy could be extracted from the wind.

The authors found that at least 400 terawatts could be extracted by ground-based turbines—represented by drag forces on the ground—and 1,800 terawatts by high-altitude turbines—represented by drag forces throughout the atmosphere. For some perspective, the current global power demand is around 18 terawatts.

The second study, published in the Proceedings of the National Academy of Sciences by Mark Jacobson at Stanford and Cristina Archer at the University of Delaware, asked some more practical questions about the limits of wind power. For example, rather than some theoretical physical limit, what is the maximum amount of power that could actually be extracted by real turbines?

For one thing, turbines can’t extract all the kinetic energy from wind—no matter the design, 59.3 percent, the Betz limit, is the absolute maximum. Less-than-perfect efficiencies based on the specific turbine design reduce the extracted power further.
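For a single machine, the power available in wind crossing a rotor of area A at speed v is ½ρAv³, and the Betz limit caps the extractable fraction at 59.3 percent (real turbines manage roughly 40 to 50 percent). A quick sketch with illustrative numbers; the rotor diameter, wind speed and efficiency below are generic assumptions, not figures from either study:

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level
BETZ_LIMIT = 0.593   # maximum fraction of wind power any turbine can extract

def extracted_power_w(rotor_diameter_m, wind_speed_ms, efficiency=BETZ_LIMIT):
    """Power (watts) captured by a rotor of the given diameter at the given wind speed."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    available = 0.5 * AIR_DENSITY * swept_area * wind_speed_ms ** 3
    return efficiency * available

# Illustrative 100 m rotor in a steady 10 m/s wind
print(f"Betz-limited: {extracted_power_w(100, 10) / 1e6:.2f} MW")                   # ~2.9 MW
print(f"Realistic:    {extracted_power_w(100, 10, efficiency=0.45) / 1e6:.2f} MW")  # ~2.2 MW
```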

Another important consideration is that, for a given area, you can only add so many turbines before hitting a limit on power extraction—the area is “saturated,” and any power increase you get by adding turbines ends up matched by a drop in power from existing ones. This happens because the wakes from turbines near each other interact and reduce the ambient wind speed. Jacobson and Archer expanded this concept to a global level, calculating the saturation wind power potential for both the entire globe and all land except Antarctica.

Like the first study, this one considered both surface turbines and high-altitude turbines located in the jet stream. Unlike the model used in the first study, though, these were placed at specific altitudes: 100 meters, the hub height of most modern turbines, and 10 kilometers. The authors argue improper placement will lead to incorrect reductions in wind speed.

Jacobson and Archer found that, with turbines placed all over the planet, including the oceans, wind power saturates at about 250 terawatts, corresponding to nearly three thousand terawatts of installed capacity. If turbines are placed only on land and in shallow offshore locations, the saturation point is 80 terawatts for 1,500 terawatts of installed capacity.

For turbines at the jet-stream height, they calculated a maximum power of nearly 400 terawatts—about 150 percent of that at 100 meters.

These results show that, even at the saturation point, we could extract enough wind power to supply global demands many times over. Unfortunately, the numbers of turbines required aren’t plausible—300 million five-megawatt turbines in the smallest case (land plus shallow offshore).

[div class=attrib]Read the entire article after the jump.[end-div]

Let the Wealthy Fund Innovation?

Nathan Myhrvold, former CTO of Microsoft, suggests that the wealthy should “think big” by funding large-scale and long-term innovation. Arguably, this would be a far preferable alternative to the wealthy using their millions to gain (more) political influence in much of the West, especially the United States. Myhrvold is now a backer of TerraPower, a nuclear energy startup.

[div class=attrib]From Technology Review:[end-div]

For some technologists, it’s enough to build something that makes them financially successful. They retire happily. Others stay with the company they founded for years and years, enthralled with the platform it gives them. Think how different the work Steve Jobs did at Apple in 2010 was from the innovative ride he took in the 1970s.

A different kind of challenge is to start something new. Once you’ve made it, a new venture carries some disadvantages. It will be smaller than your last company, and more frustrating. Startups require a level of commitment not everyone is ready for after tasting success. On the other hand, there’s no better time than that to be an entrepreneur. You’re not gambling your family’s entire future on what happens next. That is why many accomplished technologists are out in the trenches, leading and funding startups in unprecedented areas.

Jeff Bezos has Blue Origin, a company that builds spaceships. Elon Musk has Tesla, an electric-car company, and SpaceX, another rocket-ship company. Bill Gates took on big challenges in the developing world—combating malaria, HIV, and poverty. He is also funding inventive new companies at the cutting edge of technology. I’m involved in some of them, including TerraPower, which we formed to commercialize a promising new kind of nuclear reactor.

There are few technologies more daunting to inventors (and investors) than nuclear power. On top of the logistics, science, and engineering, you have to deal with the regulations and politics. In the 1970s, much of the world became afraid of nuclear energy, and last year’s events in Fukushima haven’t exactly assuaged those fears.

So why would any rational group of people create a nuclear power company? Part of the reason is that Bill and I have been primed to think long-term. We have the experience and resources to look for game-changing ideas—and the confidence to act when we think we’ve found one. Other technologists who fund ambitious projects have similar motivations. Elon Musk and Jeff Bezos are literally reaching for the stars because they believe NASA and its traditional suppliers can’t innovate at the same rate they can.

In the next few decades, we need more technology leaders to reach for some very big advances. If 20 of us were to try to solve energy problems—with carbon capture and storage, or perhaps some other crazy idea—maybe one or two of us would actually succeed. If nobody tries, we’ll all certainly fail.

I believe the world will need to rely on nuclear energy. A looming energy crisis will force us to rework the underpinnings of our energy economy. That happened last in the 19th century, when we moved at unprecedented scale toward gas and oil. The 20th century didn’t require a big switcheroo, but looking into the 21st century, it’s clear that we have a much bigger challenge.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Nathan Myhrvold. Courtesy of AllThingsD.[end-div]

What’s All the Fuss About Big Data?

We excerpt an interview with big data pioneer and computer scientist, Alex Pentland, via the Edge. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.

While there is no exact definition of “big data,” it tends to differ both quantitatively and qualitatively from the data most organizations commonly use. Where regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to collections so vast that they often lie beyond the reach of regular computation, demanding specialized storage and enormous processing capability. Data sets that fall under the big data umbrella come from fields such as climate science, genomics, particle physics, and computational social science.

Big data holds real promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.

[div class=attrib]Alex Pentland via the Edge:[end-div]

Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.

I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like location data off of your cell phone or credit card, it’s the little data breadcrumbs that you leave behind you as you move around in the world.

What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.

As a consequence, analysis of Big Data is increasingly about finding connections, connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data. Not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. These are systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. Big data gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.

The notion that it is connections between people that is really important is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.

Because it is so important to understand these connections Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that is actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.

Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.

The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.

Changing The Way We Design Systems

With Big Data traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant!  As a consequence the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart.

Big data and the notion of Connection Science is outside of our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building of systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much out of our old, familiar ballpark.

With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.
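A minimal sketch of why sheer scale manufactures “significant” findings: test one pure-noise outcome against hundreds of pure-noise behaviors and, at the usual five percent threshold, a couple of dozen will look like real correlations. The sample sizes and the 0.14 cutoff below are illustrative assumptions, not anything drawn from Pentland’s work.

```python
import random
import statistics

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (sx * sy)

n_people = 200
n_behaviors = 500
# One "outcome" (say, catching the flu) and many unrelated "behaviors", all pure noise.
outcome = [random.gauss(0, 1) for _ in range(n_people)]

threshold = 0.14  # |r| above this is roughly significant at p < 0.05 for n = 200
spurious = sum(
    1
    for _ in range(n_behaviors)
    if abs(pearson_r(outcome, [random.gauss(0, 1) for _ in range(n_people)])) > threshold
)
print(f"{spurious} of {n_behaviors} random 'behaviors' look significantly correlated")
```

With enough variables, something always clears the bar; the hard part, as Pentland says, is establishing causality outside the lab.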

The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use big data analytics, what they mean, and what to believe.

In fact, the data scientists themselves don’t have much intuition either…and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results that are found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t understand that they were overfitting the data. They didn’t have that dialogue between intuition and causal processes that generated the data. They just fit the model and got a good number and published it, and the reviewers didn’t catch it either. That’s pretty bad because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
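Overfitting itself is easy to demonstrate. The sketch below is not the method of any paper Pentland mentions; it is a toy illustration, under assumed data, of why a model that memorizes its training set (here, one-nearest-neighbor regression on nearly pure noise) looks perfect in-sample and falls apart on held-out data, exactly the dialogue-free fitting he is warning about.

```python
import random

random.seed(42)

def noisy_sample(n):
    """x in [0, 1]; y is mostly noise, so there is very little real signal to learn."""
    xs = [random.random() for _ in range(n)]
    ys = [x + random.gauss(0, 0.5) for x in xs]
    return xs, ys

def one_nn_predict(train_x, train_y, x):
    """Memorize the training set: return the y of the closest training x."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

train_x, train_y = noisy_sample(50)
test_x, test_y = noisy_sample(50)

train_err = mse([one_nn_predict(train_x, train_y, x) for x in train_x], train_y)
test_err = mse([one_nn_predict(train_x, train_y, x) for x in test_x], test_y)

print(f"Training error: {train_err:.3f}")  # zero: the model "explains" every point
print(f"Held-out error: {test_err:.3f}")   # much worse: most of the fit was noise
```

The held-out check is the minimal discipline that catches this; publishing the training-set number alone is how the bad results Pentland describes get into the literature.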

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Techcrunch.[end-div]

Scientifiction

Science fiction stories and illustrations from our past provide a wonderful opportunity for us to test the predictive and prescient capabilities of their creators. Some, like Arthur C. Clarke, we are often reminded, foresaw the communications satellite and the space elevator. Others, such as science fiction great Isaac Asimov, fared less well in predicting future technology; while he is credited with coining the term “robotics”, he famously imagined future computers and robots running on punched cards.

Illustrations of our future from the past are even more fascinating. One of the leading proponents of the science fiction illustration genre, or scientifiction, as it was titled in the mid-1920s, was Frank R. Paul. Paul illustrated many of the now classic U.S. pulp science fiction magazines beginning in the 1920s with vivid visuals of aliens, spaceships, destroyed worlds and bizarre technologies. One of his less apocalyptic, but perhaps prescient, works showed a web-footed alien smoking a cigarette through a lengthy proboscis.

Of Frank R. Paul, Ray Bradbury is quoted as saying, “Paul’s fantastic covers for Amazing Stories changed my life forever.”

See more of Paul’s classic illustrations after the jump.

[div class=attrib]Image courtesy of 50Watts / Frank R. Paul.[end-div]

How Apple With the Help of Others Invented the iPhone

Apple’s invention of the iPhone is a story of insight, collaboration, cannibalization and dogged persistence over the course of a decade.

[div class=attrib]From Slate:[end-div]

Like many of Apple’s inventions, the iPhone began not with a vision, but with a problem. By 2005, the iPod had eclipsed the Mac as Apple’s largest source of revenue, but the music player that rescued Apple from the brink now faced a looming threat: The cellphone. Everyone carried a phone, and if phone companies figured out a way to make playing music easy and fun, “that could render the iPod unnecessary,” Steve Jobs once warned Apple’s board, according to Walter Isaacson’s biography.

Fortunately for Apple, most phones on the market sucked. Jobs and other Apple executives would grouse about their phones all the time. The simplest phones didn’t do much other than make calls, and the more functions you added to phones, the more complicated they were to use. In particular, phones “weren’t any good as entertainment devices,” Phil Schiller, Apple’s longtime marketing chief, testified during the company’s patent trial with Samsung. Getting music and video on 2005-era phones was too difficult, and if you managed that, getting the device to actually play your stuff was a joyless trudge through numerous screens and menus.

That was because most phones were hobbled by a basic problem—they didn’t have a good method for input. Hard keys (like the ones on the BlackBerry) worked for typing, but they were terrible for navigation. In theory, phones with touchscreens could do a lot more, but in reality they were also a pain to use. Touchscreens of the era couldn’t detect finger presses—they needed a stylus, and the only way to use a stylus was with two hands (one to hold the phone and one to hold the stylus). Nobody wanted a music player that required two-handed operation.

This is the story of how Apple reinvented the phone. The general outlines of this tale have been told before, most thoroughly in Isaacson’s biography. But the Samsung case—which ended last month with a resounding victory for Apple—revealed a trove of details about the invention, the sort of details that Apple is ordinarily loath to make public. We got pictures of dozens of prototypes of the iPhone and iPad. We got internal email that explained how executives and designers solved key problems in the iPhone’s design. We got testimony from Apple’s top brass explaining why the iPhone was a gamble.

Put it all together and you get a remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple’s bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple’s inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too—a device that would eat the iPod’s lunch. Apple’s only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple’s brass saw the phone as an opportunity for real innovation. “We wanted to build a phone for ourselves,” Scott Forstall, who heads the team that built the phone’s operating system, said at the trial. “We wanted to build a phone that we loved.”

The problem was how to do it. When Jobs unveiled the iPhone in 2007, he showed off a picture of an iPod with a rotary-phone dialer instead of a click wheel. That was a joke, but it wasn’t far from Apple’s initial thoughts about phones. The click wheel—the brilliant interface that powered the iPod (which was invented for Apple by a firm called Synaptics)—was a simple, widely understood way to navigate through menus in order to play music. So why not use it to make calls, too?

In 2005, Tony Fadell, the engineer who’s credited with inventing the first iPod, got hold of a high-end desk phone made by Samsung and Bang & Olufsen that you navigated using a set of numerical keys placed around a rotating wheel. A Samsung cell phone, the X810, used a similar rotating wheel for input. Fadell didn’t seem to like the idea. “Weird way to hold the cellphone,” he wrote in an email to others at Apple. But Jobs thought it could work. “This may be our answer—we could put the number pad around our clickwheel,” he wrote. (Samsung pointed to this thread as evidence for its claim that Apple’s designs were inspired by other companies, including Samsung itself.)

Around the same time, Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there are no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.
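Neither effect is hard to approximate. The sketch below is emphatically not Apple’s code (which has never been published); it is a minimal, assumed model of the two behaviors the article describes: a flick gives the list momentum that decays with friction, and a spring-like force pulls it back whenever it overshoots an edge.

```python
def scroll_step(offset, velocity, content_height, view_height,
                friction=0.95, stiffness=0.2):
    """Advance one animation tick of flick scrolling with inertia and a rubber band.

    The list coasts on its velocity and slows by a friction factor each tick;
    if it overshoots either edge, a spring force proportional to the overshoot
    pulls it back, producing the bounce.
    """
    offset += velocity
    velocity *= friction
    max_offset = max(0.0, content_height - view_height)
    if offset < 0:                        # overscrolled past the top
        velocity += -offset * stiffness
    elif offset > max_offset:             # overscrolled past the bottom
        velocity -= (offset - max_offset) * stiffness
    return offset, velocity

# A hard flick toward the top: the list coasts, overshoots, and bounces back.
offset, velocity = 10.0, -30.0
for tick in range(20):
    offset, velocity = scroll_step(offset, velocity, content_height=500, view_height=400)
    print(f"tick {tick:2d}: offset = {offset:7.1f}")
```

The “feel” of the real thing comes down to tuning the friction and stiffness constants against actual finger gestures; the values above are arbitrary.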

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Retro design iPhone courtesy of Ubergizmo.[end-div]

Building Character in Kids

Many parents have known this for a long time: it takes more than a stellar IQ, SAT or ACT score to make a well-rounded kid. Arguably there are many more important traits that never feature on these quantitative tests. Such qualities as leadership, curiosity, initiative, perseverance, motivation, courage and empathy come to mind.

An excerpt below from Paul Tough’s book, “How Children Succeed: Grit, Curiosity and the Hidden Power of Character”.

[div class=attrib]From the Wall Street Journal:[end-div]

We are living through a particularly anxious moment in the history of American parenting. In the nation’s big cities these days, the competition among affluent parents over slots in favored preschools verges on the gladiatorial. A pair of economists from the University of California recently dubbed this contest for early academic achievement the “Rug Rat Race,” and each year, the race seems to be starting earlier and growing more intense.

At the root of this parental anxiety is an idea you might call the cognitive hypothesis. It is the belief, rarely spoken aloud but commonly held nonetheless, that success in the U.S. today depends more than anything else on cognitive skill—the kind of intelligence that gets measured on IQ tests—and that the best way to develop those skills is to practice them as much as possible, beginning as early as possible.

There is something undeniably compelling about the cognitive hypothesis. The world it describes is so reassuringly linear, such a clear case of inputs here leading to outputs there. Fewer books in the home means less reading ability; fewer words spoken by your parents means a smaller vocabulary; more math work sheets for your 3-year-old means better math scores in elementary school. But in the past decade, and especially in the past few years, a disparate group of economists, educators, psychologists and neuroscientists has begun to produce evidence that calls into question many of the assumptions behind the cognitive hypothesis.

What matters most in a child’s development, they say, is not how much information we can stuff into her brain in the first few years of life. What matters, instead, is whether we are able to help her develop a very different set of qualities, a list that includes persistence, self-control, curiosity, conscientiousness, grit and self-confidence. Economists refer to these as noncognitive skills, psychologists call them personality traits, and the rest of us often think of them as character.

If there is one person at the hub of this new interdisciplinary network, it is James Heckman, an economist at the University of Chicago who in 2000 won the Nobel Prize in economics. In recent years, Mr. Heckman has been convening regular invitation-only conferences of economists and psychologists, all engaged in one form or another with the same questions: Which skills and traits lead to success? How do they develop in childhood? And what kind of interventions might help children do better?

The transformation of Mr. Heckman’s career has its roots in a study he undertook in the late 1990s on the General Educational Development program, better known as the GED, which was at the time becoming an increasingly popular way for high-school dropouts to earn the equivalent of high-school diplomas. The GED’s growth was founded on a version of the cognitive hypothesis, on the belief that what schools develop, and what a high-school diploma certifies, is cognitive skill. If a teenager already has the knowledge and the smarts to graduate from high school, according to this logic, he doesn’t need to waste his time actually finishing high school. He can just take a test that measures that knowledge and those skills, and the state will certify that he is, legally, a high-school graduate, as well-prepared as any other high-school graduate to go on to college or other postsecondary pursuits.

Mr. Heckman wanted to examine this idea more closely, so he analyzed a few large national databases of student performance. He found that in many important ways, the premise behind the GED was entirely valid. According to their scores on achievement tests, GED recipients were every bit as smart as high-school graduates. But when Mr. Heckman looked at their path through higher education, he found that GED recipients weren’t anything like high-school graduates. At age 22, Mr. Heckman found, just 3% of GED recipients were either enrolled in a four-year university or had completed some kind of postsecondary degree, compared with 46% of high-school graduates. In fact, Heckman discovered that when you consider all kinds of important future outcomes—annual income, unemployment rate, divorce rate, use of illegal drugs—GED recipients look exactly like high-school dropouts, despite the fact that they have earned this supposedly valuable extra credential, and despite the fact that they are, on average, considerably more intelligent than high-school dropouts.

These results posed, for Mr. Heckman, a confounding intellectual puzzle. Like most economists, he had always believed that cognitive ability was the single most reliable determinant of how a person’s life would turn out. Now he had discovered a group—GED holders—whose good test scores didn’t seem to have any positive effect on their eventual outcomes. What was missing from the equation, Mr. Heckman concluded, were the psychological traits, or noncognitive skills, that had allowed the high-school graduates to make it through school.

So what can parents do to help their children develop skills like motivation and perseverance? The reality is that when it comes to noncognitive skills, the traditional calculus of the cognitive hypothesis—start earlier and work harder—falls apart. Children can’t get better at overcoming disappointment just by working at it for more hours. And they don’t lag behind in curiosity simply because they didn’t start doing curiosity work sheets at an early enough age.

[div class=attrib]Read the entire article after the jump.[end-div]

Sign First; Lie Less

A recent paper in the Proceedings of the National Academy of Sciences (PNAS) shows that we are more likely to be honest if we sign a form before, rather than after, completing it. So, over the coming years, look out for Uncle Sam to revise the ubiquitous IRS 1040 by moving the signature line from the bottom of the last page to the top of the first.

[div class=attrib]From Ars Technica:[end-div]

What’s the purpose of signing a form? On the simplest level, a signature is simply a way to make someone legally responsible for the content of the form. But in addition to the legal aspect, the signature is an appeal to personal integrity, forcing people to consider whether they’re comfortable attaching their identity to something that may not be completely true.

Based on some figures in a new PNAS paper, the signatures on most forms are miserable failures, at least from the latter perspective. The IRS estimates that it misses out on about $175 billion because people misrepresent their income or deductions. And the insurance industry calculates that it loses about $80 billion annually due to fraudulent claims. But the same paper suggests a fix that is as simple as tweaking the form. Forcing people to sign before they complete the form greatly increases their honesty.

It shouldn’t be a surprise that signing at the end of a form does not promote accurate reporting, given what we know about human psychology. “Immediately after lying,” the paper’s authors write, “individuals quickly engage in various mental justifications, reinterpretations, and other ‘tricks’ such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied.” By the time they get to the actual request for a signature, they’ve already made their peace with lying: “When signing comes after reporting, the morality train has already left the station.”

The problem isn’t with the signature itself. Lots of studies have shown that focusing the attention on one’s self, which a signature does successfully, can cause people to behave more ethically. The problem comes from its placement after the lying has already happened. So, the authors posited a quick fix: stick the signature at the start. Their hypothesis was that “signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting.”

To test this proposal, they designed a series of forms that required self reporting of personal information, either involving performance on a math quiz where higher scores meant higher rewards, or the reimbursable travel expenses involved in getting to the study’s location. The only difference among the forms? Some did not ask for a signature, some put the signature on top, and some placed it in its traditional location, at the end.

In the case of the math quiz, the researchers actually tracked how well the participants had performed. With the signature at the end, a full 79 percent of the participants cheated. Somewhat fewer cheated when no signature was required, though the difference was not statistically significant. But when the signature was required on top, only 37 percent cheated—less than half the rate seen in the signature-at-bottom group. A similar pattern was seen when the authors analyzed the extent of the cheating involved.

Although they didn’t have complete information on travel expenses, the same pattern prevailed: people who were given the signature-on-top form reported fewer expenses than either of the other two groups.

The authors then repeated this experiment, but added a word completion task, where participants were given a series of blanks, some filled in with letters, and asked to complete the word. These completion tasks were set up so that they could be answered with neutral words or with those associated with personal ethics, like “virtue.” They got the same results as in the earlier tests of cheating, and the word completion task showed that the people who had signed on top were more likely to fill in the blanks to form ethics-focused words. This supported the contention that the early signature put people in an ethical state of mind prior to completion of the form.

But the really impressive part of the study came from its real-world demonstration of this effect. The authors got an unnamed auto insurance company to send out two versions of its annual renewal forms to over 13,000 policy holders, identical except for the location of the signature. One part of this form included a request for odometer readings, which the insurance companies use to calculate typical miles travelled, which are proportional to accident risk. These are used to calculate insurance cost—the more you drive, the more expensive it is.

Those who signed at the top reported nearly 2,500 miles more than the ones who signed at the end.

[div class=attrib]Read the entire article after the jump, or follow the article at PNAS, here.[end-div]

[div class=attrib]Image courtesy of University of Illinois at Urbana-Champaign.[end-div]

Scandinavian Killer on Ice

The title could be mistaken for a dark and violent crime novel from the likes of (Stieg) Larsson, Nesbø, Sjöwall-Wahlöö, or Henning Mankell. But this story is somewhat more mundane, though much more consequential. It’s a story about a Swedish cancer killer.

[div class=attrib]From the Telegraph:[end-div]

On the snow-clotted plains of central Sweden where Wotan and Thor, the clamorous gods of magic and death, once held sway, a young, self-deprecating gene therapist has invented a virus that eliminates the type of cancer that killed Steve Jobs.

‘Not “eliminates”! Not “invented”, no!’ interrupts Professor Magnus Essand, panicked, when I Skype him to ask about this explosive achievement.

‘Our results are only in the lab so far, not in humans, and many treatments that work in the lab can turn out to be not so effective in humans. However, adenovirus serotype 5 is a common virus in which we have achieved transcriptional targeting by replacing an endogenous viral promoter sequence by…’

It sounds too kindly of the gods to be true: a virus that eats cancer.

‘I sometimes use the phrase “an assassin who kills all the bad guys”,’ Prof Essand agrees contentedly.

Cheap to produce, the virus is exquisitely precise, with only mild, flu-like side-effects in humans. Photographs in research reports show tumours in test mice melting away.

‘It is amazing,’ Prof Essand gleams in wonder. ‘It’s better than anything else. Tumour cell lines that are resistant to every other drug, it kills them in these animals.’

Yet as things stand, Ad5[CgA-E1A-miR122]PTD – to give it the full gush of its most up-to-date scientific name – is never going to be tested to see if it might also save humans. Since 2010 it has been kept in a bedsit-sized mini freezer in a busy lobby outside Prof Essand’s office, gathering frost. (‘Would you like to see?’ He raises his laptop computer and turns, so its camera picks out a table-top Electrolux next to the lab’s main corridor.)

Two hundred metres away is the Uppsala University Hospital, a European Centre of Excellence in Neuroendocrine Tumours. Patients fly in from all over the world to be seen here, especially from America, where treatment for certain types of cancer lags five years behind Europe. Yet even when these sufferers have nothing else to hope for, have only months left to live, wave platinum credit cards and are prepared to sign papers agreeing to try anything, to hell with the side-effects, the oncologists are not permitted – would find themselves behind bars if they tried – to race down the corridors and snatch the solution out of Prof Essand’s freezer.

I found out about Prof Magnus Essand by stalking him. Two and a half years ago the friend who edits all my work – the biographer and genius transformer of rotten sentences and misdirected ideas, Dido Davies – was diagnosed with neuroendocrine tumours, the exact type of cancer that Steve Jobs had. Every three weeks she would emerge from the hospital after eight hours of chemotherapy infusion, as pale as ice but nevertheless chortling and optimistic, whereas I (having spent the day battling Dido’s brutal edits to my work, among drip tubes) would stumble back home, crack open whisky and cigarettes, and slump by the computer. Although chemotherapy shrank the tumour, it did not cure it. There had to be something better.

It was on one of those evenings that I came across a blog about a quack in Mexico who had an idea about using sub-molecular particles – nanotechnology. Quacks provide a very useful service to medical tyros such as myself, because they read all the best journals the day they appear and by the end of the week have turned the results into potions and tinctures. It’s like Tommy Lee Jones in Men in Black reading the National Enquirer to find out what aliens are up to, because that’s the only paper trashy enough to print the truth. Keep an eye on what the quacks are saying, and you have an idea of what might be promising at the Wild West frontier of medicine. This particular quack was in prison awaiting trial for the manslaughter (by quackery) of one of his patients, but his nanotechnology website led, via a chain of links, to a YouTube lecture about an astounding new therapy for neuroendocrine cancer based on pig microbes, which is currently being put through a variety of clinical trials in America.

I stopped the video and took a snapshot of the poster behind the lecturer’s podium listing useful research company addresses; on the website of one of these organisations was a reference to a scholarly article that, when I checked through the footnotes, led, via a doctoral thesis, to a Skype address – which I dialled.

‘Hey! Hey!’ Prof Magnus Essand answered.

To geneticists, the science makes perfect sense. It is a fact of human biology that healthy cells are programmed to die when they become infected by a virus, because this prevents the virus spreading to other parts of the body. But a cancerous cell is immortal; through its mutations it has somehow managed to turn off the bits of its genetic programme that enforce cell suicide. This means that, if a suitable virus infects a cancer cell, it could continue to replicate inside it uncontrollably and cause the cell to ‘lyse’ – or, in non-technical language, tear apart. The progeny viruses then spread to cancer cells nearby and repeat the process. A virus becomes, in effect, a cancer of cancer. In Prof Essand’s laboratory studies his virus surges through the bloodstreams of test animals, rupturing cancerous cells with Viking rapacity.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]The Snowman by Jo Nesbø. Image courtesy of Barnes and Noble.[end-div]

Corporate R&D meets Public Innovation

As corporate purse strings have drawn tighter, some companies have looked for innovation beyond the office cubicle.

[div class=attrib]From Technology Review:[end-div]

Where does innovation come from? For one answer, consider the work of MIT professor Eric von Hippel, who has calculated that ordinary U.S. consumers spend $20 billion in time and money trying to improve on household products—for example, modifying a dog-food bowl so it doesn’t slide on the floor. Von Hippel estimates that these backyard Edisons collectively invest more in their efforts than the largest corporation anywhere does in R&D.

The low-tech kludges of consumers might once have had little impact. But one company, Procter & Gamble, has actually found a way to tap into them; it now gets many of its ideas for new Swiffers and toothpaste tubes from the general public. One way it has managed to do so is with the help of InnoCentive, a company in Waltham, Massachusetts, that specializes in organizing prize competitions over the Internet. Volunteer “solvers” can try to earn $500 to $1 million by coming up with answers to a company’s problems.

We like Procter & Gamble’s story because the company has discovered a creative, systematic way to pay for ideas originating far outside of its own development labs. It’s made an innovation in funding innovation, which is the subject of this month’s Technology Review business report.

How we pay for innovation is a question prompted, in part, by the beleaguered state of the venture capital industry. Over the long term, it’s the system that’s most often gotten the economic incentives right. Consider that although fewer than two of every 1,000 new American businesses are venture backed, these account for 11 percent of public companies and 6 percent of U.S. employment, according to Harvard Business School professor Josh Lerner. (Many of those companies, although not all, have succeeded because they’ve brought new technology to market.)

Yet losses since the dot-com boom in the late 1990s have taken a toll. In August, the nation’s largest public pension fund, the California Public Employees Retirement System, said it would basically stop investing with the state’s venture funds, citing returns of 0.0 percent over a decade.

The crisis has partly to do with the size of venture funds—$1 billion isn’t uncommon. That means they need big money plays at a time when entrepreneurs are headed on exactly the opposite course. On the Web, it’s never been cheaper to start a company. You can outsource software development, rent a thousand servers, and order hardware designs from China. That is significant because company founders can often get the money they need from seed accelerators, angel investors, or Internet-based funding mechanisms such as Kickstarter.

“We’re in a period of incredible change in how you fund innovation, especially entrepreneurial innovation,” says Ethan Mollick, a professor of management science at the Wharton School. He sees what’s happening as a kind of democratization—the bets are getting smaller, but also more spread out and numerous. He thinks this could be a good thing. “One of the ways we get more innovation is by taking more draws,” he says.

In an example of the changes ahead, Mollick cites plans by the U.S. Securities and Exchange Commission to allow “crowdfunding”—it will let companies raise $1 million or so directly from the public, every year, over the Internet. (This activity had previously been outlawed as a hazard to gullible investors.) Crowdfunding may lead to a major upset in the way inventions get financed, especially those with popular appeal and modest funding requirements, like new gadget designs.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Louisiana Department of Education.[end-div]

The Power of Lists

Where would you be without lists? Surely your life would be much less organized were it not for the shopping list, gift list, re-gifting list, reading list, items to fix list, resolutions list, medications list, vacation list, work action items list, spouse to-do list, movies to see list, greeting card list, gift wish list, allergies list, school supplies list, and of course the places to visit before you die list. The lists just go on and on.

[div class=attrib]From the New York Times:[end-div]

WITH school starting and vacations ending, this is the month, the season of the list. But face it. We’re living in the era of the list, maybe even its golden age. The Web click has led to the wholesale repackaging of information into lists, which can be complex and wonderful pieces of information architecture. Our technology has imperceptibly infected us with “list thinking.”

Lists are the simplest way to organize information. They are also a symptom of our short attention spans.

The crudest of online lists are galaxies of buttons, replacing real stories. “Listicles,” you might say. They are just one step beyond magazine cover lines like “37 Ways to Drive Your Man Wild in Bed.” Bucket lists have produced competitive list making online. Like competitive birders, people check off books read or travel destinations visited.

But lists can also tell a story. Even the humble shopping list says something about the shopper — and the Netflix queue, a “smart list” built on experience and suggestion algorithms, says much about the subscriber.

Lists can reveal personal dramas. An exhibit of lists at the Morgan Library and Museum showed a passive-aggressive Picasso omitting his bosom buddy, Georges Braque, from a list of recommended artists.

We’ve come a long way from the primitive best-seller lists and hit parade lists, “crowd sourced,” if you will, from sales. We all have our “to-do” lists, and there is a modern, sophisticated form of the list that is as serious as the “best of…” list is frivolous. That is the checklist.

The surgeon Atul Gawande, in his book “The Checklist Manifesto,” explains the utility of the list in assuring orderly procedures and removing error. For all that society has accomplished in such fields as medicine and aviation, he argues, the know-how is often unmanageable — without a checklist.

A 70-page checklist put together by James Lovell, the commander of Apollo 13, helped him navigate the spacecraft back to Earth after an oxygen tank exploded. Capt. Chesley B. Sullenberger safely ditched his Airbus A-320 in the Hudson River after consulting the “engine out” checklist, which advised “Land ASAP” if the engines fail to restart.

At a local fast-food joint, I see checklists for cleanliness, one list for the front of the store and one for restrooms — a set of inspections and cleanups to be done every 30 minutes. The list is mapped on photo views, with numbers of the tasks over the areas in question. A checklist is a kind of story or narrative and has a long history in literature. The heroic list or catalog is a feature of epic poetry, from Homer to Milton. There is the famed catalog of ships and heroes in “The Iliad.”

Homer’s ships are also echoed in a list in Lewis Carroll’s “The Walrus and the Carpenter”: “‘The time has come,’ the walrus said, ‘to talk of many things: Of shoes — and ships — and sealing-wax — of cabbages — and kings.’” This is the prototype of the surrealist list.

There are other sorts of lists in literature. Vladimir Nabokov said he spent a long time working out the list (he called it a poem) of Lolita’s classmates in his famous novel; the names reflect the flavor of suburban America in the 1950s and give sly clues to the plot as well. There are hopeful names like Grace Angel and ominous ones like Aubrey McFate.

[div class=attrib]Read the entire article after the jump.[end-div]

Happy Birthday :-)

Thirty years ago today Professor Scott Fahlman of Carnegie Mellon University sent what is believed to be the first emoticon embedded in an email. The symbol, :-), which he proposed as a joke marker, spread rapidly, morphed and evolved into a universe of symbolic nods, winks, and cyber-emotions.

For a lengthy list of popular emoticons, including some very interesting Eastern ones, jump here.

[div class=attrib]From the Independent:[end-div]

To some, an email isn’t complete without the inclusion of :-) or :-(. To others, the very idea of using “emoticons” – communicative graphics – makes the blood boil and represents all that has gone wrong with the English language.

Regardless of your view, as emoticons celebrate their 30th anniversary this month, it is accepted that they are here to stay. Their birth can be traced to the precise minute: 11:44am on 19 September 1982. At that moment, Professor Scott Fahlman, of Carnegie Mellon University in Pittsburgh, sent an email on an online electronic bulletin board that included the first use of the sideways smiley face: “I propose the following character sequence for joke markers: :-) Read it sideways.” More than anyone, he must take the credit – or the blame.

The aim was simple: to allow those who posted on the university’s bulletin board to distinguish between those attempting to write humorous emails and those who weren’t. Professor Fahlman had seen how simple jokes were often misunderstood and attempted to find a way around the problem.

This weekend, the professor, a computer science researcher who still works at the university, says he is amazed his smiley face took off: “This was a little bit of silliness that I tossed into a discussion about physics,” he says. “It was ten minutes of my life. I expected my note might amuse a few of my friends, and that would be the end of it.”

But once his initial email had been sent, it wasn’t long before it spread to other universities and research labs via the primitive computer networks of the day. Within months, it had gone global.

Nowadays dozens of variations are available, mainly as little yellow, computer graphics. There are emoticons that wear sunglasses; some cry, while others don Santa hats. But Professor Fahlman isn’t a fan.

“I think they are ugly, and they ruin the challenge of trying to come up with a clever way to express emotions using standard keyboard characters. But perhaps that’s just because I invented the other kind.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

The Pleasure from Writing Long Sentences

Author Pico Iyer distances himself from the short bursts of broken language of the Twitterscape and the exclamatory sound-bites of our modern-day lives, and revels in the lush beauty of the long and winding sentence.

[div class=attrib]From the LA Times:[end-div]

“Your sentences are so long,” said a friend who teaches English at a local college, and I could tell she didn’t quite mean it as a compliment. The copy editor who painstakingly went through my most recent book often put yellow dashes on-screen around my multiplying clauses, to ask if I didn’t want to break up my sentences or put less material in every one. Both responses couldn’t have been kinder or more considered, but what my friend and my colleague may not have sensed was this: I’m using longer and longer sentences as a small protest against — and attempt to rescue any readers I might have from — the bombardment of the moment.

When I began writing for a living, my feeling was that my job was to give the reader something vivid, quick and concrete that she couldn’t get in any other form; a writer was an information-gathering machine, I thought, and especially as a journalist, my job was to go out into the world and gather details, moments, impressions as visual and immediate as TV. Facts were what we needed most. And if you watched the world closely enough, I believed (and still do), you could begin to see what it would do next, just as you can with a sibling or a friend; Don DeLillo or Salman Rushdie aren’t mystics, but they can tell us what the world is going to do tomorrow because they follow it so attentively.

Yet nowadays the planet is moving too fast for even a Rushdie or DeLillo to keep up, and many of us in the privileged world have access to more information than we know what to do with. What we crave is something that will free us from the overcrowded moment and allow us to see it in a larger light. No writer can compete, for speed and urgency, with texts or CNN news flashes or RSS feeds, but any writer can try to give us the depth, the nuances — the “gaps,” as Annie Dillard calls them — that don’t show up on many screens. Not everyone wants to be reduced to a sound bite or a bumper sticker.

Enter (I hope) the long sentence: the collection of clauses that is so many-chambered and lavish and abundant in tones and suggestions, that has so much room for near-contradiction and ambiguity and those places in memory or imagination that can’t be simplified, or put into easy words, that it allows the reader to keep many things in her head and heart at the same time, and to descend, as by a spiral staircase, deeper into herself and those things that won’t be squeezed into an either/or. With each clause, we’re taken further and further from trite conclusions — or that at least is the hope — and away from reductionism, as if the writer were a dentist, saying “Open wider” so that he can probe the tender, neglected spaces in the reader (though in this case it’s not the mouth that he’s attending to but the mind).

“There was a little stoop of humility,” Alan Hollinghurst writes in a sentence I’ve chosen almost at random from his recent novel “The Stranger’s Child,” “as she passed through the door, into the larger but darker library beyond, a hint of frailty, an affectation of bearing more than her fifty-nine years, a slight bewildered totter among the grandeur that her daughter now had to pretend to take for granted.” You may notice — though you don’t have to — that “humility” has rather quickly elided into “affectation,” and the point of view has shifted by the end of the sentence, and the physical movement through the rooms accompanies a gradual inner movement that progresses through four parallel clauses, each of which, though legato, suggests a slightly different take on things.

Many a reader will have no time for this; William Gass or Sir Thomas Browne may seem long-winded, the equivalent of driving from L.A. to San Francisco by way of Death Valley, Tijuana and the Sierras. And a highly skilled writer, a Hemingway or James Salter, can get plenty of shading and suggestion into even the shortest and straightest of sentences. But too often nowadays our writing is telegraphic as a way of keeping our thinking simplistic, our feeling slogan-crude. The short sentence is the domain of uninflected talk-radio rants and shouting heads on TV who feel that qualification or subtlety is an assault on their integrity (and not, as it truly is, integrity’s greatest adornment).

If we continue along this road, whole areas of feeling and cognition and experience will be lost to us. We will not be able to read one another very well if we can’t read Proust’s labyrinthine sentences, admitting us to those half-lighted realms where memory blurs into imagination, and we hide from the person we care for or punish the thing that we love. And how can we feel the layers, the sprawl, the many-sidedness of Istanbul in all its crowding amplitude without the 700-word sentence, transcribing its features, that Orhan Pamuk offered in tribute to his lifelong love?

[div class=attrib]Read the entire article after the jump.[end-div]

Old Concepts Die Hard

Regardless of how flawed old scientific concepts may be, researchers have found that it is remarkably difficult for people to give them up and accept sound, new reasoning. Even scientists are creatures of habit.

[div class=attrib]From Scientific American:[end-div]

In one sense, science educators have it easy. The things they describe are so intrinsically odd and interesting — invisible fields, molecular machines, principles explaining the unity of life and origins of the cosmos — that much of the pedagogical attention-getting is built right in.  Where they have it tough, though, is in having to combat an especially resilient form of higher ed’s nemesis: the aptly named (if irredeemably clichéd) ‘preconceived idea.’ Worse than simple ignorance, naïve ideas about science lead people to make bad decisions with confidence. And in a world where many high-stakes issues fundamentally boil down to science, this is clearly a problem.

Naturally, the solution to the problem lies in good schooling — emptying minds of their youthful hunches and intuitions about how the world works, and repopulating them with sound scientific principles that have been repeatedly tested and verified. Wipe out the old operating system, and install the new. According to a recent paper by Andrew Shtulman and Joshua Valcarcel, however, we may not be able to replace old ideas with new ones so cleanly. Although science as a field discards theories that are wrong or lacking, Shtulman and Valcarcel’s work suggests that individuals —even scientifically literate ones — tend to hang on to their early, unschooled, and often wrong theories about the natural world. Even long after we learn that these intuitions have no scientific support, they can still subtly persist and influence our thought process. Like old habits, old concepts seem to die hard.

Testing for the persistence of old concepts can’t be done directly. Instead, one has to set up a situation in which old concepts, if present, measurably interfere with mental performance. To do this, Shtulman and Valcarcel designed a task that tested how quickly and accurately subjects verified short scientific statements (for example: “air is composed of matter.”). In a clever twist, the authors interleaved two kinds of statements — “consistent” ones that had the same truth-value under a naive theory and a proper scientific theory, and “inconsistent” ones. For example, the statement “air is composed of matter”  is inconsistent: it’s false under a naive theory (air just seems like empty space, right?), but is scientifically true. By contrast, the statement “people turn food into energy” is consistent: anyone who’s ever eaten a meal knows it’s true, and science affirms this by filling in the details about digestion, respiration and metabolism.

Shtulman and Valcarcel tested 150 college students on a battery of 200 such statements that included an equal and random mix of consistent and inconsistent statements from several domains, including astronomy, evolution, physiology, genetics, waves, and others. The scientists measured participants’ response speed and accuracy, and looked for systematic differences in how consistent vs. inconsistent statements were evaluated.

If scientific concepts, once learned, are fully internalized and don’t conflict with our earlier naive concepts, one would expect consistent and inconsistent statements to be processed similarly. On the other hand, if naive concepts are never fully supplanted, and are quietly threaded into our thought process, it should take longer to evaluate inconsistent statements. In other words, it should take a bit of extra mental work (and time) to go against the grain of a naive theory we once held.

This is exactly what Shtulman and Valcarcel found. While there was some variability between the different domains tested, inconsistent statements took almost a half second longer to verify, on average. Granted, there’s a significant wrinkle in interpreting this result. Specifically, it may simply be the case that scientific concepts that conflict with naive intuition are simply learned more tenuously than concepts that are consistent with our intuition. Under this view, differences in response times aren’t necessarily evidence of ongoing inner conflict between old and new concepts in our brains — it’s just a matter of some concepts being more accessible than others, depending on how well they were learned.
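To make the comparison concrete, here is a minimal sketch of the kind of analysis involved: simulated verification times for the two statement types, with the means and spreads invented so the gap roughly matches the half-second reported, compared via a Welch t statistic. It illustrates the design only; it is not a reproduction of Shtulman and Valcarcel’s analysis.

```python
import random
import statistics

random.seed(7)

# Simulated per-statement verification times in seconds (invented parameters).
consistent = [random.gauss(1.8, 0.4) for _ in range(150)]
inconsistent = [random.gauss(2.3, 0.4) for _ in range(150)]

def welch_t(a, b):
    """Welch's t statistic for a difference in means (no p-value lookup here)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / se

slowdown = statistics.mean(inconsistent) - statistics.mean(consistent)
print(f"Mean slowdown on inconsistent statements: {slowdown:.2f} s")
print(f"Welch t statistic: {welch_t(consistent, inconsistent):.1f}")
```

The interpretive wrinkle the article raises still applies: a reliable slowdown tells you the inconsistent statements are harder, not why they are harder.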

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of New Scientist.[end-div]