All posts by Mike

Curiosity in Flight

NASA pulled off another tremendous and daring feat of engineering when it successfully landed the Mars Science Laboratory (MSL) on the surface of Mars on August 5, 2012, at 10:32 PM Pacific Time.

The MSL spacecraft carried the Curiosity rover, a 2,000-pound, car-size robot. Not only did NASA land Curiosity a mere one second behind schedule, following a journey of over 576 million kilometers (358 million miles) lasting around eight months; it went one better. NASA had one of its Mars orbiters, the Mars Reconnaissance Orbiter, snap an image of MSL from around 300 miles away as it descended through the Martian atmosphere, with its supersonic parachute unfurled.
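
(A rough back-of-the-envelope check, using the figures above rather than anything from NASA: 576 million kilometers in roughly 250 days, or about 21.6 million seconds, works out to an average of around 27 kilometers per second, close to 60,000 mph, along the route.)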

Another historic day for science, engineering and exploration.

[div class=attrib]From NASA / JPL:[end-div]

NASA’s Curiosity rover and its parachute were spotted by NASA’s Mars Reconnaissance Orbiter as Curiosity descended to the surface on Aug. 5 PDT (Aug. 6 EDT). The High-Resolution Imaging Science Experiment (HiRISE) camera captured this image of Curiosity while the orbiter was listening to transmissions from the rover. Curiosity and its parachute are in the center of the white box; the inset image is a cutout of the rover stretched to avoid saturation. The rover is descending toward the etched plains just north of the sand dunes that fringe “Mt. Sharp.” From the perspective of the orbiter, the parachute and Curiosity are flying at an angle relative to the surface, so the landing site does not appear directly below the rover.

The parachute appears fully inflated and performing perfectly. Details in the parachute, such as the band gap at the edges and the central hole, are clearly seen. The cords connecting the parachute to the back shell cannot be seen, although they were seen in the image of NASA’s Phoenix lander descending, perhaps due to the difference in lighting angles. The bright spot on the back shell containing Curiosity might be a specular reflection off of a shiny area. Curiosity was released from the back shell sometime after this image was acquired.

This view is one product from an observation made by HiRISE targeted to the expected location of Curiosity about one minute prior to landing. It was captured in HiRISE CCD RED1, near the eastern edge of the swath width (there is a RED0 at the very edge). This means that the rover was a bit further east or downrange than predicted.

[div class=attrib]Follow the mission after the jump.[end-div]

[div class=attrib]Image courtesy of NASA/JPL-Caltech/Univ. of Arizona.[end-div]

The Radium Girls and the Polonium Assassin

Deborah Blum’s story begins with Marie Curie’s analysis of a “strange energy” released from uranium ore, and ends with the assassination of Russian dissident Alexander Litvinenko in 2006.

[div class=attrib]From Wired:[end-div]

In the late 19th century, a then-unknown chemistry student named Marie Curie was searching for a thesis subject. With encouragement from her husband, Pierre, she decided to study the strange energy released by uranium ores, a sizzle of power far greater than uranium alone could explain.

The results of that study are today among the most famous in the history of science. The Curies discovered not one but two new radioactive elements in their slurry of material (and Marie invented the word radioactivity to help explain them.) One was the glowing element radium. The other, which burned brighter and briefer, she named after her home country of Poland — Polonium (from the Latin root, polonia). In honor of that discovery, the Curies shared the 1903 Nobel Prize in Physics with their French colleague Henri Becquerel for his work with uranium.

Radium was always Marie Curie’s first love – “radium, my beautiful radium”, she used to call it. Her continued focus gained her a second Nobel Prize in chemistry in 1911. (Her Nobel lecture was titled Radium and New Concepts in Chemistry.)  It was also the higher-profile radium — embraced in a host of medical, industrial, and military uses — that first called attention to the health risks of radioactive elements. I’ve told some of that story here before in a look at the deaths and illnesses suffered by the “Radium Girls,” young women who in the 1920s painted watch-dial faces with radium-based luminous paint.

Polonium remained the unstable, mostly ignored step-child element of the story, less famous, less interesting, less useful than Curie’s beautiful radium. Until the last few years, that is. Until the reported 2006 assassination by polonium-210 of Russian spy turned dissident, Alexander Litvinenko. And until the news this week, first reported by Al Jazeera, that surprisingly high levels of polonium-210 were detected by a Swiss laboratory in the clothes and other effects of the late Palestinian leader Yasser Arafat.

Arafat, 75, had been held for almost two years under an Israeli form of house arrest when he died in 2004 of a sudden wasting illness. His rapid deterioration led to a welter of conspiracy theories that he’d been poisoned, some accusing his political rivals and many more accusing Israel, which has steadfastly denied any such plot.

Recently (and for undisclosed reasons) his widow agreed to the forensic analysis of articles including clothes, a toothbrush, bed sheets, and his favorite kaffiyeh. Al Jazeera arranged for the analysis and took the materials to Europe for further study. After the University of Lausanne’s Institute of Radiation Physics released the findings, Suha Arafat asked that her husband’s body be exhumed and tested for polonium. Palestinian authorities have indicated that they may do so within the week.

And at this point, as we anticipate those results, it’s worth asking some questions about the use of a material like polonium as an assassination poison. Why, for instance, pick a poison that leaves such a durable trail of evidence behind? In the case of the Radium Girls, whom I mentioned earlier, scientists found that their bones were still hissing with radiation years after their deaths. In the case of Litvinenko, public health investigators found that he’d literally left a trail of radioactive residues across London, where he was living at the time of his death.

In what we might imagine as the clever world of covert killings, why would a messy element like polonium even be on the assassination list? To answer that, it helps to begin by stepping back to some of the details provided in the Curies’ seminal work. Both radium and polonium are links in a chain of radioactive decay (element changes due to particle emission) that begins with uranium. Polonium, which eventually decays to an isotope of lead, is one of the more unstable points in this chain, unstable enough that there are some 33 known variants (isotopes) of the element.

Of these, the best known and most abundant is the energetic isotope polonium-210, with its half-life of 138 days. Half-life refers to the time it takes for a radioactive element to burn through its energy supply, essentially the time it takes for activity to decrease by half. For comparison, the half-life of the uranium isotope U-235, which often features in weapon design, is 700 million years. In other words, polonium is a little blast furnace of radioactive energy. The speed of its decay means that eight years after Arafat’s death, it would probably be identified by its breakdown products. And it’s on that note – its life as a radioactive element – that it becomes interesting as an assassin’s weapon.
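
A quick back-of-the-envelope calculation (mine, not Blum’s) shows just how fast that furnace burns: eight years is about 2,920 days, or roughly 21 half-lives of polonium-210, so the fraction of an original quantity still remaining would be about (1/2)^21, less than one part in two million; hence the expectation that any signal today would come largely from its breakdown products.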

Like radium, polonium’s radiation is primarily in the form of alpha rays — the emission of alpha particles. Compared to other subatomic particles, alpha particles tend to be high energy and high mass. Their relatively larger mass means that they don’t penetrate as well as other forms of radiation; in fact, alpha particles barely penetrate the skin. And they can be stopped from even that by a piece of paper or protective clothing.

That may make them sound safe. It shouldn’t. It should just alert us that these are only really dangerous when they are inside the body. If a material emitting alpha radiation is swallowed or inhaled, there’s nothing benign about it. Scientists realized, for instance, that the reason the Radium Girls died of radiation poisoning was that they were lip-pointing their paintbrushes and swallowing radium-laced paint. The radioactive material deposited in their bones — which literally crumbled. Radium, by the way, has a half-life of about 1,600 years, which means that it’s not in polonium’s league as an alpha emitter. How bad is this? By mass, polonium-210 is considered to be about 250,000 times more poisonous than hydrogen cyanide. Toxicologists estimate that an amount the size of a grain of salt could be fatal to the average adult.

In other words, a victim would never taste a lethal dose in food or drink. In the case of Litvinenko, investigators believed that he received his dose of polonium-210 in a cup of tea, dosed during a meeting with two Russian agents. (Just as an aside, alpha particles tend not to set off radiation detectors, so it’s relatively easy to smuggle polonium from country to country.) Another assassin advantage is that illness comes on gradually, making it hard to pinpoint the event. Yet another advantage is that polonium poisoning is so rare that it’s not part of a standard toxics screen. In Litvinenko’s case, the poison wasn’t identified until shortly after his death. In Arafat’s case — if polonium-210 killed him, and that has not been established — obviously it wasn’t considered at the time. And finally, it gets the job done. “Once absorbed,” notes the U.S. Nuclear Regulatory Commission, “the alpha radiation can rapidly destroy major organs, DNA and the immune system.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Pierre and Marie Curie in the laboratory, Paris c1906. Courtesy of Wikipedia.[end-div]

The North Continues to Melt Away

On July 16, 2012, the Petermann Glacier in Greenland calved another gigantic island of ice, about twice the size of Manhattan in New York, or about 46 square miles. Climatologists armed with NASA satellite imagery have been following the glacier for many years, and first spotted the break-off point around eight years ago. The Petermann Glacier calved a previous huge iceberg, twice this size, in 2010.

According to NASA, average temperatures in northern Greenland and the Canadian Arctic have increased by about 4 degrees Fahrenheit in the last 30 years.

So, driven by climate change or not, regardless of whether it is short-term or long-term, temporary or irreversible, man-made or a natural cycle, the trend is clear — the Arctic is warming, the ice cap is shrinking and sea levels are rising.

[div class=attrib]From the Economist:[end-div]

STANDING ON THE Greenland ice cap, it is obvious why restless modern man so reveres wild places. Everywhere you look, ice draws the eye, squeezed and chiselled by a unique coincidence of forces. Gormenghastian ice ridges, silver and lapis blue, ice mounds and other frozen contortions are minutely observable in the clear Arctic air. The great glaciers impose order on the icy sprawl, flowing down to a semi-frozen sea.

The ice cap is still, frozen in perturbation. There is not a breath of wind, no engine’s sound, no bird’s cry, no hubbub at all. Instead of noise, there is its absence. You feel it as a pressure behind the temples and, if you listen hard, as a phantom roar. For generations of frosty-whiskered European explorers, and still today, the ice sheet is synonymous with the power of nature.

The Arctic is one of the world’s least explored and last wild places. Even the names of its seas and rivers are unfamiliar, though many are vast. Siberia’s Yenisey and Lena each carries more water to the sea than the Mississippi or the Nile. Greenland, the world’s biggest island, is six times the size of Germany. Yet it has a population of just 57,000, mostly Inuit scattered in tiny coastal settlements. In the whole of the Arctic—roughly defined as the Arctic Circle and a narrow margin to the south (see map)—there are barely 4m people, around half of whom live in a few cheerless post-Soviet cities such as Murmansk and Magadan. In most of the rest, including much of Siberia, northern Alaska, northern Canada, Greenland and northern Scandinavia, there is hardly anyone. Yet the region is anything but inviolate.

Fast forward

A heat map of the world, colour-coded for temperature change, shows the Arctic in sizzling maroon. Since 1951 it has warmed roughly twice as much as the global average. In that period the temperature in Greenland has gone up by 1.5°C, compared with around 0.7°C globally. This disparity is expected to continue. A 2°C increase in global temperatures—which appears inevitable as greenhouse-gas emissions soar—would mean Arctic warming of 3-6°C.

Almost all Arctic glaciers have receded. The area of Arctic land covered by snow in early summer has shrunk by almost a fifth since 1966. But it is the Arctic Ocean that is most changed. In the 1970s, 80s and 90s the minimum extent of polar pack ice fell by around 8% per decade. Then, in 2007, the sea ice crashed, melting to a summer minimum of 4.3m sq km (1.7m square miles), close to half the average for the 1960s and 24% below the previous minimum, set in 2005. This left the north-west passage, a sea lane through Canada’s 36,000-island Arctic Archipelago, ice-free for the first time in memory.

Scientists, scrambling to explain this, found that in 2007 every natural variation, including warm weather, clear skies and warm currents, had lined up to reinforce the seasonal melt. But last year there was no such remarkable coincidence: it was as normal as the Arctic gets these days. And the sea ice still shrank to almost the same extent.

There is no serious doubt about the basic cause of the warming. It is, in the Arctic as everywhere, the result of an increase in heat-trapping atmospheric gases, mainly carbon dioxide released when fossil fuels are burned. Because the atmosphere is shedding less solar heat, it is warming—a physical effect predicted back in 1896 by Svante Arrhenius, a Swedish scientist. But why is the Arctic warming faster than other places?

Consider, first, how very sensitive to temperature change the Arctic is because of where it is. In both hemispheres the climate system shifts heat from the steamy equator to the frozen pole. But in the north the exchange is much more efficient. This is partly because of the lofty mountain ranges of Europe, Asia and America that help mix warm and cold fronts, much as boulders churn water in a stream. Antarctica, surrounded by the vast southern seas, is subject to much less atmospheric mixing.

The land masses that encircle the Arctic also prevent the polar oceans revolving around it as they do around Antarctica. Instead they surge, north-south, between the Arctic land masses in a gigantic exchange of cold and warm water: the Pacific pours through the Bering Strait, between Siberia and Alaska, and the Atlantic through the Fram Strait, between Greenland and Norway’s Svalbard archipelago.

That keeps the average annual temperature for the high Arctic (the northernmost fringes of land and the sea beyond) at a relatively sultry -15°C; much of the rest is close to melting-point for much of the year. Even modest warming can therefore have a dramatic effect on the region’s ecosystems. The Antarctic is also warming, but with an average annual temperature of -57°C it will take more than a few hot summers for this to become obvious.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Sequence of three images showing the Petermann Glacier sliding toward the sea along the northwestern coast of Greenland, terminating in a huge, new floating ice island. Courtesy: NASA.[end-div]

Procrastination is a Good Thing

Procrastinators have known it for a long time: success comes from making a decision at the last possible moment.

Procrastinating professor Frank Partnoy expands on this theory in his book, “Wait: The Art and Science of Delay”.

[div class=attrib]From Smithsonian:[end-div]

Sometimes life seems to happen at warp speed. But, decisions, says Frank Partnoy, should not. When the financial market crashed in 2008, the former investment banker and corporate lawyer, now a professor of finance and law and co-director of the Center for Corporate and Securities Law at the University of San Diego, turned his attention to literature on decision-making.

“Much recent research about decisions helps us understand what we should do or how we should do it, but it says little about when,” he says.

In his new book, Wait: The Art and Science of Delay, Partnoy claims that when faced with a decision, we should assess how long we have to make it, and then wait until the last possible moment to do so. Should we take his advice on how to “manage delay,” we will live happier lives.

It is not surprising that the author of a book titled Wait is a self-described procrastinator. In what ways do you procrastinate?

I procrastinate in just about every possible way and always have, since my earliest memories, going back to when I first started going to elementary school and had these arguments with my mother about making my bed.

My mom would ask me to make my bed before going to school. I would say, no, because I didn’t see the point of making my bed if I was just going to sleep in it again that night. She would say, well, we have guests coming over at 6 o’clock, and they might come upstairs and look at your room. I said, I would make my bed when we know they are here. I want to see a car in the driveway. I want to hear a knock on the door. I know it will take me about one minute to make my bed so at 5:59, if they are here, I will make my bed.

I procrastinated all through college and law school. When I went to work at Morgan Stanley, I was delighted to find that although the pace of the trading floor is frenetic and people are very fast, there were lots of incredibly successful mentors of procrastination.

Now, I am an academic. As an academic, procrastination is practically a job requirement. If I were to say I would be submitting an academic paper by September 1, and I submitted it in August, people would question my character.

It has certainly been drilled into us that procrastination is a bad thing. Yet, you argue that we should embrace it. Why?

Historically, for human beings, procrastination has not been regarded as a bad thing. The Greeks and Romans generally regarded procrastination very highly. The wisest leaders embraced procrastination and would basically sit around and think and not do anything unless they absolutely had to.

The idea that procrastination is bad really started in the Puritanical era with Jonathan Edwards’s sermon against procrastination and then the American embrace of “a stitch in time saves nine,” and this sort of work ethic that required immediate and diligent action.

But if you look at recent studies, managing delay is an important tool for human beings. People are more successful and happier when they manage delay. Procrastination is just a universal state of being for humans. We will always have more things to do than we can possibly do, so we will always be imposing some sort of unwarranted delay on some tasks. The question is not whether we are procrastinating, it is whether we are procrastinating well.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of eHow.[end-div]

Curiosity: August 5, 2012, 10:31 PM Pacific Time

This is the moment when NASA’s latest foray into space reaches its zenith: the landing of the Curiosity rover on Mars. NASA’s Mars Science Laboratory (MSL) mission plans to deliver the nearly 2,000-pound, car-size robot rover to the surface of Mars, after which Curiosity will embark on two years of exploration of the Red Planet.

For mission scientists and science buffs alike, Curiosity’s descent and landing will be a major event. And, for the first time, NASA will have a visual feed beamed back direct from the spacecraft (but only available after the event). The highly complex and fully automated landing has been dubbed “the Seven Minutes of Terror” by NASA engineers, named for the roughly seven minutes the spacecraft takes to travel from the top of the Martian atmosphere to the surface. And because radio signals from Mars currently take about 14 minutes to reach Earth, mission scientists (and the rest of us) will not know whether Curiosity descended and landed safely until well after the fact.
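
The delay itself is simple light-travel arithmetic (a rough sketch; the distance figure is approximate): Earth and Mars will be about 248 million kilometers apart on landing day, and radio waves travel at light speed, so a one-way transmission takes about 248,000,000 km ÷ 300,000 km/s ≈ 830 seconds, a little under 14 minutes. By the time word of atmospheric entry arrives on Earth, the seven-minute landing sequence will already be over, one way or the other.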

For more on Curiosity and this special event, visit NASA’s Jet Propulsion Laboratory MSL site, here.

[div class=attrib]Image: This artist’s concept features NASA’s Mars Science Laboratory Curiosity rover, a mobile robot for investigating Mars’ past or present ability to sustain microbial life. Courtesy: NASA/JPL-Caltech.[end-div]

Re-resurgence of the United States

Those who have written off the United States in the 21st century may need to think again. A combination of healthy demographics, sound intellectual capital, institutionalized innovation and fracking (yes, fracking) has placed the U.S. on a sound footing for the future, despite current political and economic woes.

[div class=attrib]From the Wilson Quarterly:[end-div]

If the United States were a person, a plausible diagnosis could be made that it suffers from manic depression. The country’s self-perception is highly volatile, its mood swinging repeatedly from euphoria to near despair and back again. Less than a decade ago, in the wake of the deceptively easy triumph over the wretched legions of Saddam Hussein, the United States was the lonely superpower, the essential nation. Its free markets and free thinking and democratic values had demonstrated their superiority over all other forms of human organization. Today the conventional wisdom speaks of inevitable decline and of equally inevitable Chinese triumph; of an American financial system flawed by greed and debt; of a political system deadlocked and corrupted by campaign contributions, negative ads, and lobbyists; of a social system riven by disparities of income, education, and opportunity.

It was ever thus. The mood of justified triumph and national solidarity after global victory in 1945 gave way swiftly to an era of loyalty oaths, political witch-hunts, and Senator Joseph McCarthy’s obsession with communist moles. The Soviet acquisition of the atom bomb, along with the victory of Mao Zedong’s communist armies in China, had by the end of the 1940s infected America with the fear of existential defeat. That was to become a pattern; at the conclusion of each decade of the Cold War, the United States felt that it was falling behind. The successful launch of the Sputnik satellite in 1957 triggered fears that the Soviet Union was winning the technological race, and the 1960 presidential election was won at least in part by John F. Kennedy’s astute if disingenuous claim that the nation was threatened by a widening “missile gap.”

At the end of the 1960s, with cities burning in race riots, campuses in an uproar, and a miserably unwinnable war grinding through the poisoned jungles of Indochina, an American fear of losing the titanic struggle with communism was perhaps understandable. Only the farsighted saw the importance of the contrast between American elections and the ruthless swagger of the Red Army’s tanks crushing the Prague Spring of 1968. At the end of the 1970s, with American diplomats held hostage in Tehran, a Soviet puppet ruling Afghanistan, and glib talk of Soviet troops soon washing their feet in the Indian Ocean, Americans waiting in line for gasoline hardly felt like winners. Yet at the end of the 1980s, what a surprise! The Cold War was over and the good guys had won.

Naturally, there were many explanations for this, from President Ronald Reagan’s resolve to Mikhail Gorbachev’s decency; from American industrial prowess to Soviet inefficiency. The most cogent reason was that the United States back in the late 1940s had crafted a bipartisan grand strategy for the Cold War that proved to be both durable and successful. It forged a tripartite economic alliance of Europe, North America, and Japan, backed up by various regional treaty organizations such as NATO, and counted on scientists, inventors, business leaders, and a prosperous and educated work force to deliver both guns and butter for itself and its allies. State spending on defense and science would keep unemployment at bay while Social Security would ensure that the siren songs of communism had little to offer the increasingly comfortable workers of the West. And while the West waited for its wealth and technologies to attain overwhelming superiority, its troops, missiles, and nuclear deterrent would contain Soviet and Chinese hopes of expansion.

It worked. The Soviet Union collapsed, and the Chinese leadership drew the appropriate lessons. (The Chinese view was that by starting with glasnost and political reform, and ducking the challenge of economic reform, Gorbachev had gotten the dynamics of change the wrong way round.) But by the end of 1991, the Democrat who would win the next year’s New Hampshire primary (Senator Paul Tsongas of Massachusetts) had a catchy new campaign slogan: “The Cold War is over—and Japan won.” With the country in a mild recession and mega-rich Japanese investors buying up landmarks such as Manhattan’s Rockefeller Center and California’s Pebble Beach golf course, Tsongas’s theme touched a national chord. But the Japanese economy has barely grown since, while America’s gross domestic product has almost doubled.

There are, of course, serious reasons for concern about the state of the American economy, society, and body politic today. But remember, the United States is like the weather in Ireland; if you don’t like it, just wait a few minutes and it’s sure to shift. This is a country that has been defined by its openness to change and innovation, and the search for the latest and the new has transformed the country’s productivity and potential. This openness, in effect, was America’s secret weapon that won both World War II and the Cold War. We tend to forget that the Soviet Union fulfilled Nikita Khrushchev’s pledge in 1961 to outproduce the United States in steel, coal, cement, and fertilizer within 20 years. But by 1981 the United States was pioneering a new kind of economy, based on plastics, silicon, and transistors, while the Soviet Union lumbered on building its mighty edifice of obsolescence.

This is the essence of America that the doom mongers tend to forget. Just as we did after Ezra Cornell built the nationwide telegraph system and after Henry Ford developed the assembly line, we are again all living in a future invented in America. No other country produced, or perhaps even could have produced, the transformative combination of Microsoft, Apple, Google, Amazon, and Facebook. The American combination of universities, research, venture capital, marketing, and avid consumers is easy to envy but tough to emulate. It’s not just free enterprise. The Internet itself might never have been born but for the Pentagon’s Defense Advanced Research Projects Agency, and much of tomorrow’s future is being developed at the nanotechnology labs at the Argonne National Laboratory outside Chicago and through the seed money of Department of Energy research grants.

American research labs are humming with new game-changing technologies. One MIT-based team is using viruses to bind and create new materials to build better batteries, while another is using viruses to create catalysts that can turn natural gas into oil and plastics. A University of Florida team is pioneering a practical way of engineering solar cells from plastics rather than silicon. The Center for Bits and Atoms at MIT was at the forefront of the revolution in fabricators, assembling 3-D printers and laser milling and cutting machines into a factory-in-a-box that just needs data, raw materials, and a power source to turn out an array of products. Now that the latest F-18 fighters are flying with titanium parts that were made by a 3-D printer, you know the technology has taken off. Some 23,000 such printers were sold last year, most of them to the kind of garage tinkerers—many of them loosely grouped in the “maker movement” of freelance inventors—who more than 70 years ago created Hewlett-Packard and 35 years ago produced the first Apple personal computer.

The real game changer for America is the combination of two not-so-new technologies: hydraulic fracturing (“fracking”) of underground rock formations and horizontal drilling, which allows one well to spin off many more deep underground. The result has been a “frack gas” revolution. As recently as 2005, the U.S. government assumed that the country had about a 10-year supply of natural gas remaining. Now it knows that there is enough for at least several decades. In 2009, the United States outpaced Russia to become the world’s top natural gas producer. Just a few years ago, the United States had five terminals receiving imported liquefied natural gas (LNG), and permits had been issued to build 17 more. Today, one of the five plants is being converted to export U.S. gas, and the owners of three others have applied to do the same. (Two applications to build brand new export terminals are also pending.) The first export contract, worth $8 billion, was signed with Britain’s BG Group, a multinational oil and gas company. Sometime between 2025 and 2030, America is likely to become self-sufficient in energy again. And since imported energy accounts for about half of the U.S. trade deficit, fracking will be a game changer in more ways than one.

The supply of cheap and plentiful local gas is already transforming the U.S. chemical industry by making cheap feedstock available—ethylene, a key component of plastics, and other crucial chemicals are derived from natural gas in a process called ethane cracking. Many American companies have announced major projects that will significantly boost U.S. petrochemical capacity. In addition to expansions along the Gulf Coast, Shell Chemical plans to build a new ethane cracking plant in Pennsylvania, near the Appalachian Mountains’ Marcellus Shale geologic formation. LyondellBasell Industries is seeking to increase ethylene output at its Texas plants, and Williams Companies is investing $3 billion in Gulf Coast development. In short, billions of dollars will pour into regions of the United States that desperately need investment. The American Chemistry Council projects that over several years the frack gas revolution will create 400,000 new jobs, adding $130 billion to the economy and more than $4 billion in annual tax revenues. The prospect of cheap power also promises to improve the balance sheets of the U.S. manufacturing industry.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Time Flows Uphill

Many people in industrialized countries describe time as flowing like a river: it flows back into the past, and it flows forward into the future. Of course, for bored workers time sometimes stands still, while for kids on summer vacation time flows all too quickly. And, for many people over, say, the age of forty, days often drag, but the years fly by.

For some, though, time flows uphill, and the past lies downhill.

[div class=attrib]From New Scientist:[end-div]

“HERE and now”, “Back in the 1950s”, “Going forward”… Western languages are full of spatial metaphors for time, and whether you are, say, British, French or German, you no doubt think of the past as behind you and the future as stretching out ahead. Time is a straight line that runs through your body.

Once thought to be universal, this “embodied cognition of time” is in fact strictly cultural. Over the past decade, encounters with various remote tribal societies have revealed a rich diversity of the ways in which humans relate to time (see “Attitudes across the latitudes”). The latest, coming from the Yupno people of Papua New Guinea, is perhaps the most remarkable. Time for the Yupno flows uphill and is not even linear.

Rafael Núñez of the University of California, San Diego, led his team into the Finisterre mountain range of north-east Papua New Guinea to study the Yupno living in the village of Gua. There are no roads in this remote region. The Yupno have no electricity or even domestic animals to work the land. They live with very little contact with the western world.

Núñez and his colleagues noticed that the tribespeople made spontaneous gestures when speaking about the past, present and future. They filmed and analysed the gestures and found that for the Yupno the past is always downhill, in the direction of the mouth of the local river. The future, meanwhile, is towards the river’s source, which lies uphill from Gua.

This was true regardless of the direction they were facing. For instance, if they were facing downhill when talking about the future, a person would gesture backwards up the slope. But when they turned around to face uphill, they pointed forwards.

Núñez thinks the explanation is historical. The Yupno’s ancestors arrived by sea and climbed up the 2500-metre-high mountain valley, so lowlands may represent the past, and time flows uphill.

But the most unusual aspect of the Yupno timeline is its shape. The village of Gua, the river’s source and its mouth do not lie in a straight line, so the timeline is kinked. “This is the first time ever that a culture has been documented to have everyday notions of time anchored in topographic properties,” says Núñez.

Within the dark confines of their homes, geographical landmarks disappear and the timeline appears to straighten out somewhat. The Yupno always point towards the doorway when talking about the past, and away from the door to indicate the future, regardless of their home’s orientation. That could be because entrances are always raised, says Núñez. You have to climb down – towards the past – to leave the house, so each home has its own timeline.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The Persistence of Memory, by Salvador Dalí. Courtesy of Salvador Dalí, Gala-Salvador Dalí Foundation / Artists Rights Society (ARS), Museum of Modern Art New York / Wikipedia.[end-div]

Corporate Corruption: Greed, Lies and Nothing New

The last couple of decades have seen some remarkable cases of corporate excess and corruption. The deep-rooted human inclinations toward greed, telling falsehoods and exhibiting questionable ethics can probably be traced to the dawn of bipedalism. However, in more recent times we have seen misdeeds, particularly in the business world, grow in their daring, scale and impact.

We’ve seen WorldCom overstate its cash flows, Parmalat falsify its accounts, Lehman Brothers (and other investment banks) hide critical information from investors, Enron cook all its books, Bernard Madoff run his immense Ponzi scheme, Halliburton overcharge on government contracts, Tyco executives loot their own company, Wells Fargo and other retail banks robo-sign contracts, investment banks sell questionable products to investors and then bet against them, and now, most recently, Barclays and other big banks manipulate interest rates.

These tales of gluttony and wrongdoing are a dream for social scientists; and for the public in general, well, we tend to let the fat cats just get fatter and nastier. And where are the regulators, legislators and enforcers of the law? Generally asleep at the wheel, or in bed, so to speak, with their corporate donors. No wonder we all yawn at the latest scandal. However, some suggest this complacency undermines the very foundations of western capitalism.

[div class=attrib]From the New York Times:[end-div]

Perhaps the most surprising aspect of the Libor scandal is how familiar it seems. Sure, for some of the world’s leading banks to try to manipulate one of the most important interest rates in contemporary finance is clearly egregious. But is that worse than packaging billions of dollars worth of dubious mortgages into a bond and having it stamped with a Triple-A rating to sell to some dupe down the road while betting against it? Or how about forging documents on an industrial scale to foreclose fraudulently on countless homeowners?

The misconduct of the financial industry no longer surprises most Americans. Only about one in five has much trust in banks, according to Gallup polls, about half the level in 2007. And it’s not just banks that are frowned upon. Trust in big business overall is declining. Sixty-two percent of Americans believe corruption is widespread across corporate America. According to Transparency International, an anticorruption watchdog, nearly three in four Americans believe that corruption has increased over the last three years.

We should be alarmed that corporate wrongdoing has come to be seen as such a routine occurrence. Capitalism cannot function without trust. As the Nobel laureate Kenneth Arrow observed, “Virtually every commercial transaction has within itself an element of trust.”

The parade of financiers accused of misdeeds, booted from the executive suite and even occasionally jailed, is undermining this essential element. Have corporations lost whatever ethical compass they once had? Or does it just look that way because we are paying more attention than we used to?

This is hard to answer because fraud and corruption are impossible to measure precisely. Perpetrators understandably do their best to hide the dirty deeds from public view. And public perceptions of fraud and corruption are often colored by people’s sense of dissatisfaction with their lives.

Last year, the economists Justin Wolfers and Betsey Stevenson from the University of Pennsylvania published a study suggesting that trust in government and business falls when unemployment rises. “Much of the recent decline in confidence — particularly in the financial sector — may simply be a standard response to a cyclical downturn,” they wrote.

And waves of mistrust can spread broadly. After years of dismal employment prospects, Americans are losing trust in a broad range of institutions, including Congress, the Supreme Court, the presidency, public schools, labor unions and the church.

Corporate wrongdoing may be cyclical, too. Fraud is probably more lucrative, as well as easier to hide, amid the general prosperity of economic booms. And the temptation to bend the rules is probably highest toward the end of an economic upswing, when executives must be the most creative to keep the stream of profits rolling in.

The most toxic, no-doc, reverse amortization, liar loans flourished toward the end of the housing bubble. And we typically discover fraud only after the booms have turned to bust. As Warren Buffett famously said, “You only find out who is swimming naked when the tide goes out.”

Company executives are paid to maximize profits, not to behave ethically. Evidence suggests that they behave as corruptly as they can, within whatever constraints are imposed by law and reputation. In 1977, the United States Congress passed the Foreign Corrupt Practices Act, to stop the rampant practice of bribing foreign officials. Business by American multinationals in the most corrupt countries dropped. But they didn’t stop bribing. And American companies have been lobbying against the law ever since.

Extrapolating from frauds that were uncovered during and after the dot-com bubble, the economists Luigi Zingales and Adair Morse of the University of Chicago and Alexander Dyck of the University of Toronto estimated conservatively that in any given year a fraud was being committed by 11 to 13 percent of the large companies in the country.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Mug shot of Charles Ponzi (March 3, 1882 – January 18, 1949). Charles Ponzi was born in Italy and became known as a swindler for his money scheme. His aliases include Charles Ponei, Charles P. Bianchi, Carl and Carlo. Courtesy of Wikipedia.[end-div]

London’s Telephone Box

London’s bright red telephone boxes (booths, for our readers in the United States) are as iconic and recognizable as the Queen or Big Ben looming over the Houses of Parliament. Once as ubiquitous as the distinctive helmet of the London Bobby (police officer), many of these red iron chambers have now been made redundant by mobile phones. As a result, BT has taken to auctioning some of its telephone boxes for a very good cause, ChildLine’s 25th anniversary — though not before each is painted or re-imagined by an artist or designer. Check out our five favorites below, and see all of BT’s colorful “Artboxes” here.

Accessorize

Proud of its London heritage, Accessorize’s ArtBox sports the brand’s trademark Union Jack design – customized and embellished in true Accessorize fashion.

Big Ben BT ArtBox

When Mandii first came to London from New Zealand, one of the first sights she wanted to see was Big Ben.

Peekaboo

Take a look and see what you find.

Evoking memories of the childhood game of hide-and-seek, ‘Peekaboo’ invites you to consider issues of loneliness and neglect, and the role of the ‘finder’, which can be attributed to ChildLine.

Slip

A phonebox troubled by a landslide. Just incredible.

Londontotem

Loving the block colours and character designs. Their jolly spirit is infectious, I mean, just look at their faces! The PhoneBox is like a mini street ornament in London, isn’t it? A proper little totem pole in its own right!

[div class=attrib]Read more about BT’s Artbox project after the jump.[end-div]

[div class=attrib]Images courtesy of BT.[end-div]

Yayoi Kusama: Connecting All the Dots

Yayoi Kusama, c1939 (left); Yayoi Kusama, 2000 (right)

The art establishment has Yayoi Kusama in its sights, again. Over the last 60 years Kusama has created and evolved a style that is all her own, best seen rather than discussed.

A recent exhibit of Kusama’s work in Brisbane featured “The obliteration room”. This wonderful, interactive exhibit was commissioned specifically for kids aged 1-101 years. The exhibit features a whitewashed room with simple furniture, fixtures and objects, all in white. The interactive — and fun — part features sheets of bright and colorful sticky dots given to each visitor. Armed with these dots, visitors are encouraged to place them anywhere and everywhere. Results below (including a few select dots courtesy of theDiagonal’s editor).

For an interesting timeline of her work, courtesy of the Queensland Art Gallery in Brisbane, Australia, follow this jump.

[div class=attrib]From the Telegraph:[end-div]

There are spots before my eyes. I am at the National Museum of Art in Osaka, Japan, where crowds are flocking to a big exhibition of Yayoi Kusama’s work. Dots are a recurring theme in her art, a visual representation of the hallucinations and anxiety attacks she has suffered from since childhood, so the show is dominated by giant red polka-dotted spheres, and a disorienting room in which huge white fibreglass tulips are covered in red dots – as are the white walls, ceiling and floor.

There’s one of her unsettling infinity mirror rooms, illuminated by seemingly endless floating dots of light, and a giant pumpkin crawling with a distinctive pattern of dots she calls Nerves. But unlike her retrospective at Tate Modern in London, which ran from February to June this year, the emphasis here is on her recent paintings: one long gallery is filled with monochrome works, another with paintings so bright they hurt the eyes. The same primitive, repetitive motifs occur in all of them: dots, eyes, faces, zigzag patterns, amoebic blobs and snakelike forms bristling with cilia.

The sheer number is overwhelming, dizzying. When she was based in New York, her phallus sculptures and naked hippie ‘happenings’ were seen as scandalous and shameful by many in her home country, but the scale of this show is an indication of her standing in Japan, where she is fast becoming a national treasure.

The next day, I am invited to Kusama’s studio in a backstreet of the Shinjuku area of Tokyo, a short walk away from her private room in Seiwa Hospital, a psychiatric unit where she has been a voluntary in-patient since 1977 and which she rarely leaves, except to work. Her studio is a cramped concrete and glass building, with cardboard boxes of supplies stacked up to the ceiling, the walls covered in racks of finished paintings, works in progress and blank canvases, a grey paint-spattered industrial carpet and a scruffy old office chair at the table where Kusama works under a glaring neon strip light.

She usually paints in comfortable pyjamas, one of her assistants tells me, her grey hair pulled up into a bun, but today she is upstairs having her hair and make-up done, ready to greet her guests.

When she finally comes down in the lift, a frail but colourful 83-year-old resplendent in a red wig and polka-dot ensemble, pushed in a polka-dotted wheelchair, she asks an assistant to show us some press cuttings of the Tate show, especially one from a paper from Matsumoto City, where she grew up. There’s something touching about this need to prove herself, but it’s also confusing – akin to J K Rowling showing off a review in The Gloucestershire Echo to verify that she is a published author.

Talking to Kusama can be a surreal experience. She is easily distracted, and although she lived in America for 20 years, she now speaks no English. She is surrounded by a team of assistants who translate for her, addressing her with respect as ‘sensei’ (‘master’ or ‘teacher’), and with whom she often seems to have long discussions before answering even the blandest questions. It’s hard to know what is being lost in translation, and what is down to the vagaries of age and health. But occasionally a question will engage her, and you’ll get a brief but fierce flash of the intelligence and focus she has so clearly poured into her work over the years.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Images: Yayoi Kusama, 1939 / Image courtesy: Ota Fine Arts, Tokyo / © Yayoi Kusama, Yayoi Kusama Studio Inc; Kusama, 2000 / Image courtesy: Ota Fine Arts, Tokyo / © Yayoi Kusama, Yayoi Kusama Studio Inc; theDiagonal / Queensland Art Gallery.[end-div]

Truthiness 101

Strangely and ironically, it takes a satirist to tell the truth; and of course, academics now study the phenomenon.

[div class=attrib]From Washington Post:[end-div]

Nation, our so-called universities are in big trouble, and not just because attending one of them leaves you with more debt than the Greek government. No, we’re talking about something even more unsettling: the academic world’s obsession with Stephen Colbert.

Last we checked, Colbert was a mere TV comedian, or a satirist if you want to get fancy about it. (And, of course, being college professors, they do.) He’s a TV star, like Donald Trump, only less of a caricature.

Yet ever since Colbert’s show, “The Colbert Report,” began airing on Comedy Central in 2005, these ivory-tower eggheads have been devoting themselves to studying all things Colbertian. They’ve sliced and diced his comic stylings more ways than a Ginsu knife. Every academic discipline — well, among the liberal arts, at least — seems to want a piece of him. Political science. Journalism. Philosophy. Race relations. Communications studies. Theology. Linguistics. Rhetoric.

There are dozens of scholarly articles, monographs, treatises and essays about Colbert, as well as books of scholarly articles, monographs and essays. A University of Oklahoma student even earned her doctorate last year by examining him and his “Daily Show” running mate Jon Stewart. Her dissertation was called “Political Humor and Third-Person Perception.”

The academic cult of Colbert (or is it “the cul of Colbert”?) is everywhere. Here’s a small sample …

• “Is Stephen Colbert America’s Socrates?,” chapter heading in “Stephen Colbert and Philosophy: I Am Philosophy (And So Can You!),” published by Open Court, 2009.

• “The Wørd Made Fresh: A Theological Exploration of Stephen Colbert,” published in Concepts (“an interdisciplinary journal of graduate studies”), Villanova University, 2010.

• “It’s All About Meme: The Art of the Interview and the Insatiable Ego of the Colbert Bump,” chapter heading in “The Stewart/Colbert Effect: Essays on the Real Impacts of Fake News,” published by McFarland Press, 2011.

• “The Irony of Satire: Political Ideology and the Motivation to See What You Want to See in The Colbert Report,” a 2009 study in the International Journal of Press/Politics that its authors described as an investigation of “biased message processing” and “the influence of political ideology on perceptions of Stephen Colbert.” After much study, the authors found “no significant difference between [conservatives and liberals] in thinking Colbert was funny.”

Colbert-ism has insinuated itself into the undergraduate curriculum, too.

Boston University has offered a seminar called “The Colbert Report: American Satire” for the past two years, which explores Colbert’s use of “syllogism, logical fallacy, burlesque, and travesty,” as lecturer Michael Rodriguez described it on the school’s Web site.

This fall, Towson University will roll out a freshman seminar on politics and popular culture, with Colbert as its focus.

All this for a guy who would undoubtedly mock-celebrate the serious study of himself.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Colbert Report. Courtesy of Business Insider / Comedy Central.[end-div]

Extreme Equals Happy, Moderate Equals Unhappy

[div class=attrib]From the New York Times:[end-div]

WHO is happier about life — liberals or conservatives? The answer might seem straightforward. After all, there is an entire academic literature in the social sciences dedicated to showing conservatives as naturally authoritarian, dogmatic, intolerant of ambiguity, fearful of threat and loss, low in self-esteem and uncomfortable with complex modes of thinking. And it was the candidate Barack Obama in 2008 who infamously labeled blue-collar voters “bitter,” as they “cling to guns or religion.” Obviously, liberals must be happier, right?

Wrong. Scholars on both the left and right have studied this question extensively, and have reached a consensus that it is conservatives who possess the happiness edge. Many data sets show this. For example, the Pew Research Center in 2006 reported that conservative Republicans were 68 percent more likely than liberal Democrats to say they were “very happy” about their lives. This pattern has persisted for decades. The question isn’t whether this is true, but why.

Many conservatives favor an explanation focusing on lifestyle differences, such as marriage and faith. They note that most conservatives are married; most liberals are not. (The percentages are 53 percent to 33 percent, according to my calculations using data from the 2004 General Social Survey, and almost none of the gap is due to the fact that liberals tend to be younger than conservatives.) Marriage and happiness go together. If two people are demographically the same but one is married and the other is not, the married person will be 18 percentage points more likely to say he or she is very happy than the unmarried person.

An explanation for the happiness gap more congenial to liberals is that conservatives are simply inattentive to the misery of others. If they recognized the injustice in the world, they wouldn’t be so cheerful. In the words of Jaime Napier and John Jost, New York University psychologists, in the journal Psychological Science, “Liberals may be less happy than conservatives because they are less ideologically prepared to rationalize (or explain away) the degree of inequality in society.” The academic parlance for this is “system justification.”

The data show that conservatives do indeed see the free enterprise system in a sunnier light than liberals do, believing in each American’s ability to get ahead on the basis of achievement. Liberals are more likely to see people as victims of circumstance and oppression, and doubt whether individuals can climb without governmental help. My own analysis using 2005 survey data from Syracuse University shows that about 90 percent of conservatives agree that “While people may begin with different opportunities, hard work and perseverance can usually overcome those disadvantages.” Liberals — even upper-income liberals — are a third less likely to say this.

So conservatives are ignorant, and ignorance is bliss, right? Not so fast, according to a study from the University of Florida psychologists Barry Schlenker and John Chambers and the University of Toronto psychologist Bonnie Le in the Journal of Research in Personality. These scholars note that liberals define fairness and an improved society in terms of greater economic equality. Liberals then condemn the happiness of conservatives, because conservatives are relatively untroubled by a problem that, it turns out, their political counterparts defined.

There is one other noteworthy political happiness gap that has gotten less scholarly attention than conservatives versus liberals: moderates versus extremists.

Political moderates must be happier than extremists, it always seemed to me. After all, extremists actually advertise their misery with strident bumper stickers that say things like, “If you’re not outraged, you’re not paying attention!”

But it turns out that’s wrong. People at the extremes are happier than political moderates. Correcting for income, education, age, race, family situation and religion, the happiest Americans are those who say they are either “extremely conservative” (48 percent very happy) or “extremely liberal” (35 percent). Everyone else is less happy, with the nadir at dead-center “moderate” (26 percent).

What explains this odd pattern? One possibility is that extremists have the whole world figured out, and sorted into good guys and bad guys. They have the security of knowing what’s wrong, and whom to fight. They are the happy warriors.

Whatever the explanation, the implications are striking. The Occupy Wall Street protesters may have looked like a miserable mess. In truth, they were probably happier than the moderates making fun of them from the offices above. And none, it seems, are happier than the Tea Partiers, many of whom cling to guns and faith with great tenacity. Which some moderately liberal readers of this newspaper might find quite depressing.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Psychology Today.[end-div]

Famous Artworks Inspired by Other Famous Works

The Garden of Earthly Delights. Hieronymus Bosch.

The Tilled Field. Joan Miró.

[div class=attrib]From Flavorwire:[end-div]

We tend to think of appropriation as a postmodern thing, with artists in all media drawing on, referring to, and mashing up the most influential works of the past. But we forget that this has been happening for centuries — millennia, actually — as Renaissance painters paid tribute to Greek art, ideas circulated within the 19th-century French art scene, and Dada hijacked the course of art history, mocking and inverting everything that came before it. After the jump, we round up some of the best, most famous, and all-around strangest artworks inspired by other artworks. Some are homages, some are parodies, some are responses, and a few seem to function as all three.

Joan Miró’s The Tilled Field, inspired by Hieronymus Bosch’s The Garden of Earthly Delights

The resemblance between Joan Miró’s Surrealist painting and Bosch’s Early Netherlandish triptych may not be as clear as the parallels between some of the other works on this list, but when you know what to look for, the resemblance is certainly there. Besides the colors, which do echo The Garden of Earthly Delights, Miró placed in his painting many objects that appear in Bosch’s — crudely sexualized figures, disembodied ears, flocks of birds. Although the styles are different, both have the same busy, chaotic energy.

[div class=attrib]More from this top 10 list after the jump.[end-div]

Resurgence of Western Marxism

The death-knell for Western capitalism has yet to sound. However, increasing economic turmoil, continued shenanigans in the financial industry, burgeoning inequity and acute global political unease are combining to undermine the appeal of capitalism to a growing number of young people. Welcome to Marxism 2.012.

[div class=attrib]From the Guardian:[end-div]

Class conflict once seemed so straightforward. Marx and Engels wrote in the second best-selling book of all time, The Communist Manifesto: “What the bourgeoisie therefore produces, above all, are its own grave-diggers. Its fall and the victory of the proletariat are equally inevitable.” (The best-selling book of all time, incidentally, is the Bible – it only feels like it’s 50 Shades of Grey.)

Today, 164 years after Marx and Engels wrote about grave-diggers, the truth is almost the exact opposite. The proletariat, far from burying capitalism, are keeping it on life support. Overworked, underpaid workers ostensibly liberated by the largest socialist revolution in history (China’s) are driven to the brink of suicide to keep those in the west playing with their iPads. Chinese money bankrolls an otherwise bankrupt America.

The irony is scarcely wasted on leading Marxist thinkers. “The domination of capitalism globally depends today on the existence of a Chinese Communist party that gives de-localised capitalist enterprises cheap labour to lower prices and deprive workers of the rights of self-organisation,” says Jacques Rancière, the French Marxist thinker and professor of philosophy at the University of Paris VIII. “Happily, it is possible to hope for a world less absurd and more just than today’s.”

That hope, perhaps, explains another improbable truth of our economically catastrophic times – the revival in interest in Marx and Marxist thought. Sales of Das Kapital, Marx’s masterpiece of political economy, have soared ever since 2008, as have those of The Communist Manifesto and the Grundrisse (or, to give it its English title, Outlines of the Critique of Political Economy). Their sales rose as British workers bailed out the banks to keep the degraded system going and the snouts of the rich firmly in their troughs while the rest of us struggle in debt, job insecurity or worse. There’s even a Chinese theatre director called He Nian who capitalised on Das Kapital’s renaissance to create an all-singing, all-dancing musical.

And in perhaps the most lovely reversal of the luxuriantly bearded revolutionary theorist’s fortunes, Karl Marx was recently chosen from a list of 10 contenders to appear on a new issue of MasterCard by customers of German bank Sparkasse in Chemnitz. In communist East Germany from 1953 to 1990, Chemnitz was known as Karl-Marx-Stadt. Clearly, more than two decades after the fall of the Berlin Wall, the former East Germany hasn’t airbrushed its Marxist past. In 2008, Reuters reports, a survey of east Germans found 52% believed the free-market economy was “unsuitable” and 43% said they wanted socialism back. Karl Marx may be dead and buried in Highgate cemetery, but he’s alive and well among credit-hungry Germans. Would Marx have appreciated the irony of his image being deployed on a card to get Germans deeper in debt? You’d think.

Later this week in London, several thousand people will attend Marxism 2012, a five-day festival organised by the Socialist Workers’ Party. It’s an annual event, but what strikes organiser Joseph Choonara is how, in recent years, many more of its attendees are young. “The revival of interest in Marxism, especially for young people comes because it provides tools for analysing capitalism, and especially capitalist crises such as the one we’re in now,” Choonara says.

There has been a glut of books trumpeting Marxism’s relevance. English literature professor Terry Eagleton last year published a book called Why Marx Was Right. French Maoist philosopher Alain Badiou published a little red book called The Communist Hypothesis with a red star on the cover (very Mao, very now) in which he rallied the faithful to usher in the third era of the communist idea (the previous two having gone from the establishment of the French Republic in 1792 to the massacre of the Paris communards in 1871, and from 1917 to the collapse of Mao’s Cultural Revolution in 1976). Isn’t this all a delusion?

Aren’t Marx’s venerable ideas as useful to us as the hand loom would be to shoring up Apple’s reputation for innovation? Isn’t the dream of socialist revolution and communist society an irrelevance in 2012? After all, I suggest to Rancière, the bourgeoisie has failed to produce its own gravediggers. Rancière refuses to be downbeat: “The bourgeoisie has learned to make the exploited pay for its crisis and to use them to disarm its adversaries. But we must not reverse the idea of historical necessity and conclude that the current situation is eternal. The gravediggers are still here, in the form of workers in precarious conditions like the over-exploited workers of factories in the far east. And today’s popular movements – Greece or elsewhere – also indicate that there’s a new will not to let our governments and our bankers inflict their crisis on the people.”

That, at least, is the perspective of a seventysomething Marxist professor. What about younger people of a Marxist temper? I ask Jaswinder Blackwell-Pal, a 22-year-old who has just finished her BA in English and drama at Goldsmiths College, London, why she considers Marxist thought still relevant. “The point is that younger people weren’t around when Thatcher was in power or when Marxism was associated with the Soviet Union,” she says. “We tend to see it more as a way of understanding what we’re going through now. Think of what’s happening in Egypt. When Mubarak fell it was so inspiring. It broke so many stereotypes – democracy wasn’t supposed to be something that people would fight for in the Muslim world. It vindicates revolution as a process, not as an event. So there was a revolution in Egypt, and a counter-revolution and a counter-counter revolution. What we learned from it was the importance of organisation.”

This, surely, is the key to understanding Marxism’s renaissance in the west: for younger people, it is untainted by association with Stalinist gulags. For younger people too, Francis Fukuyama’s triumphalism in his 1992 book The End of History – in which capitalism seemed incontrovertible, its overthrow impossible to imagine – exercises less of a choke-hold on their imaginations than it does on those of their elders.

Blackwell-Pal will be speaking Thursday on Che Guevara and the Cuban revolution at the Marxism festival. “It’s going to be the first time I’ll have spoken on Marxism,” she says nervously. But what’s the point thinking about Guevara and Castro in this day and age? Surely violent socialist revolution is irrelevant to workers’ struggles today? “Not at all!” she replies. “What’s happening in Britain is quite interesting. We have a very, very weak government mired in in-fighting. I think if we can really organise we can oust them.” Could Britain have its Tahrir Square, its equivalent to Castro’s 26th of July Movement? Let a young woman dream. After last year’s riots and today with most of Britain alienated from the rich men in its government’s cabinet, only a fool would rule it out.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Portrait of Karl Marx. Courtesy of International Institute of Social History in Amsterdam, Netherlands / Wikipedia.[end-div]

A Different Kind of Hotel

Bored of the annual family trip to Disneyland? Tired of staying in a suite hotel that still offers muzak in the lobby, floral motifs on the walls, and ashtrays and saccharin packets next to the rickety minibar? Well, leaf through this list of 10 exotic and gorgeous hotels and start planning your next real escape today.

Wadi Rum Desert Lodge – The Valley of the Moon, Jordan.

[div class=attrib]From Flavorwire:[end-div]

A Backward Glance, Pulitzer Prize-winning author Edith Wharton’s gem of an autobiography, is highbrow beach reading at its very best. In the memoir, she recalls time spent with her bff traveling buddy, Henry James, and quotes his arcadian proclamation, “summer afternoon — summer afternoon; to me those have always been the two most beautiful words in the English language.” Maybe so in the less than industrious heyday of inherited wealth, but in today’s world where most people work all day for a living, those two words just don’t have the same appeal as our two favorite words: summer getaway.

Like everyone else in our overworked and overheated city, rest and relaxation are all we can think about — especially on a hot Friday afternoon like this. In considering options for our celebrated summer respite, we thought we’d take a virtual gander to check out alternatives to the usual Hamptons summer share. From a treehouse where sloths join you for morning coffee to a giant sandcastle, click through to see some of the most unusual summer getaway destinations in the world.

[div class=attrib]See more stunning hotels after the jump.[end-div]

Solar Tornadoes

No, solar tornadoes are not another manifestation of our slowly warming planet. Rather, these phenomena are believed to explain why the outer reaches of the solar atmosphere are so much hotter than its surface.

[div class=attrib]From ars technica:[end-div]

One of the abiding mysteries surrounding our Sun is understanding how the corona gets so hot. The Sun’s surface, which emits almost all the visible light, is about 5,800 kelvins. The surrounding corona rises to over a million kelvins, but the heating process has not been identified. Most solar physicists suspect the process is magnetic, since the strong magnetic fields at the Sun’s surface drive much of the solar weather (including sunspots, coronal loops, prominences, and mass ejections). However, the diffuse solar atmosphere is magnetically too quiet on the large scales. The recent discovery of atmospheric “tornadoes”—swirls of gas over a thousand kilometers in diameter above the Sun’s surface—may provide a possible answer.

As described in Nature, these vortices occur in the chromosphere (the layer of the Sun’s atmosphere below the corona) and they are common: there are about 10,000 swirls in evidence at any given time. Sven Wedemeyer-Böhm and colleagues identified the vortices using NASA’s Solar Dynamics Observatory (SDO) spacecraft and the Swedish Solar Telescope (SST). They measured the shape of the swirls as a function of height in the atmosphere, determining they grow wider at higher elevations, with the whole structure aligned above a concentration of the magnetic field on the Sun’s surface. Comparing these observations to computer simulations, the authors determined the vortices could be produced by a magnetic vortex exerting pressure on the gas in the atmosphere, accelerating it along a spiral trajectory up into the corona. Such acceleration could bring about the incredibly high temperatures observed in the Sun’s outer atmosphere.

The Sun’s atmosphere is divided into three major regions: the photosphere, the chromosphere, and the corona. The photosphere is the visible bit of the Sun, what we typically think of as the “surface.” It exhibits the behavior of rising gas and photons from the solar interior, as well as magnetic phenomena such as sunspots. The chromosphere is far less dense but hotter; the corona (“crown”) is still hotter and less dense, making an amorphous cloud around the sphere of the Sun. The chromosphere and corona are not seen without special equipment (except during total solar eclipses), but they can be studied with dedicated solar observatories.

To crack the problem of the super-hot corona, the researchers focused their attention on the chromosphere. Using data from SDO and SST, they measured the motion of various elements in the Sun’s atmosphere (iron, calcium, and helium) via the Doppler effect. These different gases all exhibited vortex behavior, aligned with the same spot on the photosphere. The authors identified 14 vortices during a single 55-minute observing run, each lasting an average of about 13 minutes. Based on these statistics, they determined the Sun should have at least 11,000 vortices on its surface at any given time, at least during periods of low sunspot activity.
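
The jump from 14 observed vortices to 11,000 Sun-wide is back-of-the-envelope scaling: estimate how many vortices are visible at once in the telescope’s small field of view, then multiply by the ratio of the Sun’s surface area to that field. Here is a minimal sketch of that arithmetic in Python; the field-of-view size is my assumption (roughly that of a solar telescope camera), not a figure from the article, so treat the output as illustrative only.

```python
import math

# Observed statistics, as reported above
n_events = 14             # vortices seen in one observing run
run_minutes = 55.0        # length of the run
mean_life_minutes = 13.0  # average vortex lifetime

# Average number of vortices visible simultaneously in the field of view
simultaneous = n_events * mean_life_minutes / run_minutes  # ~3.3

# ASSUMPTION: a ~60 x 60 arcsecond field of view; at the Sun's distance,
# 1 arcsecond spans roughly 725 km on the solar surface.
fov_km = 60 * 725.0
fov_area = fov_km ** 2                   # ~1.9e9 km^2

# Total solar surface area (radius ~696,000 km)
sun_area = 4 * math.pi * 696_000.0 ** 2  # ~6.1e12 km^2

estimate = simultaneous * sun_area / fov_area
print(f"~{estimate:,.0f} vortices at any given time")  # on the order of 10,000
```

With these assumed numbers the estimate lands near the 11,000 figure quoted above, which suggests the published count rests on a scaling of exactly this kind.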

Due to the different wavelengths of light the observers used, they were able to map the shape and speed of the vortices as a function of height in the chromosphere. They found the familiar tornado shape: tapered at the base, widening at the top, reaching diameters of 1500 km. Each vortex was aligned along a single axis over a bright spot in the photosphere, which is the sign of a concentration of magnetic field lines.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A giant solar tornado from last fall, large enough to swallow up 5 planet Earths, is the first of its kind caught on film, March 6, 2012. Courtesy of Slate / NASA / Solar Dynamics Observatory (SDO).[end-div]

Busyness As Chronic Illness

Apparently, being busy alleviates the human existential threat. So, if your roughly 16 or more hours of wakefulness each day are crammed with memos, driving, meetings, widgets, calls, charts, quotas, angry customers, school lunches, deciding, reports, bank statements, kids, budgets, bills, baking, making, fixing, cleaning and mad bosses, then your life must be meaningful, right?

Think again.

Author Tim Kreider muses below on this chronic state of affairs, and touches a nerve when he suggests that, “I can’t help but wonder whether all this histrionic exhaustion isn’t a way of covering up the fact that most of what we do doesn’t matter.”

[div class=attrib]From the New York Times:[end-div]

If you live in America in the 21st century you’ve probably had to listen to a lot of people tell you how busy they are. It’s become the default response when you ask anyone how they’re doing: “Busy!” “So busy.” “Crazy busy.” It is, pretty obviously, a boast disguised as a complaint. And the stock response is a kind of congratulation: “That’s a good problem to have,” or “Better than the opposite.”

Notice it isn’t generally people pulling back-to-back shifts in the I.C.U. or commuting by bus to three minimum-wage jobs  who tell you how busy they are; what those people are is not busy but tired. Exhausted. Dead on their feet. It’s almost always people whose lamented busyness is purely self-imposed: work and obligations they’ve taken on voluntarily, classes and activities they’ve “encouraged” their kids to participate in. They’re busy because of their own ambition or drive or anxiety, because they’re addicted to busyness and dread what they might have to face in its absence.

Almost everyone I know is busy. They feel anxious and guilty when they aren’t either working or doing something to promote their work. They schedule in time with friends the way students with 4.0 G.P.A.’s  make sure to sign up for community service because it looks good on their college applications. I recently wrote a friend to ask if he wanted to do something this week, and he answered that he didn’t have a lot of time but if something was going on to let him know and maybe he could ditch work for a few hours. I wanted to clarify that my question had not been a preliminary heads-up to some future invitation; this was the invitation. But his busyness was like some vast churning noise through which he was shouting out at me, and I gave up trying to shout back over it.

Even children are busy now, scheduled down to the half-hour with classes and extracurricular activities. They come home at the end of the day as tired as grown-ups. I was a member of the latchkey generation and had three hours of totally unstructured, largely unsupervised time every afternoon, time I used to do everything from surfing the World Book Encyclopedia to making animated films to getting together with friends in the woods to chuck dirt clods directly into one another’s eyes, all of which provided me with important skills and insights that remain valuable to this day. Those free hours became the model for how I wanted to live the rest of my life.

The present hysteria is not a necessary or inevitable condition of life; it’s something we’ve chosen, if only by our acquiescence to it. Not long ago I  Skyped with a friend who was driven out of the city by high rent and now has an artist’s residency in a small town in the south of France. She described herself as happy and relaxed for the first time in years. She still gets her work done, but it doesn’t consume her entire day and brain. She says it feels like college — she has a big circle of friends who all go out to the cafe together every night. She has a boyfriend again. (She once ruefully summarized dating in New York: “Everyone’s too busy and everyone thinks they can do better.”) What she had mistakenly assumed was her personality — driven, cranky, anxious and sad — turned out to be a deformative effect of her environment. It’s not as if any of us wants to live like this, any more than any one person wants to be part of a traffic jam or stadium trampling or the hierarchy of cruelty in high school — it’s something we collectively force one another to do.

Busyness serves as a kind of existential reassurance, a hedge against emptiness; obviously your life cannot possibly be silly or trivial or meaningless if you are so busy, completely booked, in demand every hour of the day. I once knew a woman who interned at a magazine where she wasn’t allowed to take lunch hours out, lest she be urgently needed for some reason. This was an entertainment magazine whose raison d’être was obviated when “menu” buttons appeared on remotes, so it’s hard to see this pretense of indispensability as anything other than a form of institutional self-delusion. More and more people in this country no longer make or do anything tangible; if your job wasn’t performed by a cat or a boa constrictor in a Richard Scarry book I’m not sure I believe it’s necessary. I can’t help but wonder whether all this histrionic exhaustion isn’t a way of covering up the fact that most of what we do doesn’t matter.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Entrepreneur.com.[end-div]

First Artists: Neanderthals or Homo Sapiens?

The recent finding in a Spanish cave of a painted “red dot” dating from around 40,800 years ago suggests that our Neanderthal cousins may have beaten our species to claim the prize of “first artist”. Yet, evidence remains scant, and even if this were proven to be the case, we Homo sapiens can certainly lay claim to taking it beyond a “red dot” and making art our very own (and much else too).

[div class=attrib]From the Guardian:[end-div]

Why do Neanderthals so fascinate Homo sapiens? And why are we so keen to exaggerate their virtues?

It is political correctness gone prehistoric. At every opportunity, people rush to attribute “human” virtues to this extinct human-like species. The latest generosity is to credit them with the first true art.

A recent redating of cave art in Spain has revealed the oldest paintings in Europe. A red dot in the cave El Castillo has now been dated at 40,800 years ago – considerably older than the cave art of Chauvet in France and contemporary with the arrival of the very first “modern humans”, Homo sapiens, in Europe.

This raises two possibilities, point out the researchers. Either the new humans from Africa started painting in caves the moment they entered Europe, or painting was already being done by the Neanderthals who were at that moment the most numerous relatives of modern humans on the European continent. One expert confesses to a “hunch” – which he acknowledges cannot be proven as things stand – that Neanderthals were painters.

That hunch goes against the weight of the existing evidence. Of course that hasn’t stopped it dominating all reports of the story: as far as media impressions go, the Neanderthals were now officially the first artists. Yet nothing of the sort has been proven, and plenty of evidence suggests that the traditional view is still far more likely.

In this view, the precocious development of art in ice age Europe marks out the first appearance of modern human consciousness, the intellectual birth of our species, the hand of Homo sapiens making its mark.

One crucial piece of evidence of where art came from is a piece of red ochre, engraved with abstract lines, that was discovered a decade ago in Blombos cave in South Africa. It is at least 70,000 years old and the oldest unmistakable artwork ever found. It is also a tool to make more art: ochre was great for making red marks on stone. It comes from Africa, where modern humans evolved, and reveals that when Homo sapiens made the move into Europe, our species could already draw on a long legacy of drawing and engraving. In fact, the latest finds at Blombos include a complete painting kit.

In other words, what is so surprising about the idea that Homo sapiens started to apply these skills immediately on discovering the caves of ice age Europe? It has to be more likely, on the face of it, than assuming these early Spanish images are by Neanderthals in the absence of any other solid evidence of paintings by them.

For, moving forward a few thousand years, the paintings of Chauvet and other French caves are certainly by us, Homo sapiens. And they remind us why this first art is so exciting and important: modern humans did not just do dots and handprints but magnificent, realistic portraits of animals. Their art is so superb in quality that it proves the existence of a higher mind, the capacity to create civilisation.

Is it possible that Neanderthals also used pigment to colour walls and also had the mental capacity to invent art? Of course it is, but the evidence at the moment still massively suggests art is a uniquely human achievement, unique, that is, to us – and fundamental to who we are.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A hand stencil in El Castillo cave, Spain, has been dated to earlier than 37,300 years ago and a red dot to earlier than 40,600 years ago, making them the oldest cave paintings in Europe. Courtesy of New Scientist / Pedro Saura.[end-div]

Ignorance [is] the Root and Stem of All Evil

Hailing from Classical Greece of around 2,400 years ago, Plato has given our contemporary world many important intellectual gifts. His broad interests in justice, mathematics, virtue, epistemology, rhetoric and art laid the foundations for Western philosophy and science. Yet in his quest for deeper and broader knowledge he also had some important things to say about ignorance.

Massimo Pigliucci over at Rationally Speaking gives us his take on Platonic Ignorance. His caution is appropriate: in this age of information overload and extreme politicization it is ever more important for us to realize and acknowledge our own ignorance. Spreading falsehoods and characterizing opinion as fact to others — transferred ignorance — is rightly identified by Plato as a moral failing. In his own words (of course translated), “Ignorance [is] the Root and Stem of All Evil”.

[div class=attrib]From Rationally Speaking:[end-div]

Plato famously maintained that knowledge is “justified true belief,” meaning that to claim the status of knowledge our beliefs (say, that the earth goes around the sun, rather than the other way around) have to be both true (to the extent this can actually be ascertained) and justified (i.e., we ought to be able to explain to others why we hold such beliefs, otherwise we are simply repeating the — possibly true — beliefs of someone else).

It is the “justified” part that is humbling, since a moment’s reflection will show that a large number of things we think we know we actually cannot justify, which means that we are simply trusting someone else’s authority on the matter. (Which is okay, as long as we realize and acknowledge that to be the case.)

I was recently intrigued, however, not by Plato’s well known treatment of knowledge, but by his far less discussed views on the opposite of knowledge: ignorance. The occasion for these reflections was a talk by Katja Maria Vogt of Columbia University, delivered at CUNY’s Graduate Center, where I work. Vogt began by recalling the ancient skeptics’ attitude toward ignorance, as a “conscious positive stand,” meaning that skepticism is founded on one’s realization of his own ignorance. In this sense, of course, Socrates’ contention that he knew nothing becomes neither a self-contradiction (isn’t he saying that he knows that he knows nothing, thereby acknowledging that he knows something?), nor false modesty. Socrates was simply saying that he was aware of having no expertise while at the same time devoting his life to the quest for knowledge.

Vogt was particularly interested in Plato’s concept of “transferred ignorance,” which the ancient philosopher singled out as morally problematic. Transferred ignorance is the case when someone imparts “knowledge” that he is not aware is in fact wrong. Let us say, for instance, that I tell you that vaccines cause autism, and I do so on the basis of my (alleged) knowledge of biology and other pertinent matters, while, in fact, I am no medical researcher and have only vague notions of how vaccines actually work (i.e., imagine my name is Jenny McCarthy).

The problem, for Plato, is that in a sense I would be thinking of myself as smarter than I actually am, which of course carries a feeling of power over others. I wouldn’t simply be mistaken in my beliefs, I would be mistaken in my confidence in those beliefs. It is this willful ignorance (after all, I did not make a serious attempt to learn about biology or medical research) that carries moral implications.

So for Vogt the ancient Greeks distinguished between two types of ignorance: the self-aware, Socratic one (which is actually good) and the self-oblivious one of the overconfident person (which is bad). Need I point out that far too little of the former and too much of the latter permeate current political and social discourse? Of course, I’m sure a historian could easily come up with a plethora of examples of bad ignorance throughout human history, all the way back to the beginning of recorded time, but it does strike me that the increasingly fact-free public discourse on issues varying from economic policies to scientific research has brought Platonic transferred ignorance to never before achieved peaks (or, rather, valleys).

And I suspect that this is precisely because of the lack of appreciation of the moral dimension of transferred or willful ignorance. When politicians or commentators make up “facts” — or disregard actual facts to serve their own ideological agendas — they sometimes seem genuinely convinced that they are doing something good, at the very least for their constituents, and possibly for humanity at large. But how can it be good — in the moral sense — to make false knowledge one’s own, and even to actively spread it to others?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Socrates and Plato in a medieval picture. Courtesy of Wikipedia.[end-div]

Have a Laugh, Blame Twitter

Correlate two sets of totally independent statistics and you get to blame Twitter for most, if not all, of the world’s ills. That’s what Tim Cooley has done with this funny and informative #Blame Twitter infographic below.

Of course, even though the numbers are all verified and trusted, causation is another matter entirely. So, while 144,595 people die each day (on average), it is not (yet) as a result of using Twitter, and while our planet loses 1 hectare of forest for every 18,000 tweets, it’s not the endless twittering that is causing deforestation.
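
The gag is easy to reproduce: divide any global rate by the global tweet rate and you get an arithmetically valid but causally meaningless “per-tweet” statistic. A tiny illustration in Python; the daily tweet volume is my rough assumption for the era, not a number taken from the infographic.

```python
# Two independent global rates
deaths_per_day = 144_595      # average daily deaths, as cited above
tweets_per_day = 400_000_000  # ASSUMPTION: ballpark daily tweet volume circa 2012

# The division is perfectly legal arithmetic, but the resulting
# "statistic" implies a causal link that simply is not there.
tweets_per_death = tweets_per_day / deaths_per_day
print(f"one death for every {tweets_per_death:,.0f} tweets")  # ~2,800
```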

[div class=attrib]Infographic courtesy of Tim Cooley.[end-div]

Child Mutilation and Religious Ritual

A court in Germany recently banned circumcision at birth for religious reasons. Quite understandably, the court saw that this practice violates bodily integrity. Aside from being morally repugnant to many theists and non-believers alike, the practice inflicts pain. So, why do some religions continue to circumcise children?

[div class=attrib]From Slate:[end-div]

A German court ruled on Tuesday that parents may not circumcise their sons at birth for religious reasons, because the procedure violates the child’s right to bodily integrity. Both Muslims and Jews circumcise their male children. Why is Christianity the only Abrahamic religion that doesn’t encourage circumcision?

Because Paul believed faith was more important than foreskin. Shortly after Jesus’ death, his followers had a disagreement over the nature of his message. Some acolytes argued that he offered salvation through Judaism, so gentiles who wanted to join his movement should circumcise themselves like any other Jew. The apostle Paul, however, believed that faith in Jesus was the only requirement for salvation. Paul wrote that Jews who believed in Christ could go on circumcising their children, but he urged gentiles not to circumcise themselves or their sons, because trying to mimic the Jews represented a lack of faith in Christ’s ability to save them. By the time that the Book of Acts was written in the late first or early second century, Paul’s position seems to have become the dominant view of Christian theologians. Gentiles were advised to follow only the limited set of laws—which did not include circumcision—that God gave to Noah after the flood rather than the full panoply of rules followed by the Jews.

Circumcision was uniquely associated with Jews in first-century Rome, even though other ethnic and religious groups practiced it. Romans wrote satirical poems mocking the Jews for taking a day off each week, refusing to eat pork, worshipping a sky god, and removing their sons’ foreskin. It is, therefore, neither surprising that early Christian converts sought advice on whether to adopt the practice of circumcision nor that Paul made it the focus of several of his famous letters.

The early compromise that Paul struck—ethnic Jewish Christians should circumcise, while Jesus’ gentile followers should not—held until Christianity became a legal religion in the fourth century. At that time, the two religions split permanently, and it became something of a heresy to suggest that one could be both Jewish and Christian. As part of the effort to distinguish the two religions, circumcisions became illegal for Christians, and Jews were forbidden from circumcising their slaves.

Although the church officially renounced religious circumcision around 300 years after Jesus’s death, Christians long maintained a fascination with it. In the 600s, Christians began celebrating the day Jesus was circumcised. According to medieval Christian legend, an angel bestowed Jesus’ foreskin upon Emperor Charlemagne in the Church of the Holy Sepulchre, where Christ was supposedly buried. Coptic Christians and a few other Christian groups in Africa resumed religious circumcisions long after their European colleagues abandoned it.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Apostle Paul. Courtesy of Wikipedia.[end-div]

100 Million Year Old Galactic Echo

Astronomers have found what they believe to be the echo of a smaller galaxy colliding with our own Milky Way galaxy some 100 million years ago.

[div class=attrib]From Symmetry Magazine:[end-div]

Our galaxy, the Milky Way, is a large spiral galaxy surrounded by dozens of smaller satellite galaxies. Scientists have long theorized that occasionally these satellites will pass through the disk of the Milky Way, perturbing both the satellite and the disk. A team of astronomers from Canada and the United States have discovered what may well be the smoking gun of such an encounter, one that occurred close to our position in the galaxy and relatively recently, at least in the cosmological sense.

“We have found evidence that our Milky Way had an encounter with a small galaxy or massive dark matter structure perhaps as recently as 100 million years ago,” said Larry Widrow, professor at Queen’s University in Canada. “We clearly observe unexpected differences in the Milky Way’s stellar distribution above and below the Galaxy’s midplane that have the appearance of a vertical wave — something that nobody has seen before.”

The discovery is based on observations of some 300,000 nearby Milky Way stars by the Sloan Digital Sky Survey. Stars in the disk of the Milky Way move up and down at a speed of about 20-30 kilometers per second while orbiting the center of the galaxy at a brisk 220 kilometers per second. Widrow and his four collaborators from the University of Kentucky, the University of Chicago and Fermi National Accelerator Laboratory have found that the positions and motions of these nearby stars weren’t quite as regular as previously thought.

“Our part of the Milky Way is ringing like a bell,” said Brian Yanny, of the Department of Energy’s Fermilab. “But we have not been able to identify the celestial object that passed through the Milky Way. It could have been one of the small satellite galaxies that move around the center of our galaxy, or an invisible structure such as a dark matter halo.”

Adds Susan Gardner, professor of physics at the University of Kentucky: “The perturbation need not have been a single isolated event in the past, and it may even be ongoing. Additional observations may well clarify its origin.”

When the collaboration started analyzing the SDSS data on the Milky Way, they noticed a small but statistically significant difference in the distribution of stars north and south of the Milky Way’s midplane. For more than a year, the team members explored various explanations of this north-south asymmetry, such as the effect of interstellar dust on distance determinations and the way the stars surveyed were selected. When those attempts failed, they began to explore the alternative explanation that the data was telling them something about recent events in the history of the Galaxy.

The scientists used computer simulations to explore what would happen if a satellite galaxy or dark matter structure passed through the disk of the Milky Way. The simulations indicate that over the next 100 million years or so, our galaxy will “stop ringing”: the north-south asymmetry will disappear and the vertical motions of stars in the solar neighborhood will revert back to their equilibrium orbits — unless we get hit again.
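
The 100-million-year timescale is no accident: it is comparable to the period of a star’s vertical oscillation through the disk, so a perturbation phase-mixes away after a few bounces. For small excursions the vertical motion is roughly harmonic with frequency sqrt(4·pi·G·rho), where rho is the local disk mass density. A minimal sketch of that estimate in Python; the density value is a standard textbook figure I am assuming, not one taken from the article.

```python
import math

# Gravitational constant in convenient galactic units: pc * (km/s)^2 / Msun
G = 4.301e-3

# ASSUMPTION: local disk mass density near the Sun, ~0.1 Msun per cubic parsec
rho = 0.1

# Harmonic approximation for motion perpendicular to the disk:
#   z''(t) = -nu^2 * z,  with  nu = sqrt(4 * pi * G * rho)
nu = math.sqrt(4 * math.pi * G * rho)   # (km/s) per pc

period_pc_per_kms = 2 * math.pi / nu    # in pc / (km/s)
period_myr = period_pc_per_kms * 0.978  # 1 pc/(km/s) is ~0.978 Myr

print(f"vertical oscillation period ~ {period_myr:.0f} Myr")  # ~84 Myr with these inputs
```

An oscillation period in the 80-90 million year range sits naturally alongside the quoted “next 100 million years or so” over which the ringing dies down.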

[div class=attrib]Read the entire article after the jump.[end-div]

How Do Startup Companies Succeed?

A view from Esther Dyson, one of the world’s leading digital technology entrepreneurs. She has served as an early investor in numerous startups, including Flickr, del.icio.us, ZEDO, and Medspace, and is currently focused on startups in medical technology and aviation.

[div class=attrib]From Project Syndicate:[end-div]

The most popular stories often seem to end at the beginning. “…and so Juan and Alice got married.” Did they actually live happily ever after? “He was elected President.” But how did the country do under his rule? “The entrepreneur got her startup funding.” But did the company succeed?

Let’s consider that last one. Specifically, what happens to entrepreneurs once they get their money? Everywhere I go – and I have been in Moscow, Libreville (Gabon), and Dublin in the last few weeks – smart people ask how to get companies through the next phase of growth. How can we scale entrepreneurship to the point that it has a measurable and meaningful impact on the economy?

The real impact of both Microsoft and Google is not on their shareholders, or even on the people that they employ directly, but on the millions of people whom they have made more productive. That argues for companies that solve real problems, rather than for yet another photo-sharing app for rich, appealing (to advertisers) people with time on their hands.

It turns out that money is rarely enough – not just that there is not enough of it, but that entrepreneurs need something else. They need advice, contacts, customers, and employees immersed in a culture of effectiveness to succeed. But they also have to create something of real value to have meaningful economic impact in the long term.

The easy, increasingly popular answer is accelerators, incubators, camps, weekends – a host of locations and events to foster the development of startups. But these are just buildings and conferences unless they include people who can help with the software – contacts, customers, and culture. The people in charge, from NGOs to government officials, have great ideas about structures – tax policy, official financing, etc. – while the entrepreneurs themselves are too busy running their companies to find out about these things.

But this week in Dublin, I found what we need: not policies or theories, but actual living examples. Not far from the fancy hotel at which I was staying, and across from Google’s modish Irish offices, sits a squat old warehouse with a new sign: Startupbootcamp. You enter through a side door, into a cavern full of sawdust and cheap furniture (plus a pool table and a bar, of course).

What makes this place interesting is its sponsor: venerable old IBM. The mission of Startupbootcamp Europe is not to celebrate entrepreneurs, or even to educate them, but to help them scale up to meaningful businesses. Their new products can use IBM’s and other mentors’ contacts with the much broader world, whether for strategic marketing alliances, the power of an IBM endorsement, or, ultimately, an acquisition.

I was invited by Martin Kelly, who represents IBM’s venture arm in Ireland. He introduced me to the manager of the place, Eoghan Jennings, and a bunch of seasoned executives.

There was a three-time entrepreneur, Conor Hanley, co-founder of BiancaMed (recently sold to Resmed), who now has a sleep-monitoring tool and an exciting distribution deal with a large company he can’t yet mention; Jim Joyce, a former sales executive for Schering Plough who is now running Point of Care, which helps clinicians to help patients to manage their own care after they leave hospital; and Johnny Walker, a radiologist whose company operates scanners in the field and interprets them through a network of radiologists worldwide. Currently, Walker’s company, Global Diagnostics, is focused on pre-natal care, but give him time.

These guys are not the “startups”; they are the mentors, carefully solicited by Kelly from within the tightly knit Irish business community. He knew exactly what he was looking for: “In Ireland, we have people from lots of large companies. Joyce, for example, can put a startup in touch with senior management from virtually any pharma company around the world. Hanley knows manufacturing and tech partners. Walker understands how to operate in rural conditions.”

According to Jennings, a former chief financial officer of Xing, Europe’s leading social network, “We spent years trying to persuade people that they had a problem we could solve; now I am working with companies solving problems that people know they have.”  And that usually involves more than an Internet solution; it requires distribution channels, production facilities, market education, and the like. Startupbootcamp’s next batch of startups, not coincidentally, will be in the health-care sector.

Each of the mentors can help a startup to go global. Precisely because the Irish market is so small, it’s a good place to find people who know how to expand globally. In Ireland right now, as in so many countries, many large companies are laying off people with experience. Not all of them have the makings of an entrepreneur. But most of them have skills worth sharing, whether it’s how to run a sales meeting, oversee a development project, or manage a database of customers.

[div class=attrib]Read the entire article after the jump.[end-div]

National Education Rankings: C-

One would believe that the most affluent and open country on the planet would have one of the best, if not the best, education systems. Yet, the United States of America distinguishes itself by being thoroughly mediocre in a ranking of developed nations in science, mathematics and reading. How can we make amends to our children?

[div class=attrib]From Slate:[end-div]

Take the 2009 PISA test, which assessed the knowledge of students from 65 countries and economies—34 of which are members of the development organization the OECD, including the United States—in math, science, and reading. Of the OECD countries, the United States came in 17th place in science literacy; of all countries and economies surveyed, it came in 23rd place. The U.S. score of 502 practically matched the OECD average of 501. That puts us firmly in the middle. Where we don’t want to be.

What do the leading countries do differently? To find out, Slate asked science teachers from five countries that are among the world’s best in science education—Finland, Singapore, South Korea, New Zealand, and Canada—how they approach their subject and the classroom. Their recommendations: Keep students engaged and make the science seem relevant.

Finland: “To Make Students Enjoy Chemistry Is Hard Work”

Finland was first among the 34 OECD countries in the 2009 PISA science rankings and second—behind mainland China—among all 65 nations and economies that took the test. Ari Myllyviita teaches chemistry and works with future science educators at the Viikki Teacher Training School of Helsinki University.

Finland’s National Core Curriculum is premised on the idea “that learning is a result of a student’s active and focused actions aimed to process and interpret received information in interaction with other students, teachers and the environment and on the basis of his or her existing knowledge structures.”

My conception of learning rests strongly on this citation from our curriculum. My aim is to support knowledge-building, socioculturally: to create socially supported activity in the student’s zone of proximal development (the area where a student needs some support to achieve the next level of understanding or skill). The student’s previous knowledge is the starting point, and then the learning is bound to the activity during lessons—experiments, simulations, and observing phenomena.

The National Core Curriculum also states, “The purpose of instruction in chemistry is to support development of students’ scientific thinking and modern worldview.” Our teaching is based on examination and observations of substances and chemical phenomena, their structures and properties, and reactions between substances. Through experiments and theoretical models, students are taught to understand everyday life and nature. In my classroom, I use discussion, lectures, demonstrations, and experimental work—quite often based on group work. Between lessons, I use social media and other information communication technologies to stay in touch with students.

In addition to the National Core Curriculum, my school has its own. They have the same bases, but our own curriculum is more concrete. Based on these, I write my course and lesson plans. Because of different learning styles, I use different kinds of approaches, sometimes theoretical and sometimes experimental. Always there are new concepts and perhaps new models to explain the phenomena or results.

To make students enjoy learning chemistry is hard work. I think that as a teacher, you have to love your subject and enjoy teaching even when there are sometimes students who don’t pay attention to you. But I get satisfaction when I can give a purpose for the future by being a supportive teacher.

New Zealand: “Students Disengage When a Teacher Is Simply Repeating Facts or Ideas”

New Zealand came in seventh place out of 65 in the 2009 PISA assessment. Steve Martin is head of junior science at Howick College. In 2010, he received the prime minister’s award for science teaching.

Science education is an important part of preparing students for their role in the community. Scientific understanding will allow them to engage in issues that concern them now and in the future, such as genetically modified crops. In New Zealand, science is also viewed as having a crucial role to play in the future of the economic health of the country. This can be seen in the creation of the “Prime Minister’s Science Prizes,” a program that identifies the nation’s leading scientists, emerging and future scientists, and science teachers.

The New Zealand Science Curriculum allows for flexibility depending on contextual factors such as school location, interests of students, and teachers’ specialization. The curriculum has the “Nature of Science” as its foundation, which supports students learning the skills essential to a scientist, such as problem-solving and effective communication. The Nature of Science refers to the skills required to work as a scientist, how to communicate science effectively through science-specific vocabulary, and how to participate in debates and issues with a scientific perspective.

School administrators support innovation and risk-taking by teachers, which fosters the “let’s have a go” attitude. In my own classroom, I utilize computer technology to create virtual science lessons that support and encourage students to think for themselves and learn at their own pace. Virtual Lessons are Web-based documents that support learning in and outside the classroom. They include support for students of all abilities by providing digital resources targeted at different levels of thinking. These could include digital flashcards that support vocabulary development, videos that explain the relationships between ideas or facts, and links to websites that allow students to create cartoon animations. The students are then supported by the use of instant messaging, online collaborative documents, and email so they can get support from their peers and myself at anytime. I provide students with various levels of success criteria, which are statements that students and teachers use to evaluate performance. In every lesson I provide the students with three different levels of success criteria, each providing an increase in cognitive demand. The following is an example based on the topic of the carbon cycle:
I can identify the different parts of the carbon cycle.
I can explain how all the parts interact with each other to form the carbon cycle.
I can predict the effect that removing one part of the carbon cycle has on the environment.

These provide challenge for all abilities and at the same time make it clear what students need to do to be successful. I value creativity and innovation, and this greatly influences the opportunities I provide for students.

My students learn to love to be challenged and to see that all ideas help develop greater understanding. Students value the opportunity to contribute to others’ understanding, and they disengage when a teacher is simply repeating facts or ideas.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Coloma 1914 Classroom. Courtesy of Coloma Convent School, Croydon UK.[end-div]

Persecution of Scientists: Old and New

The debate over the theory of evolution continues into the 21st century, particularly in societies with a religious bent, including the United States of America. Yet, while the theory and its corresponding evidence come under continuous attack from mostly religious apologists, we generally do not see scientists themselves persecuted for supporting evolution.

This cannot be said for climate scientists in Western countries, who, while not physically abused, tortured or imprisoned, do continue to be targets of verbal abuse and threats from corporate interests or dogmatic politicians and their followers. But, as we know, persecution of scientists for embodying new, and thus threatening, ideas has been with us since the dawn of the scientific age. In fact, this behavior has probably been with us since our tribal ancestors moved out of Africa.

So, it is useful to remind ourselves how far we have come and of the distance we still have to travel.

[div class=attrib]From Wired:[end-div]

Turing was famously chemically castrated after admitting to homosexual acts in the 1950s. He is one of a long line of scientists who have been persecuted for their beliefs or practices.

After admitting to “homosexual acts” in early 1952, Alan Turing was prosecuted and had to make the choice between a custodial sentence or chemical castration through hormone injections. Injections of oestrogen were intended to deal with “abnormal and uncontrollable” sexual urges, according to literature at the time.

He chose this option so that he could stay out of jail and continue his research, although his security clearance was revoked, meaning he could not continue with his cryptographic work. Turing experienced some disturbing side effects, including impotence, from the hormone treatment. Other known side effects include breast swelling, mood changes and an overall “feminization”. Turing completed his year of treatment without major incident. His medication was discontinued in April 1953 and the University of Manchester created a five-year readership position just for him, so it came as a shock when he committed suicide on 7 June, 1954.

Turing isn’t the only scientist to have been persecuted for his personal or professional beliefs or lifestyle. Here’s a list of other prominent scientific luminaries who have been punished throughout history.

Rhazes (865-925)
Muhammad ibn Zakariyā Rāzī, or Rhazes, was a medical pioneer from Baghdad who lived between 860 and 932 AD. He was responsible for introducing western teachings, rational thought and the works of Hippocrates and Galen to the Arabic world. One of his books, Continens Liber, was a compendium of everything known about medicine. The book made him famous, but offended a Muslim priest who ordered the doctor to be beaten over the head with his own manuscript, which caused him to go blind, preventing him from future practice.

Michael Servetus (1511-1553)
Servetus was a Spanish physician credited with discovering pulmonary circulation. He wrote a book, which outlined his discovery along with his ideas about reforming Christianity — it was deemed to be heretical. He escaped from Spain and the Catholic Inquisition but came up against the Protestant Inquisition in Switzerland, who held him in equal disregard. Under orders from John Calvin, Servetus was arrested, tortured and burned at the stake on the shores of Lake Geneva – copies of his book accompanied him for good measure.

Galileo Galilei (1564-1642)
The Italian astronomer and physicist Galileo Galilei was tried and convicted in 1633 for publishing his evidence that supported the Copernican theory that the Earth revolves around the Sun. His research was instantly criticized by the Catholic Church for going against the established scripture that places Earth and not the Sun at the center of the universe. Galileo was found “vehemently suspect of heresy” for his heliocentric views and was required to “abjure, curse and detest” his opinions. He was sentenced to house arrest, where he remained for the rest of his life and his offending texts were banned.

Henry Oldenburg (1619-1677)
Oldenburg founded the Royal Society in London in 1662. He sought high-quality scientific papers to publish. In order to do this he had to correspond with many foreigners across Europe, including the Netherlands and Italy. The sheer volume of his correspondence caught the attention of authorities, who arrested him as a spy. He was held in the Tower of London for several months.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Engraving of Galileo Galilei offering his telescope to three women (possibly Urania and attendants) seated on a throne; he is pointing toward the sky where some of his astronomical discoveries are depicted, 1655. Courtesy of Library of Congress.[end-div]

Higgs?

A week ago, on July 4, 2012, researchers at CERN told the world that they had found evidence of a new fundamental particle — the so-called Higgs boson, or something closely similar. If further particle collisions at CERN’s Large Hadron Collider uphold this finding over the coming years, it will rank as significant a discovery as that of the proton or the electromagnetic force. While practical application of this discovery, in our lifetimes at least, is likely to be scant, it undeniably furthers our quest to understand the underlying mechanism of our existence.

So where might this discovery lead next?

[div class=attrib]From the New Scientist:[end-div]

“As a layman, I would say, I think we have it,” said Rolf-Dieter Heuer, director general of CERN at Wednesday’s seminar announcing the results of the search for the Higgs boson. But when pressed by journalists afterwards on what exactly “it” was, things got more complicated. “We have discovered a boson – now we have to find out what boson it is,” he said cryptically. Eh? What kind of particle could it be if it isn’t the Higgs boson? And why would it show up right where scientists were looking for the Higgs? We asked scientists at CERN to explain.

If we don’t know the new particle is a Higgs, what do we know about it?
We know it is some kind of boson, says Vivek Sharma of CMS, one of the two Large Hadron Collider experiments that presented results on Wednesday. There are only two types of elementary particle in the standard model: fermions, which include electrons, quarks and neutrinos, and bosons, which include photons and the W and Z bosons. The Higgs is a boson – and we know the new particle is too because one of the things it decays into is a pair of high-energy photons, or gamma rays. According to the rules of mathematical symmetry, only a boson could decay into exactly two photons.

Anything else?
Another thing we can say about the new particle is that nothing yet suggests it isn’t a Higgs. The standard model, our leading explanation for the known particles and the forces that act on them, predicts the rate at which a Higgs of a given mass should decay into various particles. The rates of decay reported for the new particle yesterday are not exactly what would be predicted for its mass of about 125 gigaelectronvolts (GeV) – leaving the door open to more exotic stuff. “If there is such a thing as a 125 GeV Higgs, we know what its rate of decay should be,” says Sharma. But the decay rates are close enough for the differences to be statistical anomalies that will disappear once more data is taken. “There are no serious inconsistencies,” says Joe Incandela, head of CMS, who reported the results on Wednesday.

In that case, are the CERN scientists just being too cautious? What would be enough evidence to call it a Higgs boson?
As there could be many different kinds of Higgs bosons, there’s no straight answer. An easier question to answer is: what would make the new particle neatly fulfil the Higgs boson’s duty in the standard model? Number one is to give other particles mass via the Higgs field – an omnipresent entity that “slows” some particles down more than others, resulting in mass. Any particle that makes up this field must be “scalar”. Unlike a vector quantity such as a magnetic field or gravity, a scalar has no directionality. “Only a scalar boson fixes the problem,” says Oliver Buchmueller, also of CMS.

When will we know whether it’s a scalar boson?
By the end of the year, reckons Buchmueller, when at least one outstanding property of the new particle – its spin – should be determined. Scalars’ lack of directionality means they have spin 0. As the particle is a boson, we already know its spin is a whole number and as it decays into two photons, mathematical symmetry again dictates that the spin can’t be 1. Buchmueller says LHC researchers will be able to determine whether it has a spin of 0 or 2 by examining whether the Higgs’ decay particles shoot into the detector in all directions or with a preferred direction – the former would suggest spin 0. “Most people think it is a scalar, but it still needs to be proven,” says Buchmueller. Sharma is pretty sure it’s a scalar boson – that’s because it is more difficult to make a boson with spin 2. He adds that, although it is expected, confirmation that this is a scalar boson is still very exciting: “The beautiful thing is, if this turns out to be a scalar particle, we are seeing a new kind of particle. We have never seen a fundamental particle that is a scalar.”
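
The elimination logic packed into that answer is worth unpacking. Three constraints are in play: a fundamental particle is either a fermion (half-integer spin) or a boson (integer spin); the two-photon decay tags the new particle as a boson; and a massive particle decaying into two photons cannot have spin 1 (the Landau-Yang theorem, which is the “mathematical symmetry” referred to above). A toy sketch in Python, just to make the filtering explicit; the candidate list is mine, for illustration.

```python
# Candidate spins: half-integer values are fermions, integers are bosons
candidates = [0, 0.5, 1, 1.5, 2]

# Constraint 1: it decays to two photons, so it must be a boson (integer spin)
bosons = [j for j in candidates if j == int(j)]

# Constraint 2 (Landau-Yang theorem): a massive particle decaying into
# exactly two photons cannot have spin 1
allowed = [j for j in bosons if j != 1]

print(allowed)  # [0, 2] -- the two cases the angular-distribution test separates
```

That is why the measurement described above only needs to distinguish spin 0 from spin 2: every other value is already ruled out.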

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A typical candidate event including two high-energy photons whose energy (depicted by dashed yellow lines and red towers) is measured in the CMS electromagnetic calorimeter. The yellow lines are the measured tracks of other particles produced in the collision.[end-div]

Fifty Shades of Grey Matter: Now For Some Really Influential Books

While pop culture columnists, behavioral psychologists and literary gadflies debate the pros and cons of “Fifty Shades of Grey”, we look at some more notable, though in their time perhaps no less controversial, works. Notable in the sense that ideas from any of these books, whether you agree with them or not, have had a profound influence on our cultural, political, economic and scientific evolution.

Yet while all of them combined come nowhere close to matching the sado-masochistic pulp fiction’s 20 million copies sold so far, more than 1 million of them in just over 10 weeks, they do offer an enlightening counterbalance. So, if you need some fleeting titillation, by all means borrow “Fifty Shades…” from a friend or neighbor; why buy a copy when everybody else already has one? But then go to your local bookstore, or click over to Amazon, and purchase a handful from this list spanning 30 centuries: you will be reminded of our ongoing, if sometimes limited, intellectual progress as a species.

1    I Ching, Chinese classic texts
2    Hebrew Bible, Jewish scripture
3    Iliad and The Odyssey, Homer
4    Upanishads, Hindu scripture
5    The Way and Its Power, Lao-tzu
6    The Avesta, Zoroastrian scripture
7    Analects, Confucius
8    History of the Peloponnesian War, Thucydides
9    Works, Hippocrates
10    Works, Aristotle
11    History, Herodotus
12    The Republic, Plato
13    Elements, Euclid
14    Dhammapada, Theravada Buddhist scripture
15    Aeneid, Virgil
16    On the Nature of Reality, Lucretius
17    Allegorical Expositions of the Holy Laws, Philo of Alexandria
18    New Testament, Christian scripture
19    Parallel Lives, Plutarch
20    Annals, from the Death of the Divine Augustus, Cornelius Tacitus
21    Gospel of Truth, Valentinus
22    Meditations, Marcus Aurelius
23    Outlines of Pyrrhonism, Sextus Empiricus
24    Enneads, Plotinus
25    Confessions, Augustine of Hippo
26    Koran, Muslim scripture
27    Guide for the Perplexed, Moses Maimonides
28    Kabbalah, Text of Judaic mysticism
29    Summa Theologiae, Thomas Aquinas
30    The Divine Comedy, Dante Alighieri
31    In Praise of Folly, Desiderius Erasmus
32    The Prince, Niccolò Machiavelli
33    On the Babylonian Captivity of the Church, Martin Luther
34    Gargantua and Pantagruel, François Rabelais
35    Institutes of the Christian Religion, John Calvin
36    On the Revolution of the Celestial Orbs, Nicolaus Copernicus
37    Essays, Michel Eyquem de Montaigne
38    Don Quixote, Parts I and II, Miguel de Cervantes
39    The Harmony of the World, Johannes Kepler
40    Novum Organum, Francis Bacon
41    The First Folio [Works], William Shakespeare
42    Dialogue Concerning the Two Chief World Systems, Galileo Galilei
43    Discourse on Method, René Descartes
44    Leviathan, Thomas Hobbes
45    Works, Gottfried Wilhelm Leibniz
46    Pensées, Blaise Pascal
47    Ethics, Baruch de Spinoza
48    Pilgrim’s Progress, John Bunyan
49    Mathematical Principles of Natural Philosophy, Isaac Newton
50    Essay Concerning Human Understanding, John Locke
51    The Principles of Human Knowledge, George Berkeley
52    The New Science, Giambattista Vico
53    A Treatise of Human Nature, David Hume
54    The Encyclopedia, Denis Diderot, ed.
55    A Dictionary of the English Language, Samuel Johnson
56    Candide, François-Marie de Voltaire
57    Common Sense, Thomas Paine
58    An Enquiry Into the Nature and Causes of the Wealth of Nations, Adam Smith
59    The History of the Decline and Fall of the Roman Empire, Edward Gibbon
60    Critique of Pure Reason, Immanuel Kant
61    Confessions, Jean-Jacques Rousseau
62    Reflections on the Revolution in France, Edmund Burke
63    A Vindication of the Rights of Woman, Mary Wollstonecraft
64    An Enquiry Concerning Political Justice, William Godwin
65    An Essay on the Principle of Population, Thomas Robert Malthus
66    Phenomenology of Spirit, Georg Wilhelm Friedrich Hegel
67    The World as Will and Idea, Arthur Schopenhauer
68    Course in the Positivist Philosophy, Auguste Comte
69    On War, Carl von Clausewitz
70    Either/Or, Søren Kierkegaard
71    Manifesto of the Communist Party, Karl Marx and Friedrich Engels
72    “Civil Disobedience,” Henry David Thoreau
73    The Origin of Species, Charles Darwin
74    On Liberty, John Stuart Mill
75    First Principles, Herbert Spencer
76    Experiments on Plant Hybridization, Gregor Mendel
77    War and Peace, Leo Tolstoy
78    Treatise on Electricity and Magnetism, James Clerk Maxwell
79    Thus Spake Zarathustra, Friedrich Nietzsche
80    The Interpretation of Dreams, Sigmund Freud
81    Pragmatism, William James
82    Relativity, Albert Einstein
83    The Mind and Society, Vilfredo Pareto
84    Psychological Types, Carl Gustav Jung
85    I and Thou, Martin Buber
86    The Trial, Franz Kafka
87    The Logic of Scientific Discovery, Karl Popper
88    The General Theory of Employment, Interest, and Money, John Maynard Keynes
89    Being and Nothingness, Jean-Paul Sartre
90    The Road to Serfdom, Friedrich von Hayek
91    The Second Sex, Simone de Beauvoir
92    Cybernetics, Norbert Wiener
93    Nineteen Eighty-Four, George Orwell
94    Beelzebub’s Tales to His Grandson, George Ivanovitch Gurdjieff
95    Philosophical Investigations, Ludwig Wittgenstein
96    Syntactic Structures, Noam Chomsky
97    The Structure of Scientific Revolutions, T. S. Kuhn
98    The Feminine Mystique, Betty Friedan
99    Quotations from Chairman Mao Tse-tung [The Little Red Book], Mao Zedong
100    Beyond Freedom and Dignity, B. F. Skinner

The well-rounded list, featuring critically acclaimed novels, poetic masterpieces, scientific first principles, and political and religious works, was compiled by Martin Seymour-Smith in his 1998 book, The 100 Most Influential Books Ever Written: The History of Thought from Ancient Times to Today. Seymour-Smith was a British poet, critic, and biographer.

[div class=attrib]Image: “On the Revolutions of Heavenly Spheres” by Nicolaus Copernicus, 1543.[end-div]