Corporate Corruption: Greed, Lies and Nothing New

The last couple of decades have seen some remarkable cases of corporate excess and corruption. The deep-rooted human inclinations toward greed, falsehood and questionable ethics can probably be traced to the dawn of bipedalism. In more recent times, however, misdeeds, particularly in the business world, have grown in daring, scale and impact.

We’ve seen WorldCom overstate its cash flow, Parmalat falsify its accounts, Lehman Brothers (and other investment banks) hide critical information from investors, Enron cook its books, Bernard Madoff run his immense Ponzi scheme, Halliburton overcharge on government contracts, Tyco executives loot their own company, Wells Fargo and other retail banks robo-sign foreclosure documents, investment banks sell questionable products to investors and then bet against them, and, most recently, Barclays and other big banks manipulate interest rates.

These tales of gluttony and wrongdoing are a dream for social scientists; the public in general, meanwhile, tends to let the fat cats just get fatter and nastier. And where are the regulators, legislators and enforcers of the law? Generally asleep at the wheel, or in bed, so to speak, with their corporate donors. No wonder we all yawn at the latest scandal. Yet some suggest this complacency undermines the very foundations of Western capitalism.

[div class=attrib]From the New York Times:[end-div]

Perhaps the most surprising aspect of the Libor scandal is how familiar it seems. Sure, for some of the world’s leading banks to try to manipulate one of the most important interest rates in contemporary finance is clearly egregious. But is that worse than packaging billions of dollars worth of dubious mortgages into a bond and having it stamped with a Triple-A rating to sell to some dupe down the road while betting against it? Or how about forging documents on an industrial scale to foreclose fraudulently on countless homeowners?

The misconduct of the financial industry no longer surprises most Americans. Only about one in five has much trust in banks, according to Gallup polls, about half the level in 2007. And it’s not just banks that are frowned upon. Trust in big business overall is declining. Sixty-two percent of Americans believe corruption is widespread across corporate America. According to Transparency International, an anticorruption watchdog, nearly three in four Americans believe that corruption has increased over the last three years.

We should be alarmed that corporate wrongdoing has come to be seen as such a routine occurrence. Capitalism cannot function without trust. As the Nobel laureate Kenneth Arrow observed, “Virtually every commercial transaction has within itself an element of trust.”

The parade of financiers accused of misdeeds, booted from the executive suite and even occasionally jailed, is undermining this essential element. Have corporations lost whatever ethical compass they once had? Or does it just look that way because we are paying more attention than we used to?

This is hard to answer because fraud and corruption are impossible to measure precisely. Perpetrators understandably do their best to hide the dirty deeds from public view. And public perceptions of fraud and corruption are often colored by people’s sense of dissatisfaction with their lives.

Last year, the economists Justin Wolfers and Betsey Stevenson from the University of Pennsylvania published a study suggesting that trust in government and business falls when unemployment rises. “Much of the recent decline in confidence — particularly in the financial sector — may simply be a standard response to a cyclical downturn,” they wrote.

And waves of mistrust can spread broadly. After years of dismal employment prospects, Americans are losing trust in a broad range of institutions, including Congress, the Supreme Court, the presidency, public schools, labor unions and the church.

Corporate wrongdoing may be cyclical, too. Fraud is probably more lucrative, as well as easier to hide, amid the general prosperity of economic booms. And the temptation to bend the rules is probably highest toward the end of an economic upswing, when executives must be the most creative to keep the stream of profits rolling in.

The most toxic, no-doc, reverse amortization, liar loans flourished toward the end of the housing bubble. And we typically discover fraud only after the booms have turned to bust. As Warren Buffett famously said, “You only find out who is swimming naked when the tide goes out.”

Company executives are paid to maximize profits, not to behave ethically. Evidence suggests that they behave as corruptly as they can, within whatever constraints are imposed by law and reputation. In 1977, the United States Congress passed the Foreign Corrupt Practices Act, to stop the rampant practice of bribing foreign officials. Business by American multinationals in the most corrupt countries dropped. But they didn’t stop bribing. And American companies have been lobbying against the law ever since.

Extrapolating from frauds that were uncovered during and after the dot-com bubble, the economists Luigi Zingales and Adair Morse of the University of Chicago and Alexander Dyck of the University of Toronto estimated conservatively that in any given year a fraud was being committed by 11 to 13 percent of the large companies in the country.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Mug shot of Charles Ponzi (March 3, 1882 – January 18, 1949). Charles Ponzi was born in Italy and became known as a swindler for his money scheme. His aliases include Charles Ponei, Charles P. Bianchi, Carl and Carlo. Courtesy of Wikipedia.[end-div]

London’s Telephone Box

London’s bright red telephone boxes (booths, for our readers in the United States) are as iconic and recognizable as the Queen or Big Ben looming over the Houses of Parliament. Once as ubiquitous as the distinctive helmet of the London Bobby (police officer), many of these red cast-iron chambers have now been made redundant by mobile phones. As a result, BT has taken to auctioning some of its telephone boxes for a very good cause, ChildLine’s 25th anniversary, though not before each is painted or re-imagined by an artist or designer. Check out our five favorites below, and see all of BT’s colorful “ArtBoxes” here.

Accessorize

Proud of its London heritage, Accessorize has dressed its ArtBox in the brand’s trademark Union Jack design – customized and embellished in true Accessorize fashion.


Big Ben BT ArtBox

When Mandii first came to London from New Zealand, one of the first sights she wanted to see was Big Ben.


Peekaboo

Take a look and see what you find.

Evoking memories of the childhood game of hide and seek, ‘Peekaboo’ invites you to consider issues of loneliness and neglect, and the role of the ‘finder’, which can be attributed to ChildLine.


Slip

A phonebox troubled by a landslide. Just incredible.


Londontotem

Loving the block colours and character designs. Their jolly spirit is infectious, I mean, just look at their faces! The PhoneBox is like a mini street ornament in London, isn’t it? A proper little totem pole in its own right!


[div class=attrib]Read more about BT’s Artbox project after the jump.[end-div]

[div class=attrib]Images courtesy of BT.[end-div]

Yayoi Kusama: Connecting All the Dots


Yayoi Kusama, c. 1939 (left); Yayoi Kusama, 2000 (right)

The art establishment has Yayoi Kusama in its sights, again. Over the last 60 years Kusama has created and evolved a style that is all her own, best seen rather than discussed.

A recent exhibit of Kusama’s work in Brisbane featured “The obliteration room”. This wonderful, interactive exhibit was commissioned specifically for kids aged 1-101 years. It consists of a whitewashed room with simple furniture, fixtures and objects, all in white. The interactive — and fun — part features sheets of bright and colorful sticky dots given to each visitor. Armed with these dots, visitors are encouraged to place them anywhere and everywhere. Results below (including a few select dots courtesy of theDiagonal’s editor).

For an interesting timeline of her work, courtesy of the Queensland Art Gallery in Brisbane, Australia, follow this jump.

[div class=attrib]From the Telegraph:[end-div]

There are spots before my eyes. I am at the National Museum of Art in Osaka, Japan, where crowds are flocking to a big exhibition of Yayoi Kusama’s work. Dots are a recurring theme in her art, a visual representation of the hallucinations and anxiety attacks she has suffered from since childhood, so the show is dominated by giant red polka-dotted spheres, and a disorienting room in which huge white fibreglass tulips are covered in red dots – as are the white walls, ceiling and floor.

There’s one of her unsettling infinity mirror rooms, illuminated by seemingly endless floating dots of light, and a giant pumpkin crawling with a distinctive pattern of dots she calls Nerves. But unlike her retrospective at Tate Modern in London, which ran from February to June this year, the emphasis here is on her recent paintings: one long gallery is filled with monochrome works, another with paintings so bright they hurt the eyes. The same primitive, repetitive motifs occur in all of them: dots, eyes, faces, zigzag patterns, amoebic blobs and snakelike forms bristling with cilia.

The sheer number is overwhelming, dizzying. When she was based in New York, her phallus sculptures and naked hippie ‘happenings’ were seen as scandalous and shameful by many in her home country, but the scale of this show is an indication of her standing in Japan, where she is fast becoming a national treasure.

The next day, I am invited to Kusama’s studio in a backstreet of the Shinjuku area of Tokyo, a short walk away from her private room in Seiwa Hospital, a psychiatric unit where she has been a voluntary in-patient since 1977 and which she rarely leaves, except to work. Her studio is a cramped concrete and glass building, with cardboard boxes of supplies stacked up to the ceiling, the walls covered in racks of finished paintings, works in progress and blank canvases, a grey paint-spattered industrial carpet and a scruffy old office chair at the table where Kusama works under a glaring neon strip light.

She usually paints in comfortable pyjamas, one of her assistants tells me, her grey hair pulled up into a bun, but today she is upstairs having her hair and make-up done, ready to greet her guests.

When she finally comes down in the lift, a frail but colourful 83-year-old resplendent in a red wig and polka-dot ensemble, pushed in a polka-dotted wheelchair, she asks an assistant to show us some press cuttings of the Tate show, especially one from a paper from Matsumoto City, where she grew up. There’s something touching about this need to prove herself, but it’s also confusing – akin to J K Rowling showing off a review in The Gloucestershire Echo to verify that she is a published author.

Talking to Kusama can be a surreal experience. She is easily distracted, and although she lived in America for 20 years, she now speaks no English. She is surrounded by a team of assistants who translate for her, addressing her with respect as ‘sensei’ (‘master’ or ‘teacher’), and with whom she often seems to have long discussions before answering even the blandest questions. It’s hard to know what is being lost in translation, and what is down to the vagaries of age and health. But occasionally a question will engage her, and you’ll get a brief but fierce flash of the intelligence and focus she has so clearly poured into her work over the years.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Images: Yayoi Kusama, 1939 / Image courtesy: Ota Fine Arts, Tokyo / © Yayoi Kusama, Yayoi Kusama Studio Inc; Kusama, 2000 / Image courtesy: Ota Fine Arts, Tokyo / © Yayoi Kusama, Yayoi Kusama Studio Inc; theDiagonal / Queensland Art Gallery.[end-div]

Truthiness 101

Strangely and ironically, it takes a satirist to tell the truth, and, of course, academics now study the phenomenon.

[div class=attrib]From the Washington Post:[end-div]

Nation, our so-called universities are in big trouble, and not just because attending one of them leaves you with more debt than the Greek government. No, we’re talking about something even more unsettling: the academic world’s obsession with Stephen Colbert.

Last we checked, Colbert was a mere TV comedian, or a satirist if you want to get fancy about it. (And, of course, being college professors, they do.) He’s a TV star, like Donald Trump, only less of a caricature.

Yet ever since Colbert’s show, “The Colbert Report,” began airing on Comedy Central in 2005, these ivory-tower eggheads have been devoting themselves to studying all things Colbertian. They’ve sliced and diced his comic stylings more ways than a Ginsu knife. Every academic discipline — well, among the liberal arts, at least — seems to want a piece of him. Political science. Journalism. Philosophy. Race relations. Communications studies. Theology. Linguistics. Rhetoric.

There are dozens of scholarly articles, monographs, treatises and essays about Colbert, as well as books of scholarly articles, monographs and essays. A University of Oklahoma student even earned her doctorate last year by examining him and his “Daily Show” running mate Jon Stewart. It was called “Political Humor and Third-Person Perception.”

The academic cult of Colbert (or is it “the cul of Colbert”?) is everywhere. Here’s a small sample …

• “Is Stephen Colbert America’s Socrates?,” chapter heading in “Stephen Colbert and Philosophy: I Am Philosophy (And So Can You!),” published by Open Court, 2009.

• “The Wørd Made Fresh: A Theological Exploration of Stephen Colbert,” published in Concepts (“an interdisciplinary journal of graduate studies”), Villanova University, 2010.

• “It’s All About Meme: The Art of the Interview and the Insatiable Ego of the Colbert Bump,” chapter heading in “The Stewart/Colbert Effect: Essays on the Real Impacts of Fake News,” published by McFarland Press, 2011.

• “The Irony of Satire: Political Ideology and the Motivation to See What You Want to See in The Colbert Report,” a 2009 study in the International Journal of Press/Politics that its authors described as an investigation of “biased message processing” and “the influence of political ideology on perceptions of Stephen Colbert.” After much study, the authors found “no significant difference between [conservatives and liberals] in thinking Colbert was funny.”

Colbert-ism has insinuated itself into the undergraduate curriculum, too.

Boston University has offered a seminar called “The Colbert Report: American Satire” for the past two years, which explores Colbert’s use of “syllogism, logical fallacy, burlesque, and travesty,” as lecturer Michael Rodriguez described it on the school’s Web site.

This fall, Towson University will roll out a freshman seminar on politics and popular culture, with Colbert as its focus.

All this for a guy who would undoubtedly mock-celebrate the serious study of himself.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Colbert Report. Courtesy of Business Insider / Comedy Central.[end-div]

Extreme Equals Happy, Moderate Equals Unhappy

[div class=attrib]From the New York Times:[end-div]

WHO is happier about life — liberals or conservatives? The answer might seem straightforward. After all, there is an entire academic literature in the social sciences dedicated to showing conservatives as naturally authoritarian, dogmatic, intolerant of ambiguity, fearful of threat and loss, low in self-esteem and uncomfortable with complex modes of thinking. And it was the candidate Barack Obama in 2008 who infamously labeled blue-collar voters “bitter,” as they “cling to guns or religion.” Obviously, liberals must be happier, right?

Wrong. Scholars on both the left and right have studied this question extensively, and have reached a consensus that it is conservatives who possess the happiness edge. Many data sets show this. For example, the Pew Research Center in 2006 reported that conservative Republicans were 68 percent more likely than liberal Democrats to say they were “very happy” about their lives. This pattern has persisted for decades. The question isn’t whether this is true, but why.

Many conservatives favor an explanation focusing on lifestyle differences, such as marriage and faith. They note that most conservatives are married; most liberals are not. (The percentages are 53 percent to 33 percent, according to my calculations using data from the 2004 General Social Survey, and almost none of the gap is due to the fact that liberals tend to be younger than conservatives.) Marriage and happiness go together. If two people are demographically the same but one is married and the other is not, the married person will be 18 percentage points more likely to say he or she is very happy than the unmarried person.

An explanation for the happiness gap more congenial to liberals is that conservatives are simply inattentive to the misery of others. If they recognized the injustice in the world, they wouldn’t be so cheerful. In the words of Jaime Napier and John Jost, New York University psychologists, in the journal Psychological Science, “Liberals may be less happy than conservatives because they are less ideologically prepared to rationalize (or explain away) the degree of inequality in society.” The academic parlance for this is “system justification.”

The data show that conservatives do indeed see the free enterprise system in a sunnier light than liberals do, believing in each American’s ability to get ahead on the basis of achievement. Liberals are more likely to see people as victims of circumstance and oppression, and doubt whether individuals can climb without governmental help. My own analysis using 2005 survey data from Syracuse University shows that about 90 percent of conservatives agree that “While people may begin with different opportunities, hard work and perseverance can usually overcome those disadvantages.” Liberals — even upper-income liberals — are a third less likely to say this.

So conservatives are ignorant, and ignorance is bliss, right? Not so fast, according to a study from the University of Florida psychologists Barry Schlenker and John Chambers and the University of Toronto psychologist Bonnie Le in the Journal of Research in Personality. These scholars note that liberals define fairness and an improved society in terms of greater economic equality. Liberals then condemn the happiness of conservatives, because conservatives are relatively untroubled by a problem that, it turns out, their political counterparts defined.

There is one other noteworthy political happiness gap that has gotten less scholarly attention than conservatives versus liberals: moderates versus extremists.

Political moderates must be happier than extremists, it always seemed to me. After all, extremists actually advertise their misery with strident bumper stickers that say things like, “If you’re not outraged, you’re not paying attention!”

But it turns out that’s wrong. People at the extremes are happier than political moderates. Correcting for income, education, age, race, family situation and religion, the happiest Americans are those who say they are either “extremely conservative” (48 percent very happy) or “extremely liberal” (35 percent). Everyone else is less happy, with the nadir at dead-center “moderate” (26 percent).

What explains this odd pattern? One possibility is that extremists have the whole world figured out, and sorted into good guys and bad guys. They have the security of knowing what’s wrong, and whom to fight. They are the happy warriors.

Whatever the explanation, the implications are striking. The Occupy Wall Street protesters may have looked like a miserable mess. In truth, they were probably happier than the moderates making fun of them from the offices above. And none, it seems, are happier than the Tea Partiers, many of whom cling to guns and faith with great tenacity. Which some moderately liberal readers of this newspaper might find quite depressing.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Psychology Today.[end-div]

Famous Artworks Inspired by Other Famous Works

The Garden of Earthly Delights. Hieronymus Bosch.

The Tilled Field. Joan Miró.

[div class=attrib]From Flavorwire:[end-div]

We tend to think of appropriation as a postmodern thing, with artists in all media drawing on, referring to, and mashing up the most influential works of the past. But we forget that this has been happening for centuries — millennia, actually — as Renaissance painters paid tribute to Greek art, ideas circulated within the 19th-century French art scene, and Dada hijacked the course of art history, mocking and inverting everything that came before it. After the jump, we round up some of the best, most famous, and all-around strangest artworks inspired by other artworks. Some are homages, some are parodies, some are responses, and a few seem to function as all three.

Joan Miró’s The Tilled Field, inspired by Hieronymus Bosch’s The Garden of Earthly Delights

The resemblance between Joan Miró’s Surrealist painting and Bosch’s Early Netherlandish triptych may not be as clear as the parallels between some of the other works on this list, but when you know what to look for, the resemblance is certainly there. Besides the colors, which do echo The Garden of Earthly Delights, Miró placed in his painting many objects that appear in Bosch’s — crudely sexualized figures, disembodied ears, flocks of birds. Although the styles are different, both have the same busy, chaotic energy.

[div class=attrib]More from this top 10 list after the jump.[end-div]

Resurgence of Western Marxism

The death-knell for Western capitalism has yet to sound. However, increasing economic turmoil, continued shenanigans in the financial industry, burgeoning inequity and acute global political unease are combining to undermine the appeal of capitalism to a growing number of young people. Welcome to Marxism 2.012.

[div class=attrib]From the Guardian:[end-div]

Class conflict once seemed so straightforward. Marx and Engels wrote in the second best-selling book of all time, The Communist Manifesto: “What the bourgeoisie therefore produces, above all, are its own grave-diggers. Its fall and the victory of the proletariat are equally inevitable.” (The best-selling book of all time, incidentally, is the Bible – it only feels like it’s 50 Shades of Grey.)

Today, 164 years after Marx and Engels wrote about grave-diggers, the truth is almost the exact opposite. The proletariat, far from burying capitalism, are keeping it on life support. Overworked, underpaid workers ostensibly liberated by the largest socialist revolution in history (China’s) are driven to the brink of suicide to keep those in the west playing with their iPads. Chinese money bankrolls an otherwise bankrupt America.

The irony is scarcely wasted on leading Marxist thinkers. “The domination of capitalism globally depends today on the existence of a Chinese Communist party that gives de-localised capitalist enterprises cheap labour to lower prices and deprive workers of the rights of self-organisation,” says Jacques Rancière, the French Marxist thinker and Professor of Philosophy at the University of Paris VIII. “Happily, it is possible to hope for a world less absurd and more just than today’s.”

That hope, perhaps, explains another improbable truth of our economically catastrophic times – the revival in interest in Marx and Marxist thought. Sales of Das Kapital, Marx’s masterpiece of political economy, have soared ever since 2008, as have those of The Communist Manifesto and the Grundrisse (or, to give it its English title, Outlines of the Critique of Political Economy). Their sales rose as British workers bailed out the banks to keep the degraded system going and the snouts of the rich firmly in their troughs while the rest of us struggle in debt, job insecurity or worse. There’s even a Chinese theatre director called He Nian who capitalised on Das Kapital’s renaissance to create an all-singing, all-dancing musical.

And in perhaps the most lovely reversal of the luxuriantly bearded revolutionary theorist’s fortunes, Karl Marx was recently chosen from a list of 10 contenders to appear on a new issue of MasterCard by customers of German bank Sparkasse in Chemnitz. In communist East Germany from 1953 to 1990, Chemnitz was known as Karl Marx Stadt. Clearly, more than two decades after the fall of the Berlin Wall, the former East Germany hasn’t airbrushed its Marxist past. In 2008, Reuters reports, a survey of east Germans found 52% believed the free-market economy was “unsuitable” and 43% said they wanted socialism back. Karl Marx may be dead and buried in Highgate cemetery, but he’s alive and well among credit-hungry Germans. Would Marx have appreciated the irony of his image being deployed on a card to get Germans deeper in debt? You’d think.

Later this week in London, several thousand people will attend Marxism 2012, a five-day festival organised by the Socialist Workers’ Party. It’s an annual event, but what strikes organiser Joseph Choonara is how, in recent years, many more of its attendees are young. “The revival of interest in Marxism, especially for young people comes because it provides tools for analysing capitalism, and especially capitalist crises such as the one we’re in now,” Choonara says.

There has been a glut of books trumpeting Marxism’s relevance. English literature professor Terry Eagleton last year published a book called Why Marx Was Right. French Maoist philosopher Alain Badiou published a little red book called The Communist Hypothesis with a red star on the cover (very Mao, very now) in which he rallied the faithful to usher in the third era of the communist idea (the previous two having gone from the establishment of the French Republic in 1792 to the massacre of the Paris communards in 1871, and from 1917 to the collapse of Mao’s Cultural Revolution in 1976). Isn’t this all a delusion?

Aren’t Marx’s venerable ideas as useful to us as the hand loom would be to shoring up Apple’s reputation for innovation? Isn’t the dream of socialist revolution and communist society an irrelevance in 2012? After all, I suggest to Rancière, the bourgeoisie has failed to produce its own gravediggers. Rancière refuses to be downbeat: “The bourgeoisie has learned to make the exploited pay for its crisis and to use them to disarm its adversaries. But we must not reverse the idea of historical necessity and conclude that the current situation is eternal. The gravediggers are still here, in the form of workers in precarious conditions like the over-exploited workers of factories in the far east. And today’s popular movements – Greece or elsewhere – also indicate that there’s a new will not to let our governments and our bankers inflict their crisis on the people.”

That, at least, is the perspective of a seventysomething Marxist professor. What about younger people of a Marxist temper? I ask Jaswinder Blackwell-Pal, a 22-year-old who has just finished her BA in English and drama at Goldsmiths College, London, why she considers Marxist thought still relevant. “The point is that younger people weren’t around when Thatcher was in power or when Marxism was associated with the Soviet Union,” she says. “We tend to see it more as a way of understanding what we’re going through now. Think of what’s happening in Egypt. When Mubarak fell it was so inspiring. It broke so many stereotypes – democracy wasn’t supposed to be something that people would fight for in the Muslim world. It vindicates revolution as a process, not as an event. So there was a revolution in Egypt, and a counter-revolution and a counter-counter revolution. What we learned from it was the importance of organisation.”

This, surely, is the key to understanding Marxism’s renaissance in the west: for younger people, it is untainted by association with Stalinist gulags. For younger people too, Francis Fukuyama’s triumphalism in his 1992 book The End of History – in which capitalism seemed incontrovertible, its overthrow impossible to imagine – exercises less of a choke-hold on their imaginations than it does on those of their elders.

Blackwell-Pal will be speaking Thursday on Che Guevara and the Cuban revolution at the Marxism festival. “It’s going to be the first time I’ll have spoken on Marxism,” she says nervously. But what’s the point thinking about Guevara and Castro in this day and age? Surely violent socialist revolution is irrelevant to workers’ struggles today? “Not at all!” she replies. “What’s happening in Britain is quite interesting. We have a very, very weak government mired in in-fighting. I think if we can really organise we can oust them.” Could Britain have its Tahrir Square, its equivalent to Castro’s 26th of July Movement? Let a young woman dream. After last year’s riots and today with most of Britain alienated from the rich men in its government’s cabinet, only a fool would rule it out.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Portrait of Karl Marx. Courtesy of International Institute of Social History in Amsterdam, Netherlands / Wikipedia.[end-div]

A Different Kind of Hotel

Bored of the annual family trip to Disneyland? Tired of staying in a suite hotel that still offers muzak in the lobby, floral motifs on the walls, and ashtrays and saccharin packets next to the rickety minibar? Well, leaf through this list of 10 exotic and gorgeous hotels and start planning your next real escape today.

Wadi Rum Desert Lodge – The Valley of the Moon, Jordan.

[div class=attrib]From Flavorwire:[end-div]

A Backward Glance, Pulitzer Prize-winning author Edith Wharton’s gem of an autobiography, is highbrow beach reading at its very best. In the memoir, she recalls time spent with her bff traveling buddy, Henry James, and quotes his arcadian proclamation, “summer afternoon — summer afternoon; to me those have always been the two most beautiful words in the English language.” Maybe so in the less than industrious heyday of inherited wealth, but in today’s world where most people work all day for a living, those two words just don’t have the same appeal as our two favorite words: summer getaway.

Like everyone else in our overworked and overheated city, we can think of nothing but rest and relaxation — especially on a hot Friday afternoon like this. In considering options for our celebrated summer respite, we thought we’d take a virtual gander to check out alternatives to the usual Hamptons summer share. From a treehouse where sloths join you for morning coffee to a giant sandcastle, click through to see some of the most unusual summer getaway destinations in the world.

[div class=attrib]See more stunning hotels after the jump.[end-div]

Solar Tornadoes

No, solar tornadoes are not another manifestation of our slowly warming planet. Rather, these phenomena are believed to explain why the outer reaches of the solar atmosphere are so much hotter than its surface.

[div class=attrib]From ars technica:[end-div]

One of the abiding mysteries surrounding our Sun is understanding how the corona gets so hot. The Sun’s surface, which emits almost all the visible light, is about 5800 Kelvins. The surrounding corona rises to over a million K, but the heating process has not been identified. Most solar physicists suspect the process is magnetic, since the strong magnetic fields at the Sun’s surface drive much of the solar weather (including sunspots, coronal loops, prominences, and mass ejections). However, the diffuse solar atmosphere is magnetically too quiet on the large scales. The recent discovery of atmospheric “tornadoes”—swirls of gas over a thousand kilometers in diameter above the Sun’s surface—may provide a possible answer.

As described in Nature, these vortices occur in the chromosphere (the layer of the Sun’s atmosphere below the corona) and they are common. There are about 10 thousand swirls in evidence at any given time. Sven Wedemeyer-Böhm and colleagues identified the vortices using NASA’s Solar Dynamics Observatory (SDO) spacecraft and the Swedish Solar Telescope (SST). They measured the shape of the swirls as a function of height in the atmosphere, determining they grow wider at higher elevations, with the whole structure aligned above a concentration of the magnetic field on the Sun’s surface. Comparing these observations to computer simulations, the authors determined the vortices could be produced by a magnetic vortex exerting pressure on the gas in the atmosphere, accelerating it along a spiral trajectory up into the corona. Such acceleration could bring about the incredibly high temperatures observed in the Sun’s outer atmosphere.

The Sun’s atmosphere is divided into three major regions: the photosphere, the chromosphere, and the corona. The photosphere is the visible bit of the Sun, what we typically think of as the “surface.” It exhibits the behavior of rising gas and photons from the solar interior, as well as magnetic phenomena such as sunspots. The chromosphere is far less dense but hotter; the corona (“crown”) is still hotter and less dense, making an amorphous cloud around the sphere of the Sun. The chromosphere and corona are not seen without special equipment (except during total solar eclipses), but they can be studied with dedicated solar observatories.

To crack the problem of the super-hot corona, the researchers focused their attention on the chromosphere. Using data from SDO and SST, they measured the motion of various elements in the Sun’s atmosphere (iron, calcium, and helium) via the Doppler effect. These different gases all exhibited vortex behavior, aligned with the same spot on the photosphere. The authors identified 14 vortices during a single 55-minute observing run; each vortex lasted for an average of about 13 minutes. Based on these statistics, they determined the Sun should have at least 11,000 vortices on its surface at any given time, at least during periods of low sunspot activity.

Due to the different wavelengths of light the observers used, they were able to map the shape and speed of the vortices as a function of height in the chromosphere. They found the familiar tornado shape: tapered at the base, widening at the top, reaching diameters of 1500 km. Each vortex was aligned along a single axis over a bright spot in the photosphere, which is the sign of a concentration of magnetic field lines.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A giant solar tornado from last fall, large enough to swallow up five planet Earths, is the first of its kind caught on film, March 6, 2012. Courtesy of Slate / NASA / Solar Dynamics Observatory (SDO).[end-div]


Busyness As Chronic Illness

Apparently, being busy alleviates our existential dread. So, if your roughly 16 hours, or more, of wakefulness each day are crammed with memos, driving, meetings, widgets, calls, charts, quotas, angry customers, school lunches, deciding, reports, bank statements, kids, budgets, bills, baking, making, fixing, cleaning and mad bosses, then your life must be meaningful, right?

Think again.

Author Tim Kreider muses below on this chronic state of affairs, and hits a nerve when he suggests, “I can’t help but wonder whether all this histrionic exhaustion isn’t a way of covering up the fact that most of what we do doesn’t matter.”

[div class=attrib]From the New York Times:[end-div]

If you live in America in the 21st century you’ve probably had to listen to a lot of people tell you how busy they are. It’s become the default response when you ask anyone how they’re doing: “Busy!” “So busy.” “Crazy busy.” It is, pretty obviously, a boast disguised as a complaint. And the stock response is a kind of congratulation: “That’s a good problem to have,” or “Better than the opposite.”

Notice it isn’t generally people pulling back-to-back shifts in the I.C.U. or commuting by bus to three minimum-wage jobs  who tell you how busy they are; what those people are is not busy but tired. Exhausted. Dead on their feet. It’s almost always people whose lamented busyness is purely self-imposed: work and obligations they’ve taken on voluntarily, classes and activities they’ve “encouraged” their kids to participate in. They’re busy because of their own ambition or drive or anxiety, because they’re addicted to busyness and dread what they might have to face in its absence.

Almost everyone I know is busy. They feel anxious and guilty when they aren’t either working or doing something to promote their work. They schedule in time with friends the way students with 4.0 G.P.A.’s  make sure to sign up for community service because it looks good on their college applications. I recently wrote a friend to ask if he wanted to do something this week, and he answered that he didn’t have a lot of time but if something was going on to let him know and maybe he could ditch work for a few hours. I wanted to clarify that my question had not been a preliminary heads-up to some future invitation; this was the invitation. But his busyness was like some vast churning noise through which he was shouting out at me, and I gave up trying to shout back over it.

Even children are busy now, scheduled down to the half-hour with classes and extracurricular activities. They come home at the end of the day as tired as grown-ups. I was a member of the latchkey generation and had three hours of totally unstructured, largely unsupervised time every afternoon, time I used to do everything from surfing the World Book Encyclopedia to making animated films to getting together with friends in the woods to chuck dirt clods directly into one another’s eyes, all of which provided me with important skills and insights that remain valuable to this day. Those free hours became the model for how I wanted to live the rest of my life.

The present hysteria is not a necessary or inevitable condition of life; it’s something we’ve chosen, if only by our acquiescence to it. Not long ago I  Skyped with a friend who was driven out of the city by high rent and now has an artist’s residency in a small town in the south of France. She described herself as happy and relaxed for the first time in years. She still gets her work done, but it doesn’t consume her entire day and brain. She says it feels like college — she has a big circle of friends who all go out to the cafe together every night. She has a boyfriend again. (She once ruefully summarized dating in New York: “Everyone’s too busy and everyone thinks they can do better.”) What she had mistakenly assumed was her personality — driven, cranky, anxious and sad — turned out to be a deformative effect of her environment. It’s not as if any of us wants to live like this, any more than any one person wants to be part of a traffic jam or stadium trampling or the hierarchy of cruelty in high school — it’s something we collectively force one another to do.

Busyness serves as a kind of existential reassurance, a hedge against emptiness; obviously your life cannot possibly be silly or trivial or meaningless if you are so busy, completely booked, in demand every hour of the day. I once knew a woman who interned at a magazine where she wasn’t allowed to take lunch hours out, lest she be urgently needed for some reason. This was an entertainment magazine whose raison d’être was obviated when “menu” buttons appeared on remotes, so it’s hard to see this pretense of indispensability as anything other than a form of institutional self-delusion. More and more people in this country no longer make or do anything tangible; if your job wasn’t performed by a cat or a boa constrictor in a Richard Scarry book I’m not sure I believe it’s necessary. I can’t help but wonder whether all this histrionic exhaustion isn’t a way of covering up the fact that most of what we do doesn’t matter.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Entrepreneur.com.[end-div]

First Artists: Neanderthals or Homo sapiens?

The recent finding in a Spanish cave of a painted “red dot” dating from around 40,800 years ago suggests that our Neanderthal cousins may have beaten our species to the prize of “first artist”. Yet the evidence remains scant, and even if this were proven to be the case, we Homo sapiens can certainly lay claim to taking art beyond a “red dot” and making it our very own (and much else besides).

[div class=attrib]From the Guardian:[end-div]

Why do Neanderthals so fascinate Homo sapiens? And why are we so keen to exaggerate their virtues?

It is political correctness gone prehistoric. At every opportunity, people rush to attribute “human” virtues to this extinct human-like species. The latest generosity is to credit them with the first true art.

A recent redating of cave art in Spain has revealed the oldest paintings in Europe. A red dot in the cave El Castillo has now been dated at 40,800 years ago – considerably older than the cave art of Chauvet in France and contemporary with the arrival of the very first “modern humans”, Homo sapiens, in Europe.

This raises two possibilities, point out the researchers. Either the new humans from Africa started painting in caves the moment they entered Europe, or painting was already being done by the Neanderthals who were at that moment the most numerous relatives of modern humans on the European continent. One expert confesses to a “hunch” – which he acknowledges cannot be proven as things stand – that Neanderthals were painters.

That hunch goes against the weight of the existing evidence. Of course that hasn’t stopped it dominating all reports of the story: as far as media impressions go, the Neanderthals were now officially the first artists. Yet nothing of the sort has been proven, and plenty of evidence suggests that the traditional view is still far more likely.

In this view, the precocious development of art in ice age Europe marks out the first appearance of modern human consciousness, the intellectual birth of our species, the hand of Homo sapiens making its mark.

One crucial piece of evidence of where art came from is a piece of red ochre, engraved with abstract lines, that was discovered a decade ago in Blombos cave in South Africa. It is at least 70,000 years old and the oldest unmistakable artwork ever found. It is also a tool to make more art: ochre was great for making red marks on stone. It comes from Africa, where modern humans evolved, and reveals that when Homo sapiens made the move into Europe, our species could already draw on a long legacy of drawing and engraving. In fact, the latest finds at Blombos include a complete painting kit.

In other words, what is so surprising about the idea that Homo sapiens started to apply these skills immediately on discovering the caves of ice age Europe? It has to be more likely, on the face of it, than assuming these early Spanish images are by Neanderthals in the absence of any other solid evidence of paintings by them.

For, moving forward a few thousand years, the paintings of Chauvet and other French caves are certainly by us, Homo sapiens. And they remind us why this first art is so exciting and important: modern humans did not just do dots and handprints but magnificent, realistic portraits of animals. Their art is so superb in quality that it proves the existence of a higher mind, the capacity to create civilisation.

Is it possible that Neanderthals also used pigment to colour walls and also had the mental capacity to invent art? Of course it is, but the evidence at the moment still massively suggests art is a uniquely human achievement, unique, that is, to us – and fundamental to who we are.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A hand stencil in El Castillo cave, Spain, has been dated to earlier than 37,300 years ago and a red dot to earlier than 40,600 years ago, making them the oldest cave paintings in Europe. Courtesy of New Scientist / Pedro Saura.[end-div]

Ignorance [is] the Root and Stem of All Evil

Hailing from Classical Greece of around 2,400 years ago, Plato has given our contemporary world many important intellectual gifts. His broad interests in justice, mathematics, virtue, epistemology, rhetoric and art laid the foundations for Western philosophy and science. Yet in his quest for deeper and broader knowledge he also had some important things to say about ignorance.

Massimo Pigliucci over at Rationally Speaking gives us his take on Platonic ignorance. His caution is appropriate: in this age of information overload and extreme politicization it is ever more important for us to realize and acknowledge our own ignorance. Spreading falsehoods and presenting opinion as fact to others — transferred ignorance — is rightly identified by Plato as a moral failing. In his own words (in translation, of course), “Ignorance [is] the Root and Stem of All Evil.”

[div class=attrib]From Rationally Speaking:[end-div]

Plato famously maintained that knowledge is “justified true belief,” meaning that to claim the status of knowledge our beliefs (say, that the earth goes around the sun, rather than the other way around) have to be both true (to the extent this can actually be ascertained) and justified (i.e., we ought to be able to explain to others why we hold such beliefs, otherwise we are simply repeating the — possibly true — beliefs of someone else).

It is the “justified” part that is humbling, since a moment’s reflection will show that a large number of things we think we know we actually cannot justify, which means that we are simply trusting someone else’s authority on the matter. (Which is okay, as long as we realize and acknowledge that to be the case.)

I was recently intrigued, however, not by Plato’s well known treatment of knowledge, but by his far less discussed views on the opposite of knowledge: ignorance. The occasion for these reflections was a talk by Katja Maria Vogt of Columbia University, delivered at CUNY’s Graduate Center, where I work. Vogt began by recalling the ancient skeptics’ attitude toward ignorance, as a “conscious positive stand,” meaning that skepticism is founded on one’s realization of his own ignorance. In this sense, of course, Socrates’ contention that he knew nothing becomes neither a self-contradiction (isn’t he saying that he knows that he knows nothing, thereby acknowledging that he knows something?), nor false modesty. Socrates was simply saying that he was aware of having no expertise while at the same time devoting his life to the quest for knowledge.

Vogt was particularly interested in Plato’s concept of “transferred ignorance,” which the ancient philosopher singled out as morally problematic. Transferred ignorance is the case when someone imparts “knowledge” that he is not aware is in fact wrong. Let us say, for instance, that I tell you that vaccines cause autism, and I do so on the basis of my (alleged) knowledge of biology and other pertinent matters, while, in fact, I am no medical researcher and have only vague notions of how vaccines actually work (i.e., imagine my name is Jenny McCarthy).

The problem, for Plato, is that in a sense I would be thinking of myself as smarter than I actually am, which of course carries a feeling of power over others. I wouldn’t simply be mistaken in my beliefs, I would be mistaken in my confidence in those beliefs. It is this willful ignorance (after all, I did not make a serious attempt to learn about biology or medical research) that carries moral implications.

So for Vogt the ancient Greeks distinguished between two types of ignorance: the self-aware, Socratic one (which is actually good) and the self-oblivious one of the overconfident person (which is bad). Need I point out that far too little of the former and too much of the latter permeate current political and social discourse? Of course, I’m sure a historian could easily come up with a plethora of examples of bad ignorance throughout human history, all the way back to the beginning of recorded time, but it does strike me that the increasingly fact-free public discourse on issues varying from economic policies to scientific research has brought Platonic transferred ignorance to never before achieved peaks (or, rather, valleys).

And I suspect that this is precisely because of the lack of appreciation of the moral dimension of transferred or willful ignorance. When politicians or commentators make up “facts” — or disregard actual facts to serve their own ideological agendas — they sometimes seem genuinely convinced that they are doing something good, at the very least for their constituents, and possibly for humanity at large. But how can it be good — in the moral sense — to make false knowledge one’s own, and even to actively spread it to others?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Socrates and Plato in a medieval picture. Courtesy of Wikipedia.[end-div]

Have a Laugh, Blame Twitter

Correlate two sets of totally independent statistics and you get to blame Twitter for most, if not all, of the world’s ills. That’s what Tim Cooley has done with his funny and informative #BlameTwitter infographic below.

Of course, even though the numbers are all verified and trusted, causation is another matter entirely. So, while 144,595 people die each day (on average), it is not (yet) as a result of using Twitter, and while our planet loses one hectare of forest for every 18,000 tweets, it is not the endless twittering that is causing deforestation.
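For the statistically curious, here is a minimal Python sketch of why that caveat matters. It generates two completely independent, steadily growing daily counts and shows that they still correlate almost perfectly. All figures are invented for illustration; nothing here is taken from Cooley’s infographic.

# A toy demonstration of spurious correlation: two independently
# generated, trending series end up highly correlated even though
# neither causes the other. All numbers below are made up.
import random
import statistics

random.seed(42)
days = range(365)

# Hypothetical daily tweet volume: a steady upward trend plus noise.
tweets = [200_000_000 + 500_000 * d + random.gauss(0, 5_000_000) for d in days]

# Hypothetical daily forest loss in hectares: also trends upward,
# generated completely independently of the tweet series.
forest_loss = [35_000 + 10 * d + random.gauss(0, 800) for d in days]

# Pearson's r (statistics.correlation requires Python 3.10+).
r = statistics.correlation(tweets, forest_loss)
print(f"Pearson correlation: {r:.2f}")  # typically above 0.9, yet no causal link

Two series that each drift in the same direction over time will always look correlated; the shared trend, not any causal link, does all the work.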

[div class=attrib]Infographic courtesy of Tim Cooley.[end-div]

Child Mutilation and Religious Ritual

A court in Germany recently ruled that parents may not circumcise their sons at birth for religious reasons. Quite understandably, the court held that the practice violates bodily integrity. Aside from being morally repugnant to many theists and non-believers alike, the practice inflicts pain. So why do some religions continue to circumcise children?

[div class=attrib]From Slate:[end-div]

A German court ruled on Tuesday that parents may not circumcise their sons at birth for religious reasons, because the procedure violates the child’s right to bodily integrity. Both Muslims and Jews circumcise their male children. Why is Christianity the only Abrahamic religion that doesn’t encourage circumcision?

Because Paul believed faith was more important than foreskin. Shortly after Jesus’ death, his followers had a disagreement over the nature of his message. Some acolytes argued that he offered salvation through Judaism, so gentiles who wanted to join his movement should circumcise themselves like any other Jew. The apostle Paul, however, believed that faith in Jesus was the only requirement for salvation. Paul wrote that Jews who believed in Christ could go on circumcising their children, but he urged gentiles not to circumcise themselves or their sons, because trying to mimic the Jews represented a lack of faith in Christ’s ability to save them. By the time that the Book of Acts was written in the late first or early second century, Paul’s position seems to have become the dominant view of Christian theologians. Gentiles were advised to follow only the limited set of laws—which did not include circumcision—that God gave to Noah after the flood rather than the full panoply of rules followed by the Jews.

Circumcision was uniquely associated with Jews in first-century Rome, even though other ethnic and religious groups practiced it. Romans wrote satirical poems mocking the Jews for taking a day off each week, refusing to eat pork, worshipping a sky god, and removing their sons’ foreskin. It is, therefore, neither surprising that early Christian converts sought advice on whether to adopt the practice of circumcision nor that Paul made it the focus of several of his famous letters.

The early compromise that Paul struck—ethnic Jewish Christians should circumcise, while Jesus’ gentile followers should not—held until Christianity became a legal religion in the fourth century. At that time, the two religions split permanently, and it became something of a heresy to suggest that one could be both Jewish and Christian. As part of the effort to distinguish the two religions, circumcisions became illegal for Christians, and Jews were forbidden from circumcising their slaves.

Although the church officially renounced religious circumcision around 300 years after Jesus’s death, Christians long maintained a fascination with it. In the 600s, Christians began celebrating the day Jesus was circumcised. According to medieval Christian legend, an angel bestowed Jesus’ foreskin upon Emperor Charlemagne in the Church of the Holy Sepulchre, where Christ was supposedly buried. Coptic Christians and a few other Christian groups in Africa resumed religious circumcision long after their European counterparts abandoned it.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Apostle Paul. Courtesy of Wikipedia.[end-div]

100-Million-Year-Old Galactic Echo

Astronomers have found what they believe to be the echoes of a collision, some 100 million years ago, between a smaller galaxy and our own Milky Way.

[div class=attrib]From Symmetry Magazine:[end-div]

Our galaxy, the Milky Way, is a large spiral galaxy surrounded by dozens of smaller satellite galaxies. Scientists have long theorized that occasionally these satellites will pass through the disk of the Milky Way, perturbing both the satellite and the disk. A team of astronomers from Canada and the United States have discovered what may well be the smoking gun of such an encounter, one that occurred close to our position in the galaxy and relatively recently, at least in the cosmological sense.

“We have found evidence that our Milky Way had an encounter with a small galaxy or massive dark matter structure perhaps as recently as 100 million years ago,” said Larry Widrow, professor at Queen’s University in Canada. “We clearly observe unexpected differences in the Milky Way’s stellar distribution above and below the Galaxy’s midplane that have the appearance of a vertical wave — something that nobody has seen before.”

The discovery is based on observations of some 300,000 nearby Milky Way stars by the Sloan Digital Sky Survey. Stars in the disk of the Milky Way move up and down at a speed of about 20-30 kilometers per second while orbiting the center of the galaxy at a brisk 220 kilometers per second. Widrow and his four collaborators from the University of Kentucky, the University of Chicago and Fermi National Accelerator Laboratory have found that the positions and motions of these nearby stars weren’t quite as regular as previously thought.

“Our part of the Milky Way is ringing like a bell,” said Brian Yanny, of the Department of Energy’s Fermilab. “But we have not been able to identify the celestial object that passed through the Milky Way. It could have been one of the small satellite galaxies that move around the center of our galaxy, or an invisible structure such as a dark matter halo.”

Adds Susan Gardner, professor of physics at the University of Kentucky: “The perturbation need not have been a single isolated event in the past, and it may even be ongoing. Additional observations may well clarify its origin.”

When the collaboration started analyzing the SDSS data on the Milky Way, they noticed a small but statistically significant difference in the distribution of stars north and south of the Milky Way’s midplane. For more than a year, the team members explored various explanations of this north-south asymmetry, such as the effect of interstellar dust on distance determinations and the way the stars surveyed were selected. When those attempts failed, they began to explore the alternative explanation that the data was telling them something about recent events in the history of the Galaxy.

The scientists used computer simulations to explore what would happen if a satellite galaxy or dark matter structure passed through the disk of the Milky Way. The simulations indicate that over the next 100 million years or so, our galaxy will “stop ringing:” the north-south asymmetry will disappear and the vertical motions of stars in the solar neighborhood will revert back to their equilibrium orbits — unless we get hit again.

[div class=attrib]Read the entire article after the jump.[end-div]

How Do Startup Companies Succeed?

A view from Esther Dyson, one of the world’s leading digital technology entrepreneurs. She has served as an early investor in numerous startups, including Flickr, del.icio.us, ZEDO, and Medspace, and is currently focused on startups in medical technology and aviation.

[div class=attrib]From Project Syndicate:[end-div]

The most popular stories often seem to end at the beginning. “…and so Juan and Alice got married.” Did they actually live happily ever after? “He was elected President.” But how did the country do under his rule? “The entrepreneur got her startup funding.” But did the company succeed?

Let’s consider that last one. Specifically, what happens to entrepreneurs once they get their money? Everywhere I go – and I have been in Moscow, Libreville (Gabon), and Dublin in the last few weeks – smart people ask how to get companies through the next phase of growth. How can we scale entrepreneurship to the point that it has a measurable and meaningful impact on the economy?

The real impact of both Microsoft and Google is not on their shareholders, or even on the people that they employ directly, but on the millions of people whom they have made more productive. That argues for companies that solve real problems, rather than for yet another photo-sharing app for rich, appealing (to advertisers) people with time on their hands.

It turns out that money is rarely enough – not just that there is not enough of it, but that entrepreneurs need something else. They need advice, contacts, customers, and employees immersed in a culture of effectiveness to succeed. But they also have to create something of real value to have meaningful economic impact in the long term.

The easy, increasingly popular answer is accelerators, incubators, camps, weekends – a host of locations and events to foster the development of startups. But these are just buildings and conferences unless they include people who can help with the software – contacts, customers, and culture. The people in charge, from NGOs to government officials, have great ideas about structures – tax policy, official financing, etc. – while the entrepreneurs themselves are too busy running their companies to find out about these things.

But this week in Dublin, I found what we need: not policies or theories, but actual living examples. Not far from the fancy hotel at which I was staying, and across from Google’s modish Irish offices, sits a squat old warehouse with a new sign: Startupbootcamp. You enter through a side door, into a cavern full of sawdust and cheap furniture (plus a pool table and a bar, of course).

What makes this place interesting is its sponsor: venerable old IBM. The mission of Startupbootcamp Europe is not to celebrate entrepreneurs, or even to educate them, but to help them scale up to meaningful businesses. Their new products can use IBM’s and other mentors’ contacts with the much broader world, whether for strategic marketing alliances, the power of an IBM endorsement, or, ultimately, an acquisition.

I was invited by Martin Kelly, who represents IBM’s venture arm in Ireland. He introduced me to the manager of the place, Eoghan Jennings, and a bunch of seasoned executives.

There was a three-time entrepreneur, Conor Hanley, co-founder of BiancaMed (recently sold to Resmed), who now has a sleep-monitoring tool and an exciting distribution deal with a large company he can’t yet mention; Jim Joyce, a former sales executive for Schering Plough who is now running Point of Care, which helps clinicians to help patients to manage their own care after they leave hospital; and Johnny Walker, a radiologist whose company operates scanners in the field and interprets the scans through a network of radiologists worldwide. Currently, Walker’s company, Global Diagnostics, is focused on pre-natal care, but give him time.

These guys are not the “startups”; they are the mentors, carefully solicited by Kelly from within the tightly knit Irish business community. He knew exactly what he was looking for: “In Ireland, we have people from lots of large companies. Joyce, for example, can put a startup in touch with senior management from virtually any pharma company around the world. Hanley knows manufacturing and tech partners. Walker understands how to operate in rural conditions.”

According to Jennings, a former chief financial officer of Xing, Europe’s leading social network, “We spent years trying to persuade people that they had a problem we could solve; now I am working with companies solving problems that people know they have.”  And that usually involves more than an Internet solution; it requires distribution channels, production facilities, market education, and the like. Startupbootcamp’s next batch of startups, not coincidentally, will be in the health-care sector.

Each of the mentors can help a startup to go global. Precisely because the Irish market is so small, it’s a good place to find people who know how to expand globally. In Ireland right now, as in so many countries, many large companies are laying off people with experience. Not all of them have the makings of an entrepreneur. But most of them have skills worth sharing, whether it’s how to run a sales meeting, oversee a development project, or manage a database of customers.

[div class=attrib]Read the entire article after the jump.[end-div]

National Education Rankings: C-

One would believe that the most affluent and open country on the planet would have one of the best, if not the best, education systems. Yet, the United States of America distinguishes itself by being thoroughly mediocre in a ranking of developed nations in science, mathematics and reading. How can we make amends for our children?

[div class=attrib]From Slate:[end-div]

Take the 2009 PISA test, which assessed the knowledge of students from 65 countries and economies—34 of which are members of the development organization the OECD, including the United States—in math, science, and reading. Of the OECD countries, the United States came in 17th place in science literacy; of all countries and economies surveyed, it came in 23rd place. The U.S. score of 502 practically matched the OECD average of 501. That puts us firmly in the middle. Where we don’t want to be.

What do the leading countries do differently? To find out, Slate asked science teachers from five countries that are among the world’s best in science education—Finland, Singapore, South Korea, New Zealand, and Canada—how they approach their subject and the classroom. Their recommendations: Keep students engaged and make the science seem relevant.

Finland: “To Make Students Enjoy Chemistry Is Hard Work”

Finland was first among the 34 OECD countries in the 2009 PISA science rankings and second—behind mainland China—among all 65 nations and economies that took the test. Ari Myllyviita teaches chemistry and works with future science educators at the Viikki Teacher Training School of Helsinki University.

Finland’s National Core Curriculum is premised on the idea “that learning is a result of a student’s active and focused actions aimed to process and interpret received information in interaction with other students, teachers and the environment and on the basis of his or her existing knowledge structures.”

My conception of learning rests strongly on this citation from our curriculum. My aim is to support knowledge-building, socioculturally: to create socially supported activity in the student’s zone of proximal development (the area where a student needs some support to achieve the next level of understanding or skill). The student’s previous knowledge is the starting point, and then the learning is bound to the activity during lessons—experiments, simulations, and observing phenomena.

The National Core Curriculum also states, “The purpose of instruction in chemistry is to support development of students’ scientific thinking and modern worldview.” Our teaching is based on examination and observations of substances and chemical phenomena, their structures and properties, and reactions between substances. Through experiments and theoretical models, students are taught to understand everyday life and nature. In my classroom, I use discussion, lectures, demonstrations, and experimental work—quite often based on group work. Between lessons, I use social media and other information communication technologies to stay in touch with students.

In addition to the National Core Curriculum, my school has its own. They have the same bases, but our own curriculum is more concrete. Based on these, I write my course and lesson plans. Because of different learning styles, I use different kinds of approaches, sometimes theoretical and sometimes experimental. Always there are new concepts and perhaps new models to explain the phenomena or results.

To make students enjoy learning chemistry is hard work. I think that as a teacher, you have to love your subject and enjoy teaching even when there are sometimes students who don’t pay attention to you. But I get satisfaction when I can give a purpose for the future by being a supportive teacher.

New Zealand: “Students Disengage When a Teacher Is Simply Repeating Facts or Ideas”

New Zealand came in seventh place out of 65 in the 2009 PISA assessment. Steve Martin is head of junior science at Howick College. In 2010, he received the prime minister’s award for science teaching.

Science education is an important part of preparing students for their role in the community. Scientific understanding will allow them to engage in issues that concern them now and in the future, such as genetically modified crops. In New Zealand, science is also viewed as having a crucial role to play in the future of the economic health of the country. This can be seen in the creation of the “Prime Minister’s Science Prizes,” a program that identifies the nation’s leading scientists, emerging and future scientists, and science teachers.

The New Zealand Science Curriculum allows for flexibility depending on contextual factors such as school location, interests of students, and teachers’ specialization. The curriculum has the “Nature of Science” as its foundation, which supports students learning the skills essential to a scientist, such as problem-solving and effective communication. The Nature of Science refers to the skills required to work as a scientist, how to communicate science effectively through science-specific vocabulary, and how to participate in debates and issues with a scientific perspective.

School administrators support innovation and risk-taking by teachers, which fosters the “let’s have a go” attitude. In my own classroom, I utilize computer technology to create virtual science lessons that support and encourage students to think for themselves and learn at their own pace. Virtual Lessons are Web-based documents that support learning in and outside the classroom. They include support for students of all abilities by providing digital resources targeted at different levels of thinking. These could include digital flashcards that support vocabulary development, videos that explain the relationships between ideas or facts, and links to websites that allow students to create cartoon animations. The students are then supported by the use of instant messaging, online collaborative documents, and email so they can get support from their peers and myself at any time. I provide students with various levels of success criteria, which are statements that students and teachers use to evaluate performance. In every lesson I provide the students with three different levels of success criteria, each providing an increase in cognitive demand. The following is an example based on the topic of the carbon cycle:
I can identify the different parts of the carbon cycle.
I can explain how all the parts interact with each other to form the carbon cycle.
I can predict the effect that removing one part of the carbon cycle has on the environment.
These provide challenge for all abilities and at the same time make it clear what students need to do to be successful. I value creativity and innovation, and this greatly influences the opportunities I provide for students.

My students learn to love to be challenged and to see that all ideas help develop greater understanding. Students value the opportunity to contribute to others’ understanding, and they disengage when a teacher is simply repeating facts or ideas.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Coloma 1914 Classroom. Courtesy of Coloma Convent School, Croydon UK.[end-div]

Persecution of Scientists: Old and New

The debate over the theory of evolution continues into the 21st century, particularly in societies with a religious bent, including the United States of America. Yet, while the theory and corresponding evidence come under continuous attack, mostly from religious apologists, we generally do not see scientists themselves persecuted for supporting evolution, or for not supporting it.

This cannot be said for climate scientists in Western countries, who, while not physically abused, tortured or imprisoned, continue to be targets of verbal abuse and threats from corporate interests or dogmatic politicians and their followers. But, as we know, persecution of scientists for embodying new, and thus threatening, ideas has been with us since the dawn of the scientific age. In fact, this behavior has probably been with us since our tribal ancestors moved out of Africa.

So, it is useful to remind ourselves how far we have come and of the distance we still have to travel.

[div class=attrib]From Wired:[end-div]

Turing was famously chemically castrated after admitting to homosexual acts in the 1950s. He is one of a long line of scientists who have been persecuted for their beliefs or practices.

After admitting to “homosexual acts” in early 1952, Alan Turing was prosecuted and had to make the choice between a custodial sentence or chemical castration through hormone injections. Injections of oestrogen were intended to deal with “abnormal and uncontrollable” sexual urges, according to literature at the time.
He chose this option so that he could stay out of jail and continue his research, although his security clearance was revoked, meaning he could not continue with his cryptographic work. Turing experienced some disturbing side effects, including impotence, from the hormone treatment. Other known side effects include breast swelling, mood changes and an overall “feminization”. Turing completed his year of treatment without major incident. His medication was discontinued in April 1953 and the University of Manchester created a five-year readership position just for him, so it came as a shock when he committed suicide on 7 June, 1954.

Turing isn’t the only scientist to have been persecuted for his personal or professional beliefs or lifestyle. Here’s a list of other prominent scientific luminaries who have been punished throughout history.

Rhazes (865-925)
Muhammad ibn Zakariyā Rāzī or Rhazes was a medical pioneer from Baghdad who lived between 860 and 932 AD. He was responsible for introducing western teachings, rational thought and the works of Hippocrates and Galen to the Arabic world. One of his books, Continens Liber, was a compendium of everything known about medicine. The book made him famous, but offended a Muslim priest who ordered the doctor to be beaten over the head with his own manuscript, which caused him to go blind, preventing him from future practice.

Michael Servetus (1511-1553)
Servetus was a Spanish physician credited with discovering pulmonary circulation. He wrote a book, which outlined his discovery along with his ideas about reforming Christianity — it was deemed to be heretical. He escaped from Spain and the Catholic Inquisition but came up against the Protestant Inquisition in Switzerland, which held him in equal disregard. Under orders from John Calvin, Servetus was arrested, tortured and burned at the stake on the shores of Lake Geneva – copies of his book were burned along with him for good measure.

Galileo Galilei (1564-1642)
The Italian astronomer and physicist Galileo Galilei was tried and convicted in 1633 for publishing his evidence that supported the Copernican theory that the Earth revolves around the Sun. His research was instantly criticized by the Catholic Church for going against the established scripture that places Earth and not the Sun at the center of the universe. Galileo was found “vehemently suspect of heresy” for his heliocentric views and was required to “abjure, curse and detest” his opinions. He was sentenced to house arrest, where he remained for the rest of his life, and his offending texts were banned.

Henry Oldenburg (1619-1677)
Oldenburg founded the Royal Society in London in 1662. He sought high quality scientific papers to publish. In order to do this he had to correspond with many foreigners across Europe, including the Netherlands and Italy. The sheer volume of his correspondence caught the attention of authorities, who arrested him as a spy. He was held in the Tower of London for several months.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Engraving of Galileo Galilei offering his telescope to three women (possibly Urania and attendants) seated on a throne; he is pointing toward the sky where some of his astronomical discoveries are depicted, 1655. Courtesy of Library of Congress.[end-div]

Higgs?

A week ago, on July 4, 2012, researchers at CERN told the world that they had found evidence of a new fundamental particle — the so-called Higgs boson, or something closely similar. If further particle collisions at CERN’s Large Hadron Collider uphold this finding over the coming years, it will rank as a discovery as significant as that of the proton or the electromagnetic force. While practical application of this discovery, in our lifetimes at least, is likely to be scant, it undeniably furthers our quest to understand the underlying mechanism of our existence.

So where might this discovery lead next?

[div class=attrib]From the New Scientist:[end-div]

“As a layman, I would say, I think we have it,” said Rolf-Dieter Heuer, director general of CERN at Wednesday’s seminar announcing the results of the search for the Higgs boson. But when pressed by journalists afterwards on what exactly “it” was, things got more complicated. “We have discovered a boson – now we have to find out what boson it is,” he said cryptically. Eh? What kind of particle could it be if it isn’t the Higgs boson? And why would it show up right where scientists were looking for the Higgs? We asked scientists at CERN to explain.

If we don’t know the new particle is a Higgs, what do we know about it?
We know it is some kind of boson, says Vivek Sharma of CMS, one of the two Large Hadron Collider experiments that presented results on Wednesday. There are only two types of elementary particle in the standard model: fermions, which include electrons, quarks and neutrinos, and bosons, which include photons and the W and Z bosons. The Higgs is a boson – and we know the new particle is too because one of the things it decays into is a pair of high-energy photons, or gamma rays. According to the rules of mathematical symmetry, only a boson could decay into exactly two photons.

Anything else?
Another thing we can say about the new particle is that nothing yet suggests it isn’t a Higgs. The standard model, our leading explanation for the known particles and the forces that act on them, predicts the rate at which a Higgs of a given mass should decay into various particles. The rates of decay reported for the new particle yesterday are not exactly what would be predicted for its mass of about 125 gigaelectronvolts (GeV) – leaving the door open to more exotic stuff. “If there is such a thing as a 125 GeV Higgs, we know what its rate of decay should be,” says Sharma. But the decay rates are close enough for the differences to be statistical anomalies that will disappear once more data is taken. “There are no serious inconsistencies,” says Joe Incandela, head of CMS, who reported the results on Wednesday.

In that case, are the CERN scientists just being too cautious? What would be enough evidence to call it a Higgs boson?
As there could be many different kinds of Higgs bosons, there’s no straight answer. An easier question to answer is: what would make the new particle neatly fulfil the Higgs boson’s duty in the standard model? Number one is to give other particles mass via the Higgs field – an omnipresent entity that “slows” some particles down more than others, resulting in mass. Any particle that makes up this field must be “scalar”. The opposite of a vector, this means that, unlike a magnetic field, or gravity, it doesn’t have any directionality. “Only a scalar boson fixes the problem,” says Oliver Buchmueller, also of CMS.

When will we know whether it’s a scalar boson?
By the end of the year, reckons Buchmueller, when at least one outstanding property of the new particle – its spin – should be determined. Scalars’ lack of directionality means they have spin 0. As the particle is a boson, we already know its spin is a whole number and as it decays into two photons, mathematical symmetry again dictates that the spin can’t be 1. Buchmueller says LHC researchers will be able to determine whether it has a spin of 0 or 2 by examining whether the Higgs’ decay particles shoot into the detector in all directions or with a preferred direction – the former would suggest spin 0. “Most people think it is a scalar, but it still needs to be proven,” says Buchmueller. Sharma is pretty sure it’s a scalar boson – that’s because it is more difficult to make a boson with spin 2. He adds that, although it is expected, confirmation that this is a scalar boson is still very exciting: “The beautiful thing is, if this turns out to be a scalar particle, we are seeing a new kind of particle. We have never seen a fundamental particle that is a scalar.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A typical candidate event including two high-energy photons whose energy (depicted by dashed yellow lines and red towers) is measured in the CMS electromagnetic calorimeter. The yellow lines are the measured tracks of other particles produced in the collision.[end-div]

Fifty Shades of Grey Matter: Now For Some Really Influential Books

While pop culture columnists, behavioral psychologists and literary gadflies debate the pros and cons of “Fifty Shades of Grey”, we look at some more notable works, though perhaps no less controversial in their time. Notable in the sense that ideas from any of these books — whether you are in agreement with them or not — have had a profound influence on our cultural, political, economic and scientific evolution.

Yet while all of them combined have come nowhere close to the sales of the sado-masochistic pulp fiction (more than a million copies in just over 10 weeks, and some 20 million so far), they do offer an enlightening counter-balance. So, if you need some fleeting titillation, by all means borrow “Fifty Shades…” from a friend or neighbor; why buy one when everybody else has one already? But then go to your local bookstore, or click to Amazon, and purchase a handful from this list spanning 30 centuries; you will be reminded of our ongoing, if sometimes limited, intellectual progress as a species.

1    I Ching, Chinese classic texts
2    Hebrew Bible, Jewish scripture
3    Iliad and The Odyssey, Homer
4    Upanishads, Hindu scripture
5    The Way and Its Power, Lao-tzu
6    The Avesta, Zoroastrian scripture
7    Analects, Confucius
8    History of the Peloponnesian War, Thucydides
9    Works, Hippocrates
10    Works, Aristotle
11    History, Herodotus
12    The Republic, Plato
13    Elements, Euclid
14    Dhammapada, Theravada Buddhist scripture
15    Aeneid, Virgil
16    On the Nature of Reality, Lucretius
17    Allegorical Expositions of the Holy Laws, Philo of Alexandria
18    New Testament, Christian scripture
19    Parallel Lives, Plutarch
20    Annals, from the Death of the Divine Augustus, Cornelius Tacitus
21    Gospel of Truth, Valentinus
22    Meditations, Marcus Aurelius
23    Outlines of Pyrrhonism, Sextus Empiricus
24    Enneads, Plotinus
25    Confessions, Augustine of Hippo
26    Koran, Muslim scripture
27    Guide for the Perplexed, Moses Maimonides
28    Kabbalah, Text of Judaic mysticism
29    Summa Theologica, Thomas Aquinas
30    The Divine Comedy, Dante Alighieri
31    In Praise of Folly, Desiderius Erasmus
32    The Prince, Niccolò Machiavelli
33    On the Babylonian Captivity of the Church, Martin Luther
34    Gargantua and Pantagruel, François Rabelais
35    Institutes of the Christian Religion, John Calvin
36    On the Revolution of the Celestial Orbs, Nicolaus Copernicus
37    Essays, Michel Eyquem de Montaigne
38    Don Quixote, Parts I and II, Miguel de Cervantes
39    The Harmony of the World, Johannes Kepler
40    Novum Organum, Francis Bacon
41    The First Folio [Works], William Shakespeare
42    Dialogue Concerning the Two Chief World Systems, Galileo Galilei
43    Discourse on Method, René Descartes
44    Leviathan, Thomas Hobbes
45    Works, Gottfried Wilhelm Leibniz
46    Pensées, Blaise Pascal
47    Ethics, Baruch de Spinoza
48    Pilgrim’s Progress, John Bunyan
49    Mathematical Principles of Natural Philosophy, Isaac Newton
50    Essay Concerning Human Understanding, John Locke
51    The Principles of Human Knowledge, George Berkeley
52    The New Science, Giambattista Vico
53    A Treatise of Human Nature, David Hume
54    The Encyclopedia, Denis Diderot, ed.
55    A Dictionary of the English Language, Samuel Johnson
56    Candide, François-Marie de Voltaire
57    Common Sense, Thomas Paine
58    An Enquiry Into the Nature and Causes of the Wealth of Nations, Adam Smith
59    The History of the Decline and Fall of the Roman Empire, Edward Gibbon
60    Critique of Pure Reason, Immanuel Kant
61    Confessions, Jean-Jacques Rousseau
62    Reflections on the Revolution in France, Edmund Burke
63    Vindication of the Rights of Women, Mary Wollstonecraft
64    An Enquiry Concerning Political Justice, William Godwin
65    An Essay on the Principle of Population, Thomas Robert Malthus
66    Phenomenology of Spirit, George Wilhelm Friedrich Hegel
67    The World as Will and Idea, Arthur Schopenhauer
68    Course in the Positivist Philosophy, Auguste Comte
69    On War, Carl Marie von Clausewitz
70    Either/Or, Søren Kierkegaard
71    Manifesto of the Communist Party, Karl Marx and Friedrich Engels
72    “Civil Disobedience,” Henry David Thoreau
73    The Origin of Species, Charles Darwin
74    On Liberty, John Stuart Mill
75    First Principles, Herbert Spencer
76    Experiments on Plant Hybridization, Gregor Mendel
77    War and Peace, Leo Tolstoy
78    Treatise on Electricity and Magnetism, James Clerk Maxwell
79    Thus Spake Zarathustra, Friedrich Nietzsche
80    The Interpretation of Dreams, Sigmund Freud
81    Pragmatism, William James
82    Relativity, Albert Einstein
83    The Mind and Society, Vilfredo Pareto
84    Psychological Types, Carl Gustav Jung
85    I and Thou, Martin Buber
86    The Trial, Franz Kafka
87    The Logic of Scientific Discovery, Karl Popper
88    The General Theory of Employment, Interest, and Money, John Maynard Keynes
89    Being and Nothingness, Jean-Paul Sartre
90    The Road to Serfdom, Friedrich von Hayek
91    The Second Sex, Simone de Beauvoir
92    Cybernetics, Norbert Wiener
93    Nineteen Eighty-Four, George Orwell
94    Beelzebub’s Tales to His Grandson, George Ivanovitch Gurdjieff
95    Philosophical Investigations, Ludwig Wittgenstein
96    Syntactic Structures, Noam Chomsky
97    The Structure of Scientific Revolutions, T. S. Kuhn
98    The Feminine Mystique, Betty Friedan
99    Quotations from Chairman Mao Tse-tung [The Little Red Book], Mao Zedong
100    Beyond Freedom and Dignity, B. F. Skinner

The well-rounded list featuring critically acclaimed novels, poetic masterpieces, scientific first principles, and political and religious works was compiled by Martin Seymour-Smith in his 1998 book, The 100 Most Influential Books Ever Written: The History of Thought from Ancient Times to Today. Seymour-Smith was a British poet, critic, and biographer.

[div class=attrib]Image: “On the Revolutions of Heavenly Spheres” by Nicolaus Copernicus, 1543.[end-div]

Extending Moore’s Law Through Evolution

[div class=attrib]From Smithsonian:[end-div]

In 1965, Intel co-founder Gordon Moore made a prediction about computing that has held true to this day. Moore’s law, as it came to be known, forecasted that the number of transistors we’d be able to cram onto a circuit—and thereby, the effective processing speed of our computers—would double roughly every two years. Remarkably enough, this rule has been accurate for nearly 50 years, but most experts now predict that this growth will slow by the end of the decade.
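
To see what the doubling rule implies numerically, here is a minimal Python sketch. The 1971 starting point of roughly 2,300 transistors (the Intel 4004) and the strict two-year doubling period are illustrative assumptions for the sketch, not figures taken from the article.

    # Back-of-envelope Moore's law: transistor counts double roughly every two years.
    # The 1971 starting point (~2,300 transistors, Intel 4004) is an illustrative
    # assumption for this sketch, not a figure from the article.
    START_YEAR, START_COUNT, DOUBLING_YEARS = 1971, 2300, 2.0

    for year in range(START_YEAR, 2021, 10):   # print one line per decade
        count = START_COUNT * 2 ** ((year - START_YEAR) / DOUBLING_YEARS)
        print(f"{year}: ~{count:,.0f} transistors")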

Someday, though, a radical new approach to creating silicon semiconductors might enable this rate to continue—and could even accelerate it. As detailed in a study published in this month’s Proceedings of the National Academy of Sciences, a team of researchers from the University of California at Santa Barbara and elsewhere have harnessed the process of evolution to produce enzymes that create novel semiconductor structures.

“It’s like natural selection, but here, it’s artificial selection,” Daniel Morse, professor emeritus at UCSB and a co-author of the study, said in an interview. After taking an enzyme found in marine sponges and mutating it into many different forms, “we’ve selected the one in a million mutant DNAs capable of making a semiconductor.”

In an earlier study, Morse and other members of the research team had discovered silicatein—a natural enzyme used by marine sponges to construct their silica skeletons. The mineral, as it happens, also serves as the building block of semiconductor computer chips. “We then asked the question—could we genetically engineer the structure of the enzyme to make it possible to produce other minerals and semiconductors not normally produced by living organisms?” Morse said.

To make this possible, the researchers isolated and made many copies of the part of the sponge’s DNA that codes for silicatein, then intentionally introduced millions of different mutations in the DNA. By chance, some of these would likely lead to mutant forms of silicatein that would produce different semiconductors, rather than silica—a process that mirrors natural selection, albeit on a much shorter time scale, and directed by human choice rather than survival of the fittest.
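
The mutate-and-select cycle described above can be caricatured in a few lines of Python. This is only a conceptual sketch: the letter alphabet, target sequence, mutation rate and population size are invented for illustration and bear no relation to the actual silicatein DNA or to the screening method the researchers used.

    import random

    # Toy directed-evolution loop: mutate a "gene", keep the fittest variants, repeat.
    # The alphabet, target sequence, mutation rate and population size are all invented
    # stand-ins for the laboratory screen described in the article.
    ALPHABET = "ACGT"
    TARGET = "GATTACAGATTACA"   # hypothetical "ideal" sequence

    def fitness(seq):
        """Score a sequence by how many positions match the hypothetical target."""
        return sum(a == b for a, b in zip(seq, TARGET))

    def mutate(seq, rate=0.1):
        """Copy a sequence, flipping each base to a random one with some probability."""
        return "".join(random.choice(ALPHABET) if random.random() < rate else base
                       for base in seq)

    population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                      # the artificial selection step
        population = [mutate(p) for p in parents for _ in range(5)]

    print("best variant:", max(population, key=fitness))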

[div class=attrib]Read the entire article after the jump.[end-div]

Empathy and Touch

[div class=attrib]From Scientific American:[end-div]

When a friend hits her thumb with a hammer, you don’t have to put much effort into imagining how this feels. You know it immediately. You will probably tense up, your “Ouch!” may arise even quicker than your friend’s, and chances are that you will feel a little pain yourself. Of course, you will then thoughtfully offer consolation and bandages, but your initial reaction seems just about automatic. Why?

Neuroscience now offers you an answer: A recent line of research has demonstrated that seeing other people being touched activates primary sensory areas of your brain, much like experiencing the same touch yourself would do. What these findings suggest is beautiful in its simplicity—that you literally “feel with” others.

There is no denying that the exceptional interpersonal understanding we humans show is by and large a product of our emotional responsiveness. We are automatically affected by other people’s feelings, even without explicit communication. Our involvement is sometimes so powerful that we have to flee it, turning our heads away when we see someone get hurt in a movie. Researchers hold that this capacity emerged long before humans evolved. However, only quite recently has it been given a name: A mere hundred years ago, the word “Empathy”—a combination of the Greek “in” (em-) and “feeling” (pathos)—was coined by the British psychologist E. B. Titchener during his endeavor to translate the German Einfühlungsvermögen (“the ability to feel into”).

Despite the lack of a universally agreed-upon definition of empathy, the mechanisms of sharing and understanding another’s experience have always been of scientific and public interest—and particularly so since the introduction of “mirror neurons.” This important discovery was made two decades ago by  Giacomo Rizzolatti and his co-workers at the University of Parma, who were studying motor neuron properties in macaque monkeys. To compensate for the tedious electrophysiological recordings required, the monkey was occasionally given food rewards. During these incidental actions something unexpected happened: When the monkey, remaining perfectly still, saw the food being grasped by an experimenter in a specific way, some of its motor neurons discharged. Remarkably, these neurons normally fired when the monkey itself grasped the food in this way. It was as if the monkey’s brain was directly mirroring the actions it observed. This “neural resonance,” which was later also demonstrated in humans, suggested the existence of a special type of “mirror” neurons that help us understand other people’s actions.

Do you find yourself wondering, now, whether a similar mirror mechanism could have caused your pungent empathic reaction to your friend maltreating herself with a hammer? A group of scientists led by Christian Keysers believed so. The researchers had their participants watch short movie clips of people being touched, while using functional magnetic resonance imaging (fMRI) to record their brain activity. The brain scans revealed that the somatosensory cortex, a complex of brain regions processing touch information, was highly active during the movie presentations—although participants were not being touched at all. As was later confirmed by other studies, this activity strongly resembled the somatosensory response participants showed when they were actually touched in the same way. A recent study by Esther Kuehn and colleagues even found that, during the observation of a human hand being touched, parts of the somatosensory cortex were particularly active when (judging by perspective) the hand clearly belonged to another person.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Media Consolidation

The age of the rambunctious and megalomaniacal newspaper baron has passed, excepting, of course, Rupert Murdoch. While the colorful personalities of the late 19th and early 20th centuries have mostly disappeared, the 21st century has replaced these aging white men with faceless international corporations, all of which are, of course, run by aging white men.

The infographic below puts the current media landscape in clear perspective; one trend is clear: more and more people are consuming news and entertainment from fewer and fewer sources.

[div class=attrib]Infographic courtesy of Frugal Dad.[end-div]

A View on Innovation

Joi Ito, Director of the MIT Media Lab, muses on the subject of innovation in this article excerpted from the Edge.

[div class=attrib]From the Edge:[end-div]

I grew up in Japan part of my life, and we were surrounded by Buddhists. If you read some of the interesting books from the Dalai Lama talking about happiness, there’s definitely a difference in the way that Buddhists think about happiness, the world and how it works, versus the West. I think that a lot of science and technology has this somewhat Western view, which is how do you control nature, how do you triumph over nature? Even if you look at the gardens in Europe, a lot of it is about look at what we made this hedge do.

What’s really interesting and important to think about is, as we start to realize that the world is complex, and as the science that we use starts to become complex and, Timothy Leary used this quote, “Newton’s laws work well when things are normal sized, when they’re moving at a normal speed.” You can predict the motion of objects using Newton’s laws in most circumstances, but when things start to get really fast, really big, and really complex, you find out that Newton’s laws are actually local ordinances, and there’s a bunch of other stuff that comes into play.

One of the things that we haven’t done very well is we’ve been looking at science and technology as trying to make things more efficient, more effective on a local scale, without looking at the system around it. We were looking at objects rather than the system, or looking at the nodes rather than the network. When we talk about big data, when we talk about networks, we understand this.

I’m an Internet guy, and I divide the world into my life before the Internet and after the Internet. I helped build one of the first commercial Internet service providers in Japan, and when we were building that, there was a tremendous amount of resistance. There were lawyers who wrote these big articles about how the Internet was illegal because there was no one in charge. There was a competing standard back then called X.25, which was being built by the telephone companies and the government. It was centrally-planned, huge specifications; it was very much under control.

The Internet was completely distributed. David Weinberger would use the term ‘small pieces loosely joined.’ But it was really a decentralized innovation that was somewhat of a kind of working anarchy. As we all know, the Internet won. What the Internet winning was, was the triumph of distributed innovation over centralized innovation. It was a triumph of chaos over control. There were a bunch of different reasons. Moore’s law, lowering the cost of innovation—it was this kind of complexity that was going on, the fact that you could change things later, that made this kind of distributed innovation work. What happened when the Internet happened is that the Internet combined with Moore’s law, kept on driving the cost of innovation lower and lower and lower and lower. When you think about the Googles or the Yahoos or the Facebooks of the world, those products, those services were created not in big, huge R&D labs with hundreds of millions of dollars of funding; they were created by kids in dorm rooms.

In the old days, you’d have to have an idea and then you’d write a proposal for a grant or a VC, and then you’d raise the money, you’d plan the thing, you would hire the people and build it. Today, what you do is you build the thing, you raise the money and then you figure out the plan and then you figure out the business model. It’s completely the opposite, you don’t have to ask permission to innovate anymore. What’s really important is, imagine if somebody came up to you and said, “I’m going to build the most popular encyclopedia in the world, and the trick is anyone can edit it.” You wouldn’t have given the guy a desk, you wouldn’t have given the guy five bucks. But the fact that he can just try that, and in retrospect it works, it’s fine, what we’re realizing is that a lot of the greatest innovations that we see today are things that wouldn’t have gotten approval, right?

The Internet, the DNA and the philosophy of the Internet is all about freedom to connect, freedom to hack, and freedom to innovate. It’s really lowering the cost of distribution and innovation. What’s really important about that is that when you started thinking about how we used to innovate was we used to raise money and we would make plans. Well, it’s an interesting coincidence because the world is now so complex, so fast, so unpredictable, that you can’t. Your plans don’t really work that well. Every single major thing that’s happened, both good and bad, was probably unpredicted, and most of our plans failed.

Today, what you want is you want to have resilience and agility, and you want to be able to participate in, and interact with the disruptive things. Everybody loves the word ‘disruptive innovation.’ Well, how does, and where does disruptive innovation happen? It doesn’t happen in the big planned R&D labs; it happens on the edges of the network. Many important ideas, especially in the consumer Internet space, but more and more now in other things like hardware and biotech, you’re finding it happening around the edges.

What does it mean, innovation on the edges? If you sit there and you write a grant proposal, basically what you’re doing is you’re saying, okay, I’m going to build this, so give me money. By definition it’s incremental because first of all, you’ve got to be able to explain what it is you’re going to make, and you’ve got to say it in a way that’s dumbed-down enough that the person who’s giving you money can understand it. By definition, incremental research isn’t going to be very disruptive. Scholarship is somewhat incremental. The fact that if you have a peer review journal, it means five other people have to believe that what you’re doing is an interesting thing. Some of the most interesting innovations that happen, happen when the person doing it doesn’t even know what’s going on. True discovery, I think, happens in a very undirected way, when you figure it out as you go along.

Look at YouTube. First version of YouTube, if you saw it in 2005, it’s a dating site with video. It obviously didn’t work. The default was I am male, looking for anyone between 18 and 35, upload video. That didn’t work. They pivot it, it became Flickr for video. That didn’t work. Then eventually they latched onto Myspace and it took off like crazy. But they figured it out as they went along. This sort of discovery as you go along is a really, really important mode of innovation. The problem is, whether you’re talking about departments in academia or you’re talking about traditional sort of R&D, anything under control is not going to exhibit that behavior.

If you apply that to what I’m trying to do at the Media Lab, the key thing about the Media Lab is that we have undirected funds. So if a kid wants to try something, he doesn’t have to write me a proposal. He doesn’t have to explain to me what he wants to do. He can just go, or she can just go, and do whatever they want, and that’s really important, this undirected research.

The other part that’s really important, as you start to look for opportunities is what I would call pattern recognition or peripheral vision. There’s a really interesting study, if you put a dot on a screen and you put images like colors around it. If you tell the person to look at the dot, they’ll see the stuff on the first reading, but the minute you give somebody a financial incentive to watch it, I’ll give you ten bucks to watch the dot, those peripheral images disappear. If you’ve ever gone mushroom hunting, it’s a very similar phenomenon. If you are trying to find mushrooms in a forest, the whole thing is you have to stop looking, and then suddenly your pattern recognition kicks in and the mushrooms pop out. Hunters do this same thing, archers looking for animals.

When you focus on something, what you’re actually doing is only seeing really one percent of your field of vision. Your brain is filling everything else in with what you think is there, but it’s actually usually wrong, right? So what’s really important is noticing those disruptive things that are happening in your periphery. If you are a newspaper and you’re trying to figure out what is the world like without printing presses, well, if you’re staring at your printing press, you’re not looking at the stuff around you. So what’s really important is how do you start to look around you?

[div class=attrib]Read the entire article following the jump.[end-div]

Happiness for Pessimists

Pessimists can take heart from Oliver Burkeman’s latest book “The Antidote”. His research shows that there are valid alternatives to the commonly held belief that positive thinking and goal visualization lead inevitably to happiness. He shows that there is “a long tradition in philosophical and spiritual thought which embraces negativity and bathes in insecurity and failure.” Glass half-empty types, you may have been right all along.

[tube]bOJL7WkaadY[/tube]

King Canute or Mother Nature in North Carolina, Virginia, Texas?

Legislators in North Carolina recently went one better than King C’Nut (Canute). Canute, king of Denmark, England, Norway and parts of Sweden at various times between 1018 and 1035, famously and unsuccessfully tried to hold back the incoming tide. The now mythic story tells of Canute’s arrogance. Not to be outdone, North Carolina’s state legislature passed a law that bans state agencies from reporting that sea-level rise is accelerating.

The bill from North Carolina states:

“… rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly to estimate future rates of rise but shall not include scenarios of accelerated rates of sea-level rise.”

This comes hot on the heels of the recent revisionist push in Virginia, where references to phrases such as “sea level rise” and “climate change” are forbidden in official state communications. Last year, of course, Texas led the way for other states following the climate science denial program when the Texas Commission on Environmental Quality, which had commissioned a scientific study of Galveston Bay, removed all references to “rising sea levels”.

For more detailed reporting on this unsurprising and laughable state of affairs check out this article at Skeptical Science.

[div class=attrib]From Scientific American:[end-div]

Less than two weeks after the state’s senate passed a climate science-squelching bill, research shows that sea level along the coast between N.C. and Massachusetts is rising faster than anywhere on Earth.

Could nature be mocking North Carolina’s law-makers? Less than two weeks after the state’s senate passed a bill banning state agencies from reporting that sea-level rise is accelerating, research has shown that the coast between North Carolina and Massachusetts is experiencing the fastest sea-level rise in the world.

Asbury Sallenger, an oceanographer at the US Geological Survey in St Petersburg, Florida, and his colleagues analysed tide-gauge records from around North America. On 24 June, they reported in Nature Climate Change that since 1980, sea-level rise between Cape Hatteras, North Carolina, and Boston, Massachusetts, has accelerated to between 2 and 3.7 millimetres per year. That is three to four times the global average, and it means the coast could see 20–29 centimetres of sea-level rise on top of the metre predicted for the world as a whole by 2100 ( A. H. Sallenger Jr et al. Nature Clim. Change http://doi.org/hz4; 2012).

“Many people mistakenly think that the rate of sea-level rise is the same everywhere as glaciers and ice caps melt,” says Marcia McNutt, director of the US Geological Survey. But variations in currents and land movements can cause large regional differences. The hotspot is consistent with the slowing measured in Atlantic Ocean circulation, which may be tied to changes in water temperature, salinity and density.

North Carolina’s senators, however, have tried to stop state-funded researchers from releasing similar reports. The law approved by the senate on 12 June banned scientists in state agencies from using exponential extrapolation to predict sea-level rise, requiring instead that they stick to linear projections based on historical data.

Following international opprobrium, the state’s House of Representatives rejected the bill on 19 June. However, a compromise between the house and the senate forbids state agencies from basing any laws or plans on exponential extrapolations for the next three to four years, while the state conducts a new sea-level study.
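
To see why the choice of extrapolation method matters so much, here is a minimal Python sketch contrasting a purely linear projection with an accelerating one. The rates are placeholders loosely in the spirit of the figures quoted in this post, not the numbers used by the Sallenger team or by the state, so treat the output as illustrative only.

    # Illustrative comparison of a linear sea-level projection with an accelerating one.
    # The rates below are placeholders, not values from the Nature Climate Change paper
    # or from any state report.
    BASE_RATE_MM_PER_YR = 2.0        # assumed historical (linear) rate
    ACCELERATION_MM_PER_YR2 = 0.02   # assumed acceleration term

    def linear_rise(years):
        return BASE_RATE_MM_PER_YR * years

    def accelerating_rise(years):
        return BASE_RATE_MM_PER_YR * years + 0.5 * ACCELERATION_MM_PER_YR2 * years ** 2

    for horizon in (25, 50, 88):     # 88 years is roughly 2012 to 2100
        print(f"{horizon:3d} yr: linear {linear_rise(horizon) / 10:5.1f} cm, "
              f"accelerating {accelerating_rise(horizon) / 10:5.1f} cm")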

According to local media, the bill was the handiwork of industry lobbyists and coastal municipalities who feared that investors and property developers would be scared off by predictions of high sea-level rises. The lobbyists invoked a paper published in the Journal of Coastal Research last year by James Houston, retired director of the US Army Corps of Engineers’ research centre in Vicksburg, Mississippi, and Robert Dean, emeritus professor of coastal engineering at the University of Florida in Gainesville. They reported that global sea-level rise has slowed since 1930 (J. R. Houston and R. G. Dean, J. Coastal Res. 27, 409–417; 2011) — a contention that climate sceptics around the world have seized on.

Speaking to Nature, Dean accused the oceanographic community of ideological bias. “In the United States, there is an overemphasis on unrealistically high sea-level rise,” he says. “The reason is budgets. I am retired, so I have the freedom to report what I find without any bias or need to chase funding.” But Sallenger says that Houston and Dean’s choice of data sets masks acceleration in the sea-level-rise hotspot.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Policymic.[end-div]

The Inevitability of Life: A Tale of Protons and Mitochondria

A fascinating article by Nick Lane, a leading researcher into the origins of life. Lane is a Research Fellow at University College London.

He suggests that it would be surprising if simple, bacteria-like life were not common throughout the universe. However, the acquisition of one cell by another, an event that led to all higher organisms on planet Earth, is an altogether much rarer occurrence. So are we alone in the universe?

[div class=attrib]From the New Scientist:[end-div]

UNDER the intense stare of the Kepler space telescope, more and more planets similar to our own are revealing themselves to us. We haven’t found one exactly like Earth yet, but so many are being discovered that it appears the galaxy must be teeming with habitable planets.

These discoveries are bringing an old paradox back into focus. As physicist Enrico Fermi asked in 1950, if there are many suitable homes for life out there and alien life forms are common, where are they all? More than half a century of searching for extraterrestrial intelligence has so far come up empty-handed.

Of course, the universe is a very big place. Even Frank Drake’s famously optimistic “equation” for life’s probability suggests that we will be lucky to stumble across intelligent aliens: they may be out there, but we’ll never know it. That answer satisfies no one, however.

There are deeper explanations. Perhaps alien civilisations appear and disappear in a galactic blink of an eye, destroying themselves long before they become capable of colonising new planets. Or maybe life very rarely gets started even when conditions are perfect.

If we cannot answer these kinds of questions by looking out, might it be possible to get some clues by looking in? Life arose only once on Earth, and if a sample of one were all we had to go on, no grand conclusions could be drawn. But there is more than that. Looking at a vital ingredient for life – energy – suggests that simple life is common throughout the universe, but it does not inevitably evolve into more complex forms such as animals. I might be wrong, but if I’m right, the immense delay between life first appearing on Earth and the emergence of complex life points to another, very different explanation for why we have yet to discover aliens.

Living things consume an extraordinary amount of energy, just to go on living. The food we eat gets turned into the fuel that powers all living cells, called ATP. This fuel is continually recycled: over the course of a day, humans each churn through 70 to 100 kilograms of the stuff. This huge quantity of fuel is made by enzymes, biological catalysts fine-tuned over aeons to extract every last joule of usable energy from reactions.
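
A rough calculation shows just how relentless this recycling is. The sketch below assumes a standing ATP pool of about a quarter of a kilogram in an adult body; that pool size is an illustrative assumption, not a figure from the article.

    # Rough arithmetic on ATP turnover: how often is each molecule recycled per day?
    DAILY_TURNOVER_KG = (70, 100)    # range quoted in the article
    ASSUMED_POOL_KG = 0.25           # assumed standing ATP pool in an adult body (illustrative)

    for turnover in DAILY_TURNOVER_KG:
        cycles = turnover / ASSUMED_POOL_KG
        print(f"{turnover} kg/day implies each ATP molecule is recycled ~{cycles:.0f} times a day")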

The enzymes that powered the first life cannot have been as efficient, and the first cells must have needed a lot more energy to grow and divide – probably thousands or millions of times as much energy as modern cells. The same must be true throughout the universe.

This phenomenal energy requirement is often left out of considerations of life’s origin. What could the primordial energy source have been here on Earth? Old ideas of lightning or ultraviolet radiation just don’t pass muster. Aside from the fact that no living cells obtain their energy this way, there is nothing to focus the energy in one place. The first life could not go looking for energy, so it must have arisen where energy was plentiful.

Today, most life ultimately gets its energy from the sun, but photosynthesis is complex and probably didn’t power the first life. So what did? Reconstructing the history of life by comparing the genomes of simple cells is fraught with problems. Nevertheless, such studies all point in the same direction. The earliest cells seem to have gained their energy and carbon from the gases hydrogen and carbon dioxide. The reaction of H2 with CO2 produces organic molecules directly, and releases energy. That is important, because it is not enough to form simple molecules: it takes buckets of energy to join them up into the long chains that are the building blocks of life.

A second clue to how the first life got its energy comes from the energy-harvesting mechanism found in all known life forms. This mechanism was so unexpected that there were two decades of heated altercations after it was proposed by British biochemist Peter Mitchell in 1961.

Universal force field

Mitchell suggested that cells are powered not by chemical reactions, but by a kind of electricity, specifically by a difference in the concentration of protons (the charged nuclei of hydrogen atoms) across a membrane. Because protons have a positive charge, the concentration difference produces an electrical potential difference between the two sides of the membrane of about 150 millivolts. It might not sound like much, but because it operates over only 5 millionths of a millimetre, the field strength over that tiny distance is enormous, around 30 million volts per metre. That’s equivalent to a bolt of lightning.
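
The lightning-bolt comparison is easy to verify with back-of-envelope arithmetic: about 150 millivolts across a membrane roughly 5 nanometres thick works out to about 30 million volts per metre. A couple of lines of code make the calculation explicit.

    # Back-of-envelope check of the field strength quoted above.
    potential_volts = 0.150       # ~150 millivolts across the membrane
    thickness_metres = 5e-9       # 5 millionths of a millimetre, i.e. 5 nanometres

    field_strength = potential_volts / thickness_metres
    print(f"Field strength: {field_strength:.1e} V/m")   # ~3.0e+07 V/m, about 30 million V/m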

Mitchell called this electrical driving force the proton-motive force. It sounds like a term from Star Wars, and that’s not inappropriate. Essentially, all cells are powered by a force field as universal to life on Earth as the genetic code. This tremendous electrical potential can be tapped directly, to drive the motion of flagella, for instance, or harnessed to make the energy-rich fuel ATP.

However, the way in which this force field is generated and tapped is extremely complex. The enzyme that makes ATP is a rotating motor powered by the inward flow of protons. Another protein that helps to generate the membrane potential, NADH dehydrogenase, is like a steam engine, with a moving piston for pumping out protons. These amazing nanoscopic machines must be the product of prolonged natural selection. They could not have powered life from the beginning, which leaves us with a paradox.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Transmission electron microscope image of a thin section cut through an area of mammalian lung tissue. The high magnification image shows a mitochondrion. Courtesy of Wikipedia.[end-div]