The Killer Joke and the Killer Idea

Some jokes can make you laugh until you cry. Some jokes can kill. And research shows that thoughts alone can be just as devastating.

From BBC:

Beware the scaremongers. Like a witch doctor’s spell, their words might be spreading modern plagues.

We have long known that expectations of a malady can be as dangerous as a virus. In the same way that voodoo shamans could harm their victims through the power of suggestion, priming someone to think they are ill can often produce the actual symptoms of a disease. Vomiting, dizziness, headaches, and even death could be triggered through belief alone. It’s called the “nocebo effect”.

But it is now becoming clear just how easily those dangerous beliefs can spread through gossip and hearsay – with potent effect. It may be the reason why certain houses seem cursed with illness, and why people living near wind turbines report puzzling outbreaks of dizziness, insomnia and vomiting. If you have ever felt “fluey” after a vaccination, believed your cell phone was giving you a headache, or suffered an inexplicable food allergy, you may have also fallen victim to a nocebo jinx. “The nocebo effect shows the brain’s power,” says Dimos Mitsikostas, from Athens Naval Hospital in Greece. “And we cannot fully explain it.”

A killer joke

Doctors have long known that beliefs can be deadly – as demonstrated by a rather nasty student prank that went horribly wrong. The 19th-century Viennese medic Erich Menninger von Lerchenthal describes how students at his medical school picked on a much-disliked assistant. Planning to teach him a lesson, they sprang upon him before announcing that he was about to be decapitated. Blindfolding him, they bowed his head onto the chopping block, before dropping a wet cloth on his neck. Convinced it was the kiss of a steel blade, the poor man “died on the spot”.

While anecdotes like this abound, modern researchers had mostly focused on the mind’s ability to heal, not harm – the “placebo effect”, from the Latin for “I will please”. Every clinical trial now randomly assigns patients to either a real drug, or a placebo in the form of an inert pill. The patient doesn’t know which they are taking, and even those taking the inert drug tend to show some improvement – thanks to their faith in the treatment.

Yet alongside the benefits, people taking placebos often report puzzling side effects – nausea, headaches, or pain – that are unlikely to come from an inert tablet. The problem is that people in a clinical trial are given exactly the same health warnings whether they are taking the real drug or the placebo – and somehow, the expectation of the symptoms can produce physical manifestations in some placebo takers. “It’s a consistent phenomenon, but medicine has never really dealt with it,” says Ted Kaptchuk at Harvard Medical School.

Over the last 10 years, doctors have shown that this nocebo effect – Latin for “I will harm” – is very common. Reviewing the literature, Mitsikostas has so far documented strong nocebo effects in many treatments for headache, multiple sclerosis, and depression. In trials for Parkinson’s disease, as many as 65% of participants report adverse events as a result of their placebo. “And around one out of 10 treated will drop out of a trial because of nocebo, which is pretty high,” he says.

Although many of the side-effects are somewhat subjective – like nausea or pain – nocebo responses do occasionally show up as rashes and skin complaints, and they are sometimes detectable on physiological tests too. “It’s unbelievable – they are taking sugar pills and when you measure liver enzymes, they are elevated,” says Mitsikostas.

And for those who think these side effects are somehow “deliberately” willed or imagined, measures of nerve activity following nocebo treatment have shown that the spinal cord begins responding to heightened pain before conscious deliberation would even be possible.

Consider the near-fatal case of “Mr A”, reported by doctor Roy Reeves in 2007. Mr A was suffering from depression when he consumed a whole bottle of pills. Regretting his decision, Mr A rushed to the ER, and promptly collapsed at reception. It looked serious; his blood pressure had plummeted, and he was hyperventilating; he was immediately given intravenous fluids. Yet blood tests could find no trace of the drug in his system. Four hours later, another doctor arrived to inform Reeves that the man had been in the placebo arm of a drugs trial; he had “overdosed” on sugar tablets. Upon hearing the news, the relieved Mr A soon recovered.

We can never know whether the nocebo effect would have actually killed Mr A, though Fabrizio Benedetti at the University of Turin Medical School thinks it is certainly possible. He has scanned subjects’ brains as they undergo nocebo suggestions, which seems to set off a chain of activation in the hypothalamus, and the pituitary and adrenal glands – areas that deal with extreme threats to our body. If your fear and belief were strong enough, the resulting cocktail of hormones could be deadly, he says.

Read the entire story here.

Why Are We Obsessed With Zombies?


Previous generations worried about Frankenstein, evil robots, even more evil aliens, hungry dinosaurs and, more recently, vampires. Nowadays our culture seems to be singularly obsessed with zombies. Why?

From the Conversation:

The zombie invasion is here. Our bookshops, cinemas and TVs are dripping with the pustulating debris of their relentless shuffle to cultural domination.

A search for “zombie fiction” on Amazon currently provides you with more than 25,000 options. Barely a week goes by without another onslaught from the living dead on our screens. We’ve just seen the return of one of the most successful of these, The Walking Dead, starring Andrew Lincoln as small-town sheriff Rick Grimes. The show follows the adventures of Rick and fellow survivors as they kill lots of zombies and, increasingly, other survivors, as they desperately seek safety.

Generational monsters

Since at least the late 19th century, each generation has created fictional enemies that reflect a broader unease with cultural or scientific developments. The “Yellow Peril” villains such as Fu Manchu were a response to the massive increase in Chinese migration to the US and Europe from the 1870s, for example.

As the industrial revolution steamed ahead, the speculative fiction of authors such as H G Wells began to consider where scientific innovation would take mankind. This trend reached its height during the Cold War of the 1950s and 1960s, when radiation-mutated monsters and invasions from space, seen through the paranoid lens of communism, postulated the imminent demise of mankind.

By the 1970s, in films such as The Parallax View and Three Days of the Condor, the enemy evolved into government institutions and powerful corporations. This reflected public disenchantment following years of increasing social conflict, Vietnam and the Watergate scandal.

In the 1980s and 1990s it was the threat of AIDS that was embodied in the monsters of the era, such as “bunny boiling” stalker Alex in Fatal Attraction. Alex’s obsessive pursuit of the man with whom she shared a one-night stand, Susanne Leonard argues, represented “the new cultural alignment between risk and sexual contact”, a theme continued with Anne Rice’s vampire Lestat in her series The Vampire Chronicles.

Risk and anxiety

Zombies, the flesh-eating undead, have been mentioned in stories for more than 4,000 years. But the genre really developed with the work of H G Wells, Poe and particularly H P Lovecraft in the early 20th century. Yet these ponderous adversaries, descendants of Mary Shelley’s Frankenstein, have little in common with the vast hordes that threaten mankind’s existence in the modern versions.

M Keith Booker argued that in the 1950s, “the golden age of nuclear fear”, radiation and its fictional consequences were the flip side to a growing faith that science would solve the world’s problems. In many respects we are now living with the collapse of this faith. Today we live in societies dominated by an overarching anxiety reflecting the risk associated with each unpredictable scientific development.

Now we know that we are part of the problem, not necessarily the solution. The “breakthroughs” that were welcomed in the last century now represent some of our most pressing concerns. People have lost faith in assumptions of social and scientific “progress”.

Globalisation

Central to this is globalisation. While generating enormous benefits, globalisation is also tearing communities apart. The political landscape is rapidly changing as established political institutions seem unable to meet the challenges presented by the social and economic dislocation.

However, although destructive, globalisation is also forging new links between people, through what Anthony Giddens calls the “emptying of time and space”. Modern digital media has built new transnational alliances, and, particularly in the West, confronted people with stark moral questions about the consequences of their own lifestyles.

As the faith in inexorable scientific “progress” recedes, politics is transformed. The groups emerging from outside the political mainstream engage in much older battles of faith and identity. Whether right-wing nationalists or Islamic fundamentalists, they seek to build “imagined communities” through race, religion or culture and “fear” is their currency.

Evolving zombies

Modern zombies are the product of this globalised, risk conscious world. No longer the work of a single “mad” scientist re-animating the dead, they now appear as the result of secret government programmes creating untreatable viruses. The zombies indiscriminately overwhelm states irrespective of wealth, technology and military strength, turning all order to chaos.

Meanwhile, the zombies themselves are evolving into much more tenacious adversaries. In Danny Boyle’s 28 Days Later it takes only 20 days for society to be devastated. Charlie Higson’s Enemy series of novels has the zombies developing leadership and using tools. In the film of Max Brooks’ novel, World War Z, the seemingly superhuman athleticism of the zombies reflects the devastating springboard that vast urban populations would provide for such a disease. The film, starring Brad Pitt, had a reported budget of US$190m, demonstrating what a big business zombies have become.

Read the entire article here.

Image courtesy of Google Search.

The Missing Sock Law


If you share a household with children, or adults who continually misplace things, you’ll be intimately familiar with the Missing Sock Law (MSL). No matter how hard you try to keep clothing, and people, organized, and no matter how diligent you are during the laundry process, you will always lose socks. After your weekly laundry you will always end up with an odd number of socks; they will always be mismatched, and you will never find the missing ones again. This is the MSL, and science has yet to come up with a solution.

However, an increasing number of enterprising youngsters, non-OCD parents, and even some teens, are adopting a solution that’s been staring them in the face since socks were invented. Apparently, it is now a monumentally cool fashion statement (at the time of writing) to wear mismatched socks — there are strict rules of course, and parents, this is certainly not for you.

From WSJ:

Susana Yourcheck keeps a basket of mismatched socks in her laundry room, hoping that the missing match will eventually reappear. The pile is getting smaller these days, but not because the solitary socks are magically being reunited with their mates.

The credit for the smaller stash goes to her two teenage daughters, who no longer fuss to find socks that match. That’s because fashionable tweens and teens favor a jamboree of solids, colors and patterns on their feet.

“All my friends do it. Everyone in school wears them this way,” says 15-year-old Amelia Yourcheck.

For laundry-folding parents, the best match is sometimes a mismatch.

Generations of adults have cringed at their children’s fashion choices, suffering through bell bottoms, crop tops, piercings and tattoos. Socks have gone through various iterations of coolness: knee-high, no-show, wild patterns, socks worn with sandals, and no socks at all.

But the current trend has advantages for parents like Ms. Yourcheck. She has long been flummoxed by the mystery of socks that “disappear to the land of nowhere.”

“I’m not going to lie—[the mismatched look] bothers me. But I’m also kind of happy because at least we get some use out of them,” says Ms. Yourcheck, who is 40 years old and lives in Holly Springs, N.C.

“It definitely makes laundry way easier because they just go in a pile and you don’t have to throw the odd ones away,” agrees Washington, D.C., resident Jennifer Swanson Prince, whose 15-year-old daughter, Eleni, rocks the unmatched look. “And if we are lucky, the pile will go in a drawer.”

Some parents say they first noticed the trend a few years ago. Some saw girls whip off their shoes at a bat mitzvah celebration and go through a basket of mismatched socks that were supplied by the hosts for more comfortable dancing.

For some teenage fashionistas, however, the style dictates that certain rules be followed. Among the most important: The socks must always be more or less the same length—no mixing a knee-high with a short one. And while patterns can be combined, clashing seasons—as with snowflakes and flowers—are frowned upon.

The trend is so popular that retailers sell socks that go together, but don’t really go together.

“Matching is mundane, but mixing patterns and colors is monumentally cool,” states the website of LittleMissMatched, which has stores in New York, Florida and California. The company sells socks in sets of three that often sport the same patterns—stars, animal prints, argyles—but in different colors.

Read the entire article here.

Image courtesy of Google Search.

The Religion of String Theory

Read anything about string theory and you’ll soon learn that it resembles a religion more than a scientific discipline. String theory researchers and their supporters will be the first to tell you that this elegant, but extremely complex, integration of gravity and quantum field theory cannot be confirmed through experiment. Nor can it be disproven through experiment.

So, while the promise of string theory — to bring us one unified understanding of the entire universe — is deliciously tantalizing, it nonetheless forces us to take a giant leap of faith. I suppose that would put string theory originators, physicists Michael Green and John Schwarz, somewhere in the same pantheon as Moses and Joseph Smith.

From Quanta:

Thirty years have passed since a pair of physicists, working together on a stormy summer night in Aspen, Colo., realized that string theory might have what it takes to be the “theory of everything.”

“We must be getting pretty close,” Michael Green recalls telling John Schwarz as the thunder raged and they hammered away at a proof of the theory’s internal consistency, “because the gods are trying to prevent us from completing this calculation.”

Their mathematics that night suggested that all phenomena in nature, including the seemingly irreconcilable forces of gravity and quantum mechanics, could arise from the harmonics of tiny, vibrating loops of energy, or “strings.” The work touched off a string theory revolution and spawned a generation of specialists who believed they were banging down the door of the ultimate theory of nature. But today, there’s still no answer. Because the strings that are said to quiver at the core of elementary particles are too small to detect — probably ever — the theory cannot be experimentally confirmed. Nor can it be disproven: Almost any observed feature of the universe jibes with the strings’ endless repertoire of tunes.

The publication of Green and Schwarz’s paper “was 30 years ago this month,” the string theorist and popular-science author Brian Greene wrote in Smithsonian Magazine in January, “making the moment ripe for taking stock: Is string theory revealing reality’s deep laws? Or, as some detractors have claimed, is it a mathematical mirage that has sidetracked a generation of physicists?” Greene had no answer, expressing doubt that string theory will “confront data” in his lifetime.

Recently, however, some string theorists have started developing a new tactic that gives them hope of someday answering these questions. Lacking traditional tests, they are seeking validation of string theory by a different route. Using a strange mathematical dictionary that translates between laws of gravity and those of quantum mechanics, the researchers have identified properties called “consistency conditions” that they say any theory combining quantum mechanics and gravity must meet. And in certain highly simplified imaginary worlds, they claim to have found evidence that the only consistent theories of “quantum gravity” involve strings.

According to many researchers, the work provides weak but concrete support for the decades-old suspicion that string theory may be the only mathematically consistent theory of quantum gravity capable of reproducing gravity’s known form on the scale of galaxies, stars and planets, as captured by Albert Einstein’s theory of general relativity. And if string theory is the only possible approach, then its proponents say it must be true — with or without physical evidence. String theory, by this account, is “the only game in town.”

“Proving that a big class of stringlike models are the only things consistent with general relativity and quantum mechanics would be a way, to some extent, of confirming it,” said Tom Hartman, a theoretical physicist at Cornell University who has been following the recent work.

If they are successful, the researchers acknowledge that such a proof will be seen as controversial evidence that string theory is correct. “‘Correct’ is a loaded word,” said Mukund Rangamani, a professor at Durham University in the United Kingdom and the co-author of a paper posted recently to the physics preprint site arXiv.org that finds evidence of “string universality” in a class of imaginary universes.

So far, the theorists have shown that string theory is the only “game” meeting certain conditions in “towns” wildly different from our universe, but they are optimistic that their techniques will generalize to somewhat more realistic physical worlds. “We will continue to accumulate evidence for the ‘string universality’ conjecture in different settings and for different classes of theories,” said Alex Maloney, a professor of physics at McGill University in Montreal and co-author of another recent paper touting evidence for the conjecture, “and eventually a larger picture will become clear.”

Meanwhile, outside experts caution against jumping to conclusions based on the findings to date. “It’s clear that these papers are an interesting attempt,” said Matt Strassler, a visiting professor at Harvard University who has worked on string theory and particle physics. “But these aren’t really proofs; these are arguments. They are calculations, but there are weasel words in certain places.”

Proponents of string theory’s rival, an underdog approach called “loop quantum gravity,” believe that the work has little to teach us about the real world. “They should try to solve the problems of their theory, which are many,” said Carlo Rovelli, a loop quantum gravity researcher at the Center for Theoretical Physics in Marseille, France, “instead of trying to score points by preaching around that they are ‘the only game in town.’”

Mystery Theory

Over the past century, physicists have traced three of the four forces of nature — strong, weak and electromagnetic — to their origins in the form of elementary particles. Only gravity remains at large. Albert Einstein, in his theory of general relativity, cast gravity as smooth curves in space and time: An apple falls toward the Earth because the space-time fabric warps under the planet’s weight. This picture perfectly captures gravity on macroscopic scales.

But in small enough increments, space and time lose meaning, and the laws of quantum mechanics — in which particles have no definite properties like “location,” only probabilities — take over. Physicists use a mathematical framework called quantum field theory to describe the probabilistic interactions between particles. A quantum theory of gravity would describe gravity’s origin in particles called “gravitons” and reveal how their behavior scales up to produce the space-time curves of general relativity. But unifying the laws of nature in this way has proven immensely difficult.

String theory first arose in the 1960s as a possible explanation for why elementary particles called quarks never exist in isolation but instead bind together to form protons, neutrons and other composite “hadrons.” The theory held that quarks are unable to pull apart because they form the ends of strings rather than being free-floating points. But the argument had a flaw: While some hadrons do consist of pairs of quarks and anti-quarks and plausibly resemble strings, protons and neutrons contain three quarks apiece, invoking the ugly and uncertain picture of a string with three ends. Soon, a different theory of quarks emerged. But ideas die hard, and some researchers, including Green, then at the University of London, and Schwarz, at the California Institute of Technology, continued to develop string theory.

Problems quickly stacked up. For the strings’ vibrations to make physical sense, the theory calls for many more spatial dimensions than the length, width and depth of everyday experience, forcing string theorists to postulate that six extra dimensions must be knotted up at every point in the fabric of reality, like the pile of a carpet. And because each of the innumerable ways of knotting up the extra dimensions corresponds to a different macroscopic pattern, almost any discovery made about our universe can seem compatible with string theory, crippling its predictive power. Moreover, as things stood in 1984, all known versions of string theory included a nonsensical mathematical term known as an “anomaly.”

On the plus side, researchers realized that a certain vibration mode of the string fit the profile of a graviton, the coveted quantum purveyor of gravity. And on that stormy night in Aspen in 1984, Green and Schwarz discovered that the graviton contributed a term to the equations that, for a particular version of string theory, exactly canceled out the problematic anomaly. The finding raised the possibility that this version was the one, true, mathematically consistent theory of quantum gravity, and it helped usher in a surge of activity known as the “first superstring revolution.”

But only a year passed before another version of string theory was also certified anomaly-free. In all, five consistent string theories were discovered by the end of the decade. Some conceived of particles as closed strings, others described them as open strings with dangling ends, and still others generalized the concept of a string to higher-dimensional objects known as “D-branes,” which resemble quivering membranes in any number of dimensions. Five string theories seemed an embarrassment of riches.

Read the entire story here.

Image: (1 + 1)-dimensional anti-de Sitter space embedded in flat (1 + 2)-dimensional space. The embedded surface contains closed timelike curves circling the x1 axis. Courtesy of Wikipedia.

Why Are Most Satirists Liberal?

Oliver Morrison over at The Atlantic has a tremendous article that ponders the comedic divide that spans our political landscape. Why, he asks, do most political satirists identify with left-of-center thought? And why are the majority of radio talk show hosts right-wing? Why is there no right-wing Stephen Colbert, and no leftie Rush? These are very interesting questions.

You’ll find some surprising answers, which go beyond the Liberal stereotype of the humorless Republican with no grasp of satire or irony.

From the Atlantic:

Soon after Jon Stewart arrived at The Daily Show in 1999, the world around him began to change. First, George W. Bush moved into the White House. Then came 9/11, and YouTube, and the advent of viral videos. Over the years, Stewart and his cohort mastered the very difficult task of sorting through all the news quickly and turning it around into biting, relevant satire that worked both for television and the Internet.

Now, as Stewart prepares to leave the show, the brand of comedy he helped invent is stronger than ever. Stephen Colbert is getting ready to bring his deadpan smirk to The Late Show. Bill Maher is continuing to provoke pundits and politicians with his blunt punch lines. John Oliver’s Last Week Tonight is about to celebrate the end of a wildly popular first year. Stewart has yet to announce his post-Daily Show plans, but even if he retires, the genre seems more than capable of carrying on without him.

Stewart, Colbert, Maher, Oliver and co. belong to a type of late-night satire that’s typically characterized as liberal, skewering Republicans (and, less frequently, Democrats) for absurd statements or pompousness or flagrant hypocrisy. “The Daily Show, The Colbert Report, Funny Or Die, and The Onion, while not partisan organs, all clearly have a left-of-center orientation,” wrote Jonathan Chait in The New Republic in 2011. This categorization, though, raises the question of why the form has no equal on the other side of the ideological spectrum. Some self-identified conservative comics argue that the biased liberal media hasn’t given them a chance to thrive. Others point out that Obama is a more difficult target than his Republican predecessor: He was the first African-American president, which meant comedians have had to tip-toe around anything with racial connotations, and his restrained personality has made him difficult to parody.

But six years in, Obama’s party has been thoroughly trounced in the midterms and publicly excoriated by right-wing politicians, yet there’s a dearth of conservative satirists taking aim, even though the niche-targeted structure of cable media today should make it relatively easy for them to find an audience. After all, it would have been difficult for Stewart or Colbert to find an audience during the era when three broadcast stations competed for the entire country and couldn’t afford to alienate too many viewers. But cable TV news programs need only find a niche viewership. Why, then, hasn’t a conservative Daily Show found its own place on Fox?

Liberal satirists are certainly having no trouble making light of liberal institutions and societies. Portlandia is about to enter its fifth season skewering the kinds of liberals who don’t understand that eco-terrorism and militant feminism may not be as politically effective as they think. Jon Stewart has had success poking fun at Obama’s policies. And Alison Dagnes, a professor of political science at Shippensburg University, has found that the liberal Clinton was the butt of more jokes on late-night shows of the 1990s than either George W. Bush or Obama would later be.

So if liberals are such vulnerable targets for humor, why do relatively few conservative comedians seem to be taking aim at them?

One explanation is simply that proportionately fewer people with broadly conservative sensibilities choose to become comedians. Just as liberals dominate academia, journalism, and other writing professions, there are nearly three times as many liberal- as conservative-minded people in the creative arts, according to a recent study. Dagnes argues that the same personality traits that shape political preferences also guide the choice of professions. These tendencies just get more pronounced in the case of comedy, which usually requires years of irregular income, late hours, and travel, as well as a certain tolerance for crudeness and heckling.

There are, of course, high-profile conservative comedians in America, such as the members of the Blue Collar Comedy Tour. But these performers, who include Jeff Foxworthy and Larry the Cable Guy, tend to carefully avoid politicized topics, mocking so-called “rednecks” in the same spirit as Borscht Belt acts mocked Jewish culture.

When it comes to actual political satire, one of the most well-known figures nationally is Dennis Miller, a former Saturday Night Live cast member who now has a weekly segment on Fox News’ O’Reilly Factor. On a recent show, O’Reilly brought up the Democrats’ election losses, and Miller took the bait. “I think liberalism is like a nude beach,” Miller said. “It’s better off in your mind than actually going there.” His jokes are sometimes amusing, but they tend to be grounded in vague ideologies, not the attentive criticism of the news of the day that gives liberal satires plenty of fodder five days a week. The real problem, Frank Rich wrote about Miller, “is that his tone has become preachy. He too often seems a pundit first and a comic second.”

The Flipside, a more recent attempt at conservative satire, was launched this year by Kfir Alfia, who got his start in political performance a decade ago when he joined the Protest Warriors, a conservative group that counter-demonstrated at anti-war protests. The Flipside started airing this fall on more than 200 stations across the country, but its growth is hampered by its small budget, according to The Flipside’s producer, Rodney Lee Connover, who said he has to work 10 times as hard because his show has a tenth of the resources of the liberal shows supported by cable networks.

Connover was a writer along with Miller on The 1/2 Hour News Hour, the first major attempt to create a conservative counterpart to The Daily Show in 2007. It was cancelled after just 13 episodes and has remained the worst-rated show of all time on Metacritic. It was widely panned by critics who complained that it was trying to be political first and funny second, so the jokes were unsurprising and flat.

The host of The Flipside, Michael Loftus, says he’s doing the same thing as Jon Stewart, just with some conservative window-dressing. Wearing jeans, Loftus stands and delivers his jokes on a set that looks like the set of Tool Time, the fictional home-improvement show hosted by Tim Allen’s character on the sitcom Home Improvement: The walls are decorated with a dartboard, a “Men at Work” sign, and various other items the producers might expect to find in a typical American garage. In a recent episode, after Republicans won the Senate, Loftus sang “Looks like we made it …” to celebrate the victory.

But rather than talking about the news, as Colbert and Stewart do, or deconstructing a big political issue, as Oliver does, Loftus frequently makes dated references without offering new context to freshen them up. “What’s the deal with Harry Reid?” he asked in a recent episode. “You either hate him or you hate him, am I right? The man is in the business of telling people how greedy they are, and how they don’t pay their fair share, and he lives in the Ritz Carlton … This guy is literally Mr. Burns from The Simpsons.” Much of his material seems designed to resonate with only the most ardent Fox News viewers. Loftus obviously can’t yet attract the kinds of celebrity guests his network competitors can. But instead of playing games with the guests he can get, he asks softball questions that simply allow them to spout off.

Greg Gutfeld, the host of Fox’s Red Eye, can also be funny, but his willing-to-be-controversial style often comes across as more hackneyed than insightful. “You know you’re getting close to the truth when someone is calling you a racist,” he once said. Gutfeld has also railed against “greenie” leftists who shop at Whole Foods, tolerance, and football players who are openly gay. Gutfeld’s shtick works okay during its 3 a.m. timeslot, but a recent controversy over sexist jokes about a female fighter pilot highlighted just how far his humor is from working in prime time.

So if conservatives have yet to produce their own Jon Stewart, it could be because of the relatively small number of working conservative comedians, or their lack of power in the entertainment industry. Or it could be that shows like The Flipside are failing, at least in part, because they’re just not that funny. But what is it about political satire that makes it so hard for conservatives to get it right?

Read the entire article here.

Image: Stephen Colbert at the 2014 Montclair Film Festival. Courtesy of the 2014 Montclair Film Festival.

Bit Rot is In Your Future


If you are over the age of 55 or 60 you may well have some 8-track cartridges still stashed in the trunk (or boot, if you’re a Brit) of your car. If you’re over 50 you may have some old floppy disks or music cassettes stored in a bottom drawer. If you’re over 40 you’re likely to have boxes of old VHS tapes and crate-loads of CDs (or even laser discs) under your bed. If you fall into one of these categories, most of the content stored on these media is now very likely beyond your reach — your car (hopefully) does not have an 8-track player; you dumped your Sony Walkman for an iPod; and your CDs have been rendered obsolete by music that descends to your ears from the “cloud”.

[Of course, 45s and 33s still seem to have a peculiar and lasting appeal — and thanks to the analog characteristics of vinyl, the music encoded in the spiral grooves is still relatively easily accessible. But that will be the subject of another post.]

So our technological progress, paradoxically, comes at a cost. As our technologies become simpler to use and content becomes easier to create and disseminate, that content turns to “bit rot” for future generations: our digital present will be unreadable by the more advanced technologies of the future. One solution would be to hold on to your 8-track player. But Vint Cerf, currently a VP at Google and one of the founding fathers of the internet, has other ideas.

From the Guardian:

Piles of digitised material – from blogs, tweets, pictures and videos, to official documents such as court rulings and emails – may be lost forever because the programs needed to view them will become defunct, Google’s vice-president has warned.

Humanity’s first steps into the digital world could be lost to future historians, Vint Cerf told the American Association for the Advancement of Science’s annual meeting in San Jose, California, warning that we faced a “forgotten generation, or even a forgotten century” through what he called “bit rot”, where old computer files become useless junk.

Cerf called for the development of “digital vellum” to preserve old software and hardware so that out-of-date files could be recovered no matter how old they are.

“When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history,” he said.

“We don’t want our digital lives to fade away. If we want to preserve them, we need to make sure that the digital objects we create today can still be rendered far into the future,” he added.

The warning highlights an irony at the heart of modern technology, where music, photos, letters and other documents are digitised in the hope of ensuring their long-term survival. But while researchers are making progress in storing digital files for centuries, the programs and hardware needed to make sense of the files are continually falling out of use.

“We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised,” Cerf told the Guardian. “If there are photos you really care about, print them out.”


Ancient civilisations suffered no such problems, because histories written in cuneiform on baked clay tablets, or rolled papyrus scrolls, needed only eyes to read them. To study today’s culture, future scholars would be faced with PDFs, Word documents, and hundreds of other file types that can only be interpreted with dedicated software and sometimes hardware too.

The problem is already here. In the 1980s, it was routine to save documents on floppy disks, upload Jet Set Willy from cassette to the ZX Spectrum, slaughter aliens with a Quickfire II joystick, and have Atari games cartridges in the attic. Even if the disks and cassettes are in good condition, the equipment needed to run them is mostly found only in museums.

The rise of gaming has its own place in the story of digital culture, but Cerf warns that important political and historical documents will also be lost to bit rot. In 2005, American historian Doris Kearns Goodwin wrote Team of Rivals: The Political Genius of Abraham Lincoln, describing how Lincoln hired those who ran against him for the presidency. She went to libraries around the US, found the physical letters of the people involved, and reconstructed their conversations. “In today’s world those letters would be emails and the chances of finding them will be vanishingly small 100 years from now,” said Cerf.

He concedes that historians will take steps to preserve material considered important by today’s standards, but argues that the significance of documents and correspondence is often not fully appreciated until hundreds of years later. Historians have learned how the greatest mathematician of antiquity considered the concept of infinity and anticipated calculus in the third century BC, after the Archimedes palimpsest was found hidden under the words of a Byzantine prayer book from the 13th century. “We’ve been surprised by what we’ve learned from objects that have been preserved purely by happenstance that give us insights into an earlier civilisation,” he said.

Researchers at Carnegie Mellon University in Pittsburgh have made headway towards a solution to bit rot, or at least a partial one. There, Mahadev Satyanarayanan takes digital snapshots of computer hard drives while they run different software programs. These can then be uploaded to a computer that mimics the one the software ran on. The result is a computer that can read otherwise defunct files. Under a project called Olive, the researchers have archived Mystery House, the original 1982 graphic adventure game for the Apple II, an early version of WordPerfect, and Doom, the original 1993 first person shooter game.
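Cerf’s “digital vellum” and the Olive project both amount to archiving the whole execution environment, not just the files. Here is a minimal sketch of that idea, assuming a hypothetical archive layout (the manifest format and directory names are invented for illustration) and using the real QEMU emulator to boot a preserved disk image without ever modifying it:

```python
import json
import subprocess
from pathlib import Path

# Hypothetical archive layout: each preserved environment is a directory
# holding a disk image plus a manifest describing the machine it needs.
ARCHIVE = Path("archive")

def boot_snapshot(name: str) -> None:
    """Boot an archived environment read-only under QEMU emulation."""
    manifest = json.loads((ARCHIVE / name / "manifest.json").read_text())
    disk = ARCHIVE / name / manifest["disk"]
    subprocess.run(
        [
            f"qemu-system-{manifest['arch']}",  # e.g. "i386" for a 1990s PC
            "-m", str(manifest["ram_mb"]),      # RAM the original machine had
            # snapshot=on discards writes, so the archived image is never altered
            "-drive", f"file={disk},format=raw,snapshot=on",
        ],
        check=True,
    )

if __name__ == "__main__":
    # e.g. manifest.json: {"arch": "i386", "ram_mb": 64, "disk": "wordperfect.img"}
    boot_snapshot("wordperfect-dos622")
```

The point of the design is that the archive preserves the environment along with the data: as long as some future machine can run an emulator, the old bits remain interpretable.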

Inventing new technology is only half the battle, though. More difficult still could be navigating the legal permissions to copy and store software before it dies. When IT companies go out of business, or stop supporting their products, they may sell the rights on, making it a nightmarish task to get approval.

Read the entire article here.

Image: Factory AM/FM stereo 8-track unit in a 1978 AMC Matador sedan. Courtesy of CZmarlin / Wikipedia.

Yawn. Selfies Are So, Like, Yesterday!


If you know a dedicated and impassioned narcissist, it’s time to convince them to ditch the selfie. Oh, and please ensure they discard the selfie stick while they’re at it. You see, the selfie — that ubiquitous expression of the me-me-me generation — is now rather passé.

So, where does a self-absorbed individual turn next? Enter the 3D-printed version of yourself, courtesy of a German company called DOOB 3D, with its Doob-licator scanner and high-res 3D printer. Connoisseurs of self can now — for a mere $395 — replicate themselves with a 10-inch facsimile. If you’re a cheapskate, you can get a Playmobil-sized replica for $95, while a 14-inch selfie-doll will fetch you $695. Love it!

To learn more about DOOB 3D visit their website.

From Wired:

We love looking at images of ourselves. First there were Olan Mills portraits. Nowadays there are selfies and selfie-stick selfies and drone selfies.

If you’re wondering what comes next, Dusseldorf-based DOOB 3D thinks it has the answer—and contrary to what the company’s name suggests, it doesn’t involve getting high and watching Avatar.

DOOB 3D can produce a detailed, four-inch figurine of your body—yes, a 3-D selfie. Making one of these figurines requires a massive pile of hardware and software: 54 DSLRs, 54 lenses, a complex 3-D modeling pipeline, and an $80,000 full-color 3-D printer, not to mention a room-size scanning booth.

Factor that all in and the $95 asking price for a replica of yourself that’s roughly the size of most classic Star Wars action figures doesn’t seem so bad. A Barbie-esque 10-inch model goes for $395, while a 14-inch figure that’s more along the lines of an old-school G.I. Joe doll costs $695.

The company has eight 3-D scanning booths (called “Doob-licators”) scattered in strategic locations throughout the world. There’s one in Dusseldorf, one in Tokyo, one at Santa Monica Place in Los Angeles, and one in New York City’s Chelsea Market. The company also says they’re set to add more U.S. locations soon, although details aren’t public yet.

In New York, the pop-up DOOB shop in Chelsea Market was a pretty big hit. According to Michael Anderson, CEO of DOOB 3D USA, the Doob-licator saw about 500 customers over the winter holiday season. About 10 percent of the booth’s customers got their pets Doob-licated.

“At first, [people got DOOBs made] mostly on a whim,” says Anderson of the holiday-season spike. Most people just walk up and stand in line, but you can also book an appointment in advance.

“Now that awareness has been built,” Anderson says, “there has been a shift where at least two thirds of our customers have planned ahead to get a DOOB.”

Each Doob-licator is outfitted with 54 Canon EOS Rebel T5i DSLRs, arranged in nine columns of six cameras each. A customer steps in, strikes a pose, and the Doob-licator operator fires all the cameras at once. That creates a full-body scan in a fraction of a second. The next step involves feeding all those 18-megapixel images through the company’s proprietary software, which creates a 3-D model of the subject.

The printing process requires more patience. The company operates three high-end 3-D printing centers to support its scanning operations: One in Germany, one in Tokyo, and one in Brooklyn. They all use 3D Systems’ ProJet 660Pro, a high-resolution (600 x 540 dpi) 3-D printer that creates full-color objects on the fly. The printer uses a resin polymer material, and the full range of CMYK color is added to each powder layer as it’s printed.

With a top printing speed of 1.1 inches per hour and sometimes thousands of layers of powder, printing takes a few hours for the smallest-size DOOB and half a day or more for the larger ones. And depending on how many DOOBs are lined up in the queue, your mini statue takes between two and eight weeks to arrive in the mail.
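Those times follow from simple arithmetic on the quoted build speed. A rough sanity check (print time scales with figure height; real jobs also depend on orientation, layer count and finishing, so treat this as a lower bound):

```python
# Rough print-time estimates from the article's quoted top speed.
SPEED_INCHES_PER_HOUR = 1.1  # ProJet 660Pro top vertical build speed

for height_inches in (4, 10, 14):  # the three DOOB figurine sizes
    hours = height_inches / SPEED_INCHES_PER_HOUR
    print(f"{height_inches}-inch figure: about {hours:.1f} hours")

# Output:
# 4-inch figure: about 3.6 hours    ("a few hours")
# 10-inch figure: about 9.1 hours
# 14-inch figure: about 12.7 hours  ("half a day or more")
```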

Once you step inside that Doob-licator, it’s like international waters: You are largely unbound by laws and restrictions. Do you want to get naked? Go right ahead. Along with your nude statue, the company will also send you a 3-D PDF and keep your data in its database in case you want additional copies made (you can request that your data be deleted if that sounds too creepy).

Read the entire article here.

Image courtesy of DOOB 3D.

Apocalypse Now in Three Simple Steps


Step One: Return to the Seventh Century.

Step Two: Fight the armies from Rome.

Step Three: Await… the apocalypse.

Just three simple steps — pretty straightforward really. Lots of violence, bloodshed and torture along the way. But apparently it’s worth every beheaded infidel, every crucified apostate, every subjugated or raped woman, every tormented child. This is the world according to ISIS, and it makes all other apocalyptic traditions seem like a trip to the candy store.

This makes one believe that apocalyptic Jews and Christians really don’t take their end-of-days beliefs very seriously — otherwise wouldn’t they be fighting alongside their Muslim brothers to reach the other side as quickly as possible?

Hmm. Which God to believe?

If you do nothing else today, read the entire in-depth article below.

From the Atlantic:

What is the Islamic State?

Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.

The group seized Mosul, Iraq, last June, and already rules an area larger than the United Kingdom. Abu Bakr al-Baghdadi has been its leader since May 2010, but until last summer, his most recent known appearance on film was a grainy mug shot from a stay in U.S. captivity at Camp Bucca during the occupation of Iraq. Then, on July 5 of last year, he stepped into the pulpit of the Great Mosque of al-Nuri in Mosul, to deliver a Ramadan sermon as the first caliph in generations—upgrading his resolution from grainy to high-definition, and his position from hunted guerrilla to commander of all Muslims. The inflow of jihadists that followed, from around the world, was unprecedented in its pace and volume, and is continuing.

Our ignorance of the Islamic State is in some ways understandable: It is a hermit kingdom; few have gone there and returned. Baghdadi has spoken on camera only once. But his address, and the Islamic State’s countless other propaganda videos and encyclicals, are online, and the caliphate’s supporters have toiled mightily to make their project knowable. We can gather that their state rejects peace as a matter of principle; that it hungers for genocide; that its religious views make it constitutionally incapable of certain types of change, even if that change might ensure its survival; and that it considers itself a harbinger of—and headline player in—the imminent end of the world.

The Islamic State, also known as the Islamic State of Iraq and al-Sham (ISIS), follows a distinctive variety of Islam whose beliefs about the path to the Day of Judgment matter to its strategy, and can help the West know its enemy and predict its behavior. Its rise to power is less like the triumph of the Muslim Brotherhood in Egypt (a group whose leaders the Islamic State considers apostates) than like the realization of a dystopian alternate reality in which David Koresh or Jim Jones survived to wield absolute power over not just a few hundred people, but some 8 million.

We have misunderstood the nature of the Islamic State in at least two ways. First, we tend to see jihadism as monolithic, and to apply the logic of al-Qaeda to an organization that has decisively eclipsed it. The Islamic State supporters I spoke with still refer to Osama bin Laden as “Sheikh Osama,” a title of honor. But jihadism has evolved since al-Qaeda’s heyday, from about 1998 to 2003, and many jihadists disdain the group’s priorities and current leadership.

Bin Laden viewed his terrorism as a prologue to a caliphate he did not expect to see in his lifetime. His organization was flexible, operating as a geographically diffuse network of autonomous cells. The Islamic State, by contrast, requires territory to remain legitimate, and a top-down structure to rule it. (Its bureaucracy is divided into civil and military arms, and its territory into provinces.)

We are misled in a second way, by a well-intentioned but dishonest campaign to deny the Islamic State’s medieval religious nature. Peter Bergen, who produced the first interview with bin Laden in 1997, titled his first book Holy War, Inc. in part to acknowledge bin Laden as a creature of the modern secular world. Bin Laden corporatized terror and franchised it out. He requested specific political concessions, such as the withdrawal of U.S. forces from Saudi Arabia. His foot soldiers navigated the modern world confidently. On Mohamed Atta’s last full day of life, he shopped at Walmart and ate dinner at Pizza Hut.

There is a temptation to rehearse this observation—that jihadists are modern secular people, with modern political concerns, wearing medieval religious disguise—and make it fit the Islamic State. In fact, much of what the group does looks nonsensical except in light of a sincere, carefully considered commitment to returning civilization to a seventh-century legal environment, and ultimately to bringing about the apocalypse.

The most-articulate spokesmen for that position are the Islamic State’s officials and supporters themselves. They refer derisively to “moderns.” In conversation, they insist that they will not—cannot—waver from governing precepts that were embedded in Islam by the Prophet Muhammad and his earliest followers. They often speak in codes and allusions that sound odd or old-fashioned to non-Muslims, but refer to specific traditions and texts of early Islam.

To take one example: In September, Sheikh Abu Muhammad al-Adnani, the Islamic State’s chief spokesman, called on Muslims in Western countries such as France and Canada to find an infidel and “smash his head with a rock,” poison him, run him over with a car, or “destroy his crops.” To Western ears, the biblical-sounding punishments—the stoning and crop destruction—juxtaposed strangely with his more modern-sounding call to vehicular homicide. (As if to show that he could terrorize by imagery alone, Adnani also referred to Secretary of State John Kerry as an “uncircumcised geezer.”)

But Adnani was not merely talking trash. His speech was laced with theological and legal discussion, and his exhortation to attack crops directly echoed orders from Muhammad to leave well water and crops alone—unless the armies of Islam were in a defensive position, in which case Muslims in the lands of kuffar, or infidels, should be unmerciful, and poison away.

The reality is that the Islamic State is Islamic. Very Islamic. Yes, it has attracted psychopaths and adventure seekers, drawn largely from the disaffected populations of the Middle East and Europe. But the religion preached by its most ardent followers derives from coherent and even learned interpretations of Islam.

Virtually every major decision and law promulgated by the Islamic State adheres to what it calls, in its press and pronouncements, and on its billboards, license plates, stationery, and coins, “the Prophetic methodology,” which means following the prophecy and example of Muhammad, in punctilious detail. Muslims can reject the Islamic State; nearly all do. But pretending that it isn’t actually a religious, millenarian group, with theology that must be understood to be combatted, has already led the United States to underestimate it and back foolish schemes to counter it. We’ll need to get acquainted with the Islamic State’s intellectual genealogy if we are to react in a way that will not strengthen it, but instead help it self-immolate in its own excessive zeal.

Read the entire article here.

Image: Apocalypse. Courtesy of Google Search.

What’s Next For the LHC?

As CERN’s Large Hadron Collider gears up for a restart in March 2015, after a refit that doubled its particle-smashing power, researchers are pondering what may come next. During its previous run scientists uncovered signals identifying the long-sought Higgs boson. Now, particle physicists have their eyes and minds on more exotic, but no less significant, particle discoveries. And — of course — these come with suitably exotic names: gluino, photino, selectron, squark, axion — the list goes on. But beyond these creative names lie possible answers to some very big questions: What is the composition of dark matter (and even dark energy)? How does gravity fit in with all the other identified forces? Do other fundamental particles exist?

From the Smithsonian:

The Large Hadron Collider, the world’s biggest and most famous particle accelerator, will reopen in March after a years-long upgrade. So what’s the first order of business for the rebooted collider? Nothing less than looking for a particle that forces physicists to reconsider everything they think they know about how the universe works.

Since the second half of the twentieth century, physicists have used the Standard Model of physics to describe how particles look and act. But though the model explains pretty much everything scientists have observed using particle accelerators, it doesn’t account for everything they can observe in the universe, including the existence of dark matter.

That’s where supersymmetry, or SUSY, comes in. Supersymmetry predicts that each particle has what physicists call a “superpartner”—a more massive sub-atomic partner particle that acts like a twin of the particle we can observe. Each observable particle would have its own kind of superpartner, pairing bosons with “fermions,” electrons with “selectrons,” quarks with “squarks,” photons with “photinos,” and gluons with “gluinos.”
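The naming convention in that list is regular enough to write down. A trivial sketch of the pattern (this is just the nomenclature, not physics: a fermion’s partner takes an “s-” prefix, a boson’s partner an “-ino” suffix):

```python
# Superpartner names follow a simple convention:
# fermion -> "s" + name (the sfermions); boson -> stem + "-ino" (the bosinos).
SUPERPARTNERS = {
    "electron": "selectron",  # fermion -> sfermion
    "quark": "squark",        # fermion -> sfermion
    "photon": "photino",      # boson -> bosino
    "gluon": "gluino",        # boson -> bosino
}

for particle, partner in SUPERPARTNERS.items():
    print(f"{particle} <-> {partner}")
```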

If scientists could identify a single superparticle, they could be on track for a more complete theory of particle physics that accounts for strange inconsistencies between existing knowledge and observable phenomena. Scientists used the Large Hadron Collider to identify the Higgs boson in 2012, but it didn’t behave quite as they expected. One surprise was that its mass was much lighter than predicted—an inconsistency that would be explained by the existence of a supersymmetric particle.

Scientists hope that the rebooted—and more powerful—LHC will reveal just such a particle. “Higher energies at the new LHC could boost the production of hypothetical supersymmetric particles called gluinos by a factor of 60, increasing the odds of finding it,” reports Emily Conover for Science.

If the LHC were to uncover a single superparticle, it wouldn’t just be a win for supersymmetry as a theory—it could be a step toward understanding the origins of our universe. But it could also create a lot of work for scientists—after all, a supersymmetric universe is one that would hold at least twice as many particles.

Read the entire article here.


Kims as Modern Messiahs

Image: Kim Il-sung.

Read this deconstruction of the modern North Korean state under the auspices of the Kim dynasty and you’ll see how easy it is to establish a cult of personality of messianic proportions.

From the Guardian:

In 1994, as it descended into famine, the Democratic People’s Republic of Korea (DPRK) spent millions of dollars raising a ziggurat on top of the mausoleum of Tangun, the founder of the ancient Korean Kojoson dynasty. Despite other more pressing matters, the regime felt it had urgent reasons to commemorate the life of a man whose reign began in 2333 BC.

Unlike later Korean kingdoms, Tangun’s capital was close to Pyongyang, not Seoul. And so, in 1994, as South Korea blazed ahead in the battle for economic and political legitimacy on the Korean peninsula, the North reached into the past to claim its own.

It was said Tangun’s father had come to earth from heaven near the holy Mount Paektu on North Korea’s border with China. And despite all evidence to the contrary, it was also claimed as the birthplace of North Korea’s late leader Kim Jong-il, and its “founding father” Kim Il-sung’s base for his anti-Japanese guerrilla struggle.

When it came into being in 1948, official history writers dated Kim Il-sung’s Korea back to the year of his own birth. The now familiar Juche calendar, inaugurated in 1997, recalculated time from the year Kim Il-sung was said to have come to earth from heaven in 1912. Like some ancient creation myth newly minted, time itself began, or was renewed, with the birth of Kim Il-sung.

Equally importantly, in 1994 the renovation of Tangun’s Tomb coincided with another multi-million dollar renovation of the Kumsusan Memorial Palace, in which the embalmed body of Kim Il-sung would be displayed, preserving him as the country’s Eternal President.

To this day, the childhood hagiography of Kim Il-sung remains one of the key didactic tools of the North Korean state. The stories of his childhood resound from the walls of “Kim Il-sung Research Institutes” in schools, to the books children enjoy, to the texts electronically loaded on their Samjiyeon tablets.

He was born an ordinary man named Kim Song-ju on 15 April 1912, at the zenith of western and Japanese imperialism. In the first of his eight-volume memoir, he describes the era before his birth as a time of subjugation and national humiliation for the Korean race, and trumpets the new era of his guerrilla struggle.

Yet his birth also coincided with an omen of imperialism’s doom; it was the day the Titanic disappeared beneath the waters of the North Atlantic. In North Korea’s revolutionary cosmology, there is no such thing as chance. There is only destiny.

According to Kim Il-sung, his great-grandfather moved from North Jeolla Province, settling his family in Mangyongdae, then a village on the outskirts of the capital Pyongyang. For generations his family laboured there as farmers and grave keepers, and their suffering would come to symbolise the Korean nation under feudalism and Japanese imperialism. Kim described them as “the epitome of the misfortune and distress that befell our people after they lost their country”.

In the memoir, Kim Il-sung’s childhood reminiscences lurch from affectations of modesty to statements of self-aggrandisement. In his preface, for example, the Great Leader claims: “I have never considered my life to be extraordinary.” Two pages later he declares: “my whole life… is the epitome of the history of my country and my people.”

Kim even insists it was his own great-grandfather who led the attack on the General Sherman when it sailed the Taedong into Pyongyang in 1866, achieving one of Korea’s first great victories against western economic and military might. Kim’s ancestors’ glories foreshadow the greater ones to come.

The greatest influence upon the young Kim Il-sung is said to be his father, Kim Hyong-jik. A charismatic teacher and self-taught physician, Kim Hyong-jik becomes a prophetic figure in the history of his nation, raising an heir who will return as saviour to a liberated homeland.

Kim Il-sung’s account says he prepared for his vocation from a tender age; he recalls vowing to defeat the forces of imperialism at the age of five, when he was playing on a swing in his mother’s arms. There could be no clearer distillation of North Korean children’s culture, rehearsed to this day via the Korean Children’s Union and military games in which toddlers and primary school students eviscerate effigies of American and Japanese imperialists. In the revolutionary imagination there is no difference between warriors and innocents.

He wrote himself into the history of the March 1st Movement of 1919, when Korean protests against Japanese imperial rule were violently crushed. “I, then six years old, also joined the ranks of demonstrators,” he says. “When the adults cheered for independence, I joined them. The enemy used swords and guns indiscriminately against the masses … This was the day when I witnessed Korean blood being spilled for the first time. My young heart burned with indignation.”

From that point, the Kim family’s instinctive resistance to Japanese imperialism becomes increasingly bound to the political vision articulated by the Soviet Union. Kim Il-sung recalls his father’s realisation that “the national liberation movement in our country should shift from a nationalist movement to a communist movement.” Instead of bedtime stories of old Korea, his father teaches Kim of Lenin and the October Revolution.

In a series of semi-comic interludes, the young Kim Il-sung scores early victories against the enemy, setting the model for countless juvenile heroes in North Korean children’s literature. For instance, he recalls “wrestling with a Japanese boy bigger than me who I got down with a belly throw.”

In other acts of resistance, Kim lines roads with spikes to tear the wheels of Japanese police bicycles, and defaces Japanese primary school textbooks in protest at linguistic imperialism. Such antics are undoubtedly exaggerated, yet the hagiography is careful to limit Kim Il-sung’s proto-guerrilla struggle to plausible feats of childhood derring-do. Unlike his son, Kim Jong-il, he is not depicted as a Napoleonic genius at 10 years old.

Kim Hyong-jik does not live to see Korea free with his own eyes. Before he dies in exile in Manchuria, he issues a command to his now 14-year-old son: “You must not forget that you belong to the country and the people. You must win back your country at all costs, even if your bones are broken and your bodies are torn apart.”

Despite his father’s rousing words, Kim Il-sung is still too young to lead a guerrilla war that many North Koreans, until recently, could still recall from living memory. So before Kim’s war begins he studies in Manchuria, albeit in a middle school transformed into a kind of revolutionary Hogwarts.

Even today, the legend of Yuwen Middle School endures. During Kim Jong-il’s state visit to China in September 2010 he detoured to Jilin, undertaking a pilgrimage to his father’s school. There, according to state television, the Dear Leader became “immersed in thoughts while looking at the precious historic objects that contain the bodily odour of our Supreme Leader from his school years some 80 years back.” It was an exquisite act of political theatre. Only days later, returning to Pyongyang, Kim Jong-il revealed that Kim Jong-un would be his young successor.

Read the entire article here.

Image: Kim Il-sung and adoring children. Courtesy of AP.

Where Will I Get My News (and Satire)?


Jon Stewart. Jon Stewart, you dastardly, villainous so-and-so. How could you? How could you decide to leave the most important show in media history — The Daily Show — after a mere 16 years? Where will I get my news? Where will I find another hypocrisy-meter? Where will I find another truth-seeking David to fend us from the fear-mongering neocon Goliaths led by Roger Ailes over at the Foxion News Channel? Where will I find such a thoroughly delicious merging of news, fact and satire? Jon Stewart, how could you?!

From the Guardian:

“Where will I get my news each night,” lamented Bill Clinton this week. This might have been a reaction to the fall from grace of Brian Williams, America’s top-rated news anchor, who was suspended for embellishing details of his adventures in Iraq. In fact the former US president was anticipating withdrawal symptoms for the impending departure of the comedian Jon Stewart, who – on the same day as Williams’s disgrace – announced that he will step down as the Daily Show host.

Stewart, who began his stint 16 years ago, has achieved something extraordinary from behind a studio desk on a comedy cable channel. Merging the intense desire for factual information with humour, irreverence, scepticism and usually appropriate cynicism, Stewart’s show proved a magnet for opinion formers, top politicians – who clamoured to appear – and most significantly the young, for whom the mix proved irresistible. His ridiculing of neocons became a nightly staple. His rejection from the outset of the Iraq war was prescient. And always he was funny, not least this week in using Williams’s fall to castigate the media for failing to properly scrutinise the Iraq war. Bill Clinton does not mourn alone.

Read the entire story here.

Image courtesy of Google Search.

US Politicians Are Not Scientists, They’re…

A recent popular refrain from politicians in the US is “I am not a scientist”. This is code, mostly from the mouths of Republicans, for a train of thought that goes something like this:

1. Facts discovered through the scientific method are nothing more than opinion.

2. However, my beliefs are fact.

3. Hence, anything that is explained through science is wrong.

4. Thus, case closed.

Those who would have us believe that climate change is an illusion now take cover behind this quaint “I am not a scientist” phrase, and in so doing are able to shirk questions of any consequence. So, it’s good to hear potential Republican presidential candidate Scott Walker toe the party line recently by telling us that he’s no scientist and “punting” (aka ignoring) on questions of climate change. This on the same day that NASA, Cornell and Columbia warn that global warming is likely to bring severe, multi-decade megadroughts — the worst in a thousand years — to the central and southwestern US in our children’s lifetimes.

The optimist in me hopes that when my children come of age they will elect politicians who are scientists or leaders who accept the scientific method. Please. It’s time to ditch Flat Earthers, creationists and “believers”. It’s time to shun those who shun critical thinking, reason and evidence. It’s time to move beyond those who merely say anything or nothing to get elected.

From ars technica:

Given that February 12 would be Charles Darwin’s 206th birthday, having people spare some thought for the theory of evolution doesn’t seem outrageously out of place this week. But, for a US politician visiting London, a question on the matter was clearly unwelcome.

Scott Walker, governor of Wisconsin and possible presidential candidate, was obviously hoping for a chance to have a few experiences that would make him seem more credible on the foreign policy scene. But the host of a British TV show asked some questions that, for many in the US, touch on matters of personal belief and the ability to think critically: “Are you comfortable with the idea of evolution? Do you believe in it? Do you accept it?” (A video that includes these questions along with extensive commentary is available here.)

Walker, rather than oblige his host, literally answered that he was going to dodge the question, saying, “For me, I’m going to punt on that one as well. That’s a question a politician shouldn’t be involved in one way or another.”

“Punting,” for those not up on their sports metaphors, is a means of tactically giving up. When a football team punts, it gives the other team control of the ball but prevents a variety of worse situations from developing.

In some ways, this is an improvement for a politician. When it comes to climate change, many politicians perform a dodge by saying “I’m not a scientist” and then proceed to make stupid pronouncements about the state of science. Here, Walker didn’t make any statements whatsoever.

So, that’s a step up from excusing stupidity. But is this really a question that should be punted? To begin with, Walker may not feel it’s a question a politician should be involved with, but plenty of other politicians clearly do. At a minimum, punting meant Walker passed on an opportunity to explain why he feels those efforts to interfere in science education are misguided and why his stand is more principled.

But, much more realistically, Walker is punting not because he feels the question shouldn’t be answered by politicians, but because he sees lots of political downsides to answering. Politicians had been getting hit with the evolution question since at least 2007, and our initial analysis of it still stands. If you agree with over a century of scientific exploration, you run the risk of alienating a community that has established itself as a reliable contributor of votes to Republican politicians such as Walker. We could see why he would want to avoid that.

Saying you refuse to accept evolution raises valid questions about your willingness to analyze evidence rationally and accept the opinions of people with expertise in a topic. Either that, or it suggests you’re willing to say anything in order to improve your chances of being elected. But punting is effectively the same thing—it suggests you’ll avoid saying anything in order to improve your chances of being elected.

Read the entire article here.

Social Media Metes Out Social (Networking) Justice

Before the age of Facebook and Twitter, if you were to say something utterly stupid, bigoted, sexist or racist among a small group of friends or colleagues it would usually have gone no further. Some members of your audience may have chastised you, while others may have agreed or ignored you. But then the comment would have been largely forgotten.

This is no longer so in our age of social networking and constant inter-connectedness. Our technologies distribute, repeat and amplify our words and actions, which now seem to take on lives of their very own. Love it or hate it — welcome to the age of social networking justice — a 21st century digital pillory.

Say something stupid or do something questionable today — and you’re likely to face a consequential backlash that stretches beyond the present and into your future. Just take the case of Justine Sacco.

From NYT:

As she made the long journey from New York to South Africa, to visit family during the holidays in 2013, Justine Sacco, 30 years old and the senior director of corporate communications at IAC, began tweeting acerbic little jokes about the indignities of travel. There was one about a fellow passenger on the flight from John F. Kennedy International Airport:

“‘Weird German Dude: You’re in First Class. It’s 2014. Get some deodorant.’ — Inner monologue as I inhale BO. Thank God for pharmaceuticals.”

Then, during her layover at Heathrow:

“Chilly — cucumber sandwiches — bad teeth. Back in London!”

And on Dec. 20, before the final leg of her trip to Cape Town:

“Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”

She chuckled to herself as she pressed send on this last one, then wandered around Heathrow’s international terminal for half an hour, sporadically checking her phone. No one replied, which didn’t surprise her. She had only 170 Twitter followers.

Sacco boarded the plane. It was an 11-hour flight, so she slept. When the plane landed in Cape Town and was taxiing on the runway, she turned on her phone. Right away, she got a text from someone she hadn’t spoken to since high school: “I’m so sorry to see what’s happening.” Sacco looked at it, baffled.

Then another text: “You need to call me immediately.” It was from her best friend, Hannah. Then her phone exploded with more texts and alerts. And then it rang. It was Hannah. “You’re the No. 1 worldwide trend on Twitter right now,” she said.

Sacco’s Twitter feed had become a horror show. “In light of @JustineSacco disgusting racist tweet, I’m donating to @care today” and “How did @JustineSacco get a PR job?! Her level of racist ignorance belongs on Fox News. #AIDS can affect anyone!” and “I’m an IAC employee and I don’t want @JustineSacco doing any communications on our behalf ever again. Ever.” And then one from her employer, IAC, the corporate owner of The Daily Beast, OKCupid and Vimeo: “This is an outrageous, offensive comment. Employee in question currently unreachable on an intl flight.” The anger soon turned to excitement: “All I want for Christmas is to see @JustineSacco’s face when her plane lands and she checks her inbox/voicemail” and “Oh man, @JustineSacco is going to have the most painful phone-turning-on moment ever when her plane lands” and “We are about to watch this @JustineSacco bitch get fired. In REAL time. Before she even KNOWS she’s getting fired.”

The furor over Sacco’s tweet had become not just an ideological crusade against her perceived bigotry but also a form of idle entertainment. Her complete ignorance of her predicament for those 11 hours lent the episode both dramatic irony and a pleasing narrative arc. As Sacco’s flight traversed the length of Africa, a hashtag began to trend worldwide: #HasJustineLandedYet. “Seriously. I just want to go home to go to bed, but everyone at the bar is SO into #HasJustineLandedYet. Can’t look away. Can’t leave” and “Right, is there no one in Cape Town going to the airport to tweet her arrival? Come on, Twitter! I’d like pictures #HasJustineLandedYet.”

A Twitter user did indeed go to the airport to tweet her arrival. He took her photograph and posted it online. “Yup,” he wrote, “@JustineSacco HAS in fact landed at Cape Town International. She’s decided to wear sunnies as a disguise.”

By the time Sacco had touched down, tens of thousands of angry tweets had been sent in response to her joke. Hannah, meanwhile, frantically deleted her friend’s tweet and her account — Sacco didn’t want to look — but it was far too late. “Sorry @JustineSacco,” wrote one Twitter user, “your tweet lives on forever.”

Read the entire article here.

There’s Gold in Them Thar Dumpsters

One enterprising person has taken his passion for recycling and reuse to extraordinary lengths. Matt Malone is a professional dumpster diver, making a profitable business from others’ trash. And, there’s another great benefit to his growing business — keeping untold quantities of discarded goods, some of them hazardous, out of our landfills. Ours is a thoroughly wasteful society and sadly our consumer culture still rewards businesses for this wastefulness.

From Wired:

Matt Malone doesn’t mind being called a professional dumpster diver. He tells me this a little after 2 am on the morning of July 7 as we cruise the trash receptacles behind the stores of a shopping center just off the Capital of Texas Highway in Austin. Given the image that conjures, though, it’s worth pointing out that Malone has a pretty good day job, earning a six-figure salary as a security specialist for Slait Consulting. He is also founder of Assero Security, a startup that he says has recently been offered seed money by not one but two separate investors. Nevertheless, the 37-year-old Malone does spend a good many of his off-hours digging through the trash. And the fact is, he earns a sizable amount of money from this activity—more per hour than he makes at his Slait job.

Malone stops his Chevy Avalanche next to the dumpster in back of an Office Depot. Within seconds, he’s out of the truck and sticking his magnetized flashlight to the inside of the dumpster’s wall. He heaves himself up onto the metal rim to lean inside and begins digging through a top layer of cardboard and packing materials. Half a minute later I hear what I will learn is Malone’s version of eureka: “Hell yes! Hell yes!” He comes out with a box containing a complete Uniden Wireless Video Surveillance System—two cameras and a wireless monitor—which normally retails for $419. A quick inspection reveals that it’s all in perfect condition, although someone has clearly opened and repacked it. “A return,” he says, then plunges back into the dumpster.

Ten minutes later, when he’s again behind the wheel of the Avalanche, Malone continues to tell me about the material benefits of dumpster diving. If he were to dedicate himself to the activity as a full-time job, he says, finding various discarded treasures, refurbishing and selling them off, he’s confident he could pull in at least $250,000 a year—there is that much stuff simply tossed into dumpsters in the Austin area. He lists a few recent “recoveries”: vacuums, power tools, furniture, carpeting, industrial machines, assorted electronics. Much of it needs a little love, he says, but a lot of it, like this Uniden system, is in perfect condition.

But, he quickly adds, his foraging isn’t just about dollars. It’s also about the knowledge he acquires and the people he shares it with. He prefers to be known as a “for-profit archaeologist.” After all, archaeologists have always studied garbage. The esteemed William Rathje, who established the Garbage Project at the University of Arizona, observed shortly before his 2012 death that refuse, more than anything else human beings produce, “gives us insight into the long-term values of a civilization.”

As for Malone, the main insight he’s obtained from digging through our civilization’s trash is that most people don’t place a lot of value in value anymore.

Malone started dumpster diving nine years ago, when he was working at a lower-level corporate security job. His employer had assigned him to conduct what’s called a “zero-knowledge attack” on an Austin-based company. “That means you hire me and don’t give me any information about your operation,” Malone explains. “I’m just a random guy who wants to break into your system.” The most effective way to do this was to dig through his client’s trash; many hacks and identity thefts come from information left in dumpsters. Sure enough, after just a couple of weeks of looking through the dumpsters outside the client’s offices, he had amassed a box full of documents, loaded with the confidential information of thousands of customers. (“It made quite an impression” on his client, he recalls.)

But he also discovered something else. One night while doing his research, he decided to poke around in neighboring trash bins, including the dumpster at OfficeMax. Inside he discovered “a whole bunch of printers, discontinued lines that were still in the boxes.” He took the printers home and put them in his garage. But he couldn’t stop wondering what else was out there in the dumpsters of Austin. Before long, he went back out to see what else he could find.

A short and wiry man whose manic enthusiasm and radiant smile lend him a quirky charm, Malone says that at first he looked for items he could use himself, especially in his main passion, building and riding “mini chopper” motorcycles. On a hunch he checked the dumpster behind the Emerson Electric warehouse in an industrial park near his home, where he discovered several discarded motors that would provide enough power to move a mini chopper along at 40 to 50 miles per hour. Then, out of curiosity, he turned his attention to the dumpsters at Home Depot, Harbor Freight, Big Lots, Sears, Best Buy, and a few others. He was astounded at what he found: building materials, power tools, HEPA filters, and a dizzying array of electronics.

At first, Malone mainly used his discoveries for various hobby projects. Along with his mini choppers, he built an electric skateboard, a set of plasma speakers, several 3-D projectors, and a computer that ran while submerged in mineral oil. “People would come over and ask, ‘Man, where’d you get that?’” he recalls. “I’d say, ‘Well, I made it.’ I didn’t say right away that I made it mostly from stuff I got out of dumpsters.” Inevitably his friends would ask to buy his various toys, and—usually already bored with them and having moved on to a new project—he would agree to sell. Even so, his garage soon overflowed, and Malone decided he should make some space by staging a weekend yard sale.

That sale provided several revelations. The biggest was what sold with the drive-by public. “I had all my cool stuff out front, a couple of very nice computers, mini choppers, some high-end printers—the big-ticket stuff—thinking, ‘This is what’s going to make me the money.’” It wasn’t. Instead, people flocked to “the small stuff”: the photo paper and toner he’d pulled out of the dumpsters at OfficeMax and Office Depot, the hand tools he’d found in the trash at Harbor Freight, the CDs from GameStop dumpsters, the assorted seasonal tchotchkes that had been tossed by the employees at Pier 1 and Cost Plus. “I eventually figured out that I had to sell the big stuff on Amazon or Craigslist,” Malone says. But all those small sales added up: By Sunday afternoon he had collected a little more than $3,000 in cash. “And that was when I realized, ‘This has the potential to be something.’”

At the time, Malone explains, he was working for a company called Vintage IT and making only about half of his current salary, so he appreciated the opportunity to augment his income. He began to organize his approach, making daily checks of the various malls and business parks closest to his home to ascertain what days and times dumpsters were most likely filled with desirable items. Within a few weeks he knew exactly when the trash was collected at every store and business on his route so he could time his visits for when the dumpsters were fullest. He also learned to look for stores that were changing locations or—better yet—going out of business. Store remodels were also good targets. “I was learning as I went along and designing a kind of collection system before I even realized that was what I was doing.”

As we drive by a shopping center just off the Mopac Expressway, Malone remembers the weeks when the Circuit City that once anchored this mall was closing. “I went back day after day after day,” he says. “I got brand-new stereos, GPS devices, some really nice cameras, flatscreen TVs. I got a boom box there that was bigger than I am. And what was great was that you could sell it at retail, because it was all still in the boxes.”

Suddenly, Malone spots a huge “yarder” dumpster directly behind Bealls department store—an indication the store may be remodeling. Within moments he has pulled his truck alongside the yarder and used the truck bed to climb in. Wading through the cardboard and bubble wrap, Malone quickly finds three slightly used dress-form mannequins that he is sure can be sold to an owner of one of the pop-up clothing stores that have become popular in Austin. That’s just the beginning, though. During the next 15 minutes, he’s so deep in the bowels of the dumpster that at moments all I can see are his shoulders and the back of his head; he exclaims “Hell yes!” at least a half dozen times. When Malone is finished there are two large stacks of laminated MDF boards and plate-glass panels from discarded store displays in the back of the truck. He can use the boards at a workshop that he maintains in a small business park a couple of minutes from his North Austin home. “These precut boards are really expensive,” Malone says. “That’s money I won’t be spending.” Malone has operated a number of trash-related enterprises out of his shop, often with names like Chinese Scooter Repair.

Malone can get downright philosophical about the empire he’s managed to build out of garbage. “We can only do what we do here because we live in a society where most people have been conditioned to look past what’s right in front of them.”

Read the entire article here.

Creative Destruction


Author Andrew Keen ponders the true value of the internet in his new book The Internet is Not the Answer. Quite rightly, he asserts that billions of consumers have benefited from the convenience and usually lower prices of almost every product imaginable, delivered within a couple of clicks online. But there is a higher price to pay — one that touches on the values we want for our society and the deeper costs to our culture.

From the Guardian:

During every minute of every day of 2014, according to Andrew Keen’s new book, the world’s internet users – all three billion of them – sent 204m emails, uploaded 72 hours of YouTube video, undertook 4m Google searches, shared 2.46m pieces of Facebook content, published 277,000 tweets, posted 216,000 new photos on Instagram and spent $83,000 on Amazon.

By any measure, for a network that has existed recognisably for barely 20 years (the first graphical web browser, Mosaic, was released in 1993), those are astonishing numbers: the internet, plainly, has transformed all our lives, making so much of what we do every day – communicating, shopping, finding, watching, booking – unimaginably easier than it was. A Pew survey in the United States found last year that 90% of Americans believed the internet had been good for them.

So it takes a brave man to argue that there is another side to the internet; that stratospheric numbers and undreamed-of personal convenience are not the whole story. Keen (who was once so sure the internet was the answer that he sank all he had into a startup) is now a thoughtful and erudite contrarian who believes the internet is actually doing untold damage. The net, he argues, was meant to be “power to the people, a platform for equality”: an open, decentralised, democratising technology that liberates as it empowers as it informs.

Instead, it has handed extraordinary power and wealth to a tiny handful of people, while simultaneously, for the rest of us, compounding and often aggravating existing inequalities – cultural, social and economic – whenever and wherever it has found them. Individually, it may work wonders for us. Collectively, it’s doing us no good at all. “It was supposed to be win-win,” Keen declares. “The network’s users were supposed to be its beneficiaries. But in a lot of ways, we are its victims.”

This is not, Keen acknowledges, a very popular view, especially in Silicon Valley, where he has spent the best part of the past 30-odd years after an uneventful north London childhood (the family was in the rag trade). But The Internet is Not the Answer – Keen’s third book (the first questioned the value of user-generated content, the second the point of social media; you get where he’s coming from) – has been “remarkably well received”, he says. “I’m not alone in making these points. Moderate opinion is starting to see that this is a problem.”

What seems most unarguable is that, whatever else it has done, the internet – after its early years as a network for academics and researchers from which vulgar commercial activity was, in effect, outlawed – has been largely about the money. The US government’s decision, in 1991, to throw the nascent network open to private enterprise amounted, as one leading (and now eye-wateringly wealthy) Californian venture capitalist has put it, to “the largest creation of legal wealth in the history of the planet”.

The numbers Keen reels off are eye-popping: Google, which now handles 3.5bn searches daily and controls more than 90% of the market in some countries, including Britain, was valued at $400bn last year – more than seven times General Motors, which employs nearly four times more people. Its two founders, Larry Page and Sergey Brin, are worth $30bn apiece. Facebook’s Mark Zuckerberg, head of the world’s second biggest internet site – used by 19% of people in the world, half of whom access it six days a week or more – is sitting on a similar personal pile, while at $190bn in July last year, his company was worth more than Coca-Cola, Disney and AT&T.

Jeff Bezos of Amazon also has $30bn in his bank account. And even more recent online ventures look to be headed the same way: Uber, a five-year-old startup employing about 1,000 people and once succinctly described as “software that eats taxis”, was valued last year at more than $18bn – roughly the same as Hertz and Avis combined. The 700-staff lodging rental site Airbnb was valued at $10bn in February last year, not far off half as much as the Hilton group, which owns nearly 4,000 hotels and employs 150,000 people. The messaging app WhatsApp, bought by Facebook for $19bn, employs just 55, while the payroll of Snapchat – which turned down an offer of $3bn – numbers barely 20.
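
Put those figures on a common per-employee footing and the mismatch is stark. Here is a back-of-the-envelope sketch in Python, using only the valuations and headcounts quoted above; the Hilton valuation is inferred from the “not far off half as much” comparison, so treat it as a rough assumption:

```python
# Valuation per employee, from the figures quoted in the excerpt.
# Valuations in $bn; headcounts approximate, circa 2014-15.
companies = {
    "Uber":     (18.0,   1_000),
    "Airbnb":   (10.0,     700),
    "WhatsApp": (19.0,      55),   # the price Facebook paid
    "Hilton":   (20.0, 150_000),   # assumption: roughly 2x Airbnb's $10bn
}

for name, (valuation_bn, staff) in companies.items():
    per_head_m = valuation_bn * 1_000 / staff   # $bn total -> $m per employee
    print(f"{name:>8}: ${per_head_m:,.1f}m of valuation per employee")
```

On these rough numbers a WhatsApp employee “carries” roughly 2,500 times the valuation of a Hilton one, which is precisely the winner-takes-all pattern Keen describes next.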

Part of the problem here, argues Keen, is that the digital economy is, by its nature, winner-takes-all. “There’s no inevitable or conspiratorial logic here; no one really knew it would happen,” he says. “There are just certain structural qualities that mean the internet lends itself to monopolies. The internet is a perfect global platform for free-market capitalism – a pure, frictionless, borderless economy … It’s a libertarian’s wet dream. Digital Milton Friedman.”

Nor are those monopolies confined to just one business. Keen cites San Francisco-based writer Rebecca Solnit’s incisive take on Google: imagine it is 100 years ago, and the post office, the phone company, the public libraries, the printing houses, Ordnance Survey maps and the cinemas were all controlled by the same secretive and unaccountable organisation. Plus, he adds, almost as an afterthought: “Google doesn’t just own the post office – it has the right to open everyone’s letters.”

This, Keen argues, is the net economy’s natural tendency: “Google is the search and information monopoly and the largest advertising company in history. It is incredibly strong, joining up the dots across more and more industries. Uber’s about being the transport monopoly; Airbnb the hospitality monopoly; TaskRabbit the labour monopoly. These are all, ultimately, monopoly plays – that’s the logic. And that should worry people.”

It is already having consequences, Keen says, in the real world. Take – surely the most glaring example – Amazon. Keen’s book cites a 2013 survey by the US Institute for Local Self-Reliance, which found that while it takes, on average, a regular bricks-and-mortar store 47 employees to generate $10m in turnover, Bezos’s many-tentacled, all-consuming and completely ruthless “Everything Store” achieves the same with 14. Amazon, that report concluded, probably destroyed 27,000 US jobs in 2012.

“And we love it,” Keen says. “We all use Amazon. We strike this Faustian deal. It’s ultra-convenient, fantastic service, great interface, absurdly cheap prices. But what’s the cost? Truly appalling working conditions; we know this. Deep hostility to unions. A massive impact on independent retail; in books, savage bullying of publishers. This is back to the early years of the 19th century. But we’re seduced into thinking it’s good; Amazon has told us what we want to hear. Bezos says, ‘This is about you, the consumer.’ The problem is, we’re not just consumers. We’re citizens, too.”

Read the entire article here.

Image: Visualization of routing paths through a portion of the Internet. Courtesy of the Opte Project.

Are Most CEOs Talented or Lucky?

According to Harold G. Hamm, founder and CEO of Continental Resources, most CEOs are lucky, not talented. You see, Hamm’s net worth has reached around $18 billion, and in recent divorce filings he claims to have been responsible for generating only around 10 percent of this wealth since his marriage in 1988. Interestingly, even though he made most of the key company appointments and oversaw all the key business decisions, he seems rather reticent to claim much of the company’s success as his own. Strange, then, that his company would compensate him to the tune of around $43 million during 2006-2013 for essentially being a lucky slacker!

This, of course, enables him to minimize the amount owed to his ex-wife. Thus, one has to surmise from these shenanigans that some CEOs are not merely lucky, they’re also stupid.

On a broader note, this does raise the question of why many CEOs are rewarded such extraordinary sums when it’s mostly luck guiding their companies’ progress!

From NYT:

The divorce of the oil billionaire Harold G. Hamm from Sue Ann Arnall has gained attention largely for its outsize dollar amounts. Mr. Hamm, the chief executive and founder of Continental Resources, who was worth more than $18 billion at one point, wrote his ex-wife a check last month for $974,790,317.77 to settle their split. She’s appealing to get more; he’s appealing to pay less.

Yet beyond the staggering sums, the Hamm divorce raises a fundamental question about the wealth of executives and entrepreneurs: How much do they owe their fortunes to skill and hard work, and how much comes from happenstance and luck?

Mr. Hamm, seeking to exploit a wrinkle in divorce law, made the unusual argument that his wealth came largely from forces outside his control, like global oil prices, the expertise of his deputies and other people’s technology. During the nine-week divorce trial, his lawyers claimed that although Mr. Hamm had founded Continental Resources and led the company to become a multibillion-dollar energy giant, he was responsible for less than 10 percent of his personal and corporate success.

Some in the courtroom started calling it the “Jed Clampett defense,” after the lead character in “The Beverly Hillbillies” TV series who got rich after tapping a gusher in his swampland.

In a filing last month supporting his appeal, Mr. Hamm cites the recent drop in oil prices and subsequent 50 percent drop in Continental’s share price and his fortune as further proof that forces outside his control direct his company’s fortunes.

Lawyers for Ms. Arnall argue that Mr. Hamm is responsible for more than 90 percent of his fortune.

While rooted in a messy divorce, the dispute frames a philosophical and ethical debate over inequality and the obligations of the wealthy. If wealth comes mainly from luck or circumstance, many say the wealthy owe a greater debt to society in the form of taxes or charity. If wealth comes from skill and hard work, perhaps higher taxes would discourage that effort.

Sorting out what value is created by luck or skill is a tricky proposition in itself. The limited amount of academic research on the topic, which mainly looks at how executives can influence a company’s value, has often found that broader market forces have a bigger impact on a company’s success than an executive’s actions.

“As we know from the research, the performance of a large firm is due primarily to things outside the control of the top executive,” said J. Scott Armstrong, a professor at the Wharton School at the University of Pennsylvania. “We call that luck. Executives freely admit this — when they encounter bad luck.”

A study of how C.E.O. compensation changed from 1992 to 2011 in response to luck or events beyond the executives’ control showed that their pay was 25 percent higher when luck favored the C.E.O.

Some management experts say the role of luck is nearly impossible to measure because it depends on the particular industry. Oil, for instance, is especially sensitive to outside forces.

“Within any industry, a more talented management team is going to tend to do better,” said Steven Neil Kaplan of the University of Chicago Booth School of Business. “That is why investors and boards of directors look for the best talent to run their companies. That is why company stock prices often move a lot, in both directions, when a C.E.O. dies or a new C.E.O. is hired.”

The Hamm case hinged on a quirk in divorce law known as “active versus passive appreciation.” In Oklahoma, and many other states, if a spouse owns an asset before the marriage, the increase in the value of an asset during marriage is not subject to division if the increase was because of “passive” appreciation. Passive appreciation is when an asset grows on its own because of factors outside either spouse’s control, like land that appreciates without any improvements or passively held stocks. Any value that’s not deemed as “passive” is considered “active” — meaning it increased because of the efforts, skills or funding of a spouse and can therefore be subject to division in a divorce.

The issue has been at the center of some other big divorces. In the 2002 divorce of the Chicago taxi magnate David Markin and Susan Markin, filed in Palm Beach, Fla., Mr. Markin claimed he was “merely a passenger on this corporate ship traveling through the ocean,” according to the judge. But he ruled that Mr. Markin was more like “the captain of the ship. Certainly he benefited by sailing through some good weather. However, he picked the course and he picked the crew. In short, he was directly responsible for everything that happened.” Ms. Markin was awarded more than $30 million, along with other assets.

Mr. Hamm, now 69, also had favorable conditions after founding Continental Resources well before his marriage in 1988 to Sue Ann, then a lawyer at the company. By this fall, when the trial ended, Continental had a market capitalization of over $30 billion; Mr. Hamm’s stake of 68 percent and other wealth exceeded $18 billion.

Their divorce trial was closed to the public, and all but a few of the documents are under seal. Neither Mr. Hamm nor his lawyers or representatives would comment. Ms. Arnall and her spokesman also declined to comment.

According to people with knowledge of the case, however, Mr. Hamm’s chief strategy was to claim most of his wealth as passive appreciation, and therefore not subject to division. During his testimony, the typically commanding Mr. Hamm, who had been the face of the company for decades, said he couldn’t recall certain decisions, didn’t know much about the engineering aspects of oil drilling and didn’t attend critical meetings.

Mr. Hamm’s lawyers calculated that only 5 to 10 percent of his wealth came from his own effort, skill, management or investment. It’s unclear how they squared this argument with his compensation, which totaled $42.7 million from 2006 to 2013, according to Equilar, an executive compensation data company.

Ms. Arnall called more than 80 witnesses — from Continental executives to leading economists like Glenn Hubbard and Kenneth Button — to show how much better Continental had done than its peers and that Mr. Hamm made most or all of the key decisions about the company’s strategy, finances and operations. They estimated that Mr. Hamm was responsible for $14 billion to $17 billion of his $18 billion fortune.

Read the entire article here.


The Paradox That Is Humanity


Fanatical brutality and altruism. Greed and self-sacrifice. Torture and love. Cruelty and remorse. Care and wickedness. These are the paradoxical traits that make us uniquely human. Many people give of themselves, love unconditionally, exhibit kindness, selflessness and compassion at every turn. And yet, describing the immolation, crucifixions and beheadings of fellow humans by humans as inhuman or “bestial” rather misses the point. While some animals maim and kill their own, and even feast on the spoils, humans have risen above all other species to a pinnacle of barbaric behavior that demands that we all continually reflect on our humanity, both good and evil. Sadly, this is not news: persecution of one group by another is encoded in our DNA.

From the Guardian:

It describes itself as “an inclusive school where gospel values underpin a caring and supporting ethos, manifest in care for each individual”. And I have no reason to doubt it. But one of the questions raised by the popularity of Hilary Mantel’s Wolf Hall is whether St Thomas More Catholic School is named after a monster or a saint. With Mantel, gone is the More of heroic humanism popularised by Robert Bolt’s fawning A Man for All Seasons. In its place she reminds us that More was persecutor-in-chief towards those who struggled to see the Bible translated into English and personally responsible for the burning of a number of men who dared question the ultimate authority of the Roman church.

This week’s Wolf Hall episode ended with the death of Middle Temple lawyer James Bainham at Smithfield on 30 April 1532. More tortured Bainham in the Tower of London for questioning the sanctity of Thomas Becket and for speaking out against the financial racket of the doctrine of purgatory that “picked men’s purses”. At first, under the pressure of torture, Bainham recanted his views. But within weeks of being released, Bainham re-asserted them. And so More had him burnt at the stake.

The recent immolation of Jordanian pilot Lieutenant Muadh al-Kasasbeh by Islamic State (Isis) brings home the horrendous reality of what this involves. I watched it on the internet. And I wish I hadn’t. I felt voyeuristic and complicit. And though I justified watching on the grounds that I was going to write about it, and thus (apparently) needed to see the truly horrific footage, I don’t think I was right to do so. As well as seeing things that I will never be able to un-see, I felt morally soiled – as if I had done exactly what Isis had wanted me to do. I mean, if no one ever watched this stuff, they wouldn’t make it.

Afterwards, I wandered down to Smithfield market to get some air. I sat in a posh cafe and tried to picture what the place must have been like when Bainham was killed. Both then and now, death by burning was a staged event, deliberately public, a theatre of cruelty designed for political/religious instruction. In his book on burnings in 16th century England, the historian Eamon Duffy recounts a burning in Dartford in 1555: “‘Thither came … fruiterers wyth horse loades of cherries, and sold them’.” Can you imagine: passing round the cherries as you watch people burn? What sort of creatures are we?

Yes, religion is the common factor here. But if there is no God (as some say) and religion is a purely human phenomenon, then it is humanity that is also in the dock. For when we speak of these acts as “inhuman”, or of the “inhumanity” of Isis, we are surely kidding ourselves: history teaches that human beings are often exactly like this. We are often viciously cruel and without an ounce of pity and, yet, all too often in denial about our basic capacity for wickedness. One cannot be in denial after watching that video.

And yet the thing that it is almost impossible for us to get our heads around is that this capacity for wickedness can also co-exist with an extraordinary capacity for love and care and self-sacrifice. More, of course, is a perfect case in point. As well as being declared a saint, More was famously one of the early humanists, a friend of Erasmus. In his Utopia, he fantasised about a world where people lived together in harmony, with no private property to divide them. He championed female education and (believe it or not) religious toleration.

Robert Bolt may have reflected only one aspect of More’s character, but More did stand up for what he believed in, even to the point of death. And when More was declared a saint in 1935, it was partially a powerful and deliberate witness to German Christians to do the same. And who would have guessed that, within a few years, apparently civilised Europe would return again to the burning of human bodies, this time on an industrial scale. And this time, not in the name of God.

Read the entire article here.

Image: 12th century Byzantine manuscript illustration depicting Byzantine Greeks (Christian/Eastern Orthodox) punishing Cretan Saracens (Muslim) in the 9th century. Courtesy of Madrid Skylitzes / Wikipedia.


24 Hours with the Fox Circus


The Fox Channel is a domain of superlatives; nowhere else in our global media landscape can you find — under one roof — such utter, dumbed-down nonsense; opinionated drivel served up as “news” or “fact”; medieval, Murdochian dogma; misogyny and racism; perversion of science; and sheer journalistic piffle. In the space of a recent 24 hours the channel went from broadcasting the unedited immolation of Jordanian pilot Muadh al-Kasasbeh, which is nothing more than an act of “murder-porn” (gratuitous profiteering and complicity with the murderers), to denouncing the movie Frozen as anti-male propaganda. Oh Jon Stewart, you are such a lucky man to be a contemporary of this network — long may you both reign!

From the Guardian:

If you believe what you hear on Fox News, Disney’s Frozen is nothing but misandrist propaganda.

During Wednesday’s Fox & Friends – the crown jewel program at a network known for its loose interpretation of facts – host Steve Doocy raised awareness about Hollywood’s latest nefarious plot to undermine American masculinity. Doocy took issue with Disney’s Frozen, the wildly successful children’s film released more than a year ago, saying the movie empowers young girls by “turning our men into fools and villains” – an agenda they dubbed the “Frozen effect”.

Penny Young Nance, the CEO of Concerned Women for America – “the women’s group that loves men” – went on: “We want to empower women, but we don’t have to do it at the cost of tearing down men … Men are essential in our society.”

“It would be nice for Hollywood to have more male figures,” Doocy concluded, a wish at odds with nearly every metric for gender equality in Hollywood.

The sentiment has been almost universally condemned on social media.

“If I see one more thinkpiece about how Hollywood is too kind to women and not respectful enough to the male population I just don’t know what I’ll do,” Kevin Fallon wrote at the Daily Beast.

Frozen, which generated more than $1bn in global ticket sales and won the Golden Globe for best animated film, has been hailed as an unexpectedly feminist work from a company not exactly known for empowering depictions of women.

Read the entire article here.

Image: Elsa, Frozen. Courtesy of Disney.

FCC Flexes Title II

Tom Wheeler, chairman of the US Federal Communications Commission (FCC), was once beholden to the pseudo-monopolies that are cable and wireless providers. Now, he seems to be fighting to keep the internet fair, neutral and open — for consumers. Hard to believe. But, let’s face it, if Comcast and other telecoms behemoths are against Wheeler’s proposal then it must be good for consumers.

From Wired:

After more than a decade of debate and a record-setting proceeding that attracted nearly 4 million public comments, the time to settle the Net Neutrality question has arrived. This week, I will circulate to the members of the Federal Communications Commission (FCC) proposed new rules to preserve the internet as an open platform for innovation and free expression. This proposal is rooted in long-standing regulatory principles, marketplace experience, and public input received over the last several months.

Broadband network operators have an understandable motivation to manage their network to maximize their business interests. But their actions may not always be optimal for network users. The Congress gave the FCC broad authority to update its rules to reflect changes in technology and marketplace behavior in a way that protects consumers. Over the years, the Commission has used this authority to the public’s great benefit.

The internet wouldn’t have emerged as it did, for instance, if the FCC hadn’t mandated open access for network equipment in the late 1960s. Before then, AT&T prohibited anyone from attaching non-AT&T equipment to the network. The modems that enabled the internet were usable only because the FCC required the network to be open.

Companies such as AOL were able to grow in the early days of home computing because these modems gave them access to the open telephone network.

I personally learned the importance of open networks the hard way. In the mid-1980s I was president of a startup, NABU: The Home Computer Network. My company was using new technology to deliver high-speed data to home computers over cable television lines. Across town Steve Case was starting what became AOL. NABU was delivering service at the then-blazing speed of 1.5 megabits per second—hundreds of times faster than Case’s company. “We used to worry about you a lot,” Case told me years later.

But NABU went broke while AOL became very successful. Why that is highlights the fundamental problem with allowing networks to act as gatekeepers.

While delivering better service, NABU had to depend on cable television operators granting access to their systems. Steve Case was not only a brilliant entrepreneur, but he also had access to an unlimited number of customers nationwide who only had to attach a modem to their phone line to receive his service. The phone network was open whereas the cable networks were closed. End of story.

The phone network’s openness did not happen by accident, but by FCC rule. How we precisely deliver that kind of openness for America’s broadband networks has been the subject of a debate over the last several months.

Originally, I believed that the FCC could assure internet openness through a determination of “commercial reasonableness” under Section 706 of the Telecommunications Act of 1996. While a recent court decision seemed to draw a roadmap for using this approach, I became concerned that this relatively new concept might, down the road, be interpreted to mean what is reasonable for commercial interests, not consumers.

That is why I am proposing that the FCC use its Title II authority to implement and enforce open internet protections.

Using this authority, I am submitting to my colleagues the strongest open internet protections ever proposed by the FCC. These enforceable, bright-line rules will ban paid prioritization, and the blocking and throttling of lawful content and services. I propose to fully apply—for the first time ever—those bright-line rules to mobile broadband. My proposal assures the rights of internet users to go where they want, when they want, and the rights of innovators to introduce new products without asking anyone’s permission.

All of this can be accomplished while encouraging investment in broadband networks. To preserve incentives for broadband operators to invest in their networks, my proposal will modernize Title II, tailoring it for the 21st century, in order to provide returns necessary to construct competitive networks. For example, there will be no rate regulation, no tariffs, no last-mile unbundling. Over the last 21 years, the wireless industry has invested almost $300 billion under similar rules, proving that modernized Title II regulation can encourage investment and competition.

Congress wisely gave the FCC the power to update its rules to keep pace with innovation. Under that authority my proposal includes a general conduct rule that can be used to stop new and novel threats to the internet. This means the action we take will be strong enough and flexible enough not only to deal with the realities of today, but also to establish ground rules for the as yet unimagined.

The internet must be fast, fair and open. That is the message I’ve heard from consumers and innovators across this nation. That is the principle that has enabled the internet to become an unprecedented platform for innovation and human expression. And that is the lesson I learned heading a tech startup at the dawn of the internet age. The proposal I present to the commission will ensure the internet remains open, now and in the future, for all Americans.

Read the entire article here.

Image: Official US FCC government seal.

Je Suis Muadh al-Kasasbeh

The abhorrent, brutish murder of Jordanian pilot Muadh al-Kasasbeh by callous, cowardly murderers must take some kind of hideous prize. As some commentators have already noted, this act is not a cruel new invention by depraved psychopaths; it represents a move backwards towards humanity’s sad, vicious past, played out for our social video age.

From the Guardian:

Images of a Jordanian pilot being burned alive by the militants of Islamic State (Isis) began to filter on to social media and mainstream news sites on Monday. As with beheadings and other brutal acts carried out by the group in the past, there were calls not to share the video or stills of it, out of respect for the dead pilot and his family and in order not to further publicise the terrorists’ message. But it seems the details were so gruesome that many couldn’t help but watch and share.

I refused to look (I never do: it feels too much like giving Isis the attention it craves). But that didn’t stop others trying to tell me in vivid detail what the video showed. Someone even said it was “Bond villain-like”. Isis, it seems, has created a whole new kind of murderous cinematic experience.

Some internet users clearly find the unrelenting goriness of it all captivating – stonings, decapitations, throwing people off tall buildings, sticking severed heads on spikes. Perhaps there’s a compulsion to see just how far Isis will go. But the very act of choosing to witness these things makes us, in some way, complicit.

Media organisations face a particular dilemma, as the atrociousness arguably makes the crimes even more newsworthy. But any decision to transmit these images takes us into difficult territory. When Fox News posts all of the footage with the warning “extremely graphic video” attached, one could be forgiven for thinking that a steadfast commitment to truth-telling isn’t the only factor at play. But these videos are designed to be a grotesque form of clickbait. Making them available to ever-wider audiences only helps the terrorists achieve their traffic targets.

For some, displaying the video is not only a journalistic virtue. Watching it is somehow necessary to drive the full horror of Isis home. Piers Morgan, the former editor of the Daily Mirror, wrote in the Daily Mail how he couldn’t help but give in to the impulse to click, but is “glad” he did so because he now knows “what these monsters are capable of”. I am not sure how their nature is news to him. Did the rape, enslavement and summary execution of thousands of people and the murder of hostages not give it away?

He even imagines it could help win the battle of ideas. “If any Muslim remains in any doubt as to whether this is the right time to stand up and cry ‘Not in my name or my religion!’, then I suggest they too watch the video.” Where is this Muslim world full of doubt as to whether Isis is an enemy?

Morgan suggests that the fact the latest victim is a Sunni Arab means some sort of Rubicon has been crossed. This betrays his view that there is widespread implicit support for Isis among Muslims because they oppose the west, and that this video will shake them out of their complacency. Morgan helpfully adds: “This is your war.”

Most Muslims recoil in horror at the thought of Isis, and don’t need a video to help them along. Isis is playing a game of braggadocio and provocation, dressing it up in the language of prisoner exchanges and execution, as though it really is the state it claims to be. Anyone who views and disseminates these videos is playing their assigned part in the killers’ script. The crime doesn’t end with the death of the victim: the video and the process of watching and reacting to it are extensions of the terrorist act.

Thousands have been killed, off camera, in equally brutal ways, but these films allow Isis to revel in the toe-curling revulsion they inevitably provoke. It wants to generate just the kind of reaction that Morgan felt: he claims he was seized by “such uncontrollable rage that no amount of reasonable argument will ever temper it”.

But such videos turn the internet into a grisly public square where we all gather and watch in horror, then disband, unwittingly participating in a macabre cycle of action and reaction.

Isis has certainly murdered foreign hostages, Yezidis and members of other ethnic and religious groups. But the overwhelming majority of those targeted have been Muslims. No one is in any doubt whose war this is, or that Isis is capable of the stuff of nightmares. Films of its crimes are superfluous and risk distracting us from the continual suffering of those who live under it. So why keep looking?

Read the entire article here.

World Population: 100

Take today’s world and shrink its population down to just 100 people. Then apply a range of global measures to this group. Voila! You get a fascinating view of humanity at a scale that your brain can comprehend. For instance, of the 100 people: 48 live on less than $2 per day, 93 do not have a college degree, 16 are undernourished or starving, 23 have no shelter, and 17 are illiterate.
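
The arithmetic behind such infographics is simple proportional scaling, and it is easy to reproduce. Below is a minimal sketch in Python: the percentages are the ones quoted above, while the 7.2 billion world-population figure is an assumption of mine (a rough mid-2010s estimate), used only to run the calculation in reverse.

```python
# Scale global measures down to a "village" of 100 people.
# Shares are the figures quoted above; WORLD_POP is an assumed
# mid-2010s estimate, used only to show the implied global counts.

WORLD_POP = 7_200_000_000
VILLAGE = 100

shares = {
    "live on less than $2 per day": 0.48,
    "do not have a college degree": 0.93,
    "are undernourished or starving": 0.16,
    "have no shelter": 0.23,
    "are illiterate": 0.17,
}

for label, share in shares.items():
    villagers = round(share * VILLAGE)    # people out of 100
    worldwide = round(share * WORLD_POP)  # implied global count
    print(f"{villagers:>3} of {VILLAGE} {label} (~{worldwide:,} people worldwide)")
```

Rounding to whole villagers is what gives the infographic its punch: fractions of a person disappear, but the proportions survive.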

The infographic is the work of graphic designer Jack Hagley. You can check out the infographic and more of Hagley’s work here.

Infographic courtesy of Jack Hagley.

A Higher Purpose

In a fascinating essay, excerpted below, Michael Malone wonders whether the tech gurus of Silicon Valley should be solving bigger problems. We see venture capitalists scrambling over one another to find the next viral mobile app, perhaps one that automatically writes your tweets, or one that vibrates your smartphone if you say too many bad words. Should our capital markets, which now seem to have an attention span of 15 seconds, reward the so-called innovators behind these so-called essential apps with company valuations in the millions or even billions?

Shouldn’t Silicon Valley be tackling the hard problems? Wouldn’t humanity be better served, not by yet another killer SnapChat-replacement app, but by more efficient reverse osmosis; treatments that mitigate Alzheimer’s (and sundry other chronic ailments); progress on alternative energy sources and more efficient energy sinks; next-generation antibiotics; ridding the world of land-mines; growing and delivering nutritious food to those who need it most? Admittedly, these are hard problems. But isn’t that the point?

From Technology Review:

The view from Mike Steep’s office on Palo Alto’s Coyote Hill is one of the greatest in Silicon Valley.

Beyond the black and rosewood office furniture, the two large computer monitors, and three Indonesian artifacts to ward off evil spirits, Steep looks out onto a panorama stretching from Redwood City to Santa Clara. This is the historic Silicon Valley, the birthplace of Hewlett-Packard and Fairchild Semiconductor, Intel and Atari, Netscape and Google. This is the home of innovations that have shaped the modern world. So is Steep’s employer: Xerox’s Palo Alto Research Center, or PARC, where personal computing and key computer-networking technologies were invented, and where he is senior vice president of global business operations.

And yet Mike Steep is disappointed at what he sees out the windows.

“I see a community that acts like it knows where it’s going, but that seems to have its head in the sand,” he says. He gestures towards the Hewlett-Packard headquarters a few blocks away and Hoover Tower at Stanford University. “This town used to think big—the integrated circuit, personal computers, the Internet. Are we really leveraging all that intellectual power and creativity creating Instagram and dating apps? Is this truly going to change the world?”

After spending years at Microsoft, HP, and Apple, Steep joined PARC in 2013 to help the legendary ideas factory better capitalize on its work. As part of the job, he travels around the world visiting R&D executives in dozens of big companies, and increasingly he worries that the Valley will become irrelevant to them. Steep is one of 22 tech executives on a board the mayor of London set up to promote a “smart city”; they advise officials on how to allocate hundreds of millions of pounds for projects that would combine physical infrastructure such as new high-speed rail with sensors, databases, and analytics. “I know for a fact that China and an array of other countries are chasing this project, which will be the template for scores of similar big-city infrastructure projects around the world in years to come,” Steep says. “From the U.S.? IBM. From Silicon Valley? Many in England ask if anyone here has even heard of the London subway project. That’s unbelievable. Why don’t we leverage opportunities like this here in the Valley?”

Steep isn’t alone in asking whether Silicon Valley is devoting far too many resources to easy opportunities in mobile apps and social media at the expense of attacking bigger problems in energy, medicine, and transportation (see Q&A: Peter Thiel). But if you put that argument to many investors and technologists here, you get a reasonable comeback: has Silicon Valley really ever set out to directly address big problems? In fact, the classic Valley approach has been to size up which technologies it can quickly and ambitiously advance, and then let the world make of them what it will. That is how we got Facebook and Google, and it’s why the Valley’s clean-tech affair was a short-lived mismatch. And as many people point out with classic Silicon Valley confidence, the kind of work that made the area great is still going on in abundance.

The next wave

A small group of executives, surrounded by hundreds of bottles of wine, sits in the private dining room at Bella Vita, an Italian restaurant in Los Altos’s picturesque downtown of expensive tiny shops. Within a few miles, one can find the site of the original Fairchild Semiconductor, Steve Jobs’s house, and the saloon where Nolan Bushnell set up the first Atari game. The host of this gathering is Carl Guardino, CEO of the Silicon Valley Leadership Group, an industry association dedicated to the economic health of the Valley. The 400 organizations that belong to the group are mostly companies that were founded long before the mobile-app craze; only 10 percent are startups. That is evident at this dinner, to which Guardino has invited three of his board members: Steve Berglund, CEO of Trimble, a maker of GPS equipment; Tom Werner, CEO of the solar provider SunPower; and Greg Becker, CEO of Silicon Valley Bank.

These are people who, like Steep, spend much of their time meeting with people in governments and other companies. Asked whether the Valley is falling out of touch with what the world really needs, each disagrees, vehemently. They are almost surprised by the question. “This is the most adaptive and flexible business community on the planet,” says Becker. “It is always about innovation—and going where the opportunity leads next. If you’re worried that the Valley is overpursuing one market or another, then just wait a while and it will change direction again. That’s what we are all about.”

“This is the center of world capitalism, and capitalism is always in flux,” Werner adds. “Are there too many social-networking and app companies out there right now? Probably. But what makes you think it’s going to stay that way for long? We have always undergone corrections. It’s the nature of who we are … But we’ll come out stronger than ever, and in a whole different set of markets and new technologies. This will still be the best place on the planet for innovation.”

Berglund contends that a generational change already under way will reduce the emphasis on apps. “Young people don’t seem to care as much about code as their generational elders,” he says. “They want to build things—stuff like robots and drones. Just go to the Maker Faire and watch them. They’re going to take this valley in a whole different direction.”

Berglund could be right. In the first half of 2014, according to CB Insights, Internet startups were the leading recipient of venture investment in San Francisco and Silicon Valley (the area got half of the U.S. total; New York was second at 10 percent). But investment in the Internet sector accounted for 59 percent of the total, down from a peak of 68 percent in 2011.

Doug Henton, who heads the consulting firm Collaborative Economics and oversaw an upcoming research report on the state of the Valley, argues that since 1950 the area has experienced five technological waves. Each has lasted about 10 to 20 years and encompassed a frenzy followed by a crash and shakeout and then a mature “deployment period.” Henton has identified these waves as defense (1950s and 1960s), integrated circuits (1960s and 1970s), personal computers (1970s and 1980s), Internet (1990s), and social media (2000s and 2010s). By these lights, the social-media wave, however dominant it is in the public eye, soon may be replaced by another wave. Henton suggests that it’s likely to involve the combination of software, hardware, and sensors in wearable devices and the “Internet of things.”

Read the entire essay here.

MondayMap: Bro or Dude Country?

If you’re a male in Texas with one or more BFFs, chances are you refer to each of them as “bro”. If you and your BFFs hang out in the Deep South, you’re more likely to call them “fella”. Fans of The Big Lebowski will be glad to hear that “dude” lives on, though mostly in California, the Southwestern US, and around the Great Lakes.

See more maps of bros, fellas, dudes, and pals at Frank Jacobs’s blog here.

Image courtesy of Frank Jacobs / Jack Grieve and Diansheng Guo.

Silicon Death Valley

Have you ever wondered what happens to the 99 percent of Silicon Valley startups that don’t make billionaires (or even millionaires) of their founders? It’s not all milk and honey in the land of sunshine. After all, for every Google or Facebook there are hundreds of humiliating failures: think Webvan, Boo.com, Pets.com, Beautyjungle.com, Boxman, Flooz, and eToys.

The valley’s venture capitalists tend to bury their business failures rather quietly, careful not to tarnish their reputations as all-knowing, infallible futurists. From the ashes of these failures, some employees move on to well-established corporate serfdom, while others find fresh challenges at new startups. But there is a fascinating middle ground between success and failure: an entrepreneurial twilight zone populated by zombie businesses.

From the Guardian:

It is probably Silicon Valley’s most striking mantra: “Fail fast, fail often.” It is recited at technology conferences, pinned to company walls, bandied in conversation.

Failure is not only invoked but celebrated. Entrepreneurs give speeches detailing their misfires. Academics laud the virtue of making mistakes. FailCon, a conference about “embracing failure”, launched in San Francisco in 2009 and is now an annual event, with technology hubs in Barcelona, Tokyo, Porto Alegre and elsewhere hosting their own versions.

While the rest of the world recoils at failure, in other words, technology’s dynamic innovators enshrine it as a rite of passage en route to success.

But what about those tech entrepreneurs who lose – and keep on losing? What about those who start one company after another, refine pitches, tweak products, pivot strategies, reinvent themselves … and never succeed? What about the angst masked behind upbeat facades?

Silicon Valley is increasingly asking such questions, even as the tech boom rewards some startups with billion-dollar valuations, sprinkling stardust on founders who talk of changing the world.

“It’s frustrating if you’re trying and trying and all you read about is how much money Airbnb and Uber are making,” said Johnny Chin, 28, who endured three startup flops but is hopeful for his fourth attempt. “The way startups are portrayed, everything seems an overnight success, but that’s a disconnect from reality. There can be a psychic toll.”

It has never been easier or cheaper to launch a company in the hothouse of ambition, money and software that stretches from San Francisco to Cupertino, Mountain View, Menlo Park and San Jose.

In 2012 the number of seed investment deals in US tech reportedly more than tripled, to 1,700, from three years earlier. Investment bankers are quitting Wall Street for Silicon Valley, lured by hopes of a cooler and more creative way to get rich.

Most startups fail. However, many entrepreneurs still overestimate the chances of success – and underestimate the cost of failure.

Some estimates put the failure rate at 90% – on a par with small businesses in other sectors. A similar proportion of alumni from Y Combinator, a legendary incubator which mentors bright prospects, are said to also struggle.

Companies typically die around 20 months after their last financing round and after having raised $1.3m, according to a study by the analytics firm CB Insights titled The RIP Report – startup death trends.

Failure is difficult to quantify because it does not necessarily mean liquidation. Many startups limp on for years, ignored by the market but sustained by founders’ savings or investors.

“We call them the walking dead,” said one manager at a tech behemoth, who requested anonymity. “They don’t necessarily die. They putter along.”

Software engineers employed by such zombies face a choice. Stay, in the hope that the company will take off and turn stock options into gold. Or quit and take one of the plentiful jobs at other startups or at giants like Apple and Google.

Founders face a more agonising dilemma: keep working 100-hour weeks, keep telling employees and investors that the dream is alive and the metrics are improving, and hope it’s true – or pull the plug.

The loss aversion principle – the human tendency to strongly prefer avoiding losses to acquiring gains – tilts many towards the former, said Bruno Bowden, a former engineering manager at Google who is now a venture investor and entrepreneur.

“People will do a lot of irrational things to avoid losing even if it’s to their detriment. You push and push and exhaust yourself.”
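
The loss-aversion principle Bowden invokes has a standard formalisation in prospect theory. As a rough illustration, here is a minimal sketch in Python using Kahneman and Tversky’s classic parameter estimates; the function and numbers below are textbook values, not anything taken from the article or from Bowden.

```python
# Prospect-theory value function (Tversky & Kahneman's 1992 estimates).
# Losses are weighted roughly 2.25x more heavily than equivalent gains.

ALPHA = 0.88    # diminishing sensitivity to magnitude
LAMBDA = 2.25   # loss-aversion coefficient

def subjective_value(x: float) -> float:
    """Felt value of a gain (x >= 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print(subjective_value(100))   # ~57.5: what a $100 gain feels like
print(subjective_value(-100))  # ~-129.4: what a $100 loss feels like
```

With losses weighing more than twice as heavily as gains, folding a startup registers as a far larger subjective hit than the equivalent upside ever felt good, which is one way to read the “push and push” behaviour described above.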

Silicon Valley wannabes tell origin fables of startup founders who maxed out credit cards before dazzling Wall Street, the same way Hollywood’s struggling actors find solace in the fact Brad Pitt dressed as a chicken for El Pollo Loco before his breakthrough.

“It’s painful to be one of the walking dead. You lie to yourself and mask what’s not working. You amplify little wins,” said Chin, who eventually abandoned startups which offered micro, specialised versions of Amazon and Yelp.

That startup founders were Silicon Valley’s “cool kids”, glamorous buccaneers compared to engineers and corporate drones, could make failure tricky to recognise, let alone accept, he said. “People are very encouraging. Everything is amazing, cool, awesome. But then they go home and don’t use your product.”

Chin is bullish about his new company, Bannerman, an Uber-type service for event security and bodyguards, and has no regrets about rolling the tech dice. “I love what I do. I couldn’t do anything else.”

Read the entire story here.

Image: Boo.com, 1999. Courtesy of the Wayback Machine, Internet Archive.

Universal Amniotic Fluid

Another day, another physics paper describing the origin of the universe. Little wonder: since the development of general relativity and quantum mechanics, two mutually incompatible descriptions of our reality, theoreticians have been scurrying to come up with a grand theory, a rapprochement of sorts. This one describes the universe as a quantum fluid, perhaps made up of hypothesized gravitons.

From Nature Asia:

The prevailing model of cosmology, based on Einstein’s theory of general relativity, puts the universe at around 13.8 billion years old and suggests it originated from a “singularity” – an infinitely small and dense point – at the Big Bang.

To understand what happened inside that tiny singularity, physicists must marry general relativity with quantum mechanics – the laws that govern small objects. Applying both of these disciplines has challenged physicists for decades. “The Big Bang singularity is the most serious problem of general relativity, because the laws of physics appear to break down there,” says Ahmed Farag Ali, a physicist at Zewail City of Science and Technology, Egypt.

In an effort to bring together the laws of quantum mechanics and general relativity, and to solve the singularity puzzle, Ali and Saurya Das, a physicist at the University of Lethbridge in Alberta, Canada, employed an equation that predicts the development of singularities in general relativity. That equation had been developed by Das’s former professor, Amal Kumar Raychaudhuri, when Das was an undergraduate student at Presidency University in Kolkata, India, so Das was particularly familiar with, and fascinated by, it.
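
For the curious, the classical Raychaudhuri equation, given here in one standard textbook form (general background, not a formula quoted in the article), tracks the expansion $\theta$ of a bundle of geodesics:

$$\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2} - \sigma_{ab}\sigma^{ab} + \omega_{ab}\omega^{ab} - R_{ab}u^{a}u^{b}$$

Here $\sigma_{ab}$ is the shear of the bundle, $\omega_{ab}$ its rotation, $R_{ab}$ the Ricci tensor, and $u^{a}$ the tangent to the geodesics. When the right-hand side stays negative, $\theta$ runs away to minus infinity in finite time and the geodesics focus to a point; that focusing is how the equation “predicts the development of singularities”, and it is the behaviour that the quantum corrections described below turn out to remove.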

When Ali and Das made small quantum corrections to the Raychaudhuri equation, they realised it described a fluid, made up of small particles, that pervades space. Physicists have long believed that a quantum version of gravity would include a hypothetical particle, called the graviton, which generates the force of gravity. In their new model — which will appear in Physics Letters B in February — Ali and Das propose that such gravitons could form this fluid.

To understand the origin of the universe, they used this corrected equation to trace the behaviour of the fluid back through time. Surprisingly, they found that it did not converge into a singularity. Instead, the universe appears to have existed forever. Although it was smaller in the past, it never quite crunched down to nothing, says Das.

“Our theory serves to complement Einstein’s general relativity, which is very successful at describing physics over large distances,” says Ali. “But physicists know that to describe short distances, quantum mechanics must be accommodated, and the quantum Raychaudhuri equation is a big step towards that.”

The model could also help solve two other cosmic mysteries. In the late 1990s, astronomers discovered that the expansion of the universe is accelerating due to the presence of a mysterious dark energy, the origin of which is not known. The model has the potential to explain it, since the fluid creates a minor but constant outward force that expands space. “This is a happy offshoot of our work,” says Das.

Astronomers also now know that most matter in the universe is in an invisible mysterious form called dark matter, only perceptible through its gravitational effect on visible matter such as stars. When Das and a colleague set the mass of the graviton in the model to a small level, they could make the density of their fluid match the universe’s observed density of dark matter, while also providing the right value for dark energy’s push.

Read the entire article here.

True “False Memory”

Apparently it is surprisingly easy to convince people that they remember committing a crime, or some other act, that never happened. It makes one wonder how many of the roughly 2 million people in US prisons are incarcerated on the strength of such false memories, whether held by inmates or by witnesses.

From ars technica:

The idea that memories are not as reliable as we think they are is disconcerting, but it’s pretty well-established. Various studies have shown that participants can be persuaded to create false childhood memories—of being lost in a shopping mall or hospitalized, or even highly implausible scenarios like having tea with Prince Charles.

The creation of false memories has obvious implications for the legal system, as it gives us reasons to distrust both eyewitness accounts and confessions. It’s therefore important to know exactly what kinds of false memories can be created, what influences the creation of a false memory, and whether false recollections can be distinguished from real ones.

A recent paper in Psychological Science found that 71 percent of participants exposed to certain interview techniques developed false memories of having committed a crime as a teenager. In reality, none of these people had experienced contact with the police during the age bracket in question.

After establishing a pool of potential participants, the researchers sent out questionnaires to the caregivers of these individuals. They eliminated any participants who had been involved in some way with an assault or theft, or had other police contact between the ages of 11 and 14. They also asked the caregivers to describe in detail a highly emotional event that the participant had experienced at this age. The caregivers were asked not to discuss the content of the questionnaire with the participants.

The 60 eligible participants were divided into two groups: one that would be given false memories of committing an assault, theft, or assault with a weapon, and another that would be provided with false memories of another emotional event—an injury, an attack by a dog, or the loss of a large sum of money. In the first of three interviews with each participant, the interviewer presented the true memory that had been provided by the caregiver. Once the interviewer’s credibility and knowledge of the participant’s background had been established, the false memory was presented.

For both kinds of memory, the interviewer gave the participant “cues”, such as their age at the time, the people who had been involved, and the time of year. Participants were then asked to recall the details of what had happened. No participant recalled the false event the first time it was mentioned (which would have rung alarm bells); instead, they were reassured that people can often uncover memories like these through effort.

A number of tactics were used to induce the false memory. Social pressure was applied to encourage recall of details, the interviewer attempted to build a rapport with the participants, and the participants were told that their caregivers had corroborated the facts. They were also encouraged to use visualization techniques to “uncover” the memory.

In each of the three interviews, participants were asked to provide as many details as they could for both events. After the final interview, they were informed that the second memory was false, and asked whether they had really believed the events had occurred. They were also asked to rate how surprised they were to find out that it was false. Only participants who answered that they had genuinely believed the false memory, and who could give more than ten details of the event, were classified as having a true false memory.

Of the participants in the group with criminal false stories, 71 percent developed a “true” false memory. The group with non-criminal false stories was not significantly different, with 77 percent of participants classified as having a false memory. The details participants provided for their false memories did not differ significantly in either quality or quantity from their true memories.

This study is only a beginning, and there is still a great deal of work to be done. There are a number of factors that couldn’t be controlled for but which may have influenced the results. For instance, the researchers suggest that, since only one interviewer was involved, her individual characteristics may have influenced the results, raising the question of whether only certain kinds of interviewers can achieve these effects. It isn’t clear whether participants were fully honest about having believed in the false memory, since they could have just been trying to cooperate; the results could also have been affected by the fact that there were no negative consequences to telling the false story.

Read the entire article here.

Focus on Process, Not Perfect Grades

If you are a parent of a school-age child, then it is highly likely that you have, on multiple occasions, chastised him or her and withheld privileges over poor grades. It is also likely that you have rewarded the same child for being smart at math or for having Picasso-like artistic talent. I have done this myself. But there is a better way to nurture young minds, and it is by “telling stories about achievements that result from hard work.”

From Scientific American:

A brilliant student, Jonathan sailed through grade school. He completed his assignments easily and routinely earned As. Jonathan puzzled over why some of his classmates struggled, and his parents told him he had a special gift. In the seventh grade, however, Jonathan suddenly lost interest in school, refusing to do homework or study for tests. As a consequence, his grades plummeted. His parents tried to boost their son’s confidence by assuring him that he was very smart. But their attempts failed to motivate Jonathan (who is a composite drawn from several children). Schoolwork, their son maintained, was boring and pointless.

Our society worships talent, and many people assume that possessing superior intelligence or ability—along with confidence in that ability—is a recipe for success. In fact, however, more than 35 years of scientific investigation suggests that an overemphasis on intellect or talent leaves people vulnerable to failure, fearful of challenges and unwilling to remedy their shortcomings.

The result plays out in children like Jonathan, who coast through the early grades under the dangerous notion that no-effort academic achievement defines them as smart or gifted. Such children hold an implicit belief that intelligence is innate and fixed, making striving to learn seem far less important than being (or looking) smart. This belief also makes them see challenges, mistakes and even the need to exert effort as threats to their ego rather than as opportunities to improve. And it causes them to lose confidence and motivation when the work is no longer easy for them.

Praising children’s innate abilities, as Jonathan’s parents did, reinforces this mind-set, which can also prevent young athletes or people in the workforce and even marriages from living up to their potential. On the other hand, our studies show that teaching people to have a “growth mind-set,” which encourages a focus on “process” (consisting of personal effort and effective strategies) rather than on intelligence or talent, helps make them into high achievers in school and in life.

The Opportunity of Defeat

I first began to investigate the underpinnings of human motivation—and how people persevere after setbacks—as a psychology graduate student at Yale University in the 1960s. Animal experiments by psychologists Martin Seligman, Steven Maier and Richard Solomon, all then at the University of Pennsylvania, had shown that after repeated failures, most animals conclude that a situation is hopeless and beyond their control. After such an experience, the researchers found, an animal often remains passive even when it can effect change—a state they called learned helplessness.

People can learn to be helpless, too, but not everyone reacts to setbacks this way. I wondered: Why do some students give up when they encounter difficulty, whereas others who are no more skilled continue to strive and learn? One answer, I soon discovered, lay in people’s beliefs about why they had failed.

In particular, attributing poor performance to a lack of ability depresses motivation more than does the belief that lack of effort is to blame. In 1972, when I taught a group of elementary and middle school children who displayed helpless behavior in school that a lack of effort (rather than lack of ability) led to their mistakes on math problems, the kids learned to keep trying when the problems got tough. They also solved many more problems even in the face of difficulty. Another group of helpless children who were simply rewarded for their success on easier problems did not improve their ability to solve hard math problems. These experiments were an early indication that a focus on effort can help resolve helplessness and engender success.

Subsequent studies revealed that the most persistent students do not ruminate about their own failure much at all but instead think of mistakes as problems to be solved. At the University of Illinois in the 1970s I, along with my then graduate student Carol Diener, asked 60 fifth graders to think out loud while they solved very difficult pattern-recognition problems. Some students reacted defensively to mistakes, denigrating their skills with comments such as “I never did have a good rememory,” and their problem-solving strategies deteriorated.

Others, meanwhile, focused on fixing errors and honing their skills. One advised himself: “I should slow down and try to figure this out.” Two schoolchildren were particularly inspiring. One, in the wake of difficulty, pulled up his chair, rubbed his hands together, smacked his lips and said, “I love a challenge!” The other, also confronting the hard problems, looked up at the experimenter and approvingly declared, “I was hoping this would be informative!” Predictably, the students with this attitude outperformed their cohorts in these studies.

Read the entire article here.

The Great Unknown: Consciousness

Much has been written about consciousness, in the humanities and in scientific journals alike. Scholars continue to probe, pontificate, and theorize. And yet we seem to know more about the ocean depths and the cosmos than we do about that interminable, self-aware inner voice sitting behind our eyes.

From the Guardian:

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Read the entire story here.

Image courtesy of Google Search.

Feminism in Saudi Arabia? Hypocrisy in the West!

We are constantly reminded of the immense struggle that is humanity’s progress. Often it seems like one step forward and several back. Cultural relativism and hypocrisy continue to run rampant in a world that celebrates selfies and serfdom.

Oh, and in case you haven’t heard: the rulers of Saudi Arabia are feminists. But then again, so too are the white males who control most of the power, wealth, media and political machinery in the West.

From the Guardian:

Christine Lagarde, the first woman to head the IMF, has paid tribute to the late King Abdullah of Saudi Arabia. He was a strong advocate of women, she said. This is almost certainly not what she thinks. She even hedged her remarks about with qualifiers like “discreet” and “appropriate”. There are constraints of diplomacy and obligations of leadership and navigating between them can be fraught. But this time there was only one thing to say. Abdullah led a country that abuses women’s rights, and indeed all human rights, in a way that places it beyond normal diplomacy.

The constraints and restrictions on Saudi women are too notorious and too numerous to itemise. Right now, two women are in prison for the offence of trying to drive over the border into Saudi Arabia. It is not just the ban on driving. There is also the ban on going out alone, the ban on voting, the death penalty for adultery, and the total obliteration of public personality – almost of a sense of existence – by the obligatory veil. And there are the terrible punishments meted out to those who infringe these rules, which are not written down but “interpreted” – Islam mediated through the conventions of a deeply conservative people.

Lagarde is right. King Abdullah did introduce reforms. Women can now work almost anywhere they want, although their husband, brother or father will have to drive them there (and the children to school). They can now not just study law but practise as lawyers. There are women on the Sharia council, and it was through their efforts that domestic violence has been criminalised. But enforcement is in the hands of courts that do not necessarily recognise the change. These look like reforms with all the substance of a Potemkin village, a flimsy structure to impress foreign opinion.

Pressure for change is driven by women themselves, exploiting social media through actions that range from small, brave gestures of defiance – posting images of women at the wheel (ovaries, despite men’s fears, apparently undamaged) – to large-scale subversive gestures such as the YouTube TV programmes reported by the Economist.

But the point about the Lagarde remarks is that there are signs the Saudi authorities really can be sensitive to the rare criticism that comes from western governments and the western media. Such protests may yet spare blogger Raif Badawi from further punishment for alleged blasphemy. Today’s lashing has been delayed for the third successive week. The Saudi authorities, like any despotic regime, are trying to appease their critics and contain the pressure for change that social media generates by conceding inch by inch so that, like the slow downhill creep of a glacier, the religious authorities and mainstream social opinion don’t notice it is happening.

But beyond Saudi’s borders, it is surely the duty of everyone who really does believe in equality and human rights to shout and finger point and criticise at every opportunity. Failing to do so is what makes Christine Lagarde’s remarks a betrayal of the women who literally risk everything to try to bring about change in the oppressive patriarchy in which they live. They are typical of the desire not to offend the world’s biggest oil producer and the west’s key Middle Eastern ally, a self-censorship that allows the Saudis to claim they respect human rights while breaching every known norm of behaviour.

Read the entire article here.

Education and Reality

Recent studies show that a higher level of education does not necessarily lead to greater acceptance of reality. This seems to fly in the face of oft-cited anecdotal evidence and the prevailing belief that people with lower educational attainment are more likely to reject accepted science, such as evolution and climate change.

From ars technica:

We like to think that education changes people for the better, helping them critically analyze information and providing a certain immunity from disinformation. But if that were really true, then you wouldn’t have low vaccination rates clustering in areas where parents are, on average, highly educated.

Vaccination isn’t generally a political issue. (Or, it is, but it’s rejected both by people who don’t trust pharmaceutical companies and by those who don’t trust government mandates; these tend to cluster on opposite ends of the political spectrum.) But some researchers decided to look at a number of issues that have become politicized, such as the Iraq War, evolution, and climate change. They find that, for these issues, education actually makes it harder for people to accept reality, an effect they ascribe to the fact that “highly educated partisans would be better equipped to challenge information inconsistent with predispositions.”

The researchers looked at two sets of questions about the Iraq War. The first involved the justifications for the war (weapons of mass destruction and links to Al Qaeda), as well as the perception of the war outside the US. The second focused on the role of the troop surge in reducing violence within Iraq. At the time the polls were taken, there was a clear reality: no evidence of an active weapons program or links to Al Qaeda; the war was frowned upon overseas; and the surge had successfully reduced violence in the country.

On the three issues that were most embarrassing to the Bush administration, Democrats were more likely to get things right, and their accuracy increased as their level of education rose. In contrast, the most and least educated Republicans were equally likely to have things wrong. When it came to the surge, the converse was true. Education increased the chances that Republicans would recognize reality, while the Democratic acceptance of the facts stayed flat even as education levels rose. In fact, among Democrats, the base level of recognition that the surge was a success was so low that it’s not even clear it would have been possible to detect a downward trend.

When it came to evolution, the poll question didn’t even ask whether people accepted the reality of evolution. Instead, it asked “Is there general agreement among scientists that humans have evolved over time, or not?” (This phrasing generally makes it easier for people to accept the reality of evolution, since it’s not asking about their personal beliefs.) Again, education increased the acceptance of this reality among both Democrats and Republicans, but the magnitude of the effect was much smaller among Republicans. In fact, the impact of ideology was stronger than education itself: “The effect of Republican identification on the likelihood of believing that there is a scientific consensus is roughly three times that of the effect of education.”

For climate change, the participants were asked “Do you believe that the earth is getting warmer because of human activity or natural patterns?” Overall, the beliefs of about 70 percent of those polled lined up with the scientific conclusions on the matter. And, among the least educated, party affiliation made very little difference in terms of getting this right. But, as education rose, Democrats were more likely to get this right, while Republicans saw their accuracy drop. At the highest levels of education, Democrats got it right 90 percent of the time, while Republicans got it right less than half the time.

The results are in keeping with a number of other studies that have been published of late, which also show that partisan divides over things that could be considered factual sometimes increase with education. Typically, these issues are widely perceived as political. (With some exceptions; GMOs, for example.) In this case, the authors suspect that education simply allows people to deploy more sophisticated cognitive filters that end up rejecting information that could otherwise compel them to change their perceptions.

The authors conclude that’s somewhat mixed news for democracy itself. Education is intended to improve people’s ability to assimilate information upon which to base their political judgements. And, to a large extent, it does: people, on average, got 70 percent of the questions right, and there was only a single case where education made matters worse.

Read the entire article here.