MondayMap: Imagining a Post-Post-Ottoman World

The United States is often portrayed as the world’s bully and nefarious geopolitical schemer — a nation responsible for many of the world’s current political ills. However, it is the French and British who should be called to account for much of the globe’s ongoing turmoil, particularly in the Middle East. After the end of WWI the victors expeditiously carved up the spoils of the vanquished Austro-Hungarian and Ottoman Empires. Much of Eastern Europe and the Middle East was divvied up and traded just as kids might swap baseball or football (soccer) cards today. French Prime Minister Georges Clemenceau and British Prime Minister David Lloyd George famously bartered and gifted — amongst themselves and their friends — entire regions and cities without thought to historical precedent, geographic and ethnic boundaries, or even the basic needs of entire populations. Their decisions were merely lines to be drawn and re-drawn on a map.

So, it would be a fascinating — though rather naive — exercise to re-draw many of today’s arbitrary and contrived boundaries, and to revert regions to their more appropriate owners. Of course, where and when should this thought experiment begin and end? Pre-Roman Empire, post-Norman, before the Prussians, prior to the Austro-Hungarian Empire, or after the Ottomans, post-Soviet, after Tito, or way before the Huns, the Vandals and any number of the Germanic tribes?

Nevertheless, essayist Yaroslav Trofimov takes a stab at re-districting to pre-Ottoman boundaries and imagines a world with less bloodshed. A worthy dream.

From WSJ:

Shortly after the end of World War I, the French and British prime ministers took a break from the hard business of redrawing the map of Europe to discuss the easier matter of where frontiers would run in the newly conquered Middle East.

Two years earlier, in 1916, the two allies had agreed on their respective zones of influence in a secret pact—known as the Sykes-Picot agreement—for divvying up the region. But now the Ottoman Empire lay defeated, and the United Kingdom, having done most of the fighting against the Turks, felt that it had earned a juicier reward.

“Tell me what you want,” France’s Georges Clemenceau said to Britain’s David Lloyd George as they strolled in the French embassy in London.

“I want Mosul,” the British prime minister replied.

“You shall have it. Anything else?” Clemenceau asked.

In a few seconds, it was done. The huge Ottoman imperial province of Mosul, home to Sunni Arabs and Kurds and to plentiful oil, ended up as part of the newly created country of Iraq, not the newly created country of Syria.

The Ottomans ran a multilingual, multireligious empire, ruled by a sultan who also bore the title of caliph—commander of all the world’s Muslims. Having joined the losing side in the Great War, however, the Ottomans saw their empire summarily dismantled by European statesmen who knew little about the region’s people, geography and customs.

The resulting Middle Eastern states were often artificial creations, sometimes with implausibly straight lines for borders. They have kept going since then, by and large, remaining within their colonial-era frontiers despite repeated attempts at pan-Arab unification.

The built-in imbalances in some of these newly carved-out states—particularly Syria and Iraq—spawned brutal dictatorships that succeeded for decades in suppressing restive majorities and perpetuating the rule of minority groups.

But now it may all be coming to an end. Syria and Iraq have effectively ceased to function as states. Large parts of both countries lie beyond central government control, and the very meaning of Syrian and Iraqi nationhood has been hollowed out by the dominance of sectarian and ethnic identities.

The rise of Islamic State is the direct result of this meltdown. The Sunni extremist group’s leader, Abu Bakr al-Baghdadi, has proclaimed himself the new caliph and vowed to erase the shame of the “Sykes-Picot conspiracy.” After his men surged from their stronghold in Syria last summer and captured Mosul, now one of Iraq’s largest cities, he promised to destroy the old borders. In that offensive, one of the first actions taken by ISIS (as his group is also known) was to blow up the customs checkpoints between Syria and Iraq.

“What we are witnessing is the demise of the post-Ottoman order, the demise of the legitimate states,” says Francis Ricciardone, a former U.S. ambassador to Turkey and Egypt who is now at the Atlantic Council, a Washington think tank. “ISIS is a piece of that, and it is filling in a vacuum of the collapse of that order.”

In the mayhem now engulfing the Middle East, it is mostly the countries created a century ago by European colonialists that are coming apart. In the region’s more “natural” nations, a much stronger sense of shared history and tradition has, so far, prevented a similar implosion.

“Much of the conflict in the Middle East is the result of insecurity of contrived states,” says Husain Haqqani, an author and a former Pakistani ambassador to the U.S. “Contrived states need state ideologies to make up for lack of history and often flex muscles against their own people or against neighbors to consolidate their identity.”

In Egypt, with its millennial history and strong sense of identity, almost nobody questioned the country’s basic “Egyptian-ness” throughout the upheaval that has followed President Hosni Mubarak’s ouster in a 2011 revolution. As a result, most of Egypt’s institutions have survived the turbulence relatively intact, and violence has stopped well short of outright civil war.

Turkey and Iran—both of them, in bygone eras, the center of vast empires—have also gone largely unscathed in recent years, even though both have large ethnic minorities of their own, including Arabs and Kurds.

The Middle East’s “contrived” countries weren’t necessarily doomed to failure, and some of them—notably Jordan—aren’t collapsing, at least not yet. The world, after all, is full of multiethnic and multiconfessional states that are successful and prosperous, from Switzerland to Singapore to the U.S., which remains a relative newcomer as a nation compared with, say, Iran.

Read the entire article here.

Image: Map of Sykes–Picot Agreement showing Eastern Turkey in Asia, Syria and Western Persia, and areas of control and influence agreed between the British and the French. Royal Geographical Society, 1910-15. Signed by Mark Sykes and François Georges-Picot, 8 May 1916. Courtesy of Wikipedia.

Yes M’Lady

Beneath the shell that envelops us as adults lies the child. We all have one inside — that vulnerable being who dreams, plays and improvises. Sadly, our contemporary society does a wonderful job of selectively numbing these traits, usually as soon as we enter school; our work finishes the process by quashing all remnants of our once colorful and unbounded imaginations. OK, I’m exaggerating a little to make my point. But I’m certain this strikes a chord.

Keeping this in mind, it’s awesomely brilliant to see Thunderbirds making a comeback. You may recall the original Thunderbirds TV show from the mid-sixties. Created by Gerry and Sylvia Anderson, the marionette puppets and their International Rescue science-fiction machines would save us weekly from the forces of evil, destruction and chaos. The child who lurks within me utterly loved this show — everything would come to a halt to make way for this event on Saturday mornings. Now I have a chance to relive it with my kids, and to maintain some degree of childhood wonder in the process. Thunderbirds are go…

From the Guardian:

5, 4, 3, 2, 1 … Thunderbirds are go – but not quite how older viewers will remember. International Rescue has been given a makeover for the modern age, with the Tracy brothers, Brains, Lady Penelope and Parker smarter, fitter and with better gadgets than they ever had when the “supermarionation” show began on ITV half a century ago.

But fans fearful that its return, complete with Hollywood star Rosamund Pike voicing Lady Penelope, will trample all over their childhood memories can rest easy.

Unlike the 2004 live action film which Thunderbirds creator, the late Gerry Anderson, described as the “biggest load of crap I have ever seen in my life”, the new take on the children’s favourite, called Thunderbirds Are Go, remains remarkably true to the spirit of the 50-year-old original.

Gone are the puppet strings – audience research found that younger viewers wanted something more dynamic – but along with computer generated effects are models and miniature sets (“actually rather huge,” said executive producer Estelle Hughes) that faithfully recall the original Thunderbirds.

Speaking after the first screening of the new ITV series on Tuesday, executive producer Giles Ridge said: “We felt we should pay tribute to all those elements that made it special but at the same time update it so it’s suitable and compelling for a modern audience.

“The basic DNA of the show – five young brothers on a secret hideaway island with the most fantastic craft you could imagine, helping people around the world who are in trouble, that’s not a bad place to start.”

The theme music is intact, albeit given a 21st century makeover, as is the Tracy Island setting – complete with the avenue of palm trees that makes way for Thunderbird 2 and the swimming pool that slides into the mountain for the launch of Thunderbird 1.

Lady Penelope – as voiced by Pike – still has a cut-glass accent and is entirely unflappable. When she is not saving the world she is visiting Buckingham Palace or attending receptions at 10 Downing Street. There is also a nod – blink and you miss it – to another Anderson puppet series, Stingray.

David Graham, who voiced Parker in the original series, returns in the same role. “I think they were checking me out to see if I was still in one piece,” said Graham, now 89, of the meeting when he was first approached to appear in the new series.

“I was absolutely thrilled to repeat the voice and character of Parker. Although I am older my voice hasn’t changed too much over the years.”

He said the voice of Parker had come from a wine waiter who used to work in the royal household, whom Anderson had taken him to see in a pub in Cookham, Berkshire.

“He came over and said, ‘Would you like to see the wine list, sir?’ And Parker was born. Thank you, old mate.”

Brains, as voiced by Fonejacker star Kayvan Novak, now has an Indian accent.

Sylvia Anderson, Anderson’s widow, who co-created the show, will make a guest appearance as Lady Penelope’s “crazy aunt”.

Read the entire story here.

Image courtesy of Google Search.

Your Current Dystopian Nightmare: In Just One Click

Amazon was supposed to give you back precious time by making shopping and spending painlessly simple. Apps on your smartphone were supposed to do the same for all manner of re-tooled on-demand services. What wonderful time-saving inventions! So, now you can live in the moment and make use of all this extra free time. It’s your time now. You’ve won it back and no one can take it away.

And, what do you spend this newly earned free time doing? Well, you sit at home in your isolated cocoon, you shop for more things online, you download some more great apps that promise to bring even greater convenience, you interact less with real humans, and, best of all, you spend more time working. Welcome to your new dystopian nightmare, and it’s happening right now. Click.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy — with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding — there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients — they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on-demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time — literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire story here.

The Me-Useum

The smartphone and its partner in crime, the online social network, begat the ubiquitous selfie. The selfie begat the selfie stick. And now we have the selfie museum. This is not an April Fool’s prank. Quite the contrary.

The Art in Island museum in Manila is making the selfie part of the visitor experience. Despite the obvious crassness, it may usher in a way for this and other museums to engage with their visitors more personally, and for visitors to connect with art more intimately. Let’s face it: if you ever tried to pull a selfie-like stunt, or even take a photo, in the galleries of the Louvre or the Prado, you would be escorted rather promptly to the nearest padded cell.

From the Guardian:

Selfiemania in art galleries has reached new heights of surreal comedy at a museum in Manila. Art in Island is a museum specifically designed for taking selfies, with “paintings” you can touch, or even step inside, and unlimited, unhindered photo opportunities. It is full of 3D reproductions of famous paintings that are designed to offer the wackiest possible selfie poses.

Meanwhile, traditional museums are adopting diverse approaches to the mania for narcissistic photography. I have recently visited museums with wildly contrasting policies on picture taking. At the Prado in Madrid, all photography is banned. Anything goes? No, nothing goes. Guards leap on anyone wielding a camera.

At the Musée d’Orsay in Paris photography is a free-for-all. Even selfie sticks are allowed. I watched a woman elaborately pose in front of Manet’s Le Déjeuner sur l’herbe so she could photograph herself with her daft selfie stick. This ostentatious technology turns holiday snaps into a kind of performance art. That is what the Manila museum indulges.

My instincts are to ban selfie sticks, selfies, cameras and phones from museums. But my instincts are almost certainly wrong.

Surely the bizarre selfie museum in Manila is a warning to museums, such as New York’s MoMA, that seek to ban, at the very least, selfie sticks – let alone photography itself. If you frustrate selfie enthusiasts, they may just create their own simulated galleries with phoney art that’s “fun” – or stop going to art galleries entirely.

It is better for photo fans to be inside real art museums, looking – however briefly – at actual art than to create elitist barriers between museums and the children of the digital age.

The lure of the selfie stick, which has caused such a flurry of anxiety at museums, is exaggerated. It really is a specialist device for the hardcore selfie lover. At the Musée d’Orsay there are no prohibitions, but only that one visitor, in front of the Manet, out of all the thousands was actually using a selfie stick.

And there’s another reason to go easy on selfies in museums, however irritating such low-attention-span, superficial behaviour in front of masterpieces may be.

Read the entire story here.

Image: Jean-François Millet’s gleaners break out of his canvas. The original, The Gleaners (Des glaneuses) was completed in 1857. Courtesy of Art in Island Museum. Manila, Philippines.

Electric Sheep?

I couldn’t agree more with Michael Newton’s analysis — Blade Runner remains a dystopian masterpiece, thirty-three years on. Long may it reign and rain.

And, here’s another toast to the brilliant mind of Philip K Dick. The author’s work Do Androids Dream of Electric Sheep?, published in 1968, led to this noir science-fiction classic.

From the Guardian:

It’s entirely apt that a film dedicated to replication should exist in multiple versions; there is not one Blade Runner, but seven. Though opinions on which is best vary and every edition has its partisans, the definitive rendering of Ridley Scott’s 1982 dystopian film is most likely The Final Cut (2007), about to play out once more in cinemas across the UK. Aptly, too, repetition is written into the movie’s plot (there are spoilers coming), which sees Deckard (played by Harrison Ford) as an official bounty hunter (or “Blade Runner”) consigned to hunt down, one after the other, four Nexus-6 replicants (genetically-designed artificial human beings, intended as slaves for Earth’s off-world colonies). One by one, our equivocal hero seeks out the runaways: worldly-wise Zhora (Joanna Cassidy); stolid Leon (Brion James); the “pleasure-model” Pris (Daryl Hannah); and the group’s apparent leader, the ultimate Nietzschean blond beast, Roy Batty (the wonderful Rutger Hauer). Along the way, Deckard meets and falls in love with another replicant, Rachael (Sean Young), as beautiful and cold as a porcelain doll.

In Blade Runner, as in all science-fiction, the “future” is a style. Here that style is part film noir and part Gary Numan. The 40s influence is everywhere: in Rachael’s Joan-Crawford shoulder pads, the striped shadows cast by Venetian blinds, the atmosphere of defeat. It’s not just noir, Ridley Scott also taps into 70s cop shows and movies that themselves tapped into nostalgic style, with their yearning jazz and their sad apartments; Deckard even visits a strip joint as all TV detectives must. The movie remains one of the most visually stunning in cinema history. It plots a planet of perpetual night, a landscape of shadows, rain and reflected neon (shone on windows or the eye) in a world not built to a human scale; there, the skyscrapers dwarf us like the pyramids. High above the Philip Marlowe world, hover cars swoop and dirigible billboards float by. More dated now than its hard-boiled lustre is the movie’s equal and opposite involvement in modish early 80s dreams; the soundtrack by Vangelis was up-to-the-minute, while the replicants dress like extras in a Billy Idol video, a post-punk, synth-pop costume party. However, it is noir romanticism that wins out, gifting the film with its forlorn Californian loneliness.

It is a starkly empty film, preoccupied as it is with the thought that people themselves might be hollow. The plot depends on the notion that the replicants must be allowed to live no longer than four years, because as time passes they begin to develop raw emotions. Why emotion should be a capital offence is never sufficiently explained; but it is of a piece with the film’s investigation of a flight from feeling – what psychologist Ian D Suttie once named the “taboo on tenderness”. Intimacy here is frightful (everyone appears to live alone), especially that closeness that suggests that the replicants might be indistinguishable from us.

This anxiety may originally have had tacit political resonances. In the novel that the film is based on, Philip K Dick’s thoughtful Do Androids Dream of Electric Sheep? (1968), the dilemma of the foot soldier plays out, commanded to kill an adversary considered less human than ourselves, yet troubled by the possibility that the enemy are in fact no different. Shades of Vietnam darken the story, as well as memories of America’s slave-owning past. We are told that the replicants can do everything a human being can do, except feel empathy. Yet how much empathy do we feel for faraway victims or inconvenient others?

Ford’s Deckard may or may not be as gripped by uncertainty about his job as Dick’s original blade runner. In any case, his brusque “lack of affect” provides one of the long-standing puzzles of the film: is he, too, a replicant? Certainly Ford’s perpetual grumpiness (it sometimes seems his default acting position), his curdled cynicism, put up barriers to feeling that suggest it is as disturbing for him as it is for the hunted Leon or Roy. Though some still doubt, it seems clear that Deckard is indeed a replicant, his imaginings and memories downloaded from some database, his life as transitory as that of his victims. However, as we watch Blade Runner, Deckard doesn’t feel like a replicant; he is dour and unengaged, but lacks his victims’ detached innocence, their staccato puzzlement at their own untrained feelings. The antithesis of the scowling Ford, Hauer’s Roy is a sinister smiler, or someone whose face falls at the brush of an unassimilable emotion.

Read the entire article here.

Video: Blade Runner clip.

April Can Mean Only One Thing

The advent of April in the United States usually brings the impending tax day to mind. When April rolls in across the UK, it means the media goes overboard with April Fool’s jokes. Here’s a smattering of the silliest from Britain’s most serious media outlets.

From the Telegraph: transparent Marmite, Yessus Juice, prison release voting app, Burger King cologne (for men).

From the Guardian: Jeremy Clarkson and fossil fuel divestment.

From the Independent: a round-up of the best gags, including the proposed Edinburgh suspension bridge featuring a gap, Simon Cowell’s effigy on the new £5 note, grocery store aisle trampolines for the short of stature.

Image: Hailo’s new piggyback rideshare service.

A New Mobile App or Genomic Understanding?

Silicon Valley has been a tremendous incubator for some of our most important inventions: the first integrated circuit, which led to Intel; the first true personal computer, which led to Apple. Yet this esteemed venture capital (VC) community now seems in need of a dose of its own innovation medicine. Aren’t we all getting a little jaded by yet another “new, great mobile app” — valued in the tens of billions (but with no revenue model) — courtesy of a bright young group of 20-somethings?

It is indeed gratifying to see innovators, young and old, rewarded for their creativity and perseverance. Yet we should be encouraging more of our pioneers to look beyond the next cool smartphone invention. Perhaps our technological and industrial luminaries and their retinues of futurists could do us all a favor if they channeled more of their speculative funds toward longer-term and more significant endeavors: cost-effective desalination; cheaper medications; understanding and curing our insidious diseases; antibiotic replacements; more effective recycling; cleaner power; cheaper and stronger infrastructure; more effective education. These are all difficult problems. But therein lies the reward.

Clearly some pioneering businesses are investing in these areas. But isn’t it time we insisted that the majority of our private and public intellectual (and financial) capital be invested in truly meaningful ways? Here’s an example from Iceland — its national human genome project.

From ars technica:

An Icelandic genetics firm has sequenced the genomes of 2,636 of its countrymen and women, finding genetic markers for a variety of diseases, as well as a new timeline for the paternal ancestor of all humans.

Iceland is, in many ways, perfectly suited to being a genetic case study. It has a small population with limited genetic diversity, a result of the population descending from a small number of settlers—between 8 and 20 thousand, who arrived just 1100 years ago. It also has an unusually well-documented genealogical history, with information sometimes stretching all the way back to the initial settlement of the country. Combined with excellent medical records, it’s a veritable treasure trove for genetic researchers.

The researchers at genetics firm deCODE compared the complete genomes of participants with historical and medical records, publishing their findings in a series of four papers in Nature Genetics last Wednesday. The wealth of data allowed them to track down genetic mutations that are related to a number of diseases, some of them rare. Although few diseases are caused by a single genetic mutation, a combination of mutations can increase the risk for certain diseases. Having access to a large genetic sample with corresponding medical data can help to pinpoint certain risk-increasing mutations.

Among their headline findings was the identification of the gene ABCA7 as a risk factor for Alzheimer’s disease. Although previous research had established that a gene in this region was involved in Alzheimer’s, this result delivers a new level of precision. The researchers replicated their results in further groups in Europe and the United States.

Also identified was a genetic mutation that causes early-onset atrial fibrillation, a heart condition causing an irregular and often very fast heart rate. It’s the most common cardiac arrhythmia condition, and it’s considered early-onset if it’s diagnosed before the age of 60. The researchers found eight Icelanders diagnosed with the condition, all carrying a mutation in the same gene, MYL4.

The studies also turned up a gene with an unusual pattern of inheritance. It causes increased levels of thyroid stimulation when it’s passed down from the mother, but decreased levels when inherited from the father.

Genetic research in mice often involves “knocking out” or switching off a particular gene to explore the effects. However, mouse genetics aren’t a perfect approximation of human genetics. Obviously, doing this in humans presents all sorts of ethical problems, but a population such as Iceland provides the perfect natural laboratory to explore how knockouts affect human health.

The data showed that eight percent of people in Iceland have the equivalent of a knockout, one gene that isn’t working. This provides an opportunity to look at the data in a different way: rather than only looking for people with a particular diagnosis and finding out what they have in common genetically, the researchers can look for people who have genetic knockouts, and then examine their medical records to see how their missing genes affect their health. It’s then possible to start piecing together the story of how certain genes affect physiology.

Finally, the researchers used the data to explore human history, using Y chromosome data from 753 Icelandic males. Based on knowledge about mutation rates, Y chromosomes can be used to trace the male lineage of human groups, establishing dates of events like migrations. This technique has also been used to work out when the common ancestor of all humans was alive. The maternal ancestor, known as “Mitochondrial Eve,” is thought to have lived 170,000 to 180,000 years ago, while the paternal ancestor had previously been estimated to have lived around 338,000 years ago.

The Icelandic data allowed the researchers to calculate what they suggest is a more accurate mutation rate, placing the father of all humans at around 239,000 years ago. This is the estimate with the greatest likelihood, but the full range falls between 174,000 and 321,000 years ago. This estimate places the paternal ancestor closer in time to the maternal ancestor.

Read the entire story here.

Image: Gígjökull, an outlet glacier extending from Eyjafjallajökull, Iceland. Courtesy of Andreas Tille / Wikipedia.

Send to Kindle

Women Are From Venus, Men Can’t Remember

Yet another body of research underscores how different women are from men. This time, we are told, the sexes generally encode and recall memories differently. So, the next time you take issue with a spouse (of a different gender) about a — typically trivial — past event, keep in mind that your own actions, mood and gender will affect your recall. If you're female, your memories may be much more vivid than your male counterpart's, but not necessarily more correct. If you (male) won last night's argument, your spouse (female) will — unfortunately for you — remember it more accurately than you, which of course will lead to another argument.

From WSJ:

Carrie Aulenbacher remembers the conversation clearly: Her husband told her he wanted to buy an arcade machine he found on eBay. He said he’d been saving up for it as a birthday present to himself. The spouses sat at the kitchen table and discussed where it would go in the den.

Two weeks later, Ms. Aulenbacher came home from work and found two arcade machines in the garage—and her husband beaming with pride.

“What are these?” she demanded.

“I told you I was picking them up today,” he replied.

She asked him why he’d bought two. He said he’d told her he was getting “a package deal.” She reminded him they’d measured the den for just one. He stood his ground.

“I believe I told her there was a chance I was going to get two,” says Joe Aulenbacher, who is 37 and lives in Erie, Pa.

“It still gets me going to think about it a year later,” says Ms. Aulenbacher, 36. “My home is now overrun with two machines I never agreed upon.” The couple compromised by putting one game in the den and the other in Mr. Aulenbacher’s weight room.

It is striking how many arguments in a relationship start with two different versions of an event: “Your tone of voice was rude.” “No it wasn’t.” “You didn’t say you’d be working late.” “Yes I did.” “I told you we were having dinner with my mother tonight.” “No, honey. You didn’t.”

How can two people have different memories of the same event? It starts with the way each person perceives the event in the first place—and how they encoded that memory. “You may recall something differently at least in part because you understood it differently at the time,” says Dr. Michael Ross, professor emeritus in the psychology department at the University of Waterloo in Ontario, Canada, who has studied memory for many years.

Researchers know that spouses sometimes can’t even agree on concrete events that happened in the past 24 hours—such as whether they had an argument or whether one received a gift from the other. A study in the early 1980s, published in the journal “Behavioral Assessment,” found that couples couldn’t perfectly agree on whether they had sex the previous night.

Women tend to remember more about relationship issues than men do. When husbands and wives are asked to recall concrete relationship events, such as their first date, an argument or a recent vacation, women’s memories are more vivid and detailed.

But not necessarily more accurate. When given a standard memory test where they are shown names or pictures and then asked to recall them, women do just about the same as men.

Researchers have found that women report having more emotions during relationship events than men do. They may remember events better because they pay more attention to the relationship and reminisce more about it.

People also remember their own actions better. So they can recall what they did, just not what their spouse did. Researchers call this an egocentric bias, and study it by asking people to recall their contributions to events, as well as their spouse’s. Who cleans the kitchen more? Who started the argument? Whether the event is positive or negative, people tend to believe that they had more responsibility.

Your mood—both when an event happens and when you recall it later—plays a big part in memory, experts say. If you are in a positive mood or feeling positive about the other person, you will more likely recall a positive experience or give a positive interpretation to a negative experience. Similarly, negative moods tend to reap negative memories.

Negative moods may also cause stronger memories. A person who lost an argument remembers it more clearly than the person who won it, says Dr. Ross. Men tend to win more arguments, he says, which may help to explain why women remember the spat more. But men who lost an argument remember it as well as women who lost.

Read the entire article here.

Send to Kindle

Heads in the Rising Tide

King-Knut

Officials from the state of Florida seem to have their heads in the sand (and other places); sand that is likely to be swept from their very own Florida shores as sea levels rise. However, surely climate change could be an eventual positive for Florida: think warmer climate and huge urban swathes underwater — a great new Floridian theme park! But, remember, don’t talk about it. I suppose officials will soon be looking for a contemporary version of King Canute to help them out of this watery pickle.

From Wired:

The oceans are slowly overtaking Florida. Ancient reefs of mollusk and coral off the present-day coasts are dying. Annual extremes in hot and cold, wet and dry, are becoming more pronounced. Women and men of science have investigated, and a great majority agree upon a culprit. In the outside world, this culprit has a name, but within the borders of Florida, it does not. According to a Miami Herald investigation, the state Department of Environmental Protection has since 2010 had an unwritten policy prohibiting the use of some well-understood phrases for the meteorological phenomena slowly drowning America’s weirdest-shaped state. It’s … that thing where burning too much fossil fuel puts certain molecules into a certain atmosphere, disrupting a certain planetary ecosystem. You know what we’re talking about. We know you know. They know we know you know. But are we allowed to talk about … you know? No. Not in Florida. It must not be spoken of. Ever.

Unless … you could, maybe, type around it? It’s worth a shot.

The cyclone slowdown

It has been nine years since Florida was hit by a proper hurricane. Could that be a coincidence? Sure. Or it could be because of … something. A nameless, voiceless something. A feeling, like a pricking-of-thumbs, this confluence-of-chemistry-and-atmospheric-energy-over-time. If so, this anonymous dreadfulness would, scientists say, lead to a drier middle layer of atmosphere over the ocean. Because water vapor stores energy, this dry air will suffocate all but the most energetic baby storms. “So the general thinking is that as [redacted] levels increase, it ultimately won’t have an effect on the number of storms,” says Jim Kossin, a scientist who studies, oh, how about “things-that-happen-in-the-atmosphere-over-long-time-periods” at the National Centers for Environmental Information. “However, there is a lot of evidence that if a storm does form, it has a chance of getting very strong.”

Storms darken the sky

Hurricanes are powered by energy in the sea. And as cold and warm currents thread around the globe, storms go through natural, decades-long cycles of high-to-low intensity. “There is a natural 40-to-60-year oscillation in what sea surface temperatures are doing, and this is driven by ocean-wide currents that move on very slow time scales,” says Kossin, who has authored reports for the Intergovernmental Panel on, well, let’s just call it Chemical-and-Thermodynamic-Alterations-to-Long-Term-Atmospheric-Conditions. But in recent years, storms have become stronger than that natural cycle would otherwise predict. Kossin says that many in his field agree that while the natural churning of the ocean is behind this increasing intensity, other forces are at work. Darker, more sinister forces, like thermodynamics. Possibly even chemistry. No one knows for sure. Anyway, storms are getting less frequent, but stronger. It’s an eldritch tale of unspeakable horror, maybe.

Read the entire article here.

Image: King Knut (or Cnut or Canute) the Great, illustrated in a medieval manuscript. Courtesy of Der Spiegel Geschichte.

Send to Kindle

The Big Crunch

cmb

It may just be possible that prophetic doomsayers have been right all along. The end is coming… well, in a few tens of billions of years. A group of physicists propose that the cosmos will soon begin collapsing in on itself. Keep in mind that soon in cosmological terms runs into the billions of years. So, it does appear that we still have some time to crunch down our breakfast cereal a few more times before the ultimate universal apocalypse. Clearly this may not please those who seek the end of days within their lifetimes, and for rather different — scientific — reasons, cosmologists seem to be unhappy too.

From Phys:

Physicists have proposed a mechanism for “cosmological collapse” that predicts that the universe will soon stop expanding and collapse in on itself, obliterating all matter as we know it. Their calculations suggest that the collapse is “imminent”—on the order of a few tens of billions of years or so—which may not keep most people up at night, but for the physicists it’s still much too soon.

In a paper published in Physical Review Letters, physicists Nemanja Kaloper at the University of California, Davis, and Antonio Padilla at the University of Nottingham have proposed the cosmological collapse mechanism and analyzed its implications, which include an explanation of dark energy.

“The fact that we are seeing dark energy now could be taken as an indication of impending doom, and we are trying to look at the data to put some figures on the end date,” Padilla told Phys.org. “Early indications suggest the collapse will kick in in a few tens of billions of years, but we have yet to properly verify this.”

The main point of the paper is not so much when exactly the universe will end, but that the mechanism may help resolve some of the unanswered questions in physics. In particular, why is the universe expanding at an accelerating rate, and what is the dark energy causing this acceleration? These questions are related to the cosmological constant problem, which is that the predicted vacuum energy density of the universe causing the expansion is much larger than what is observed.

“I think we have opened up a brand new approach to what some have described as ‘the mother of all physics problems,’ namely the cosmological constant problem,” Padilla said. “It’s way too early to say if it will stand the test of time, but so far it has stood up to scrutiny, and it does seem to address the issue of vacuum energy contributions from the standard model, and how they gravitate.”

The collapse mechanism builds on the physicists’ previous research on vacuum energy sequestering, which they proposed to address the cosmological constant problem. The dynamics of vacuum energy sequestering predict that the universe will collapse, but don’t provide a specific mechanism for how collapse will occur.

According to the new mechanism, the universe originated under a set of specific initial conditions so that it naturally evolved to its present state of acceleration and will continue on a path toward collapse. In this scenario, once the collapse trigger begins to dominate, it does so in a period of “slow roll” that brings about the accelerated expansion we see today. Eventually the universe will stop expanding and reach a turnaround point at which it begins to shrink, culminating in a “big crunch.”

Read the entire article here.

Image: Image of the Cosmic Microwave Background (CMB) from nine years of WMAP data. The image reveals 13.77 billion year old temperature fluctuations (shown as color differences) that correspond to the seeds that grew to become the galaxies. Courtesy of NASA.

Send to Kindle

PowerPoint Karaoke Olympics

PPT-karaoke

It may not be beyond the realm of fantasy to imagine a day in the not too distant future when PowerPoint Karaoke features as an Olympic sport. Ugh!

Without a doubt, karaoke has set human culture back at least a thousand years (thanks, Japan). And PowerPoint has singlehandedly dealt killer blows to creativity, deep thought and literary progress (thanks, Microsoft). Surely, combining these two banes of modern society into a competitive event is the stuff of true horror. But this hasn’t stopped the activity from becoming a burgeoning improv phenomenon for corporate hacks — validating the trend in which humans continue making fools of themselves. After all, it must be big — and there’s probably money in it — if the WSJ is reporting on it.

Nonetheless,

  • Count
  • me
  • out!

From the WSJ:

On a sunny Friday afternoon earlier this month, about 100 employees of Adobe Systems Inc. filed expectantly into an auditorium to watch PowerPoint presentations.

“I am really thrilled to be here today,” began Kimberley Chambers, a 37-year-old communications manager for the software company, as she nervously clutched a microphone. “I want to talk you through…my experience with whales, in both my personal and professional life.”

Co-workers giggled. Ms. Chambers glanced behind her, where a PowerPoint slide displayed four ink sketches of bare-chested male torsos, each with a distinct pattern of chest hair. The giggles became guffaws. “What you might not know,” she continued, “is that whales can be uniquely identified by a number of different characteristics, not the least of which is body hair.”

Ms. Chambers, sporting a black blazer and her employee ID badge, hadn’t seen this slide in advance, nor the five others that popped up as she clicked her remote control. To accompany the slides, she gave a nine-minute impromptu talk about whales, a topic she was handed 30 seconds earlier.

Forums like this at Adobe, called “PowerPoint karaoke” or “battle decks,” are cropping up as a way for office workers of the world to mock an oppressor, the ubiquitous PowerPoint presentation. The mix of improvised comedy and corporate-culture takedown is based on a simple notion: Many PowerPoint presentations are unintentional parody already, so why not go all the way?

Library associations in Texas and California held PowerPoint karaoke sessions at their annual conferences. At a Wal-Mart Stores Inc. event last year, workers gave fake talks based on real slides from a meatpacking supplier. Twitter Inc. Chief Executive Dick Costolo, armed with his training from comedy troupe Second City, has faced off with employees at “battle decks” contests during company meetings.

One veteran corporate satirist gives these events a thumbs up. “Riffing off of PowerPoints without knowing what your next slide is going to be? The humorist in me says it’s kinda brilliant,” said “Dilbert” cartoonist Scott Adams, who has spent 26 years training his jaundiced eye on office work. “I assume this game requires drinking?” he asked. (Drinking is technically not required, but it is common.)

Mr. Adams, who worked for years at a bank and at a telephone company, said PowerPoint is popular because it offers a rare dose of autonomy in cubicle culture. But it often bores, because creators lose sight of their mission. “If you just look at a page and drag things around and play with fonts, you think you’re a genius and you’re in full control of your world,” he said.

At a February PowerPoint karaoke show in San Francisco, contestants were given pairings of topics and slides ranging from a self-help seminar for people who abuse Amazon Prime, with slides including a dog balancing a stack of pancakes on its nose, to a sermon on “Fifty Shades of Grey,” with slides including a pyramid dotted with blocks of numbers. Another had to explain the dating app Tinder to aliens invading the Earth, accompanied by a slide of old floppy disk drives, among other things.

Read and sing along to the entire article here.

Send to Kindle

Circadian Misalignment and Your Smartphone

Google-search-smartphone-night

You take your portable electronics everywhere, all the time. You watch TV with or on your smartphone. You eat with a fork in one hand and your smartphone in the other. In fact, you probably wish you had two pairs of arms so you could eat, drink and use your smartphone and laptop at the same time. You use your smartphone in your car — hopefully, and sensibly, not while driving. You read texts on your smartphone while in the restroom. You use it at the movie theater, at the theater (much to the dismay of stage actors). It’s with you at the restaurant, on the bus or metro, in the aircraft, in the bath (despite the chance of electric shock). You check your smartphone first thing in the morning and last thing before going to sleep. And, if your home or work life demands it, you will check it periodically throughout the night.

Let’s leave aside for now the growing body of anecdotal and formal evidence that smartphones are damaging your physical wellbeing. This includes finger, hand and wrist problems (from texting), and neck and posture problems (from constantly bending over your small screen). Now there is evidence that constant use, especially at night, is damaging your mental wellbeing and increasing the likelihood of additional, chronic physical ailments. It appears that the light from our constant electronic companions is not healthy, particularly as it disrupts our regular rhythm of sleep.

From Wired:

For more than 3 billion years, life on Earth was governed by the cyclical light of sun, moon and stars. Then along came electric light, turning night into day at the flick of a switch. Our bodies and brains may not have been ready.

A fast-growing body of research has linked artificial light exposure to disruptions in circadian rhythms, the light-triggered releases of hormones that regulate bodily function. Circadian disruption has in turn been linked to a host of health problems, from cancer to diabetes, obesity and depression. “Everything changed with electricity. Now we can have bright light in the middle of night. And that changes our circadian physiology almost immediately,” says Richard Stevens, a cancer epidemiologist at the University of Connecticut. “What we don’t know, and what so many people are interested in, are the effects of having that light chronically.”

Stevens, one of the field’s most prominent researchers, reviews the literature on light exposure and human health in the latest Philosophical Transactions of the Royal Society B. The new article comes nearly two decades after Stevens first sounded the alarm about light exposure possibly causing harm; writing in 1996, he said the evidence was “sparse but provocative.” Since then, nighttime light has become even more ubiquitous: an estimated 95 percent of Americans regularly use screens shortly before going to sleep, and incandescent bulbs have been mostly replaced by LED and compact fluorescent lights that emit light in potentially more problematic wavelengths. Meanwhile, the scientific evidence is still provocative, but no longer sparse.

As Stevens says in the new article, researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn’t causation, of course, and it’s easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Oil refineries and aluminum smelters, to be hyperbolic, also blaze with light at night.

Yet biology at least supports some of the correlations. The circadian system synchronizes physiological function—from digestion to body temperature, cell repair and immune system activity—with a 24-hour cycle of light and dark. Even photosynthetic bacteria thought to resemble Earth’s earliest life forms have circadian rhythms. Despite its ubiquity, though, scientists discovered only in the last decade what triggers circadian activity in mammals: specialized cells in the retina, the light-sensing part of the eye, rather than conveying visual detail from eye to brain, simply signal the presence or absence of light. Activity in these cells sets off a reaction that calibrates clocks in every cell and tissue in a body. Now, these cells are especially sensitive to blue wavelengths—like those in a daytime sky.

But artificial lights, particularly LCDs, some LEDs, and fluorescent bulbs, also favor the blue side of the spectrum. So even a brief exposure to dim artificial light can trick a night-subdued circadian system into behaving as though day has arrived. Circadian disruption in turn produces a wealth of downstream effects, including dysregulation of key hormones. “Circadian rhythm is being tied to so many important functions,” says Joseph Takahashi, a neurobiologist at the University of Texas Southwestern. “We’re just beginning to discover all the molecular pathways that this gene network regulates. It’s not just the sleep-wake cycle. There are system-wide, drastic changes.” His lab has found that tweaking a key circadian clock gene in mice gives them diabetes. And a tour-de-force 2009 study put human volunteers on a 28-hour day-night cycle, then measured what happened to their endocrine, metabolic and cardiovascular systems.

Crucially, that experiment investigated circadian disruption induced by sleep alteration rather than light exposure, which is also the case with the many studies linking clock-scrambling shift work to health problems. Whether artificial light is as problematic as disturbed sleep patterns remains unknown, but Stevens thinks that some and perhaps much of what’s now assumed to result from sleep issues is actually a function of light. “You can wake up in the middle of the night and your melatonin levels don’t change,” he says. “But if you turn on a light, melatonin starts falling immediately. We need darkness.” According to Stevens, most people live in a sort of “circadian fog.”

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle

3D Printing Magic

If you’ve visited this blog before you know I’m a great fan of 3D printing. Though some uses, such as printing 3D selfies, seem dubious at best. So, when Carbon3D unveiled its fundamentally different, and better, approach to 3D printing I was intrigued. The company uses an approach called continuous liquid interface production (CLIP), which seems to construct objects from a magical ooze. Check out the video — you’ll be enthralled. The future is here.

Learn more about Carbon3D here.

From Wired:

Even if you have little interest in 3-D printing, you’re likely to find Carbon3D’s Continuous Liquid Interface Production (CLIP) technology fascinating. Rather than the time-intensive printing of a 3-D object layer by layer like most printers, Carbon3D’s technique works 25 to 100 times faster than what you may have seen before, and looks a bit like Terminator 2‘s liquid metal T-1000 in the process.

CLIP creations grow out of a pool of UV-sensitive resin in a process that’s similar to the way laser 3-D printers work, but at a much faster pace. Instead of the laser used in conventional 3-D printers, CLIP uses an ultraviolet projector on the underside of a resin tray to project an image for how each layer should form. Light shines through an oxygen-permeable window onto the resin, which hardens it. Areas of resin that are exposed to oxygen don’t harden, while those that are cut off form the 3-D printed shape.

In practice, all that physics translates to unprecedented 3-D printing speed. At this week’s TED Conference in Vancouver, Carbon3D CEO and co-founder Dr. Joseph DeSimone demonstrated the printer onstage with a bit of theatrical underselling, wagering that his creation could produce in 10 minutes a geometric ball shape that would take a regular 3-D printer up to 10 hours. The CLIP process churned out the design in a little under 7 minutes.

Read the entire story here.

Video courtesy of Carbon3D.

Send to Kindle

We Are All Always Right, All of the Time

You already know this: you believe that your opinion is correct all the time, about everything. And, interestingly enough, your friends and neighbors believe that they are always right too. Oh, and the colleague at the office with whom you argue all the time — she’s right all the time too.

How can this be, when in an increasingly science-driven, objective universe facts trump opinion? Well, not so fast. It seems that we humans have an internal mechanism that colors our views based on a need for acceptance within a broader group. That is, we generally tend to spin our rational views in favor of group consensus, versus supporting the views of a subject matter expert, which might polarize the group. This is both good and bad. Good because it reinforces the broader benefits of being within a group; bad because we are more likely to reject opinion, evidence and fact from experts outside of our group — think climate change.

From the Washington Post:

It’s both the coolest — and also in some ways the most depressing — psychology study ever.

Indeed, it’s so cool (and so depressing) that the name of its chief finding — the Dunning-Kruger effect — has at least halfway filtered into public consciousness. In the classic 1999 paper, Cornell researchers David Dunning and Justin Kruger found that the less competent people were in three domains — humor, logic, and grammar — the less likely they were to be able to recognize that. Or as the researchers put it:

We propose that those with limited knowledge in a domain suffer from a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it.

Dunning and Kruger didn’t directly apply this insight to our debates about science. But I would argue that the effect named after them certainly helps to explain phenomena like vaccine denial, in which medical authorities have voiced a very strong opinion, but some parents just keep on thinking that, somehow, they’re in a position to challenge or ignore this view.

So why do I bring this classic study up now?

The reason is that an important successor to the Dunning-Kruger paper has just come out — and it, too, is pretty depressing (at least for those of us who believe that domain expertise is a thing to be respected and, indeed, treasured). This time around, psychologists have not uncovered an endless spiral of incompetence and the inability to perceive it. Rather, they’ve shown that people have an “equality bias” when it comes to competence or expertise, such that even when it’s very clear that one person in a group is more skilled, expert, or competent (and the other less), they are nonetheless inclined to seek out a middle ground in determining how correct different viewpoints are.

Yes, that’s right — we’re all right, nobody’s wrong, and nobody gets hurt feelings.

The new study, just published in the Proceedings of the National Academy of Sciences, is by Ali Mahmoodi of the University of Tehran and a long list of colleagues from universities in the UK, Germany, China, Denmark, and the United States. And no wonder: The research was transnational, and the same experiment — with the same basic results — was carried out across cultures in China, Denmark, and Iran.

Read the entire story here.

Send to Kindle

Hyper-Parenting and Couch Potato Kids

Google-search-kids-playing

Parents who are overly engaged in micro-managing the academic, athletic and social lives of their kids may be responsible for ensuring their offspring lead less active lives. A new research study finds children of so-called hyper-parents are significantly less active than peers with less involved parents. Hyper-parenting seems to come in four flavors: helicopter parents who hover over their child’s every move; tiger moms who constantly push for superior academic attainment; little emperor parents who constantly shower their kids with material things; and concerted cultivation parents who over-schedule their kids with never-ending after-school activities. If you recognize yourself in one of these parenting styles, take a deep breath, think back on when, as a 7-12 year-old, you had the most fun, and let your kids play outside — preferably in the rain and mud!

From the WSJ / Preventive Medicine:

Hyper-parenting may increase the risk of physical inactivity in children, a study in the April issue of Preventive Medicine suggests.

Children with parents who tended to be overly involved in their academic, athletic and social lives—a child-rearing style known as hyper-parenting—spent less time outdoors, played fewer after-school sports and were less likely to bike or walk to school, friends’ homes, parks and playgrounds than children with less-involved parents.

Hyperparenting, although it’s intended to benefit children by giving them extra time and attention, could have adverse consequences for their health, the researchers said.

The study, at Queen’s University in Ontario, surveyed 724 parents of children, ages 7 to 12 years old, born in the U.S. and Canada from 2002 to 2007. (The survey was based on parents’ interaction with the oldest child.)

Questionnaires assessed four hyper-parenting styles: helicopter or overprotective parents; little-emperor parents who shower children with material goods; so-called tiger moms who push for exceptional achievement; and parents who schedule excessive extracurricular activities, termed concerted cultivation. Hyperparenting was ranked in five categories from low to high based on average scores in the four styles.

Children’s preferred play location was their yard at home, and 64% of the children played there at least three times a week. Only 12% played on streets and cul-de-sacs away from home. Just over a quarter walked or cycled to school or friends’ homes, and slightly fewer to parks and playgrounds. Organized sports participation was 26%.

Of parents, about 40% had high hyper-parenting scores and 6% had low scores. The most active children had parents with low to below-average scores in all four hyper-parenting styles, while the least active had parents with average-to-high hyper-parenting scores. The difference between children in the low and high hyper-parenting groups was equivalent to about 20 physical-activity sessions a week, the researchers said.

Read the entire story here.

Image courtesy of Google Search.

Send to Kindle

Humor Versus Horror

Faced with unspeakable horror, many of us usually turn away. Some courageous souls turn to humor instead to counter the vileness of others. So, it is heartwarming to see comedians and satirists taking up rhetorical arms in the backyards of murderers and terrorists. Fighting violence and terror with more of the same may show progress in the short term, but ridiculing our enemies with humor and thoughtful dialogue is the only long-term way to fight evil in its many human forms. A profound thank you to these four brave Syrian refugees who, in the face of much personal danger, are able to laugh at their foes.

From the Guardian:

They don’t have much to laugh about. But four young Syrian refugees from Aleppo believe humour may be the only antidote to the horrors taking place back home.

Settled in a makeshift studio in the Turkish city of Gaziantep 40 miles from the Syrian border, the film-makers decided ridicule was an effective way of responding to Islamic State and its grisly record of extreme violence.

“The entire world seems to be terrified of Isis, so we want to laugh at them, expose their hypocrisy and show that their interpretation of Islam does not represent the overwhelming majority of Muslims,” says Maen Watfe, 27. “The media, especially the western media, obsessively reproduce Isis propaganda portraying them as strong and intimidating. We want to show their weaknesses.”

The films and videos on Watfe and his three friends’ website mock the Islamist extremists and depict them as naive simpletons, hypocritical zealots and brutal thugs. It’s a high-risk undertaking. They have had to move house and keep their addresses secret from even their best friends after receiving death threats.

But the video activists – Watfe, Youssef Helali, Mohammed Damlakhy and Aya Brown – will not be deterred.

Their film The Prince shows Isis leader and self-appointed caliph Abu Bakr al-Baghdadi drinking wine, listening to pop music and exchanging selfies with girls on his smartphone. A Moroccan jihadi arrives saying he came to Syria to “liberate Jerusalem”. The leader swaps the wine for milk and switches the music to Islamic chants praising martyrdom. Then he hands the Moroccan a suicide belt and sends him off against a unit of Free Syrian Army fighters. The grenades detonate, and Baghdadi reaches for his glass of wine and turns the pop music back on.

It is pieces like this that have brought hate mail and threats via social media.

“One of them said that they would finish us off like they finished off Charlie [Hebdo],” Brown, 26, recalls. She declined to give her real name out of fear for her family, who still live in Aleppo. “In the end we decided to move from our old apartment.”

The Turkish landlord told them Arabic-speaking men had repeatedly asked for their whereabouts after they left, and kept the studio under surveillance.

Follow the story here.

Video: Happy Valentine. Courtesy of Dayaaltaseh Productions.

Send to Kindle

Household Chores for Kids Are Good

Google-kid-chores

Apparently household chores are becoming rather yesterday. Several recent surveys — no doubt commissioned by my children — show that shared duties in the home are a dying phenomenon. No, I hear you cry. Not only do chores provide a necessary respite from the otherwise 24/7 videogame-and-texting addiction, they help establish a sense of responsibility and reinforce our increasingly imperiled altruistic tendencies. So, parents, get out the duster, vacuum, fresh sheets and laundry basket, and put those (little) people to work before it’s too late. But first of all, let’s rename “chores” to “responsibilities”.

From WSJ:

Today’s demands for measurable childhood success—from the Common Core to college placement—have chased household chores from the to-do lists of many young people. In a survey of 1,001 U.S. adults released last fall by Braun Research, 82% reported having regular chores growing up, but only 28% said that they require their own children to do them. With students under pressure to learn Mandarin, run the chess club or get a varsity letter, chores have fallen victim to the imperatives of resume-building—though it is hardly clear that such activities are a better use of their time.

“Parents today want their kids spending time on things that can bring them success, but ironically, we’ve stopped doing one thing that’s actually been a proven predictor of success—and that’s household chores,” says Richard Rende, a developmental psychologist in Paradise Valley, Ariz., and co-author of the forthcoming book “Raising Can-Do Kids.” Decades of studies show the benefits of chores—academically, emotionally and even professionally.

Giving children household chores at an early age helps to build a lasting sense of mastery, responsibility and self-reliance, according to research by Marty Rossmann, professor emeritus at the University of Minnesota. In 2002, Dr. Rossmann analyzed data from a longitudinal study that followed 84 children across four periods in their lives—in preschool, around ages 10 and 15, and in their mid-20s. She found that young adults who began chores at ages 3 and 4 were more likely to have good relationships with family and friends, to achieve academic and early career success and to be self-sufficient, as compared with those who didn’t have chores or who started them as teens.

Chores also teach children how to be empathetic and responsive to others’ needs, notes psychologist Richard Weissbourd of the Harvard Graduate School of Education. In research published last year, he and his team surveyed 10,000 middle- and high-school students and asked them to rank what they valued more: achievement, happiness or caring for others.

Almost 80% chose either achievement or happiness over caring for others. As he points out, however, research suggests that personal happiness comes most reliably not from high achievement but from strong relationships. “We’re out of balance,” says Dr. Weissbourd. A good way to start readjusting priorities, he suggests, is by learning to be kind and helpful at home.

Read the entire story here.

Image courtesy of Google Search.

Send to Kindle

The Damned Embuggerance

Google-search-terry-pratchett-books

Sadly, genre-busting author Sir Terry Pratchett succumbed to DEATH on March 12, 2015. Luckily, for those of us still fending off the clutches of Reaper Man we have seventy-plus works of his to keep us company in the darkness.

So now that our world contains a little less magic it’s important to remind ourselves of a few choice words of his:

A man is not truly dead while his name is still spoken.

Stories of imagination tend to upset those without one.

It’s not worth doing something unless someone, somewhere, would much rather you weren’t doing it.

The truth may be out there, but the lies are inside your head.

Goodness is about what you do. Not who you pray to.

From the Guardian:

Neil Gaiman led tributes from the literary, entertainment and fantasy worlds to Terry Pratchett after the author’s death on Thursday, aged 66.

The author of the Discworld novels, which sold in the tens of millions worldwide, had been afflicted with a rare form of early-onset Alzheimer’s disease.

Gaiman, who collaborated with Pratchett on the huge hit Good Omens, tweeted: “I will miss you, Terry, so much,” pointing to “the last thing I wrote about you”, on the Guardian.

“Terry Pratchett is not a jolly old elf at all,” wrote Gaiman last September. “Not even close. He’s so much more than that. As Terry walks into the darkness much too soon, I find myself raging too: at the injustice that deprives us of – what? Another 20 or 30 books? Another shelf-full of ideas and glorious phrases and old friends and new, of stories in which people do what they really do best, which is use their heads to get themselves out of the trouble they got into by not thinking? … I rage at the imminent loss of my friend. And I think, ‘What would Terry do with this anger?’ Then I pick up my pen, and I start to write.”

Appealing to readers to donate to Alzheimer’s research, Gaiman added on his blog: “Thirty years and a month ago, a beginning author met a young journalist in a Chinese Restaurant, and the two men became friends, and they wrote a book, and they managed to stay friends despite everything. Last night, the author died.

“There was nobody like him. I was fortunate to have written a book with him, when we were younger, which taught me so much.

“I knew his death was coming and it made it no easier.”

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle

The Internet 0f Th1ngs

Google-search-IoT

Technologist Marc Goodman describes a not too distant future in which all our appliances, tools, products… anything and everything is plugged into the so-called Internet of Things (IoT). The IoT describes a world where all things are connected to everything else, making for a global mesh of intelligent devices from your connected car and your WiFi enabled sneakers to your smartwatch and home thermostat. You may well believe it advantageous to have your refrigerator ping the local grocery store when it runs out of fresh eggs and milk or to have your toilet auto-call a local plumber when it gets stopped-up.

But, as our current Internet shows us — let’s call it the Internet of People — not all is rosy in this hyper-connected, 24/7, always-on digital ocean. What are you to do when hackers attack all your home appliances in a “denial of home service attack (DohS)”, or when your every move inside your home is scrutinized, collected, analyzed and sold to the nearest advertiser, or when your cooktop starts taking and sharing selfies with the neighbors?

Goodman’s new book on this important subject, excerpted here, is titled Future Crimes.

From the Guardian:

If we think of today’s internet metaphorically as about the size of a golf ball, tomorrow’s will be the size of the sun. Within the coming years, not only will every computer, phone and tablet be online, but so too will every car, house, dog, bridge, tunnel, cup, clock, watch, pacemaker, cow, streetlight, pipeline, toy and soda can. Though in 2013 there were only 13bn online devices, Cisco Systems has estimated that by 2020 there will be 50bn things connected to the internet, with room for exponential growth thereafter. As all of these devices come online and begin sharing data, they will bring with them massive improvements in logistics, employee efficiency, energy consumption, customer service and personal productivity.

This is the promise of the internet of things (IoT), a rapidly emerging new paradigm of computing that, when it takes off, may very well change the world we live in forever.

The Pew Research Center defines the internet of things as “a global, immersive, invisible, ambient networked computing environment built through the continued proliferation of smart sensors, cameras, software, databases, and massive data centres in a world-spanning information fabric”. Back in 1999, when the term was first coined by MIT researcher Kevin Ashton, the technology did not exist to make the IoT a reality outside very controlled environments, such as factory warehouses. Today we have low-powered, ultra-cheap computer chips, some as small as the head of a pin, that can be embedded in an infinite number of devices, some for mere pennies. These miniature computing devices only need milliwatts of electricity and can run for years on a minuscule battery or small solar cell. As a result, it is now possible to make a web server that fits on a fingertip for $1.

The microchips will receive data from a near-infinite range of sensors, minute devices capable of monitoring anything that can possibly be measured and recorded, including temperature, power, location, hydro-flow, radiation, atmospheric pressure, acceleration, altitude, sound and video. They will activate miniature switches, valves, servos, turbines and engines – and speak to the world using high-speed wireless data networks. They will communicate not only with the broader internet but with each other, generating unfathomable amounts of data. The result will be an always-on “global, immersive, invisible, ambient networked computing environment”, a mere prelude to the tidal wave of change coming next.

In the future all objects may be smart

The broad thrust sounds rosy. Because chips and sensors will be embedded in everyday objects, we will have much better information and convenience in our lives. Because your alarm clock is connected to the internet, it will be able to access and read your calendar. It will know where and when your first appointment of the day is and be able to cross-reference that information against the latest traffic conditions. Light traffic, you get to sleep an extra 10 minutes; heavy traffic, and you might find yourself waking up earlier than you had hoped.

When your alarm does go off, it will gently raise the lights in the house, perhaps turn up the heat or run your bath. The electronic pet door will open to let Fido into the backyard for his morning visit, and the coffeemaker will begin brewing your coffee. You won’t have to ask your kids if they’ve brushed their teeth; the chip in their toothbrush will send a message to your smartphone letting you know the task is done. As you walk out the door, you won’t have to worry about finding your keys; the beacon sensor on the key chain makes them locatable to within two inches. It will be as if the Jetsons era has finally arrived.

While the hype-o-meter on the IoT has been blinking red for some time, everything described above is already technically feasible. To be certain, there will be obstacles, in particular in relation to a lack of common technical standards, but a wide variety of companies, consortia and government agencies are hard at work to make the IoT a reality. The result will be our transition from connectivity to hyper-connectivity, and like all things Moore’s law related, it will be here sooner than we realise.

The IoT means that all physical objects in the future will be assigned an IP address and be transformed into information technologies. As a result, your lamp, cat or pot plant will be part of an IT network. Things that were previously silent will now have a voice, and every object will be able to tell its own story and history. The refrigerator will know exactly when it was manufactured, the names of the people who built it, what factory it came from, and the day it left the assembly line, arrived at the retailer, and joined your home network. It will keep track of every time its door has been opened and which one of your kids forgot to close it. When the refrigerator’s motor begins to fail, it can signal for help, and when it finally dies, it will tell us how to disassemble its parts and best recycle them. Buildings will know every person who has ever worked there, and streetlights every car that has ever driven by.

All of these objects will communicate with each other and have access to the massive processing and storage power of the cloud, further enhanced by additional mobile and social networks. In the future all objects may become smart, in fact much smarter than they are today, and as these devices become networked, they will develop their own limited form of sentience, resulting in a world in which people, data and things come together. As a consequence of the power of embedded computing, we will see billions of smart, connected things joining a global neural network in the cloud.

In this world, the unknowable suddenly becomes knowable. For example, groceries will be tracked from field to table, and restaurants will keep tabs on every plate, what’s on it, who ate from it, and how quickly the waiters are moving it from kitchen to customer. As a result, when the next E coli outbreak occurs, we won’t have to close 500 eateries and wonder if it was the chicken or beef that caused the problem. We will know exactly which restaurant, supplier and diner to contact to quickly resolve the problem. The IoT and its billions of sensors will create an ambient intelligence network that thinks, senses and feels and contributes profoundly to the knowable universe.

Things that used to make sense suddenly won’t, such as smoke detectors. Why do most smoke detectors do nothing more than make loud beeps if your life is in mortal danger because of fire? In the future, they will flash your bedroom lights to wake you, turn on your home stereo, play an MP3 audio file that loudly warns, “Fire, fire, fire.” They will also contact the fire department, call your neighbours (in case you are unconscious and in need of help), and automatically shut off flow to the gas appliances in the house.

The byproduct of the IoT will be a living, breathing, global information grid, and technology will come alive in ways we’ve never seen before, except in science fiction movies. As we venture down the path toward ubiquitous computing, the results and implications of the phenomenon are likely to be mind-blowing. Just as the introduction of electricity was astonishing in its day, it eventually faded into the background, becoming an imperceptible, omnipresent medium in constant interaction with the physical world. Before we let this happen, and for all the promise of the IoT, we must ask critically important questions about this brave new world. For just as electricity can shock and kill, so too can billions of connected things networked online.

One of the central premises of the IoT is that everyday objects will have the capacity to speak to us and to each other. This relies on a series of competing communications technologies and protocols, many of which are eminently hackable. Take radio-frequency identification (RFID) technology, considered by many the gateway to the IoT. Even if you are unfamiliar with the name, chances are you have already encountered it in your life, whether it’s the security ID card you use to swipe your way into your office, your “wave and pay” credit card, the key to your hotel room, your Oyster card.

Even if you don’t use an RFID card for work, there’s a good chance you either have it or will soon have it embedded in the credit card sitting in your wallet. Hackers have been able to break into these as well, using cheap RFID readers available on eBay for just $50, tools that allow an attacker to wirelessly capture a target’s credit card number, expiration date and security code. Welcome to pocket picking 2.0.

More productive and more prison-like

A much rarer breed of hacker targets the physical elements that make up a computer system, including the microchips, electronics, controllers, memory, circuits, components, transistors and sensors – core elements of the internet of things. These hackers attack a device’s firmware, the set of computer instructions present on every electronic device we encounter, including TVs, mobile phones, game consoles, digital cameras, network routers, alarm systems, CCTVs, USB drives, traffic lights, gas station pumps and smart home management systems. Before we add billions of hackable things and communicate with hackable data transmission protocols, important questions must be asked about the risks for the future of security, crime, terrorism, warfare and privacy.

In the same way our every move online can be tracked, recorded, sold and monetised today, so too will that be possible in the near future in the physical world. Real space will become just like cyberspace. With the widespread adoption of more networked devices, what people do in their homes, cars, workplaces, schools and communities will be subjected to increased monitoring and analysis by the corporations making these devices. Of course these data will be resold to advertisers, data brokers and governments, providing an unprecedented view into our daily lives. Unfortunately, just like our social, mobile, locational and financial information, our IoT data will leak, providing further profound capabilities to stalkers and other miscreants interested in persistently tracking us. While it would certainly be possible to establish regulations and build privacy protocols to protect consumers from such activities, the greater likelihood is that every IoT-enabled device, whether an iron, vacuum, refrigerator, thermostat or lightbulb, will come with terms of service that grant manufacturers access to all your data. More troublingly, while it may be theoretically possible to log off in cyberspace, in your well-connected smart home there will be no “opt-out” provision.

We may find ourselves interacting with thousands of little objects around us on a daily basis, each collecting seemingly innocuous bits of data 24/7, information these things will report to the cloud, where it will be processed, correlated, and reviewed. Your smart watch will reveal your lack of exercise to your health insurance company, your car will tell your insurer of your frequent speeding, and your dustbin will tell your local council that you are not following local recycling regulations. This is the “internet of stool pigeons”, and though it may sound far-fetched, it’s already happening. Progressive, one of the largest US auto insurance companies, offers discounted personalised rates based on your driving habits. “The better you drive, the more you can save,” according to its advertising. All drivers need to do to receive the lower pricing is agree to the installation of Progressive’s Snapshot black-box technology in their cars and to having their braking, acceleration and mileage persistently tracked.

The IoT will also provide vast new options for advertisers to reach out and touch you on every one of your new smart connected devices. Every time you go to your refrigerator to get ice, you will be presented with ads for products based on the food your refrigerator knows you’re most likely to buy. Screens too will be ubiquitous, and marketers are already planning for the bounty of advertising opportunities. In late 2013, Google sent a letter to the Securities and Exchange Commission noting, “we and other companies could [soon] be serving ads and other content on refrigerators, car dashboards, thermostats, glasses and watches, to name just a few possibilities.”

Knowing that Google can already read your Gmail, record your every web search, and track your physical location on your Android mobile phone, what new powerful insights into your personal life will the company develop when its entertainment system is in your car, its thermostat regulates the temperature in your home, and its smart watch monitors your physical activity?

Not only will RFID and other IoT communications technologies track inanimate objects, they will be used for tracking living things as well. The British government has considered implanting RFID chips directly under the skin of prisoners, as is common practice with dogs. School officials across the US have begun embedding RFID chips in student identity cards, which pupils are required to wear at all times. In Contra Costa County, California, preschoolers are now required to wear basketball-style jerseys with electronic tracking devices built in that allow teachers and administrators to know exactly where each student is. According to school district officials, the RFID system saves “3,000 labour hours a year in tracking and processing students”.

Meanwhile, the ability to track employees, how much time they take for lunch, the length of their toilet breaks and the number of widgets they produce will become easy. Moreover, even things such as words typed per minute, eye movements, total calls answered, respiration, time away from desk and attention to detail will be recorded. The result will be a modern workplace that is simultaneously more productive and more prison-like.

At the scene of a suspected crime, police will be able to interrogate the refrigerator and ask the equivalent of, “Hey, buddy, did you see anything?” Child social workers will know there haven’t been any milk or nappies in the home, and the only thing stored in the fridge has been beer for the past week. The IoT also opens up the world for “perfect enforcement”. When sensors are everywhere and all data is tracked and recorded, it becomes more likely that you will receive a moving violation for going 26 miles per hour in a 25-mile-per-hour zone and get a parking ticket for being 17 seconds over on your meter.

The former CIA director David Petraeus has noted that the IoT will be “transformational for clandestine tradecraft”. While the old model of corporate and government espionage might have involved hiding a bug under the table, tomorrow the very same information might be obtained by intercepting in real time the data sent from your Wi-Fi lightbulb to the lighting app on your smart phone. Thus the devices you thought were working for you may in fact be on somebody else’s payroll, particularly that of Crime, Inc.

A network of unintended consequences

For all the untold benefits of the IoT, its potential downsides are colossal. Adding 50bn new objects to the global information grid by 2020 means that each of these devices, for good or ill, will be able to potentially interact with the other 50bn connected objects on earth. The result will be 2.5 sextillion potential networked object-to-object interactions – a network so vast and complex it can scarcely be understood or modelled. The IoT will be a global network of unintended consequences and black swan events, ones that will do things nobody ever planned. In this world, it is impossible to know the consequences of connecting your home’s networked blender to the same information grid as an ambulance in Tokyo, a bridge in Sydney, or a Detroit auto manufacturer’s production line.

The vast levels of cyber crime we currently face make it abundantly clear we cannot even adequately protect the standard desktops and laptops we presently have online, let alone the hundreds of millions of mobile phones and tablets we are adding annually. In what vision of the future, then, is it conceivable that we will be able to protect the next 50bn things, from pets to pacemakers to self-driving cars? The obvious reality is that we cannot.

Our technological threat surface area is growing exponentially and we have no idea how to defend it effectively. The internet of things will become nothing more than the Internet of things to be hacked.
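As an aside, Goodman’s “2.5 sextillion” figure is straightforward arithmetic on Cisco’s 50bn estimate: squaring the device count gives the number of ordered object-to-object pairings. A quick sanity check in Python (the comparison with unordered pairs is my own addition, not from the excerpt):

```python
# Cisco's quoted estimate: 50bn connected objects by 2020
n = 50_000_000_000

# Ordered object-to-object interactions, as the excerpt counts them
ordered = n * n  # 2.5e21, i.e. 2.5 sextillion

# Unordered pairs, for comparison (each pairing counted once)
unordered = n * (n - 1) // 2

print(f"ordered:   {ordered:.1e}")    # 2.5e+21
print(f"unordered: {unordered:.1e}")  # 1.2e+21
```

Either way you count, the network dwarfs anything we have ever tried to secure or model.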

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle

Luck

Four-leaf_clover

Some think they have it constantly at their side, like a well-trained puppy. Others crave and seek it. And yet others believe they have been shunned by it. Some put their love lives down to it, and many believe it has had a hand in guiding their careers, friendships, and finances. Of course, many know that it — luck — plays a crucial part in their fortunes at the poker table, roulette wheel or at the races. So what really is luck? Does it stem from within, or does it envelop us like a (mostly) benevolent aether? And more importantly, how can more of us find some and tune it to our purposes?

Carlin Flora over at Aeon presents an insightful analysis, with some rather simple answers. Oh, and you may wish to give away that rabbit’s foot.

From Aeon:

In 1992, Archie Karas, then a waiter, headed out to Las Vegas. By 1995, he had turned $50 into $40 million, in what has become known as the biggest winning streak in gambling history. Most of us would call it an instance of great luck, or we might say of Archie himself: ‘What a lucky guy!’ The cold-hearted statistician would laugh at our superstitious notions, and instead describe a series of chance processes that happened to work out for Karas. In the larger landscape where randomness reigns, anything can happen at any given casino. Calling its beneficiaries lucky is simply sticking a label on it after the fact.

To investigate luck is to take on one of the grandest of all questions: how can we explain what happens to us, and whether we will be winners, losers or somewhere in the middle at love, work, sports, gambling and life overall? As it turns out, new findings suggest that luck is not a phenomenon that appears exclusively in hindsight, like a hail storm on your wedding day. Nor is it an expression of our desire to see patterns where none exist, like a conviction that your yellow sweater is lucky. The concept of luck is not a myth.

Instead, the studies show, luck can be powered by past good or bad luck, personality and, in a meta-twist, even our own ideas and beliefs about luck itself. Lucky streaks are real, but they are the product of more than just blind fate. Our ideas about luck influence the way we behave in risky situations. We really can make our own luck, though we don’t like to think of ourselves as lucky – a descriptor that undermines other qualities, like talent and skill. Luck can be a force, but it’s one we interact with, shape and cultivate. Luck helps determine our fate here on Earth, even if you think its ultimate cause divine.

Luck is perspective and point of view: if a secular man happened to survive because he took a meeting outside his office at the World Trade Center on the morning of 11 September 2001, he might simply acknowledge random chance in life without assigning a deeper meaning. A Hindu might conclude he had good karma. A Christian might say God was watching out for him so that he could fulfil a special destiny in His service. The mystic could insist he was born under lucky stars, as others are born with green eyes.

Traditionally, the Chinese think luck is an inner trait, like intelligence or upbeat mood, notes Maia Young, a management expert at the University of California, Los Angeles. ‘My mom always used to tell me, “You have a lucky nose”, because its particular shape was a lucky one, according to Chinese lore.’ Growing up in the American Midwest, it dawned on Young that the fleeting luck that Americans often talked about – a luck that seemed to visit the same person at certain times (‘I got lucky on that test!’) but not others (‘I got caught in traffic before my interview!’) – was not equivalent to the unchanging, stable luck her mother saw in her daughter, her nose being an advertisement of its existence within.

‘It’s something that I have that’s a possession of mine, that can be more relied upon than just dumb luck,’ says Young. The distinction stuck with her. You might think someone with a lucky nose wouldn’t roll up their sleeves to work hard – why bother? – but here’s another cultural difference in perceptions of luck. ‘In Chinese culture,’ she says, ‘hard work can go hand-in-hand with being lucky. The belief system accommodates both.’

On the other hand, because Westerners see effort and good fortune as taking up opposite corners of the ring, they are ambivalent about luck. They might pray for it and sincerely wish others they care about ‘Good luck!’ but sometimes they just don’t want to think of themselves as lucky. They’d rather be deserving. The fact that they live in a society that is neither random nor wholly meritocratic makes for an even messier slamdance between ‘hard work’ and ‘luck’. Case in point: when a friend gets into a top law or medical school, we might say: ‘Congratulations! You’ve persevered. You deserve it.’ Were she not to get in, we would say: ‘Acceptance is arbitrary. Everyone’s qualified these days – it’s the luck of the draw.’

Read the entire article here.

Image: Four-leaf clover. Some consider it a sign of good luck. Courtesy of Phyzome.

Send to Kindle

Nuisance Flooding = Sea-Level Rise

hurricane_andrew

Government officials in Florida are barred from using the terms “climate change”, “global warming”, “sustainable” and other related terms. Apparently, they’ll have to use the euphemism “nuisance flooding” in place of “sea-level rise”. One wonders what literary trick they’ll conjure up next time the state gets hit by a hurricane — “Oh, that? Just a ‘mischievous little breeze’, I’m not a scientist you know.”

From the Guardian:

Officials with the Florida Department of Environmental Protection (DEP), the agency in charge of setting conservation policy and enforcing environmental laws in the state, issued directives in 2011 barring thousands of employees from using the phrases “climate change” and “global warming”, according to a bombshell report by the Florida Center for Investigative Reporting (FCIR).

The report ties the alleged policy, which is described as “unwritten”, to the election of Republican governor Rick Scott and his appointment of a new department director that year. Scott, who was re-elected last November, has declined to say whether he believes in climate change caused by human activity.

“I’m not a scientist,” he said in one appearance last May.

Scott’s office did not return a call Sunday from the Guardian, seeking comment. A spokesperson for the governor told the FCIR team: “There’s no policy on this.”

The FCIR report was based on statements by multiple named former employees who worked in different DEP offices around Florida. The instruction not to refer to “climate change” came from agency supervisors as well as lawyers, according to the report.

“We were told not to use the terms ‘climate change’, ‘global warming’ or ‘sustainability’,” the report quotes Christopher Byrd, who was an attorney with the DEP’s Office of General Counsel in Tallahassee from 2008 to 2013, as saying. “That message was communicated to me and my colleagues by our superiors in the Office of General Counsel.”

“We were instructed by our regional administrator that we were no longer allowed to use the terms ‘global warming’ or ‘climate change’ or even ‘sea-level rise’,” said a second former DEP employee, Kristina Trotta. “Sea-level rise was to be referred to as ‘nuisance flooding’.”

According to the employees’ accounts, the ban left damaging holes in everything from educational material published by the agency to training programs to annual reports on the environment that could be used to set energy and business policy.

The 2014 national climate assessment for the US found an “imminent threat of increased inland flooding” in Florida due to climate change and called the state “uniquely vulnerable to sea level rise”.

Read the entire story here.

Image: Hurricane Floyd 1999, a “mischievous little breeze”. Courtesy of NASA.

The Power of Mediocrity

Over-achievers may well frown upon the slacking mediocre souls who strive to do less. But mediocrity has a way of pervading the lives of the constantly striving, 18-hour-a-day multi-taskers as well. The figure of speech “jack of all trades, master of none” sums up the inevitability of mediocrity for those who strive to do everything, but do nothing well. In fact, pursuit of the mediocre may well be an immutable universal law — both for under-achievers and over-achievers, and for that vast, second-rate, mediocre middle-ground of averageness.

From the Guardian:

In the early years of the last century, Spanish philosopher José Ortega y Gasset proposed a solution to society’s ills that still strikes me as ingenious, in a deranged way. He argued that all public sector workers from the top down (though, come to think of it, why not everyone else, too?) should be demoted to the level beneath their current job. His reasoning foreshadowed the Peter Principle: in hierarchies, people “rise to their level of incompetence”. Do your job well, and you’re rewarded with promotion, until you reach a job you’re less good at, where you remain.

In a recent book, The Hard Thing About Hard Things, the tech investor Ben Horowitz adds a twist: “The Law of Crappy People”. As soon as someone on a given rung at a company gets as good as the worst person the next rung up, he or she may expect a promotion. Yet, if it’s granted, the firm’s talent levels will gradually slide downhill. No one person need be peculiarly crappy for this to occur; bureaucracies just tend to be crappier than the sum of their parts.

Yet it’s wrong to think of these pitfalls as restricted to organisations. There’s a case to be made that the gravitational pull of the mediocre affects all life – as John Stuart Mill put it, that “the general tendency of things throughout the world is to render mediocrity the ascendant power among mankind”. True, it’s most obvious in the workplace (hence the observation that “a meeting moves at the pace of the slowest mind in the room”), but the broader point is that in any domain – work, love, friendship, health – crappy solutions crowd out good ones time after time, so long as they’re not so bad as to destroy the system. People and organisations hit plateaux not because they couldn’t do better, but because a plateau is a tolerable, even comfortable place. Even evolution – life itself! – is all about mediocrity. “Survival of the fittest” isn’t a progression towards greatness; it just means the survival of the sufficiently non-terrible.

And mediocrity is cunning: it can disguise itself as achievement. The cliche of a “mediocre” worker is a Dilbert-esque manager with little to do. But as Greg McKeown notes, in his book Essentialism: The Disciplined Pursuit Of Less, the busyness of the go-getter can lead to mediocrity, too. Throw yourself at every opportunity and you’ll end up doing unimportant stuff – and badly. You can’t fight this with motivational tricks or cheesy mission statements: you need a discipline, a rule you apply daily, to counter the pull of the sub-par. For a company, that might mean stricter, more objective promotion policies. For the over-busy person, there’s McKeown’s “90% Rule” – when considering an option, ask: does it score at least 9/10 on some relevant criterion? If not, say no. (Ideally, that criterion is: “Is this fulfilling?”, but the rule still works if it’s “Does this pay the bills?”).

Read the entire story here.

The Demise of the Language of Landscape

IMG_2006

In his new book Landmarks, author Robert Macfarlane ponders the relationship of words to our natural landscape. Reviewers describe the book as a “field guide to the literature of nature”. Sadly, Macfarlane’s detailed research for the book chronicles a disturbing trend: the culling of many words from our everyday lexicon that describe the natural world, to make way for the buzzwords of progress. This substitution comes in the form of newer memes that describe our narrow, urbanized and increasingly virtual world circumscribed by technology. Macfarlane cites the Oxford Junior Dictionary (OJD) as a vivid example of the evisceration of our language of landscape. The OJD has removed words such as acorn, beech, conker, dandelion, heather, heron, kingfisher, pasture and willow. In their place we now find words like attachment, blog, broadband, bullet-point, celebrity, chatroom, cut-and-paste, MP3 player and voice-mail. Get the idea?

I’m no fundamentalist Luddite — I’m writing a blog after all — but surely some aspects of our heritage warrant protection. We are an intrinsic part of the natural environment despite our increasing urbanization. Don’t we all crave the escape to a place where we can lounge under a drooping willow, surrounded by nothing more than the buzzing of insects and the babbling of a stream? I’d rather that than deal with the next attachment or voice-mail.

What a loss it would be for our children, and a double-edged loss at that. We, the preceding generation, continue to preside over the systematic destruction of our natural landscape. And, in doing so, we remove the words as well — the words that once described what we still crave.

From the Guardian:

Eight years ago, in the coastal township of Shawbost on the Outer Hebridean island of Lewis, I was given an extraordinary document. It was entitled “Some Lewis Moorland Terms: A Peat Glossary”, and it listed Gaelic words and phrases for aspects of the tawny moorland that fills Lewis’s interior. Reading the glossary, I was amazed by the compressive elegance of its lexis, and its capacity for fine discrimination: a caochan, for instance, is “a slender moor-stream obscured by vegetation such that it is virtually hidden from sight”, while a feadan is “a small stream running from a moorland loch”, and a fèith is “a fine vein-like watercourse running through peat, often dry in the summer”. Other terms were striking for their visual poetry: rionnach maoim means “the shadows cast on the moorland by clouds moving across the sky on a bright and windy day”; èit refers to “the practice of placing quartz stones in streams so that they sparkle in moonlight and thereby attract salmon to them in the late summer and autumn”, and teine biorach is “the flame or will-o’-the-wisp that runs on top of heather when the moor burns during the summer”.

The “Peat Glossary” set my head a-whirr with wonder-words. It ran to several pages and more than 120 terms – and as that modest “Some” in its title acknowledged, it was incomplete. “There’s so much language to be added to it,” one of its compilers, Anne Campbell, told me. “It represents only three villages’ worth of words. I have a friend from South Uist who said her grandmother would add dozens to it. Every village in the upper islands would have its different phrases to contribute.” I thought of Norman MacCaig’s great Hebridean poem “By the Graveyard, Luskentyre”, where he imagines creating a dictionary out of the language of Donnie, a lobster fisherman from the Isle of Harris. It would be an impossible book, MacCaig concluded:

A volume thick as the height of the Clisham,

A volume big as the whole of Harris,

A volume beyond the wit of scholars.

The same summer I was on Lewis, a new edition of the Oxford Junior Dictionary was published. A sharp-eyed reader noticed that there had been a culling of words concerning nature. Under pressure, Oxford University Press revealed a list of the entries it no longer felt to be relevant to a modern-day childhood. The deletions included acorn, adder, ash, beech, bluebell, buttercup, catkin, conker, cowslip, cygnet, dandelion, fern, hazel, heather, heron, ivy, kingfisher, lark, mistletoe, nectar, newt, otter, pasture and willow. The words taking their places in the new edition included attachment, block-graph, blog, broadband, bullet-point, celebrity, chatroom, committee, cut-and-paste, MP3 player and voice-mail. As I had been entranced by the language preserved in the prose-poem of the “Peat Glossary”, so I was dismayed by the language that had fallen (been pushed) from the dictionary. For blackberry, read Blackberry.

I have long been fascinated by the relations of language and landscape – by the power of strong style and single words to shape our senses of place. And it has become a habit, while travelling in Britain and Ireland, to note down place words as I encounter them: terms for particular aspects of terrain, elements, light and creaturely life, or resonant place names. I’ve scribbled these words in the backs of notebooks, or jotted them down on scraps of paper. Usually, I’ve gleaned them singly from conversations, maps or books. Now and then I’ve hit buried treasure in the form of vernacular word-lists or remarkable people – troves that have held gleaming handfuls of coinages, like the Lewisian “Peat Glossary”.

Not long after returning from Lewis, and spurred on by the Oxford deletions, I resolved to put my word-collecting on a more active footing, and to build up my own glossaries of place words. It seemed to me then that although we have fabulous compendia of flora, fauna and insects (Richard Mabey’s Flora Britannica and Mark Cocker’s Birds Britannica chief among them), we lack a Terra Britannica, as it were: a gathering of terms for the land and its weathers – terms used by crofters, fishermen, farmers, sailors, scientists, miners, climbers, soldiers, shepherds, poets, walkers and unrecorded others for whom particularised ways of describing place have been vital to everyday practice and perception. It seemed, too, that it might be worth assembling some of this terrifically fine-grained vocabulary – and releasing it back into imaginative circulation, as a way to rewild our language. I wanted to answer Norman MacCaig’s entreaty in his Luskentyre poem: “Scholars, I plead with you, / Where are your dictionaries of the wind … ?”

Read the entire article here and then buy the book, which is published in March 2015.

Image: Sunset over the Front Range. Courtesy of the author.

News Anchor as Cult Hero

Google-search-news-anchor

Why and when did the news anchor, or newsreader as he or she is known outside the US, acquire the status of cult hero? And why is this a peculiarly US phenomenon? Let’s face it: TV newsreaders in the UK, on the BBC or ITV, certainly do not have a following along the lines of their US celebrity counterparts like Brian Williams, Megyn Kelly or Anderson Cooper. Why?

From the Guardian:

A game! Spot the odd one out in the following story. This year has been a terrible one so far for those who care about American journalism: the much-loved New York Times journalist David Carr died suddenly on 12 February; CBS correspondent Bob Simon was killed in a car crash the day before; Jon Stewart, famously the “leading news source for young Americans”, announced that he is quitting the Daily Show; his colleague Stephen Colbert is moving over from news satire to the softer arena of a nightly talk show; NBC anchor Brian Williams, as famous in America as Jeremy Paxman is in Britain, has been suspended after it was revealed he had “misremembered” events involving himself while covering the war in Iraq; Bill O’Reilly, an anchor on Fox News, the most watched cable news channel in the US, has been accused of being on similarly vague terms with the truth.

News of the Fox News anchor probably sounds like “dog bites man” to most Britons, who remember that this network recently described Birmingham as a no-go area for non-Muslims. But this latest scandal involving O’Reilly reveals something quite telling about journalism in America.

Whereas in Britain journalists are generally viewed as occupying a place on the food chain somewhere between bottom-feeders and cockroaches, in America there remains, still, a certain idealisation of journalists, protected by a gilded halo hammered out by sentimental memories of Edward R Murrow and Walter Cronkite.

Even while Americans’ trust in mass media continues to plummet, journalists enjoy a kind of heroic fame that would baffle their British counterparts. Television anchors and commentators, from Rachel Maddow on the left to Sean Hannity on the right, are lionised in a way that, say, Huw Edwards, is, quite frankly, not. A whole genre of film exists in the US celebrating the heroism of journalists, from All the President’s Men to Good Night, and Good Luck. In Britain, probably the most popular depiction of journalists came from Spitting Image, where they were snuffling pigs in pork-pie hats.

So whenever a journalist in the US has been caught lying, the ensuing soul-searching and garment-rending discovery has been about as prolonged and painful as a PhD on proctology. The New York Times and the New Republic both imploded when it was revealed that their journalists, respectively Jayson Blair and Stephen Glass, had fabricated their stories. Their tales have become part of American popular culture – The Wire referenced Blair in its fifth season and a film was made about the New Republic’s scandal – like national myths that must never be forgotten.

By contrast, when it was revealed that The Independent’s Johann Hari had committed plagiarism and slandered his colleagues on Wikipedia, various journalists wrote bewildering defences of him and the then Independent editor said initially that Hari would return to the paper. Whereas Hari’s return to the public sphere three years after his resignation has been largely welcomed by the British media, Glass and Blair remain shunned figures in the US, more than a decade after their scandals.

Which brings us back to the O’Reilly scandal, now unfolding in the US. Once it was revealed that NBC’s liberal Brian Williams had exaggerated personal anecdotes – claiming to have been in a helicopter that was shot at when he was in the one behind, for starters – the hunt was inevitably on for an equally big conservative news scalp. Enter stage left: Bill O’Reilly.

So sure, O’Reilly claimed that in his career he has been in “active war zones” and “in the Falklands” when he in fact covered a protest in Buenos Aires during the Falklands war. And sure, O’Reilly’s characteristically bullish defence that he “never said” he was “on the Falkland Islands” (original quote: “I was in a situation one time, in a war zone in Argentina, in the Falklands …”) and that being at a protest thousands of miles from combat constitutes “a war zone” verges on the officially bonkers (as the Washington Post put it, “that would mean that any reporter who covered an anti-war protest in Washington during the Iraq War was doing combat reporting”). But does any of this bother either O’Reilly or Fox News? It does not.

Unlike Williams, who slunk away in shame, O’Reilly has been bullishly combative, threatening journalists who dare to cover the story and saying that they deserve to be “in the kill zone”. Fox News too has been predictably untroubled by allegations of lies: “Fox News chairman and CEO Roger Ailes and all senior management are in full support of Bill O’Reilly,” it said in a statement.

Read the entire story here.

Image courtesy of Google Search.

The US Senator From Oklahoma and the Snowball

By their own admission, Republicans in the US Congress are not scientists, and clearly most, if not all, have no grasp of science, the scientific method, or the meaning of scientific theory or broad scientific consensus. The Senator from Oklahoma, James Inhofe, is the perfect embodiment of this extraordinary condition — perhaps a psychosis even — whereby a human living in the 21st century has no clue. Senator Inhofe recently gave us his infantile analysis of climate change on the Senate floor, accompanied by a snowball. This will make you first laugh, then cry.

From Scientific American:

“In case we have forgotten, because we keep hearing that 2014 has been the warmest year on record, I ask the chair, you know what this is? It’s a snowball. And that’s just from outside here. So it’s very, very cold out.”

Oklahoma Senator James Inhofe, the biggest and loudest climate change denier in Congress, last week on the floor of the senate. But his facile argument, that it’s cold enough for snow to exist in Washington, D.C., therefore climate change is a hoax, was rebutted in the same venue by Rhode Island Senator Sheldon Whitehouse:

“You can believe NASA and you can believe what their satellites measure on the planet, or you can believe the Senator with the snowball. The United States Navy takes this very seriously, to the point where Admiral Locklear, who is the head of the Pacific Command, has said that climate change is the biggest threat that we face in the Pacific…you can either believe the United States Navy or you can believe the Senator with the snowball…every major American scientific society has put itself on record, many of them a decade ago, that climate change is deadly real. They measure it, they see it, they know why it happens. The predictions correlate with what we see as they increasingly come true. And the fundamental principles, that it is derived from carbon pollution, which comes from burning fossil fuels, are beyond legitimate dispute…so you can believe every single major American scientific society, or you can believe the Senator with the snowball.”

Read the entire story here.

Video: Senator Inhofe with Snowball. Courtesy of C-Span.

Time For a New Body, Literally

Brainthatwouldntdie_film_poster

Let me be clear. I’m not referring to a hair transplant, but a head transplant.

A disturbing story has been making the media rounds recently. Dr. Sergio Canavero from the Turin Advanced Neuromodulation Group in Italy, suggests that the time is right to attempt the transplantation of a human head onto a different body. Canavero believes that advances in surgical techniques and immunotherapy are such that a transplantation could be attempted by 2017. Interestingly enough, he has already had several people volunteer for a new body.

Ethics aside, it certainly doesn’t stretch the imagination to believe Hollywood’s elite would clamor for this treatment. Now, I wonder if some people, liking their own body, would want a new head?

From New Scientist:

It’s heady stuff. The world’s first attempt to transplant a human head will be launched this year at a surgical conference in the US. The move is a call to arms to get interested parties together to work towards the surgery.

The idea was first proposed in 2013 by Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy. He wants to use the surgery to extend the lives of people whose muscles and nerves have degenerated or whose organs are riddled with cancer. Now he claims the major hurdles, such as fusing the spinal cord and preventing the body’s immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017.

Canavero plans to announce the project at the annual conference of the American Academy of Neurological and Orthopaedic Surgeons (AANOS) in Annapolis, Maryland, in June. Is society ready for such momentous surgery? And does the science even stand up?

The first attempt at a head transplant was carried out on a dog by Soviet surgeon Vladimir Demikhov in 1954. A puppy’s head and forelegs were transplanted onto the back of a larger dog. Demikhov conducted several further attempts but the dogs only survived between two and six days.

The first successful head transplant, in which one head was replaced by another, was carried out in 1970. A team led by Robert White at Case Western Reserve University School of Medicine in Cleveland, Ohio, transplanted the head of one monkey onto the body of another. They didn’t attempt to join the spinal cords, though, so the monkey couldn’t move its body, but it was able to breathe with artificial assistance. The monkey lived for nine days until its immune system rejected the head. Although few head transplants have been carried out since, many of the surgical procedures involved have progressed. “I think we are now at a point when the technical aspects are all feasible,” says Canavero.

This month, he published a summary of the technique he believes will allow doctors to transplant a head onto a new body (Surgical Neurology International, doi.org/2c7). It involves cooling the recipient’s head and the donor body to extend the time their cells can survive without oxygen. The tissue around the neck is dissected and the major blood vessels are linked using tiny tubes, before the spinal cords of each person are cut. Cleanly severing the cords is key, says Canavero.

The recipient’s head is then moved onto the donor body and the two ends of the spinal cord – which resemble two densely packed bundles of spaghetti – are fused together. To achieve this, Canavero intends to flush the area with a chemical called polyethylene glycol, and follow up with several hours of injections of the same stuff. Just like hot water makes dry spaghetti stick together, polyethylene glycol encourages the fat in cell membranes to mesh.

Next, the muscles and blood supply would be sutured and the recipient kept in a coma for three or four weeks to prevent movement. Implanted electrodes would provide regular electrical stimulation to the spinal cord, because research suggests this can strengthen new nerve connections.

When the recipient wakes up, Canavero predicts they would be able to move and feel their face and would speak with the same voice. He says that physiotherapy would enable the person to walk within a year. Several people have already volunteered to get a new body, he says.

The trickiest part will be getting the spinal cords to fuse. Polyethylene glycol has been shown to prompt the growth of spinal cord nerves in animals, and Canavero intends to use brain-dead organ donors to test the technique. However, others are sceptical that this would be enough. “There is no evidence that the connectivity of cord and brain would lead to useful sentient or motor function following head transplantation,” says Richard Borgens, director of the Center for Paralysis Research at Purdue University in West Lafayette, Indiana.

Read the entire article here.

Image: Theatrical poster for the movie The Brain That Wouldn’t Die (1962). Courtesy of Wikipedia.

Jon Ronson Versus His Spambot Infomorph Imposter

While this may sound like a 1980s monster flick, it’s rather more serious.

Author, journalist, filmmaker Jon Ronson weaves a fun but sinister tale of the theft of his own identity. The protagonists: a researcher in technology and cyberculture, a so-called “creative technologist” and a university lecturer in English and American literature. Not your typical collection of “identity thieves”, trolls, revenge pornographers, and online shamers. But an unnerving, predatory trio nevertheless.

From the Guardian:

In early January 2012, I noticed that another Jon Ronson had started posting on Twitter. His photograph was a photograph of my face. His Twitter name was @jon_ronson. His most recent tweet read: “Going home. Gotta get the recipe for a huge plate of guarana and mussel in a bap with mayonnaise :D #yummy.”

“Who are you?” I tweeted him.

“Watching #Seinfeld. I would love a big plate of celeriac, grouper and sour cream kebab with lemongrass #foodie,” he tweeted. I didn’t know what to do.

The next morning, I checked @jon_ronson’s timeline before I checked my own. In the night he had tweeted, “I’m dreaming something about #time and #cock.” He had 20 followers.

I did some digging. A young academic from Warwick University called Luke Robert Mason had a few weeks earlier posted a comment on the Guardian site. It was in response to a short video I had made about spambots. “We’ve built Jon his very own infomorph,” he wrote. “You can follow him on Twitter here: @jon_ronson.”

I tweeted him: “Hi!! Will you take down your spambot please?”

Ten minutes passed. Then he replied, “We prefer the term infomorph.”

“But it’s taken my identity,” I wrote.

“The infomorph isn’t taking your identity,” he wrote back. “It is repurposing social media data into an infomorphic aesthetic.”

I felt a tightness in my chest.

“#woohoo damn, I’m in the mood for a tidy plate of onion grill with crusty bread. #foodie,” @jon_ronson tweeted.

I was at war with a robot version of myself.

A month passed. @jon_ronson was tweeting 20 times a day about its whirlwind of social engagements, its “soirées” and wide circle of friends. The spambot left me feeling powerless and sullied.

I tweeted Luke Robert Mason. If he was adamant that he wouldn’t take down his spambot, perhaps we could at least meet? I could film the encounter and put it on YouTube. He agreed.

I rented a room in central London. He arrived with two other men – the team behind the spambot. All three were academics. Luke was the youngest, handsome, in his 20s, a “researcher in technology and cyberculture and director of the Virtual Futures conference”. David Bausola was a “creative technologist” and the CEO of the digital agency Philter Phactory. Dan O’Hara had a shaved head and a clenched jaw. He was in his late 30s, a lecturer in English and American literature at the University of Cologne.

I spelled out my grievances. “Academics,” I began, “don’t swoop into a person’s life uninvited and use him for some kind of academic exercise, and when I ask you to take it down you’re, ‘Oh, it’s not a spambot, it’s an infomorph.’”

Dan nodded. He leaned forward. “There must be lots of Jon Ronsons out there?” he began. “People with your name? Yes?”

I looked suspiciously at him. “I’m sure there are people with my name,” I replied, carefully.

“I’ve got the same problem,” Dan said with a smile. “There’s another academic out there with my name.”

“You don’t have exactly the same problem as me,” I said, “because my exact problem is that three strangers have stolen my identity and have created a robot version of me and are refusing to take it down.”

Dan let out a long-suffering sigh. “You’re saying, ‘There is only one Jon Ronson’,” he said. “You’re proposing yourself as the real McCoy, as it were, and you want to maintain that integrity and authenticity. Yes?”

I stared at him.

“We’re not quite persuaded by that,” he continued. “We think there’s already a layer of artifice and it’s your online personality – the brand Jon Ronson – you’re trying to protect. Yeah?”

“No, it’s just me tweeting,” I yelled.

“The internet is not the real world,” said Dan.

“I write my tweets,” I replied. “And I press send. So it’s me on Twitter.” We glared at each other. “That’s not academic,” I said. “That’s not postmodern. That’s the fact of it. It’s a misrepresentation of me.”

“You’d like it to be more like you?” Dan said.

“I’d like it to not exist,” I said.

“I find that quite aggressive,” he said. “You’d like to kill these algorithms? You must feel threatened in some way.” He gave me a concerned look. “We don’t go around generally trying to kill things we find annoying.”

“You’re a troll!” I yelled.

I dreaded uploading the footage to YouTube, because I’d been so screechy. I steeled myself for mocking comments and posted it. I left it 10 minutes. Then, with apprehension, I had a look.

“This is identity theft,” read the first comment I saw. “They should respect Jon’s personal liberty.”

Read the entire story here.

Video: JON VS JON Part 2 | Escape and Control. Courtesy of Jon Ronson.

Another London Bridge

nep-bridge-008

I don’t live in London. But having been born and raised there, I still have a particular affinity for this great city. So, when the London Borough of Wandsworth recently published submissions for a new bridge over the River Thames, I had to survey the designs. More than 70 teams have submitted ideas since the process was opened to competition in December 2014. The bridge will eventually span the river between Nine Elms and Pimlico.

Please check out the official designs here. Some are quite extraordinary.

Image: Scheme 008. Courtesy of Nine Elms to Pimlico (NEP) Bridge Competition, London Borough of Wandsworth.

A Physics Based Theory of Life

Carnot_heat_engine

Those who subscribe to a non-creationist theory of the origins of life tend to gravitate towards the idea of the assembly of self-replicating, organic molecules in our primeval oceans — the so-called primordial soup theory. Recently, however, professor Jeremy England of MIT has proposed a thermodynamic explanation, which posits that inorganic matter tends to organize — under the right conditions — in a way that enables it to dissipate increasing amounts of energy. This is one of the fundamental attributes of living organisms.

Could we be the product of the Second Law of Thermodynamics, nothing more than the expression of increasing entropy?

Read more of this fascinating new hypothesis below or check out England’s paper on the Statistical Physics of Self-replication.

From Quanta:

Why does life exist?

Popular hypotheses credit a primordial soup, a bolt of lightning and a colossal stroke of luck. But if a provocative new theory is correct, luck may have little to do with it. Instead, according to the physicist proposing the idea, the origin and subsequent evolution of life follow from the fundamental laws of nature and “should be as unsurprising as rocks rolling downhill.”

From the standpoint of physics, there is one essential difference between living things and inanimate clumps of carbon atoms: The former tend to be much better at capturing energy from their environment and dissipating that energy as heat. Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life.

“You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant,” England said.

England’s theory is meant to underlie, rather than replace, Darwin’s theory of evolution by natural selection, which provides a powerful description of life at the level of genes and populations. “I am certainly not saying that Darwinian ideas are wrong,” he explained. “On the contrary, I am just saying that from the perspective of the physics, you might call Darwinian evolution a special case of a more general phenomenon.”

His idea, detailed in a recent paper and further elaborated in a talk he is delivering at universities around the world, has sparked controversy among his colleagues, who see it as either tenuous or a potential breakthrough, or both.

England has taken “a very brave and very important step,” said Alexander Grosberg, a professor of physics at New York University who has followed England’s work since its early stages. The “big hope” is that he has identified the underlying physical principle driving the origin and evolution of life, Grosberg said.

“Jeremy is just about the brightest young scientist I ever came across,” said Attila Szabo, a biophysicist in the Laboratory of Chemical Physics at the National Institutes of Health who corresponded with England about his theory after meeting him at a conference. “I was struck by the originality of the ideas.”

Others, such as Eugene Shakhnovich, a professor of chemistry, chemical biology and biophysics at Harvard University, are not convinced. “Jeremy’s ideas are interesting and potentially promising, but at this point are extremely speculative, especially as applied to life phenomena,” Shakhnovich said.

England’s theoretical results are generally considered valid. It is his interpretation — that his formula represents the driving force behind a class of phenomena in nature that includes life — that remains unproven. But already, there are ideas about how to test that interpretation in the lab.

“He’s trying something radically different,” said Mara Prentiss, a professor of physics at Harvard who is contemplating such an experiment after learning about England’s work. “As an organizing lens, I think he has a fabulous idea. Right or wrong, it’s going to be very much worth the investigation.”

At the heart of England’s idea is the second law of thermodynamics, also known as the law of increasing entropy or the “arrow of time.” Hot things cool down, gas diffuses through air, eggs scramble but never spontaneously unscramble; in short, energy tends to disperse or spread out as time progresses. Entropy is a measure of this tendency, quantifying how dispersed the energy is among the particles in a system, and how diffuse those particles are throughout space. It increases as a simple matter of probability: There are more ways for energy to be spread out than for it to be concentrated. Thus, as particles in a system move around and interact, they will, through sheer chance, tend to adopt configurations in which the energy is spread out. Eventually, the system arrives at a state of maximum entropy called “thermodynamic equilibrium,” in which energy is uniformly distributed. A cup of coffee and the room it sits in become the same temperature, for example. As long as the cup and the room are left alone, this process is irreversible. The coffee never spontaneously heats up again because the odds are overwhelmingly stacked against so much of the room’s energy randomly concentrating in its atoms.
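The claim that dispersed configurations simply outnumber concentrated ones can be made concrete with a toy model. The sketch below uses the standard Einstein-solid picture (not anything from England’s paper): energy quanta distributed among oscillators, with the multiplicity counted by a binomial coefficient. The specific numbers (100 oscillators, 100 quanta) are illustrative assumptions.

```python
from math import comb

def multiplicity(quanta, oscillators):
    # Number of ways to distribute indistinguishable energy quanta
    # among distinguishable oscillators (Einstein-solid model):
    # a "stars and bars" binomial coefficient.
    return comb(quanta + oscillators - 1, quanta)

N, q = 100, 100  # two coupled solids of 100 oscillators each, 100 quanta total

# All energy piled into solid A versus energy shared evenly between A and B:
concentrated = multiplicity(q, N) * multiplicity(0, N)
shared = multiplicity(q // 2, N) * multiplicity(q // 2, N)

print(shared / concentrated)  # evenly shared states outnumber concentrated ones by many orders of magnitude
```

Even at this tiny scale the evenly shared arrangement corresponds to vastly more microstates, which is why a system wandering at random overwhelmingly ends up with its energy spread out.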

Although entropy must increase over time in an isolated or “closed” system, an “open” system can keep its entropy low — that is, divide energy unevenly among its atoms — by greatly increasing the entropy of its surroundings. In his influential 1944 monograph “What Is Life?” the eminent quantum physicist Erwin Schrödinger argued that this is what living things must do. A plant, for example, absorbs extremely energetic sunlight, uses it to build sugars, and ejects infrared light, a much less concentrated form of energy. The overall entropy of the universe increases during photosynthesis as the sunlight dissipates, even as the plant prevents itself from decaying by maintaining an orderly internal structure.
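Schrödinger’s bookkeeping can be checked with back-of-the-envelope arithmetic: entropy change is roughly energy over temperature, so absorbing energy from hot sunlight and re-radiating it at cool terrestrial temperatures produces a net entropy gain for the surroundings. The figures below are illustrative assumptions, not numbers from the article.

```python
# Rough entropy bookkeeping for Schrödinger's plant.
Q = 1000.0        # joules of energy processed (arbitrary illustrative amount)
T_SUN = 5800.0    # effective temperature of sunlight, kelvin
T_EARTH = 300.0   # approximate temperature of the re-emitted infrared, kelvin

dS_sun = -Q / T_SUN     # entropy the radiation field loses at the hot source
dS_earth = Q / T_EARTH  # entropy dumped into the cool surroundings

net = dS_sun + dS_earth  # in joules per kelvin
print(net)  # positive: total entropy rises even as the plant stays ordered
```

Because the same energy leaves at a much lower temperature than it arrived, the surroundings gain far more entropy than the sunlight lost, leaving plenty of room for the plant to maintain its own low-entropy order.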

Life does not violate the second law of thermodynamics, but until recently, physicists were unable to use thermodynamics to explain why it should arise in the first place. In Schrödinger’s day, they could solve the equations of thermodynamics only for closed systems in equilibrium. In the 1960s, the Belgian physicist Ilya Prigogine made progress on predicting the behavior of open systems weakly driven by external energy sources (for which he won the 1977 Nobel Prize in chemistry). But the behavior of systems that are far from equilibrium, which are connected to the outside environment and strongly driven by external sources of energy, could not be predicted.

Read the entire story here.

Image: Carnot engine diagram, where an amount of heat QH flows from a high temperature TH furnace through the fluid of the “working body” (working substance) and the remaining heat QC flows into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions. Courtesy of Wikipedia.

Send to Kindle

Net Neutrality Lives!

The US Federal Communications Commission (FCC) took a giant step in the right direction on February 26, 2015, when it voted to regulate broadband internet much like a public utility. This is a great victory for net neutrality advocates and consumers, who had long sought to protect equal access for all to online services and information. Tim Berners-Lee, inventor of the World Wide Web, offered his support and praise for the ruling, saying:

“It’s about consumer rights, it’s about free speech, it’s about democracy.”

From the Guardian:

Internet activists scored a landmark victory on Thursday as the top US telecommunications regulator approved a plan to govern broadband internet like a public utility.

Following one of the most intense – and bizarre – lobbying battles in the history of modern Washington politics, the Federal Communications Commission (FCC) passed strict new rules that give the body its greatest power over the cable industry since the internet went mainstream.

FCC chairman Tom Wheeler – a former telecom lobbyist turned surprise hero of net-neutrality supporters – thanked the 4m people who had submitted comments on the new rules. “Your participation has made this the most open process in FCC history,” he said. “We listened and we learned.”

Wheeler said that while other countries were trying to control the internet, the sweeping new US protections on net neutrality – the concept that all information and services should have equal access online – represented “a red-letter day for internet freedom”.

“The internet is simply too important to be left without rules and without a referee on the field,” said Wheeler. “Today’s order is more powerful and more expansive than any previously suggested.”

Broadband providers will be banned from creating so-called “fast lanes” and from blocking or slowing traffic online, and the rules will cover mobile broadband as well as cable. The FCC will also have the authority to challenge unforeseen barriers broadband providers might create as the internet develops.

Activists and tech companies argue the new rules are vital to protect net neutrality – the concept that all information and services should have equal access to the internet. The FCC’s two Republican commissioners, Ajit Pai and Michael O’Rielly, voted against the plan but were overruled at a much anticipated meeting by three Democratic members on the panel.

Republicans have long fought the FCC’s net neutrality protections, arguing the rules will create an unnecessary burden on business. They have accused Barack Obama of bullying the regulator into the move in order to score political points, with conservative lawmakers and potential 2016 presidential candidates expected to keep the fight going well into that election campaign.

Pai said the FCC was flip-flopping for “one reason and one reason only: president Obama told us to do so”.

Wheeler dismissed accusations of a “secret” plan as “nonsense”. “This is no more a plan to regulate the internet than the first amendment is a plan to regulate free speech,” Wheeler said.

“This is the FCC using all the tools in our toolbox to protect innovators and consumers.”

Obama offered his support to the rules late last year, following an online activism campaign that pitched internet organisers and companies from Netflix and Reddit to the online craft market Etsy and I Can Has Cheezburger? – weblog home of the Lolcats meme – against Republican leaders and the cable and telecom lobbies.

Broadband will now be regulated under Title II of the Communications Act — the strongest legal authority at the FCC’s disposal. Obama called on the independent regulator to implement Title II last year, leading to charges, now being investigated in Congress, that he unduly influenced Wheeler’s decision.

A small band of protesters gathered in the snow outside the FCC’s Washington headquarters before the meeting on Thursday, in celebration of their success in lobbying for a dramatic U-turn in regulation. Wheeler and his Democratic colleagues, Mignon Clyburn and Jessica Rosenworcel, were cheered as they sat down for the meeting.

Joining the activists outside was Apple co-founder Steve Wozniak, who said the FCC also needed more power to prevent future attacks on the open internet.

“We have won on net neutrality,” Wozniak told the Guardian. “This is important because they don’t want the FCC to have oversight over other bad stuff.”

Tim Berners-Lee, inventor of the world wide web, addressed the meeting via video, saying he applauded the FCC’s decision to protect net neutrality: “More than anything else, the action you take today will preserve the reality of a permissionless innovation that is the heart of the internet.”

“It’s about consumer rights, it’s about free speech, it’s about democracy,” Berners-Lee said.

Clyburn compared the new rules to the Bill of Rights. “We are here to ensure that there is only one internet,” she said. “We want to ensure that those with deep pockets have the same opportunity as those with empty pockets to succeed.”

Read the entire story here.

Send to Kindle