Spam, Spam, Spam: All Natural


Parents through the ages have often decried the mangling of their mother tongue by subsequent generations. Language is fluid, after all, particularly English, and our youth constantly add their own revisions to carve a divergent path from their elders. But, the real focus of our disdain for the ongoing destruction of our linguistic heritage should be corporations and their hordes of marketeers and lawyers. Take the once simple and meaningful word “natural”. You’ll see its oxymoronic application each time you stroll the aisles of your grocery store: one hundred percent natural fruit roll-ups; all natural chicken rings; completely natural corn-dogs; totally naturally flavored cheese puffs. The word — natural — has become meaningless.

From NYT:

It isn’t every day that the definition of a common English word that is ubiquitous in common parlance is challenged in federal court, but that is precisely what has happened with the word “natural.” During the past few years, some 200 class-action suits have been filed against food manufacturers, charging them with misuse of the adjective in marketing such edible oxymorons as “natural” Cheetos Puffs, “all-natural” Sun Chips, “all-natural” Naked Juice, “100 percent all-natural” Tyson chicken nuggets and so forth. The plaintiffs argue that many of these products contain ingredients — high-fructose corn syrup, artificial flavors and colorings, chemical preservatives and genetically modified organisms — that the typical consumer wouldn’t think of as “natural.”

Judges hearing these cases — many of them in the Northern District of California — have sought a standard definition of the adjective that they could cite to adjudicate these claims, only to discover that no such thing exists.

Something in the human mind, or heart, seems to need a word of praise for all that humanity hasn’t contaminated, and for us that word now is “natural.” Such an ideal can be put to all sorts of rhetorical uses. Among the antivaccination crowd, for example, it’s not uncommon to read about the superiority of something called “natural immunity,” brought about by exposure to the pathogen in question rather than to the deactivated (and therefore harmless) version of it made by humans in laboratories. “When you inject a vaccine into the body,” reads a post on an antivaxxer website, Campaign for Truth in Medicine, “you’re actually performing an unnatural act.” This, of course, is the very same term once used to decry homosexuality and, more recently, same-sex marriage, which the Family Research Council has taken to comparing unfavorably to what it calls “natural marriage.”

So what are we really talking about when we talk about natural? It depends; the adjective is impressively slippery, its use steeped in dubious assumptions that are easy to overlook. Perhaps the most incoherent of these is the notion that nature consists of everything in the world except us and all that we have done or made. In our heart of hearts, it seems, we are all creationists.

In the case of “natural immunity,” the modifier implies the absence of human intervention, allowing for a process to unfold as it would if we did nothing, as in “letting nature take its course.” In fact, most of medicine sets itself against nature’s course, which is precisely what we like about it — at least when it’s saving us from dying, an eventuality that is perhaps more natural than it is desirable.

Yet sometimes medicine’s interventions are unwelcome or go overboard, and nature’s way of doing things can serve as a useful corrective. This seems to be especially true at the beginning and end of life, where we’ve seen a backlash against humanity’s technological ingenuity that has given us both “natural childbirth” and, more recently, “natural death.”

This last phrase, which I expect will soon be on many doctors’ lips, indicates the enduring power of the adjective to improve just about anything you attach it to, from cereal bars all the way on up to dying. It seems that getting end-of-life patients and their families to endorse “do not resuscitate” orders has been challenging. To many ears, “D.N.R.” sounds a little too much like throwing Grandpa under the bus. But according to a paper in The Journal of Medical Ethics, when the orders are reworded to say “allow natural death,” patients and family members and even medical professionals are much more likely to give their consent to what amounts to exactly the same protocols.

The word means something a little different when applied to human behavior rather than biology (let alone snack foods). When marriage or certain sexual practices are described as “natural,” the word is being strategically deployed as a synonym for “normal” or “traditional,” neither of which carries nearly as much rhetorical weight. “Normal” is by now too obviously soaked in moral bigotry; by comparison, “natural” seems to float high above human squabbling, offering a kind of secular version of what used to be called divine law. Of course, that’s exactly the role that “natural law” played for America’s founding fathers, who invoked nature rather than God as the granter of rights and the arbiter of right and wrong.

Read the entire article here.

Image courtesy of Google Search.


The Rich and Powerful Live by Different Rules

Never has there been a more glaring example of blatant hypocrisy, this time courtesy of the United States Department of Justice. It would be refreshing to remind our leaders that not only do “Black Lives Matter”, but “Less Privileged Lives Matter” as well.

Former CIA director and ex-four-star general David Petraeus, no less, copped a mere two years of probation and a $100,000 fine for leaking classified information to his biographer. Chelsea Manning, formerly Bradley Manning, an intelligence analyst and ex-Army private, was sentenced to 35 years in prison in 2013 for disclosing classified documents to WikiLeaks.

And, there are many other similar examples.

We wince when hearing of oligarchic corruption and favoritism in other nations, such as Russia and China. But, in this country, it goes by the euphemism of “justice”, so it must be OK.

From arstechnica:

Yesterday [April 23, 2015], former CIA Director David Petraeus was handed two years of probation and a $100,000 fine after agreeing to a plea deal that ends in no jail time for leaking classified information to Paula Broadwell, his biographer and lover.

“I now look forward to moving on with the next phase of my life and continuing to serve our great nation as a private citizen,” Petraeus said outside the federal courthouse in Charlotte, North Carolina on Thursday.

Lower-level government leakers have not, however, been as likely to walk out of a courthouse applauding the US as Petraeus did. Trevor Timm, executive director of the Freedom of the Press Foundation, called the Petraeus plea deal a “gross hypocrisy.”

“At the same time as Petraeus got off virtually scot-free, the Justice Department has been bringing the hammer down upon other leakers who talk to journalists—sometimes for disclosing information much less sensitive than Petraeus did,” he said.

The Petraeus sentencing came days after the Justice Department demanded (PDF) up to a 24-year-term for Jeffrey Sterling, a former CIA agent who leaked information to a Pulitzer Prize-winning writer about a botched mission to sell nuclear plans to Iran in order to hinder its nuclear-weapons progress.

“A substantial sentence in this case would send an appropriate and much needed message to all persons entrusted with the handling of classified information, i.e., that intentional breaches of the laws governing the safeguarding of national defense information will be pursued aggressively, and those who violate the law in this manner will be tried, convicted, and punished accordingly,” the Justice Department argued in Sterling’s case this week.

The Daily Beast sums up the argument that the Petraeus deal involves a double standard by noting other recent penalties for lower-level leakers:

“Chelsea Manning, formerly Bradley Manning, was sentenced to 35 years in prison in 2013 for disclosing classified documents to WikiLeaks. Stephen Jin-Woo Kim, a former State Department contractor, entered a guilty plea last year to one felony count of disclosing classified information to a Fox News reporter in February 2014. He was sentenced to 13 months in prison. On Monday, prosecutors urged a judge to sentence Jeffrey Sterling, a former CIA officer, to at least 20 years in prison for leaking classified plans to sabotage Iran’s nuclear-weapons program to a New York Times reporter. Sterling will be sentenced next month. And former CIA officer John C. Kiriakou served 30 months in federal prison after he disclosed the name of a covert operative to a reporter. He was released in February and is finishing up three months of house arrest.”

The information Petraeus was accused of leaking, according to the original indictment, contained “classified information regarding the identities of covert officers, war strategy, intelligence capabilities and mechanisms, diplomatic discussions, quotes and deliberative discussions from high-level National Security Council meetings.” The leak also included “discussions with the president of the United States.”

The judge presiding over the case, US Magistrate Judge David Keesler, increased the government’s recommended fine of $40,000 to $100,000 because of Petraeus’ “grave but uncharacteristic error in judgement.”

Read the entire story here.

Images: Four-Star General David Petraeus; Private Chelsea Manning. Courtesy of Wikipedia.

Belief and the Falling Light

[tube]dpmXyJrs7iU[/tube]

Many of us now accept that lights falling from the sky are rocky interlopers from our solar system’s asteroid belt, rather than visiting angels or signs from an angry (or mysteriously benevolent) God. New analysis of the meteor that exploded over Chelyabinsk, Russia, in 2013 suggests that one of the key founders of Christianity may have witnessed a similar natural phenomenon around two thousand years ago. At the time, however, Saul (later to become Paul the Apostle) interpreted the dazzling light on the road to Damascus, recounted in the Acts of the Apostles, as a message from God. The rest, as they say, is history. Luckily, recent scientific progress means that most of us no longer establish new religious movements based on fireballs in the sky. But, we are awed nonetheless.

From the New Scientist:

Nearly two thousand years ago, a man named Saul had an experience that changed his life, and possibly yours as well. According to Acts of the Apostles, the fifth book of the biblical New Testament, Saul was on the road to Damascus, Syria, when he saw a bright light in the sky, was blinded and heard the voice of Jesus. Changing his name to Paul, he became a major figure in the spread of Christianity.

William Hartmann, co-founder of the Planetary Science Institute in Tucson, Arizona, has a different explanation for what happened to Paul. He says the biblical descriptions of Paul’s experience closely match accounts of the fireball meteor seen above Chelyabinsk, Russia, in 2013.

Hartmann has detailed his argument in the journal Meteoritics & Planetary Science (doi.org/3vn). He analyses three accounts of Paul’s journey, thought to have taken place around AD 35. The first is a third-person description of the event, thought to be the work of one of Jesus’s disciples, Luke. The other two quote what Paul is said to have subsequently told others.

“Everything they are describing in those three accounts in the book of Acts are exactly the sequence you see with a fireball,” Hartmann says. “If that first-century document had been anything other than part of the Bible, that would have been a straightforward story.”

But the Bible is not just any ancient text. Paul’s Damascene conversion and subsequent missionary journeys around the Mediterranean helped build Christianity into the religion it is today. If his conversion was indeed as Hartmann explains it, then a random space rock has played a major role in determining the course of history (see “Christianity minus Paul”).

That’s not as strange as it sounds. A large asteroid impact helped kill off the dinosaurs, paving the way for mammals to dominate the Earth. So why couldn’t a meteor influence the evolution of our beliefs?

“It’s well recorded that extraterrestrial impacts have helped to shape the evolution of life on this planet,” says Bill Cooke, head of NASA’s Meteoroid Environment Office in Huntsville, Alabama. “If it was a Chelyabinsk fireball that was responsible for Paul’s conversion, then obviously that had a great impact on the growth of Christianity.”

Hartmann’s argument is possible now because of the quality of observations of the Chelyabinsk incident. The 2013 meteor is the most well-documented example of larger impacts that occur perhaps only once in 100 years. Before 2013, the 1908 blast in Tunguska, also in Russia, was the best example, but it left just a scattering of seismic data, millions of flattened trees and some eyewitness accounts. With Chelyabinsk, there is a clear scientific argument to be made, says Hartmann. “We have observational data that match what we see in this first-century account.”

Read the entire article here.

Video: Meteor above Chelyabinsk, Russia in 2013. Courtesy of Tuvix72.

Endless Political Campaigning


The great capitalist market has decided: endless political campaigning in the United States is beneficial. If you think the presidential campaign to elect the next leader in 2016 began sometime last year, you are not mistaken. In fact, it really does seem that political posturing for the next election often begins before the current one is even decided. We all complain: too many ads, too much negativity, far too much inanity and little substance. Yet, we allow the process to continue, and to grow in scale. Would you put up with a political campaign that lasts a mere 38 days? The British seem to manage it. But, then again, the United States is so much more advanced, right?

From WSJ:

On March 23, Ted Cruz announced he is running for president in a packed auditorium at Liberty University in Lynchburg, Va. On April 7, Rand Paul announced he is running for president amid the riverboat décor of the Galt House hotel in Louisville, Ky. On April 12, Hillary Clinton announced she is running for president in a brief segment of a two-minute video. On April 13, Marco Rubio announced he is running before a cheering crowd at the Freedom Tower in Miami. And these are just the official announcements.

Jeb Bush made it known in December that he is interested in running. Scott Walker’s rousing speech at the Freedom Summit in Des Moines, Iowa, on Jan. 24 left no doubt that he will enter the race. Chris Christie’s appearance in New Hampshire last week strongly suggests the same. Previous presidential candidates Mike Huckabee, Rick Perry and Rick Santorum seem almost certain to run. Pediatric neurosurgeon Ben Carson is reportedly ready to announce his run on May 4 at the Detroit Music Hall.

With some 570 days left until Election Day 2016, the race for president is very much under way—to the dismay of a great many Americans. They find the news coverage of the candidates tiresome (what did Hillary order at Chipotle?), are depressed by the negative campaigning that is inevitable in an adversarial process, and dread the onslaught of political TV ads. Too much too soon!

They also note that other countries somehow manage to select their heads of government much more quickly. The U.K. has a general election campaign going on right now. It began on March 30, when the queen, on the advice of the prime minister, dissolved Parliament, and voting will take place on May 7. That’s 38 days later. Britons are complaining that the electioneering goes on too long.

American presidential campaigns did not always begin so soon, but they have for more than a generation now. As a young journalist, Sidney Blumenthal (in recent decades a consigliere to the Clintons) wrote quite a good book titled “The Permanent Campaign.” It was published in 1980. Mr. Blumenthal described what was then a relatively new phenomenon.

When Jimmy Carter announced his candidacy for president in January 1975, he was not taken particularly seriously. But his perseverance paid off, and he took the oath of office two years later. His successors—Ronald Reagan, George H.W. Bush and Bill Clinton—announced their runs in the fall before their election years, although they had all been busy assembling campaigns before that. George W. Bush announced in June 1999, after the adjournment of the Texas legislature. Barack Obama announced in February 2007, two days before Lincoln’s birthday, in Lincoln’s Springfield, Ill. By that standard, declared candidates Mr. Cruz, Mr. Paul, Mrs. Clinton and Mr. Rubio got a bit of a late start.

Why are American presidential campaigns so lengthy? And is there anything that can be done to compress them to a bearable timetable?

One clue to the answers: The presidential nominating process, the weakest part of our political system, is also the one part that was not envisioned by the Founding Fathers. The framers of the Constitution created a powerful presidency, confident (justifiably, as it turned out) that its first incumbent, George Washington, would set precedents that would guide the republic for years to come.

But they did not foresee that even in Washington’s presidency, Americans would develop political parties, which they abhorred. The Founders expected that later presidents would be chosen, usually by the House of Representatives, from local notables promoted by different states in the Electoral College. They did not expect that the Federalist and Republican parties would coalesce around two national leaders—Washington’s vice president, John Adams, and Washington’s first secretary of state, Thomas Jefferson—in the close elections of 1796 and 1800.

The issue then became: When a president followed George Washington’s precedent and retired after two terms, how would the parties choose nominees, in a republic that, from the start, was regionally, ethnically and religiously diverse?

Read the entire story here.

Image courtesy of Google Search.

Religious Dogma and DNA

Despite ongoing conflicts around the globe that are fueled or governed by religious fanaticism, it is entirely plausible that our general tendency toward supernatural belief is encoded in our DNA. Of course, this does not mean that a God, or various gods, exist; it merely implies that, over time, natural selection generally favored those who believed in deities over those who did not. We are such complex and contradictory animals.

From NYT:

Most of us find it mind-boggling that some people seem willing to ignore the facts — on climate change, on vaccines, on health care — if the facts conflict with their sense of what someone like them believes. “But those are the facts,” you want to say. “It seems weird to deny them.”

And yet a broad group of scholars is beginning to demonstrate that religious belief and factual belief are indeed different kinds of mental creatures. People process evidence differently when they think with a factual mind-set rather than with a religious mind-set. Even what they count as evidence is different. And they are motivated differently, based on what they conclude. On what grounds do scholars make such claims?

First of all, they have noticed that the very language people use changes when they talk about religious beings, and the changes mean that they think about their realness differently. You do not say, “I believe that my dog is alive.” The fact is so obvious it is not worth stating. You simply talk in ways that presume the dog’s aliveness — you say she’s adorable or hungry or in need of a walk. But to say, “I believe that Jesus Christ is alive” signals that you know that other people might not think so. It also asserts reverence and piety. We seem to regard religious beliefs and factual beliefs with what the philosopher Neil Van Leeuwen calls different “cognitive attitudes.”

Second, these scholars have remarked that when people consider the truth of a religious belief, what the belief does for their lives matters more than, well, the facts. We evaluate factual beliefs often with perceptual evidence. If I believe that the dog is in the study but I find her in the kitchen, I change my belief. We evaluate religious beliefs more with our sense of destiny, purpose and the way we think the world should be. One study found that over 70 percent of people who left a religious cult did so because of a conflict of values. They did not complain that the leader’s views were mistaken. They believed that he was a bad person.

Third, these scholars have found that religious and factual beliefs play different roles in interpreting the same events. Religious beliefs explain why, rather than how. People who understand readily that diseases are caused by natural processes might still attribute sickness at a particular time to demons, or healing to an act of God. The psychologist Cristine H. Legare and her colleagues recently demonstrated that people use both natural and supernatural explanations in this interdependent way across many cultures. They tell a story, as recounted by Tracy Kidder’s book on the anthropologist and physician Paul Farmer, about a woman who had taken her tuberculosis medication and been cured — and who then told Dr. Farmer that she was going to get back at the person who had used sorcery to make her ill. “But if you believe that,” he cried, “why did you take your medicines?” In response to the great doctor she replied, in essence, “Honey, are you incapable of complexity?”

Moreover, people’s reliance on supernatural explanations increases as they age. It may be tempting to think that children are more likely than adults to reach out to magic to explain something, and that they increasingly put that mind-set to the side as they grow up, but the reverse is true. It’s the young kids who seem skeptical when researchers ask them about gods and ancestors, and the adults who seem clear and firm. It seems that supernatural ideas do things for adults they do not yet do for children.

Finally, scholars have determined that people don’t use rational, instrumental reasoning when they deal with religious beliefs. The anthropologist Scott Atran and his colleagues have shown that sacred values are immune to the normal cost-benefit trade-offs that govern other dimensions of our lives. Sacred values are insensitive to quantity (one cartoon can be a profound insult). They don’t respond to material incentives (if you offer people money to give up something that represents their sacred value, they often become more intractable in their refusal). Sacred values may even have different neural signatures in the brain.

The danger point seems to be when people feel themselves to be completely fused with a group defined by its sacred value. When Mr. Atran and his colleagues surveyed young men in two Moroccan neighborhoods associated with militant jihad (one of them home to five men who helped plot the 2004 Madrid train bombings, and then blew themselves up), they found that those who described themselves as closest to their friends and who upheld Shariah law were also more likely to say that they would suffer grievous harm to defend Shariah law. These people become what Mr. Atran calls “devoted actors” who are unconditionally committed to their sacred value, and they are willing to die for it.

Read the entire article here.

Dark Matter May Cause Cancer and Earthquakes


Leave aside the fact that there is no direct evidence for the existence of dark matter. In fact, the theories that indirectly point to its existence seem rather questionable as well. That said, cosmologists are increasingly convinced that dark matter’s gravitational effects can be inferred from recent observations of gravitationally lensed galaxy clusters. Some researchers postulate that this eerily murky non-substance — it doesn’t interact with anything in our visible universe except, perhaps, through gravity — may even lie behind events much closer to home. All very interesting.

From NYT:

Earlier this year, Dr. Sabine Hossenfelder, a theoretical physicist in Stockholm, made the jarring suggestion that dark matter might cause cancer. She was not talking about the “dark matter” of the genome (another term for junk DNA) but about the hypothetical, lightless particles that cosmologists believe pervade the universe and hold the galaxies together.

Though it has yet to be directly detected, dark matter is presumed to exist because we can see the effects of its gravity. As its invisible particles pass through our bodies, they could be mutating DNA, the theory goes, adding at an extremely low level to the overall rate of cancer.

It was unsettling to see two such seemingly different realms, cosmology and oncology, suddenly juxtaposed. But that was just the beginning. Shortly after Dr. Hossenfelder broached her idea in an online essay, Michael Rampino, a professor at New York University, added geology and paleontology to the picture.

Dark matter, he proposed in an article for the Royal Astronomical Society, is responsible for the mass extinctions that have periodically swept Earth, including the one that killed the dinosaurs.

His idea is based on speculations by other scientists that the Milky Way is sliced horizontally through its center by a thin disk of dark matter. As the sun, traveling around the galaxy, bobs up and down through this darkling plane, it generates gravitational ripples strong enough to dislodge distant comets from their orbits, sending them hurtling toward Earth.

An earlier version of this hypothesis was put forth last year by the Harvard physicists Lisa Randall and Matthew Reece. But Dr. Rampino has added another twist: During Earth’s galactic voyage, dark matter accumulates in its core. There the particles self-destruct, generating enough heat to cause deadly volcanic eruptions. Struck from above and below, the dinosaurs succumbed.

It is surprising to see something as abstract as dark matter take on so much solidity, at least in the human mind. The idea was invented in the early 1930s as a theoretical contrivance — a means of explaining observations that otherwise didn’t make sense.

Galaxies appear to be rotating so fast that they should have spun apart long ago, throwing off stars like sparks from a Fourth of July pinwheel. There just isn’t enough gravity to hold a galaxy together, unless you assume that it hides a huge amount of unseen matter — particles that neither emit nor absorb light.

Some mavericks propose alternatives, attempting to tweak the equations of gravity to account for what seems like missing mass. But for most cosmologists, the idea of unseeable matter has become so deeply ingrained that it has become almost impossible to do without it.

Said to be five times more abundant than the stuff we can see, dark matter is a crucial component of the theory behind gravitational lensing, in which large masses like galaxies can bend light beams and cause stars to appear in unexpected parts of the sky.

That was the explanation for the spectacular observation of an “Einstein Cross” reported last month. Acting like an enormous lens, a cluster of galaxies deflected the light of a supernova into four images — a cosmological mirage. The light for each reflection followed a different path, providing glimpses of four different moments of the explosion.


But not even a galactic cluster exerts enough gravity to bend light so severely unless you postulate that most of its mass consists of hypothetical dark matter. In fact, astronomers are so sure that dark matter exists that they have embraced gravitational lensing as a tool to map its extent.

Dark matter, in other words, is used to explain gravitational lensing, and gravitational lensing is taken as more evidence for dark matter.

Some skeptics have wondered if this is a modern-day version of what ancient astronomers called “saving the phenomena.” With enough elaborations, a theory can account for what we see without necessarily describing reality. The classic example is the geocentric model of the heavens that Ptolemy laid out in the Almagest, with the planets orbiting Earth along paths of complex curlicues.

Ptolemy apparently didn’t care whether his filigrees were real. What was important to him was that his model worked, predicting planetary movements with great precision.

Modern scientists are not ready to settle for such subterfuge. To show that dark matter resides in the world and not just in their equations, they are trying to detect it directly.

Though its identity remains unknown, most theorists are betting that dark matter consists of WIMPs — weakly interacting massive particles. If they really exist, it might be possible to glimpse them when they interact with ordinary matter.
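
As an aside, the rotation-curve argument quoted above invites a quick back-of-the-envelope check. The short Python sketch below is my own illustration, using deliberately round numbers (roughly 60 billion suns of luminous matter and an observed orbital speed of about 220 km/s at a radius of 30 kiloparsecs) rather than anything cited in the article.

# A rough sketch of the galaxy rotation-curve argument for dark matter.
# All numbers below are illustrative approximations, not values from the article.

from math import sqrt

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # one solar mass, kg
KPC = 3.086e19             # one kiloparsec, m

r = 30 * KPC               # a radius well outside the visible stellar disk
m_visible = 6e10 * M_SUN   # approximate luminous (stars plus gas) mass enclosed
v_observed = 220e3         # measured orbital speed at that radius, m/s (roughly flat)

# For a circular orbit, gravity supplies the centripetal force:
#   G * M / r**2 = v**2 / r, so v = sqrt(G * M / r)
v_expected = sqrt(G * m_visible / r)

# Invert the same relation: how much enclosed mass does the observed speed imply?
m_required = v_observed ** 2 * r / G

print(f"speed supported by visible mass: {v_expected / 1e3:.0f} km/s")
print(f"speed actually observed:         {v_observed / 1e3:.0f} km/s")
print(f"mass implied by observed speed:  {m_required / M_SUN:.1e} solar masses")
print(f"ratio of implied to visible:     {m_required / m_visible:.1f}")

With these rough inputs the visible mass supports only about 90 km/s, and matching the observed speed requires five to six times as much matter as we can see, which is the same order of discrepancy the article describes.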

Read the entire article here.

Image: Abell 1689 galaxy cluster. Courtesy of NASA, ESA, and D. Coe (NASA JPL/Caltech and STScI).

MondayMap: Imagining a Post-Post-Ottoman World


The United States is often portrayed as the world’s bully and nefarious geo-political schemer — a nation responsible for many of the world’s current political ills. However, it is the French and British who should be called to account for much of the globe’s ongoing turmoil, particularly in the Middle East. After the end of WWI, the victors expeditiously carved up the spoils of the vanquished Austro-Hungarian and Ottoman Empires. Much of Eastern Europe and the Middle East was divvied up and traded just as kids might swap baseball or football (soccer) cards today. French Prime Minister Georges Clemenceau and British Prime Minister David Lloyd George famously bartered and gifted — amongst themselves and their friends — entire regions and cities without thought to historical precedent, geographic and ethnic boundaries, or even the basic needs of entire populations. Their decisions were merely lines to be drawn and re-drawn on a map.

So, it would be a fascinating — though rather naive — exercise to re-draw many of today’s arbitrary and contrived boundaries, and to revert regions to their more appropriate owners. Of course, where and when should this thought experiment begin and end? Pre-Roman Empire, post-Normans, before the Prussians, prior to the Austro-Hungarian Empire, or after the Ottomans, post-Soviets, or after Tito, or way back before the Huns, the Vandals and any number of other barbarian and Germanic tribes?

Nevertheless, essayist Yaroslav Trofimov takes a stab at re-districting to pre-Ottoman boundaries and imagines a world with less bloodshed. A worthy dream.

From WSJ:

Shortly after the end of World War I, the French and British prime ministers took a break from the hard business of redrawing the map of Europe to discuss the easier matter of where frontiers would run in the newly conquered Middle East.

Two years earlier, in 1916, the two allies had agreed on their respective zones of influence in a secret pact—known as the Sykes-Picot agreement—for divvying up the region. But now the Ottoman Empire lay defeated, and the United Kingdom, having done most of the fighting against the Turks, felt that it had earned a juicier reward.

“Tell me what you want,” France’s Georges Clemenceau said to Britain’s David Lloyd George as they strolled in the French embassy in London.

“I want Mosul,” the British prime minister replied.

“You shall have it. Anything else?” Clemenceau asked.

In a few seconds, it was done. The huge Ottoman imperial province of Mosul, home to Sunni Arabs and Kurds and to plentiful oil, ended up as part of the newly created country of Iraq, not the newly created country of Syria.

The Ottomans ran a multilingual, multireligious empire, ruled by a sultan who also bore the title of caliph—commander of all the world’s Muslims. Having joined the losing side in the Great War, however, the Ottomans saw their empire summarily dismantled by European statesmen who knew little about the region’s people, geography and customs.

The resulting Middle Eastern states were often artificial creations, sometimes with implausibly straight lines for borders. They have kept going since then, by and large, remaining within their colonial-era frontiers despite repeated attempts at pan-Arab unification.

The built-in imbalances in some of these newly carved-out states—particularly Syria and Iraq—spawned brutal dictatorships that succeeded for decades in suppressing restive majorities and perpetuating the rule of minority groups.

But now it may all be coming to an end. Syria and Iraq have effectively ceased to function as states. Large parts of both countries lie beyond central government control, and the very meaning of Syrian and Iraqi nationhood has been hollowed out by the dominance of sectarian and ethnic identities.

The rise of Islamic State is the direct result of this meltdown. The Sunni extremist group’s leader, Abu Bakr al-Baghdadi, has proclaimed himself the new caliph and vowed to erase the shame of the “Sykes-Picot conspiracy.” After his men surged from their stronghold in Syria last summer and captured Mosul, now one of Iraq’s largest cities, he promised to destroy the old borders. In that offensive, one of the first actions taken by ISIS (as his group is also known) was to blow up the customs checkpoints between Syria and Iraq.

“What we are witnessing is the demise of the post-Ottoman order, the demise of the legitimate states,” says Francis Ricciardone, a former U.S. ambassador to Turkey and Egypt who is now at the Atlantic Council, a Washington think tank. “ISIS is a piece of that, and it is filling in a vacuum of the collapse of that order.”

In the mayhem now engulfing the Middle East, it is mostly the countries created a century ago by European colonialists that are coming apart. In the region’s more “natural” nations, a much stronger sense of shared history and tradition has, so far, prevented a similar implosion.

“Much of the conflict in the Middle East is the result of insecurity of contrived states,” says Husain Haqqani, an author and a former Pakistani ambassador to the U.S. “Contrived states need state ideologies to make up for lack of history and often flex muscles against their own people or against neighbors to consolidate their identity.”

In Egypt, with its millennial history and strong sense of identity, almost nobody questioned the country’s basic “Egyptian-ness” throughout the upheaval that has followed President Hosni Mubarak’s ouster in a 2011 revolution. As a result, most of Egypt’s institutions have survived the turbulence relatively intact, and violence has stopped well short of outright civil war.

Turkey and Iran—both of them, in bygone eras, the center of vast empires—have also gone largely unscathed in recent years, even though both have large ethnic minorities of their own, including Arabs and Kurds.

The Middle East’s “contrived” countries weren’t necessarily doomed to failure, and some of them—notably Jordan—aren’t collapsing, at least not yet. The world, after all, is full of multiethnic and multiconfessional states that are successful and prosperous, from Switzerland to Singapore to the U.S., which remains a relative newcomer as a nation compared with, say, Iran.

Read the entire article here.

Image: Map of Sykes–Picot Agreement showing Eastern Turkey in Asia, Syria and Western Persia, and areas of control and influence agreed between the British and the French. Royal Geographical Society, 1910-15. Signed by Mark Sykes and François Georges-Picot, 8 May 1916. Courtesy of Wikipedia.


Yes M’Lady


Beneath the shell that envelops us as adults lies the child. We all have one inside — that vulnerable being who dreams, plays and improvises. Sadly, our contemporary society does a wonderful job of selectively numbing these traits, usually as soon as we enter school; our work finishes the process by quashing all remnants of our once colorful and unbounded imaginations. OK, I’m exaggerating a little to make my point. But I’m certain this strikes a chord.

Keeping this in mind, it’s awesomely brilliant to see Thunderbirds making a comeback. You may recall the original Thunderbirds TV show from the mid-sixties. Created by Gerry and Sylvia Anderson, the show’s marionette puppets and their International Rescue science-fiction machines would save us weekly from the forces of evil, destruction and chaos. The child who lurks within me utterly loved this show — everything would come to a halt to make way for the event on Saturday mornings. Now I have a chance of reliving it with my kids, and of maintaining some degree of childhood wonder in the process. Thunderbirds are go…

From the Guardian:

5, 4, 3, 2, 1 … Thunderbirds are go – but not quite how older viewers will remember. International Rescue has been given a makeover for the modern age, with the Tracy brothers, Brains, Lady Penelope and Parker smarter, fitter and with better gadgets than they ever had when the “supermarionation” show began on ITV half a century ago.

But fans fearful that its return, complete with Hollywood star Rosamund Pike voicing Lady Penelope, will trample all over their childhood memories can rest easy.

Unlike the 2004 live action film which Thunderbirds creator, the late Gerry Anderson, described as the “biggest load of crap I have ever seen in my life”, the new take on the children’s favourite, called Thunderbirds Are Go, remains remarkably true to the spirit of the 50-year-old original.

Gone are the puppet strings – audience research found that younger viewers wanted something more dynamic – but along with computer generated effects are models and miniature sets (“actually rather huge” said executive producer Estelle Hughes) that faithfully recall the original Thunderbirds.

Speaking after the first screening of the new ITV series on Tuesday, executive producer Giles Ridge said: “We felt we should pay tribute to all those elements that made it special but at the same time update it so it’s suitable and compelling for a modern audience.

“The basic DNA of the show – five young brothers on a secret hideaway island with the most fantastic craft you could imagine, helping people around the world who are in trouble, that’s not a bad place to start.”

The theme music is intact, albeit given a 21st century makeover, as is the Tracy Island setting – complete with the avenue of palm trees that makes way for Thunderbird 2 and the swimming pool that slides into the mountain for the launch of Thunderbird 1.

Lady Penelope – as voiced by Pike – still has a cut-glass accent and is entirely unflappable. When she is not saving the world she is visiting Buckingham Palace or attending receptions at 10 Downing Street. There is also a nod – blink and you miss it – to another Anderson puppet series, Stingray.

Graham, who voiced Parker in the original series, returns in the same role. “I think they were checking me out to see if I was still in one piece,” said Graham, now 89, of the meeting when he was first approached to appear in the new series.

“I was absolutely thrilled to repeat the voice and character of Parker. Although I am older my voice hasn’t changed too much over the years.”

He said the voice of Parker had come from a wine waiter who used to work in the royal household, whom Anderson had taken him to see in a pub in Cookham, Berkshire.

“He came over and said, ‘Would you like to see the wine list, sir?’ And Parker was born. Thank you, old mate.”

Brains, as voiced by Fonejacker star Kayvan Novak, now has an Indian accent.

Sylvia Anderson, Anderson’s widow, who co-created the show, will make a guest appearance as Lady Penelope’s “crazy aunt”.

Read the entire story here.

Image courtesy of Google Search.


Your Current Dystopian Nightmare: In Just One Click

Amazon was supposed to give you back precious time by making shopping and spending painless and simple. Apps on your smartphone were supposed to do the same for all manner of re-tooled on-demand services. What wonderful time-saving inventions! So, now you can live in the moment and make use of all this extra free time. It’s your time now. You’ve won it back and no one can take it away.

And, what do you spend this newly earned free time doing? Well, you sit at home in your isolated cocoon, you shop for more things online, you download some more great apps that promise to bring even greater convenience, you interact less with real humans, and, best of all, you spend more time working. Welcome to your new dystopian nightmare, and it’s happening right now. Click.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy—with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding—there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients—they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time—literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire story here.

The Me-Useum


The smartphone and its partner in crime, the online social network, begat the ubiquitous selfie. The selfie begat the selfie stick. And, now we have the selfie museum. This is not an April Fool’s prank. Quite the contrary.

The Art in Island museum in Manila is making the selfie part of the visitor experience. Despite the obvious crassness, it may usher in a way for this and other museums to engage with their visitors more personally, and for visitors to connect with art more intimately. Let’s face it: if you ever tried to pull a selfie-like stunt, or even take a photo, in the galleries of the Louvre or the Prado, you would be escorted rather promptly to the nearest padded cell.

From the Guardian:

Selfiemania in art galleries has reached new heights of surreal comedy at a museum in Manila. Art in Island is a museum specifically designed for taking selfies, with “paintings” you can touch, or even step inside, and unlimited, unhindered photo opportunities. It is full of 3D reproductions of famous paintings that are designed to offer the wackiest possible selfie poses.

Meanwhile, traditional museums are adopting diverse approaches to the mania for narcissistic photography. I have recently visited museums with wildly contrasting policies on picture taking. At the Prado in Madrid, all photography is banned. Anything goes? No, nothing goes. Guards leap on anyone wielding a camera.

At the Musée d’Orsay in Paris photography is a free-for-all. Even selfie sticks are allowed. I watched a woman elaborately pose in front of Manet’s Le Déjeuner sur l’herbe so she could photograph herself with her daft selfie stick. This ostentatious technology turns holiday snaps into a kind of performance art. That is what the Manila museum indulges.

My instincts are to ban selfie sticks, selfies, cameras and phones from museums. But my instincts are almost certainly wrong.

Surely the bizarre selfie museum in Manila is a warning to museums, such as New York’s MoMA, that seek to ban, at the very least, selfie sticks – let alone photography itself. If you frustrate selfie enthusiasts, they may just create their own simulated galleries with phoney art that’s “fun” – or stop going to art galleries entirely.

It is better for photo fans to be inside real art museums, looking – however briefly – at actual art than to create elitist barriers between museums and the children of the digital age.

The lure of the selfie stick, which has caused such a flurry of anxiety at museums, is exaggerated. It really is a specialist device for the hardcore selfie lover. At the Musée d’Orsay there are no prohibitions, but only that one visitor, in front of the Manet, out of all the thousands was actually using a selfie stick.

And there’s another reason to go easy on selfies in museums, however irritating such low-attention-span, superficial behaviour in front of masterpieces may be.

Read the entire story here.

Image: Jean-François Millet’s gleaners break out of his canvas. The original, The Gleaners (Des glaneuses), was completed in 1857. Courtesy of Art in Island Museum, Manila, Philippines.

Electric Sheep?

[tube]NoAzpa1x7jU[/tube]

I couldn’t agree more with Michael Newton’s analysis — Blade Runner remains a dystopian masterpiece, thirty-three years on. Long may it reign and rain.

And, here’s another toast to the brilliant mind of Philip K Dick. The author’s work Do Androids Dream of Electric Sheep?, published in 1968, led to this noir science-fiction classic.

From the Guardian:

It’s entirely apt that a film dedicated to replication should exist in multiple versions; there is not one Blade Runner, but seven. Though opinions on which is best vary and every edition has its partisans, the definitive rendering of Ridley Scott’s 1982 dystopian film is most likely The Final Cut (2007), about to play out once more in cinemas across the UK. Aptly, too, repetition is written into the movie’s plot (there are spoilers coming), which sees Deckard (played by Harrison Ford) as an official bounty hunter (or “Blade Runner”) consigned to hunt down, one after the other, four Nexus-6 replicants (genetically-designed artificial human beings, intended as slaves for Earth’s off-world colonies). One by one, our equivocal hero seeks out the runaways: worldly-wise Zhora (Joanna Cassidy); stolid Leon (Brion James); the “pleasure-model” Pris (Daryl Hannah); and the group’s apparent leader, the ultimate Nietzschean blond beast, Roy Batty (the wonderful Rutger Hauer). Along the way, Deckard meets and falls in love with another replicant, Rachael (Sean Young), as beautiful and cold as a porcelain doll.

In Blade Runner, as in all science-fiction, the “future” is a style. Here that style is part film noir and part Gary Numan. The 40s influence is everywhere: in Rachael’s Joan-Crawford shoulder pads, the striped shadows cast by Venetian blinds, the atmosphere of defeat. It’s not just noir, Ridley Scott also taps into 70s cop shows and movies that themselves tapped into nostalgic style, with their yearning jazz and their sad apartments; Deckard even visits a strip joint as all TV detectives must. The movie remains one of the most visually stunning in cinema history. It plots a planet of perpetual night, a landscape of shadows, rain and reflected neon (shone on windows or the eye) in a world not built to a human scale; there, the skyscrapers dwarf us like the pyramids. High above the Philip Marlowe world, hover cars swoop and dirigible billboards float by. More dated now than its hard-boiled lustre is the movie’s equal and opposite involvement in modish early 80s dreams; the soundtrack by Vangelis was up-to-the-minute, while the replicants dress like extras in a Billy Idol video, a post-punk, synth-pop costume party. However, it is noir romanticism that wins out, gifting the film with its forlorn Californian loneliness.

It is a starkly empty film, preoccupied as it is with the thought that people themselves might be hollow. The plot depends on the notion that the replicants must be allowed to live no longer than four years, because as time passes they begin to develop raw emotions. Why emotion should be a capital offence is never sufficiently explained; but it is of a piece with the film’s investigation of a flight from feeling – what psychologist Ian D Suttie once named the “taboo on tenderness”. Intimacy here is frightful (everyone appears to live alone), especially that closeness that suggests that the replicants might be indistinguishable from us.


This anxiety may originally have had tacit political resonances. In the novel that the film is based on, Philip K Dick’s thoughtful Do Androids Dream of Electric Sheep? (1968), the dilemma of the foot soldier plays out, commanded to kill an adversary considered less human than ourselves, yet troubled by the possibility that the enemy are in fact no different. Shades of Vietnam darken the story, as well as memories of America’s slave-owning past. We are told that the replicants can do everything a human being can do, except feel empathy. Yet how much empathy do we feel for faraway victims or inconvenient others?

Ford’s Deckard may or may not be as gripped by uncertainty about his job as Dick’s original blade runner. In any case, his brusque “lack of affect” provides one of the long-standing puzzles of the film: is he, too, a replicant? Certainly Ford’s perpetual grumpiness (it sometimes seems his default acting position), his curdled cynicism, put up barriers to feeling that suggest it is as disturbing for him as it is for the hunted Leon or Roy. Though some still doubt, it seems clear that Deckard is indeed a replicant, his imaginings and memories downloaded from some database, his life as transitory as that of his victims. However, as we watch Blade Runner, Deckard doesn’t feel like a replicant; he is dour and unengaged, but lacks his victims’ detached innocence, their staccato puzzlement at their own untrained feelings. The antithesis of the scowling Ford, Hauer’s Roy is a sinister smiler, or someone whose face falls at the brush of an unassimilable emotion.

Read the entire article here.

Video: Blade Runner clip.

April Can Mean Only One Thing


The advent of April in the United States usually brings the impending tax day to mind. In the UK, when April rolls in, the media goes overboard with April Fool’s jokes. Here’s a smattering of the silliest from Britain’s most serious media outlets.


From the Telegraph: transparent Marmite, Yessus Juice, prison release voting app, Burger King cologne (for men).

From the Guardian: Jeremy Clarkson and fossil fuel divestment.


From the Independent: a round-up of the best gags, including a proposed Edinburgh suspension bridge featuring a gap, Simon Cowell’s effigy on the new £5 note, and grocery store aisle trampolines for the short of stature.

Image: Hailo’s new piggyback rideshare service.