Category Archives: Idea Soup

Regression Texting


Some culture watchers believe we are entering a linguistic death spiral. Our increasingly tech-driven communication is enabling our language to evolve in unforeseen ways, and some linguists believe the evolution is actually taking us backwards rather than forwards. Enter exhibit one into the record: the :evil: emoji.

From the Guardian:

So it’s official. We are evolving backwards. Emoji, the visual system of communication that is incredibly popular online, is Britain’s fastest-growing language according to Professor Vyv Evans, a linguist at Bangor University.

The comparison he uses is telling – but not in the way the prof, who appears enthusiastic about emojis, presumably intends. “As a visual language emoji has already far eclipsed hieroglyphics, its ancient Egyptian precursor which took centuries to develop,” says Evans.

Perhaps that is because it is easier to go downhill than uphill. After millennia of painful improvement, from illiteracy to Shakespeare and beyond, humanity is rushing to throw it all away. We’re heading back to ancient Egyptian times, next stop the stone age, with a big yellow smiley grin on our faces.

Unicode, the company that created emojis, has announced it will release 36 more of the brainless little icons next year. Demand is massive: 72% of 18- to 25-year-olds find it easier to express their feelings in emoji pictures than through the written word, according to a survey for Talk Talk mobile.

As tends to happen in an age when technology is transforming culture on a daily basis, people relate such news with bland irony or apparent joy. Who wants to be the crusty old conservative who questions progress? But the simplest and most common-sense historical and anthropological evidence tells us that Emoji is not “progress” by any definition. It is plainly a step back.

Evans compares Emoji with ancient Egyptian hieroglyphics. Well indeed. Ancient Egypt was a remarkable civilisation, but it had some drawbacks. The Egyptians created a magnificent but static culture. They invented a superb artistic style and powerful mythology – then stuck with these for millennia. Hieroglyphs enabled them to write spells but not to develop a more flexible, questioning literary culture: they left that to the Greeks.

These jumped-up Aegean loudmouths, using an abstract non-pictorial alphabet they got from the Phoenicians, obviously and spectacularly outdid the Egyptians in their range of expression. The Greek alphabet was much more productive than all those lovely Egyptian pictures. That is why there is no ancient Egyptian Iliad or Odyssey.

In other words, there are harsh limits on what you can say with pictures. The written word is infinitely more adaptable. That’s why Greece rather than Egypt leapt forward and why Shakespeare was more articulate than the Aztecs.

Read the entire article here.

Image: A subset of new emojis proposed for adoption in 2016. The third emoji along the top row is, of course, “selfie”. Courtesy of Unicode.


Myth-Busting Silicon(e) Valley


Question: what do silicone implants and Silicon Valley have in common? Answer: they are both instruments of a grandiose illusion. The first, on a mostly personal level, promises eternal youth and vigor; the second, on a much grander scale, promises eternal wealth and greatness for humanity.

So, let’s leave aside the human cosmetic question for another time and concentrate on the broad deception that is current Silicon Valley. It’s a deception at many different levels — self-deception of Silicon Valley’s young geeks and code jockeys, and the wider delusion that promises us all a glittering future underwritten by rapturous tech.

And what better way to debunk the myths that envelop the Valley like San Francisco’s fog than to turn to Sam Biddle, former editor of Valleywag. He offers a scathing critique, which happens to be spot on. Quite rightly, he asks whether we need yet another urban, on-demand laundry app, and what on earth is the value to society of “Yo”? But more importantly, he asks us to reconsider our misplaced awe and to knock Silicon Valley from its perch of self-fulfilling self-satisfaction. Yo and Facebook and Uber and Clinkle and Ringly and DogVacay and WhatsApp and the thousands of other trivial start-ups — despite astronomical valuations — will not be humanity’s saviors. We need better ideas and deeper answers.

From GQ:

I think my life is better because of my iPhone. Yours probably is, too. I’m grateful to live in a time when I can see my baby cousins or experience any album ever without getting out of bed. I’m grateful that I will literally never be lost again, so long as my phone has battery. And I’m grateful that there are so many people so much smarter than I am who devise things like this, which are magical for the first week they show up, then a given in my life a week later.

We live in an era of technical ability that would have nauseated our ancestors with wonder, and so much of it comes from one very small place in California. But all these unimpeachable humanoid upgrades—the smartphones, the Google-gifted knowledge—are increasingly the exception, rather than the rule, of Silicon Valley’s output. What was once a land of upstarts and rebels is now being led by the money-hungry and the unspirited. Which is why we have a start-up that mails your dog curated treats and an app that says “Yo.” The brightest minds in tech just lately seem more concerned with silly business ideas and innocuous “disruption,” all for the shot at an immense payday. And when our country’s smartest people are working on the dumbest things, we all lose out.

That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily. And one of the biggest contributing factors is that the Valley has lost touch with reality by subscribing to its own self-congratulatory mythmaking. That these beliefs are mostly baseless, or at least egotistically distorted, is a problem—not just for Silicon Valley but for the rest of us. Which is why we’re here to help the Valley tear down its own myths—these seven in particular.

Myth #1: Silicon Valley Is the Universe’s Only True Meritocracy

 Everyone in Silicon Valley has convinced himself he’s helped create a free-market paradise, the software successor to Jefferson’s brotherhood of noble yeomen. “Silicon Valley has this way of finding greatness and supporting it,” said a member of Greylock Partners, a major venture-capital firm with over $2 billion under management. “It values meritocracy more than anyplace else.” After complaints of the start-up economy’s profound whiteness reached mainstream discussion just last year, companies like Apple, Facebook, and Twitter reluctantly released internal diversity reports. The results were as homogenized as expected: At Twitter, 79 percent of the leadership is male and 72 percent of it is white. At Facebook, senior positions are 77 percent male and 74 percent white. Twitter—a company whose early success can be directly attributed to the pioneering downloads of black smartphone users—hosts an entirely white board of directors. It’s a pounding indictment of Silicon Valley’s corporate psyche that Mark Zuckerberg—a bourgeois white kid from suburban New York who attended Harvard—is considered the Horatio Alger 2.0 paragon. When Paul Graham, the then head of the massive start-up incubator Y Combinator, told The New York Times that he could “be tricked by anyone who looks like Mark Zuckerberg,” he wasn’t just talking about Zuck’s youth.

If there’s any reassuring news, it’s not that tech’s diversity crisis is getting better, but that in the face of so much dismal news, people are becoming angry enough and brave enough to admit that the state of things is not good. Silicon Valley loves data, after all, and with data readily demonstrating tech’s overwhelming white-guy problem, even the true believers in meritocracy see the circumstances as they actually are.

Earlier this year, Ellen Pao became the most mentioned name in Silicon Valley as her gender-discrimination suit against her former employer, Kleiner Perkins Caufield & Byers, played out in court. Although the jury sided with the legendary VC firm, the Pao case was a watershed moment, bringing sunlight and national scrutiny to the issue of unchecked Valley sexism. For every defeated Ellen Pao, we can hope there are a hundred other female technology workers who feel new courage to speak up against wrongdoing, and a thousand male co-workers and employers who’ll reconsider their boys’-club bullshit. But they’ve got their work cut out for them.

Myth #4: School Is for Suckers, Just Drop Out

 Every year PayPal co-founder, investor-guru, and rabid libertarian Peter Thiel awards a small group of college-age students the Thiel Fellowship, a paid offer to either drop out or forgo college entirely. In exchange, the students receive money, mentorship, and networking opportunities from Thiel as they pursue a start-up of their choice. We’re frequently reminded of the tech titans of industry who never got a degree—Steve Jobs, Bill Gates, and Mark Zuckerberg are the most cited, though the fact that they’re statistical exceptions is an aside at best. To be young in Silicon Valley is great; to be a young dropout is golden.

The virtuous dropout hasn’t just made college seem optional for many aspiring barons—formal education is now excoriated in Silicon Valley as an obsolete system dreamed up by people who’d never heard of photo filters or Snapchat. Mix this cynicism with the libertarian streak many tech entrepreneurs carry already and you’ve got yourself a legit anti-education movement.

And for what? There’s no evidence that avoiding a conventional education today grants business success tomorrow. The gifted few who end up dropping out and changing tech history would have probably changed tech history anyway—you can’t learn start-up greatness by refusing to learn in a college classroom. And given that most start-ups fail, do we want an appreciable segment of bright young people gambling so heavily on being the next Zuck? More important, do we want an economy of CEOs who never had to learn to get along with their dorm-mates? Who never had the opportunity to grow up and figure out how to be a human being functioning in society? Who went straight from a bedroom in their parents’ house to an incubator that paid for their meals? It’s no wonder tech has an antisocial rep.

Myth #7: Silicon Valley Is Saving the World

Two years ago an online list of “57 start-up lessons” made its way through the coder community, bolstered by a co-sign from Paul Graham. “Wow, is this list good,” he commented. “It has the kind of resonance you only get when you’re writing from a lot of hard experience.” Among the platitudinous menagerie was this gem: “If it doesn’t augment the human condition for a huge number of people in a meaningful way, it’s not worth doing.” In a mission statement published on Andreessen Horowitz’s website, Marc Andreessen claimed he was “looking for the companies who are going to be the big winners because they are going to cause a fundamental change in the world.” The firm’s portfolio includes Ringly (maker of rings that light up when your phone does something), Teespring (custom T-shirts), DogVacay (pet-sitters on demand), and Hem (the zombified corpse of the furniture store Fab.com). Last year, wealthy Facebook alum Justin Rosenstein told a packed audience at TechCrunch Disrupt, “We in this room, we in technology, have a greater capacity to change the world than the kings and presidents of even a hundred years ago.” No one laughed, even though Rosenstein’s company, Asana, sells instant-messaging software.

 This isn’t just a matter of preening guys in fleece vests building giant companies predicated on their own personal annoyances. It’s wasteful and genuinely harmful to have so many people working on such trivial projects (Clinkle and fucking Yo) under the auspices of world-historical greatness. At one point recently, there were four separate on-demand laundry services operating in San Francisco, each no doubt staffed by smart young people who thought they were carving out a place of small software greatness. And yet for every laundry app, there are smart people doing smart, valuable things: Among the most recent batch of Y Combinator start-ups featured during March’s “Demo Day” were Diassess (twenty-minute HIV tests), Standard Cyborg (3D-printed limbs), and Atomwise (using supercomputing to develop new medical compounds). Those start-ups just happen to be sharing desk space at the incubator with “world changers” like Lumi (easy logo printing) and Underground Cellar (“curated, limited-edition wines with a twist”).

Read the entire article here.

Map: Silicon Valley, CA. Courtesy of Google.


Innovating the Disruption Or Disrupting the Innovation

Corporate America has a wonderful knack for embracing a meaningful idea and then overusing it to such an extent that it becomes thoroughly worthless. Until recently, every advertiser, every manufacturer, every service, shamelessly promoted itself as an innovator. Everything a company did was driven by innovation: employees succeeded by innovating; the CEO was innovation incarnate; products were innovative; new processes drove innovation — in fact, the processes themselves were innovative. Any business worth its salt produced completely innovative stuff from cupcakes to tires, from hair color to drill bits, from paper towels to hoses. And consequently this overwhelming ocean of innovation — which upon closer inspection actually isn’t real innovation — becomes worthless, underwhelming drivel.

So, what next for corporate America? Well, latch on to the next meme of course — disruption. Yawn.

From NPR/TED:

HBO’s Silicon Valley is back, with its pitch-perfect renderings of the culture and language of the tech world — like at the opening of the “Disrupt” startup competition run by the Tech Crunch website at the end of last season. “We’re making the world a better place through scalable fault-tolerant distributed databases” — the show’s writers didn’t have to exercise their imagination much to come up with those little arias of geeky self-puffery, or with the name Disrupt, which, as it happens, is what the Tech Crunch conferences are actually called. As is most everything else these days. “Disrupt” and “disruptive” are ubiquitous in the names of conferences, websites, business school degree programs and business book best-sellers. The words pop up in more than 500 TED Talks: “How to Avoid Disruption in Business and in Life,” “Embracing Disruption,” “Disrupting Higher Education,” “Disrupt Yourself.” It transcends being a mere buzzword. As the philosopher Jeremy Bentham said two centuries ago, there is a point where jargon becomes a species of the sublime.

 To give “disruptive” its due, it actually started its life with some meat on its bones. It was popularized in a 1997 book by Clayton Christensen of the Harvard Business School. According to Christensen, the reason why established companies fail isn’t that they don’t keep up with new technologies, but that their business models are disrupted by scrappy, bottom-fishing startups that turn out stripped-down versions of existing products at prices the established companies can’t afford to match. That’s what created an entry point for “disruptive innovations” like the Model T Ford, Craigslist classifieds, Skype and no-frills airlines.

Christensen makes a nice point. Sometimes you can get the world to beat a path to your door by building a crappier mousetrap, too, if you price it right. Some scholars have raised questions about that theory, but it isn’t the details of the story that have put “disruptive” on everybody’s lips; it’s the word itself. Buzzwords feed off their emotional resonances, not their ideas. And for pure resonance, “disruptive” is hard to beat. It’s a word with deep roots. I suspect I first encountered it when my parents read me the note that the teacher pinned to my sweater when I was sent home from kindergarten. Or maybe it reminds you of the unruly kid who was always pushing over the juice table. One way or another, the word evokes obstreperous rowdies, the impatient people who are always breaking stuff. It says something that “disrupt” is from the Latin for “shatter.”

Disrupt or be disrupted. The consultants and business book writers have proclaimed that as the chronic condition of the age, and everybody is clamoring to be classed among the disruptors rather than the disruptees. The lists of disruptive companies in the business media include not just Amazon and Uber but also Procter and Gamble and General Motors. What company nowadays wouldn’t claim to be making waves? It’s the same with that phrase “disruptive technologies.” That might be robotics or next-generation genomics, sure. But CNBC also touts the disruptive potential of an iPhone case that converts to a video game joystick.

These days, people just use “disruptive” to mean shaking things up, though unlike my kindergarten teacher, they always infuse a note of approval. As those Tech Crunch competitors assured us, disruption makes the world a better place. Taco Bell has created a position called “Resident Disruptor,” and not to be outdone, McDonald’s is running radio ads describing its milkshake blenders as a disruptive technology. Well, OK, blenders really do shake things up. But by the time a tech buzzword has been embraced by the fast food chains, it’s getting a little frayed at the edges. “Disruption” was never really a new idea in the first place, just a new name for a fact of life as old as capitalism. Seventy years ago the economist Joseph Schumpeter was calling it the “gales of creative destruction,” and he just took the idea from Karl Marx.

Read the entire story here.


The Free Market? Yeah Right

The US purports to be the home of the free market. But we all know it’s not. Rather, it is home to vested and entrenched interests who will fight tooth-and-nail to maintain the status quo beneath the veil of self-written regulations and laws. This is called protectionism — manufacturers, media companies, airlines and suppliers all do it. Texas car dealers and their corporate lobbyists are masters at this game.

From ars technica:

In a turn of events that isn’t terribly surprising, a bill to allow Tesla Motors to sell cars directly to consumers in Texas has failed to make it to the floor, with various state representatives offering excuses about not wanting to “piss off all the auto dealers.”

The Lone Star State’s notoriously anti-Tesla stance—one of the strongest in the nation—is in many ways the direct legacy of powerful lawmaker-turned-lobbyist Gene Fondren, who spent much of his life ensuring that the Texas Automobile Dealers Association’s wishes were railroaded through the Texas legislature.

That legacy is alive and well, with Texas lawmakers refusing to pass bills in 2013 and again in 2015 to allow Tesla to sell to consumers. Per the state’s franchise laws, auto manufacturers like Tesla are only allowed to sell cars to independent third-party dealers. These laws were originally intended to protect consumers against the possibility of automakers colluding on pricing; today, though, they function as protectionist shields for the entrenched political interests of car dealers and their powerful state- and nationwide lobbyist organizations.

The anti-Tesla sentiment didn’t stop Texas from attempting to snag the contracts for Tesla Motors’ upcoming “Gigafactory,” the multibillion dollar battery factory that Tesla Motors CEO Elon Musk eventually chose to build in Reno, Nevada.

Speaking of Elon Musk—in a stunning display of total ignorance, Texas state representative Senfronia Thompson (a Democrat representing House District 141) had this to say about the bill’s failure: “I can appreciate Tesla wanting to sell cars, but I think it would have been wiser if Mr. Tesla had sat down with the car dealers first.”

 Apparently being even minimally familiar with the matters one legislates isn’t a requirement to serve in the Texas legislature. However, Thompson did receive many thousands in campaign contributions from the Texas Automobile Dealers Association, so perhaps she’s just doing what she’s told.

Read the entire story here.


Texas Needs More Guns, Not Less

Nine dead. Waco, Texas. May 17, 2015. Gunfight at Twin Peaks restaurant.

What this should tell us, particularly gun control advocates, is that Texans need more guns. After all, the US typically loosens gun restrictions after major gun-related massacres — the only “civilized” country to do so.

Lawmakers recently passed two open carry gun bills in the Texas Senate. Once the bills are reconciled, the paranoid governor, Greg Abbott, will surely sign them. But even though this means citizens of the Lone Star State will then be able to openly run around in public, go shopping or visit the local movie theater while packing a firearm, they still can’t walk around with an alcoholic beverage. Incidentally, in 2013, 1,075 people under the age of 19 were killed by guns in the US. That’s more children dying from gunfire than annual military casualties in Iraq and Afghanistan.

But, let’s leave the irony of this situation aside and focus solely on some good old-fashioned sarcasm. Surely, it’s time to mandate that all adults in Texas be required to carry a weapon. Then there would be fewer gunfights, right? And, while the Texas Senate is busy with the open carry law, perhaps State Senators should mandate that all restaurants install double swinging doors, just like those seen in the saloons of classic TV Westerns.

From the Guardian:

Nine people were killed on Sunday and some others injured after a shootout erupted among rival biker gangs at a Central Texas restaurant, sending patrons and bystanders fleeing for safety, a police spokesman said.

The violence erupted shortly after noon at a busy Waco marketplace along Interstate 35 that draws a large lunchtime crowd. Waco police Sergeant W Patrick Swanton said eight people died at the scene of the shooting at a Twin Peaks restaurant and another person died at a hospital.

It was not immediately clear if bystanders were among the dead, although a local TV station, KCEN-TV, reported that all of the fatalities were bikers and police confirmed that no officers had been injured or killed.

Another local station, KXXV, reported that police had recovered firearms, knives, bats and chains from the scene. Restaurant employees locked themselves in freezers after hearing the shots, the station said.

How many injuries had occurred and the severity of those injuries was not known.

“There are still bodies on the scene of the parking lot at Twin Peaks,” Swanton said. “There are bodies that are scattered throughout the parking lot of the next adjoining business.”

A photograph from the scene showed dozens of motorcycles parked in a lot. Among the bikes, at least three people wearing what looked like biker jackets were on the ground, two on their backs and one face down. Police were standing a few feet away in a group. Several other people also wearing biker jackets were standing or sitting nearby.

Swanton said police were aware in advance that at least three rival gangs would be gathering at the restaurant and at least 12 Waco officers in addition to state troopers were at the restaurant when the fight began.

When the shooting began in the restaurant and then continued outside, armed bikers were shot by officers, Swanton said, explaining that the actions of law enforcement prevented further deaths.

Read the entire article here.

Video: Great Western Movie Themes.


MondayMap: The State of Death


It’s a Monday, so why not dwell on an appropriately morbid topic — death. Or, to be more precise, a really cool map that shows the most distinctive causes of death for each state. We know that across the United States in general the most common causes of death are heart disease and cancer. However, looking a little deeper shows other, secondary causes that vary by state. So, leaving aside the top two, you will see that a resident of Tennessee is more likely to die from “accidental discharge of firearms”, while someone from Alabama will succumb to syphilis. Interestingly, Texans are more likely to depart this mortal coil from tuberculosis; Georgians from “abnormal clinical problems not elsewhere classified”. And Alaskans — no surprise here — lead the way in deaths from airplane, boating and “unspecified transport accidents”.

Read more here.

Map: Distinctive cause of death by state. Courtesy of Francis Boscoe, New York State Cancer Registry.


Finally A Reason For Twitter

Florida Man (@_FloridaMan) finally brings it all into sharp and hysterical focus. Now, I may have a worthy reason for joining the Twitterscape and actually following someone.

From the NYT:

Dangling into the sea like America’s last-ditch lifeline, the state of Florida beckons. Hustlers and fugitives, million-dollar hucksters and harebrained thieves, Armani-wearing drug traffickers and hapless dope dealers all congregate, scheme and revel in the Sunshine State. It’s easy to get in, get out or get lost.

For decades, this cast of characters provided a diffuse, luckless counternarrative to the salt-and-sun-kissed Florida that tourists spy from their beach towels. But recently there arrived a digital-era prototype, @_FloridaMan, a composite of Florida’s nuttiness unspooled, tweet by tweet, to the world at large. With pithy headlines and links to real news stories, @_FloridaMan offers up the “real-life stories of the world’s worst super hero,” as his Twitter bio proclaims.

His more than 1,600 tweets — equal parts ode and derision — are a favorite for weird-news aficionados. Yet, two years since his 2013 debut, the man behind the Twitter feed remains beguilingly anonymous, a Wizard of LOLZ. (The one false note is his zombielike avatar: The mug shot belongs to an Indiana Man.)

His style is deceptively simple. Nearly every Twitter message begins “Florida Man.” What follows, though, is almost always a pile of trouble. Some examples:

Florida Man Tries to Walk Out of Store With Chainsaw Stuffed Down His Pants.

Florida Man Falls Asleep During Sailboat Burglary With Gift Bag on His Head; Can’t Be Woken by Police.

Florida Man Arrested For Directing Traffic While Also Urinating.

Florida Man Impersonates Police Officer, Accidentally Pulls Over Real Police Officer.

Florida Man Says He Only Survived Ax Attack By Drunk Stripper Because “Her Coordination Was Terrible.”

“Now I think there are people who actually aspire to Florida Man-ness,” said Dave Barry, who celebrates Florida’s brand of madness in his popular columns and best-selling books. “It’s like the big leagues. It’s the Broadway for idiots.”

The number of @_FloridaMan’s followers is 270,000. Homages have proliferated: fan art, copycat Twitter feeds (California Man, Texas Man) and, most recently, a craft beer with Florida Man’s avatar.

Florida Man is considerably more popular (and funny) than competitors like Texas Man (732 followers) or California Man (129). But is the Florida Man who Accidentally Shoots Himself With Stun Gun While Trying to Rob the Radio Shack He Also Works At truly more wacky than, let’s say, an Arkansas Man or New Jersey Man?

Read the entire story here.


Your Goldfish Is Better Than You


Well, perhaps not at philosophical musings or mathematics. But, your little orange aquatic friend now has an attention span that is longer than yours. And, it’s all thanks to mobile devices and multi-tasking on multiple media platforms. [Psst, by the way, multi-tasking at the level of media consumption is a fallacy]. On average, the adult attention span is now down to a laughably paltry 8 seconds, whereas the lowly goldfish comes in at 9 seconds. Where, of course, that leaves your inbetweeners and teenagers is anyone’s guess.

From the Independent:

Humans have become so obsessed with portable devices and overwhelmed by content that we now have attention spans shorter than that of the previously jokingly juxtaposed goldfish.

Microsoft surveyed 2,000 people and used electroencephalograms (EEGs) to monitor the brain activity of another 112 in the study, which sought to determine the impact that pocket-sized devices and the increased availability of digital media and information have had on our daily lives.

Among the good news in the 54-page report is that our ability to multi-task has drastically improved in the information age, but unfortunately attention spans have fallen.

In 2000 the average attention span was 12 seconds, but this has now fallen to just eight. The goldfish is believed to be able to maintain a solid nine.

“Canadians [who were tested] with more digital lifestyles (those who consume more media, are multi-screeners, social media enthusiasts, or earlier adopters of technology) struggle to focus in environments where prolonged attention is needed,” the study reads.

“While digital lifestyles decrease sustained attention overall, it’s only true in the long-term. Early adopters and heavy social media users front load their attention and have more intermittent bursts of high attention. They’re better at identifying what they want/don’t want to engage with and need less to process and commit things to memory.”

Anecdotally, many of us can relate to the increasing inability to focus on tasks, being distracted by checking our phones or scrolling down a news feed.

Another recent study by the National Centre for Biotechnology Information and the National Library of Medicine in the US found that 79 per cent of respondents used portable devices while watching TV (known as dual-screening) and 52 per cent check their phone every 30 minutes.

Read the entire story here.

Image: Common Goldfish. Public Domain.


Atheist Ranks Growing, But Still Hated


While I’ve lived in the United States for quite some time now, it continues to perplex. It may still be a land of opportunity, but it remains a head-scratching paradox. Take religion. On the one hand, a recent survey by the Pew Research Center found that 22.8 percent of the adult population has no religious affiliation. That is, almost one quarter is atheist, agnostic or has no identification with any organized religion. This is up from 16 percent a mere seven years earlier. Yet, on the other hand, atheists and non-believers make up one of the most hated groups in the country — second only to Muslims. And, I don’t know where Satanists figure in this analysis.

Pew also dices the data by political affiliation and, to no one’s surprise, finds that Republicans generally hate atheists more than do those on the left of the political spectrum. For Pew’s next research effort, I would suggest they examine which religious affiliations hate atheists the most.

From the Guardian:

The dominant Christian share of the American population is falling sharply while the number of US adults who do not believe in God or prefer not to identify with any organized religion is growing significantly, according to a new report.

The trend is affecting Americans across the country and across all demographics and age groups – but is especially pronounced among young people, the survey by the Pew Research Center found.

In the last seven years, the proportion of US adults declaring themselves Christian fell from 78.4% to 70.6%, with the mainstream protestant, Catholic and evangelical protestant faiths all affected.

Over the same period, those in the category that Pew labeled religiously “unaffiliated” – those describing themselves as atheist, agnostic or “nothing in particular” – jumped from 16.1% of the population to between a fifth and a quarter, at 22.8%, the report, released on Tuesday, found.

“The US remains home to more Christians than any other country in the world, and a large majority of Americans continue to identify with some branch of the Christian faith, but the percentage of adults who describe themselves as Christians has dropped by almost eight points since 2007,” the survey found.

The change in non-Christian religious faiths, including Jews, Muslims, Buddhists, Hindus and “other world religions and faiths” crept up modestly from 4.7% to 5.9% of US adults.

“The younger generation seem much less involved in organized religion and the older generation is passing on, which is a very important factor,” John Green, a professor of political science at the University of Akron in Ohio and an adviser on the survey, told the Guardian.

Tuesday’s report is called the Religious Landscape Study and is the second of its kind prepared by the Pew Research Center.

Pew first conducted such a survey in 2007 and repeated it in 2014 then made comparisons.

The US census does not ask Americans to specify their religion, and there are no official government statistics on the religious composition of the US population, the report pointed out, adding that researchers gathered their material by conducting the survey in Spanish and English across a nationally representative sample of 35,000 US adults.

Green said there were a number of different theories behind more young people eschewing organized religion.

“The involvement of religious groups in politics, particularly regarding issues such as same sex marriage and abortion, is alienating younger adults, who tend to have more liberal and progressive views than older people,” he said.

The rise of the internet and social media has also drawn younger adults towards online, general social groups and away from face-to-face organizations and traditional habits, such as churchgoing, he said.

And there is a theory that the fact that more young people in this generation are going to college is linked to their falling interest in organized religion, he said.

Read the entire story here.

Infographic courtesy of the Pew Research Center.

Send to Kindle

Self-Absorbed? Rejoice!

aricsnee-selfie-arm

From a culture that celebrates all things selfie comes the next logical extension. An invention that will surely delight any image-conscious narcissist.

The “selfie arm” is a wonderful tongue-firmly-in-cheek invention of artists Aric Snee and Justin Crowe. Their aim is to comment on the illusion of sociableness and connectedness. Thankfully they plan to construct only 10 of these contraptions. But, you know, somewhere and soon, a dubious entrepreneur will be hawking these for $19.95.

One can only hope that the children of Gen-Selfie will eventually rebel against their self-absorbed parents — until then I’m crawling back under my rock.

From Wired UK:

A selfie stick designed to look like a human arm will ensure you never look alone, but always feel alone. The accessory is designed to make it appear that a lover or friend is holding your hand while taking a photo, removing the crushing sense of narcissistic loneliness otherwise swamping your existence.

The prototype ‘selfie arm’ is the work of artists Justin Crowe and Aric Snee and isn’t intended to be taken seriously. Made of fibreglass, the selfie arm was created in protest against the “growing selfie stick phenomenon, and the constant, gnawing need for narcissistic internet validation,” according to Designboom.

Read the entire article here.

Image: Selfie arm by Aric Snee and Justin Crowe. Courtesy of Aric Snee and Justin Crowe.

Send to Kindle

Hard Work Versus Smart Work

If you work any kind of corporate job it’s highly likely that you’ll hear any of the following on an almost daily basis: “good job, all those extra hours you put in really paid off”, “I always eat lunch at my desk”, “yes… worked late again yesterday”, “… are you staying late too?”, “I know you must have worked so many long hours to get the project done”, “I’m really impressed at the hours you dedicate…”, “what a team, you all went over and above… working late, working weekends, sacrificing vacation…”, and so on.

The workaholic culture – particularly in the United States – reinforces the notion that hard work itself is to be rewarded; many simply confuse long hours with persistence and resilience. On the surface it seems a great win for the employer: get more hours out of your employees, for free. Of course, recent analyses of work-life balance show that pushing employees beyond a certain number of hours is thoroughly counterproductive — beyond the deleterious effects on employees, the quality of the work suffers too. But it turns out that a not insignificant number of wily subordinates may actually be gaming the 80-hour workweek. And don’t forget the other group of hard workers — those who put in endless hours of so-called “busy work” just to look industrious.

What happened to encouraging and incentivizing employees, and bosses, for working smartly rather than merely working hard? Reward long hours and there is no incentive for innovation or change; reward smartness and creativity thrives. The current mindset may take generations to alter — you’ll easily come across the word “hardworking” in the dictionary, but you’ll have no luck finding “smartworking”.

From the NYT:

Imagine an elite professional services firm with a high-performing, workaholic culture. Everyone is expected to turn on a dime to serve a client, travel at a moment’s notice, and be available pretty much every evening and weekend. It can make for a grueling work life, but at the highest levels of accounting, law, investment banking and consulting firms, it is just the way things are.

Except for one dirty little secret: Some of the people ostensibly turning in those 80- or 90-hour workweeks, particularly men, may just be faking it.

Many of them were, at least, at one elite consulting firm studied by Erin Reid, a professor at Boston University’s Questrom School of Business. It’s impossible to know if what she learned at that unidentified consulting firm applies across the world of work more broadly. But her research, published in the academic journal Organization Science, offers a way to understand how the professional world differs between men and women, and some of the ways a hard-charging culture that emphasizes long hours above all can make some companies worse off.

Ms. Reid interviewed more than 100 people in the American offices of a global consulting firm and had access to performance reviews and internal human resources documents. At the firm there was a strong culture around long hours and responding to clients promptly.

“When the client needs me to be somewhere, I just have to be there,” said one of the consultants Ms. Reid interviewed. “And if you can’t be there, it’s probably because you’ve got another client meeting at the same time. You know it’s tough to say I can’t be there because my son had a Cub Scout meeting.”

Some people fully embraced this culture and put in the long hours, and they tended to be top performers. Others openly pushed back against it, insisting upon lighter and more flexible work hours, or less travel; they were punished in their performance reviews.

The third group is most interesting. Some 31 percent of the men and 11 percent of the women whose records Ms. Reid examined managed to achieve the benefits of a more moderate work schedule without explicitly asking for it.

They made an effort to line up clients who were local, reducing the need for travel. When they skipped work to spend time with their children or spouse, they didn’t call attention to it. One team on which several members had small children agreed among themselves to cover for one another so that everyone could have more flexible hours.

A male junior manager described working to have repeat consulting engagements with a company near enough to his home that he could take care of it with day trips. “I try to head out by 5, get home at 5:30, have dinner, play with my daughter,” he said, adding that he generally kept weekend work down to two hours of catching up on email.

Despite the limited hours, he said: “I know what clients are expecting. So I deliver above that.” He received a high performance review and a promotion.

What is fascinating about the firm Ms. Reid studied is that these people, who in her terminology were “passing” as workaholics, received performance reviews that were as strong as their hyper-ambitious colleagues. For people who were good at faking it, there was no real damage done by their lighter workloads.

It calls to mind the episode of “Seinfeld” in which George Costanza leaves his car in the parking lot at Yankee Stadium, where he works, and gets a promotion because his boss sees the car and thinks he is getting to work earlier and staying later than anyone else. (The strategy goes awry for him, and is not recommended for any aspiring partners in a consulting firm.)

Read the entire article here.

Send to Kindle

Real Magic

Literary, social, moral and philanthropic leadership. These are all very admirable qualities. We might strive to embody just one of these in our daily lives. Author J.K. Rowling seems to demonstrate all four. In her new book, Very Good Lives: The Fringe Benefits of Failure and the Importance of Imagination, published in April 2015, she distills advice from her self-effacing but powerful Harvard University commencement speech, delivered in 2008.

A couple of my favorite quotes:

Many prefer not to exercise their imaginations at all. They choose to remain comfortably within the bounds of their own experience, never troubling to wonder how it would feel to have been born other than they are.

Some failure in life is inevitable. It is impossible to live without failing at something, unless you live so cautiously that you might as well not have lived at all.

Video: J.K. Rowling Harvard Commencement Speech, 2008. Courtesy of Harvard University.

Send to Kindle

The Biggest Threats to Democracy

Edward_Snowden

History reminds us of those critical events that pose threats to us on various levels: to our well-being at a narrow level and to the foundations of our democracies at a much broader level. And most of these existential threats seem to come from the outside: wars, terrorism, ethnic cleansing.

But it’s not quite that simple — the biggest threats come not from external sources of evil, but from within us. Perhaps the two most significant are our apathy and our paranoia. Taken together they erode our duty to protect our democracy, and hand over ever-increasing power to those who claim to protect us. Thus, before the Nazi machine enslaved huge portions of Europe, the citizens of Germany allowed it to gain power; before Al-Qaeda and Isis and their terrorist look-alikes gained notoriety, local conditions allowed these groups to flourish. We are all complicit in our inaction — driven by indifference or fear, or both.

Two timely events serve to remind us of the huge costs and consequences of our inaction from apathy and paranoia. One comes from the not too distant past, and the other portends our future. First, it is Victory in Europe (VE) Day, the anniversary of the Allied win in WWII, on May 8, 1945. Many millions perished through the brutal policies of the Nazi ideology and its instrument, the Wehrmacht, and millions more subsequently perished in the fight to restore moral order. Much of Europe first ignored the growing threat of the national socialists. As the threat grew, Europe continued to contemplate appeasement. Only later, as the true scale of atrocities became apparent, did leaders realize that the threat needed to be tackled head-on.

Second, a federal appeals court in the United States ruled on May 7, 2015 that the National Security Agency’s collection of millions of phone records is illegal. This serves to remind us of the threat that our own governments pose to our fundamental freedoms under the promise of continued comfort and security. For those who truly care about the fragility of democracy this is a momentous and rightful ruling. It is all the more remarkable that since the calamitous events of September 11, 2001 few have challenged this governmental overreach into our private lives: our phone calls, our movements, our internet surfing habits, our credit card history. We have seen few public demonstrations and all too little ongoing debate. Indeed, only through the recent revelations by Edward Snowden did the debate even enter the media cycle. And, the debate is only just beginning.

Both of these events show that only we, the people who are fortunate enough to live within a democracy, can choose a path that strengthens our governmental institutions and balances these against our fundamental rights. By corollary we can choose a path that weakens our institutions too. One path requires engagement and action against those who use fear to make us conform. The other path, often easier, requires that we do nothing, accept the status quo, curl up in the comfort of our cocoons and give in to fear.

So this is why the appeals court ruling is so important. While only three in number, the judges have established that our government has been acting illegally, yet supposedly on our behalf. While the judges did not terminate the unlawful program, they pointedly requested that the US Congress debate and then define laws that would be narrower and less at odds with citizens’ constitutional rights. So, the courts have done us all a great favor. One can only hope that this opens the eyes, ears and mouths of the apathetic and fearful so that they continuously demand fair and considered action from their elected representatives. Only then can we begin to make inroads against the real and insidious threats to our democracy — our apathy and our fear. And perhaps, also, Mr. Snowden can take a small helping of solace.

From the Guardian:

The US court of appeals has ruled that the bulk collection of telephone metadata is unlawful, in a landmark decision that clears the way for a full legal challenge against the National Security Agency.

A panel of three federal judges for the second circuit overturned an earlier ruling that the controversial surveillance practice first revealed to the US public by NSA whistleblower Edward Snowden in 2013 could not be subject to judicial review.

But the judges also waded into the charged and ongoing debate over the reauthorization of a key Patriot Act provision currently before US legislators. That provision, which the appeals court ruled the NSA program surpassed, will expire on 1 June amid gridlock in Washington on what to do about it.

The judges opted not to end the domestic bulk collection while Congress decides its fate, calling judicial inaction “a lesser intrusion” on privacy than at the time the case was initially argued.

“In light of the asserted national security interests at stake, we deem it prudent to pause to allow an opportunity for debate in Congress that may (or may not) profoundly alter the legal landscape,” the judges ruled.

But they also sent a tacit warning to Senator Mitch McConnell, the Republican leader in the Senate who is pushing to re-authorize the provision, known as Section 215, without modification: “There will be time then to address appellants’ constitutional issues.”

“We hold that the text of section 215 cannot bear the weight the government asks us to assign to it, and that it does not authorize the telephone metadata program,” concluded their judgment.

“Such a monumental shift in our approach to combating terrorism requires a clearer signal from Congress than a recycling of oft-used language long held in similar contexts to mean something far narrower,” the judges added.

“We conclude that to allow the government to collect phone records only because they may become relevant to a possible authorized investigation in the future fails even the permissive ‘relevance’ test.

“We agree with appellants that the government’s argument is ‘irreconcilable with the statute’s plain text’.”

Read the entire story here.

Image: Edward Snowden. Courtesy of Wikipedia.

Send to Kindle

The Lone (And Paranoid) Star State

Flag_of_the_Republic_of_Texas

The Lone Star State continues to take pride in doing its own thing. After all, it has a legacy to uphold since its very inception — that of fierce and outspoken independence. But sometimes this leads to blind political arrogance, soon followed by growing paranoia.

You see, newly minted Texas Governor Greg Abbott has a theory that the US military is about to put his state under the control of martial law. So, he has deployed the Texas State Guard to monitor any dubious federal activity and, one supposes, to curtail any attempts at a coup d’état. If I were Governor Abbott I would not overly trouble myself with a possible federal take-over of the state. After all, citizens will very soon be able to openly carry weapons in public — 20 million Texans “packing heat” [carrying a loaded gun, for those not versed in the subtle American vernacular] will surely deter the feds.

From NPR:

Since Gen. Sam Houston executed his famous retreat to glory to defeat the superior forces of Gen. Antonio Lopez de Santa Anna, Texas has been ground zero for military training. We have so many military bases in the Lone Star State we could practically attack Russia.

So when rookie Texas Gov. Greg Abbott announced he was ordering the Texas State Guard to monitor a Navy SEAL/Green Beret joint training exercise, which was taking place in Texas and several other states, everybody here looked up from their iPhones. What?

It seems there is concern among some folks that this so-called training maneuver is just a cover story. What’s really going on? President Obama is about to use Special Forces to put Texas under martial law.

Let’s walk over by the fence where nobody can hear us, and I’ll tell you the story.

You see, there are these Wal-Marts in West Texas that supposedly closed for six months for “renovation.” That’s what they want you to believe. The truth is these Wal-Marts are going to be military guerrilla-warfare staging areas and FEMA processing camps for political prisoners. The prisoners are going to be transported by train cars that have already been equipped with shackles.

Don’t take my word for it. That comes directly from a Texas Ranger, who seems pretty plugged in, if you ask me. You and I both know President Obama has been waiting a long time for this, and now it’s happening. It’s a classic false flag operation. Don’t pay any attention to the mainstream media; all they’re going to do is lie and attack everyone who’s trying to tell you the truth.

Did I mention the ISIS terrorists? They’ve come across the border and are going to hit soft targets all across the Southwest. They’ve set up camp a few miles outside of El Paso.

That includes a Mexican army officer and Mexican federal police inspector. Not sure what they’re doing there, but probably nothing good. That’s why the Special Forces guys are here, get it? To wipe out ISIS and impose martial law. So now you know, whaddya say we get back to the party and grab another beer?

It’s true that the paranoid worldview of right-wing militia types has remarkable stamina. But that’s not news.

What is news is that there seem to be enough of them in Texas to influence the governor of the state to react — some might use the word pander — to them.

That started Monday when a public briefing by the Army in Bastrop County, which is just east of Austin, got raucous. The poor U.S. Army colonel probably just thought he was going to give a regular briefing, but instead 200 patriots shouted him down, told him he was a liar and grilled him about the imminent federal takeover of Texas and subsequent imposition of martial law.

“We just want to make sure our guys are trained. We want to hone our skills,” Lt. Col. Mark Listoria tried to explain in vain.

One wonders what Listoria was thinking to himself as he walked to his car after two hours of his life he’ll never get back. God bless Texas? Maybe not.

The next day Abbott decided he had to take action. He announced that he was going to ask the Texas State Guard to monitor Operation Jade Helm from start to finish.

“It is important that Texans know their safety, constitutional rights, private property rights and civil liberties will not be infringed upon,” Abbott said.

The idea that the Yankee military can’t be trusted down here has a long and rich history in Texas. But that was a while back. Abbott’s proclamation that he was going to keep his eye on these Navy SEAL and Green Beret boys did rub some of our leaders the wrong way.

Former Texas Lt. Gov. David Dewhurst tried to put it in perspective for outsiders when he explained, “Unfortunately, some Texans have projected their legitimate concerns about the competence and trustworthiness of President Barack Obama on these noble warriors. This must stop.”

Another former Republican politician was a bit more pointed.

“Your letter pandering to idiots … has left me livid,” former state Rep. Todd Smith wrote Abbott. “I am horrified that I have to choose between the possibility that my Governor actually believes this stuff and the possibility that my Governor doesn’t have the backbone to stand up to those who do.”

Read the entire story here.

Image: The “Burnet Flag,” used from 1836 to 1839 as the national flag of the Republic of Texas until it was replaced by the currently used “Lone Star Flag.” Public Domain. Courtesy of Wikipedia.

Send to Kindle

Baroness Thatcher and the Media Baron

The cozy yet fraught relationship between politicians and powerful figures in the media has been with us since the first days of newsprint. It’s a delicate symbiosis of sorts — the politician needs the media magnate to help acquire and retain power; the media baron needs the politician to shape and centralize it. The underlying motivations seem similar for both parties, hence the symbiosis — self-absorption, power, vanity.

So, it comes as no surprise to read intimate details of the symbiotic Rupert Murdoch / Margaret Thatcher years. Prime minister Thatcher would sometimes actively, but often surreptitiously, support Murdoch’s megalomaniacal desire to corner the UK (and global) media, while Murdoch would ensure his outlets appropriately channeled Thatcher-friendly news, spin and op-ed. But the Thatcher-Murdoch story is just the latest in a long line of business deals between puppet and puppet-master [you may decide which is which, dear reader]. Over the last hundred years we’ve had William Randolph Hearst and Roosevelt, Lloyd George and Northcliffe, Harold Wilson and Robert Maxwell, Baldwin and Beaverbrook.

Thomas Jefferson deplored newspapers — seeing them as vulgar and cancerous. His prescient analysis of the troubling and complex relationship between the news and politics is just as valid today, “an evil for which there is no remedy; our liberty depends on the freedom of the press, and this cannot be limited without being lost”.

Yet for all the grievous faults and dubious shenanigans of the brutish media barons and their fickle political spouses, the Thatcher-Murdoch story is perhaps not as sinister as one might first think. We now live in an age where faceless corporations and billionaires broker political power and shape policy behind mountains of money, obfuscated institutions and closed doors. This is far more troubling for our democracies. I would rather fight an evil that has a face.

From the Guardian:

The coup that transformed the relationship between British politics and journalism began at a quiet Sunday lunch at Chequers, the official country retreat of the prime minister, Margaret Thatcher. She was trailing in the polls, caught in a recession she had inherited, eager for an assured cheerleader at a difficult time. Her guest had an agenda too. He was Rupert Murdoch, eager to secure her help in acquiring control of nearly 40% of the British press.

Both parties got what they wanted.

The fact that they met at all, on 4 January 1981, was vehemently denied for 30 years. Since their lie was revealed, it has been possible to uncover how the greatest extension of monopoly power in modern press history was planned and executed with such furtive brilliance.

All the wretches in the subsequent hacking sagas – the predators in the red-tops, the scavengers and sleaze merchants, the blackmailers and bribers, the liars, the bullies, the cowed politicians and the bent coppers – were but the detritus of a collapse of integrity in British journalism and political life. At the root of the cruelties and extortions exposed in the recent criminal trials at the Old Bailey, was Margaret Thatcher’s reckless engorgement of the media power of her guest that January Sunday. The simple genesis of the hacking outrages is that Murdoch’s News International came to think it was above the law, because it was.

Thatcher achieved much as a radical prime minister confronted by political turmoil and economic torpor. So did Murdoch, in his liberation of British newspapers from war with the pressroom unions, and by wresting away the print unions’ monopoly of access to computer technology. I applauded his achievements, and still do, as I applauded many of Thatcher’s initiatives when I chaired the editorial boards of the Sunday Times (1967-81) and then the Times (1981-2). It is sad that her successes are stained by recent evidence of her readiness to ensure sunshine headlines for herself in the Murdoch press (especially when it was raining), at a heavy cost to the country. She enabled her guest to avoid a reference to the Monopolies and Mergers Commission, even though he already owned the biggest-selling daily newspaper, the Sun, and the biggest selling Sunday newspaper, the News of the World, and was intent on acquiring the biggest-selling quality weekly, the Sunday Times, and its stablemate, the Times. 

Times Newspapers had long cherished their independence. In 1966, when the Times was in financial difficulty, the new owner who came to the rescue, Lord Roy Thomson of Fleet, promised to sustain it as an independent non-partisan newspaper – precisely how he had conducted the profitable Sunday Times. Murdoch was able to acquire both publications in 1981 only because he began making solemn pledges that he would maintain the tradition of independence. He broke every one of those promises in the first years. His breach of the undertakings freely made for Times Newspapers was a marked contrast with the independent journalism we at the Sunday Times (and William Rees-Mogg at the Times) had enjoyed under the principled ownership of the Thomson family. Thatcher was a vital force in reviving British competitiveness, but she abetted a concentration of press power that became increasingly arrogant and careless of human dignity in ways that would have appalled her, had she remained in good health long enough to understand what her actions had wrought.

Documents released by the Thatcher Archive Trust, now housed at Churchill College, Cambridge, give the lie to a litany of Murdoch-Thatcher denials about collusion during the bidding for Times Newspapers. They also expose a crucial falsehood in the seventh volume of The History of the Times: The Murdoch Years – the official story of the newspaper from 1981-2002, published in 2005 by the Murdoch-owned HarperCollins. In it Graham Stewart wrote, in all innocence, that Murdoch and Thatcher “had no communication whatsoever during the period in which the Times bid and presumed referral to the Monopolies and Mergers Commission was up for discussion”.

Read the entire story here.

Send to Kindle

Marketing of McGod

google-search-church-logos

Many churches now have their own cool logos. All of the large or mega-churches have their own well-defined brands and well-oiled marketing departments. Clearly, God is not doing enough to disseminate his (or her) message — God needs help from ad agencies and marketing departments. Modern day evangelism is not only a big business, it’s now a formalized business process, with key objectives, market share drivers, growth strategies, metrics and key performance indicators (KPIs) — just like any other corporate franchise.

But some Christians believe that there is more (or, actually, less) to their faith than neo-evangelical brands like Vine, Gather, Vertical or Prime. So, some are shunning these houses of “worshipfotainment” [my invention, dear reader] with high-production values and edgy programming; they are forgoing mega-screens with Jesus-powerpoint and heavenly lasers, lattes in the lobby and hip Christian metal. A millennial tells his story of disillusionment with the McChurch — its evangelical shallowness and exclusiveness.

From the Washington Post:

Bass reverberates through the auditorium floor as a heavily bearded worship leader pauses to invite the congregation, bathed in the light of two giant screens, to tweet using #JesusLives. The scent of freshly brewed coffee wafts in from the lobby, where you can order macchiatos and purchase mugs boasting a sleek church logo. The chairs are comfortable, and the music sounds like something from the top of the charts. At the end of the service, someone will win an iPad.

This, in the view of many churches, is what millennials like me want. And no wonder pastors think so. Church attendance has plummeted among young adults. In the United States, 59 percent of people ages 18 to 29 with a Christian background have, at some point, dropped out. According to the Pew Forum on Religion & Public Life, among those of us who came of age around the year 2000, a solid quarter claim no religious affiliation at all, making my generation significantly more disconnected from faith than members of Generation X were at a comparable point in their lives and twice as detached as baby boomers were as young adults.

In response, many churches have sought to lure millennials back by focusing on style points: cooler bands, hipper worship, edgier programming, impressive technology. Yet while these aren’t inherently bad ideas and might in some cases be effective, they are not the key to drawing millennials back to God in a lasting and meaningful way. Young people don’t simply want a better show. And trying to be cool might be making things worse.

You’re just as likely to hear the words “market share” and “branding” in church staff meetings these days as you are in any corporate office. Megachurches such as Saddleback in Lake Forest, Calif., and Lakewood in Houston have entire marketing departments devoted to enticing new members. Kent Shaffer of ChurchRelevance.com routinely ranks the best logos and Web sites and offers strategic counsel to organizations like Saddleback and LifeChurch.tv.

Increasingly, churches offer sermon series on iTunes and concert-style worship services with names like “Vine” or “Gather.” The young-adult group at Ed Young’s Dallas-based Fellowship Church is called Prime, and one of the singles groups at his father’s congregation in Houston is called Vertical. Churches have made news in recent years for giving away tablet computers, TVs and even cars at Easter. Still, attendance among young people remains flat.

Recent research from Barna Group and the Cornerstone Knowledge Network found that 67 percent of millennials prefer a “classic” church over a “trendy” one, and 77 percent would choose a “sanctuary” over an “auditorium.” While we have yet to warm to the word “traditional” (only 40 percent favor it over “modern”), millennials exhibit an increasing aversion to exclusive, closed-minded religious communities masquerading as the hip new places in town. For a generation bombarded with advertising and sales pitches, and for whom the charge of “inauthentic” is as cutting an insult as any, church rebranding efforts can actually backfire, especially when young people sense that there is more emphasis on marketing Jesus than actually following Him. Millennials “are not disillusioned with tradition; they are frustrated with slick or shallow expressions of religion,” argues David Kinnaman, who interviewed hundreds of them for Barna Group and compiled his research in “You Lost Me: Why Young Christians Are Leaving Church ... and Rethinking Faith.”

My friend and blogger Amy Peterson put it this way: “I want a service that is not sensational, flashy, or particularly ‘relevant.’ I can be entertained anywhere. At church, I do not want to be entertained. I do not want to be the target of anyone’s marketing. I want to be asked to participate in the life of an ancient-future community.”

Millennial blogger Ben Irwin wrote: “When a church tells me how I should feel (‘Clap if you’re excited about Jesus!’), it smacks of inauthenticity. Sometimes I don’t feel like clapping. Sometimes I need to worship in the midst of my brokenness and confusion — not in spite of it and certainly not in denial of it.”

When I left church at age 29, full of doubt and disillusionment, I wasn’t looking for a better-produced Christianity. I was looking for a truer Christianity, a more authentic Christianity: I didn’t like how gay, lesbian, bisexual and transgender people were being treated by my evangelical faith community. I had questions about science and faith, biblical interpretation and theology. I felt lonely in my doubts. And, contrary to popular belief, the fog machines and light shows at those slick evangelical conferences didn’t make things better for me. They made the whole endeavor feel shallow, forced and fake.

Read the entire story here.


Spam, Spam, Spam: All Natural


Parents through the ages have often decried the mangling of their mother tongue by subsequent generations. Language is fluid after all, particularly English, and our youth constantly add their own revisions to carve a divergent path from their elders. But the real focus of our disdain for the ongoing destruction of our linguistic heritage should be corporations and their hordes of marketeers and lawyers. Take the once simple and meaningful word “natural”. You’ll see its oxymoronic application each time you stroll along the aisles of your grocery store: one hundred percent natural fruit roll-ups; all natural chicken rings; completely natural corn-dogs; totally naturally flavored cheese puffs. The word — natural — has become meaningless.

From NYT:

It isn’t every day that the definition of a common English word that is ubiquitous in common parlance is challenged in federal court, but that is precisely what has happened with the word “natural.” During the past few years, some 200 class-action suits have been filed against food manufacturers, charging them with misuse of the adjective in marketing such edible oxymorons as “natural” Cheetos Puffs, “all-natural” Sun Chips, “all-natural” Naked Juice, “100 percent all-natural” Tyson chicken nuggets and so forth. The plaintiffs argue that many of these products contain ingredients — high-fructose corn syrup, artificial flavors and colorings, chemical preservatives and genetically modified organisms — that the typical consumer wouldn’t think of as “natural.”

Judges hearing these cases — many of them in the Northern District of California — have sought a standard definition of the adjective that they could cite to adjudicate these claims, only to discover that no such thing exists.

Something in the human mind, or heart, seems to need a word of praise for all that humanity hasn’t contaminated, and for us that word now is “natural.” Such an ideal can be put to all sorts of rhetorical uses. Among the antivaccination crowd, for example, it’s not uncommon to read about the superiority of something called “natural immunity,” brought about by exposure to the pathogen in question rather than to the deactivated (and therefore harmless) version of it made by humans in laboratories. “When you inject a vaccine into the body,” reads a post on an antivaxxer website, Campaign for Truth in Medicine, “you’re actually performing an unnatural act.” This, of course, is the very same term once used to decry homosexuality and, more recently, same-sex marriage, which the Family Research Council has taken to comparing unfavorably to what it calls “natural marriage.”

So what are we really talking about when we talk about natural? It depends; the adjective is impressively slippery, its use steeped in dubious assumptions that are easy to overlook. Perhaps the most incoherent of these is the notion that nature consists of everything in the world except us and all that we have done or made. In our heart of hearts, it seems, we are all creationists.

In the case of “natural immunity,” the modifier implies the absence of human intervention, allowing for a process to unfold as it would if we did nothing, as in “letting nature take its course.” In fact, most of medicine sets itself against nature’s course, which is precisely what we like about it — at least when it’s saving us from dying, an eventuality that is perhaps more natural than it is desirable.

Yet sometimes medicine’s interventions are unwelcome or go overboard, and nature’s way of doing things can serve as a useful corrective. This seems to be especially true at the beginning and end of life, where we’ve seen a backlash against humanity’s technological ingenuity that has given us both “natural childbirth” and, more recently, “natural death.”

This last phrase, which I expect will soon be on many doctors’ lips, indicates the enduring power of the adjective to improve just about anything you attach it to, from cereal bars all the way on up to dying. It seems that getting end-of-life patients and their families to endorse “do not resuscitate” orders has been challenging. To many ears, “D.N.R.” sounds a little too much like throwing Grandpa under the bus. But according to a paper in The Journal of Medical Ethics, when the orders are reworded to say “allow natural death,” patients and family members and even medical professionals are much more likely to give their consent to what amounts to exactly the same protocols.

The word means something a little different when applied to human behavior rather than biology (let alone snack foods). When marriage or certain sexual practices are described as “natural,” the word is being strategically deployed as a synonym for “normal” or “traditional,” neither of which carries nearly as much rhetorical weight. “Normal” is by now too obviously soaked in moral bigotry; by comparison, “natural” seems to float high above human squabbling, offering a kind of secular version of what used to be called divine law. Of course, that’s exactly the role that “natural law” played for America’s founding fathers, who invoked nature rather than God as the granter of rights and the arbiter of right and wrong.

Read the entire article here.

Image courtesy of Google Search.


The Rich and Powerful Live by Different Rules

Never has there been such a wonderful example of blatant hypocrisy, this time from the United States Department of Justice. It would be refreshing to convey to our leaders that not only do “Black Lives Matter”; “Less Privileged Lives” matter as well.

Former director of the CIA, no less, and ex-four-star general David Petraeus copped a mere two years of probation and a $100,000 fine for leaking classified information to his biographer. Chelsea Manning, formerly Bradley Manning, an intelligence analyst and ex-army private, was sentenced to 35 years in prison in 2013 for disclosing classified documents to WikiLeaks.

And, there are many other similar examples.

We wince when hearing of oligarchic corruption and favoritism in other nations, such as Russia and China. But in this country it goes by the euphemism known as “justice”, so it must be OK.

From Ars Technica:

Yesterday [April 23, 2015], former CIA Director David Petraeus was handed two years of probation and a $100,000 fine after agreeing to a plea deal that ends in no jail time for leaking classified information to Paula Broadwell, his biographer and lover.

“I now look forward to moving on with the next phase of my life and continuing to serve our great nation as a private citizen,” Petraeus said outside the federal courthouse in Charlotte, North Carolina on Thursday.

Lower-level government leakers have not, however, been as likely to walk out of a courthouse applauding the US as Petraeus did. Trevor Timm, executive director of the Freedom of the Press Foundation, called the Petraeus plea deal a “gross hypocrisy.”

“At the same time as Petraeus got off virtually scot-free, the Justice Department has been bringing the hammer down upon other leakers who talk to journalists—sometimes for disclosing information much less sensitive than Petraeus did,” he said.

The Petraeus sentencing came days after the Justice Department demanded (PDF) up to a 24-year-term for Jeffrey Sterling, a former CIA agent who leaked information to a Pulitzer Prize-winning writer about a botched mission to sell nuclear plans to Iran in order to hinder its nuclear-weapons progress.

“A substantial sentence in this case would send an appropriate and much needed message to all persons entrusted with the handling of classified information, i.e., that intentional breaches of the laws governing the safeguarding of national defense information will be pursued aggressively, and those who violate the law in this manner will be tried, convicted, and punished accordingly,” the Justice Department argued in Sterling’s case this week.

The Daily Beast sums up the argument that the Petraeus deal involves a double standard by noting other recent penalties for lower-level leakers:

“Chelsea Manning, formerly Bradley Manning, was sentenced to 35 years in prison in 2013 for disclosing classified documents to WikiLeaks. Stephen Jin-Woo Kim, a former State Department contractor, entered a guilty plea last year to one felony count of disclosing classified information to a Fox News reporter in February 2014. He was sentenced to 13 months in prison. On Monday, prosecutors urged a judge to sentence Jeffrey Sterling, a former CIA officer, to at least 20 years in prison for leaking classified plans to sabotage Iran’s nuclear-weapons program to a New York Times reporter. Sterling will be sentenced next month. And former CIA officer John C. Kiriakou served 30 months in federal prison after he disclosed the name of a covert operative to a reporter. He was released in February and is finishing up three months of house arrest.”

The information Petraeus was accused of leaking, according to the original indictment, contained “classified information regarding the identities of covert officers, war strategy, intelligence capabilities and mechanisms, diplomatic discussions, quotes and deliberative discussions from high-level National Security Council meetings.” The leak also included “discussions with the president of the United States.”

The judge presiding over the case, US Magistrate Judge David Keesler, increased the government’s recommended fine of $40,000 to $100,000 because of Petraeus’ “grave but uncharacteristic error in judgement.”

Read the entire story here.

Images: Four-Star General David Petraeus; Private Chelsea Manning. Courtesy of Wikipedia.


Belief and the Falling Light

Many of us now accept that lights falling from the sky are rocky interlopers from the asteroid clouds within our solar system, rather than visiting angels or signs from an angry (or mysteriously benevolent) God. New analysis of the meteor that overflew Chelyabinsk in Russia in 2013 suggests that one of the key founders of Christianity may have witnessed a similar natural phenomenon around two thousand years ago. However, at the time, Saul (later to become Paul the evangelist) interpreted the dazzling light on the road to Damascus – Acts of the Apostles, New Testament – as a message from a Christian God. The rest, as they say, is history. Luckily, recent scientific progress now means that most of us no longer establish new religious movements based on fireballs in the sky. But, we are awed nonetheless.

From the New Scientist:

Nearly two thousand years ago, a man named Saul had an experience that changed his life, and possibly yours as well. According to Acts of the Apostles, the fifth book of the biblical New Testament, Saul was on the road to Damascus, Syria, when he saw a bright light in the sky, was blinded and heard the voice of Jesus. Changing his name to Paul, he became a major figure in the spread of Christianity.

William Hartmann, co-founder of the Planetary Science Institute in Tucson, Arizona, has a different explanation for what happened to Paul. He says the biblical descriptions of Paul’s experience closely match accounts of the fireball meteor seen above Chelyabinsk, Russia, in 2013.

Hartmann has detailed his argument in the journal Meteoritics & Planetary Science (doi.org/3vn). He analyses three accounts of Paul’s journey, thought to have taken place around AD 35. The first is a third-person description of the event, thought to be the work of one of Jesus’s disciples, Luke. The other two quote what Paul is said to have subsequently told others.

“Everything they are describing in those three accounts in the book of Acts are exactly the sequence you see with a fireball,” Hartmann says. “If that first-century document had been anything other than part of the Bible, that would have been a straightforward story.”

But the Bible is not just any ancient text. Paul’s Damascene conversion and subsequent missionary journeys around the Mediterranean helped build Christianity into the religion it is today. If his conversion was indeed as Hartmann explains it, then a random space rock has played a major role in determining the course of history (see “Christianity minus Paul”).

That’s not as strange as it sounds. A large asteroid impact helped kill off the dinosaurs, paving the way for mammals to dominate the Earth. So why couldn’t a meteor influence the evolution of our beliefs?

“It’s well recorded that extraterrestrial impacts have helped to shape the evolution of life on this planet,” says Bill Cooke, head of NASA’s Meteoroid Environment Office in Huntsville, Alabama. “If it was a Chelyabinsk fireball that was responsible for Paul’s conversion, then obviously that had a great impact on the growth of Christianity.”

Hartmann’s argument is possible now because of the quality of observations of the Chelyabinsk incident. The 2013 meteor is the most well-documented example of larger impacts that occur perhaps only once in 100 years. Before 2013, the 1908 blast in Tunguska, also in Russia, was the best example, but it left just a scattering of seismic data, millions of flattened trees and some eyewitness accounts. With Chelyabinsk, there is a clear scientific argument to be made, says Hartmann. “We have observational data that match what we see in this first-century account.”

Read the entire article here.

Video: Meteor above Chelyabinsk, Russia in 2013. Courtesy of Tuvix72.


Endless Political Campaigning


The great capitalist market has decided — endless political campaigning in the United States is beneficial. If you think the presidential campaign to elect the next leader in 2016 began sometime last year, you are not mistaken. In fact, it really does seem that political posturing for the next election often begins before the current one is even decided. We all complain: too many ads, too much negativity, far too much inanity and too little substance. Yet we allow the process to continue, and to grow in scale. Would you put up with a political campaign that lasts a mere 38 days? The British seem to do it. But, then again, the United States is so much more advanced, right?

From WSJ:

On March 23, Ted Cruz announced he is running for president in a packed auditorium at Liberty University in Lynchburg, Va. On April 7, Rand Paul announced he is running for president amid the riverboat décor of the Galt House hotel in Louisville, Ky. On April 12, Hillary Clinton announced she is running for president in a brief segment of a two-minute video. On April 13, Marco Rubio announced he is running before a cheering crowd at the Freedom Tower in Miami. And these are just the official announcements.

Jeb Bush made it known in December that he is interested in running. Scott Walker’s rousing speech at the Freedom Summit in Des Moines, Iowa, on Jan. 24 left no doubt that he will enter the race. Chris Christie’s appearance in New Hampshire last week strongly suggests the same. Previous presidential candidates Mike Huckabee, Rick Perry and Rick Santorum seem almost certain to run. Pediatric neurosurgeon Ben Carson is reportedly ready to announce his run on May 4 at the Detroit Music Hall.

With some 570 days left until Election Day 2016, the race for president is very much under way—to the dismay of a great many Americans. They find the news coverage of the candidates tiresome (what did Hillary order at Chipotle?), are depressed by the negative campaigning that is inevitable in an adversarial process, and dread the onslaught of political TV ads. Too much too soon!

They also note that other countries somehow manage to select their heads of government much more quickly. The U.K. has a general election campaign going on right now. It began on March 30, when the queen, on the advice of the prime minister, dissolved Parliament, and voting will take place on May 7. That’s 38 days later. Britons are complaining that the electioneering goes on too long.

American presidential campaigns did not always begin so soon, but they have for more than a generation now. As a young journalist, Sidney Blumenthal (in recent decades a consigliere to the Clintons) wrote quite a good book titled “The Permanent Campaign.” It was published in 1980. Mr. Blumenthal described what was then a relatively new phenomenon.

When Jimmy Carter announced his candidacy for president in January 1975, he was not taken particularly seriously. But his perseverance paid off, and he took the oath of office two years later. His successors—Ronald Reagan, George H.W. Bush and Bill Clinton—announced their runs in the fall before their election years, although they had all been busy assembling campaigns before that. George W. Bush announced in June 1999, after the adjournment of the Texas legislature. Barack Obama announced in February 2007, two days before Lincoln’s birthday, in Lincoln’s Springfield, Ill. By that standard, declared candidates Mr. Cruz, Mr. Paul, Mrs. Clinton and Mr. Rubio got a bit of a late start.

Why are American presidential campaigns so lengthy? And is there anything that can be done to compress them to a bearable timetable?

One clue to the answers: The presidential nominating process, the weakest part of our political system, is also the one part that was not envisioned by the Founding Fathers. The framers of the Constitution created a powerful presidency, confident (justifiably, as it turned out) that its first incumbent, George Washington, would set precedents that would guide the republic for years to come.

But they did not foresee that even in Washington’s presidency, Americans would develop political parties, which they abhorred. The Founders expected that later presidents would be chosen, usually by the House of Representatives, from local notables promoted by different states in the Electoral College. They did not expect that the Federalist and Republican parties would coalesce around two national leaders—Washington’s vice president, John Adams, and Washington’s first secretary of state, Thomas Jefferson—in the close elections of 1796 and 1800.

The issue then became: When a president followed George Washington’s precedent and retired after two terms, how would the parties choose nominees, in a republic that, from the start, was regionally, ethnically and religiously diverse?

Read the entire story here.

Image courtesy of Google Search.


Religious Dogma and DNA

Despite ongoing conflicts around the globe that are fueled or governed by religious fanaticism, it is entirely plausible that our general tendency toward supernatural belief is encoded in our DNA. Of course, this does not mean that a God or various gods exist; it merely implies that, over time, natural selection generally favored those who believed in deities over those who did not. We are such complex and contradictory animals.

From NYT:

Most of us find it mind-boggling that some people seem willing to ignore the facts — on climate change, on vaccines, on health care — if the facts conflict with their sense of what someone like them believes. “But those are the facts,” you want to say. “It seems weird to deny them.”

And yet a broad group of scholars is beginning to demonstrate that religious belief and factual belief are indeed different kinds of mental creatures. People process evidence differently when they think with a factual mind-set rather than with a religious mind-set. Even what they count as evidence is different. And they are motivated differently, based on what they conclude. On what grounds do scholars make such claims?

First of all, they have noticed that the very language people use changes when they talk about religious beings, and the changes mean that they think about their realness differently. You do not say, “I believe that my dog is alive.” The fact is so obvious it is not worth stating. You simply talk in ways that presume the dog’s aliveness — you say she’s adorable or hungry or in need of a walk. But to say, “I believe that Jesus Christ is alive” signals that you know that other people might not think so. It also asserts reverence and piety. We seem to regard religious beliefs and factual beliefs with what the philosopher Neil Van Leeuwen calls different “cognitive attitudes.”

Second, these scholars have remarked that when people consider the truth of a religious belief, what the belief does for their lives matters more than, well, the facts. We evaluate factual beliefs often with perceptual evidence. If I believe that the dog is in the study but I find her in the kitchen, I change my belief. We evaluate religious beliefs more with our sense of destiny, purpose and the way we think the world should be. One study found that over 70 percent of people who left a religious cult did so because of a conflict of values. They did not complain that the leader’s views were mistaken. They believed that he was a bad person.

Third, these scholars have found that religious and factual beliefs play different roles in interpreting the same events. Religious beliefs explain why, rather than how. People who understand readily that diseases are caused by natural processes might still attribute sickness at a particular time to demons, or healing to an act of God. The psychologist Cristine H. Legare and her colleagues recently demonstrated that people use both natural and supernatural explanations in this interdependent way across many cultures. They tell a story, recounted in Tracy Kidder’s book on the anthropologist and physician Paul Farmer, about a woman who had taken her tuberculosis medication and been cured — and who then told Dr. Farmer that she was going to get back at the person who had used sorcery to make her ill. “But if you believe that,” he cried, “why did you take your medicines?” In response to the great doctor she replied, in essence, “Honey, are you incapable of complexity?”

Moreover, people’s reliance on supernatural explanations increases as they age. It may be tempting to think that children are more likely than adults to reach out to magic to explain something, and that they increasingly put that mind-set to the side as they grow up, but the reverse is true. It’s the young kids who seem skeptical when researchers ask them about gods and ancestors, and the adults who seem clear and firm. It seems that supernatural ideas do things for adults they do not yet do for children.

Finally, scholars have determined that people don’t use rational, instrumental reasoning when they deal with religious beliefs. The anthropologist Scott Atran and his colleagues have shown that sacred values are immune to the normal cost-benefit trade-offs that govern other dimensions of our lives. Sacred values are insensitive to quantity (one cartoon can be a profound insult). They don’t respond to material incentives (if you offer people money to give up something that represents their sacred value, they often become more intractable in their refusal). Sacred values may even have different neural signatures in the brain.

The danger point seems to be when people feel themselves to be completely fused with a group defined by its sacred value. When Mr. Atran and his colleagues surveyed young men in two Moroccan neighborhoods associated with militant jihad (one of them home to five men who helped plot the 2004 Madrid train bombings, and then blew themselves up), they found that those who described themselves as closest to their friends and who upheld Shariah law were also more likely to say that they would suffer grievous harm to defend Shariah law. These people become what Mr. Atran calls “devoted actors” who are unconditionally committed to their sacred value, and they are willing to die for it.

Read the entire article here.


MondayMap: Imagining a Post-Post-Ottoman World


The United States is often portrayed as the world’s bully and nefarious geo-political schemer — a nation responsible for many of the world’s current political ills. However, it is the French and British who should be called to account for much of the globe’s ongoing turmoil, particularly in the Middle East. After the end of WWI the victors expeditiously carved up the spoils of the vanquished Austro-Hungarian and Ottoman Empires. Much of Eastern Europe and the Middle East was divvied up and traded just as kids might swap baseball or football (soccer) cards today. French Prime Minister Georges Clemenceau and British Prime Minister David Lloyd George famously bartered and gifted — amongst themselves and their friends — entire regions and cities without thought to historical precedent, geographic and ethnic boundaries, or even the basic needs of entire populations. Their decisions were merely lines to be drawn and re-drawn on a map.

So, it would be a fascinating — though rather naive — exercise to re-draw many of today’s arbitrary and contrived boundaries, and to revert regions to their more appropriate owners. Of course, where and when should this thought experiment begin and end? Pre-Roman Empire, post-Normans, before the Prussians, prior to the Austro-Hungarian Empire, after the Ottomans, post-Soviets, after Tito, or way before the Huns, Vandals, Barbarians and any number of the Germanic tribes?

Nevertheless, essayist Yaroslav Trofimov takes a stab at re-districting to pre-Ottoman boundaries and imagines a world with less bloodshed. A worthy dream.

From WSJ:

Shortly after the end of World War I, the French and British prime ministers took a break from the hard business of redrawing the map of Europe to discuss the easier matter of where frontiers would run in the newly conquered Middle East.

Two years earlier, in 1916, the two allies had agreed on their respective zones of influence in a secret pact—known as the Sykes-Picot agreement—for divvying up the region. But now the Ottoman Empire lay defeated, and the United Kingdom, having done most of the fighting against the Turks, felt that it had earned a juicier reward.

“Tell me what you want,” France’s Georges Clemenceau said to Britain’s David Lloyd George as they strolled in the French embassy in London.

“I want Mosul,” the British prime minister replied.

“You shall have it. Anything else?” Clemenceau asked.

In a few seconds, it was done. The huge Ottoman imperial province of Mosul, home to Sunni Arabs and Kurds and to plentiful oil, ended up as part of the newly created country of Iraq, not the newly created country of Syria.

The Ottomans ran a multilingual, multireligious empire, ruled by a sultan who also bore the title of caliph—commander of all the world’s Muslims. Having joined the losing side in the Great War, however, the Ottomans saw their empire summarily dismantled by European statesmen who knew little about the region’s people, geography and customs.

The resulting Middle Eastern states were often artificial creations, sometimes with implausibly straight lines for borders. They have kept going since then, by and large, remaining within their colonial-era frontiers despite repeated attempts at pan-Arab unification.

The built-in imbalances in some of these newly carved-out states—particularly Syria and Iraq—spawned brutal dictatorships that succeeded for decades in suppressing restive majorities and perpetuating the rule of minority groups.

But now it may all be coming to an end. Syria and Iraq have effectively ceased to function as states. Large parts of both countries lie beyond central government control, and the very meaning of Syrian and Iraqi nationhood has been hollowed out by the dominance of sectarian and ethnic identities.

The rise of Islamic State is the direct result of this meltdown. The Sunni extremist group’s leader, Abu Bakr al-Baghdadi, has proclaimed himself the new caliph and vowed to erase the shame of the “Sykes-Picot conspiracy.” After his men surged from their stronghold in Syria last summer and captured Mosul, now one of Iraq’s largest cities, he promised to destroy the old borders. In that offensive, one of the first actions taken by ISIS (as his group is also known) was to blow up the customs checkpoints between Syria and Iraq.

“What we are witnessing is the demise of the post-Ottoman order, the demise of the legitimate states,” says Francis Ricciardone, a former U.S. ambassador to Turkey and Egypt who is now at the Atlantic Council, a Washington think tank. “ISIS is a piece of that, and it is filling in a vacuum of the collapse of that order.”

In the mayhem now engulfing the Middle East, it is mostly the countries created a century ago by European colonialists that are coming apart. In the region’s more “natural” nations, a much stronger sense of shared history and tradition has, so far, prevented a similar implosion.

“Much of the conflict in the Middle East is the result of insecurity of contrived states,” says Husain Haqqani, an author and a former Pakistani ambassador to the U.S. “Contrived states need state ideologies to make up for lack of history and often flex muscles against their own people or against neighbors to consolidate their identity.”

In Egypt, with its millennial history and strong sense of identity, almost nobody questioned the country’s basic “Egyptian-ness” throughout the upheaval that has followed President Hosni Mubarak’s ouster in a 2011 revolution. As a result, most of Egypt’s institutions have survived the turbulence relatively intact, and violence has stopped well short of outright civil war.

Turkey and Iran—both of them, in bygone eras, the center of vast empires—have also gone largely unscathed in recent years, even though both have large ethnic minorities of their own, including Arabs and Kurds.

The Middle East’s “contrived” countries weren’t necessarily doomed to failure, and some of them—notably Jordan—aren’t collapsing, at least not yet. The world, after all, is full of multiethnic and multiconfessional states that are successful and prosperous, from Switzerland to Singapore to the U.S., which remains a relative newcomer as a nation compared with, say, Iran.

Read the entire article here.

Image: Map of Sykes–Picot Agreement showing Eastern Turkey in Asia, Syria and Western Persia, and areas of control and influence agreed between the British and the French. Royal Geographical Society, 1910-15. Signed by Mark Sykes and François Georges-Picot, 8 May 1916. Courtesy of Wikipedia.


Yes M’Lady


Beneath the shell that envelops us as adults lies the child. We all have one inside — that vulnerable being who dreams, plays and improvises. Sadly, our contemporary society does a wonderful job of selectively numbing these traits, usually as soon as we enter school; our work finishes the process by quashing all remnants of our once colorful and unbounded imaginations. OK, I’m exaggerating a little to make my point. But I’m certain this strikes a chord.

Keeping this in mind, it’s awesomely brilliant to see Thunderbirds making a comeback. You may recall the original Thunderbirds TV shows in the mid-sixties. Created by Gerry and Sylvia Anderson, the marionette puppets and their International Rescue science-fiction machines would save us weekly from the forces of evil, destruction and chaos. The child who lurks within me utterly loved this show — everything would come to a halt to make way for this event on Saturday mornings. Now I have a chance to relive it with my kids, and to maintain some degree of childhood wonder in the process. Thunderbirds are go…

From the Guardian:

5, 4, 3, 2, 1 … Thunderbirds are go – but not quite how older viewers will remember. International Rescue has been given a makeover for the modern age, with the Tracy brothers, Brains, Lady Penelope and Parker smarter, fitter and with better gadgets than they ever had when the “supermarionation” show began on ITV half a century ago.

But fans fearful that its return, complete with Hollywood star Rosamund Pike voicing Lady Penelope, will trample all over their childhood memories can rest easy.

Unlike the 2004 live action film which Thunderbirds creator, the late Gerry Anderson, described as the “biggest load of crap I have ever seen in my life”, the new take on the children’s favourite, called Thunderbirds Are Go, remains remarkably true to the spirit of the 50-year-old original.

Gone are the puppet strings – audience research found that younger viewers wanted something more dynamic – but along with computer generated effects are models and miniature sets (“actually rather huge” said executive producer Estelle Hughes) that faithfully recall the original Thunderbirds.

Speaking after the first screening of the new ITV series on Tuesday, executive producer Giles Ridge said: “We felt we should pay tribute to all those elements that made it special but at the same time update it so it’s suitable and compelling for a modern audience.

“The basic DNA of the show – five young brothers on a secret hideaway island with the most fantastic craft you could imagine, helping people around the world who are in trouble, that’s not a bad place to start.”

The theme music is intact, albeit given a 21st century makeover, as is the Tracy Island setting – complete with the avenue of palm trees that makes way for Thunderbird 2 and the swimming pool that slides into the mountain for the launch of Thunderbird 1.

Lady Penelope – as voiced by Pike – still has a cut-glass accent and is entirely unflappable. When she is not saving the world she is visiting Buckingham Palace or attending receptions at 10 Downing Street. There is also a nod – blink and you miss it – to another Anderson puppet series, Stingray.

David Graham, who voiced Parker in the original series, returns in the same role. “I think they were checking me out to see if I was still in one piece,” said Graham, now 89, of the meeting when he was first approached to appear in the new series.

“I was absolutely thrilled to repeat the voice and character of Parker. Although I am older my voice hasn’t changed too much over the years.”

He said the voice of Parker had come from a wine waiter who used to work in the royal household, whom Anderson had taken him to see in a pub in Cookham, Berkshire.

“He came over and said, ‘Would you like to see the wine list, sir?’ And Parker was born. Thank you, old mate.”

Brains, as voiced by Fonejacker star Kayvan Novak, now has an Indian accent.

Sylvia Anderson, Anderson’s widow, who co-created the show, will make a guest appearance as Lady Penelope’s “crazy aunt”.

Read the entire story here.

Image courtesy of Google Search.


Your Current Dystopian Nightmare: In Just One Click

Amazon was supposed to give you back precious time by making shopping and spending painlessly simple. Apps on your smartphone were supposed to do the same for all manner of re-tooled on-demand services. What wonderful time-saving inventions! So, now you can live in the moment and make use of all this extra free time. It’s your time now. You’ve won it back and no one can take it away.

And, what do you spend this newly earned free time doing? Well, you sit at home in your isolated cocoon, you shop for more things online, you download some more great apps that promise to bring even greater convenience, you interact less with real humans, and, best of all, you spend more time working. Welcome to your new dystopian nightmare, and it’s happening right now. Click.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy — with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding — there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients — they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time — literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire story here.


April Can Mean Only One Thing


The advent of April in the United States usually brings the impending tax day to mind. In the UK, when April rolls in, it means the media goes overboard with April Fools’ jokes. Here’s a smattering of the silliest from Britain’s most serious media outlets.

From the Telegraph: transparent Marmite, Yessus Juice, prison release voting app, Burger King cologne (for men).

From the Guardian: Jeremy Clarkson and fossil fuel divestment.

From the Independent: a round-up of the best gags, including the proposed Edinburgh suspension bridge featuring a gap, Simon Cowell’s effigy on the new £5 note, grocery store aisle trampolines for the short of stature.

Image: Hailo’s new piggyback rideshare service.


Women Are From Venus, Men Can’t Remember

Yet another body of research underscores how different women are from men. This time, we are told, the sexes generally encode and recall memories differently. So, the next time you take issue with a spouse (of a different gender) about a — typically trivial — past event, keep in mind that your own actions, mood and gender will affect your recall. If you’re female, your memories may be much more vivid than your male counterpart’s, but not necessarily more correct. If you (male) won last night’s argument, your spouse (female) will — unfortunately for you — remember it more accurately than you do, which of course will lead to another argument.

From WSJ:

Carrie Aulenbacher remembers the conversation clearly: Her husband told her he wanted to buy an arcade machine he found on eBay. He said he’d been saving up for it as a birthday present to himself. The spouses sat at the kitchen table and discussed where it would go in the den.

Two weeks later, Ms. Aulenbacher came home from work and found two arcade machines in the garage—and her husband beaming with pride.

“What are these?” she demanded.

“I told you I was picking them up today,” he replied.

She asked him why he’d bought two. He said he’d told her he was getting “a package deal.” She reminded him they’d measured the den for just one. He stood his ground.

“I believe I told her there was a chance I was going to get two,” says Joe Aulenbacher, who is 37 and lives in Erie, Pa.

“It still gets me going to think about it a year later,” says Ms. Aulenbacher, 36. “My home is now overrun with two machines I never agreed upon.” The couple compromised by putting one game in the den and the other in Mr. Aulenbacher’s weight room.

It is striking how many arguments in a relationship start with two different versions of an event: “Your tone of voice was rude.” “No it wasn’t.” “You didn’t say you’d be working late.” “Yes I did.” “I told you we were having dinner with my mother tonight.” “No, honey. You didn’t.”

How can two people have different memories of the same event? It starts with the way each person perceives the event in the first place—and how they encoded that memory. “You may recall something differently at least in part because you understood it differently at the time,” says Dr. Michael Ross, professor emeritus in the psychology department at the University of Waterloo in Ontario, Canada, who has studied memory for many years.

Researchers know that spouses sometimes can’t even agree on concrete events that happened in the past 24 hours—such as whether they had an argument or whether one received a gift from the other. A study in the early 1980s, published in the journal “Behavioral Assessment,” found that couples couldn’t perfectly agree on whether they had sex the previous night.

Women tend to remember more about relationship issues than men do. When husbands and wives are asked to recall concrete relationship events, such as their first date, an argument or a recent vacation, women’s memories are more vivid and detailed.

But not necessarily more accurate. When given a standard memory test where they are shown names or pictures and then asked to recall them, women do just about the same as men.

Researchers have found that women report having more emotions during relationship events than men do. They may remember events better because they pay more attention to the relationship and reminisce more about it.

People also remember their own actions better. So they can recall what they did, just not what their spouse did. Researchers call this an egocentric bias, and study it by asking people to recall their contributions to events, as well as their spouse’s. Who cleans the kitchen more? Who started the argument? Whether the event is positive or negative, people tend to believe that they had more responsibility.

Your mood—both when an event happens and when you recall it later—plays a big part in memory, experts say. If you are in a positive mood or feeling positive about the other person, you will more likely recall a positive experience or give a positive interpretation to a negative experience. Similarly, negative moods tend to reap negative memories.

Negative moods may also cause stronger memories. A person who lost an argument remembers it more clearly than the person who won it, says Dr. Ross. Men tend to win more arguments, he says, which may help to explain why women remember the spat more. But men who lost an argument remember it as well as women who lost.

Read the entire article here.


We Are All Always Right, All of the Time

You already know this: you believe that your opinion is correct all the time, about everything. And, interestingly enough, your friends and neighbors believe that they are always right too. Oh, and the colleague at the office with whom you argue all the time — she’s right all the time too.

How can this be, when in an increasingly science-driven, objective universe facts trump opinion? Well, not so fast. It seems that we humans have an internal mechanism that colors our views based on a need for acceptance within a broader group. That is, we generally tend to spin our rational views in favor of group consensus, versus supporting the views of a subject matter expert, which might polarize the group. This is both good and bad. Good because it reinforces the broader benefits of being within a group; bad because we are more likely to reject opinion, evidence and fact from experts outside of our group — think climate change.

From the Washington Post:

It’s both the coolest — and also in some ways the most depressing — psychology study ever.

Indeed, it’s so cool (and so depressing) that the name of its chief finding — the Dunning-Kruger effect — has at least halfway filtered into public consciousness. In the classic 1999 paper, Cornell researchers David Dunning and Justin Kruger found that the less competent people were in three domains — humor, logic, and grammar — the less likely they were to be able to recognize that. Or as the researchers put it:

We propose that those with limited knowledge in a domain suffer from a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it.

Dunning and Kruger didn’t directly apply this insight to our debates about science. But I would argue that the effect named after them certainly helps to explain phenomena like vaccine denial, in which medical authorities have voiced a very strong opinion, but some parents just keep on thinking that, somehow, they’re in a position to challenge or ignore this view.

So why do I bring this classic study up now?

The reason is that an important successor to the Dunning-Kruger paper has just come out — and it, too, is pretty depressing (at least for those of us who believe that domain expertise is a thing to be respected and, indeed, treasured). This time around, psychologists have not uncovered an endless spiral of incompetence and the inability to perceive it. Rather, they’ve shown that people have an “equality bias” when it comes to competence or expertise, such that even when it’s very clear that one person in a group is more skilled, expert, or competent (and the other less), they are nonetheless inclined to seek out a middle ground in determining how correct different viewpoints are.

Yes, that’s right — we’re all right, nobody’s wrong, and nobody gets hurt feelings.

The new study, just published in the Proceedings of the National Academy of Sciences, is by Ali Mahmoodi of the University of Tehran and a long list of colleagues from universities in the UK, Germany, China, Denmark, and the United States. And no wonder: The research was transnational, and the same experiment — with the same basic results — was carried out across cultures in China, Denmark, and Iran.

Read the entire story here.


Hyper-Parenting and Couch Potato Kids


Parents who are overly engaged in micro-managing the academic, athletic and social lives of their kids may be responsible for ensuring their offspring lead less active lives. A new research study finds children of so-called hyper-parents are significantly less active than peers with less involved parents. Hyper-parenting seems to come in four flavors: helicopter parents who hover over their child’s every move; tiger moms who constantly push for superior academic attainment; little emperor parents who constantly bestow their kids with material things; and concerted cultivation parents who over-schedule their kids with never-ending after-school activities. If you recognize yourself in one of these parenting styles, take a deep breath, think back to when, as a 7-12 year-old, you had the most fun, and let your kids play outside — preferably in the rain and mud!

From the WSJ / Preventive Medicine:

Hyper-parenting may increase the risk of physical inactivity in children, a study in the April issue of Preventive Medicine suggests.

Children with parents who tended to be overly involved in their academic, athletic and social lives—a child-rearing style known as hyper-parenting—spent less time outdoors, played fewer after-school sports and were less likely to bike or walk to school, friends’ homes, parks and playgrounds than children with less-involved parents.

Hyperparenting, although it’s intended to benefit children by giving them extra time and attention, could have adverse consequences for their health, the researchers said.

The study, at Queen’s University in Ontario, surveyed 724 parents of children, ages 7 to 12 years old, born in the U.S. and Canada from 2002 to 2007. (The survey was based on parents’ interaction with the oldest child.)

Questionnaires assessed four hyper-parenting styles: helicopter or overprotective parents; little-emperor parents who shower children with material goods; so-called tiger moms who push for exceptional achievement; and parents who schedule excessive extracurricular activities, termed concerted cultivation. Hyperparenting was ranked in five categories from low to high based on average scores in the four styles.

Children’s preferred play location was their yard at home, and 64% of the children played there at least three times a week. Only 12% played on streets and cul-de-sacs away from home. Just over a quarter walked or cycled to school or friends’ homes, and slightly fewer to parks and playgrounds. Organized sports participation was 26%.

Of parents, about 40% had high hyper-parenting scores and 6% had low scores. The most active children had parents with low to below-average scores in all four hyper-parenting styles, while the least active had parents with average-to-high hyper-parenting scores. The difference between children in the low and high hyper-parenting groups was equivalent to about 20 physical-activity sessions a week, the researchers said.

Read the entire story here.

Image courtesy of Google Search.


Humor Versus Horror

Faced with unspeakable horror, many of us turn away. Some courageous souls turn instead to humor to counter the vileness of others. So it is heartwarming to see comedians and satirists taking up rhetorical arms in the backyards of murderers and terrorists. Fighting violence and terror with more of the same may show progress in the short term, but ridiculing our enemies with humor and thoughtful dialogue is the only long-term way to fight evil in its many human forms. A profound thank you to these four brave Syrian refugees who, in the face of much personal danger, are able to laugh at their foes.

From the Guardian:

They don’t have much to laugh about. But four young Syrian refugees from Aleppo believe humour may be the only antidote to the horrors taking place back home.

Settled in a makeshift studio in the Turkish city of Gaziantep 40 miles from the Syrian border, the film-makers decided ridicule was an effective way of responding to Islamic State and its grisly record of extreme violence.

“The entire world seems to be terrified of Isis, so we want to laugh at them, expose their hypocrisy and show that their interpretation of Islam does not represent the overwhelming majority of Muslims,” says Maen Watfe, 27. “The media, especially the western media, obsessively reproduce Isis propaganda portraying them as strong and intimidating. We want to show their weaknesses.”

The films and videos on Watfe and his three friends’ website mock the Islamist extremists and depict them as naive simpletons, hypocritical zealots and brutal thugs. It’s a high-risk undertaking. They have had to move house and keep their addresses secret from even their best friends after receiving death threats.

But the video activists – Watfe, Youssef Helali, Mohammed Damlakhy and Aya Brown – will not be deterred.

Their film The Prince shows Isis leader and self-appointed caliph Abu Bakr al-Baghdadi drinking wine, listening to pop music and exchanging selfies with girls on his smartphone. A Moroccan jihadi arrives saying he came to Syria to “liberate Jerusalem”. The leader swaps the wine for milk and switches the music to Islamic chants praising martyrdom. Then he hands the Moroccan a suicide belt and sends him off against a unit of Free Syrian Army fighters. The grenades detonate, and Baghdadi reaches for his glass of wine and turns the pop music back on.

It is pieces like this that have brought hate mail and threats via social media.

“One of them said that they would finish us off like they finished off Charlie [Hebdo],” Brown, 26, recalls. She declined to give her real name out of fear for her family, who still live in Aleppo. “In the end we decided to move from our old apartment.”

The Turkish landlord told them Arabic-speaking men had repeatedly asked for their whereabouts after they left, and kept the studio under surveillance.

Follow the story here.

Video: Happy Valentine. Courtesy of Dayaaltaseh Productions.
