Category Archives: Idea Soup

Where Will I Get My News (and Satire)?


Jon Stewart. Jon Stewart, you dastardly, villainous so-and-so. How could you? How could you decide to leave the most important show in media history — The Daily Show — after a mere 16 years? Where will I get my news? Where will I find another hypocrisy-meter? Where will I find another truth-seeking David to fend us from the fear-mongering neocon Goliaths led by Roger Ailes over at the Foxion News Channel? Where will I find such a thoroughly delicious merging of news, fact and satire? Jon Stewart, how could you?!

From the Guardian:

“Where will I get my news each night,” lamented Bill Clinton this week. This might have been a reaction to the fall from grace of Brian Williams, America’s top-rated news anchor, who was suspended for embellishing details of his adventures in Iraq. In fact the former US president was anticipating withdrawal symptoms for the impending departure of the comedian Jon Stewart, who – on the same day as Williams’s disgrace – announced that he will step down as the Daily Show host.

Stewart, who began his stint 16 years ago, has achieved something extraordinary from behind a studio desk on a comedy cable channel. Merging the intense desire for factual information with humour, irreverence, scepticism and usually appropriate cynicism, Stewart’s show proved a magnet for opinion formers, top politicians – who clamoured to appear – and most significantly the young, for whom the mix proved irresistible. His ridiculing of neocons became a nightly staple. His rejection from the outset of the Iraq war was prescient. And always he was funny, not least this week in using Williams’s fall to castigate the media for failing to properly scrutinise the Iraq war. Bill Clinton does not mourn alone.

Read the entire story here.

Image courtesy of Google Search.


Social Media Metes Out Social (Networking) Justice

Before the age of Facebook and Twitter if you were to say something utterly stupid, bigoted, sexist or racist among a small group of friends or colleagues it would, usually, have gone no further. Some members of your audience may have chastised you, while others may have agreed or ignored you. But then the comment would have been largely forgotten.

This is no longer so in our age of social networking and constant inter-connectedness. Our technologies distribute, repeat and amplify our words and actions, which now seem to take on lives of their very own. Love it or hate it — welcome to the age of social networking justice — a 21st century digital pillory.

Say something stupid or do something questionable today — and you’re likely to face a consequential backlash that stretches beyond the present and into your future. Just take the case of Justine Sacco.

From NYT:

As she made the long journey from New York to South Africa, to visit family during the holidays in 2013, Justine Sacco, 30 years old and the senior director of corporate communications at IAC, began tweeting acerbic little jokes about the indignities of travel. There was one about a fellow passenger on the flight from John F. Kennedy International Airport:

“‘Weird German Dude: You’re in First Class. It’s 2014. Get some deodorant.’ — Inner monologue as I inhale BO. Thank God for pharmaceuticals.”

Then, during her layover at Heathrow:

“Chilly — cucumber sandwiches — bad teeth. Back in London!”

And on Dec. 20, before the final leg of her trip to Cape Town:

“Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”

She chuckled to herself as she pressed send on this last one, then wandered around Heathrow’s international terminal for half an hour, sporadically checking her phone. No one replied, which didn’t surprise her. She had only 170 Twitter followers.

Sacco boarded the plane. It was an 11-hour flight, so she slept. When the plane landed in Cape Town and was taxiing on the runway, she turned on her phone. Right away, she got a text from someone she hadn’t spoken to since high school: “I’m so sorry to see what’s happening.” Sacco looked at it, baffled.

Then another text: “You need to call me immediately.” It was from her best friend, Hannah. Then her phone exploded with more texts and alerts. And then it rang. It was Hannah. “You’re the No. 1 worldwide trend on Twitter right now,” she said.

Sacco’s Twitter feed had become a horror show. “In light of @JustineSacco disgusting racist tweet, I’m donating to @care today” and “How did @JustineSacco get a PR job?! Her level of racist ignorance belongs on Fox News. #AIDS can affect anyone!” and “I’m an IAC employee and I don’t want @JustineSacco doing any communications on our behalf ever again. Ever.” And then one from her employer, IAC, the corporate owner of The Daily Beast, OKCupid and Vimeo: “This is an outrageous, offensive comment. Employee in question currently unreachable on an intl flight.” The anger soon turned to excitement: “All I want for Christmas is to see @JustineSacco’s face when her plane lands and she checks her inbox/voicemail” and “Oh man, @JustineSacco is going to have the most painful phone-turning-on moment ever when her plane lands” and “We are about to watch this @JustineSacco bitch get fired. In REAL time. Before she even KNOWS she’s getting fired.”

The furor over Sacco’s tweet had become not just an ideological crusade against her perceived bigotry but also a form of idle entertainment. Her complete ignorance of her predicament for those 11 hours lent the episode both dramatic irony and a pleasing narrative arc. As Sacco’s flight traversed the length of Africa, a hashtag began to trend worldwide: #HasJustineLandedYet. “Seriously. I just want to go home to go to bed, but everyone at the bar is SO into #HasJustineLandedYet. Can’t look away. Can’t leave” and “Right, is there no one in Cape Town going to the airport to tweet her arrival? Come on, Twitter! I’d like pictures #HasJustineLandedYet.”

A Twitter user did indeed go to the airport to tweet her arrival. He took her photograph and posted it online. “Yup,” he wrote, “@JustineSacco HAS in fact landed at Cape Town International. She’s decided to wear sunnies as a disguise.”

By the time Sacco had touched down, tens of thousands of angry tweets had been sent in response to her joke. Hannah, meanwhile, frantically deleted her friend’s tweet and her account — Sacco didn’t want to look — but it was far too late. “Sorry @JustineSacco,” wrote one Twitter user, “your tweet lives on forever.”

Read the entire article here.


Are Most CEOs Talented or Lucky?

According to Harold G. Hamm, founder and CEO of Continental Resources, most CEOs are lucky, not talented. You see, Hamm’s net worth has reached around $18 billion, yet in recent divorce filings he claims to have been responsible for generating only around 10 percent of this wealth since founding his company in 1988. Interestingly, even though he made most of the key company appointments and oversaw all the key business decisions, he seems rather reticent to claim much of the company’s success as his own. Strange, then, that his company would compensate him to the tune of around $43 million during 2006-2013 for essentially being a lucky slacker!

This, of course, enables him to minimize the amount owed to his ex-wife. One has to surmise from these shenanigans that some CEOs are not merely lucky, they’re also stupid.

On a broader note, this does raise the question of why many CEOs are rewarded with such extraordinary sums when it’s mostly luck guiding their companies’ progress!

From NYT:

The divorce of the oil billionaire Harold G. Hamm from Sue Ann Arnall has gained attention largely for its outsize dollar amounts. Mr. Hamm, the chief executive and founder of Continental Resources, who was worth more than $18 billion at one point, wrote his ex-wife a check last month for $974,790,317.77 to settle their split. She’s appealing to get more; he’s appealing to pay less.

Yet beyond the staggering sums, the Hamm divorce raises a fundamental question about the wealth of executives and entrepreneurs: How much do they owe their fortunes to skill and hard work, and how much comes from happenstance and luck?

Mr. Hamm, seeking to exploit a wrinkle in divorce law, made the unusual argument that his wealth came largely from forces outside his control, like global oil prices, the expertise of his deputies and other people’s technology. During the nine-week divorce trial, his lawyers claimed that although Mr. Hamm had founded Continental Resources and led the company to become a multibillion-dollar energy giant, he was responsible for less than 10 percent of his personal and corporate success.

Some in the courtroom started calling it the “Jed Clampett defense,” after the lead character in “The Beverly Hillbillies” TV series who got rich after tapping a gusher in his swampland.

In a filing last month supporting his appeal, Mr. Hamm cites the recent drop in oil prices and subsequent 50 percent drop in Continental’s share price and his fortune as further proof that forces outside his control direct his company’s fortunes.

Lawyers for Ms. Arnall argue that Mr. Hamm is responsible for more than 90 percent of his fortune.

While rooted in a messy divorce, the dispute frames a philosophical and ethical debate over inequality and the obligations of the wealthy. If wealth comes mainly from luck or circumstance, many say the wealthy owe a greater debt to society in the form of taxes or charity. If wealth comes from skill and hard work, perhaps higher taxes would discourage that effort.

Sorting out what value is created by luck or skill is a tricky proposition in itself. The limited amount of academic research on the topic, which mainly looks at how executives can influence a company’s value, has often found that broader market forces often have a bigger impact on a company’s success than an executive’s actions.

“As we know from the research, the performance of a large firm is due primarily to things outside the control of the top executive,” said J. Scott Armstrong, a professor at the Wharton School at the University of Pennsylvania. “We call that luck. Executives freely admit this — when they encounter bad luck.”

A study conducted from 1992 to 2011 of how C.E.O. compensation changed in response to luck or events beyond the executives’ control showed that their pay was 25 percent higher when luck favored the C.E.O.

Some management experts say the role of luck is nearly impossible to measure because it depends on the particular industry. Oil, for instance, is especially sensitive to outside forces.

“Within any industry, a more talented management team is going to tend to do better,” said Steven Neil Kaplan of the University of Chicago Booth School of Business. “That is why investors and boards of directors look for the best talent to run their companies. That is why company stock prices often move a lot, in both directions, when a C.E.O. dies or a new C.E.O. is hired.”

The Hamm case hinged on a quirk in divorce law known as “active versus passive appreciation.” In Oklahoma, and many other states, if a spouse owns an asset before the marriage, the increase in the value of an asset during marriage is not subject to division if the increase was because of “passive” appreciation. Passive appreciation is when an asset grows on its own because of factors outside either spouse’s control, like land that appreciates without any improvements or passively held stocks. Any value that’s not deemed as “passive” is considered “active” — meaning it increased because of the efforts, skills or funding of a spouse and can therefore be subject to division in a divorce.

The issue has been at the center of some other big divorces. In the 2002 divorce of the Chicago taxi magnate David Markin and Susan Markin, filed in Palm Beach, Fla., Mr. Markin claimed he was “merely a passenger on this corporate ship traveling through the ocean,” according to the judge. But he ruled that Mr. Markin was more like “the captain of the ship. Certainly he benefited by sailing through some good weather. However, he picked the course and he picked the crew. In short, he was directly responsible for everything that happened.” Ms. Markin was awarded more than $30 million, along with other assets.

Mr. Hamm, now 69, also had favorable conditions after founding Continental Resources well before his marriage in 1988 to Sue Ann, then a lawyer at the company. By this fall, when the trial ended, Continental had a market capitalization of over $30 billion; Mr. Hamm’s stake of 68 percent and other wealth exceeded $18 billion.

Their divorce trial was closed to the public, and all but a few of the documents are under seal. Neither Mr. Hamm nor his lawyers or representatives would comment. Ms. Arnall and her spokesman also declined to comment.

According to people with knowledge of the case, however, Mr. Hamm’s chief strategy was to claim most of his wealth as passive appreciation, and therefore not subject to division. During his testimony, the typically commanding Mr. Hamm, who had been the face of the company for decades, said he couldn’t recall certain decisions, didn’t know much about the engineering aspects of oil drilling and didn’t attend critical meetings.

Mr. Hamm’s lawyers calculated that only 5 to 10 percent of his wealth came from his own effort, skill, management or investment. It’s unclear how they squared this argument with his compensation, which totaled $42.7 million from 2006 to 2013, according to Equilar, an executive compensation data company.

Ms. Arnall called more than 80 witnesses — from Continental executives to leading economists like Glenn Hubbard and Kenneth Button — to show how much better Continental had done than its peers and that Mr. Hamm made most or all of the key decisions about the company’s strategy, finances and operations. They estimated that Mr. Hamm was responsible for $14 billion to $17 billion of his $18 billion fortune.

Read the entire article here.


The Paradox That is Humanity


Fanatical brutality and altruism. Greed and self-sacrifice. Torture and love. Cruelty and remorse. Care and wickedness. These are the paradoxical traits that make us uniquely human. Many people give of themselves, love unconditionally, and exhibit kindness, selflessness and compassion at every turn. And yet, describing the immolation, crucifixions and beheadings of fellow humans by humans as inhuman or “bestial” rather misses the point. While some animals maim and kill their own, and even feast on the spoils, humans have risen above all other species to a pinnacle of barbaric behavior that demands we all continually reflect on our humanity, both good and evil. Sadly, this is not news: persecution of one group by another is encoded in our DNA.

From the Guardian:

It describes itself as “an inclusive school where gospel values underpin a caring and supporting ethos, manifest in care for each individual”. And I have no reason to doubt it. But one of the questions raised by the popularity of Hilary Mantel’s Wolf Hall is whether St Thomas More Catholic School is named after a monster or a saint. With Mantel, gone is the More of heroic humanism popularised by Robert Bolt’s fawning A Man for All Seasons. In its place she reminds us that More was persecutor-in-chief towards those who struggled to see the Bible translated into English and personally responsible for the burning of a number of men who dared question the ultimate authority of the Roman church.

This week’s Wolf Hall episode ended with the death of Middle Temple lawyer James Bainham at Smithfield on 30 April 1532. More tortured Bainham in the Tower of London for questioning the sanctity of Thomas Becket and for speaking out against the financial racket of the doctrine of purgatory that “picked men’s purses”. At first, under the pressure of torture, Bainham recanted his views. But within weeks of being released, Bainham re-asserted them. And so More had him burnt at the stake.

The recent immolation of Jordanian pilot Lieutenant Muadh al-Kasasbeh by Islamic State (Isis) brings home the horrendous reality of what this involves. I watched it on the internet. And I wish I hadn’t. I felt voyeuristic and complicit. And though I justified watching on the grounds that I was going to write about it, and thus (apparently) needed to see the truly horrific footage, I don’t think I was right to do so. As well as seeing things that I will never be able to un-see, I felt morally soiled – as if I had done exactly what Isis had wanted me to do. I mean, if no one ever watched this stuff, they wouldn’t make it.

Afterwards, I wandered down to Smithfield market to get some air. I sat in a posh cafe and tried to picture what the place must have been like when Bainham was killed. Both then and now, death by burning was a staged event, deliberately public, a theatre of cruelty designed for political/religious instruction. In his book on burnings in 16th century England, the historian Eamon Duffy recounts a burning in Dartford in 1555: “‘Thither came … fruiterers wyth horse loades of cherries, and sold them’.” Can you imagine: passing round the cherries as you watch people burn? What sort of creatures are we?

Yes, religion is the common factor here. But if there is no God (as some say) and religion is a purely human phenomenon, then it is humanity that is also in the dock. For when we speak of these acts as “inhuman”, or of the “inhumanity” of Isis, we are surely kidding ourselves: history teaches that human beings are often exactly like this. We are often viciously cruel and without an ounce of pity and, yet, all too often in denial about our basic capacity for wickedness. One cannot be in denial after watching that video.

And yet the thing that it is almost impossible for us to get our heads around is that this capacity for wickedness can also co-exist with an extraordinary capacity for love and care and self-sacrifice. More, of course, is a perfect case in point. As well as being declared a saint, More was famously one of the early humanists, a friend of Erasmus. In his Utopia, he fantasised about a world where people lived together in harmony, with no private property to divide them. He championed female education and (believe it or not) religious toleration.

Robert Bolt may have only reflected one aspect of More’s character, but he did stand up for what he believed in, even to the point of death. And when More was declared a saint in 1935, it was partially a powerful and deliberate witness to German Christians to do the same. And who would have guessed that, within a few years, apparently civilized Europe would return again to the burning of human bodies, this time on an industrial scale. And this time, not in the name of God.

Read the entire article here.

Image: 12th century Byzantine manuscript illustration depicting Byzantine Greeks (Christian/Eastern Orthodox) punishing Cretan Saracens (Muslim) in the 9th century. Courtesy of Madrid Skylitzes / Wikipedia.


24 Hours with the Fox Circus


The Fox Channel is a domain of superlatives; nowhere else in our global media landscape can you find — under one roof — such utter, dumbed-down nonsense; opinionated drivel served up as “news” or “fact”; medieval, Murdochian dogma; misogyny and racism; perversion of science; and sheer journalistic piffle. In the space of a recent 24 hours the channel went from broadcasting the unedited immolation of Jordanian pilot, Muadh al-Kasasbeh, which is nothing more than an act of “murder-porn” (gratuitous profiteering and complicity with the murderers), to the denunciation of the movie Frozen as anti-male propaganda. Oh Jon Stewart, you are such a lucky man to be a contemporary of this network — long may you both reign!

From the Guardian:

If you believe what you hear on Fox News, Disney’s Frozen is nothing but misandrist propaganda.

During Wednesday’s Fox & Friends – the crown jewel program at a network known for its loose interpretation of facts – host Steve Doocy raised awareness about Hollywood’s latest nefarious plot to undermine American masculinity. Doocy took issue with Disney’s Frozen, the wildly successful children’s film released more than a year ago, saying the movie empowers young girls by “turning our men into fools and villains” – an agenda they dubbed the “Frozen effect”.

Penny Young Nance, the CEO of Concerned Women for America – “the women’s group that loves men” – went on: “We want to empower women, but we don’t have to do it at the cost of tearing down men … Men are essential in our society.”

“It would be nice for Hollywood to have more male figures,” Doocy concluded, a wish at odds with nearly every metric for gender equality in Hollywood.

The sentiment has been almost universally condemned on social media.

“If I see one more thinkpiece about how Hollywood is too kind to women and not respectful enough to the male population I just don’t know what I’ll do,” Kevin Fallon wrote at the Daily Beast.

Frozen, which generated more than $1bn in global ticket sales and won the Golden Globe for best animated film, has been hailed as an unexpectedly feminist work from a company not exactly known for empowering depictions of women.

Read the entire article here.

Image: Elsa, Frozen. Courtesy of Disney.


Je Suis Muadh al-Kasasbeh

The abhorrent, brutish murder of Jordanian pilot Muadh al-Kasasbeh by callous, cowardly murderers must take some kind of hideous prize. As some commentators have already noted, this act is not a cruel new invention by depraved psychopaths; it represents a move backwards towards humanity’s sad, vicious past, played out for our social video age.

From the Guardian:

Images of a Jordanian pilot being burned alive by the militants of Islamic State (Isis) began to filter on to social media and mainstream news sites on Monday. As with beheadings and other brutal acts carried out by the group in the past, there were calls not to share the video or stills of it, out of respect for the dead pilot and his family and in order not to further publicise the terrorists’ message. But it seems the details were so gruesome that many couldn’t help but watch and share.

I refused to look (I never do: it feels too much like giving Isis the attention it craves). But that didn’t stop others trying to tell me in vivid detail what the video showed. Someone even said it was “Bond villain-like”. Isis, it seems, has created a whole new kind of murderous cinematic experience.

Some internet users clearly find the unrelenting goriness of it all captivating – stonings, decapitations, throwing people off tall buildings, sticking severed heads on spikes. Perhaps there’s a compulsion to see just how far Isis will go. But the very act of choosing to witness these things makes us, in some way, complicit.

Media organisations face a particular dilemma, as the atrociousness arguably makes the crimes even more newsworthy. But any decision to transmit these images takes us into difficult territory. When Fox News posts all of the footage with the warning “extremely graphic video” attached, one could be forgiven for thinking that a steadfast commitment to truth-telling isn’t the only factor at play. But these videos are designed to be a grotesque form of clickbait. Making them available to ever-wider audiences only helps the terrorists achieve their traffic targets.

For some, displaying the video is not only a journalistic virtue. Watching it is somehow necessary to drive the full horror of Isis home. Piers Morgan, the former editor of the Daily Mirror, wrote in the Daily Mail how he couldn’t help but give in to the impulse to click, but is “glad” he did so because he now knows “what these monsters are capable of”. I am not sure how their nature is news to him. Did the rape, enslavement and summary execution of thousands of people and the murder of hostages not give it away?

He even imagines it could help win the battle of ideas. “If any Muslim remains in any doubt as to whether this is the right time to stand up and cry ‘Not in my name or my religion!’, then I suggest they too watch the video.” Where is this Muslim world full of doubt as to whether Isis is an enemy?

Morgan suggests that the fact the latest victim is a Sunni Arab means some sort of Rubicon has been crossed. This betrays his view that there is widespread implicit support for Isis among Muslims because they oppose the west, and that this video will shake them out of their complacency. Morgan helpfully adds: “This is your war.”

Most Muslims recoil in horror at the thought of Isis, and don’t need a video to help them along. Isis is playing a game of braggadocio and provocation, dressing it up in the language of prisoner exchanges and execution, as though it really is the state it claims to be. Anyone who views and disseminates these videos is playing their assigned part in the killers’ script. The crime doesn’t end with the death of the victim: the video and the process of watching and reacting to it are extensions of the terrorist act.

Thousands have been killed, off camera, in equally brutal ways, but these films allow Isis to revel in the toe-curling revulsion they inevitably provoke. It wants to generate just the kind of reaction that Morgan felt: he claims he was seized by “such uncontrollable rage that no amount of reasonable argument will ever temper it”.

But such videos turn the internet into a grisly public square where we all gather and watch in horror, then disband, unwittingly participating in a macabre cycle of action and reaction.

Isis has certainly murdered foreign hostages, Yezidis and members of other ethnic and religious groups. But the overwhelming majority of those targeted have been Muslims. No one is in any doubt whose war this is, or that Isis is capable of the stuff of nightmares. Films of its crimes are superfluous and risk distracting us from the continual suffering of those who live under it. So why keep looking?

Read the entire article here.


World Population: 100


Take today’s world and shrink its population down to just 100 people. Then apply a range of global measures to this group. Voilà! You get a fascinating view of humanity at a scale your brain can comprehend. For instance, of the total of 100 people: 48 live on less than $2 per day, 93 do not have a college degree, 16 are undernourished or starving, 23 have no shelter, and 17 are illiterate.
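The conversion behind these figures is simple proportional arithmetic: multiply each global share by 100 and round. A minimal sketch in Python, using the proportions quoted above (the `village_of` helper name is just for illustration):

```python
# Scale global-population shares down to a hypothetical village of 100 people.
# The proportions below mirror the figures quoted from the infographic.
proportions = {
    "live on less than $2/day": 0.48,
    "have no college degree": 0.93,
    "are undernourished or starving": 0.16,
    "have no shelter": 0.23,
    "are illiterate": 0.17,
}

def village_of(n, shares):
    """Convert fractional shares of a population into counts out of n people."""
    return {label: round(share * n) for label, share in shares.items()}

for label, count in village_of(100, proportions).items():
    print(f"{count} of 100 {label}")
# → first line printed: "48 of 100 live on less than $2/day"
```

The same helper works for any village size, which is the whole trick of the infographic: the numbers stay proportional whether you scale to 100 people or 7 billion.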

The infographic was designed by graphic designer Jack Hagley. You can check out the infographic and read more of Hagley’s work here.

Infographic courtesy of Jack Hagley.


Silicon Death Valley


Have you ever wondered what happens to the 99 percent of Silicon Valley startups that don’t make billionaires (or even millionaires) of their founders? It’s not all milk and honey in the land of sunshine. After all, for every Google or Facebook there are hundreds of humiliating failures – think: Webvan, Boo.com, Pets.com, Beautyjungle.com, Boxman, Flooz, eToys.

The valley’s venture capitalists tend to bury their business failures rather quietly, careful not to taint their reputations as omnipotent, infallible futurists. From the ashes of these failures some employees move on to well-established corporate serfdom and others find fresh challenges at new startups. But there is a fascinating middle-ground, between success and failure — an entrepreneurial twilight zone populated by zombie businesses.

From the Guardian:

It is probably Silicon Valley’s most striking mantra: “Fail fast, fail often.” It is recited at technology conferences, pinned to company walls, bandied in conversation.

Failure is not only invoked but celebrated. Entrepreneurs give speeches detailing their misfires. Academics laud the virtue of making mistakes. FailCon, a conference about “embracing failure”, launched in San Francisco in 2009 and is now an annual event, with technology hubs in Barcelona, Tokyo, Porto Alegre and elsewhere hosting their own versions.

While the rest of the world recoils at failure, in other words, technology’s dynamic innovators enshrine it as a rite of passage en route to success.

But what about those tech entrepreneurs who lose – and keep on losing? What about those who start one company after another, refine pitches, tweak products, pivot strategies, reinvent themselves … and never succeed? What about the angst masked behind upbeat facades?

Silicon Valley is increasingly asking such questions, even as the tech boom rewards some startups with billion-dollar valuations, sprinkling stardust on founders who talk of changing the world.

“It’s frustrating if you’re trying and trying and all you read about is how much money Airbnb and Uber are making,” said Johnny Chin, 28, who endured three startup flops but is hopeful for his fourth attempt. “The way startups are portrayed, everything seems an overnight success, but that’s a disconnect from reality. There can be a psychic toll.”

It has never been easier or cheaper to launch a company in the hothouse of ambition, money and software that stretches from San Francisco to Cupertino, Mountain View, Menlo Park and San Jose.

In 2012 the number of seed investment deals in US tech reportedly more than tripled, to 1,700, from three years earlier. Investment bankers are quitting Wall Street for Silicon Valley, lured by hopes of a cooler and more creative way to get rich.

Most startups fail. However, many entrepreneurs still overestimate the chances of success – and the cost of failure.

Some estimates put the failure rate at 90% – on a par with small businesses in other sectors. A similar proportion of alumni from Y Combinator, a legendary incubator which mentors bright prospects, are said to also struggle.

Companies typically die around 20 months after their last financing round and after having raised $1.3m, according to a study by the analytics firms CB Insights titled The RIP Report – startup death trends.


Failure is difficult to quantify because it does not necessarily mean liquidation. Many startups limp on for years, ignored by the market but sustained by founders’ savings or investors.

“We call them the walking dead,” said one manager at a tech behemoth, who requested anonymity. “They don’t necessarily die. They putter along.”

Software engineers employed by such zombies face a choice. Stay in hope the company will take off, turning stock options into gold. Or quit and take one of the plentiful jobs at other startups or giants like Apple and Google.

Founders face a more agonising dilemma. Continue working 100-hour weeks and telling employees and investors their dream is alive, that the metrics are improving, and hope it’s true, or pull the plug.

The loss aversion principle – the human tendency to strongly prefer avoiding losses to acquiring gains – tilts many towards the former, said Bruno Bowden, a former engineering manager at Google who is now a venture investor and entrepreneur.

“People will do a lot of irrational things to avoid losing even if it’s to their detriment. You push and push and exhaust yourself.”

Silicon Valley wannabes tell origin fables of startup founders who maxed out credit cards before dazzling Wall Street, the same way Hollywood’s struggling actors find solace in the fact Brad Pitt dressed as a chicken for El Pollo Loco before his breakthrough.

“It’s painful to be one of the walking dead. You lie to yourself and mask what’s not working. You amplify little wins,” said Chin, who eventually abandoned startups which offered micro, specialised versions of Amazon and Yelp.

That startup founders were Silicon Valley’s “cool kids”, glamorous buccaneers compared to engineers and corporate drones, could make failure tricky to recognise, let alone accept, he said. “People are very encouraging. Everything is amazing, cool, awesome. But then they go home and don’t use your product.”

Chin is bullish about his new company, Bannerman, an Uber-type service for event security and bodyguards, and has no regrets about rolling the tech dice. “I love what I do. I couldn’t do anything else.”

Read the entire story here.

Image: Boo.com, 1999. Courtesy of the Wayback Machine, Internet Archive.


True “False Memory”

Apparently it is surprisingly easy to convince people to remember a crime, or other action, that they never committed. It makes one wonder how many of the roughly 2 million people in US prisons are incarcerated because of false memories, in both inmates and witnesses.

From ars technica:

The idea that memories are not as reliable as we think they are is disconcerting, but it’s pretty well-established. Various studies have shown that participants can be persuaded to create false childhood memories—of being lost in a shopping mall or hospitalized, or even highly implausible scenarios like having tea with Prince Charles.

The creation of false memories has obvious implications for the legal system, as it gives us reasons to distrust both eyewitness accounts and confessions. It’s therefore important to know exactly what kinds of false memories can be created, what influences the creation of a false memory, and whether false recollections can be distinguished from real ones.

A recent paper in Psychological Science found that 71 percent of participants exposed to certain interview techniques developed false memories of having committed a crime as a teenager. In reality, none of these people had experienced contact with the police during the age bracket in question.

After establishing a pool of potential participants, the researchers sent out questionnaires to the caregivers of these individuals. They eliminated any participants who had been involved in some way with an assault or theft, or had other police contact between the ages of 11 and 14. They also asked the caregivers to describe in detail a highly emotional event that the participant had experienced at this age. The caregivers were asked not to discuss the content of the questionnaire with the participants.

The 60 eligible participants were divided into two groups: one that would be given false memories of committing an assault, theft, or assault with a weapon, and another that would be provided with false memories of another emotional event—an injury, an attack by a dog, or the loss of a large sum of money. In the first of three interviews with each participant, the interviewer presented the true memory that had been provided by the caregiver. Once the interviewer’s credibility and knowledge of the participant’s background had been established, the false memory was presented.

For both kinds of memory, the interviewer gave the participant “cues”, such as their age at the time, people who had been involved, and the time of year. Participants were then asked to recall the details of what had happened. No participants recalled the false event the first time it was mentioned—which would have rung alarm bells—but were reassured that people could often uncover memories like these through effort.

A number of tactics were used to induce the false memory. Social pressure was applied to encourage recall of details, the interviewer attempted to build a rapport with the participants, and the participants were told that their caregivers had corroborated the facts. They were also encouraged to use visualization techniques to “uncover” the memory.

In each of the three interviews, participants were asked to provide as many details as they could for both events. After the final interview, they were informed that the second memory was false, and asked whether they had really believed the events had occurred. They were also asked to rate how surprised they were to find out that it was false. Only participants who answered that they had genuinely believed the false memory, and who could give more than ten details of the event, were classified as having a true false memory. Of the participants in the group with criminal false stories, 71 percent developed a “true” false memory. The group with non-criminal false stories was not significantly different, with 77 percent of participants classified as having a false memory. The details participants provided for their false memories did not differ significantly in either quality or quantity from their true memories.

This study is only a beginning, and there is still a great deal of work to be done. There are a number of factors that couldn’t be controlled for but which may have influenced the results. For instance, the researchers suggest that, since only one interviewer was involved, her individual characteristics may have influenced the results, raising the question of whether only certain kinds of interviewers can achieve these effects. It isn’t clear whether participants were fully honest about having believed in the false memory, since they could have just been trying to cooperate; the results could also have been affected by the fact that there were no negative consequences to telling the false story.

Read the entire article here.


Focus on Process, Not Perfect Grades

If you are a parent of a school-age child then it is highly likely that you have, on multiple occasions, chastised her or him and withheld privileges for poor grades. It’s also likely that you have rewarded the same child for being smart at math or having Picasso-like artistic talent. I have done this myself. But, there is a better way to nurture young minds, and it is through “telling stories about achievements that result from hard work.”

From Scientific American:

A brilliant student, Jonathan sailed through grade school. He completed his assignments easily and routinely earned As. Jonathan puzzled over why some of his classmates struggled, and his parents told him he had a special gift. In the seventh grade, however, Jonathan suddenly lost interest in school, refusing to do homework or study for tests. As a consequence, his grades plummeted. His parents tried to boost their son’s confidence by assuring him that he was very smart. But their attempts failed to motivate Jonathan (who is a composite drawn from several children). Schoolwork, their son maintained, was boring and pointless.

Our society worships talent, and many people assume that possessing superior intelligence or ability—along with confidence in that ability—is a recipe for success. In fact, however, more than 35 years of scientific investigation suggests that an overemphasis on intellect or talent leaves people vulnerable to failure, fearful of challenges and unwilling to remedy their shortcomings.

The result plays out in children like Jonathan, who coast through the early grades under the dangerous notion that no-effort academic achievement defines them as smart or gifted. Such children hold an implicit belief that intelligence is innate and fixed, making striving to learn seem far less important than being (or looking) smart. This belief also makes them see challenges, mistakes and even the need to exert effort as threats to their ego rather than as opportunities to improve. And it causes them to lose confidence and motivation when the work is no longer easy for them.

Praising children’s innate abilities, as Jonathan’s parents did, reinforces this mind-set, which can also prevent young athletes or people in the workforce and even marriages from living up to their potential. On the other hand, our studies show that teaching people to have a “growth mind-set,” which encourages a focus on “process” (consisting of personal effort and effective strategies) rather than on intelligence or talent, helps make them into high achievers in school and in life.

The Opportunity of Defeat
I first began to investigate the underpinnings of human motivation—and how people persevere after setbacks—as a psychology graduate student at Yale University in the 1960s. Animal experiments by psychologists Martin Seligman, Steven Maier and Richard Solomon, all then at the University of Pennsylvania, had shown that after repeated failures, most animals conclude that a situation is hopeless and beyond their control. After such an experience, the researchers found, an animal often remains passive even when it can effect change—a state they called learned helplessness.

People can learn to be helpless, too, but not everyone reacts to setbacks this way. I wondered: Why do some students give up when they encounter difficulty, whereas others who are no more skilled continue to strive and learn? One answer, I soon discovered, lay in people’s beliefs about why they had failed.

In particular, attributing poor performance to a lack of ability depresses motivation more than does the belief that lack of effort is to blame. In 1972, when I taught a group of elementary and middle school children who displayed helpless behavior in school that a lack of effort (rather than lack of ability) led to their mistakes on math problems, the kids learned to keep trying when the problems got tough. They also solved many more problems even in the face of difficulty. Another group of helpless children who were simply rewarded for their success on easier problems did not improve their ability to solve hard math problems. These experiments were an early indication that a focus on effort can help resolve helplessness and engender success.

Subsequent studies revealed that the most persistent students do not ruminate about their own failure much at all but instead think of mistakes as problems to be solved. At the University of Illinois in the 1970s I, along with my then graduate student Carol Diener, asked 60 fifth graders to think out loud while they solved very difficult pattern-recognition problems. Some students reacted defensively to mistakes, denigrating their skills with comments such as “I never did have a good rememory,” and their problem-solving strategies deteriorated.

Others, meanwhile, focused on fixing errors and honing their skills. One advised himself: “I should slow down and try to figure this out.” Two schoolchildren were particularly inspiring. One, in the wake of difficulty, pulled up his chair, rubbed his hands together, smacked his lips and said, “I love a challenge!” The other, also confronting the hard problems, looked up at the experimenter and approvingly declared, “I was hoping this would be informative!” Predictably, the students with this attitude outperformed their cohorts in these studies.

Read the entire article here.


Feminism in Saudi Arabia? Hypocrisy in the West!

We are constantly reminded of the immense struggle that is humanity’s progress. Often it seems like one step forward and several back. Cultural relativism and hypocrisy continue to run rampant in a world that celebrates selfies and serfdom.

Oh, and in case you haven’t heard: the rulers of Saudi Arabia are feminists. But then again, so too are the white males who control most of the power, wealth, media and political machinery in the West.

From the Guardian:

Christine Lagarde, the first woman to head the IMF, has paid tribute to the late King Abdullah of Saudi Arabia. He was a strong advocate of women, she said. This is almost certainly not what she thinks. She even hedged her remarks about with qualifiers like “discreet” and “appropriate”. There are constraints of diplomacy and obligations of leadership and navigating between them can be fraught. But this time there was only one thing to say. Abdullah led a country that abuses women’s rights, and indeed all human rights, in a way that places it beyond normal diplomacy.

The constraints and restrictions on Saudi women are too notorious and too numerous to itemise. Right now, two women are in prison for the offence of trying to drive over the border in to Saudi Arabia. It is not just the ban on driving. There is also the ban on going out alone, the ban on voting, the death penalty for adultery, and the total obliteration of public personality – almost of a sense of existence – by the obligatory veil. And there are the terrible punishments meted out to those who infringe these rules that are not written down but “interpreted” – Islam mediated through the conventions of a deeply conservative people.

Lagarde is right. King Abdullah did introduce reforms. Women can now work almost anywhere they want, although their husband, brother or father will have to drive them there (and the children to school). They can now not just study law but practise as lawyers. There are women on the Sharia council and it was through their efforts that domestic violence has been criminalised. But enforcement is in the hands of courts that do not necessarily recognise the change. These look like reforms with all the substance of a Potemkin village, a flimsy structure to impress foreign opinion.

Pressure for change is driven by women themselves, exploiting social media by actions that range from the small, brave actions of defiance – posting images of women at the wheel (ovaries, despite men’s fears, apparently undamaged) – to the large-scale subversive gesture such as the YouTube TV programmes reported by the Economist.

But the point about the Lagarde remarks is that there are signs the Saudi authorities really can be sensitive to the rare criticism that comes from western governments, and the western media. Such protests may yet spare blogger Raif Badawi from further punishment for alleged blasphemy. Today’s lashing has been delayed for the third successive week. The Saudi authorities, like any despotic regime, are trying to appease their critics and contain the pressure for change that social media generates by conceding inch by inch so that, like the slow downhill creep of a glacier, the religious authorities and mainstream social opinion don’t notice it is happening.

But beyond Saudi’s borders, it is surely the duty of everyone who really does believe in equality and human rights to shout and finger point and criticise at every opportunity. Failing to do so is what makes Christine Lagarde’s remarks a betrayal of the women who literally risk everything to try to bring about change in the oppressive patriarchy in which they live. They are typical of the desire not to offend the world’s biggest oil producer and the west’s key Middle Eastern ally, a self-censorship that allows the Saudis to claim they respect human rights while breaching every known norm of behaviour.

Read the entire article here.


Education And Reality

Recent studies show that having a higher level of education does not necessarily lead to greater acceptance of reality. This seems to fly in the face of oft-cited anecdotal evidence and prevailing beliefs that suggest people with lower educational attainment are more likely to reject accepted scientific fact, such as evolutionary science and climate change.

From ars technica:

We like to think that education changes people for the better, helping them critically analyze information and providing a certain immunity from disinformation. But if that were really true, then you wouldn’t have low vaccination rates clustering in areas where parents are, on average, highly educated.

Vaccination isn’t generally a political issue. (Or, it is, but it’s rejected both by people who don’t trust pharmaceutical companies and by those who don’t trust government mandates; these tend to cluster on opposite ends of the political spectrum.) But some researchers decided to look at a number of issues that have become politicized, such as the Iraq War, evolution, and climate change. They find that, for these issues, education actually makes it harder for people to accept reality, an effect they ascribe to the fact that “highly educated partisans would be better equipped to challenge information inconsistent with predispositions.”

The researchers looked at two sets of questions about the Iraq War. The first involved the justifications for the war (weapons of mass destruction and links to Al Qaeda), as well as the perception of the war outside the US. The second focused on the role of the troop surge in reducing violence within Iraq. At the time the polls were taken, there was a clear reality: no evidence of an active weapons program or links to Al Qaeda; the war was frowned upon overseas; and the surge had successfully reduced violence in the country.

On the three issues that were most embarrassing to the Bush administration, Democrats were more likely to get things right, and their accuracy increased as their level of education rose. In contrast, the most and least educated Republicans were equally likely to have things wrong. When it came to the surge, the converse was true. Education increased the chances that Republicans would recognize reality, while the Democratic acceptance of the facts stayed flat even as education levels rose. In fact, among Democrats, the base level of recognition that the surge was a success was so low that it’s not even clear it would have been possible to detect a downward trend.

When it came to evolution, the poll question didn’t even ask whether people accepted the reality of evolution. Instead, it asked “Is there general agreement among scientists that humans have evolved over time, or not?” (This phrasing generally makes it easier for people to accept the reality of evolution, since it’s not asking about their personal beliefs.) Again, education increased the acceptance of this reality among both Democrats and Republicans, but the magnitude of the effect was much smaller among Republicans. In fact, the impact of ideology was stronger than education itself: “The effect of Republican identification on the likelihood of believing that there is a scientific consensus is roughly three times that of the effect of education.”

For climate change, the participants were asked “Do you believe that the earth is getting warmer because of human activity or natural patterns?” Overall, the beliefs of about 70 percent of those polled lined up with scientific conclusions on the matter. And, among the least educated, party affiliation made very little difference in terms of getting this right. But, as education rose, Democrats were more likely to get this right, while Republicans saw their accuracy drop. At the highest levels of education, Democrats got it right 90 percent of the time, while Republicans got it right less than half the time.

The results are in keeping with a number of other studies that have been published of late, which also show that partisan divides over things that could be considered factual sometimes increase with education. Typically, these issues are widely perceived as political. (With some exceptions; GMOs, for example.) In this case, the authors suspect that education simply allows people to deploy more sophisticated cognitive filters that end up rejecting information that could otherwise compel them to change their perceptions.

The authors conclude that’s somewhat mixed news for democracy itself. Education is intended to improve people’s ability to assimilate information upon which to base their political judgements. And, to a large extent, it does: people, on average, got 70 percent of the questions right, and there was only a single case where education made matters worse.

Read the entire article here.


Facts, Fiction and Foxtion

Foxtion. fox·tion. noun \ fäks-shən \

News stories about people and events that are not real: stories imagined by the writer and presenter, presented earnestly and authoritatively by self-proclaimed experts, and repeated over and over until the audience accepts them as written-in-stone truth.

Fox News is the gift that just keeps on giving – to comedians, satirists, seekers of truth and, generally, people with reasonably intact grey matter. This time Fox has reconnected with so-called terrorism expert, Steven Emerson. Seems like a nice chap, but, as the British Prime Minister recently remarked, he’s “an idiot”.

From the Guardian:

Steven Emerson, a man whose job title of terrorism expert will henceforth always attract quotation marks, provoked a lot of mirth with his claim, made during a Fox News interview, that Birmingham was a Muslim-only city where “non-Muslims simply just don’t go in”. He was forced to apologise, and the prime minister called him an idiot, all within the space of 24 hours.

This was just one of the many deeply odd things Emerson said in the course of the interview, although it was perhaps the most instantly refutable: Birmingham census figures are easy to come by. His claim that London was full of “actual religious police that actually beat and actually wound seriously anyone who doesn’t dress according to religious Muslim attire” is harder to disprove; just because I live in London and I’ve never seen them doesn’t mean they don’t exist. But they’re not exactly thick on the ground. I blame the cuts.

Emerson also made reference to the “no-go zones” of France, where the government doesn’t “exercise any sovereignty”. “On the French official website it says there are,” he said. “It actually has a map of them.”

How could the French government make the basic blunder of publicising its inability to exercise sovereignty, and on the “French official website” of all places?

After a bit of Googling – which appears to be how Emerson gets his information – I think I know what he’s on about. He appears to be referring to The 751 No-Go Zones of France, the title of a widely disseminated, nine-year-old blogpost originating on the website of Daniel Pipes, another terrorism expert, or “anti-Arab propagandist”.

“They go by the euphemistic term Zones Urbaines Sensibles, or sensitive urban zones,” wrote Pipes, referring to them as “places in France that the French state does not fully control”. And it’s true: you can find them all listed on the French government’s website. Never mind that they were introduced in 1996, or that the ZUS distinction actually denotes an impoverished area targeted for economic and social intervention, not abandonment of sovereignty. For people like Emerson they are officially sanctioned caliphates, where cops and non-Muslims dare not tread.

Yet seven years after he first exposed the No-Go Zones of France, Pipes actually managed to visit several banlieues around Paris. In an update posted in 2013, his disappointment was palpable.

“For a visiting American, these areas are very mild, even dull,” he wrote. “We who know the Bronx and Detroit expect urban hell in Europe too, but there things look fine.

“I regret having called these areas no-go zones.”

Read the entire story here.


Je Suis Snowman #jesuissnowman

snowman

What do Salman Rushdie and snowmen have in common, you may ask. Apparently, they are both the subject of an Islamic fatwa. So, beware building a snowman lest you stray onto an ungodly path by idolizing your frozen handiwork. And, you may wish to return that DVD of Frozen. Oh, the utter absurdity of it all!

From the Guardian:

A prominent Saudi Arabian cleric has whipped up controversy by issuing a religious edict forbidding the building of snowmen, describing them as anti-Islamic.

Asked on a religious website if it was permissible for fathers to build snowmen for their children after a snowstorm in the country’s north, Sheikh Mohammed Saleh al-Munajjid replied: “It is not permitted to make a statue out of snow, even by way of play and fun.”

Quoting from Muslim scholars, Munajjid argued that to build a snowman was to create an image of a human being, an action considered sinful under the kingdom’s strict interpretation of Sunni Islam.

“God has given people space to make whatever they want which does not have a soul, including trees, ships, fruits, buildings and so on,” he wrote in his ruling.

That provoked swift responses from Twitter users writing in Arabic and identifying themselves with Arab names.

“They are afraid for their faith of everything … sick minds,” one Twitter user wrote.

Another posted a photo of a man in formal Arab garb holding the arm of a “snow bride” wearing a bra and lipstick. “The reason for the ban is fear of sedition,” he wrote.

A third said the country was plagued by two types of people: “A people looking for a fatwa [religious ruling] for everything in their lives, and a cleric who wants to interfere in everything in the lives of others through a fatwa.”

Munajjid had some supporters however. “It (building snowmen) is imitating the infidels, it promotes lustiness and eroticism,” one wrote. “May God preserve the scholars, for they enjoy sharp vision and recognise matters that even Satan does not think about.”

Snow has covered upland areas of Tabuk province near Saudi Arabia’s border with Jordan for the third consecutive year as cold weather swept across the Middle East.

Read more here.

Images courtesy of Google Search.


The Thugs of Cultural Disruption

What becomes of our human culture as Amazon crushes booksellers and publishers, Twitter dumbs down journalism, knowledge is replaced by keyword search, and the internet becomes a popularity contest?

Leon Wieseltier, contributing editor at The Atlantic, has some thoughts.

From NYT:

Amid the bacchanal of disruption, let us pause to honor the disrupted. The streets of American cities are haunted by the ghosts of bookstores and record stores, which have been destroyed by the greatest thugs in the history of the culture industry. Writers hover between a decent poverty and an indecent one; they are expected to render the fruits of their labors for little and even for nothing, and all the miracles of electronic dissemination somehow do not suffice for compensation, either of the fiscal or the spiritual kind. Everybody talks frantically about media, a second-order subject if ever there was one, as content disappears into “content.” What does the understanding of media contribute to the understanding of life? Journalistic institutions slowly transform themselves into silent sweatshops in which words cannot wait for thoughts, and first responses are promoted into best responses, and patience is a professional liability. As the frequency of expression grows, the force of expression diminishes: Digital expectations of alacrity and terseness confer the highest prestige upon the twittering cacophony of one-liners and promotional announcements. It was always the case that all things must pass, but this is ridiculous.

Meanwhile the discussion of culture is being steadily absorbed into the discussion of business. There are “metrics” for phenomena that cannot be metrically measured. Numerical values are assigned to things that cannot be captured by numbers. Economic concepts go rampaging through noneconomic realms: Economists are our experts on happiness! Where wisdom once was, quantification will now be. Quantification is the most overwhelming influence upon the contemporary American understanding of, well, everything. It is enabled by the idolatry of data, which has itself been enabled by the almost unimaginable data-generating capabilities of the new technology. The distinction between knowledge and information is a thing of the past, and there is no greater disgrace than to be a thing of the past. Beyond its impact upon culture, the new technology penetrates even deeper levels of identity and experience, to cognition and to consciousness. Such transformations embolden certain high priests in the church of tech to espouse the doctrine of “transhumanism” and to suggest, without any recollection of the bankruptcy of utopia, without any consideration of the cost to human dignity, that our computational ability will carry us magnificently beyond our humanity and “allow us to transcend these limitations of our biological bodies and brains. . . . There will be no distinction, post-Singularity, between human and machine.” (The author of that updated mechanistic nonsense is a director of engineering at Google.)

And even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science. The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy. So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.

Read the entire essay here.


Je Suis Ahmed

From the Guardian:

It was a Muslim policeman from a local police station who was “slaughtered like a dog” after heroically trying to stop two heavily armed killers from fleeing the Charlie Hebdo offices following the massacre.

Tributes to Ahmed Merabet poured in on Thursday after images of his murder at point blank range by a Kalashnikov-wielding masked terrorist circulated around the world.

Merabet, who according to officials was 40, was called to the scene while on patrol with a female colleague in the neighbourhood, just in time to see the black Citroën used by the two killers heading towards the boulevard from Charlie Hebdo.

“He was on foot, and came nose to nose with the terrorists. He pulled out his weapon. It was his job, it was his duty,” said Rocco Contento, a colleague who was a union representative at the central police station for Paris’s 11th arrondissement.

Video footage, which has now been pulled from the internet, showed the two gunmen get out of the car before one shot the policeman in the groin. As he falls to the pavement groaning in pain and holding up an arm as though to protect himself, the second gunman moves forward and asks the policeman: “Do you want to kill us?” Merabet replies: “Non, ç’est bon, chef” (“No, it’s OK mate”). The terrorist then shoots him in the head.

After the rise in online support for the satirical magazine, with the catchphrase “Je Suis Charlie,” many decided to honour Merabet, tweeting “Je Suis Ahmed”. One, @Aboujahjah, posted: “I am not Charlie, I am Ahmed the dead cop. Charlie ridiculed my faith and culture and I died defending his right to do so.”

Another policeman, 48-year-old Franck Brinsolaro, was killed moments earlier in the assault on Charlie Hebdo where he was responsible for the protection of its editor, Stéphane Charbonnier, one of the 11 killed in the building. A colleague said he “never had time” to pull his weapon.

Read the entire story here.


The Pen Must Always be Mightier

charlie

Philippe Val, former publisher of the satirical magazine Charlie Hebdo, says of the assassinations on January 7:

“They were so alive, they loved to make people happy, to make them laugh, to give them generous ideas. They were very good people. They were the best among us, as those who make us laugh, who are for liberty … They were assassinated, it is an insufferable butchery.

We cannot let silence set in, we need help. We all need to band together against this horror. Terror must not prevent joy, must not prevent our ability to live, freedom, expression – I’m going to use stupid words – democracy, after all this is what is at stake. It is this kind of fraternity that allows us to live. We cannot allow this, this is an act of war. It might be good if tomorrow, all newspapers were called Charlie Hebdo. If we titled them all Charlie Hebdo. If all of France was Charlie Hebdo. It would show that we are not okay with this. That we will never stop laughing. We will never let liberty be extinguished.”

Send to Kindle

Narcissistick

The pursuit of all things self continues unabated in 2015. One has to wonder what the children of the self-absorbed, selfie generations will be like. Or, perhaps, there will be few or no children, because many of the self-absorbed will remain, well, rather too self-absorbed.

From NYT:

Sometimes you don’t need an analyst’s report to get a look at the future of the media industry and the challenges it will bring.

On New Year’s Eve, I was one of the poor souls working in Times Square. By about 1 p.m., it was time to evacuate, and when I stepped into the cold that would assault the huddled, partying masses that night, a couple was getting ready to pose for a photo with the logo on The New York Times Building in the background. I love that I work at a place that people deem worthy of memorializing, and I often offer to help.

My assistance was not required. As I watched, the young couple mounted their phone on a collapsible pole, then extended it outward, the camera now able to capture the moment in wide-screen glory.

I’d seen the same phenomenon when I was touring the Colosseum in Rome last month. So many people were fighting for space to take selfies with their long sticks — what some have called the “Narcissistick” — that it looked like a reprise of the gladiatorial battles the place once hosted.

The urge to stare at oneself predates mirrors — you could imagine a Neanderthal fussing with his hair, his image reflected in a pool of water — but it has some pretty modern dimensions. In the forest of billboards in Times Square, the one with a camera that captures the people looking at the billboard always draws a big crowd.

Selfies are hardly new, but the incremental improvement in technology of putting a phone on a stick — a curiously analog fix that Time magazine listed as one of the best inventions of 2014 along with something called the “high-beta fusion reactor” — suggests that the séance with the self is only going to grow. (Selfie sticks are often used to shoot from above, which any self-respecting selfie auteur will tell you is the most flattering angle.)

There are now vast, automated networks to harvest all that narcissism, along with lots of personal data, creating extensive troves of user-generated content. The tendency to listen to the holy music of the self is reflected in the abundance of messaging and self-publishing services — Vine, WhatsApp, Snapchat, Instagram, Apple’s new voice messaging and the rest — all of which pose a profound challenge for media companies. Most media outfits are in the business of one-to-many, creating single pieces of text, images or audio meant to be shared by the masses.

But most sharing does not involve traditional media companies. Consumers are increasingly glued to their Facebook feeds as a source of information about not just their friends but the broader world as well. And with the explosive growth of Snapchat, the fastest-growing social app of the last year, much of the sharing that takes place involves one-to-one images that come and go in 10 seconds or less. Getting a media message — a television show, a magazine, a website, not to mention the ads that pay for most of it — into the intimate space between consumers and a torrent of information about themselves is only going to be more difficult.

I’ve been around since before there was a consumer Internet, but my frame of reference is as neither a Luddite nor a curmudgeon. I didn’t end up with over half a million followers on social media — Twitter and Facebook combined — by posting only about broadband regulations and cable deals. (Not all self-flattering portraits are rendered in photos. You see what I did there, right?) The enhanced ability to communicate and share in the current age has many tangible benefits.

My wife travels a great deal, sometimes to conflicted regions, and WhatsApp’s global reach gives us a stable way of staying in touch. Over the holidays, our family shared endless photos, emoticons and inside jokes in group messages that were very much a part of Christmas. Not that long ago, we might have spent the time gathered around watching “Elf,” but this year, we were brought together by the here and now, the familiar, the intimate and personal. We didn’t need a traditional media company to help us create a shared experience.

Many younger consumers have become mini-media companies themselves, madly distributing their own content on Vine, Instagram, YouTube and Snapchat. It’s tough to get their attention on media created for the masses when they are so busy producing their own. And while the addiction to self is not restricted to millennials — boomers bow to no one in terms of narcissism — there are now easy-to-use platforms that amplify that self-reflecting impulse.

While legacy media companies still make products meant to be studied and savored over varying lengths of time — the movie “Boyhood,” The Atlantic magazine, the novel “The Goldfinch” — much of the content that individuals produce is ephemeral. Whatever bit of content is in front of someone — text messages, Facebook posts, tweets — is quickly replaced by more and different. For Snapchat, the fact that photos and videos disappear almost immediately is not a flaw, it’s a feature. Users can send content into the world with little fear of creating a trail of digital breadcrumbs that advertisers, parents or potential employers could follow. Warhol’s 15 minutes of fame has been replaced by less than 15 seconds on Snapchat.

Facebook, which is a weave of news encompassing both the self and the world, has become, for many, a de facto operating system on the web. And many of the people who aren’t busy on Facebook are up for grabs on the web but locked up on various messaging apps. What used to be called the audience is disappearing into apps, messaging and user-generated content. Media companies in search of significant traffic have to find a way into that stream.

“The majority of time that people are spending online is on Facebook,” said Anthony De Rosa, editor in chief of Circa, a mobile news start-up. “You have to find a way to break through or tap into all that narcissism. We are way too into ourselves.”

Read the entire article here.

Send to Kindle

Socks and Self-knowledge

ddg-search-socks

How well do you really know yourself? Go beyond your latte preferences and your favorite movies. Knowing yourself means being familiar with your most intimate thoughts, desires and fears, your character traits and flaws, your values. For many, this quest for self-knowledge is a life-long process. And it may begin with knowing about your socks.

From NYT:

Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense.

Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t: “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible.

This is usually news to non-philosophers. Most certainly imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge.

One common reaction to the focus on trivial self-knowledge is to ask, “Why on earth would you be interested in that?” — or, more pointedly, “Why on earth would anyone pay you to think about that?” Philosophers of self-knowledge aren’t deterred. It isn’t unusual for them to start their learned articles and books on self-knowledge by declaring that they aren’t going to be discussing substantial self-knowledge because that isn’t where the philosophical action is.

How can that be? It all depends on your starting point. For example, to know that you are wearing socks requires effort, even if it’s only the minimal effort of looking down at your feet. When you look down and see the socks on your feet you have evidence — the evidence of your senses — that you are wearing socks, and this illustrates what seems a general point about knowledge: knowledge is based on evidence, and our beliefs about the world around us can be wrong. Evidence can be misleading and conclusions from evidence unwarranted. Trivial self-knowledge seems different. On the face of it, you don’t need evidence to know that you believe you are wearing socks, and there is a strong presumption that your beliefs about your own beliefs and other states of mind aren’t mistaken. Trivial self-knowledge is direct (not based on evidence) and privileged (normally immune to error). Given these two background assumptions, it looks like there is something here that needs explaining: How is trivial self-knowledge, with all its peculiarities, possible?

From this perspective, trivial self-knowledge is philosophically interesting because it is special. “Special” in this context means special from the standpoint of epistemology or the philosophy of knowledge. Substantial self-knowledge is much less interesting from this point of view because it is like any other knowledge. You need evidence to know your own character and values, and your beliefs about your own character and values can be mistaken. For example, you think you are generous but your friends know you better. You think you are committed to racial equality but your behaviour suggests otherwise. Once you think of substantial self-knowledge as neither direct nor privileged why would you still regard it as philosophically interesting?

What is missing from this picture is any real sense of the human importance of self-knowledge. Self-knowledge matters to us as human beings, and the self-knowledge which matters to us as human beings is substantial rather than trivial self-knowledge. We assume that on the whole our lives go better with substantial self-knowledge than without it, and what is puzzling is how hard it can be to know ourselves in this sense.

The assumption that self-knowledge matters is controversial and philosophy might be expected to have something to say about the importance of self-knowledge, as well as its scope and extent. The interesting questions in this context include “Why is substantial self-knowledge hard to attain?” and “To what extent is substantial self-knowledge possible?”

Read the entire article here.

Image courtesy of DuckDuckGo Search.

Send to Kindle

The Haves versus the Have-Mores

los-angeles-billionaires

Poverty and wealth are relative terms here in the United States. Certainly those who have amassed millions will seem “poor” next to the established and nouveau-riche billionaires. Yet there is something rather surreal in the spectacle of Los Angeles’ lesser millionaires fighting the mega-rich over their excess. As Peter Haldeman says in the following article of Michael Ovitz, founder of Creative Artists Agency, mere millionaire and owner of a 28,000 square foot mega-mansion, “Mr. Ovitz calling out a neighbor for overbuilding is a little like Lady Gaga accusing someone of overdressing.” Welcome to the giga-mansion; even the Roman emperor Caligula would feel at home in this Californian circus of excess.

From NYT:

At the end of a narrow, twisting side street not far from the Hotel Bel-Air rises a knoll that until recently was largely covered with scrub brush and Algerian ivy. Now the hilltop is sheared and graded, girded by caissons sprouting exposed rebar. “They took 50- or 60,000 cubic yards of dirt out of the place,” said Fred Rosen, a neighbor, glowering at the site from behind the wheel of his Cadillac Escalade on a sunny October afternoon.

Mr. Rosen, who used to run Ticketmaster, has lately devoted himself to the homeowners alliance he helped form shortly after this construction project was approved. When it is finished, a modern compound of glass and steel will rise two stories, encompass several structures and span — wait for it — some 90,000 square feet.

In an article titled “Here Comes L.A.’s Biggest Residence,” The Los Angeles Business Journal announced in June that the house, conceived by Nile Niami, a film producer turned developer, with an estimated sale price “in the $150 million range,” will feature a cantilevered tennis court and five swimming pools. “We’re talking 200 construction trucks a day,” fumed Mr. Rosen. “Then multiply that by all the other giant projects. More than a million cubic yards of this hillside have been taken out. What happens when the next earthquake comes? How nuts is all this?”

By “all this,” he means not just the house with five swimming pools but the ever-expanding number of houses the size of Hyatt resorts rising in the most expensive precincts of Los Angeles. Built for the most part on spec, bestowed with names as assuming as their dimensions, these behemoths are transforming once leafy and placid neighborhoods into dusty enclaves carved by retaining walls and overrun by dirt haulers and cement mixers. “Twenty-thousand-square-foot homes have become teardowns for people who want to build 70-, 80-, and 90,000-square-foot homes,” Los Angeles City Councilman Paul Koretz said. So long, megamansion. Say hello to the gigamansion.

In Mr. Rosen’s neighborhood, ground was recently broken on a 70,000- to 80,000-square-foot Mediterranean manse for a citizen of Qatar, while Chateau des Fleurs, a 60,000-square-foot pile with a 40-car underground garage, is nearing completion. Not long ago, Anthony Pritzker, an heir to the Hyatt hotel fortune, built a boxy contemporary residence for himself in Beverly Hills that covers just shy of 50,000 square feet. And Mohamed Hadid, a prolific and high-profile developer (he has appeared on “The Shahs of Sunset” and “The Real Housewives of Beverly Hills”), is known for two palaces that measure 48,000 square feet each: Le Palais in Beverly Hills, which has a swan pond and a Jacuzzi that seats 20 people, and Le Belvédère in Bel Air, which features a Turkish hammam and a ballroom for 250.

Why are people building houses the size of shopping malls? Because they can. “Why do you see a yacht 500 feet long when you could easily have the same fun in one half the size?” asked Jeffrey Hyland, a partner in the Beverly Hills real estate firm Hilton & Hyland, who is developing five 50,000-square-foot properties on the site of the old Merv Griffin estate in Beverly Hills.

Le Belvédère was reportedly purchased by an Indonesian buyer, and Le Palais sold to a daughter of President Islam Karimov of Uzbekistan. According to Mr. Hyland, the market for these Versailles knockoffs is “flight capital.” “It’s oligarchs, oilgarchs, people from Asia, people who came up with the next app for the iPhone,” he said. While global wealth is pouring into other American cities as well, Los Angeles is still a relative bargain, Mr. Hyland said, adding: “Here you can buy the best house for $3,000 a square foot. In Manhattan, you’re looking at $11,000 a square foot and you get a skybox.”

Speculators are tapping the demand, snapping up the best lots, bulldozing whatever is on them and building not only domiciles but also West Coast “lifestyles.” The particulars can seem a little puzzling to the uninitiated. The very busy Mr. Niami (he also built the Winklevoss twins’ perch above the Sunset Strip) constructed a 30,000-square-foot Mediterranean-style house in Holmby Hills that locals have called the Fendi Casa because it was filled with furniture and accessories from the Italian fashion house.

The residence also offered indoor and outdoor pools, commissioned artwork by the graffiti artist Retna, and an operating room in the basement. “It’s not like it’s set up to take out your gallbladder,” said Mark David, a real estate columnist for Variety, who has toured the house. “It’s for cosmetic procedures — fillers, dermabrasion, that kind of thing.” The house sold, with all its furnishings, to an unidentified Saudi buyer for $44 million.

Read the entire article here.

Image: Satellite view of the 70,000 square foot giga-mansion development in Bel Air, Los Angeles. Courtesy of Google Maps.

Send to Kindle

Will the AIs Let Us Coexist?

At some point in the not-too-distant future artificial intelligences will far exceed humans in most capacities (except shopping and beer drinking). The scripts of most Hollywood movies suggest that we humans would be (mostly) wiped out by AI machines, beings, robots or other non-human forms – we being the lesser organisms, superfluous to AI needs.

Perhaps we may find an alternate path to a more benign coexistence, much like that posited in the Culture novels of the dearly departed Iain M. Banks. I’ll go with Mr. Banks’s version. Though, just perhaps, evolution is supposed to leave us behind, replacing our simplistic, selfish intelligence with a much more advanced, non-human version.

From the Guardian:

From 2001: A Space Odyssey to Blade Runner and RoboCop to The Matrix, how humans deal with the artificial intelligence they have created has proved a fertile dystopian territory for film-makers. More recently Spike Jonze’s Her and Alex Garland’s forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as Alan Turing’s famous test foregrounded, how tricky it might be to tell the flesh and blood from the chips and code.

These concerns are even troubling some of Silicon Valley’s biggest names: last month Tesla’s Elon Musk described AI as mankind’s “biggest existential threat… we need to be very careful”. What many of us don’t realise is that AI isn’t some far-off technology that only exists in film-makers’ imaginations and computer scientists’ labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games employ AI to generate complex, ever-changing gaming scenarios. And so long as Silicon Valley companies such as Google and Facebook continue to acquire AI firms and hire AI experts, AI’s IQ will continue to rise…

Isn’t AI a Steven Spielberg movie?
No arguments there, but the term, which stands for “artificial intelligence”, has a more storied history than Spielberg and Kubrick’s 2001 film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked “Can machines think?”

It’s something that is still at the front of our minds 64 years later, most recently becoming the core of Alex Garland’s new film, Ex Machina, which sees a young man asked to assess the humanity of a beautiful android. The concept is not a million miles removed from that set out in Turing’s 1950 paper, Computing Machinery and Intelligence, in which he laid out a proposal for the “imitation game” – what we now know as the Turing test. Hook a computer up to a text terminal and let it have conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which is the human, “the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman”.

Turing said that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they “think”. “The original question… I believe to be too meaningless to deserve discussion.” Nonetheless, he thought that by the year 2000, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”.

In terms of natural language, he wasn’t far off. Today, it is not uncommon to hear people talking about their computers being “confused”, or taking a long time to do something because they’re “thinking about it”. But even if we are stricter about what counts as a thinking machine, it’s closer to reality than many people think.

So AI exists already?
It depends. We are still nowhere near to passing Turing’s imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of judges in a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.

The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
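The combinatorial blow-up behind that famous sentence can be made concrete with a minimal sketch. The lexicon below is a toy assumption for illustration, not the output of any real tagger: each word is given its plausible part-of-speech tags, and the number of candidate readings a parser must sift through is simply the product of the alternatives.

```python
from itertools import product

# Toy lexicon (an illustrative assumption): plausible part-of-speech
# tags for each word of "time flies like an arrow".
lexicon = {
    "time":  ["noun", "verb", "adjective"],  # "time flies" vs. "time [the] flies"
    "flies": ["verb", "noun"],               # the verb vs. the insects
    "like":  ["preposition", "verb"],        # "like an arrow" vs. "to like"
    "an":    ["determiner"],
    "arrow": ["noun"],
}

sentence = "time flies like an arrow".split()

# Every combination of tags is a candidate reading the parser must consider.
readings = list(product(*(lexicon[word] for word in sentence)))
print(len(readings))  # 3 * 2 * 2 * 1 * 1 = 12 candidate tag sequences
```

Even this five-word sentence with a deliberately small lexicon yields a dozen candidate analyses; real vocabularies and longer sentences multiply the ambiguity far faster, which is why natural language processing remains hard.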

Read the entire article here.

Send to Kindle

Money Can Buy You… (Some) Happiness

Google-search-money

New results are in, and yes, money can buy you happiness. But the picture from some extensive new research shows that your happiness depends much more on how you spend money than on how much you earn. Generally, you are more likely to be happier if you give money away rather than fritter it away on yourself. Also, you are more likely to be happier if you spend it on an experience rather than on things.

From the WSJ:

It’s an age-old question: Can money buy happiness?

Over the past few years, new research has given us a much deeper understanding of the relationship between what we earn and how we feel. Economists have been scrutinizing the links between income and happiness across nations, and psychologists have probed individuals to find out what really makes us tick when it comes to cash.

The results, at first glance, may seem a bit obvious: Yes, people with higher incomes are, broadly speaking, happier than those who struggle to get by.

But dig a little deeper into the findings, and they get a lot more surprising—and a lot more useful.

In short, this latest research suggests, wealth alone doesn’t provide any guarantee of a good life. What matters a lot more than a big income is how people spend it. For instance, giving money away makes people a lot happier than lavishing it on themselves. And when they do spend money on themselves, people are a lot happier when they use it for experiences like travel than for material goods.

With that in mind, here’s what the latest research says about how people can make smarter use of their dollars and maximize their happiness.

Experiences Are Worth More Than You Think

Ryan Howell was bothered by a conundrum. Numerous studies conducted over the past 10 years have shown that life experiences give us more lasting pleasure than material things, and yet people still often deny themselves experiences and prioritize buying material goods.

So, Prof. Howell, associate professor of psychology at San Francisco State University, decided to look at what’s going on. In a study published earlier this year, he found that people think material purchases offer better value for the money because experiences are fleeting, and material goods last longer. So, although they’ll occasionally splurge on a big vacation or concert tickets, when they’re in more money-conscious mode, they stick to material goods.

But in fact, Prof. Howell found that when people looked back at their purchases, they realized that experiences actually provided better value.

“What we find is that there’s this huge misforecast,” he says. “People think that experiences are only going to provide temporary happiness, but they actually provide both more happiness and more lasting value.” And yet we still keep on buying material things, he says, because they’re tangible and we think we can keep on using them.

Cornell University psychology professor Thomas Gilovich has reached similar conclusions. “People often make a rational calculation: I have a limited amount of money, and I can either go there, or I can have this,” he says. “If I go there, it’ll be great, but it’ll be done in no time. If I buy this thing, at least I’ll always have it. That is factually true, but not psychologically true. We adapt to our material goods.”

It’s this process of “hedonic adaptation” that makes it so hard to buy happiness through material purchases. The new dress or the fancy car provides a brief thrill, but we soon come to take it for granted.

Experiences, on the other hand, tend to meet more of our underlying psychological needs, says Prof. Gilovich. They’re often shared with other people, giving us a greater sense of connection, and they form a bigger part of our sense of identity. If you’ve climbed in the Himalayas, that’s something you’ll always remember and talk about, long after all your favorite gadgets have gone to the landfill.

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle

Sartre: Forever Linked with Mrs Premise and Mrs Conclusion

Jean-Paul_Sartre_FP

One has to wonder how Jean-Paul Sartre would be regarded today had he accepted the Nobel Prize in Literature in 1964, or had the characters of Monty Python not used him as a punching bag in one of their infamous satirical philosopher sketches:

Mrs Conclusion: What was Jean-Paul like? 

Mrs Premise: Well, you know, a bit moody. Yes, he didn’t join in the fun much. Just sat there thinking. Still, Mr Rotter caught him a few times with the whoopee cushion. (she demonstrates) Le Capitalisme et La Bourgeoisie ils sont la même chose… Oooh we did laugh…

From the Guardian:

In this age in which all shall have prizes, in which every winning author knows what’s necessary in the post-award trial-by-photoshoot (Book jacket pressed to chest? Check. Wall-to-wall media? Check. Backdrop of sponsor’s logo? Check) and in which scarcely anyone has the couilles, as they say in France, to politely tell judges where they can put their prize, how lovely to recall what happened on 22 October 1964, when Jean-Paul Sartre turned down the Nobel prize for literature.

“I have always declined official honours,” he explained at the time. “A writer should not allow himself to be turned into an institution. This attitude is based on my conception of the writer’s enterprise. A writer who adopts political, social or literary positions must act only within the means that are his own – that is, the written word.”

Throughout his life, Sartre agonised about the purpose of literature. In 1947’s What is Literature?, he jettisoned a sacred notion of literature as capable of replacing outmoded religious beliefs in favour of the view that it should have a committed social function. However, the last pages of his enduringly brilliant memoir Words, published the same year as the Nobel refusal, despair over that function: “For a long time I looked on my pen as a sword; now I know how powerless we are.” Poetry, wrote Auden, makes nothing happen; politically committed literature, Sartre was saying, was no better. In rejecting the honour, Sartre worried that the Nobel was reserved for “the writers of the west or the rebels of the east”. He didn’t damn the Nobel in quite the bracing terms that led Hari Kunzru to decline the 2003 John Llewellyn Rhys prize, sponsored by the Mail on Sunday (“As the child of an immigrant, I am only too aware of the poisonous effect of the Mail’s editorial line”), but gently pointed out its Eurocentric shortcomings. Plus, one might say 50 years on, ça change. Sartre said that he might have accepted the Nobel if it had been offered to him during France’s imperial war in Algeria, which he vehemently opposed, because then the award would have helped in the struggle, rather than making Sartre into a brand, an institution, a depoliticised commodity. Truly, it’s difficult not to respect his compunctions.

But the story is odder than that. Sartre read in Figaro Littéraire that he was in the frame for the award, so he wrote to the Swedish Academy saying he didn’t want the honour. He was offered it anyway. “I was not aware at the time that the Nobel prize is awarded without consulting the opinion of the recipient,” he said. “But I now understand that when the Swedish Academy has made a decision, it cannot subsequently revoke it.”

Regrets? Sartre had a few – at least about the money. His principled stand cost him 250,000 kronor (about £21,000), prize money that, he reflected in his refusal statement, he could have donated to the “apartheid committee in London” who badly needed support at the time. All of which makes one wonder what his compatriot, Patrick Modiano, the 15th Frenchman to win the Nobel for literature earlier this month, did with his 8m kronor (about £700,000).

The Swedish Academy had selected Sartre for having “exerted a far-reaching influence on our age”. Is this still the case? Though he was lionised by student radicals in Paris in May 1968, his reputation as a philosopher was on the wane even then. His brand of existentialism had been eclipsed by structuralists (such as Lévi-Strauss and Althusser) and post-structuralists (such as Derrida and Deleuze). Indeed, Derrida would spend a great deal of effort deriding Sartrean existentialism as a misconstrual of Heidegger. Anglo-Saxon analytic philosophy, with the notable exception of Iris Murdoch and Arthur Danto, has for the most part been sniffy about Sartre’s philosophical credentials.

Sartre’s later reputation probably hasn’t benefited from being championed by Paris’s philosophical lightweight, Bernard-Henri Lévy, who subtitled his biography of his hero The Philosopher of the Twentieth Century (Really? Not Heidegger, Russell, Wittgenstein or Adorno?); still less by his appearance in Monty Python’s least funny philosophy sketch, “Mrs Premise and Mrs Conclusion visit Jean-Paul Sartre at his Paris home”. Sartre has become more risible than lisible: unremittingly depicted as laughable philosopher toad – ugly, randy, incomprehensible, forever excitably over-caffeinated at Les Deux Magots with Simone de Beauvoir, encircled with pipe smoke and mired in philosophical jargon, not so much a man as a stock pantomime figure. He deserves better.

How then should we approach Sartre’s writings in 2014? So much of his lifelong intellectual struggle and his work still seems pertinent. When we read the “Bad Faith” section of Being and Nothingness, it is hard not to be struck by the image of the waiter who is too ingratiating and mannered in his gestures, and how that image pertains to the dismal drama of inauthentic self-performance that we find in our culture today. When we watch his play Huis Clos, we might well think of how disastrous our relations with other people are, since we now require them, more than anything else, to confirm our self-images, while they, no less vexingly, chiefly need us to confirm theirs. When we read his claim that humans can, through imagination and action, change our destiny, we feel something of the burden of responsibility of choice that makes us moral beings. True, when we read such sentences as “the being by which Nothingness comes to the world must be its own Nothingness”, we might want to retreat to a dark room for a good cry, but let’s not spoil the story.

His lifelong commitments to socialism, anti-fascism and anti-imperialism still resonate. When we read, in his novel Nausea, of the protagonist Antoine Roquentin in Bouville’s art gallery, looking at pictures of self-satisfied local worthies, we can apply his fury at their subjects’ self-entitlement to today’s images of the powers that be (the suppressed photo, for example, of Cameron and his cronies in Bullingdon pomp), and share his disgust that such men know nothing of what the world is really like in all its absurd contingency.

In his short story Intimacy, we confront a character who, like all of us on occasion, is afraid of the burden of freedom and does everything possible to make others take her decisions for her. When we read his distinctions between being-in-itself (être-en-soi), being-for-itself (être-pour-soi) and being-for-others (être-pour-autrui), we are encouraged to think about the tragicomic nature of what it is to be human – a longing for full control over one’s destiny and for absolute identity, and at the same time, a realisation of the futility of that wish.

The existential plight of humanity, our absurd lot, our moral and political responsibilities that Sartre so brilliantly identified have not gone away; rather, we have chosen the easy path of ignoring them. That is not a surprise: for Sartre, such refusal to accept what it is to be human was overwhelmingly, paradoxically, what humans do.

Read the entire article here.

Image: Jean-Paul Sartre (c1950). Courtesy: Archivo del diario Clarín, Buenos Aires, Argentina

Send to Kindle

Colorless Green Ideas Sleep Furiously

Linguist, philosopher, and more recently political activist, Noam Chomsky penned the title phrase in the late 1950s. The sentence is grammatically correct, but semantically nonsensical. Some now maintain that many of Chomsky’s early ideas on the innateness of human language are equally nonsensical. Chomsky popularized the idea that language is innate to humans; that somehow and somewhere the minds of human infants contain a mechanism that can make sense of language by applying rules encoded in and activated by our genes. Steven Pinker expanded on Chomsky’s theory by proposing that the mind contains an innate device that encodes a common, universal grammar, which is foundational to all languages across all human societies.
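The distinction Chomsky's sentence dramatizes – syntactic well-formedness independent of meaning – can be made concrete with a toy context-free grammar. The sketch below is purely illustrative (the grammar and the `derives` helper are invented for this example, not drawn from Chomsky's formal work): it accepts "colorless green ideas sleep furiously" because the word order fits its rules, while rejecting the reversed word order, even though neither string means anything.

```python
# A toy context-free grammar (illustrative only). Nonterminals map to lists
# of productions; anything not in the dict is treated as a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Adj", "NP"], ["N"]],
    "VP":  [["V", "Adv"], ["V"]],
    "Adj": [["colorless"], ["green"]],
    "N":   [["ideas"]],
    "V":   [["sleep"]],
    "Adv": [["furiously"]],
}

def derives(symbol, words):
    """True if `symbol` can derive exactly the word list `words`."""
    if symbol not in GRAMMAR:            # terminal: must match one word
        return words == [symbol]
    return any(matches(prod, words) for prod in GRAMMAR[symbol])

def matches(production, words):
    """True if the symbol sequence `production` can derive `words`."""
    if not production:
        return not words
    head, rest = production[0], production[1:]
    # Try every split point: head derives the prefix, rest derives the suffix.
    return any(
        derives(head, words[:i]) and matches(rest, words[i:])
        for i in range(len(words) + 1)
    )

print(derives("S", "colorless green ideas sleep furiously".split()))   # True
print(derives("S", "furiously sleep ideas green colorless".split()))   # False
```

The grammar happily licenses the famous sentence while knowing nothing about what "colorless" or "ideas" denote, which is precisely the gap between grammar and semantics that the sentence was coined to expose.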

Recently however, this notion has come under increasing criticism. A growing number of prominent linguistic scholars, including Professor Vyvyan Evans, maintain that Chomsky’s and Pinker’s linguistic models are outdated — that a universal grammar is nothing but a finely-tuned myth. Evans and others maintain that language arises from and is directly embodied in experience.

From the New Scientist:

The ideas of Noam Chomsky, popularised by Steven Pinker, come under fire in Vyvyan Evans’s book The Language Myth: Why language is not an instinct

IS THE way we think about language on the cusp of a revolution? After reading The Language Myth, it certainly looks as if a major shift is in progress, one that will open people’s minds to liberating new ways of thinking about language.

I came away excited. I found that words aren’t so much things that can be limited by a dictionary definition but are encyclopaedic, pointing to sets of concepts. There is the intriguing notion that language will always be less rich than our ideas and there will always be things we cannot quite express. And there is the growing evidence that words are rooted in concepts built out of our bodily experience of living in the world.

Its author, Vyvyan Evans, is a professor of linguistics at Bangor University, UK, and his primary purpose is not so much to map out the revolution (that comes in a sequel) but to prepare you for it by sweeping out old ideas. The book is sure to whip up a storm, because in his sights are key ideas from some of the world’s great thinkers, including philosophers Noam Chomsky and Jerry Fodor.

Ideas about language that have entered the public consciousness are more myth than reality, Evans argues. Bestsellers by Steven Pinker, the Harvard University professor who popularised Chomsky in The Language Instinct, How the Mind Works and The Stuff of Thought, come in for particular criticism. “Science has moved on,” Evans writes. “And to end it all, Pinker is largely wrong, about language and about a number of other things too…”

The commonplace view of “language as instinct” is the myth Evans wants to destroy and he attempts the operation with great verve. The myth comes from the way children effortlessly learn languages just by listening to adults around them, without being aware explicitly of the governing grammatical rules.

This “miracle” of spontaneous learning led Chomsky to argue that grammar is stored in a module of the mind, a “language acquisition device”, waiting to be activated, stage-by-stage, when an infant encounters the jumble of language. The rules behind language are built into our genes.

This innate grammar is not the grammar of a school textbook, but a universal grammar, capable of generating the rules of any of the 7000 or so languages that a child might be exposed to, however different they might appear. In The Language Instinct, Pinker puts it this way: “a Universal Grammar, not reducible to history or cognition, underlies the human language instinct”. The search for that universal grammar has kept linguists busy for half a century.

They may have been chasing a mirage. Evans marshals impressive empirical evidence to take apart different facets of the “language instinct myth”. A key criticism is that the more languages are studied, the more their diversity becomes apparent and an underlying universal grammar less probable.

In a whistle-stop tour, Evans tells stories of languages with a completely free word order, including Jiwarli and Thalanyji from Australia. Then there’s the Inuit language Inuktitut, which builds sentences out of prefixes and suffixes to create giant words like tawakiqutiqarpiit, roughly meaning: “Do you have any tobacco for sale?” And there is the native Canadian language, Straits Salish, which appears not to have nouns or verbs.

An innate language module also looks shaky, says Evans, now scholars have watched languages emerge among communities of deaf people. A sign language is as rich grammatically as a spoken one, but new ones don’t appear fully formed as we might expect if grammar is laid out in our genes. Instead, they gain grammatical richness over several generations.

Now, too, we have detailed studies of how children acquire language. Grammatical sentences don’t start to pop out of their mouths at certain developmental stages, but rather bits and pieces emerge as children learn. At first, they use chunks of particular expressions they hear often, only gradually learning patterns and generalising to a fully fledged grammar. So grammars emerge from use, and the view of “language-as-instinct”, argues Evans, should be replaced by “language-as-use”.

The “innate” view also encounters a deep philosophical problem. If the rules of language are built into our genes, how is it that sentences mean something? How do they connect to our thoughts, concepts and to the outside world?

A solution from the language-as-instinct camp is that there is an internal language of thought called “mentalese”. In The Language Instinct, Pinker explains: “Knowing a language, then, is knowing how to translate mentalese into strings of words.” But philosophers are left arguing over the same question once removed: how does mentalese come to have meaning?

Read the entire article here.

Send to Kindle

The Italian Canary Sings

Coal_bituminous

Those who decry benefits fraud in their own nations should look to the illustrious example of Italian “miner” Carlo Cani. His adventures in shirking work over a period of 35 years (yes, years) would make a wonderful indie movie, and should be an inspiration to less ambitious slackers the world over.

From the Telegraph:

An Italian coal miner’s confession that he is drawing a pension despite hardly ever putting in a day’s work over a 35-year career has underlined the country’s problem with benefit fraud and its dysfunctional pension system.

Carlo Cani started work as a miner in 1980 but soon found that he suffered from claustrophobia and hated being underground.

He started doing everything he could to avoid hacking away at the coal face, inventing an imaginative range of excuses for not venturing down the mine in Sardinia where he was employed.

He pretended to be suffering from amnesia and haemorrhoids, rubbed coal dust into his eyes to feign an infection and on occasion staggered around pretending to be drunk.

The miner, now aged 60, managed to accumulate years of sick leave, apparently with the help of compliant doctors, and was able to stay at home to indulge his passion for jazz.

He also spent extended periods of time at home on reduced pay when demand for coal from the mine dipped, under an Italian system known as “cassa integrazione” in which employees are kept on the payroll during periods of economic difficulty for their companies.

Despite his long periods of absence, he was still officially an employee of the mining company, Carbosulcis, and therefore eventually entitled to a pension.

“I invented everything – amnesia, pains, haemorrhoids, I used to lurch around as if I was drunk. I bumped my thumb on a wall and obviously you can’t work with a swollen thumb,” Mr Cani told La Stampa daily on Tuesday.

“Other times I would rub coal dust into my eyes. I just didn’t like the work – being a miner was not the job for me.”

But rather than find a different occupation, he managed to milk the system for 35 years, until retiring on a pension in 2006 at the age of just 52.

“I reached the pensionable age without hardly ever working. I hated being underground. Right from the start, I had no affinity for coal.”

He said he had “respect” for his fellow miners, who had earned their pensions after “years of sweat and back-breaking work”, while he had mostly rested at home.

The case only came to light this week but has caused such a furore in Italy that Mr Cani is now refusing to take telephone calls.

He could not be contacted but another Carlo Cani, who is no relation but lives in the same area of southern Sardinia and has his number listed in the phone book, said: “People round here are absolutely furious about this – to think that someone could skive off work for so long and still get his pension. He even seems to be proud of that fact.

“It’s shameful. This is a poor region and there is no work. All the young people are leaving and moving to England and Germany.”

The former miner’s work-shy ways have caused indignation in a country in which youth unemployment is more than 40 per cent.

Read the entire story here.

Image: Bituminous coal. The type of coal not mined by retired “miner” Carlo Cani. Courtesy of Wikipedia.

Send to Kindle

Cross-Connection Requires a Certain Daring

A previously unpublished essay by Isaac Asimov on the creative process shows us his well reasoned thinking on the subject. While he believed that deriving new ideas could be done productively in a group, he seemed to gravitate more towards the notion of the lone creative genius. Both, however, require the innovator(s) to cross-connect thoughts, often from disparate sources.

From Technology Review:

How do people get new ideas?

Presumably, the process of creativity, whatever it is, is essentially the same in all its branches and varieties, so that the evolution of a new art form, a new gadget, a new scientific principle, all involve common factors. We are most interested in the “creation” of a new scientific principle or a new application of an old one, but we can be general here.

One way of investigating the problem is to consider the great ideas of the past and see just how they were generated. Unfortunately, the method of generation is never clear even to the “generators” themselves.

But what if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.

There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”

Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).

Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.

Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.

That is the crucial point, and that is the rare characteristic that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”

But why didn’t he think of it? The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table. Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”

It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.

A person willing to fly in the face of reason, authority, and common sense must be a person of considerable self-assurance. Since he occurs only rarely, he must seem eccentric (in at least that respect) to the rest of us. A person eccentric in one respect is often eccentric in others.

Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)

Once you have the people you want, the next question is: Do you want to bring them together so that they may discuss the problem mutually, or should you inform each of the problem and allow them to work in isolation?

My feeling is that as far as creativity is concerned, isolation is required. The creative person is, in any case, continually working at it. His mind is shuffling his information at all times, even when he is not conscious of it. (The famous example of Kekule working out the structure of benzene in his sleep is well-known.)

The presence of others can only inhibit this process, since creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.

Nevertheless, a meeting of such people may be desirable for reasons other than the act of creation itself.

Read the entire article here.

Send to Kindle

The Sandwich of Corporate Exploitation

Google-search-sandwich

If ever you needed a vivid example of corporate exploitation of the most vulnerable, this is it. So-called free-marketeers will sneer at any suggestion of corporate over-reach — they will chant that it’s just the free market at work. But the rules of this market, as of many others, are written and enforced by the patricians and well-stacked against the plebs.

From NYT:

If you are a chief executive of a large company, you very likely have a noncompete clause in your contract, preventing you from jumping ship to a competitor until some period has elapsed. Likewise if you are a top engineer or product designer, holding your company’s most valuable intellectual property between your ears.

And you also probably have a noncompete agreement if you assemble sandwiches at Jimmy John’s sub sandwich chain for a living.

But what’s most startling about that information, first reported by The Huffington Post, is that it really isn’t all that uncommon. As my colleague Steven Greenhouse reported this year, employers are now insisting that workers in a surprising variety of relatively low- and moderate-paid jobs sign noncompete agreements.

Indeed, while HuffPo has no evidence that Jimmy John’s, a 2,000-location sandwich chain, ever tried to enforce the agreement to prevent some $8-an-hour sandwich maker or delivery driver from taking a job at the Blimpie down the road, there are other cases where low-paid or entry-level workers have had an employer try to restrict their employability elsewhere. The Times article tells of a camp counselor and a hair stylist who faced such restrictions.

American businesses are paying out a historically low proportion of their income in the form of wages and salaries. But the Jimmy John’s employment agreement is one small piece of evidence that workers, especially those without advanced skills, are also facing various practices and procedures that leave them worse off, even apart from what their official hourly pay might be. Collectively they tilt the playing field toward the owners of businesses and away from the workers who staff them.

You see it in disputes like the one heading to the Supreme Court over whether workers at an Amazon warehouse in Nevada must be paid for the time they wait to be screened at the end of the workday to ensure they have no stolen goods on them.

It’s evident in continuing lawsuits against Federal Express claiming that its “independent contractors” who deliver packages are in fact employees who are entitled to benefits and reimbursements of costs they incur.

And it is shown in the way many retailers assign hourly workers inconvenient schedules that can change at the last minute, giving them little ability to plan their lives (my colleague Jodi Kantor wrote memorably about the human effects of those policies on a Starbucks coffee worker in August, and Starbucks rapidly said it would end many of them).

These stories all expose the subtle ways that employers extract more value from their entry-level workers, at the cost of their quality of life (or, in the case of the noncompete agreements, freedom to leave for a more lucrative offer).

What’s striking about some of these labor practices is the absence of reciprocity. When a top executive agrees to a noncompete clause in a contract, it is typically the product of a negotiation in which there is some symmetry: The executive isn’t allowed to quit for a competitor, but he or she is guaranteed to be paid for the length of the contract even if fired.

Read the entire story here.

Image courtesy of Google Search.

Send to Kindle

Frenemies: The Religious Beheading and The Secular Guillotine

Secular ideologues in the West believe they are on the moral high-ground. The separation of church (and mosque or synagogue) from state is, they believe, the path to a more just, equal and less-violent culture. They will cite example after example in contemporary and recent culture of terrible violence in the name of religious extremism and fundamentalism.

And, yet, step back for a minute from the horrendous stories and images of atrocities wrought by religious fanatics in Europe, Africa, Asia and the Middle East. Think of the recent histories of fledgling nations in Africa; the ethnic cleansings across much of Central and Eastern Europe — several times over; the egomaniacal tribal terrorists of Central Asia, the brutality of neo-fascists and their socialist bedfellows in Latin America. Delve deeper into these tragic histories — some still unfolding before our very eyes — and you will see a much more complex view of humanity. Our tribal rivalries know no bounds and our violence towards others is certainly not limited only to the catalyst of religion. Yes, we fight for our religion, but we also fight for territory, politics, resources, nationalism, revenge, poverty, ego. Soon the coming fights will be about water and food — these will make our wars over belief systems seem rather petty.

Scholar and author Karen Armstrong explores the complexities of religious and secular violence in the broader context of human struggle in her new book, Fields of Blood: Religion and the History of Violence.

From the Guardian:

As we watch the fighters of the Islamic State (Isis) rampaging through the Middle East, tearing apart the modern nation-states of Syria and Iraq created by departing European colonialists, it may be difficult to believe we are living in the 21st century. The sight of throngs of terrified refugees and the savage and indiscriminate violence is all too reminiscent of barbarian tribes sweeping away the Roman empire, or the Mongol hordes of Genghis Khan cutting a swath through China, Anatolia, Russia and eastern Europe, devastating entire cities and massacring their inhabitants. Only the wearily familiar pictures of bombs falling yet again on Middle Eastern cities and towns – this time dropped by the United States and a few Arab allies – and the gloomy predictions that this may become another Vietnam, remind us that this is indeed a very modern war.

The ferocious cruelty of these jihadist fighters, quoting the Qur’an as they behead their hapless victims, raises another distinctly modern concern: the connection between religion and violence. The atrocities of Isis would seem to prove that Sam Harris, one of the loudest voices of the “New Atheism”, was right to claim that “most Muslims are utterly deranged by their religious faith”, and to conclude that “religion itself produces a perverse solidarity that we must find some way to undercut”. Many will agree with Richard Dawkins, who wrote in The God Delusion that “only religious faith is a strong enough force to motivate such utter madness in otherwise sane and decent people”. Even those who find these statements too extreme may still believe, instinctively, that there is a violent essence inherent in religion, which inevitably radicalises any conflict – because once combatants are convinced that God is on their side, compromise becomes impossible and cruelty knows no bounds.

Despite the valiant attempts by Barack Obama and David Cameron to insist that the lawless violence of Isis has nothing to do with Islam, many will disagree. They may also feel exasperated. In the west, we learned from bitter experience that the fanatical bigotry which religion seems always to unleash can only be contained by the creation of a liberal state that separates politics and religion. Never again, we believed, would these intolerant passions be allowed to intrude on political life. But why, oh why, have Muslims found it impossible to arrive at this logical solution to their current problems? Why do they cling with perverse obstinacy to the obviously bad idea of theocracy? Why, in short, have they been unable to enter the modern world? The answer must surely lie in their primitive and atavistic religion.

But perhaps we should ask, instead, how it came about that we in the west developed our view of religion as a purely private pursuit, essentially separate from all other human activities, and especially distinct from politics. After all, warfare and violence have always been a feature of political life, and yet we alone drew the conclusion that separating the church from the state was a prerequisite for peace. Secularism has become so natural to us that we assume it emerged organically, as a necessary condition of any society’s progress into modernity. Yet it was in fact a distinct creation, which arose as a result of a peculiar concatenation of historical circumstances; we may be mistaken to assume that it would evolve in the same fashion in every culture in every part of the world.

We now take the secular state so much for granted that it is hard for us to appreciate its novelty, since before the modern period, there were no “secular” institutions and no “secular” states in our sense of the word. Their creation required the development of an entirely different understanding of religion, one that was unique to the modern west. No other culture has had anything remotely like it, and before the 18th century, it would have been incomprehensible even to European Catholics. The words in other languages that we translate as “religion” invariably refer to something vaguer, larger and more inclusive. The Arabic word din signifies an entire way of life, and the Sanskrit dharma covers law, politics, and social institutions as well as piety. The Hebrew Bible has no abstract concept of “religion”; and the Talmudic rabbis would have found it impossible to define faith in a single word or formula, because the Talmud was expressly designed to bring the whole of human life into the ambit of the sacred. The Oxford Classical Dictionary firmly states: “No word in either Greek or Latin corresponds to the English ‘religion’ or ‘religious’.” In fact, the only tradition that satisfies the modern western criterion of religion as a purely private pursuit is Protestant Christianity, which, like our western view of “religion”, was also a creation of the early modern period.

Traditional spirituality did not urge people to retreat from political activity. The prophets of Israel had harsh words for those who assiduously observed the temple rituals but neglected the plight of the poor and oppressed. Jesus’s famous maxim to “Render unto Caesar the things that are Caesar’s” was not a plea for the separation of religion and politics. Nearly all the uprisings against Rome in first-century Palestine were inspired by the conviction that the Land of Israel and its produce belonged to God, so that there was, therefore, precious little to “give back” to Caesar. When Jesus overturned the money-changers’ tables in the temple, he was not demanding a more spiritualised religion. For 500 years, the temple had been an instrument of imperial control and the tribute for Rome was stored there. Hence for Jesus it was a “den of thieves”. The bedrock message of the Qur’an is that it is wrong to build a private fortune but good to share your wealth in order to create a just, egalitarian and decent society. Gandhi would have agreed that these were matters of sacred import: “Those who say that religion has nothing to do with politics do not know what religion means.”

The myth of religious violence

Before the modern period, religion was not a separate activity, hermetically sealed off from all others; rather, it permeated all human undertakings, including economics, state-building, politics and warfare. Before 1700, it would have been impossible for people to say where, for example, “politics” ended and “religion” began. The Crusades were certainly inspired by religious passion but they were also deeply political: Pope Urban II let the knights of Christendom loose on the Muslim world to extend the power of the church eastwards and create a papal monarchy that would control Christian Europe. The Spanish inquisition was a deeply flawed attempt to secure the internal order of Spain after a divisive civil war, at a time when the nation feared an imminent attack by the Ottoman empire. Similarly, the European wars of religion and the Thirty Years War were certainly exacerbated by the sectarian quarrels of Protestants and Catholics, but their violence reflected the birth pangs of the modern nation-state.

Read the entire article here.

Send to Kindle