MondayMap: Bro or Dude Country?


If you’re a male in Texas and have one or more BFFs, then chances are you refer to each of them as “bro”. If you and your BFFs hang out in the Deep South, then you’re more likely to call them “fella”. Fans of The Big Lebowski will be glad to hear that “dude” lives on — but mostly only in California, the Southwestern US and around the Great Lakes.

See more maps of bros, fellas, dudes, and pals at Frank Jacobs’ blog here.

Image courtesy of Frank Jacobs / Jack Grieve and Diansheng Guo.

Silicon Death Valley


Have you ever wondered what happens to the 99 percent of Silicon Valley startups that don’t make billionaires (or even millionaires) of their founders? It’s not all milk and honey in the land of sunshine. After all, for every Google or Facebook there are hundreds of humiliating failures — think: Webvan, Boo.com, Pets.com, Beautyjungle.com, Boxman, Flooz, eToys.

The valley’s venture capitalists tend to bury their business failures rather quietly, careful not to taint their reputations as omnipotent, infallible futurists. From the ashes of these failures some employees move on to well-established corporate serfdom and others find fresh challenges at new startups. But there is a fascinating middle-ground, between success and failure — an entrepreneurial twilight zone populated by zombie businesses.

From the Guardian:

It is probably Silicon Valley’s most striking mantra: “Fail fast, fail often.” It is recited at technology conferences, pinned to company walls, bandied in conversation.

Failure is not only invoked but celebrated. Entrepreneurs give speeches detailing their misfires. Academics laud the virtue of making mistakes. FailCon, a conference about “embracing failure”, launched in San Francisco in 2009 and is now an annual event, with technology hubs in Barcelona, Tokyo, Porto Alegre and elsewhere hosting their own versions.

While the rest of the world recoils at failure, in other words, technology’s dynamic innovators enshrine it as a rite of passage en route to success.

But what about those tech entrepreneurs who lose – and keep on losing? What about those who start one company after another, refine pitches, tweak products, pivot strategies, reinvent themselves … and never succeed? What about the angst masked behind upbeat facades?

Silicon Valley is increasingly asking such questions, even as the tech boom rewards some startups with billion-dollar valuations, sprinkling stardust on founders who talk of changing the world.

“It’s frustrating if you’re trying and trying and all you read about is how much money Airbnb and Uber are making,” said Johnny Chin, 28, who endured three startup flops but is hopeful for his fourth attempt. “The way startups are portrayed, everything seems an overnight success, but that’s a disconnect from reality. There can be a psychic toll.”

It has never been easier or cheaper to launch a company in the hothouse of ambition, money and software that stretches from San Francisco to Cupertino, Mountain View, Menlo Park and San Jose.

In 2012 the number of seed investment deals in US tech reportedly more than tripled, to 1,700, from three years earlier. Investment bankers are quitting Wall Street for Silicon Valley, lured by hopes of a cooler and more creative way to get rich.

Most startups fail. However, many entrepreneurs still overestimate the chances of success – and underestimate the cost of failure.

Some estimates put the failure rate at 90% – on a par with small businesses in other sectors. A similar proportion of alumni from Y Combinator, a legendary incubator which mentors bright prospects, are said to also struggle.

Companies typically die around 20 months after their last financing round and after having raised $1.3m, according to a study by the analytics firm CB Insights titled The RIP Report – startup death trends.


Failure is difficult to quantify because it does not necessarily mean liquidation. Many startups limp on for years, ignored by the market but sustained by founders’ savings or investors.

“We call them the walking dead,” said one manager at a tech behemoth, who requested anonymity. “They don’t necessarily die. They putter along.”

Software engineers employed by such zombies face a choice. Stay in hope the company will take off, turning stock options into gold. Or quit and take one of the plentiful jobs at other startups or giants like Apple and Google.

Founders face a more agonising dilemma. Continue working 100-hour weeks and telling employees and investors their dream is alive, that the metrics are improving, and hope it’s true, or pull the plug.

The loss aversion principle – the human tendency to strongly prefer avoiding losses to acquiring gains – tilts many towards the former, said Bruno Bowden, a former engineering manager at Google who is now a venture investor and entrepreneur.

“People will do a lot of irrational things to avoid losing even if it’s to their detriment. You push and push and exhaust yourself.”

Silicon Valley wannabes tell origin fables of startup founders who maxed out credit cards before dazzling Wall Street, the same way Hollywood’s struggling actors find solace in the fact Brad Pitt dressed as a chicken for El Pollo Loco before his breakthrough.

“It’s painful to be one of the walking dead. You lie to yourself and mask what’s not working. You amplify little wins,” said Chin, who eventually abandoned startups which offered micro, specialised versions of Amazon and Yelp.

That startup founders were Silicon Valley’s “cool kids”, glamorous buccaneers compared to engineers and corporate drones, could make failure tricky to recognise, let alone accept, he said. “People are very encouraging. Everything is amazing, cool, awesome. But then they go home and don’t use your product.”

Chin is bullish about his new company, Bannerman, an Uber-type service for event security and bodyguards, and has no regrets about rolling the tech dice. “I love what I do. I couldn’t do anything else.”

Read the entire story here.

Image: Boo.com, 1999. Courtesy of the Wayback Machine, Internet Archive.

Universal Amniotic Fluid

Another day, another physics paper describing the origin of the universe. This is no wonder. Since the development of general relativity and quantum mechanics — two mutually incompatible descriptions of our reality — theoreticians have been scurrying to come up with a grand theory, a rapprochement of sorts. This one describes the universe as a quantum fluid, perhaps made up of hypothesized gravitons.

From Nature Asia:

The prevailing model of cosmology, based on Einstein’s theory of general relativity, puts the universe at around 13.8 billion years old and suggests it originated from a “singularity” – an infinitely small and dense point – at the Big Bang.

 To understand what happened inside that tiny singularity, physicists must marry general relativity with quantum mechanics – the laws that govern small objects. Applying both of these disciplines has challenged physicists for decades. “The Big Bang singularity is the most serious problem of general relativity, because the laws of physics appear to break down there,” says Ahmed Farag Ali, a physicist at Zewail City of Science and Technology, Egypt.

In an effort to bring together the laws of quantum mechanics and general relativity, and to solve the singularity puzzle, Ali and Saurya Das, a physicist at the University of Lethbridge in Alberta, Canada, employed an equation that predicts the development of singularities in general relativity. That equation had been developed by Das’s former professor, Amal Kumar Raychaudhuri, when Das was an undergraduate student at Presidency University, in Kolkata, India, so Das was particularly familiar with it and fascinated by it.

 When Ali and Das made small quantum corrections to the Raychaudhuri equation, they realised it described a fluid, made up of small particles, that pervades space. Physicists have long believed that a quantum version of gravity would include a hypothetical particle, called the graviton, which generates the force of gravity. In their new model — which will appear in Physics Letters B in February — Ali and Das propose that such gravitons could form this fluid.
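
For readers curious about the equation in question: the classical Raychaudhuri equation — the starting point before the quantum corrections Ali and Das describe — tracks how a bundle of free-falling trajectories converges or spreads. A standard textbook form for timelike geodesics (sketched here from the general-relativity literature, not copied from their paper, and without their correction terms) is:

\[
\frac{d\theta}{d\tau} = -\frac{1}{3}\theta^{2} - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^{\mu}u^{\nu}
\]

Here \(\theta\) is the expansion of the congruence of trajectories, \(\sigma_{\mu\nu}\) its shear, \(\omega_{\mu\nu}\) its rotation, \(R_{\mu\nu}\) the Ricci curvature and \(u^{\mu}\) the four-velocity along the geodesics. If the rotation vanishes and \(R_{\mu\nu}u^{\mu}u^{\nu} \ge 0\) (an energy condition), an initially converging bundle is driven to \(\theta \to -\infty\) in finite time — the “focusing” behind the classical singularity theorems. The quantum correction terms mentioned in the excerpt are what let Ali and Das’s fluid model evade that focusing, which is why their traced-back universe never crunches down to a point.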

To understand the origin of the universe, they used this corrected equation to trace the behaviour of the fluid back through time. Surprisingly, they found that it did not converge into a singularity. Instead, the universe appears to have existed forever. Although it was smaller in the past, it never quite crunched down to nothing, says Das.

“Our theory serves to complement Einstein’s general relativity, which is very successful at describing physics over large distances,” says Ali. “But physicists know that to describe short distances, quantum mechanics must be accommodated, and the quantum Raychaudhuri equation is a big step towards that.”

The model could also help solve two other cosmic mysteries. In the late 1990s, astronomers discovered that the expansion of the universe is accelerating due to the presence of a mysterious dark energy, the origin of which is not known. The model has the potential to explain it since the fluid creates a minor but constant outward force that expands space. “This is a happy offshoot of our work,” says Das.

 Astronomers also now know that most matter in the universe is in an invisible mysterious form called dark matter, only perceptible through its gravitational effect on visible matter such as stars. When Das and a colleague set the mass of the graviton in the model to a small level, they could make the density of their fluid match the universe’s observed density of dark matter, while also providing the right value for dark energy’s push.

Read the entire article here.

 

True “False Memory”

Apparently it is surprisingly easy to convince people that they remember a crime, or some other action, that they never committed. It makes one wonder how many of the roughly 2 million people in US prisons are incarcerated because of false memories — held by inmates and witnesses alike.

From ars technica:

The idea that memories are not as reliable as we think they are is disconcerting, but it’s pretty well-established. Various studies have shown that participants can be persuaded to create false childhood memories—of being lost in a shopping mall or hospitalized, or even highly implausible scenarios like having tea with Prince Charles.

The creation of false memories has obvious implications for the legal system, as it gives us reasons to distrust both eyewitness accounts and confessions. It’s therefore important to know exactly what kinds of false memories can be created, what influences the creation of a false memory, and whether false recollections can be distinguished from real ones.

A recent paper in Psychological Science found that 71 percent of participants exposed to certain interview techniques developed false memories of having committed a crime as a teenager. In reality, none of these people had experienced contact with the police during the age bracket in question.

After establishing a pool of potential participants, the researchers sent out questionnaires to the caregivers of these individuals. They eliminated any participants who had been involved in some way with an assault or theft, or had other police contact between the ages of 11 and 14. They also asked the caregivers to describe in detail a highly emotional event that the participant had experienced at this age. The caregivers were asked not to discuss the content of the questionnaire with the participants.

The 60 eligible participants were divided into two groups: one that would be given false memories of committing an assault, theft, or assault with a weapon, and another that would be provided with false memories of another emotional event—an injury, an attack by a dog, or the loss of a large sum of money. In the first of three interviews with each participant, the interviewer presented the true memory that had been provided by the caregiver. Once the interviewer’s credibility and knowledge of the participant’s background had been established, the false memory was presented.

For both kinds of memory, the interviewer gave the participant “cues”, such as their age at the time, people who had been involved, and the time of year. Participants were then asked to recall the details of what had happened. No participants recalled the false event the first time it was mentioned—which would have rung alarm bells—but were reassured that people could often uncover memories like these through effort.

A number of tactics were used to induce the false memory. Social pressure was applied to encourage recall of details, the interviewer attempted to build a rapport with the participants, and the participants were told that their caregivers had corroborated the facts. They were also encouraged to use visualization techniques to “uncover” the memory.

In each of the three interviews, participants were asked to provide as many details as they could for both events. After the final interview, they were informed that the second memory was false, and asked whether they had really believed the events had occurred. They were also asked to rate how surprised they were to find out that it was false. Only participants who answered that they had genuinely believed the false memory, and who could give more than ten details of the event, were classified as having a true false memory. Of the participants in the group with criminal false stories, 71 percent developed a “true” false memory. The group with non-criminal false stories was not significantly different, with 77 percent of participants classified as having a false memory. The details participants provided for their false memories did not differ significantly in either quality or quantity from their true memories.

This study is only a beginning, and there is still a great deal of work to be done. There are a number of factors that couldn’t be controlled for but which may have influenced the results. For instance, the researchers suggest that, since only one interviewer was involved, her individual characteristics may have influenced the results, raising the question of whether only certain kinds of interviewers can achieve these effects. It isn’t clear whether participants were fully honest about having believed in the false memory, since they could have just been trying to cooperate; the results could also have been affected by the fact that there were no negative consequences to telling the false story.

Read the entire article here.

Focus on Process, Not Perfect Grades

If you are a parent of a school-age child then it is highly likely that you have, on multiple occasions, chastised her or him and withheld privileges for poor grades. It’s also likely that you have rewarded the same child for being smart at math or having Picasso-like artistic talent. I have done this myself. But, there is a better way to nurture young minds, and it is through “telling stories about achievements that result from hard work.”

From Scientific American:

A brilliant student, Jonathan sailed through grade school. He completed his assignments easily and routinely earned As. Jonathan puzzled over why some of his classmates struggled, and his parents told him he had a special gift. In the seventh grade, however, Jonathan suddenly lost interest in school, refusing to do homework or study for tests. As a consequence, his grades plummeted. His parents tried to boost their son’s confidence by assuring him that he was very smart. But their attempts failed to motivate Jonathan (who is a composite drawn from several children). Schoolwork, their son maintained, was boring and pointless.

Our society worships talent, and many people assume that possessing superior intelligence or ability—along with confidence in that ability—is a recipe for success. In fact, however, more than 35 years of scientific investigation suggests that an overemphasis on intellect or talent leaves people vulnerable to failure, fearful of challenges and unwilling to remedy their shortcomings.

The result plays out in children like Jonathan, who coast through the early grades under the dangerous notion that no-effort academic achievement defines them as smart or gifted. Such children hold an implicit belief that intelligence is innate and fixed, making striving to learn seem far less important than being (or looking) smart. This belief also makes them see challenges, mistakes and even the need to exert effort as threats to their ego rather than as opportunities to improve. And it causes them to lose confidence and motivation when the work is no longer easy for them.

Praising children’s innate abilities, as Jonathan’s parents did, reinforces this mind-set, which can also prevent young athletes or people in the workforce and even marriages from living up to their potential. On the other hand, our studies show that teaching people to have a “growth mind-set,” which encourages a focus on “process” (consisting of personal effort and effective strategies) rather than on intelligence or talent, helps make them into high achievers in school and in life.

The Opportunity of Defeat
I first began to investigate the underpinnings of human motivation—and how people persevere after setbacks—as a psychology graduate student at Yale University in the 1960s. Animal experiments by psychologists Martin Seligman, Steven Maier and Richard Solomon, all then at the University of Pennsylvania, had shown that after repeated failures, most animals conclude that a situation is hopeless and beyond their control. After such an experience, the researchers found, an animal often remains passive even when it can effect change—a state they called learned helplessness.

People can learn to be helpless, too, but not everyone reacts to setbacks this way. I wondered: Why do some students give up when they encounter difficulty, whereas others who are no more skilled continue to strive and learn? One answer, I soon discovered, lay in people’s beliefs about why they had failed.

In particular, attributing poor performance to a lack of ability depresses motivation more than does the belief that lack of effort is to blame. In 1972, when I taught a group of elementary and middle school children who displayed helpless behavior in school that a lack of effort (rather than lack of ability) led to their mistakes on math problems, the kids learned to keep trying when the problems got tough. They also solved many more problems even in the face of difficulty. Another group of helpless children who were simply rewarded for their success on easier problems did not improve their ability to solve hard math problems. These experiments were an early indication that a focus on effort can help resolve helplessness and engender success.

Subsequent studies revealed that the most persistent students do not ruminate about their own failure much at all but instead think of mistakes as problems to be solved. At the University of Illinois in the 1970s I, along with my then graduate student Carol Diener, asked 60 fifth graders to think out loud while they solved very difficult pattern-recognition problems. Some students reacted defensively to mistakes, denigrating their skills with comments such as “I never did have a good rememory,” and their problem-solving strategies deteriorated.

Others, meanwhile, focused on fixing errors and honing their skills. One advised himself: “I should slow down and try to figure this out.” Two schoolchildren were particularly inspiring. One, in the wake of difficulty, pulled up his chair, rubbed his hands together, smacked his lips and said, “I love a challenge!” The other, also confronting the hard problems, looked up at the experimenter and approvingly declared, “I was hoping this would be informative!” Predictably, the students with this attitude outperformed their cohorts in these studies.

Read the entire article here.

The Great Unknown: Consciousness


Much has been written in the humanities and scientific journals about consciousness. Scholars continue to probe and pontificate and theorize. And yet we seem to know more of the ocean depths and our cosmos than we do of that interminable, self-aware inner voice that sits behind our eyes.

From the Guardian:

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Read the entire story here.

Image courtesy of Google Search.

Feminism in Saudi Arabia? Hypocrisy in the West!

We are constantly reminded of the immense struggle that is humanity’s progress. Often it seems like one step forward and several back. Cultural relativism and hypocrisy continue to run rampant in a world that celebrates selfies and serfdom.

Oh, and in case you haven’t heard: the rulers of Saudi Arabia are feminists. But then again, so too are the white males who control most of the power, wealth, media and political machinery in the West.

From the Guardian:

Christine Lagarde, the first woman to head the IMF, has paid tribute to the late King Abdullah of Saudi Arabia. He was a strong advocate of women, she said. This is almost certainly not what she thinks. She even hedged her remarks about with qualifiers like “discreet” and “appropriate”. There are constraints of diplomacy and obligations of leadership and navigating between them can be fraught. But this time there was only one thing to say. Abdullah led a country that abuses women’s rights, and indeed all human rights, in a way that places it beyond normal diplomacy.

The constraints and restrictions on Saudi women are too notorious and too numerous to itemise. Right now, two women are in prison for the offence of trying to drive over the border into Saudi Arabia. It is not just the ban on driving. There is also the ban on going out alone, the ban on voting, the death penalty for adultery, and the total obliteration of public personality – almost of a sense of existence – by the obligatory veil. And there are the terrible punishments meted out to those who infringe these rules that are not written down but “interpreted” – Islam mediated through the conventions of a deeply conservative people.

Lagarde is right. King Abdullah did introduce reforms. Women can now work almost anywhere they want, although their husband, brother or father will have to drive them there (and the children to school). They can now not just study law but practise as lawyers. There are women on the Sharia council and it was through their efforts that domestic violence has been criminalised. But enforcement is in the hands of courts that do not necessarily recognise the change. These look like reforms with all the substance of a Potemkin village, a flimsy structure to impress foreign opinion.

Pressure for change is driven by women themselves, exploiting social media by actions that range from the small, brave actions of defiance – posting images of women at the wheel (ovaries, despite men’s fears, apparently undamaged) – to the large-scale subversive gesture such as the YouTube TV programmes reported by the Economist.

But the point about the Lagarde remarks is that there are signs the Saudi authorities really can be sensitive to the rare criticism that comes from western governments, and the western media. Such protests may yet spare blogger Raif Badawi from further punishment for alleged blasphemy. Today’s lashing has been delayed for the third successive week. The Saudi authorities, like any despotic regime, are trying to appease their critics and contain the pressure for change that social media generates by conceding inch by inch so that, like the slow downhill creep of a glacier, the religious authorities and mainstream social opinion don’t notice it is happening.

But beyond Saudi’s borders, it is surely the duty of everyone who really does believe in equality and human rights to shout and finger point and criticise at every opportunity. Failing to do so is what makes Christine Lagarde’s remarks a betrayal of the women who literally risk everything to try to bring about change in the oppressive patriarchy in which they live. They are typical of the desire not to offend the world’s biggest oil producer and the west’s key Middle Eastern ally, a self-censorship that allows the Saudis to claim they respect human rights while breaching every known norm of behaviour.

Read the entire article here.

 

Education And Reality

Recent studies show that having a higher level of education does not necessarily lead to greater acceptance of reality. This seems to fly in the face of oft-cited anecdotal evidence and prevailing beliefs that suggest people with lower educational attainment are more likely to reject accepted scientific fact, such as evolutionary science and climate change.

From ars technica:

We like to think that education changes people for the better, helping them critically analyze information and providing a certain immunity from disinformation. But if that were really true, then you wouldn’t have low vaccination rates clustering in areas where parents are, on average, highly educated.

Vaccination isn’t generally a political issue. (Or, it is, but it’s rejected both by people who don’t trust pharmaceutical companies and by those who don’t trust government mandates; these tend to cluster on opposite ends of the political spectrum.) But some researchers decided to look at a number of issues that have become politicized, such as the Iraq War, evolution, and climate change. They find that, for these issues, education actually makes it harder for people to accept reality, an effect they ascribe to the fact that “highly educated partisans would be better equipped to challenge information inconsistent with predispositions.”

The researchers looked at two sets of questions about the Iraq War. The first involved the justifications for the war (weapons of mass destruction and links to Al Qaeda), as well as the perception of the war outside the US. The second focused on the role of the troop surge in reducing violence within Iraq. At the time the polls were taken, there was a clear reality: no evidence of an active weapons program or links to Al Qaeda; the war was frowned upon overseas; and the surge had successfully reduced violence in the country.

On the three issues that were most embarrassing to the Bush administration, Democrats were more likely to get things right, and their accuracy increased as their level of education rose. In contrast, the most and least educated Republicans were equally likely to have things wrong. When it came to the surge, the converse was true. Education increased the chances that Republicans would recognize reality, while the Democratic acceptance of the facts stayed flat even as education levels rose. In fact, among Democrats, the base level of recognition that the surge was a success was so low that it’s not even clear it would have been possible to detect a downward trend.

When it came to evolution, the poll question didn’t even ask whether people accepted the reality of evolution. Instead, it asked “Is there general agreement among scientists that humans have evolved over time, or not?” (This phrasing generally makes it easier for people to accept the reality of evolution, since it’s not asking about their personal beliefs.) Again, education increased the acceptance of this reality among both Democrats and Republicans, but the magnitude of the effect was much smaller among Republicans. In fact, the impact of ideology was stronger than education itself: “The effect of Republican identification on the likelihood of believing that there is a scientific consensus is roughly three times that of the effect of education.”

For climate change, the participants were asked “Do you believe that the earth is getting warmer because of human activity or natural patterns?” Overall, the beliefs of about 70 percent of those polled lined up with scientific conclusions on the matter. And, among the least educated, party affiliation made very little difference in terms of getting this right. But, as education rose, Democrats were more likely to get this right, while Republicans saw their accuracy drop. At the highest levels of education, Democrats got it right 90 percent of the time, while Republicans got it right less than half the time.

The results are in keeping with a number of other studies that have been published of late, which also show that partisan divides over things that could be considered factual sometimes increase with education. Typically, these issues are widely perceived as political. (With some exceptions; GMOs, for example.) In this case, the authors suspect that education simply allows people to deploy more sophisticated cognitive filters that end up rejecting information that could otherwise compel them to change their perceptions.

The authors conclude that’s somewhat mixed news for democracy itself. Education is intended to improve people’s ability to assimilate information upon which to base their political judgements. And, to a large extent, it does: people, on average, got 70 percent of the questions right, and there was only a single case where education made matters worse.

Read the entire article here.

The Impending AI Apocalypse


AI as in Artificial Intelligence, not American Idol — though some believe the latter to be somewhat of a cultural apocalypse.

AI is reaching a technological tipping point; advances in computation, especially neural networks, are making machines more intelligent every day. These advances are likely to spawn machines — sooner rather than later — that will someday mimic and then surpass human cognition. This has an increasing number of philosophers, scientists and corporations raising alarms. The fear: what if super-intelligent AI machines one day decide that humans are far too inferior and superfluous?

From Wired:

On the first Sunday afternoon of 2015, Elon Musk took to the stage at a closed-door conference at a Puerto Rican resort to discuss an intelligence explosion. This slightly scary theoretical term refers to an uncontrolled hyper-leap in the cognitive ability of AI that Musk and physicist Stephen Hawking worry could one day spell doom for the human race.

That someone of Musk’s considerable public stature was addressing an AI ethics conference—long the domain of obscure academics—was remarkable. But the conference, with the optimistic title “The Future of AI: Opportunities and Challenges,” was an unprecedented meeting of the minds that brought academics like Oxford AI ethicist Nick Bostrom together with industry bigwigs like Skype founder Jaan Tallinn and Google AI expert Shane Legg.

Musk and Hawking fret over an AI apocalypse, but there are more immediate threats. In the past five years, advances in artificial intelligence—in particular, within a branch of AI algorithms called deep neural networks—are putting AI-driven products front-and-center in our lives. Google, Facebook, Microsoft and Baidu, to name a few, are hiring artificial intelligence researchers at an unprecedented rate, and putting hundreds of millions of dollars into the race for better algorithms and smarter computers.

AI problems that seemed nearly unassailable just a few years ago are now being solved. Deep learning has boosted Android’s speech recognition, and given Skype Star Trek-like instant translation capabilities. Google is building self-driving cars, and computer systems that can teach themselves to identify cat videos. Robot dogs can now walk very much like their living counterparts.

“Things like computer vision are starting to work; speech recognition is starting to work. There’s quite a bit of acceleration in the development of AI systems,” says Bart Selman, a Cornell professor and AI ethicist who was at the event with Musk. “And that’s making it more urgent to look at this issue.”

Given this rapid clip, Musk and others are calling on those building these products to carefully consider the ethical implications. At the Puerto Rico conference, delegates signed an open letter pledging to conduct AI research for good, while “avoiding potential pitfalls.” Musk signed the letter too. “Here are all these leading AI researchers saying that AI safety is important,” Musk said yesterday. “I agree with them.”

Google Gets on Board

Nine researchers from DeepMind, the AI company that Google acquired last year, have also signed the letter. The story of how that came about goes back to 2011, however. That’s when Jaan Tallinn introduced himself to Demis Hassabis after hearing him give a presentation at an artificial intelligence conference. Hassabis had recently founded the hot AI startup DeepMind, and Tallinn was on a mission. Since founding Skype, he’d become an AI safety evangelist, and he was looking for a convert. The two men started talking about AI and Tallinn soon invested in DeepMind, and last year, Google paid $400 million for the 50-person company. In one stroke, Google owned the largest available talent pool of deep learning experts in the world. Google has kept its DeepMind ambitions under wraps—the company wouldn’t make Hassabis available for an interview—but DeepMind is doing the kind of research that could allow a robot or a self-driving car to make better sense of its surroundings.

That worries Tallinn, somewhat. In a presentation he gave at the Puerto Rico conference, Tallinn recalled a lunchtime meeting where Hassabis showed how he’d built a machine learning system that could play the classic ’80s arcade game Breakout. Not only had the machine mastered the game, it played with a ruthless efficiency that shocked Tallinn. While “the technologist in me marveled at the achievement, the other thought I had was that I was witnessing a toy model of how an AI disaster would begin, a sudden demonstration of an unexpected intellectual capability,” Tallinn remembered.

Read the entire story here.

Image: Robby the Robot (Forbidden Planet), Comic Con, San Diego, 2006. Courtesy of Pattymooney.

Facts, Fiction and Foxtion

Foxtion. fox·tion. noun \ ˈfäks-shən \

News stories about people and events that are not real: literature that tells stories which are imagined by the writer and presenter, and presented earnestly and authoritatively by self-proclaimed experts, repeated over and over until the audience accepts them as written-in-stone truth.

Fox News is the gift that just keeps on giving — to comedians, satirists, seekers of truth and, generally, people with reasonably intact grey matter. This time Fox has reconnected with the so-called terrorism expert Steven Emerson. Seems like a nice chap, but, as the British Prime Minister recently remarked, he’s “an idiot”.

From the Guardian:

Steven Emerson, a man whose job title of terrorism expert will henceforth always attract quotation marks, provoked a lot of mirth with his claim, made during a Fox News interview, that Birmingham was a Muslim-only city where “non-Muslims simply just don’t go in”. He was forced to apologise, and the prime minister called him an idiot, all within the space of 24 hours.

This was just one of the many deeply odd things Emerson said in the course of the interview, although it was perhaps the most instantly refutable: Birmingham census figures are easy to come by. His claim that London was full of “actual religious police that actually beat and actually wound seriously anyone who doesn’t dress according to religious Muslim attire” is harder to disprove; just because I live in London and I’ve never seen them doesn’t mean they don’t exist. But they’re not exactly thick on the ground. I blame the cuts.

Emerson also made reference to the “no-go zones” of France, where the government doesn’t “exercise any sovereignty”. “On the French official website it says there are,” he said. “It actually has a map of them.”

How could the French government make the basic blunder of publicising its inability to exercise sovereignty, and on the “French official website” of all places?

After a bit of Googling – which appears to be how Emerson gets his information – I think I know what he’s on about. He appears to be referring to The 751 No-Go Zones of France, the title of a widely disseminated, nine-year-old blogpost originating on the website of Daniel Pipes, another terrorism expert, or “anti-Arab propagandist”.

“They go by the euphemistic term Zones Urbaines Sensibles, or sensitive urban zones,” wrote Pipes, referring to them as “places in France that the French state does not fully control”. And it’s true: you can find them all listed on the French government’s website. Never mind that they were introduced in 1996, or that the ZUS distinction actually denotes an impoverished area targeted for economic and social intervention, not abandonment of sovereignty. For people like Emerson they are officially sanctioned caliphates, where cops and non-Muslims dare not tread.

Yet seven years after he first exposed the No-Go Zones of France, Pipes actually managed to visit several banlieues around Paris. In an update posted in 2013, his disappointment was palpable.

“For a visiting American, these areas are very mild, even dull,” he wrote. “We who know the Bronx and Detroit expect urban hell in Europe too, but there things look fine.

“I regret having called these areas no-go zones.”

Read the entire story here.

Je Suis Snowman #jesuissnowman


What do Salman Rushdie and snowmen have in common, you may ask. Apparently, they are both the subject of an Islamic fatwa. So, beware building a snowman lest you stray onto an ungodly path by idolizing your frozen handiwork. And, you may wish to return that DVD of Frozen. Oh, the utter absurdity of it all!

From the Guardian:

A prominent Saudi Arabian cleric has whipped up controversy by issuing a religious edict forbidding the building of snowmen, describing them as anti-Islamic.

Asked on a religious website if it was permissible for fathers to build snowmen for their children after a snowstorm in the country’s north, Sheikh Mohammed Saleh al-Munajjid replied: “It is not permitted to make a statue out of snow, even by way of play and fun.”

Quoting from Muslim scholars, Munajjid argued that to build a snowman was to create an image of a human being, an action considered sinful under the kingdom’s strict interpretation of Sunni Islam.

“God has given people space to make whatever they want which does not have a soul, including trees, ships, fruits, buildings and so on,” he wrote in his ruling.

That provoked swift responses from Twitter users writing in Arabic and identifying themselves with Arab names.

“They are afraid for their faith of everything … sick minds,” one Twitter user wrote.

Another posted a photo of a man in formal Arab garb holding the arm of a “snow bride” wearing a bra and lipstick. “The reason for the ban is fear of sedition,” he wrote.

A third said the country was plagued by two types of people: “A people looking for a fatwa [religious ruling] for everything in their lives, and a cleric who wants to interfere in everything in the lives of others through a fatwa.”

Munajjid had some supporters, however. “It (building snowmen) is imitating the infidels, it promotes lustiness and eroticism,” one wrote. “May God preserve the scholars, for they enjoy sharp vision and recognise matters that even Satan does not think about.”

Snow has covered upland areas of Tabuk province near Saudi Arabia’s border with Jordan for the third consecutive year as cold weather swept across the Middle East.

Read more here.

Images courtesy of Google Search.

Exotic Exoplanets Await Your Arrival


Vintage travel posters from the late 1890s through to the 1950s colorfully captured the public’s imagination. Now, not to be outdone by the classic works from the Art Nouveau and Art Deco periods, NASA has published a series of its own. But, these posters go beyond illustrating alpine ski resorts, sumptuous hotels and luxurious cruises. Rather, NASA has its sights on exotic and very distant travels — destinations tens to hundreds of light-years away. One such spot is the destination Kepler-16.

Kepler-16 is a binary star system in the constellation Cygnus that was targeted for analysis by the Kepler exoplanet-hunting spacecraft. The system is home to Kepler-16b, a Saturn-sized planet that orbits both stars — a K-type dwarf and a red dwarf — and lies 196 light-years from Earth.

See more of NASA’s travel posters here.

 

The Thugs of Cultural Disruption

What becomes of our human culture as Amazon crushes booksellers and publishers, Twitter dumbs down journalism, knowledge is replaced by keyword search, and the internet becomes a popularity contest?

Leon Wieseltier, contributing editor at The Atlantic, has some thoughts.

From NYT:

Amid the bacchanal of disruption, let us pause to honor the disrupted. The streets of American cities are haunted by the ghosts of bookstores and record stores, which have been destroyed by the greatest thugs in the history of the culture industry. Writers hover between a decent poverty and an indecent one; they are expected to render the fruits of their labors for little and even for nothing, and all the miracles of electronic dissemination somehow do not suffice for compensation, either of the fiscal or the spiritual kind. Everybody talks frantically about media, a second-order subject if ever there was one, as content disappears into “content.” What does the understanding of media contribute to the understanding of life? Journalistic institutions slowly transform themselves into silent sweatshops in which words cannot wait for thoughts, and first responses are promoted into best responses, and patience is a professional liability. As the frequency of expression grows, the force of expression diminishes: Digital expectations of alacrity and terseness confer the highest prestige upon the twittering cacophony of one-liners and promotional announcements. It was always the case that all things must pass, but this is ridiculous.

Meanwhile the discussion of culture is being steadily absorbed into the discussion of business. There are “metrics” for phenomena that cannot be metrically measured. Numerical values are assigned to things that cannot be captured by numbers. Economic concepts go rampaging through noneconomic realms: Economists are our experts on happiness! Where wisdom once was, quantification will now be. Quantification is the most overwhelming influence upon the contemporary American understanding of, well, everything. It is enabled by the idolatry of data, which has itself been enabled by the almost unimaginable data-generating capabilities of the new technology. The distinction between knowledge and information is a thing of the past, and there is no greater disgrace than to be a thing of the past. Beyond its impact upon culture, the new technology penetrates even deeper levels of identity and experience, to cognition and to consciousness. Such transformations embolden certain high priests in the church of tech to espouse the doctrine of “transhumanism” and to suggest, without any recollection of the bankruptcy of utopia, without any consideration of the cost to human dignity, that our computational ability will carry us magnificently beyond our humanity and “allow us to transcend these limitations of our biological bodies and brains. . . . There will be no distinction, post-Singularity, between human and machine.” (The author of that updated mechanistic nonsense is a director of engineering at Google.)

And even as technologism, which is not the same as technology, asserts itself over more and more precincts of human life, so too does scientism, which is not the same as science. The notion that the nonmaterial dimensions of life must be explained in terms of the material dimensions, and that nonscientific understandings must be translated into scientific understandings if they are to qualify as knowledge, is increasingly popular inside and outside the university, where the humanities are disparaged as soft and impractical and insufficiently new. The contrary insistence that the glories of art and thought are not evolutionary adaptations, or that the mind is not the brain, or that love is not just biology’s bait for sex, now amounts to a kind of heresy. So, too, does the view that the strongest defense of the humanities lies not in the appeal to their utility — that literature majors may find good jobs, that theaters may economically revitalize neighborhoods — but rather in the appeal to their defiantly nonutilitarian character, so that individuals can know more than how things work, and develop their powers of discernment and judgment, their competence in matters of truth and goodness and beauty, to equip themselves adequately for the choices and the crucibles of private and public life.

Read the entire essay here.

Je Suis Ahmed

From the Guardian:

It was a Muslim policeman from a local police station who was “slaughtered like a dog” after heroically trying to stop two heavily armed killers from fleeing the Charlie Hebdo offices following the massacre.

Tributes to Ahmed Merabet poured in on Thursday after images of his murder at point blank range by a Kalashnikov-wielding masked terrorist circulated around the world.

Merabet, who according to officials was 40, was called to the scene while on patrol with a female colleague in the neighbourhood, just in time to see the black Citroën used by the two killers heading towards the boulevard from Charlie Hebdo.

“He was on foot, and came nose to nose with the terrorists. He pulled out his weapon. It was his job, it was his duty,” said Rocco Contento, a colleague who was a union representative at the central police station for Paris’s 11th arrondissement.

Video footage, which has now been pulled from the internet, showed the two gunmen get out of the car before one shot the policeman in the groin. As he falls to the pavement groaning in pain and holding up an arm as though to protect himself, the second gunman moves forward and asks the policeman: “Do you want to kill us?” Merabet replies: “Non, ç’est bon, chef” (“No, it’s OK mate”). The terrorist then shoots him in the head.

After the rise in online support for the satirical magazine, with the catchphrase “Je Suis Charlie,” many decided to honour Merabet, tweeting “Je Suis Ahmed”. One, @Aboujahjah, posted: “I am not Charlie, I am Ahmed the dead cop. Charlie ridiculed my faith and culture and I died defending his right to do so.”

Another policeman, 48-year-old Franck Brinsolaro, was killed moments earlier in the assault on Charlie Hebdo where he was responsible for the protection of its editor, Stéphane Charbonnier, one of the 11 killed in the building. A colleague said he “never had time” to pull his weapon.

Read the entire story here.

The Pen Must Always Be Mightier

charlie

Philippe Val, former publisher of the satirical magazine Charlie Hebdo, says of the January 7 assassinations:

“They were so alive, they loved to make people happy, to make them laugh, to give them generous ideas. They were very good people. They were the best among us, as those who make us laugh, who are for liberty … They were assassinated, it is an insufferable butchery.

“We cannot let silence set in, we need help. We all need to band together against this horror. Terror must not prevent joy, must not prevent our ability to live, freedom, expression – I’m going to use stupid words – democracy, after all this is what is at stake. It is this kind of fraternity that allows us to live. We cannot allow this, this is an act of war. It might be good if tomorrow, all newspapers were called Charlie Hebdo. If we titled them all Charlie Hebdo. If all of France was Charlie Hebdo. It would show that we are not okay with this. That we will never stop laughing. We will never let liberty be extinguished.”

Narcissistick

The pursuit of all things self continues unabated in 2015. One has to wonder what the children of the self-absorbed, selfie generations will be like. Or perhaps there will be few or no children, because many of the self-absorbed will remain, well, rather too self-absorbed.

From NYT:

Sometimes you don’t need an analyst’s report to get a look at the future of the media industry and the challenges it will bring.

On New Year’s Eve, I was one of the poor souls working in Times Square. By about 1 p.m., it was time to evacuate, and when I stepped into the cold that would assault the huddled, partying masses that night, a couple was getting ready to pose for a photo with the logo on The New York Times Building in the background. I love that I work at a place that people deem worthy of memorializing, and I often offer to help.

My assistance was not required. As I watched, the young couple mounted their phone on a collapsible pole, then extended it outward, the camera now able to capture the moment in wide-screen glory.

I’d seen the same phenomenon when I was touring the Colosseum in Rome last month. So many people were fighting for space to take selfies with their long sticks — what some have called the “Narcissistick” — that it looked like a reprise of the gladiatorial battles the place once hosted.

The urge to stare at oneself predates mirrors — you could imagine a Neanderthal fussing with his hair, his image reflected in a pool of water — but it has some pretty modern dimensions. In the forest of billboards in Times Square, the one with a camera that captures the people looking at the billboard always draws a big crowd.

Selfies are hardly new, but the incremental improvement in technology of putting a phone on a stick — a curiously analog fix that Time magazine listed as one of the best inventions of 2014 along with something called the “high-beta fusion reactor” — suggests that the séance with the self is only going to grow. (Selfie sticks are often used to shoot from above, which any self-respecting selfie auteur will tell you is the most flattering angle.)

There are now vast, automated networks to harvest all that narcissism, along with lots of personal data, creating extensive troves of user-generated content. The tendency to listen to the holy music of the self is reflected in the abundance of messaging and self-publishing services — Vine, WhatsApp, Snapchat, Instagram, Apple’s new voice messaging and the rest — all of which pose a profound challenge for media companies. Most media outfits are in the business of one-to-many, creating single pieces of text, images or audio meant to be shared by the masses.

But most sharing does not involve traditional media companies. Consumers are increasingly glued to their Facebook feeds as a source of information about not just their friends but the broader world as well. And with the explosive growth of Snapchat, the fastest-growing social app of the last year, much of the sharing that takes place involves one-to-one images that come and go in 10 seconds or less. Getting a media message — a television show, a magazine, a website, not to mention the ads that pay for most of it — into the intimate space between consumers and a torrent of information about themselves is only going to be more difficult.

I’ve been around since before there was a consumer Internet, but my frame of reference is as neither a Luddite nor a curmudgeon. I didn’t end up with over half a million followers on social media — Twitter and Facebook combined — by posting only about broadband regulations and cable deals. (Not all self-flattering portraits are rendered in photos. You see what I did there, right?) The enhanced ability to communicate and share in the current age has many tangible benefits.

My wife travels a great deal, sometimes to conflicted regions, and WhatsApp’s global reach gives us a stable way of staying in touch. Over the holidays, our family shared endless photos, emoticons and inside jokes in group messages that were very much a part of Christmas. Not that long ago, we might have spent the time gathered around watching “Elf,” but this year, we were brought together by the here and now, the familiar, the intimate and personal. We didn’t need a traditional media company to help us create a shared experience.

Many younger consumers have become mini-media companies themselves, madly distributing their own content on Vine, Instagram, YouTube and Snapchat. It’s tough to get their attention on media created for the masses when they are so busy producing their own. And while the addiction to self is not restricted to millennials — boomers bow to no one in terms of narcissism — there are now easy-to-use platforms that amplify that self-reflecting impulse.

While legacy media companies still make products meant to be studied and savored over varying lengths of time — the movie “Boyhood,” The Atlantic magazine, the novel “The Goldfinch” — much of the content that individuals produce is ephemeral. Whatever bit of content is in front of someone — text messages, Facebook posts, tweets — is quickly replaced by more and different. For Snapchat, the fact that photos and videos disappear almost immediately is not a flaw, it’s a feature. Users can send content into the world with little fear of creating a trail of digital breadcrumbs that advertisers, parents or potential employers could follow. Warhol’s 15 minutes of fame has been replaced by less than 15 seconds on Snapchat.

Facebook, which is a weave of news encompassing both the self and the world, has become, for many, a de facto operating system on the web. And many of the people who aren’t busy on Facebook are up for grabs on the web but locked up on various messaging apps. What used to be called the audience is disappearing into apps, messaging and user-generated content. Media companies in search of significant traffic have to find a way into that stream.

“The majority of time that people are spending online is on Facebook,” said Anthony De Rosa, editor in chief of Circa, a mobile news start-up. “You have to find a way to break through or tap into all that narcissism. We are way too into ourselves.”

Read the entire article here.

Why, Not What

Great leaders, be they individuals, organizations or companies, share a simple yet powerful trait. Ethnographer Simon Sinek tells us what sets great leaders apart — think the Wright brothers, Martin Luther King, Apple — and why some ideas take root while others don’t.

[tube]qp0HIF3SfI4[/tube]