My thoughts go to the families and friends of those who lost their lives today in Paris.
The pursuit of all things self continues unabated in 2015. One has to wonder what the children of the self-absorbed, selfie generations will be like. Or perhaps there will be few or no children, because many of the self-absorbed will remain, well, rather too self-absorbed.
Sometimes you don’t need an analyst’s report to get a look at the future of the media industry and the challenges it will bring.
On New Year’s Eve, I was one of the poor souls working in Times Square. By about 1 p.m., it was time to evacuate, and when I stepped into the cold that would assault the huddled, partying masses that night, a couple was getting ready to pose for a photo with the logo on The New York Times Building in the background. I love that I work at a place that people deem worthy of memorializing, and I often offer to help.
My assistance was not required. As I watched, the young couple mounted their phone on a collapsible pole, then extended it outward, the camera now able to capture the moment in wide-screen glory.
I’d seen the same phenomenon when I was touring the Colosseum in Rome last month. So many people were fighting for space to take selfies with their long sticks — what some have called the “Narcissistick” — that it looked like a reprise of the gladiatorial battles the place once hosted.
The urge to stare at oneself predates mirrors — you could imagine a Neanderthal fussing with his hair, his image reflected in a pool of water — but it has some pretty modern dimensions. In the forest of billboards in Times Square, the one with a camera that captures the people looking at the billboard always draws a big crowd.
Selfies are hardly new, but the incremental improvement in technology of putting a phone on a stick — a curiously analog fix that Time magazine listed as one of the best inventions of 2014 along with something called the “high-beta fusion reactor” — suggests that the séance with the self is only going to grow. (Selfie sticks are often used to shoot from above, which any self-respecting selfie auteur will tell you is the most flattering angle.)
There are now vast, automated networks to harvest all that narcissism, along with lots of personal data, creating extensive troves of user-generated content. The tendency to listen to the holy music of the self is reflected in the abundance of messaging and self-publishing services — Vine, WhatsApp, Snapchat, Instagram, Apple’s new voice messaging and the rest — all of which pose a profound challenge for media companies. Most media outfits are in the business of one-to-many, creating single pieces of text, images or audio meant to be shared by the masses.
But most sharing does not involve traditional media companies. Consumers are increasingly glued to their Facebook feeds as a source of information about not just their friends but the broader world as well. And with the explosive growth of Snapchat, the fastest-growing social app of the last year, much of the sharing that takes place involves one-to-one images that come and go in 10 seconds or less. Getting a media message — a television show, a magazine, a website, not to mention the ads that pay for most of it — into the intimate space between consumers and a torrent of information about themselves is only going to be more difficult.
I’ve been around since before there was a consumer Internet, but my frame of reference is as neither a Luddite nor a curmudgeon. I didn’t end up with over half a million followers on social media — Twitter and Facebook combined — by posting only about broadband regulations and cable deals. (Not all self-flattering portraits are rendered in photos. You see what I did there, right?) The enhanced ability to communicate and share in the current age has many tangible benefits.
My wife travels a great deal, sometimes to conflicted regions, and WhatsApp’s global reach gives us a stable way of staying in touch. Over the holidays, our family shared endless photos, emoticons and inside jokes in group messages that were very much a part of Christmas. Not that long ago, we might have spent the time gathered around watching “Elf,” but this year, we were brought together by the here and now, the familiar, the intimate and personal. We didn’t need a traditional media company to help us create a shared experience.
Many younger consumers have become mini-media companies themselves, madly distributing their own content on Vine, Instagram, YouTube and Snapchat. It’s tough to get their attention on media created for the masses when they are so busy producing their own. And while the addiction to self is not restricted to millennials — boomers bow to no one in terms of narcissism — there are now easy-to-use platforms that amplify that self-reflecting impulse.
While legacy media companies still make products meant to be studied and savored over varying lengths of time — the movie “Boyhood,” The Atlantic magazine, the novel “The Goldfinch” — much of the content that individuals produce is ephemeral. Whatever bit of content is in front of someone — text messages, Facebook posts, tweets — is quickly replaced by more and different. For Snapchat, the fact that photos and videos disappear almost immediately is not a flaw, it’s a feature. Users can send content into the world with little fear of creating a trail of digital breadcrumbs that advertisers, parents or potential employers could follow. Warhol’s 15 minutes of fame has been replaced by less than 15 seconds on Snapchat.
Facebook, which is a weave of news encompassing both the self and the world, has become, for many, a de facto operating system on the web. And many of the people who aren’t busy on Facebook are up for grabs on the web but locked up on various messaging apps. What used to be called the audience is disappearing into apps, messaging and user-generated content. Media companies in search of significant traffic have to find a way into that stream.
“The majority of time that people are spending online is on Facebook,” said Anthony De Rosa, editor in chief of Circa, a mobile news start-up. “You have to find a way to break through or tap into all that narcissism. We are way too into ourselves.”
Read the entire article here.
How well do you really know yourself? Go beyond your latte preferences and your favorite movies. Knowing yourself means being familiar with your most intimate thoughts, desires and fears, your character traits and flaws, your values. For many, this quest for self-knowledge is a lifelong process. And it may begin with knowing about your socks.
Most people wonder at some point in their lives how well they know themselves. Self-knowledge seems a good thing to have, but hard to attain. To know yourself would be to know such things as your deepest thoughts, desires and emotions, your character traits, your values, what makes you happy and why you think and do the things you think and do. These are all examples of what might be called “substantial” self-knowledge, and there was a time when it would have been safe to assume that philosophy had plenty to say about the sources, extent and importance of self-knowledge in this sense.
Not any more. With few exceptions, philosophers of self-knowledge nowadays have other concerns. Here’s an example of the sort of thing philosophers worry about: suppose you are wearing socks and believe you are wearing socks. How do you know that that’s what you believe? Notice that the question isn’t: “How do you know you are wearing socks?” but rather “How do you know you believe you are wearing socks?” Knowledge of such beliefs is seen as a form of self-knowledge. Other popular examples of self-knowledge in the philosophical literature include knowing that you are in pain and knowing that you are thinking that water is wet. For many philosophers the challenge is to explain how these types of self-knowledge are possible.
This is usually news to non-philosophers. Most people certainly imagine that philosophy tries to answer the Big Questions, and “How do you know you believe you are wearing socks?” doesn’t sound much like one of them. If knowing that you believe you are wearing socks qualifies as self-knowledge at all — and even that isn’t obvious — it is self-knowledge of the most trivial kind. Non-philosophers find it hard to figure out why philosophers would be more interested in trivial than in substantial self-knowledge.
One common reaction to the focus on trivial self-knowledge is to ask, “Why on earth would you be interested in that?” — or, more pointedly, “Why on earth would anyone pay you to think about that?” Philosophers of self-knowledge aren’t deterred. It isn’t unusual for them to start their learned articles and books on self-knowledge by declaring that they aren’t going to be discussing substantial self-knowledge because that isn’t where the philosophical action is.
How can that be? It all depends on your starting point. For example, to know that you are wearing socks requires effort, even if it’s only the minimal effort of looking down at your feet. When you look down and see the socks on your feet you have evidence — the evidence of your senses — that you are wearing socks, and this illustrates what seems a general point about knowledge: knowledge is based on evidence, and our beliefs about the world around us can be wrong. Evidence can be misleading and conclusions from evidence unwarranted. Trivial self-knowledge seems different. On the face of it, you don’t need evidence to know that you believe you are wearing socks, and there is a strong presumption that your beliefs about your own beliefs and other states of mind aren’t mistaken. Trivial self-knowledge is direct (not based on evidence) and privileged (normally immune to error). Given these two background assumptions, it looks like there is something here that needs explaining: How is trivial self-knowledge, with all its peculiarities, possible?
From this perspective, trivial self-knowledge is philosophically interesting because it is special. “Special” in this context means special from the standpoint of epistemology or the philosophy of knowledge. Substantial self-knowledge is much less interesting from this point of view because it is like any other knowledge. You need evidence to know your own character and values, and your beliefs about your own character and values can be mistaken. For example, you think you are generous but your friends know you better. You think you are committed to racial equality but your behaviour suggests otherwise. Once you think of substantial self-knowledge as neither direct nor privileged why would you still regard it as philosophically interesting?
What is missing from this picture is any real sense of the human importance of self-knowledge. Self-knowledge matters to us as human beings, and the self-knowledge which matters to us as human beings is substantial rather than trivial self-knowledge. We assume that on the whole our lives go better with substantial self-knowledge than without it, and what is puzzling is how hard it can be to know ourselves in this sense.
The assumption that self-knowledge matters is controversial and philosophy might be expected to have something to say about the importance of self-knowledge, as well as its scope and extent. The interesting questions in this context include “Why is substantial self-knowledge hard to attain?” and “To what extent is substantial self-knowledge possible?”
Read the entire article here.
Image courtesy of DuckDuckGo Search.
Poverty and wealth are relative terms here in the United States. Certainly, those who have amassed mere millions will seem “poor” next to the established and nouveau-riche billionaires. Yet there is something rather surreal in the spectacle of watching Los Angeles’ lesser millionaires fight the mega-rich over their excess. As Peter Haldeman says in the following article of Michael Ovitz, founder of Creative Artists Agency, mere millionaire and landlord of a 28,000-square-foot mega-mansion, “Mr. Ovitz calling out a neighbor for overbuilding is a little like Lady Gaga accusing someone of overdressing.” Welcome to the giga-mansion — the Roman emperor Caligula would feel much at home in this Californian circus of excess.
At the end of a narrow, twisting side street not far from the Hotel Bel-Air rises a knoll that until recently was largely covered with scrub brush and Algerian ivy. Now the hilltop is sheared and graded, girded by caissons sprouting exposed rebar. “They took 50- or 60,000 cubic yards of dirt out of the place,” said Fred Rosen, a neighbor, glowering at the site from behind the wheel of his Cadillac Escalade on a sunny October afternoon.
Mr. Rosen, who used to run Ticketmaster, has lately devoted himself to the homeowners alliance he helped form shortly after this construction project was approved. When it is finished, a modern compound of glass and steel will rise two stories, encompass several structures and span — wait for it — some 90,000 square feet.
In an article titled “Here Comes L.A.’s Biggest Residence,” The Los Angeles Business Journal announced in June that the house, conceived by Nile Niami, a film producer turned developer, with an estimated sale price “in the $150 million range,” will feature a cantilevered tennis court and five swimming pools. “We’re talking 200 construction trucks a day,” fumed Mr. Rosen. “Then multiply that by all the other giant projects. More than a million cubic yards of this hillside have been taken out. What happens when the next earthquake comes? How nuts is all this?”
By “all this,” he means not just the house with five swimming pools but the ever-expanding number of houses the size of Hyatt resorts rising in the most expensive precincts of Los Angeles. Built for the most part on spec, bestowed with names as assuming as their dimensions, these behemoths are transforming once leafy and placid neighborhoods into dusty enclaves carved by retaining walls and overrun by dirt haulers and cement mixers. “Twenty-thousand-square-foot homes have become teardowns for people who want to build 70-, 80-, and 90,000-square-foot homes,” Los Angeles City Councilman Paul Koretz said. So long, megamansion. Say hello to the gigamansion.
In Mr. Rosen’s neighborhood, ground was recently broken on a 70,000- to 80,000-square-foot Mediterranean manse for a citizen of Qatar, while Chateau des Fleurs, a 60,000-square-foot pile with a 40-car underground garage, is nearing completion. Not long ago, Anthony Pritzker, an heir to the Hyatt hotel fortune, built a boxy contemporary residence for himself in Beverly Hills that covers just shy of 50,000 square feet. And Mohamed Hadid, a prolific and high-profile developer (he has appeared on “The Shahs of Sunset” and “The Real Housewives of Beverly Hills”), is known for two palaces that measure 48,000 square feet each: Le Palais in Beverly Hills, which has a swan pond and a Jacuzzi that seats 20 people, and Le Belvédère in Bel Air, which features a Turkish hammam and a ballroom for 250.
Why are people building houses the size of shopping malls? Because they can. “Why do you see a yacht 500 feet long when you could easily have the same fun in one half the size?” asked Jeffrey Hyland, a partner in the Beverly Hills real estate firm Hilton & Hyland, who is developing five 50,000-square-foot properties on the site of the old Merv Griffin estate in Beverly Hills.
Le Belvédère was reportedly purchased by an Indonesian buyer, and Le Palais sold to a daughter of President Islam Karimov of Uzbekistan. According to Mr. Hyland, the market for these Versailles knockoffs is “flight capital.” “It’s oligarchs, oilgarchs, people from Asia, people who came up with the next app for the iPhone,” he said. While global wealth is pouring into other American cities as well, Los Angeles is still a relative bargain, Mr. Hyland said, adding: “Here you can buy the best house for $3,000 a square foot. In Manhattan, you’re looking at $11,000 a square foot and you get a skybox.”
Speculators are tapping the demand, snapping up the best lots, bulldozing whatever is on them and building not only domiciles but also West Coast “lifestyles.” The particulars can seem a little puzzling to the uninitiated. The very busy Mr. Niami (he also built the Winklevoss twins’ perch above the Sunset Strip) constructed a 30,000-square-foot Mediterranean-style house in Holmby Hills that locals have called the Fendi Casa because it was filled with furniture and accessories from the Italian fashion house.
The residence also offered indoor and outdoor pools, commissioned artwork by the graffiti artist Retna, and an operating room in the basement. “It’s not like it’s set up to take out your gallbladder,” said Mark David, a real estate columnist for Variety, who has toured the house. “It’s for cosmetic procedures — fillers, dermabrasion, that kind of thing.” The house sold, with all its furnishings, to an unidentified Saudi buyer for $44 million.
Read the entire article here.
Image: Satellite view of the 70,000 square foot giga-mansion development in Bel Air. Los Angeles. Courtesy of Google Maps.
At some point in the not too distant future artificial intelligences will far exceed humans in most capacities (except shopping and beer drinking). The scripts of most Hollywood movies suggest that we humans would be (mostly) wiped out by AI machines, beings, robots or other non-human forms — we being the lesser organisms, superfluous to AI needs.
Perhaps we may find an alternate path to a more benign coexistence, much like that posited in The Culture novels of the dearly departed Iain M. Banks. I’ll go with Mr. Banks’s version. Though, just perhaps, evolution is supposed to leave us behind, replacing our simplistic, selfish intelligence with a much more advanced, non-human version.
From the Guardian:
From 2001: A Space Odyssey to Blade Runner and RoboCop to The Matrix, how humans deal with the artificial intelligence they have created has proved a fertile dystopian territory for film-makers. More recently Spike Jonze’s Her and Alex Garland’s forthcoming Ex Machina explore what it might be like to have AI creations living among us and, as Alan Turing’s famous test foregrounded, how tricky it might be to tell the flesh and blood from the chips and code.
These concerns are even troubling some of Silicon Valley’s biggest names: last month Tesla’s Elon Musk described AI as mankind’s “biggest existential threat… we need to be very careful”. What many of us don’t realise is that AI isn’t some far-off technology that only exists in film-makers’ imaginations and computer scientists’ labs. Many of our smartphones employ rudimentary AI techniques to translate languages or answer our queries, while video games employ AI to generate complex, ever-changing gaming scenarios. And so long as Silicon Valley companies such as Google and Facebook continue to acquire AI firms and hire AI experts, AI’s IQ will continue to rise…
Isn’t AI a Steven Spielberg movie?
No arguments there, but the term, which stands for “artificial intelligence”, has a more storied history than Spielberg and Kubrick’s 2001 film. The concept of artificial intelligence goes back to the birth of computing: in 1950, just 14 years after defining the concept of a general-purpose computer, Alan Turing asked “Can machines think?”
It’s something that is still at the front of our minds 64 years later, most recently becoming the core of Alex Garland’s new film, Ex Machina, which sees a young man asked to assess the humanity of a beautiful android. The concept is not a million miles removed from that set out in Turing’s 1950 paper, Computing Machinery and Intelligence, in which he laid out a proposal for the “imitation game” – what we now know as the Turing test. Hook a computer up to a text terminal and let it have conversations with a human interrogator, while a real person does the same. The heart of the test is whether, when you ask the interrogator to guess which is the human, “the interrogator [will] decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman”.
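The protocol Turing describes can be sketched in a few lines of code. This is a toy illustration only, not a real Turing test: the respondents, their canned replies, and the guessing strategy are all hypothetical stand-ins invented for this sketch.

```python
# Minimal sketch of the "imitation game": an interrogator sees transcripts
# from two hidden respondents, labelled A and B, and must guess which label
# hides the machine. All functions and replies here are illustrative.
import random

def human_respondent(question: str) -> str:
    # Stand-in for the person at the other terminal.
    return f"Honestly, I'd have to think about '{question}'."

def machine_respondent(question: str) -> str:
    # Stand-in for the program under test: a canned-reply chatbot.
    canned = {"are you human?": "Of course I am. Are you?"}
    return canned.get(question.lower(), "Interesting question. Why do you ask?")

def imitation_game(questions, guesser, seed=0):
    """Run one round: hide the respondents behind labels, collect their
    answers, and return True if the guesser correctly names the machine."""
    rng = random.Random(seed)
    labels = {"A": human_respondent, "B": machine_respondent}
    if rng.random() < 0.5:  # shuffle who sits behind which terminal
        labels = {"A": machine_respondent, "B": human_respondent}
    transcript = {lab: [fn(q) for q in questions] for lab, fn in labels.items()}
    guess = guesser(transcript)  # guesser returns "A" or "B"
    truth = next(lab for lab, fn in labels.items() if fn is machine_respondent)
    return guess == truth

result = imitation_game(["Are you human?"], guesser=lambda t: "B")
print("machine identified:", result)
```

An interrogator who guesses at random identifies the machine only half the time on average; Turing’s point is that a machine “passes” when no questioning strategy does reliably better.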
Turing said that asking whether machines could pass the imitation game is more useful than the vague and philosophically unclear question of whether or not they “think”. “The original question… I believe to be too meaningless to deserve discussion.” Nonetheless, he thought that by the year 2000, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted”.
In terms of natural language, he wasn’t far off. Today, it is not uncommon to hear people talking about their computers being “confused”, or taking a long time to do something because they’re “thinking about it”. But even if we are stricter about what counts as a thinking machine, it’s closer to reality than many people think.
So AI exists already?
It depends. We are still nowhere near to passing Turing’s imitation game, despite reports to the contrary. In June, a chatbot called Eugene Goostman successfully fooled a third of judges in a mock Turing test held in London into thinking it was human. But rather than being able to think, Eugene relied on a clever gimmick and a host of tricks. By pretending to be a 13-year-old boy who spoke English as a second language, the machine explained away its many incoherencies, and with a smattering of crude humour and offensive remarks, managed to redirect the conversation when unable to give a straight answer.
The most immediate use of AI tech is natural language processing: working out what we mean when we say or write a command in colloquial language. For something that babies begin to do before they can even walk, it’s an astonishingly hard task. Consider the phrase beloved of AI researchers – “time flies like an arrow, fruit flies like a banana”. Breaking the sentence down into its constituent parts confuses even native English speakers, let alone an algorithm.
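The ambiguity in that phrase can be made concrete with a toy tagger. The lexicon and sentence templates below are assumptions invented purely for illustration; a real parser would use a full grammar, but even this sketch shows how several readings survive:

```python
# Toy illustration (not a real parser): each word in "time flies like an
# arrow" admits more than one part of speech, so several tag sequences
# form grammatical readings. Lexicon and templates are assumptions.
from itertools import product

LEXICON = {
    "time":  {"NOUN", "VERB"},
    "flies": {"NOUN", "VERB"},
    "like":  {"PREP", "VERB"},
    "an":    {"DET"},
    "arrow": {"NOUN"},
}

# Three simple sentence shapes a reading might match:
TEMPLATES = {
    ("NOUN", "VERB", "PREP", "DET", "NOUN"),  # time passes the way an arrow does
    ("NOUN", "NOUN", "VERB", "DET", "NOUN"),  # "time flies" are fond of an arrow
    ("VERB", "NOUN", "PREP", "DET", "NOUN"),  # imperative: clock the flies as an arrow would
}

words = "time flies like an arrow".split()
readings = [tags for tags in product(*(sorted(LEXICON[w]) for w in words))
            if tags in TEMPLATES]

for r in readings:
    print(list(zip(words, r)))
```

Even with this tiny grammar, three distinct readings come out, which is exactly the sort of combinatorial ambiguity that makes natural language processing hard for an algorithm.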
Read the entire article here.
New results are in, and yes, money can buy you happiness. But the picture from some extensive new research shows that your happiness depends much more on how you spend your money than on how much you earn. Generally, you are more likely to be happier if you give money away rather than fritter it on yourself. Also, you are more likely to be happier if you spend it on an experience rather than things.
From the WSJ:
It’s an age-old question: Can money buy happiness?
Over the past few years, new research has given us a much deeper understanding of the relationship between what we earn and how we feel. Economists have been scrutinizing the links between income and happiness across nations, and psychologists have probed individuals to find out what really makes us tick when it comes to cash.
The results, at first glance, may seem a bit obvious: Yes, people with higher incomes are, broadly speaking, happier than those who struggle to get by.
But dig a little deeper into the findings, and they get a lot more surprising—and a lot more useful.
In short, this latest research suggests, wealth alone doesn’t provide any guarantee of a good life. What matters a lot more than a big income is how people spend it. For instance, giving money away makes people a lot happier than lavishing it on themselves. And when they do spend money on themselves, people are a lot happier when they use it for experiences like travel than for material goods.
With that in mind, here’s what the latest research says about how people can make smarter use of their dollars and maximize their happiness.
Ryan Howell was bothered by a conundrum. Numerous studies conducted over the past 10 years have shown that life experiences give us more lasting pleasure than material things, and yet people still often deny themselves experiences and prioritize buying material goods.
So, Prof. Howell, associate professor of psychology at San Francisco State University, decided to look at what’s going on. In a study published earlier this year, he found that people think material purchases offer better value for the money because experiences are fleeting, and material goods last longer. So, although they’ll occasionally splurge on a big vacation or concert tickets, when they’re in more money-conscious mode, they stick to material goods.
But in fact, Prof. Howell found that when people looked back at their purchases, they realized that experiences actually provided better value.
“What we find is that there’s this huge misforecast,” he says. “People think that experiences are only going to provide temporary happiness, but they actually provide both more happiness and more lasting value.” And yet we still keep on buying material things, he says, because they’re tangible and we think we can keep on using them.
Cornell University psychology professor Thomas Gilovich has reached similar conclusions. “People often make a rational calculation: I have a limited amount of money, and I can either go there, or I can have this,” he says. “If I go there, it’ll be great, but it’ll be done in no time. If I buy this thing, at least I’ll always have it. That is factually true, but not psychologically true. We adapt to our material goods.”
It’s this process of “hedonic adaptation” that makes it so hard to buy happiness through material purchases. The new dress or the fancy car provides a brief thrill, but we soon come to take it for granted.
Experiences, on the other hand, tend to meet more of our underlying psychological needs, says Prof. Gilovich. They’re often shared with other people, giving us a greater sense of connection, and they form a bigger part of our sense of identity. If you’ve climbed in the Himalayas, that’s something you’ll always remember and talk about, long after all your favorite gadgets have gone to the landfill.
Read the entire article here.
Image courtesy of Google Search.
One has to wonder how Jean-Paul Sartre would have been regarded today had he accepted the Nobel Prize in Literature in 1964, or had the characters of Monty Python not used him as a punching bag in one of their infamous, satirical philosopher sketches:
Mrs Conclusion: What was Jean-Paul like?
Mrs Premise: Well, you know, a bit moody. Yes, he didn’t join in the fun much. Just sat there thinking. Still, Mr Rotter caught him a few times with the whoopee cushion. (she demonstrates) Le Capitalisme et La Bourgeoisie ils sont la même chose… Oooh we did laugh…
From the Guardian:
In this age in which all shall have prizes, in which every winning author knows what’s necessary in the post-award trial-by-photoshoot (Book jacket pressed to chest? Check. Wall-to-wall media? Check. Backdrop of sponsor’s logo? Check) and in which scarcely anyone has the couilles, as they say in France, to politely tell judges where they can put their prize, how lovely to recall what happened on 22 October 1964, when Jean-Paul Sartre turned down the Nobel prize for literature.
“I have always declined official honours,” he explained at the time. “A writer should not allow himself to be turned into an institution. This attitude is based on my conception of the writer’s enterprise. A writer who adopts political, social or literary positions must act only within the means that are his own – that is, the written word.”
Throughout his life, Sartre agonised about the purpose of literature. In 1947’s What is Literature?, he jettisoned a sacred notion of literature as capable of replacing outmoded religious beliefs in favour of the view that it should have a committed social function. However, the last pages of his enduringly brilliant memoir Words, published the same year as the Nobel refusal, despair over that function: “For a long time I looked on my pen as a sword; now I know how powerless we are.” Poetry, wrote Auden, makes nothing happen; politically committed literature, Sartre was saying, was no better. In rejecting the honour, Sartre worried that the Nobel was reserved for “the writers of the west or the rebels of the east”. He didn’t damn the Nobel in quite the bracing terms that led Hari Kunzru to decline the 2003 John Llewellyn Rhys prize, sponsored by the Mail on Sunday (“As the child of an immigrant, I am only too aware of the poisonous effect of the Mail’s editorial line”), but gently pointed out its Eurocentric shortcomings. Plus, one might say 50 years on, ça change. Sartre said that he might have accepted the Nobel if it had been offered to him during France’s imperial war in Algeria, which he vehemently opposed, because then the award would have helped in the struggle, rather than making Sartre into a brand, an institution, a depoliticised commodity. Truly, it’s difficult not to respect his compunctions.
But the story is odder than that. Sartre read in Figaro Littéraire that he was in the frame for the award, so he wrote to the Swedish Academy saying he didn’t want the honour. He was offered it anyway. “I was not aware at the time that the Nobel prize is awarded without consulting the opinion of the recipient,” he said. “But I now understand that when the Swedish Academy has made a decision, it cannot subsequently revoke it.”
Regrets? Sartre had a few – at least about the money. His principled stand cost him 250,000 kronor (about £21,000), prize money that, he reflected in his refusal statement, he could have donated to the “apartheid committee in London” who badly needed support at the time. All of which makes one wonder what his compatriot, Patrick Modiano, the 15th Frenchman to win the Nobel for literature earlier this month, did with his 8m kronor (about £700,000).
The Swedish Academy had selected Sartre for having “exerted a far-reaching influence on our age”. Is this still the case? Though he was lionised by student radicals in Paris in May 1968, his reputation as a philosopher was on the wane even then. His brand of existentialism had been eclipsed by structuralists (such as Lévi-Strauss and Althusser) and post-structuralists (such as Derrida and Deleuze). Indeed, Derrida would spend a great deal of effort deriding Sartrean existentialism as a misconstrual of Heidegger. Anglo-Saxon analytic philosophy, with the notable exception of Iris Murdoch and Arthur Danto, has for the most part been sniffy about Sartre’s philosophical credentials.
Sartre’s later reputation probably hasn’t benefited from being championed by Paris’s philosophical lightweight, Bernard-Henri Lévy, who subtitled his biography of his hero The Philosopher of the Twentieth Century (Really? Not Heidegger, Russell, Wittgenstein or Adorno?); still less by his appearance in Monty Python’s least funny philosophy sketch, “Mrs Premise and Mrs Conclusion visit Jean-Paul Sartre at his Paris home”. Sartre has become more risible than lisible: unremittingly depicted as laughable philosopher toad – ugly, randy, incomprehensible, forever excitably over-caffeinated at Les Deux Magots with Simone de Beauvoir, encircled with pipe smoke and mired in philosophical jargon, not so much a man as a stock pantomime figure. He deserves better.
How then should we approach Sartre’s writings in 2014? So much of his lifelong intellectual struggle and his work still seems pertinent. When we read the “Bad Faith” section of Being and Nothingness, it is hard not to be struck by the image of the waiter who is too ingratiating and mannered in his gestures, and how that image pertains to the dismal drama of inauthentic self-performance that we find in our culture today. When we watch his play Huis Clos, we might well think of how disastrous our relations with other people are, since we now require them, more than anything else, to confirm our self-images, while they, no less vexingly, chiefly need us to confirm theirs. When we read his claim that humans can, through imagination and action, change our destiny, we feel something of the burden of responsibility of choice that makes us moral beings. True, when we read such sentences as “the being by which Nothingness comes to the world must be its own Nothingness”, we might want to retreat to a dark room for a good cry, but let’s not spoil the story.
His lifelong commitments to socialism, anti-fascism and anti-imperialism still resonate. When we read, in his novel Nausea, of the protagonist Antoine Roquentin in Bouville’s art gallery, looking at pictures of self-satisfied local worthies, we can apply his fury at their subjects’ self-entitlement to today’s images of the powers that be (the suppressed photo, for example, of Cameron and his cronies in Bullingdon pomp), and share his disgust that such men know nothing of what the world is really like in all its absurd contingency.
In his short story Intimacy, we confront a character who, like all of us on occasion, is afraid of the burden of freedom and does everything possible to make others take her decisions for her. When we read his distinctions between being-in-itself (être-en-soi), being-for-itself (être-pour-soi) and being-for-others (être-pour-autrui), we are encouraged to think about the tragicomic nature of what it is to be human – a longing for full control over one’s destiny and for absolute identity, and at the same time, a realisation of the futility of that wish.
The existential plight of humanity, our absurd lot, our moral and political responsibilities that Sartre so brilliantly identified have not gone away; rather, we have chosen the easy path of ignoring them. That is not a surprise: for Sartre, such refusal to accept what it is to be human was overwhelmingly, paradoxically, what humans do.
Read the entire article here.
Image: Jean-Paul Sartre (c1950). Courtesy: Archivo del diario Clarín, Buenos Aires, Argentina
Linguist, philosopher, and more recently political activist, Noam Chomsky penned the title phrase in the late 1950s. The sentence is grammatically correct, but semantically nonsensical. Some now maintain that many of Chomsky’s early ideas on the innateness of human language are equally nonsensical. Chomsky popularized the idea that language is innate to humans; that somehow and somewhere the minds of human infants contain a mechanism that can make sense of language by applying rules encoded in and activated by our genes. Steven Pinker expanded on Chomsky’s theory by proposing that the mind contains an innate device that encodes a common, universal grammar, which is foundational to all languages across all human societies.
Recently, however, this notion has come under increasing criticism. A growing number of prominent linguistic scholars, including Professor Vyvyan Evans, maintain that Chomsky’s and Pinker’s linguistic models are outdated — that a universal grammar is nothing but a finely tuned myth. Evans and others maintain that language arises from and is directly embodied in experience.
From the New Scientist:
The ideas of Noam Chomsky, popularised by Steven Pinker, come under fire in Vyvyan Evans’s book The Language Myth: Why language is not an instinct
Is the way we think about language on the cusp of a revolution? After reading The Language Myth, it certainly looks as if a major shift is in progress, one that will open people’s minds to liberating new ways of thinking about language.
I came away excited. I found that words aren’t so much things that can be limited by a dictionary definition but are encyclopaedic, pointing to sets of concepts. There is the intriguing notion that language will always be less rich than our ideas and there will always be things we cannot quite express. And there is the growing evidence that words are rooted in concepts built out of our bodily experience of living in the world.
Its author, Vyvyan Evans, is a professor of linguistics at Bangor University, UK, and his primary purpose is not so much to map out the revolution (that comes in a sequel) but to prepare you for it by sweeping out old ideas. The book is sure to whip up a storm, because in his sights are key ideas from some of the world’s great thinkers, including philosophers Noam Chomsky and Jerry Fodor.
Ideas about language that have entered the public consciousness are more myth than reality, Evans argues. Bestsellers by Steven Pinker, the Harvard University professor who popularised Chomsky in The Language Instinct, How the Mind Works and The Stuff of Thought, come in for particular criticism. “Science has moved on,” Evans writes. “And to end it all, Pinker is largely wrong, about language and about a number of other things too…”
The commonplace view of “language as instinct” is the myth Evans wants to destroy and he attempts the operation with great verve. The myth comes from the way children effortlessly learn languages just by listening to adults around them, without being aware explicitly of the governing grammatical rules.
This “miracle” of spontaneous learning led Chomsky to argue that grammar is stored in a module of the mind, a “language acquisition device”, waiting to be activated, stage-by-stage, when an infant encounters the jumble of language. The rules behind language are built into our genes.
This innate grammar is not the grammar of a school textbook, but a universal grammar, capable of generating the rules of any of the 7000 or so languages that a child might be exposed to, however different they might appear. In The Language Instinct, Pinker puts it this way: “a Universal Grammar, not reducible to history or cognition, underlies the human language instinct”. The search for that universal grammar has kept linguists busy for half a century.
They may have been chasing a mirage. Evans marshals impressive empirical evidence to take apart different facets of the “language instinct myth”. A key criticism is that the more languages are studied, the more their diversity becomes apparent and an underlying universal grammar less probable.
In a whistle-stop tour, Evans tells stories of languages with a completely free word order, including Jiwarli and Thalanyji from Australia. Then there’s the Inuit language Inuktitut, which builds sentences out of prefixes and suffixes to create giant words like tawakiqutiqarpiit, roughly meaning: “Do you have any tobacco for sale?” And there is the native Canadian language, Straits Salish, which appears not to have nouns or verbs.
An innate language module also looks shaky, says Evans, now scholars have watched languages emerge among communities of deaf people. A sign language is as rich grammatically as a spoken one, but new ones don’t appear fully formed as we might expect if grammar is laid out in our genes. Instead, they gain grammatical richness over several generations.
Now, too, we have detailed studies of how children acquire language. Grammatical sentences don’t start to pop out of their mouths at certain developmental stages, but rather bits and pieces emerge as children learn. At first, they use chunks of particular expressions they hear often, only gradually learning patterns and generalising to a fully fledged grammar. So grammars emerge from use, and the view of “language-as-instinct”, argues Evans, should be replaced by “language-as-use”.
The “innate” view also encounters a deep philosophical problem. If the rules of language are built into our genes, how is it that sentences mean something? How do they connect to our thoughts, concepts and to the outside world?
A solution from the language-as-instinct camp is that there is an internal language of thought called “mentalese”. In The Language Instinct, Pinker explains: “Knowing a language, then, is knowing how to translate mentalese into strings of words.” But philosophers are left arguing over the same question once removed: how does mentalese come to have meaning?
Read the entire article here.
Those who decry benefits fraud in their own nations should look to the illustrious example of Italian “miner” Carlo Cani. His adventures in absconding from work over a period of 35 years (yes, years) would make a wonderful indie movie, and should be an inspiration to less ambitious slackers the world over.
From the Telegraph:
An Italian coal miner’s confession that he is drawing a pension despite hardly ever putting in a day’s work over a 35-year career has underlined the country’s problem with benefit fraud and its dysfunctional pension system.
Carlo Cani started work as a miner in 1980 but soon found that he suffered from claustrophobia and hated being underground.
He started doing everything he could to avoid hacking away at the coal face, inventing an imaginative range of excuses for not venturing down the mine in Sardinia where he was employed.
He pretended to be suffering from amnesia and haemorrhoids, rubbed coal dust into his eyes to feign an infection and on occasion staggered around pretending to be drunk.
The miner, now aged 60, managed to accumulate years of sick leave, apparently with the help of compliant doctors, and was able to stay at home to indulge his passion for jazz.
He also spent extended periods of time at home on reduced pay when demand for coal from the mine dipped, under an Italian system known as “cassa integrazione” in which employees are kept on the payroll during periods of economic difficulty for their companies.
Despite his long periods of absence, he was still officially an employee of the mining company, Carbosulcis, and therefore eventually entitled to a pension.
“I invented everything – amnesia, pains, haemorrhoids, I used to lurch around as if I was drunk. I bumped my thumb on a wall and obviously you can’t work with a swollen thumb,” Mr Cani told La Stampa daily on Tuesday.
“Other times I would rub coal dust into my eyes. I just didn’t like the work – being a miner was not the job for me.”
But rather than find a different occupation, he managed to milk the system for 35 years, until retiring on a pension in 2006 at the age of just 52.
“I reached the pensionable age without hardly ever working. I hated being underground. Right from the start, I had no affinity for coal.”
He said he had “respect” for his fellow miners, who had earned their pensions after “years of sweat and back-breaking work”, while he had mostly rested at home.
The case only came to light this week but has caused such a furore in Italy that Mr Cani is now refusing to take telephone calls.
He could not be contacted but another Carlo Cani, who is no relation but lives in the same area of southern Sardinia and has his number listed in the phone book, said: “People round here are absolutely furious about this – to think that someone could skive off work for so long and still get his pension. He even seems to be proud of that fact.
“It’s shameful. This is a poor region and there is no work. All the young people are leaving and moving to England and Germany.”
The former miner’s work-shy ways have caused indignation in a country in which youth unemployment is more than 40 per cent.
Read the entire story here.
Image: Bituminous coal. The type of coal not mined by retired “miner” Carlo Cani. Courtesy of Wikipedia.
A previously unpublished essay by Isaac Asimov on the creative process shows us his well reasoned thinking on the subject. While he believed that deriving new ideas could be done productively in a group, he seemed to gravitate more towards the notion of the lone creative genius. Both, however, require the innovator(s) to cross-connect thoughts, often from disparate sources.
From Technology Review:
How do people get new ideas?
Presumably, the process of creativity, whatever it is, is essentially the same in all its branches and varieties, so that the evolution of a new art form, a new gadget, a new scientific principle, all involve common factors. We are most interested in the “creation” of a new scientific principle or a new application of an old one, but we can be general here.
One way of investigating the problem is to consider the great ideas of the past and see just how they were generated. Unfortunately, the method of generation is never clear even to the “generators” themselves.
But what if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.
There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”
Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).
Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.
Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.
That is the crucial point; that is the rare characteristic that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”
But why didn’t he think of it? The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table. Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”
It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.
A person willing to fly in the face of reason, authority, and common sense must be a person of considerable self-assurance. Since he occurs only rarely, he must seem eccentric (in at least that respect) to the rest of us. A person eccentric in one respect is often eccentric in others.
Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)
Once you have the people you want, the next question is: Do you want to bring them together so that they may discuss the problem mutually, or should you inform each of the problem and allow them to work in isolation?
My feeling is that as far as creativity is concerned, isolation is required. The creative person is, in any case, continually working at it. His mind is shuffling his information at all times, even when he is not conscious of it. (The famous example of Kekule working out the structure of benzene in his sleep is well-known.)
The presence of others can only inhibit this process, since creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.
Nevertheless, a meeting of such people may be desirable for reasons other than the act of creation itself.
Read the entire article here.
If ever you needed a vivid example of corporate exploitation of the most vulnerable, this is it. So-called free-marketeers will sneer at any suggestion of corporate over-reach — they will chant that it’s just the free market at work. But the rules of this market, like those of many others, are written and enforced by the patricians and well-stacked against the plebs.
If you are a chief executive of a large company, you very likely have a noncompete clause in your contract, preventing you from jumping ship to a competitor until some period has elapsed. Likewise if you are a top engineer or product designer, holding your company’s most valuable intellectual property between your ears.
And you also probably have a noncompete agreement if you assemble sandwiches at Jimmy John’s sub sandwich chain for a living.
But what’s most startling about that information, first reported by The Huffington Post, is that it really isn’t all that uncommon. As my colleague Steven Greenhouse reported this year, employers are now insisting that workers in a surprising variety of relatively low- and moderate-paid jobs sign noncompete agreements.
Indeed, while HuffPo has no evidence that Jimmy John’s, a 2,000-location sandwich chain, ever tried to enforce the agreement to prevent some $8-an-hour sandwich maker or delivery driver from taking a job at the Blimpie down the road, there are other cases where low-paid or entry-level workers have had an employer try to restrict their employability elsewhere. The Times article tells of a camp counselor and a hair stylist who faced such restrictions.
American businesses are paying out a historically low proportion of their income in the form of wages and salaries. But the Jimmy John’s employment agreement is one small piece of evidence that workers, especially those without advanced skills, are also facing various practices and procedures that leave them worse off, even apart from what their official hourly pay might be. Collectively they tilt the playing field toward the owners of businesses and away from the workers who staff them.
You see it in disputes like the one heading to the Supreme Court over whether workers at an Amazon warehouse in Nevada must be paid for the time they wait to be screened at the end of the workday to ensure they have no stolen goods on them.
It’s evident in continuing lawsuits against Federal Express claiming that its “independent contractors” who deliver packages are in fact employees who are entitled to benefits and reimbursements of costs they incur.
And it is shown in the way many retailers assign hourly workers inconvenient schedules that can change at the last minute, giving them little ability to plan their lives (my colleague Jodi Kantor wrote memorably about the human effects of those policies on a Starbucks coffee worker in August, and Starbucks rapidly said it would end many of them).
These stories all expose the subtle ways that employers extract more value from their entry-level workers, at the cost of their quality of life (or, in the case of the noncompete agreements, freedom to leave for a more lucrative offer).
What’s striking about some of these labor practices is the absence of reciprocity. When a top executive agrees to a noncompete clause in a contract, it is typically the product of a negotiation in which there is some symmetry: The executive isn’t allowed to quit for a competitor, but he or she is guaranteed to be paid for the length of the contract even if fired.
Read the entire story here.
Image courtesy of Google Search.
Secular ideologues in the West believe they are on the moral high-ground. The separation of church (and mosque or synagogue) from state is, they believe, the path to a more just, equal and less-violent culture. They will cite example after example in contemporary and recent culture of terrible violence in the name of religious extremism and fundamentalism.
And, yet, step back for a minute from the horrendous stories and images of atrocities wrought by religious fanatics in Europe, Africa, Asia and the Middle East. Think of the recent histories of fledgling nations in Africa; the ethnic cleansings across much of Central and Eastern Europe — several times over; the egomaniacal tribal terrorists of Central Asia; the brutality of neo-fascists and their socialist bedfellows in Latin America. Delve deeper into these tragic histories — some still unfolding before our very eyes — and you will see a much more complex view of humanity. Our tribal rivalries know no bounds and our violence towards others is certainly not limited only to the catalyst of religion. Yes, we fight for our religion, but we also fight for territory, politics, resources, nationalism, revenge, poverty, ego. Soon the coming fights will be about water and food — these will make our wars over belief systems seem rather petty.
Scholar and author Karen Armstrong explores the complexities of religious and secular violence in the broader context of human struggle in her new book, Fields of Blood: Religion and the History of Violence.
From the Guardian:
As we watch the fighters of the Islamic State (Isis) rampaging through the Middle East, tearing apart the modern nation-states of Syria and Iraq created by departing European colonialists, it may be difficult to believe we are living in the 21st century. The sight of throngs of terrified refugees and the savage and indiscriminate violence is all too reminiscent of barbarian tribes sweeping away the Roman empire, or the Mongol hordes of Genghis Khan cutting a swath through China, Anatolia, Russia and eastern Europe, devastating entire cities and massacring their inhabitants. Only the wearily familiar pictures of bombs falling yet again on Middle Eastern cities and towns – this time dropped by the United States and a few Arab allies – and the gloomy predictions that this may become another Vietnam, remind us that this is indeed a very modern war.
The ferocious cruelty of these jihadist fighters, quoting the Qur’an as they behead their hapless victims, raises another distinctly modern concern: the connection between religion and violence. The atrocities of Isis would seem to prove that Sam Harris, one of the loudest voices of the “New Atheism”, was right to claim that “most Muslims are utterly deranged by their religious faith”, and to conclude that “religion itself produces a perverse solidarity that we must find some way to undercut”. Many will agree with Richard Dawkins, who wrote in The God Delusion that “only religious faith is a strong enough force to motivate such utter madness in otherwise sane and decent people”. Even those who find these statements too extreme may still believe, instinctively, that there is a violent essence inherent in religion, which inevitably radicalises any conflict – because once combatants are convinced that God is on their side, compromise becomes impossible and cruelty knows no bounds.
Despite the valiant attempts by Barack Obama and David Cameron to insist that the lawless violence of Isis has nothing to do with Islam, many will disagree. They may also feel exasperated. In the west, we learned from bitter experience that the fanatical bigotry which religion seems always to unleash can only be contained by the creation of a liberal state that separates politics and religion. Never again, we believed, would these intolerant passions be allowed to intrude on political life. But why, oh why, have Muslims found it impossible to arrive at this logical solution to their current problems? Why do they cling with perverse obstinacy to the obviously bad idea of theocracy? Why, in short, have they been unable to enter the modern world? The answer must surely lie in their primitive and atavistic religion.
But perhaps we should ask, instead, how it came about that we in the west developed our view of religion as a purely private pursuit, essentially separate from all other human activities, and especially distinct from politics. After all, warfare and violence have always been a feature of political life, and yet we alone drew the conclusion that separating the church from the state was a prerequisite for peace. Secularism has become so natural to us that we assume it emerged organically, as a necessary condition of any society’s progress into modernity. Yet it was in fact a distinct creation, which arose as a result of a peculiar concatenation of historical circumstances; we may be mistaken to assume that it would evolve in the same fashion in every culture in every part of the world.
We now take the secular state so much for granted that it is hard for us to appreciate its novelty, since before the modern period, there were no “secular” institutions and no “secular” states in our sense of the word. Their creation required the development of an entirely different understanding of religion, one that was unique to the modern west. No other culture has had anything remotely like it, and before the 18th century, it would have been incomprehensible even to European Catholics. The words in other languages that we translate as “religion” invariably refer to something vaguer, larger and more inclusive. The Arabic word din signifies an entire way of life, and the Sanskrit dharma covers law, politics, and social institutions as well as piety. The Hebrew Bible has no abstract concept of “religion”; and the Talmudic rabbis would have found it impossible to define faith in a single word or formula, because the Talmud was expressly designed to bring the whole of human life into the ambit of the sacred. The Oxford Classical Dictionary firmly states: “No word in either Greek or Latin corresponds to the English ‘religion’ or ‘religious’.” In fact, the only tradition that satisfies the modern western criterion of religion as a purely private pursuit is Protestant Christianity, which, like our western view of “religion”, was also a creation of the early modern period.
Traditional spirituality did not urge people to retreat from political activity. The prophets of Israel had harsh words for those who assiduously observed the temple rituals but neglected the plight of the poor and oppressed. Jesus’s famous maxim to “Render unto Caesar the things that are Caesar’s” was not a plea for the separation of religion and politics. Nearly all the uprisings against Rome in first-century Palestine were inspired by the conviction that the Land of Israel and its produce belonged to God, so that there was, therefore, precious little to “give back” to Caesar. When Jesus overturned the money-changers’ tables in the temple, he was not demanding a more spiritualised religion. For 500 years, the temple had been an instrument of imperial control and the tribute for Rome was stored there. Hence for Jesus it was a “den of thieves”. The bedrock message of the Qur’an is that it is wrong to build a private fortune but good to share your wealth in order to create a just, egalitarian and decent society. Gandhi would have agreed that these were matters of sacred import: “Those who say that religion has nothing to do with politics do not know what religion means.”
Before the modern period, religion was not a separate activity, hermetically sealed off from all others; rather, it permeated all human undertakings, including economics, state-building, politics and warfare. Before 1700, it would have been impossible for people to say where, for example, “politics” ended and “religion” began. The Crusades were certainly inspired by religious passion but they were also deeply political: Pope Urban II let the knights of Christendom loose on the Muslim world to extend the power of the church eastwards and create a papal monarchy that would control Christian Europe. The Spanish inquisition was a deeply flawed attempt to secure the internal order of Spain after a divisive civil war, at a time when the nation feared an imminent attack by the Ottoman empire. Similarly, the European wars of religion and the thirty years war were certainly exacerbated by the sectarian quarrels of Protestants and Catholics, but their violence reflected the birth pangs of the modern nation-state.
Read the entire article here.
We all know that making decisions from past experience is wise. We learn from the benefit of hindsight. We learn to make small improvements or radical shifts in our thinking and behaviors based on history and previous empirical evidence. Stock market gurus and investment mavens will tell you time after time that they have a proven method — based on empirical evidence and a lengthy, illustrious track record — for picking the next great stock or investing your hard-earned retirement funds.
Yet, empirical evidence shows that chimpanzees throwing darts at the WSJ stock pages are just as good at picking stocks as we humans (and the “masters of the universe”). So, it seems that random decision-making can be just as good, if not better, than wisdom and experience.
From the Guardian:
No matter how much time you spend reading the recent crop of books on How To Decide or How To Think Clearly, you’re unlikely to encounter glowing references to a decision-making system formerly used by the Azande of central Africa. Faced with a dilemma, tribespeople would force poison down the neck of a chicken while asking questions of the “poison oracle”; the chicken answered by surviving (“yes”) or expiring (“no”). Clearly, this was cruel to chickens. That aside, was it such a terrible way to choose among options? The anthropologist EE Evans-Pritchard, who lived with the Azande in the 1920s, didn’t think so. “I always kept a supply of poison [and] we regulated our affairs in accordance with the oracle’s decisions,” he wrote, adding drily: “I found this as satisfactory a way of running my home and affairs as any other I know of.” You could dismiss that as a joke. After all, chicken-poisoning is plainly superstition, delivering random results. But what if random results are sometimes exactly what you need?
The other day, US neuroscientists published details of experiments on rats, showing that in certain unpredictable situations, they stop trying to make decisions based on past experience. Instead, a circuit in their brains switches to “random mode”. The researchers’ hunch is that this serves a purpose: past experience is usually helpful, but when uncertainty levels are high, it can mislead, so randomness is in the rats’ best interests. When we’re faced with the unfamiliar, experience can mislead humans, too, partly because we filter it through various irrational biases. According to those books on thinking clearly, we should strive to overcome these biases, thus making more rational calculations. But there’s another way to bypass our biased brains: copy the rats, and choose randomly.
In certain walks of life, the usefulness of randomness is old news: the stock market, say, is so unpredictable that, to quote the economist Burton Malkiel, “a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do as well as one carefully selected by experts”. (This has been tried, with simulated monkeys, and they beat the market.) But, generally, as Michael Schulson put it recently in an Aeon magazine essay, “We take it for granted that the best decisions stem from empirical analysis and informed choice.” Yet consider, he suggests, the ancient Greek tradition of filling some government positions by lottery. Randomness disinfects a process that might be dirtied by corruption.
Randomness can be similarly useful in everyday life. For tiny choices, it’s a time-saver: pick randomly from a menu, and you can get back to chatting with friends. For bigger ones, it’s an acknowledgment of how little one can ever know about the complex implications of a decision. Let’s be realistic: for the biggest decisions, such as whom to marry, trusting to randomness feels absurd. But if you can up the randomness quotient for marginally less weighty choices, especially when uncertainty prevails, you may find it pays off. Though kindly refrain from poisoning any chickens.
Read the entire article here.
The future of good design may actually lie in intentionally doing the wrong thing. While we are drawn to the beauty of symmetry — in faces, in objects — we are also drawn by the promise of imperfection.
In the late 1870s, Edgar Degas began work on what would become one of his most radical paintings, Jockeys Before the Race. Degas had been schooled in techniques of the neoclassicist and romanticist masters but had begun exploring subject matter beyond the portraits and historical events that were traditionally considered suitable for fine art, training his eye on café culture, common laborers, and—most famously—ballet dancers. But with Jockeys, Degas pushed past mild provocation. He broke some of the most established formulas of composition. The painting is technically exquisite, the horses vividly sculpted with confident brushstrokes, their musculature perfectly rendered. But while composing this beautifully balanced, impressionistically rendered image, Degas added a crucial, jarring element: a pole running vertically—and asymmetrically—in the immediate foreground, right through the head of one of the horses.
Degas wasn’t just “thinking outside of the box,” as the innovation cliché would have it. He wasn’t trying to overturn convention to find a more perfect solution. He was purposely creating something that wasn’t pleasing, intentionally doing the wrong thing. Naturally viewers were horrified. Jockeys was lampooned in the magazine Punch, derided as a “mistaken impression.” But over time, Degas’ transgression provided inspiration for other artists eager to find new ways to inject vitality and dramatic tension into work mired in convention. You can see its influence across art history, from Frederic Remington’s flouting of traditional compositional technique to the crackling photojournalism of Henri Cartier-Bresson.
Degas was engaged in a strategy that has shown up periodically for centuries across every artistic and creative field. Think of it as one step in a cycle: In the early stages, practitioners dedicate themselves to inventing and improving the rules—how to craft the most pleasing chord progression, the perfectly proportioned building, the most precisely rendered amalgamation of rhyme and meter. Over time, those rules become laws, and artists and designers dedicate themselves to excelling within these agreed-upon parameters, creating work of unparalleled refinement and sophistication—the Pantheon, the Sistine Chapel, the Goldberg Variations. But once a certain maturity has been reached, someone comes along who decides to take a different route. Instead of trying to create an ever more polished and perfect artifact, this rebel actively seeks out imperfection—sticking a pole in the middle of his painting, intentionally adding grungy feedback to a guitar solo, deliberately photographing unpleasant subjects. Eventually some of these creative breakthroughs end up becoming the foundation of a new set of aesthetic rules, and the cycle begins again.
For the past 30 years, the field of technology design has been working its way through the first two stages of this cycle, an industry-wide march toward more seamless experiences, more delightful products, more leverage over the world around us. Look at our computers: beige and boxy desktop machines gave way to bright and colorful iMacs, which gave way to sleek and sexy laptops, which gave way to addictively touchable smartphones. It’s hard not to look back at this timeline and see it as a great story of human progress, a joint effort to experiment and learn and figure out the path toward a more refined and universally pleasing design.
All of this has resulted in a world where beautifully constructed tech is more powerful and more accessible than ever before. It is also more consistent. That’s why all smartphones now look basically the same—gleaming black glass with handsomely cambered edges. Google, Apple, and Microsoft all use clean, sans-serif typefaces in their respective software. After years of experimentation, we have figured out what people like and settled on some rules.
But there’s a downside to all this consensus—it can get boring. From smartphones to operating systems to web page design, it can start to feel like the truly transformational moments have come and gone, replaced by incremental updates that make our devices and interactions faster and better.
This brings us to an important and exciting moment in the design of our technologies. We have figured out the rules of creating sleek sophistication. We know, more or less, how to get it right. Now, we need a shift in perspective that allows us to move forward. We need a pole right through a horse’s head. We need to enter the third stage of this cycle. It’s time to stop figuring out how to do things the right way, and start getting it wrong.
In late 2006, when I was creative director here at WIRED, I was working on the design of a cover featuring John Hodgman. We were far along in the process—Hodgman was styled and photographed, the cover lines written, our fonts selected, the layout firmed up. I had been aiming for a timeless design with a handsome monochromatic color palette, a cover that evoked a 1960s jet-set vibe. When I presented my finished design, WIRED’s editor at the time, Chris Anderson, complained that the cover was too drab. He uttered the prescriptive phrase all graphic designers hate hearing: “Can’t you just add more colors?”
I demurred. I felt the cover was absolutely perfect. But Chris did not, and so, in a spasm of designerly “fuck you,” I drew a small rectangle into my design, a little stripe coming off from the left side of the page, rudely breaking my pristine geometries. As if that weren’t enough, I filled it with the ugliest hue I could find: neon orange— Pantone 811, to be precise. My perfect cover was now ruined!
By the time I came to my senses a couple of weeks later, it was too late. The cover had already been sent to the printer. My anger morphed into regret. To the untrained eye, that little box might not seem so offensive, but I felt that I had betrayed one of the most crucial lessons I learned in design school—that every graphic element should serve a recognizable function. This stray dash of color was careless at best, a postmodernist deviation with no real purpose or value. It confused my colleagues and detracted from the cover’s clarity, unnecessarily making the reader more conscious of the design.
But you know what? I actually came to like that crass little neon orange bar. I ended up including a version of it on the next month’s cover, and again the month after that. It added something, even though I couldn’t explain what it was. I began referring to this idea—intentionally making “bad” design choices—as Wrong Theory, and I started applying it in little ways to all of WIRED’s pages. Pictures that were supposed to run large, I made small. Where type was supposed to run around graphics, I overlapped the two. Headlines are supposed to come at the beginning of stories? I put them at the end. I would even force our designers to ruin each other’s “perfect” layouts.
At the time, this represented a major creative breakthrough for me—the idea that intentional wrongness could yield strangely pleasing results. Of course I was familiar with the idea of rule-breaking innovation—that each generation reacts against the one that came before it, starting revolutions, turning its back on tired conventions. But this was different. I wasn’t just throwing out the rulebook and starting from scratch. I was following the rules, then selectively breaking one or two for maximum impact.
Read the entire article here.
Pursuing a cherished activity, uninterrupted, with no distraction is one of life’s pleasures. Many who multi-task and brag about it have long forgotten the benefits of deep focus and immersion in one single, prolonged task. Reading can be such a process — and over the last several years researchers have found that distraction-free, thoughtful reading — slow reading — is beneficial.
So, please put down your tablet, laptop, smartphone and TV remote after you read this post, go find an unread book, shut out your daily distractions — kids, news, Facebook, boss, grocery lists, plumber — and immerse yourself in the words on a page, and nothing else. It will relieve you of stress and benefit your brain.
Once a week, members of a Wellington, New Zealand, book club arrive at a cafe, grab a drink and shut off their cellphones. Then they sink into cozy chairs and read in silence for an hour.
The point of the club isn’t to talk about literature, but to get away from pinging electronic devices and read, uninterrupted. The group calls itself the Slow Reading Club, and it is at the forefront of a movement populated by frazzled book lovers who miss old-school reading.
Slow reading advocates seek a return to the focused reading habits of years gone by, before Google, smartphones and social media started fracturing our time and attention spans. Many of its advocates say they embraced the concept after realizing they couldn’t make it through a book anymore.
“I wasn’t reading fiction the way I used to,” said Meg Williams, a 31-year-old marketing manager for an annual arts festival who started the club. “I was really sad I’d lost the thing I used to really, really enjoy.”
Slow readers list numerous benefits to a regular reading habit, saying it improves their ability to concentrate, reduces stress levels and deepens their ability to think, listen and empathize. The movement echoes a resurgence in other old-fashioned, time-consuming pursuits that offset the ever-faster pace of life, such as cooking the “slow-food” way or knitting by hand.
The benefits of reading from an early age through late adulthood have been documented by researchers. A study of 300 elderly people published in the journal Neurology last year showed that regular engagement in mentally challenging activities, including reading, slowed rates of memory loss in participants’ later years.
A study published last year in Science showed that reading literary fiction helps people understand others’ mental states and beliefs, a crucial skill in building relationships. A piece of research published in Developmental Psychology in 1997 showed first-grade reading ability was closely linked to 11th grade academic achievements.
Yet reading habits have declined in recent years. In a survey this year, about 76% of Americans 18 and older said they read at least one book in the past year, down from 79% in 2011, according to the Pew Research Center.
Attempts to revive reading are cropping up in many places. Groups in Seattle, Brooklyn, Boston and Minneapolis have hosted so-called silent reading parties, with comfortable chairs, wine and classical music.
Diana La Counte of Orange County, Calif., set up what she called a virtual slow-reading group a few years ago, with members discussing the group’s book selection online, mostly on Facebook. “When I realized I read Twitter more than a book, I knew it was time for action,” she says.
Read the entire story here.
Just over a year ago I highlighted the plight of accepted scholarly fact in Texas. The state, through its infamous State Board of Education (SBOE), had just completed a lengthy effort to revise many textbooks for middle- and high-school curricula. The SBOE and its ideological supporters throughout the Texas political machine managed to insert numerous dubious claims, fictitious statements in place of agreed-upon facts and handfuls of slanted opinion into all manner of historical and social science texts. Many academics and experts in their respective fields raised alarms over the process. But the SBOE derided these “liberal elitists”, and openly flaunted its distaste for fact, preferring to distort the historical record with undertones of conservative Christianity.
Many non-Texan progressives and believers-in-fact laughingly shook their heads knowing that Texas could and should be left to its own devices. Unfortunately for the rest of the country, Texas has so much buying power that textbook publishers will often publish with Texas in mind, but distribute their books throughout the entire nation.
So now it comes as no surprise to find that many newly published, or soon-to-be-published, Texas textbooks for grades 6-12 are riddled with errors. An academic review of 43 textbooks highlights the disaster waiting to happen to young minds in Texas, and across many other states. The Texas SBOE will take a vote on which books to approve in November.
Some choice examples of the errors and half-truths below.
All of the world geography textbooks inaccurately downplay the role that conquest played in the spread of Christianity.
Discovery Education — Social Studies Techbook World Geography and Cultures
The text states: “When Europeans arrived, they brought Christianity with them and spread it among the indigenous people. Over time, Christianity became the main religion in Latin America.”
Pearson Education – Contemporary World Cultures
The text states: “Priests came to Mexico to convert Native Americans to the Roman Catholic religion. The Church became an important part of life in the new colony. Churches were built in the centers of towns and cities, and church officials became leaders in the colony.”
Houghton Mifflin Harcourt – World Geography
The text states: “The Spanish brought their language and Catholic religion, both of which dominate modern Mexico.”
All but two of the world geography textbooks fail to mention the Spaniards’ forced conversions of the indigenous peoples to Christianity (e.g., the Spanish Requerimiento of 1513) and their often-systematic destruction of indigenous religious institutions. The two exceptions (Cengage Learning, Inc. – World Cultures and Geography and Houghton Mifflin Harcourt – World Geography) delay this grim news until a chapter on South America, and even there do not give it the prominence it deserves.
The Christianization of the indigenous peoples of the Americas was most decidedly not benign. These descriptions provide a distorted picture of the spread of Christianity. An accurate account must include information about the forced conversion of native peoples and the often-systematic destruction of indigenous religious institutions and practices. (This error of omission is especially problematic when contrasted with the emphasis on conquest – often violent – to describe the spread of Islam in some textbooks.)
One world history textbook (by Worldview Software, Inc.) includes outdated – and possibly offensive – anthropological categories and racial terminology in describing African civilization.
WorldView Software – World History A: Early Civilizations to the Mid-1800s
The text states: “South of the Sahara Desert most of the people before the Age of Explorations were black Africans of the Negro race.”
Elsewhere, the text states: “The first known inhabitants of Africa north of the Sahara in prehistory were Caucasoid Hamitic people of uncertain origin.”
First, the term “Negro” is archaic and fraught with ulterior meaning. It should categorically not be used in a modern textbook. Further, the first passage is unforgivably misleading because it suggests that all black native Africans belong to a single “racial” group. This is typological thinking, which largely disappeared from texts after the 1940s. It harkens back to the racialization theory that all people could be classified as one of three “races”: Caucasoid, Mongoloid, or Negroid. Better to say: “…were natives of African origin.” Similarly, in the second passage, it is more accurate to simply omit reference to “Caucasoid.”
From the Washington Post:
When it comes to controversies about curriculum, textbook content and academic standards, Texas is the state that keeps on giving.
Back in 2010, we had an uproar over proposed changes to social studies standards by religious conservatives on the State Board of Education, which included a bid to recast the United States’ hideous slave trade as the “Atlantic triangular trade.” There were other doozies, too, such as one proposal to remove Thomas Jefferson from the Enlightenment curriculum and replace him with John Calvin. Some were changed, but the board’s approved standards were roundly criticized as distorted history.
There’s a new fuss about proposed social studies textbooks for Texas public schools that are based on what are called the Texas Essential Knowledge and Skills. Scholarly reviews of 43 proposed history, geography and government textbooks for Grades 6-12 — undertaken by the Education Fund of the Texas Freedom Network, a watchdog and activist group that monitors far-right issues and organizations — found extensive problems in American Government textbooks, U.S. and World History textbooks, Religion in World History textbooks, and Religion in World Geography textbooks. The state board will vote on which books to approve in November.
Ideas promoted in various proposed textbooks include the notion that Moses and Solomon inspired American democracy, that in the era of segregation only “sometimes” were schools for black children “lower in quality” and that Jews view Jesus Christ as an important prophet.
Here are the broad findings of 10 scholars, who wrote four separate reports, taken from an executive summary, followed by the names of the scholars and a list of publishers who submitted textbooks.
Prominent neo-atheist Sam Harris continues to reject theism, and does so thoughtfully and eloquently. In his latest book, Waking Up, he continues to argue the case against religion, but makes a powerful case for spirituality. Harris defines spirituality as an inner sense of a good and powerful reality, based on sound self-awareness and insightful questioning of one’s own consciousness. This type of spirituality, quite rightly, is devoid of theistic angels and demons. Harris reveals more in his interview with Gary Gutting, professor of philosophy at the University of Notre Dame.
From the NYT:
Sam Harris is a neuroscientist and prominent “new atheist,” who along with others like Richard Dawkins, Daniel Dennett and Christopher Hitchens helped put criticism of religion at the forefront of public debate in recent years. In two previous books, “The End of Faith” and “Letter to a Christian Nation,” Harris argued that theistic religion has no place in a world of science. In his latest book, “Waking Up,” his thought takes a new direction. While still rejecting theism, Harris nonetheless makes a case for the value of “spirituality,” which he bases on his experiences in meditation. I interviewed him recently about the book and some of the arguments he makes in it.
Gary Gutting: A common basis for atheism is naturalism — the view that only science can give a reliable account of what’s in the world. But in “Waking Up” you say that consciousness resists scientific description, which seems to imply that it’s a reality beyond the grasp of science. Have you moved away from an atheistic view?
Sam Harris: I don’t actually argue that consciousness is “a reality” beyond the grasp of science. I just think that it is conceptually irreducible — that is, I don’t think we can fully understand it in terms of unconscious information processing. Consciousness is “subjective”— not in the pejorative sense of being unscientific, biased or merely personal, but in the sense that it is intrinsically first-person, experiential and qualitative.
The only thing in this universe that suggests the reality of consciousness is consciousness itself. Many philosophers have made this argument in one way or another — Thomas Nagel, John Searle, David Chalmers. And while I don’t agree with everything they say about consciousness, I agree with them on this point.
The primary approach to understanding consciousness in neuroscience entails correlating changes in its contents with changes in the brain. But no matter how reliable these correlations become, they won’t allow us to drop the first-person side of the equation. The experiential character of consciousness is part of the very reality we are studying. Consequently, I think science needs to be extended to include a disciplined approach to introspection.
G.G.: But science aims at objective truth, which has to be verifiable: open to confirmation by other people. In what sense do you think first-person descriptions of subjective experience can be scientific?
S.H.: In a very strong sense. The only difference between claims about first-person experience and claims about the physical world is that the latter are easier for others to verify. That is an important distinction in practical terms — it’s easier to study rocks than to study moods — but it isn’t a difference that marks a boundary between science and non-science. Nothing, in principle, prevents a solitary genius on a desert island from doing groundbreaking science. Confirmation by others is not what puts the “truth” in a truth claim. And nothing prevents us from making objective claims about subjective experience.
Are you thinking about Margaret Thatcher right now? Well, now you are. Were you thinking about her exactly six minutes ago? Probably not. There are answers to questions of this kind, whether or not anyone is in a position to verify them.
And certain truths about the nature of our minds are well worth knowing. For instance, the anger you felt yesterday, or a year ago, isn’t here anymore, and if it arises in the next moment, based on your thinking about the past, it will quickly pass away when you are no longer thinking about it. This is a profoundly important truth about the mind — and it can be absolutely liberating to understand it deeply. If you do understand it deeply — that is, if you are able to pay clear attention to the arising and passing away of anger, rather than merely think about why you have every right to be angry — it becomes impossible to stay angry for more than a few moments at a time. Again, this is an objective claim about the character of subjective experience. And I invite our readers to test it in the laboratory of their own minds.
G. G.: Of course, we all have some access to what other people are thinking or feeling. But that access is through probable inference and so lacks the special authority of first-person descriptions. Suppose I told you that in fact I didn’t think of Margaret Thatcher when I read your comment, because I misread your text as referring to Becky Thatcher in “The Adventures of Tom Sawyer”? If that’s true, I have evidence for it that you can’t have. There are some features of consciousness that we will agree on. But when our first-person accounts differ, then there’s no way to resolve the disagreement by looking at one another’s evidence. That’s very different from the way things are in science.
S.H.: This difference doesn’t run very deep. People can be mistaken about the world and about the experiences of others — and they can even be mistaken about the character of their own experience. But these forms of confusion aren’t fundamentally different. Whatever we study, we are obliged to take subjective reports seriously, all the while knowing that they are sometimes false or incomplete.
For instance, consider an emotion like fear. We now have many physiological markers for fear that we consider quite reliable, from increased activity in the amygdala and spikes in blood cortisol to peripheral physiological changes like sweating palms. However, just imagine what would happen if people started showing up in the lab complaining of feeling intense fear without showing any of these signs — and they claimed to feel suddenly quite calm when their amygdalae lit up on fMRI, their cortisol spiked, and their skin conductance increased. We would no longer consider these objective measures of fear to be valid. So everything still depends on people telling us how they feel and our (usually) believing them.
However, it is true that people can be very poor judges of their inner experience. That is why I think disciplined training in a technique like “mindfulness,” apart from its personal benefits, can be scientifically important.
Read the entire story here.
Peter Thiel on why entrepreneurs should strive for monopoly and avoid competition. If only it were that simple for esoteric restaurants, innovative technology companies and all startup businesses in between.
What valuable company is nobody building? This question is harder than it looks, because your company could create a lot of value without becoming very valuable itself. Creating value isn’t enough—you also need to capture some of the value you create.
This means that even very big businesses can be bad businesses. For example, U.S. airline companies serve millions of passengers and create hundreds of billions of dollars of value each year. But in 2012, when the average airfare each way was $178, the airlines made only 37 cents per passenger trip. Compare them to Google, which creates less value but captures far more of it. Google brought in $50 billion in 2012 (versus $160 billion for the airlines), but it kept 21% of those revenues as profits—more than 100 times the airline industry’s profit margin that year. Google makes so much money that it is now worth three times more than every U.S. airline combined.
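The margin comparison above is easy to verify. A minimal back-of-the-envelope sketch, using only the figures quoted in the excerpt (the variable names are my own):

```python
# Check the claim that Google's 2012 margin was "more than 100 times"
# the airlines' margin, using the article's figures.

airline_fare = 178.00   # average one-way U.S. airfare, 2012 (USD)
airline_profit = 0.37   # airline profit per passenger trip (USD)
google_margin = 0.21    # Google's 2012 profit margin (21% of revenue)

# Airline margin: profit per trip divided by the fare for that trip.
airline_margin = airline_profit / airline_fare

# How many times larger is Google's margin?
ratio = google_margin / airline_margin

print(f"Airline margin: {airline_margin:.2%}")          # roughly 0.21%
print(f"Google's margin is ~{ratio:.0f}x the airlines'")
```

The ratio works out to roughly 101, consistent with the article’s “more than 100 times.”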
The airlines compete with each other, but Google stands alone. Economists use two simplified models to explain the difference: perfect competition and monopoly.
“Perfect competition” is considered both the ideal and the default state in Economics 101. So-called perfectly competitive markets achieve equilibrium when producer supply meets consumer demand. Every firm in a competitive market is undifferentiated and sells the same homogeneous products. Since no firm has any market power, they must all sell at whatever price the market determines. If there is money to be made, new firms will enter the market, increase supply, drive prices down and thereby eliminate the profits that attracted them in the first place. If too many firms enter the market, they’ll suffer losses, some will fold, and prices will rise back to sustainable levels. Under perfect competition, in the long run no company makes an economic profit.
The opposite of perfect competition is monopoly. Whereas a competitive firm must sell at the market price, a monopoly owns its market, so it can set its own prices. Since it has no competition, it produces at the quantity and price combination that maximizes its profits.
To an economist, every monopoly looks the same, whether it deviously eliminates rivals, secures a license from the state or innovates its way to the top. I’m not interested in illegal bullies or government favorites: By “monopoly,” I mean the kind of company that is so good at what it does that no other firm can offer a close substitute. Google is a good example of a company that went from 0 to 1: It hasn’t competed in search since the early 2000s, when it definitively distanced itself from Microsoft and Yahoo!
Americans mythologize competition and credit it with saving us from socialist bread lines. Actually, capitalism and competition are opposites. Capitalism is premised on the accumulation of capital, but under perfect competition, all profits get competed away. The lesson for entrepreneurs is clear: If you want to create and capture lasting value, don’t build an undifferentiated commodity business.
How much of the world is actually monopolistic? How much is truly competitive? It is hard to say because our common conversation about these matters is so confused. To the outside observer, all businesses can seem reasonably alike, so it is easy to perceive only small differences between them. But the reality is much more binary than that. There is an enormous difference between perfect competition and monopoly, and most businesses are much closer to one extreme than we commonly realize.
The confusion comes from a universal bias for describing market conditions in self-serving ways: Both monopolists and competitors are incentivized to bend the truth.
Monopolists lie to protect themselves. They know that bragging about their great monopoly invites being audited, scrutinized and attacked. Since they very much want their monopoly profits to continue unmolested, they tend to do whatever they can to conceal their monopoly—usually by exaggerating the power of their (nonexistent) competition.
Think about how Google talks about its business. It certainly doesn’t claim to be a monopoly. But is it one? Well, it depends: a monopoly in what? Let’s say that Google is primarily a search engine. As of May 2014, it owns about 68% of the search market. (Its closest competitors, Microsoft and Yahoo!, have about 19% and 10%, respectively.) If that doesn’t seem dominant enough, consider the fact that the word “google” is now an official entry in the Oxford English Dictionary—as a verb. Don’t hold your breath waiting for that to happen to Bing.
But suppose we say that Google is primarily an advertising company. That changes things. The U.S. search-engine advertising market is $17 billion annually. Online advertising is $37 billion annually. The entire U.S. advertising market is $150 billion. And global advertising is a $495 billion market. So even if Google completely monopolized U.S. search-engine advertising, it would own just 3.4% of the global advertising market. From this angle, Google looks like a small player in a competitive world.
What if we frame Google as a multifaceted technology company instead? This seems reasonable enough; in addition to its search engine, Google makes dozens of other software products, not to mention robotic cars, Android phones and wearable computers. But 95% of Google’s revenue comes from search advertising; its other products generated just $2.35 billion in 2012 and its consumer-tech products a mere fraction of that. Since consumer tech is a $964 billion market globally, Google owns less than 0.24% of it—a far cry from relevance, let alone monopoly. Framing itself as just another tech company allows Google to escape all sorts of unwanted attention.
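The framing game in the last two paragraphs comes down to choosing the denominator. A small sketch, again using only the article’s figures (variable names are my own), shows how the same revenue looks dominant or marginal depending on the market it is divided by:

```python
# The article's market-size figures (USD per year).
google_search_ads = 17e9        # U.S. search-engine advertising (Google's ceiling)
global_ads = 495e9              # global advertising market
google_other_revenue = 2.35e9   # Google's non-search revenue, 2012
consumer_tech_market = 964e9    # global consumer tech market

# Framing 1: even total U.S. search-ad dominance is a sliver of global ads.
share_of_global_ads = google_search_ads / global_ads

# Framing 2: Google's non-search revenue against consumer tech
# (an upper bound, since consumer-tech products are only a fraction of it).
share_of_consumer_tech = google_other_revenue / consumer_tech_market

print(f"Share of global advertising: {share_of_global_ads:.1%}")   # ~3.4%
print(f"Ceiling share of consumer tech: {share_of_consumer_tech:.2%}")  # ~0.24%
```

Both numbers match the article’s 3.4% and “less than 0.24%”: the denominator, not the revenue, does all the rhetorical work.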
Non-monopolists tell the opposite lie: “We’re in a league of our own.” Entrepreneurs are always biased to understate the scale of competition, but that is the biggest mistake a startup can make. The fatal temptation is to describe your market extremely narrowly so that you dominate it by definition.
Read the entire article here.
Take an impassioned history professor, a mediocre U.S. high school history curriculum, add Bill Gates, and you get an opportunity to inject fresh perspectives and new ideas into young minds.
Not too long ago, Professor David Christian’s collection of Big History DVDs caught Gates’ attention, leading to a broad mission to overhaul the boring history lesson — one school at a time. Christian takes a thoroughly holistic approach to the subject, spanning broad and interconnected topics such as culture, biochemistry, astronomy, agriculture and physics. The sweeping narrative fundamental to his delivery reminds me somewhat of Kenneth Clark’s Civilisation and Jacob Bronowski’s The Ascent of Man, two landmark U.K. television series.
From the New York Times:
In 2008, shortly after Bill Gates stepped down from his executive role at Microsoft, he often awoke in his 66,000-square-foot home on the eastern bank of Lake Washington and walked downstairs to his private gym in a baggy T-shirt, shorts, sneakers and black socks yanked up to the midcalf. Then, during an hour on the treadmill, Gates, a self-described nerd, would pass the time by watching DVDs from the Teaching Company’s “Great Courses” series. On some mornings, he would learn about geology or meteorology; on others, it would be oceanography or U.S. history.
As Gates was working his way through the series, he stumbled upon a set of DVDs titled “Big History” — an unusual college course taught by a jovial, gesticulating professor from Australia named David Christian. Unlike the previous DVDs, “Big History” did not confine itself to any particular topic, or even to a single academic discipline. Instead, it put forward a synthesis of history, biology, chemistry, astronomy and other disparate fields, which Christian wove together into nothing less than a unifying narrative of life on earth. Standing inside a small “Mr. Rogers”-style set, flanked by an imitation ivy-covered brick wall, Christian explained to the camera that he was influenced by the Annales School, a group of early-20th-century French historians who insisted that history be explored on multiple scales of time and space. Christian had subsequently divided the history of the world into eight separate “thresholds,” beginning with the Big Bang, 13 billion years ago (Threshold 1), moving through to the origin of Homo sapiens (Threshold 6), the appearance of agriculture (Threshold 7) and, finally, the forces that gave birth to our modern world (Threshold 8).
Christian’s aim was not to offer discrete accounts of each period so much as to integrate them all into vertiginous conceptual narratives, sweeping through billions of years in the span of a single semester. A lecture on the Big Bang, for instance, offered a complete history of cosmology, starting with the ancient God-centered view of the universe and proceeding through Ptolemy’s Earth-based model, through the heliocentric versions advanced by thinkers from Copernicus to Galileo and eventually arriving at Hubble’s idea of an expanding universe. In the worldview of “Big History,” a discussion about the formation of stars cannot help including Einstein and the hydrogen bomb; a lesson on the rise of life will find its way to Jane Goodall and Dian Fossey. “I hope by the end of this course, you will also have a much better sense of the underlying unity of modern knowledge,” Christian said at the close of the first lecture. “There is a unified account.”
As Gates sweated away on his treadmill, he found himself marveling at the class’s ability to connect complex concepts. “I just loved it,” he said. “It was very clarifying for me. I thought, God, everybody should watch this thing!” At the time, the Bill & Melinda Gates Foundation had donated hundreds of millions of dollars to educational initiatives, but many of these were high-level policy projects, like the Common Core Standards Initiative, which the foundation was instrumental in pushing through. And Gates, who had recently decided to become a full-time philanthropist, seemed to pine for a project that was a little more tangible. He was frustrated with the state of interactive coursework and classroom technology since before he dropped out of Harvard in the mid-1970s; he yearned to experiment with entirely new approaches. “I wanted to explore how you did digital things,” he told me. “That was a big issue for me in terms of where education was going — taking my previous skills and applying them to education.” Soon after getting off the treadmill, he asked an assistant to set a meeting with Christian.
A few days later, the professor, who was lecturing at San Diego State University, found himself in the lobby of a hotel, waiting to meet with the billionaire. “I was scared,” Christian recalled. “Someone took me along the corridor, knocks on a door, Bill opens it, invites me in. All I remember is that within five minutes, he had so put me at my ease. I thought, I’m a nerd, he’s a nerd and this is fun!” After a bit of small talk, Gates got down to business. He told Christian that he wanted to introduce “Big History” as a course in high schools all across America. He was prepared to fund the project personally, outside his foundation, and he wanted to be personally involved. “He actually gave me his email address and said, ‘Just think about it,’ ” Christian continued. ” ‘Email me if you think this is a good idea.’ ”
Christian emailed to say that he thought it was a pretty good idea. The two men began tinkering, adapting Christian’s college course into a high-school curriculum, with modules flexible enough to teach to freshmen and seniors alike. Gates, who insisted that the course include a strong digital component, hired a team of engineers and designers to develop a website that would serve as an electronic textbook, brimming with interactive graphics and videos. Gates was particularly insistent on the idea of digital timelines, which may have been a vestige of an earlier passion project, Microsoft Encarta, the electronic encyclopedia that was eventually overtaken by the growth of Wikipedia. Now he wanted to offer a multifaceted historical account of any given subject through a friendly user interface. The site, which is open to the public, would also feature a password-protected forum for teachers to trade notes and update and, in some cases, rewrite lesson plans based on their experiences in the classroom.
Read the entire article here.
Video: Clip from Threshold 1, The Big Bang. Courtesy of Big History Project, David Christian.
As this year’s Burning Man comes to an end in the eerily beautiful Black Rock Desert in Nevada I am reminded that attending this life event should be on everyone’s bucket list, before they actually kick it.
That said, applying one or more of the Ten Principles that guide Burners should be a year-round quest — not a transient, once-in-a-lifetime goal.
Read more about this year’s BM here.
See more BM visuals here.
Image: Super Pool art installation, Burning Man 2014. Courtesy of Jim Urquhart / Reuters.
It would be fascinating to see a Broadway or West End show based on lyrics penned in honor of IBM and Thomas Watson, Sr., its first president. Makes you wonder if faithful employees of, say, Facebook or Apple would ever write a songbook — not in jest — for their corporate alma mater. I think not.
From ars technica:
“For thirty-seven years,” reads the opening passage in the book, “the gatherings and conventions of our IBM workers have expressed in happy songs the fine spirit of loyal cooperation and good fellowship which has promoted the signal success of our great IBM Corporation in its truly International Service for the betterment of business and benefit to mankind.”
That’s a hell of a mouthful, but it’s only the opening volley in the war on self-respect and decency that is the 1937 edition of Songs of the IBM, a booklet of corporate ditties first published in 1927 on the order of IBM company founder Thomas Watson, Sr.
The 1937 edition of the songbook is a 54-page monument to glassy-eyed corporate inhumanity, with every page overflowing with trite praise to The Company and Its Men. The booklet reads like a terrible parody of a hymnal—one that praises not the traditional Christian trinity but the new corporate triumvirate of IBM the father, Watson the son, and American entrepreneurship as the holy spirit:
Thomas Watson is our inspiration,
Head and soul of our splendid I.B.M.
We are pledged to him in every nation,
Our President and most beloved man.
His wisdom has guided each division
In service to all humanity
We have grown and broadened with his vision,
None can match him or our great company.
T. J. Watson, we all honor you,
You’re so big and so square and so true,
We will follow and serve with you forever,
All the world must know what I. B. M. can do.
—from “To Thos. J. Watson, President, I.B.M. Our Inspiration”
The wording transcends sense and sanity—these aren’t songs that normal human beings would choose to line up and sing, are they? Have people changed so much in the last 70-80 years that these songs—which seem expressly designed to debase their singers and deify their subjects—would be joyfully sung in harmony without complaint at company meetings? Were workers in the 1920s and 1930s so dehumanized by the rampaging robber barons of high industry that the only way to keep a desirable corporate job at a place like IBM was to toe the line and sing for your paycheck?
Surely no one would stand for this kind of thing in the modern world—to us, company songs seem like relics of a less-enlightened age. If anything, the mindless overflowing trite words sound like the kind of praises one would find directed at a cult of personality dictator in a decaying wreck of a country like North Korea.
Indeed, some of the songs in the book wouldn’t be out of place venerating the Juche ideal instead of IBM:
We don’t pretend we’re gay.
We always feel that way,
Because we’re filling the world with sunshine.
With I.B.M. machines,
We’ve got the finest means,
For brightly painting the clouds with sunshine.
—from “Painting the Clouds with Sunshine”
All right, time to come clean: it’s incredibly easy to cherry-pick terrible examples out of a 77-year-old corporate songbook (though this songbook makes it easy because of how crazy it is to modern eyes). Moreover, to answer one of the rhetorical questions above, no—people have not changed so much over the past 80-ish years that they could sing mawkishly pro-IBM songs with an irony-free straight face. At least, not without some additional context.
There’s a decade-old writeup on NetworkWorld about the IBM corporate song phenomenon that provides a lot of the glue necessary to build a complete mental picture of what was going on in both employees’ and leadership’s heads. The key takeaway to deflate a lot of the looniness is that the majority of the songs came out of the Great Depression era, and employees lucky enough to be steadfastly employed by a company like IBM often were really that grateful.
The formal integration of singing as an aspect of IBM’s culture at the time was heavily encouraged by Thomas J. Watson Sr. Watson and his employees co-opted the era’s showtunes and popular melodies for their proto-filking, ensuring that everyone would know the way the song went, if not the exact wording. Employees belting out “To the International Ticketograph Division” to the tune of “My Bonnie Lies Over the Ocean” (“In I.B.M. There’s a division. / That’s known as the Ticketograph; / It’s peopled by men who have vision, / Progressive and hard-working staff”) really isn’t all that different from any other team-building exercise that modern companies do—in fact, in a lot of ways, it’s far less humiliating than a company picnic with Mandatory Interdepartmental Three-Legged Races.
Many of the songs mirror the kinds of things that university students of the same time period might sing in honor of their alma mater. When viewed from the perspective of the Depression and post-Depression era, the singing is still silly—but it also makes a lot more sense. Watson reportedly wanted to inspire loyalty and cohesion among employees—and, remember, this was also an era where “normal” employee behavior was to work at a single company for most of one’s professional life, and then retire with a pension. It’s certainly a lot easier to sing a company’s praises if there’s paid retirement at the end of the last verse.
Read the entire article and see more songs here.
Image: Pages 99-100 of the IBM Songbook, 1937. Courtesy of IBM / ars technica.
Every now and then we visit the world of corporatespeak to see how business jargon is faring: which words are in, which phrases are out. Unfortunately, many of the most overused terms still find their way into common office parlance. With apologies to our state-side readers, some of the most popular British phrases follow, and, no surprise, many of these cringeworthy euphemisms seem to emanate from the U.S. Ugh!
From the Guardian:
I don’t know about you, but I’m a sucker for a bit of joined up, blue sky thinking. I love nothing more than the opportunity to touch base with my boss first thing on a Monday morning. It gives me that 24 carat feeling.
I apologise for the sarcasm, but management speak makes most people want to staple the boss’s tongue to the desk. A straw poll around my office found jargon is seen by staff as a tool for making something seem more impressive than it actually is.
The Plain English Campaign says that many staff working for big corporate organisations find themselves using management speak as a way of disguising the fact that they haven’t done their job properly. Some people think that it is easy to bluff their way through by using long, impressive-sounding words and phrases, even if they don’t know what they mean, which is telling in itself.
Furthermore, a recent survey by the Institute of Leadership & Management revealed that management speak is used in almost two thirds (64%) of offices, with nearly a quarter (23%) considering it to be a pointless irritation. “Thinking outside the box” (57%), “going forward” (55%) and “let’s touch base” (39%) were identified as the top three most overused pieces of jargon.
Walk through any office and you’ll hear this kind of thing going on every day. Here are some of the most irritating euphemisms doing the rounds:
Helicopter view – need a phrase that means broad overview of the business? Then why not say “a broad view of the business”?
Idea shower – brainstorm might be out of fashion, but surely we can thought cascade something better than this drivel.
Touch base offline – meaning let’s meet and talk. Because, contrary to popular belief, it is possible to communicate without a Wi-Fi signal. No, really, it is. Fancy a coffee?
Low hanging fruit – easy win business. This would be perfect for hungry children in orchards, but what is really happening is an admission that you don’t want to take the complicated route.
Look under the bonnet – analyse a situation. Most people wouldn’t have a clue about a car engine. When I look under a car bonnet I scratch my head, try not to look like I haven’t got a clue, jiggle a few pipes and kick the tyres before handing the job over to a qualified professional.
Get all your ducks in a row – be organised. Bert and Ernie from Sesame Street had an obsession with rubber ducks. You may think I’m disorganised, but there’s no need to talk to me like a five-year-old.
Don’t let the grass grow too long on this one – work fast. I’m looking for a polite way of suggesting that you get off your backside and get on with it.
Not enough bandwidth – too busy. Really? Try upgrading to fibre optics. I reckon I know a few people who haven’t been blessed with enough “bandwidth” and it’s got nothing to do with being busy.
Cascading relevant information – speaking to your colleagues. If anything, this is worse than touching base offline. From the flourish of cascading through to relevant, and onto information – this is complete nonsense.
The strategic staircase – business plan. Thanks, but I’ll take the lift.
Run it up the flagpole – try it out. Could you attach yourself while you’re at it?
Read the entire story here.
In case you may not have heard, sugar is bad for you. In fact, an increasing number of food scientists will tell you that sugar is a poison, and that it’s time to fight the sugar oligarchs in much the same way that health advocates resolved to take on big tobacco many decades ago.
From the Guardian:
If you have any interest at all in diet, obesity, public health, diabetes, epidemiology, your own health or that of other people, you will probably be aware that sugar, not fat, is now considered the devil’s food. Dr Robert Lustig’s book, Fat Chance: The Hidden Truth About Sugar, Obesity and Disease, for all that it sounds like a Dan Brown novel, is the difference between vaguely knowing something is probably true, and being told it as a fact. Lustig has spent the past 16 years treating childhood obesity. His meta-analysis of the cutting-edge research on large-cohort studies of what sugar does to populations across the world, alongside his own clinical observations, has him credited with starting the war on sugar. When it reaches the enemy status of tobacco, it will be because of Lustig.
“Politicians have to come in and reset the playing field, as they have with any substance that is toxic and abused, ubiquitous and with negative consequence for society,” he says. “Alcohol, cigarettes, cocaine. We don’t have to ban any of them. We don’t have to ban sugar. But the food industry cannot be given carte blanche. They’re allowed to make money, but they’re not allowed to make money by making people sick.”
Lustig argues that sugar creates an appetite for itself by a determinable hormonal mechanism – a cycle, he says, that you could no more break with willpower than you could stop feeling thirsty through sheer strength of character. He argues that the hormone related to stress, cortisol, is partly to blame. “When cortisol floods the bloodstream, it raises blood pressure; increases the blood glucose level, which can precipitate diabetes. Human research shows that cortisol specifically increases caloric intake of ‘comfort foods’.” High cortisol levels during sleep, for instance, interfere with restfulness, and increase the hunger hormone ghrelin the next day. This differs from person to person, but I was jolted by recognition of the outrageous deliciousness of doughnuts when I haven’t slept well.
“The problem in obesity is not excess weight,” Lustig says, in the central London hotel that he has made his anti-metabolic illness HQ. “The problem with obesity is that the brain is not seeing the excess weight.” The brain can’t see it because appetite is determined by a binary system. You’re either in anorexigenesis – “I’m not hungry and I can burn energy” – or you’re in orexigenesis – “I’m hungry and I want to store energy.” The flip switch is your leptin level (the hormone that regulates your body fat) but too much insulin in your system blocks the leptin signal.
It helps here if you have ever been pregnant or remember much of puberty and that savage hunger; the way it can trick you out of your best intentions, the lure of ridiculous foods: six-month-old Christmas cake, sweets from a bin. If you’re leptin resistant – that is, if your insulin is too high as a result of your sugar intake – you’ll feel like that all the time.
Telling people to simply lose weight, he tells me, “is physiologically impossible and it’s clinically dangerous. It’s a goal that’s not achievable.” He explains further in the book: “Biochemistry drives behaviour. You see a patient who drinks 10 gallons of water a day and urinates 10 gallons of water a day. What is wrong with him? Could he have a behavioural disorder and be a psychogenic water drinker? Could be. Much more likely he has diabetes.” To extend that, you could tell people with diabetes not to drink water, and 3% of them might succeed – the outliers. But that wouldn’t help the other 97% just as losing the weight doesn’t, long-term, solve the metabolic syndrome – the addiction to sugar – of which obesity is symptomatic.
Many studies have suggested that diets tend to work for two months, some for as long as six. “That’s what the data show. And then everybody’s weight comes roaring back.” During his own time working night shifts, Lustig gained 3st, which he never lost and now uses exuberantly to make two points. The first is that weight is extremely hard to lose, and the second – more important, I think – is that he’s no diet and fitness guru himself. He doesn’t want everybody to be perfect: he’s just a guy who doesn’t want to surrender civilisation to diseases caused by industry. “I’m not a fitness guru,” he says, puckishly. “I’m 45lb overweight!”
“Sugar causes diseases: unrelated to their calories and unrelated to the attendant weight gain. It’s an independent primary-risk factor. Now, there will be food-industry people who deny it until the day they die, because their livelihood depends on it.” And here we have the reason why he sees this as a crusade and not a diet book, the reason that Lustig is in London and not Washington. This is an industry problem; the obesity epidemic began in 1980. Back then, nobody knew about leptin. And nobody knew about insulin resistance until 1984.
“What they knew was, when they took the fat out they had to put the sugar in, and when they did that, people bought more. And when they added more, people bought more, and so they kept on doing it. And that’s how we got up to current levels of consumption.” Approximately 80% of the 600,000 packaged foods you can buy in the US have added calorific sweeteners (this includes bread, burgers, things you wouldn’t add sugar to if you were making them from scratch). Daily fructose consumption has doubled in the past 30 years in the US, a pattern also observable (though not identical) here, in Canada, Malaysia, India, right across the developed and developing world. World sugar consumption has tripled in the past 50 years, while the population has only doubled; it makes sense of the obesity pandemic.
“It would have happened decades earlier; the reason it didn’t was that sugar wasn’t cheap. The thing that made it cheap was high-fructose corn syrup. They didn’t necessarily know the physiology of it, but they knew the economics of it.” Adding sugar to everyday food has become as much about the industry prolonging the shelf life as it has about palatability; if you’re shopping from corner shops, you’re likely to be eating unnecessary sugar in pretty well everything. It is difficult to remain healthy in these conditions. “You here in Britain are light years ahead of us in terms of understanding the problem. We don’t get it in the US, we have this libertarian streak. You don’t have that. You’re going to solve it first. So it’s in my best interests to help you, because that will help me solve it back there.”
The problem has mushroomed all over the world in 30 years and is driven by the profits of the food and diet industries combined. We’re not looking at a global pandemic of individual greed and fecklessness: it would be impossible for the citizens of the world to coordinate their human weaknesses with that level of accuracy. Once you stop seeing it as a problem of personal responsibility it’s easier to accept how profound and serious the war on sugar is. Life doesn’t have to become wholemeal and joyless, but traffic-light systems and five-a-day messaging are under-ambitious.
“The problem isn’t a knowledge deficit,” an obesity counsellor once told me. “There isn’t a fat person on Earth who doesn’t know vegetables are good for you.” Lustig agrees. “I, personally, don’t have a lot of hope that those things will turn things around. Education has not solved any substance of abuse. This is a substance of abuse. So you need two things, you need personal intervention and you need societal intervention. Rehab and laws, rehab and laws. Education would come in with rehab. But we need laws.”
Read the entire article here.
Image: Molecular diagrams of sucrose (left) and fructose (right). Courtesy of Wikipedia.
It may not be you. You may not be the person who has tens of thousands of unread emails scattered across various email accounts. However, you know someone just like this — buried in a virtual avalanche of unopened text, unable to extricate herself (or himself) and with no pragmatic plan to tackle the digital morass.
Washington Post writer Brigid Schulte has some ideas to help your friend (or you of course — your secret is safe with us).
From the Washington Post:
I was drowning in e-mail. Overwhelmed. Overloaded. Spending hours a day, it seemed, roiling in an unending onslaught of info turds and falling further and further behind. The day I returned from a two-week break, I had 23,768 messages in my inbox. And 14,460 of them were unread.
I had to do something. I kept missing stuff. Forgetting stuff. Apologizing. And getting miffed and increasingly angry e-mails from friends and others who wondered why I was ignoring them. It wasn’t just vacation that put me so far behind. I’d been behind for more than a year. Vacation only made it worse. Every time I thought of my inbox, I’d start to hyperventilate.
I’d tried tackling it before: One night a few months ago, I was determined to stay at my desk until I’d powered through all the unread e-mails. At dawn, I was still powering through and nowhere near the end. And before long, the inbox was just as crammed as it had been before I lost that entire night’s sleep.
On the advice of a friend, I’d even hired a Virtual Assistant to help me with the backlog. But I had no idea how to use one. And though I’d read about people declaring e-mail bankruptcy when their inbox was overflowing — deleting everything and starting over from scratch — I was positive there were gems somewhere in that junk, and I couldn’t bear to lose them.
I knew I wasn’t alone. I’d get automatic response messages saying someone was on vacation and the only way they could relax was by telling me they’d never, ever look at my e-mail, so please send it again when they returned. My friend, Georgetown law professor Rosa Brooks, often sends out this auto response: “My inbox looks like Pompeii, post-volcano. Will respond as soon as I have time to excavate.” And another friend, whenever an e-mail is longer than one or two lines, sends a short note, “This sounds like a conversation,” and she won’t respond unless you call her.
E-mail made the late writer Nora Ephron’s list of the 22 things she won’t miss in life. Twice. In 2013, more than 182 billion e-mails were sent every day, no doubt clogging up millions of inboxes around the globe.
Bordering on despair, I sought help from four productivity gurus. And, following their advice, in two weeks of obsession-bordering-on-compulsion, my inbox was down to zero.
CREATE A SYSTEM. Julie Gray, a time coach who helps people dig out of e-mail overload all the time, said the first thing I had to change was my mind.
“This is such a pervasive problem. People think, ‘What am I doing wrong?’ They think they don’t have discipline or focus or that there’s some huge character flaw, and they’re beating themselves up all the time. Which only makes it worse,” she said.
“So I first start changing their e-mail mindset from ‘This is an example of my failure,’ to ‘This just means I haven’t found the right system for me yet.’ It’s really all about finding your own path through the craziness.”
Do not spend another minute on e-mail, she admonished me, until you’ve begun to figure out a system. Otherwise, she said, I’d never dig out.
So we talked systems. It soon became clear that I’d created a really great e-mail system for when I was writing my book — ironically enough, on being overwhelmed — spending most of my time not at all overwhelmed in yoga pants in my home office working on my iMac. I was a follower of Randy Pausch who wrote, in “The Last Lecture,” to keep your e-mail inbox down to one page and religiously file everything once you’ve handled it. And I had for a couple years.
But now that I was traveling around the country to talk about the book, and back at work at The Washington Post, using my laptop, iPhone and iPad, that system was completely broken. I had six different e-mail accounts. And my main Verizon e-mail that I’d used for years and the Mac Mail inbox with meticulous file folders that I loved on my iMac didn’t sync across any of them.
Gray asked: “If everything just blew up today, and you had to start over, how would you set up your system?”
I wanted one inbox. One e-mail account. And I wanted the same inbox on all my devices. If I deleted an e-mail on my laptop, I wanted it deleted on my iMac. If I put an e-mail into a folder on my iMac, I wanted that same folder on my laptop.
So I decided to use Gmail, which does sync, as my main account. I set up an auto responder on my Verizon e-mail saying I was no longer using it and directing people to my Gmail account. I updated all my accounts to send to Gmail. And I spent hours on the phone with Apple one Sunday (thank you, Chazz,) to get my Gmail account set up in my beloved Mac mail inbox that would sync. Then I transferred old files and created new ones on Gmail. I had to keep my Washington Post account separate, but that wasn’t the real problem.
All systems go.
Read the entire article here.
Image courtesy of Google Search.
Privacy, and the lack thereof, is much in the news and on our minds. New revelations of data breaches, phone taps, corporate hackers and governmental overreach surface on a daily basis. So, it is no surprise to learn that researchers have found a cheap way to eavesdrop on our conversations via a potato chip (crisp, to our British-English readers) packet. No news yet on which flavor of chip makes for the best spying!
From ars technica:
Watch enough spy thrillers, and you’ll undoubtedly see someone setting up a bit of equipment that points a laser at a distant window, letting the snoop listen to conversations on the other side of the glass. This isn’t something Hollywood made up; high-tech snooping devices of this sort do exist, and they take advantage of the extremely high-precision measurements made possible with lasers in order to measure the subtle vibrations caused by sound waves.
A team of researchers has now shown, however, that you can skip the lasers. All you really need is a consumer-level digital camera and a conveniently located bag of Doritos. A glass of water or a plant would also do.
Despite the differences in the technology involved, both approaches rely on the same principle: sound travels on waves of higher and lower pressure in the air. When these waves reach a flexible object, they set off small vibrations in the object. If you can detect these vibrations, it’s possible to reconstruct the sound. Laser-based systems detect the vibrations by watching for changes in the reflections of the laser light, but researchers wondered whether you could simply observe the object directly, using the ambient light it reflects. (The team involved researchers at MIT, Adobe Research, and Microsoft Research.)
The research team started with a simple test system made from a loudspeaker playing a rising tone, a high-speed camera, and a variety of objects: water, cardboard, a candy wrapper, some metallic foil, and (as a control) a brick. Each of these (even the brick) showed some response at the lowest end of the tonal range, but the other objects, particularly the cardboard and foil, had a response into much higher tonal regions. To observe the changes in ambient light, the camera didn’t have to capture the object at high resolution—it was used at 700 x 700 pixels or less—but it did have to be high-speed, capturing as many as 20,000 frames a second.
Processing the images wasn’t simple, however. A computer had to perform a weighted average over all the pixels captured, and even a twin 3.5GHz machine with 32GB of RAM took more than two hours to process one capture. Nevertheless, the results were impressive, as the algorithm was able to detect motion on the order of a thousandth of a pixel. This enabled the system to recreate the audio waves emitted by the loudspeaker.
Most of the rest of the paper describing the results involved making things harder on the system, as the researchers shifted to using human voices and moving the camera outside the room. They also showed that pre-testing the vibrating object’s response to a tone scale could help them improve their processing.
But perhaps the biggest surprise came when they showed that they didn’t actually need a specialized, high-speed camera. It turns out that most consumer-grade equipment doesn’t expose its entire sensor at once and instead scans an image across the sensor grid in a line-by-line fashion. Using a consumer video camera, the researchers were able to determine that there’s a 16 microsecond delay between each line, with a five millisecond delay between frames. Using this information, they treated each line as a separate exposure and were able to reproduce sound that way.
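The core trick the excerpt describes — treating tiny, tone-driven changes in a frame's pixels as samples of an audio signal, then pulling the tone back out — can be illustrated with a toy numpy sketch. To be clear, this is not the researchers' actual pipeline (which measures sub-pixel motion, not brightness); the frame size, frame rate, tone frequency and noise levels below are all made-up values chosen just to show the principle of recovering a frequency from a per-frame average:

```python
import numpy as np

# Toy illustration (hypothetical parameters, not the paper's method):
# a pure tone subtly modulates an object's brightness, and the
# per-frame spatial average of a high-speed video recovers the tone.

fps = 2000            # assumed high-speed frame rate (frames/second)
tone_hz = 110         # frequency of the hidden tone
n_frames = 2000       # one second of "video"

rng = np.random.default_rng(0)
t = np.arange(n_frames) / fps

# Each 32x32 "frame": constant scene + tiny tone-driven brightness
# shift + per-pixel sensor noise
frames = (
    100.0
    + 0.05 * np.sin(2 * np.pi * tone_hz * t)[:, None, None]
    + 0.01 * rng.standard_normal((n_frames, 32, 32))
)

# Averaging over all pixels turns the video into a 1-D "audio" signal;
# averaging also suppresses the independent per-pixel noise
signal = frames.mean(axis=(1, 2))
signal -= signal.mean()  # remove the DC (constant scene) component

# Locate the dominant frequency with a real-input FFT
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
recovered = freqs[np.argmax(spectrum)]
print(f"recovered tone: {recovered:.1f} Hz")  # prints: recovered tone: 110.0 Hz
```

The reason this works even with noise far larger than the signal at any single pixel is the same reason the researchers' weighted average works: averaging 1,024 pixels shrinks the noise by a factor of about 32, while the tone contributes coherently to every pixel.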
Read the entire article here.
Image courtesy of Google Search.
Privacy is still a valued and valuable right. It should not be a mere benefit in a democratic society. But in our current age, privacy is becoming an increasingly threatened species. We are surrounded by social networks that share and mine our behaviors, and we are assaulted by snoopers and spooks from local and national governments.
From the Observer:
We have come to the end of privacy; our private lives, as our grandparents would have recognised them, have been winnowed away to the realm of the shameful and secret. To quote ex-tabloid hack Paul McMullan, “privacy is for paedos”. Insidiously, through small concessions that only mounted up over time, we have signed away rights and privileges that other generations fought for, undermining the very cornerstones of our personalities in the process. While outposts of civilisation fight pyrrhic battles, unplugging themselves from the web – “going dark” – the rest of us have come to accept that the majority of our social, financial and even sexual interactions take place over the internet and that someone, somewhere, whether state, press or corporation, is watching.
The past few years have brought an avalanche of news about the extent to which our communications are being monitored: WikiLeaks, the phone-hacking scandal, the Snowden files. Uproar greeted revelations about Facebook’s “emotional contagion” experiment (where it tweaked mathematical formulae driving the news feeds of 700,000 of its members in order to prompt different emotional responses). Cesar A Hidalgo of the Massachusetts Institute of Technology described the Facebook news feed as “like a sausage… Everyone eats it, even though nobody knows how it is made”.
Sitting behind the outrage was a particularly modern form of disquiet – the knowledge that we are being manipulated, surveyed, rendered and that the intelligence behind this is artificial as well as human. Everything we do on the web, from our social media interactions to our shopping on Amazon, to our Netflix selections, is driven by complex mathematical formulae that are invisible and arcane.
Most recently, campaigners’ anger has turned upon the so-called Drip (Data Retention and Investigatory Powers) bill in the UK, which will see internet and telephone companies forced to retain and store their customers’ communications (and provide access to this data to police, government and up to 600 public bodies). Every week, it seems, brings a new furore over corporations – Apple, Google, Facebook – sidling into the private sphere. Often, it’s unclear whether the companies act brazenly because our governments play so fast and loose with their citizens’ privacy (“If you have nothing to hide, you’ve nothing to fear,” William Hague famously intoned); or if governments see corporations feasting upon the private lives of their users and have taken this as a licence to snoop, pry, survey.
We, the public, have looked on, at first horrified, then cynical, then bored by the revelations, by the well-meaning but seemingly useless protests. But what is the personal and psychological impact of this loss of privacy? What legal protection is afforded to those wishing to defend themselves against intrusion? Is it too late to stem the tide now that scenes from science fiction have become part of the fabric of our everyday world?
Novels have long been the province of the great What If?, allowing us to see the ramifications from present events extending into the murky future. As long ago as 1921, Yevgeny Zamyatin imagined One State, the transparent society of his dystopian novel, We. For Orwell, Huxley, Bradbury, Atwood and many others, the loss of privacy was one of the establishing nightmares of the totalitarian future. Dave Eggers’s 2013 novel The Circle paints a portrait of an America without privacy, where a vast, internet-based, multimedia empire surveys and controls the lives of its people, relying on strict adherence to its motto: “Secrets are lies, sharing is caring, and privacy is theft.” We watch as the heroine, Mae, disintegrates under the pressure of scrutiny, finally becoming one of the faceless, obedient hordes. A contemporary (and because of this, even more chilling) account of life lived in the glare of the privacy-free internet is Nikesh Shukla’s Meatspace, which charts the existence of a lonely writer whose only escape is into the shallows of the web. “The first and last thing I do every day,” the book begins, “is see what strangers are saying about me.”
Our age has seen an almost complete conflation of the previously separate spheres of the private and the secret. A taint of shame has crept over from the secret into the private so that anything that is kept from the public gaze is perceived as suspect. This, I think, is why defecation is so often used as an example of the private sphere. Sex and shitting were the only actions that the authorities in Zamyatin’s One State permitted to take place in private, and these remain the battlegrounds of the privacy debate almost a century later. A rather prim leaked memo from a GCHQ operative monitoring Yahoo webcams notes that “a surprising number of people use webcam conversations to show intimate parts of their body to the other person”.
It is to the bathroom that Max Mosley turns when we speak about his own campaign for privacy. “The need for a private life is something that is completely subjective,” he tells me. “You either would mind somebody publishing a film of you doing your ablutions in the morning or you wouldn’t. Personally I would and I think most people would.” In 2008, Mosley’s “sick Nazi orgy”, as the News of the World glossed it, featured in photographs published first in the pages of the tabloid and then across the internet. Mosley’s defence argued, successfully, that the romp involved nothing more than a “standard S&M prison scenario” and the former president of the FIA won £60,000 damages under Article 8 of the European Convention on Human Rights. Now he has rounded on Google and the continued presence of both photographs and allegations on websites accessed via the company’s search engine. If you type “Max Mosley” into Google, the eager autocomplete presents you with “video,” “case”, “scandal” and “with prostitutes”. Half-way down the first page of the search we find a link to a professional-looking YouTube video montage of the NotW story, with no acknowledgment that the claims were later disproved. I watch it several times. I feel a bit grubby.
“The moment the Nazi element of the case fell apart,” Mosley tells me, “which it did immediately, because it was a lie, any claim for public interest also fell apart.”
Here we have a clear example of the blurred lines between secrecy and privacy. Mosley believed that what he chose to do in his private life, even if it included whips and nipple-clamps, should remain just that – private. The News of the World, on the other hand, thought it had uncovered a shameful secret that, given Mosley’s professional position, justified publication. There is a momentary tremor in Mosley’s otherwise fluid delivery as he speaks about the sense of invasion. “Your privacy or your private life belongs to you. Some of it you may choose to make available, some of it should be made available, because it’s in the public interest to make it known. The rest should be yours alone. And if anyone takes it from you, that’s theft and it’s the same as the theft of property.”
Mosley has scored some recent successes, notably in continental Europe, where he has found a culture more suspicious of Google’s sweeping powers than in Britain or, particularly, the US. Courts in France and then, interestingly, Germany, ordered Google to remove pictures of the orgy permanently, with far-reaching consequences for the company. Google is appealing against the rulings, seeing it as absurd that “providers are required to monitor even the smallest components of content they transmit or store for their users”. But Mosley last week extended his action to the UK, filing a claim in the high court in London.
Mosley’s willingness to continue fighting, even when he knows that it means keeping alive the image of his white, septuagenarian buttocks in the minds (if not on the computers) of the public, seems impressively principled. He has fallen victim to what is known as the Streisand Effect, where his very attempt to hide information about himself has led to its proliferation (in 2003 Barbra Streisand tried to stop people taking pictures of her Malibu home, ensuring photos were posted far and wide). Despite this, he continues to battle – both in court, in the media and by directly confronting the websites that continue to display the pictures. It is as if he is using that initial stab of shame, turning it against those who sought to humiliate him. It is noticeable that, having been accused of fetishising one dark period of German history, he uses another to attack Google. “I think, because of the Stasi,” he says, “the Germans can understand that there isn’t a huge difference between the state watching everything you do and Google watching everything you do. Except that, in most European countries, the state tends to be an elected body, whereas Google isn’t. There’s not a lot of difference between the actions of the government of East Germany and the actions of Google.”
All this brings us to some fundamental questions about the role of search engines. Is Google the de facto librarian of the internet, given that it is estimated to handle 40% of all traffic? Is it something more than a librarian, since its algorithms carefully (and with increasing use of your personal data) select the sites it wants you to view? To what extent can Google be held responsible for the content it puts before us?
Read the entire article here.
Qatar hosts the World Cup in 2022. This gives the emirate another 8 years to finish construction of the various football venues, hotels and infrastructure required to support the world’s biggest single sporting event.
Perhaps it will also give the emirate some time to clean up its appalling record of worker abuse and human rights violations. Numerous laborers have died during the construction process, while others are paid minimal wages or not at all. And to top it off, most workers live in atrocious conditions, cannot move freely, and can neither change jobs nor repatriate — many come from the Indian subcontinent or East Asia. You could be forgiven for labeling these people indentured servants rather than workers.
From the Guardian:
Migrant workers who built luxury offices used by Qatar’s 2022 football World Cup organisers have told the Guardian they have not been paid for more than a year and are now working illegally from cockroach-infested lodgings.
Officials in Qatar’s Supreme Committee for Delivery and Legacy have been using offices on the 38th and 39th floors of Doha’s landmark al-Bidda skyscraper – known as the Tower of Football – which were fitted out by men from Nepal, Sri Lanka and India who say they have not been paid for up to 13 months’ work.
The project, a Guardian investigation shows, was directly commissioned by the Qatar government and the workers’ plight is set to raise fresh doubts over the autocratic emirate’s commitment to labour rights as construction starts this year on five new stadiums for the World Cup.
The offices, which cost £2.5m to fit, feature expensive etched glass, handmade Italian furniture, and even a heated executive toilet, project sources said. Yet some of the workers have not been paid, despite complaining to the Qatari authorities months ago and being owed wages as modest as £6 a day.
By the end of this year, several hundred thousand extra migrant workers from some of the world’s poorest countries are scheduled to have travelled to Qatar to build World Cup facilities and infrastructure. The acceleration in the building programme comes amid international concern over a rising death toll among migrant workers and the use of forced labour.
“We don’t know how much they are spending on the World Cup, but we just need our salary,” said one worker who had lost a year’s pay on the project. “We were working, but not getting the salary. The government, the company: just provide the money.”
The migrants are squeezed seven to a room, sleeping on thin, dirty mattresses on the floor and on bunk beds, in breach of Qatar’s own labour standards. They live in constant fear of imprisonment because they have been left without paperwork after the contractor on the project, Lee Trading and Contracting, collapsed. They say they are now being exploited on wages as low as 50p an hour.
Their case was raised with Qatar’s prime minister by Amnesty International last November, but the workers have said 13 of them remain stranded in Qatar. Despite having done nothing wrong, five have even been arrested and imprisoned by Qatari police because they did not have ID papers. Legal claims lodged against the former employer at the labour court in November have proved fruitless. They are so poor they can no longer afford the taxi to court to pursue their cases, they say.
A 35-year-old Nepalese worker and father of three said he too had lost a year’s pay: “If I had money to buy a ticket, I would go home.”
Qatar’s World Cup organising committee confirmed that it had been granted use of temporary offices on the floors fitted out by the unpaid workers. It said it was “heavily dismayed to learn of the behaviour of Lee Trading with regard to the timely payment of its workers”. The committee stressed it did not commission the firm. “We strongly disapprove and will continue to press for a speedy and fair conclusion to all cases,” it said.
Jim Murphy, the shadow international development secretary, said the revelation added to the pressure on the World Cup organising committee. “They work out of this building, but so far they can’t even deliver justice for the men who toiled at their own HQ,” he said.
Sharan Burrow, secretary general of the International Trade Union Confederation, said the workers’ treatment was criminal. “It is an appalling abuse of fundamental rights, yet there is no concern from the Qatar government unless they are found out,” she said. “In any other country you could prosecute this behaviour.”
Read the entire article here.
Image: Qatar. Courtesy of Google Maps.
Computer games have come a very long way since the pioneering days of Pong and Pac-Man. Games are now so realistic that many are indistinguishable from the real-world characters and scenarios they emulate. It is a testament to the skill and ingenuity of hardware and software engineers and the creativity of developers who bring all the diverse underlying elements of a game together. Now, however, they have a match in the form of a computer system that is able to generate richly imagined and rendered worlds for use in the games themselves. It’s all done through algorithms.
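The core idea behind such algorithmic worlds is that a single seed deterministically generates endless terrain, so nothing has to be stored or hand-crafted. The following is only an illustrative sketch of that technique — seeded value noise summed over octaves — not the actual method used by any particular game; the hash constants and parameters are made up for the example:

```python
import math

def hash_noise(x, y, seed):
    """Deterministic pseudo-random value in [0, 1) for integer grid point (x, y)."""
    n = (x * 73856093) ^ (y * 19349663) ^ (seed * 83492791)
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 2**32

def terrain_height(x, y, seed, octaves=4):
    """Sum several octaves of interpolated value noise for fractal-looking terrain."""
    total, amplitude, freq, norm = 0.0, 1.0, 1, 0.0
    for _ in range(octaves):
        # bilinear interpolation between the four surrounding grid points
        x0, y0 = math.floor(x * freq), math.floor(y * freq)
        fx, fy = x * freq - x0, y * freq - y0
        v = (hash_noise(x0,     y0,     seed) * (1 - fx) * (1 - fy) +
             hash_noise(x0 + 1, y0,     seed) * fx       * (1 - fy) +
             hash_noise(x0,     y0 + 1, seed) * (1 - fx) * fy +
             hash_noise(x0 + 1, y0 + 1, seed) * fx       * fy)
        total += amplitude * v
        norm += amplitude
        amplitude *= 0.5  # finer detail contributes less
        freq *= 2         # ... at double the spatial frequency
    return total / norm   # normalized to [0, 1)

# The same seed always reproduces the same world, so the "world" is just a number.
assert terrain_height(3.7, 8.2, seed=42) == terrain_height(3.7, 8.2, seed=42)
```

Because the height at any coordinate can be computed on demand, a game can render only the region around the player while implying a world of effectively unlimited size.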
From Technology Review:
Read the entire story here.
Video: No Man’s Sky. Courtesy of Hello Games.
The second amendment remains as strong as ever in the U.S. And, of course, so does the number of homicides and child deaths caused by guns. Sigh!
From the Guardian:
In February, a nine-year-old Arkansas boy called Hank asked his uncle if he could head off on his own from their remote camp to hunt a rabbit with his .22 calibre rifle. “I said all right,” recalled his uncle Brent later. “It wasn’t a concern. Some people are like, ‘a nine year old shouldn’t be off by himself,’ but he wasn’t an average nine year old.”
Hank was steeped in hunting: when he was two, his father, Brad, would put him in a rucksack on his back when he went turkey hunting. Brad regularly took Hank hunting and said that his son often went off hunting by himself. On this particular day, Hank and his uncle Brent had gone squirrel hunting together as his father was too sick to go.
When Hank didn’t return from hunting the rabbit, his uncle raised the alarm. His mother, Kelli, didn’t learn about his disappearance for seven hours. “They didn’t want to bother me unduly,” she says.
The following morning, though, after police, family and hundreds of locals searched around the camp, Hank’s body was found by a creek with a single bullet wound to the forehead. The cause of death was, according to the police, most likely a hunting accident.
“He slipped and the butt of the gun hit the ground and the gun fired,” says Kelli.
Kelli had recently bought the gun for Hank. “It was the first gun I had purchased for my son, just a youth .22 rifle. I never thought it would be a gun that would take his life.”
Both Kelli and Brad, from whom she is separated, believe that the gun was faulty – it shouldn’t have gone off unless the trigger was pulled, they claim. Since Hank’s death, she’s been posting warnings on her Facebook page about the gun her son used: “I wish someone else had posted warnings about it before what happened,” she says.
Had Kelli not bought the gun and had Brad not trained his son to use it, Hank would have celebrated his 10th birthday on 6 June, which his mother commemorated by posting Hank’s picture on her Facebook page with the message: “Happy Birthday Hank! Mommy loves you!”
Little Hank thus became one in a tally of what the makers of a Channel 4 documentary called Kids and Guns claim to be 3,000 American children who die each year from gun-related accidents. A recent Yale University study found that more than 7,000 US children and adolescents are hospitalised or killed by guns each year and estimates that about 20 children a day are treated in US emergency rooms following incidents involving guns.
Hank’s story is striking, certainly for British readers, for two reasons. One, it dramatises how hunting is for many Americans not the privileged pursuit it is overwhelmingly here, but a traditional family activity as much to do with foraging for food as it is a sport.
Francine Shaw, who directed Kids and Guns, says: “In rural America … people hunt to eat.”
Kelli has a fond memory of her son coming home with what he’d shot. “He’d come in and say: “Momma – I’ve got some squirrel to cook.” And I’d say ‘Gee, thanks.’ That child was happy to bring home meat. He was the happiest child when he came in from shooting.”
But Hank’s story is also striking because it shows how raising kids to hunt and shoot is seen as good parenting, perhaps even as an essential part of bringing up children in America – a society rife with guns and temperamentally incapable of overturning the second amendment that confers the right to bear arms, no matter how many innocent Americans die or get maimed as a result.
“People know I was a good mother and loved him dearly,” says Kelli. “We were both really good parents and no one has said anything hateful to us. The only thing that has been said is in a news report about a nine year old being allowed to hunt alone.”
Does Kelli regret that Hank was allowed to hunt alone at that young age? “Obviously I do, because I’ve lost my son,” she tells me. But she doesn’t blame Brent for letting him go off from camp unsupervised with a gun.
“We’re sure not anti-gun here, but do I wish I could go back in time and not buy that gun? Yes I do. I know you in England don’t have guns. I wish I could go back and have my son back. I would live in England, away from the guns.”
Read the entire article here.
Infographic courtesy of Care2 via visua.ly