Sartre: Forever Linked with Mrs Premise and Mrs Conclusion


One has to wonder how Jean-Paul Sartre would have been regarded today had he accepted the Nobel Prize in Literature in 1964, or had the characters of Monty Python not used him as a punching bag in one of their infamous, satirical philosopher sketches:

Mrs Conclusion: What was Jean-Paul like? 

Mrs Premise: Well, you know, a bit moody. Yes, he didn’t join in the fun much. Just sat there thinking. Still, Mr Rotter caught him a few times with the whoopee cushion. (she demonstrates) Le Capitalisme et La Bourgeoisie ils sont la même chose [capitalism and the bourgeoisie are the same thing]… Oooh we did laugh…

From the Guardian:

In this age in which all shall have prizes, in which every winning author knows what’s necessary in the post-award trial-by-photoshoot (Book jacket pressed to chest? Check. Wall-to-wall media? Check. Backdrop of sponsor’s logo? Check) and in which scarcely anyone has the couilles, as they say in France, to politely tell judges where they can put their prize, how lovely to recall what happened on 22 October 1964, when Jean-Paul Sartre turned down the Nobel prize for literature.

“I have always declined official honours,” he explained at the time. “A writer should not allow himself to be turned into an institution. This attitude is based on my conception of the writer’s enterprise. A writer who adopts political, social or literary positions must act only within the means that are his own – that is, the written word.”

Throughout his life, Sartre agonised about the purpose of literature. In 1947’s What is Literature?, he jettisoned a sacred notion of literature as capable of replacing outmoded religious beliefs in favour of the view that it should have a committed social function. However, the last pages of his enduringly brilliant memoir Words, published the same year as the Nobel refusal, despair over that function: “For a long time I looked on my pen as a sword; now I know how powerless we are.” Poetry, wrote Auden, makes nothing happen; politically committed literature, Sartre was saying, was no better. In rejecting the honour, Sartre worried that the Nobel was reserved for “the writers of the west or the rebels of the east”. He didn’t damn the Nobel in quite the bracing terms that led Hari Kunzru to decline the 2003 John Llewellyn Rhys prize, sponsored by the Mail on Sunday (“As the child of an immigrant, I am only too aware of the poisonous effect of the Mail’s editorial line”), but gently pointed out its Eurocentric shortcomings. Plus, one might say 50 years on, ça change. Sartre said that he might have accepted the Nobel if it had been offered to him during France’s imperial war in Algeria, which he vehemently opposed, because then the award would have helped in the struggle, rather than making Sartre into a brand, an institution, a depoliticised commodity. Truly, it’s difficult not to respect his compunctions.

But the story is odder than that. Sartre read in Figaro Littéraire that he was in the frame for the award, so he wrote to the Swedish Academy saying he didn’t want the honour. He was offered it anyway. “I was not aware at the time that the Nobel prize is awarded without consulting the opinion of the recipient,” he said. “But I now understand that when the Swedish Academy has made a decision, it cannot subsequently revoke it.”

Regrets? Sartre had a few – at least about the money. His principled stand cost him 250,000 kronor (about £21,000), prize money that, he reflected in his refusal statement, he could have donated to the “apartheid committee in London” who badly needed support at the time. All of which makes one wonder what his compatriot, Patrick Modiano, the 15th Frenchman to win the Nobel for literature earlier this month, did with his 8m kronor (about £700,000).

The Swedish Academy had selected Sartre for having “exerted a far-reaching influence on our age”. Is this still the case? Though he was lionised by student radicals in Paris in May 1968, his reputation as a philosopher was on the wane even then. His brand of existentialism had been eclipsed by structuralists (such as Lévi-Strauss and Althusser) and post-structuralists (such as Derrida and Deleuze). Indeed, Derrida would spend a great deal of effort deriding Sartrean existentialism as a misconstrual of Heidegger. Anglo-Saxon analytic philosophy, with the notable exception of Iris Murdoch and Arthur Danto, has for the most part been sniffy about Sartre’s philosophical credentials.

Sartre’s later reputation probably hasn’t benefited from being championed by Paris’s philosophical lightweight, Bernard-Henri Lévy, who subtitled his biography of his hero The Philosopher of the Twentieth Century (Really? Not Heidegger, Russell, Wittgenstein or Adorno?); still less by his appearance in Monty Python’s least funny philosophy sketch, “Mrs Premise and Mrs Conclusion visit Jean-Paul Sartre at his Paris home”. Sartre has become more risible than lisible: unremittingly depicted as laughable philosopher toad – ugly, randy, incomprehensible, forever excitably over-caffeinated at Les Deux Magots with Simone de Beauvoir, encircled with pipe smoke and mired in philosophical jargon, not so much a man as a stock pantomime figure. He deserves better.

How then should we approach Sartre’s writings in 2014? So much of his lifelong intellectual struggle and his work still seems pertinent. When we read the “Bad Faith” section of Being and Nothingness, it is hard not to be struck by the image of the waiter who is too ingratiating and mannered in his gestures, and how that image pertains to the dismal drama of inauthentic self-performance that we find in our culture today. When we watch his play Huis Clos, we might well think of how disastrous our relations with other people are, since we now require them, more than anything else, to confirm our self-images, while they, no less vexingly, chiefly need us to confirm theirs. When we read his claim that humans can, through imagination and action, change our destiny, we feel something of the burden of responsibility of choice that makes us moral beings. True, when we read such sentences as “the being by which Nothingness comes to the world must be its own Nothingness”, we might want to retreat to a dark room for a good cry, but let’s not spoil the story.

His lifelong commitments to socialism, anti-fascism and anti-imperialism still resonate. When we read, in his novel Nausea, of the protagonist Antoine Roquentin in Bouville’s art gallery, looking at pictures of self-satisfied local worthies, we can apply his fury at their subjects’ self-entitlement to today’s images of the powers that be (the suppressed photo, for example, of Cameron and his cronies in Bullingdon pomp), and share his disgust that such men know nothing of what the world is really like in all its absurd contingency.

In his short story Intimacy, we confront a character who, like all of us on occasion, is afraid of the burden of freedom and does everything possible to make others take her decisions for her. When we read his distinctions between being-in-itself (être-en-soi), being-for-itself (être-pour-soi) and being-for-others (être-pour-autrui), we are encouraged to think about the tragicomic nature of what it is to be human – a longing for full control over one’s destiny and for absolute identity, and at the same time, a realisation of the futility of that wish.

The existential plight of humanity, our absurd lot, our moral and political responsibilities that Sartre so brilliantly identified have not gone away; rather, we have chosen the easy path of ignoring them. That is not a surprise: for Sartre, such refusal to accept what it is to be human was overwhelmingly, paradoxically, what humans do.

Read the entire article here.

Image: Jean-Paul Sartre (c1950). Courtesy: Archivo del diario Clarín, Buenos Aires, Argentina

 


Colorless Green Ideas Sleep Furiously

Linguist, philosopher, and more recently political activist, Noam Chomsky penned the title phrase in the late 1950s. The sentence is grammatically correct, but semantically nonsensical. Some now maintain that many of Chomsky’s early ideas on the innateness of human language are equally nonsensical. Chomsky popularized the idea that language is innate to humans; that somehow and somewhere the minds of human infants contain a mechanism that can make sense of language by applying rules encoded in and activated by our genes. Steven Pinker expanded on Chomsky’s theory by proposing that the mind contains an innate device that encodes a common, universal grammar, which is foundational to all languages across all human societies.
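
That distinction, grammatical but meaningless, is easy to demonstrate mechanically: a parser equipped only with syntax rules will accept the sentence even though it denotes nothing. Below is a minimal sketch, assuming Python with the NLTK library installed; the toy context-free grammar is purely illustrative and is not Chomsky's own.

    import nltk

    # A toy context-free grammar (illustrative only), just rich enough
    # to cover Chomsky's famous sentence.
    grammar = nltk.CFG.fromstring("""
        S   -> NP VP
        NP  -> Adj NP | N
        VP  -> V Adv
        Adj -> 'colorless' | 'green'
        N   -> 'ideas'
        V   -> 'sleep'
        Adv -> 'furiously'
    """)

    parser = nltk.ChartParser(grammar)
    tokens = "colorless green ideas sleep furiously".split()

    # The parser checks syntax only; it knows nothing about meaning,
    # so the semantically empty sentence parses without complaint.
    for tree in parser.parse(tokens):
        tree.pretty_print()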

Recently, however, this notion has come under increasing criticism. A growing number of prominent linguistics scholars, including Professor Vyvyan Evans, maintain that Chomsky’s and Pinker’s linguistic models are outdated — that a universal grammar is nothing but a finely tuned myth. Evans and others maintain that language arises from and is directly embodied in experience.

From the New Scientist:

The ideas of Noam Chomsky, popularised by Steven Pinker, come under fire in Vyvyan Evans’s book The Language Myth: Why language is not an instinct

Is the way we think about language on the cusp of a revolution? After reading The Language Myth, it certainly looks as if a major shift is in progress, one that will open people’s minds to liberating new ways of thinking about language.

I came away excited. I found that words aren’t so much things that can be limited by a dictionary definition but are encyclopaedic, pointing to sets of concepts. There is the intriguing notion that language will always be less rich than our ideas and there will always be things we cannot quite express. And there is the growing evidence that words are rooted in concepts built out of our bodily experience of living in the world.

Its author, Vyvyan Evans, is a professor of linguistics at Bangor University, UK, and his primary purpose is not so much to map out the revolution (that comes in a sequel) but to prepare you for it by sweeping out old ideas. The book is sure to whip up a storm, because in his sights are key ideas from some of the world’s great thinkers, including philosophers Noam Chomsky and Jerry Fodor.

Ideas about language that have entered the public consciousness are more myth than reality, Evans argues. Bestsellers by Steven Pinker, the Harvard University professor who popularised Chomsky in The Language Instinct, How the Mind Works and The Stuff of Thought, come in for particular criticism. “Science has moved on,” Evans writes. “And to end it all, Pinker is largely wrong, about language and about a number of other things too…”

The commonplace view of “language as instinct” is the myth Evans wants to destroy and he attempts the operation with great verve. The myth comes from the way children effortlessly learn languages just by listening to adults around them, without being aware explicitly of the governing grammatical rules.

This “miracle” of spontaneous learning led Chomsky to argue that grammar is stored in a module of the mind, a “language acquisition device”, waiting to be activated, stage-by-stage, when an infant encounters the jumble of language. The rules behind language are built into our genes.

This innate grammar is not the grammar of a school textbook, but a universal grammar, capable of generating the rules of any of the 7000 or so languages that a child might be exposed to, however different they might appear. In The Language Instinct, Pinker puts it this way: “a Universal Grammar, not reducible to history or cognition, underlies the human language instinct”. The search for that universal grammar has kept linguists busy for half a century.

They may have been chasing a mirage. Evans marshals impressive empirical evidence to take apart different facets of the “language instinct myth”. A key criticism is that the more languages are studied, the more their diversity becomes apparent and an underlying universal grammar less probable.

In a whistle-stop tour, Evans tells stories of languages with a completely free word order, including Jiwarli and Thalanyji from Australia. Then there’s the Inuit language Inuktitut, which builds sentences out of prefixes and suffixes to create giant words like tawakiqutiqarpiit, roughly meaning: “Do you have any tobacco for sale?” And there is the native Canadian language, Straits Salish, which appears not to have nouns or verbs.

An innate language module also looks shaky, says Evans, now scholars have watched languages emerge among communities of deaf people. A sign language is as rich grammatically as a spoken one, but new ones don’t appear fully formed as we might expect if grammar is laid out in our genes. Instead, they gain grammatical richness over several generations.

Now, too, we have detailed studies of how children acquire language. Grammatical sentences don’t start to pop out of their mouths at certain developmental stages, but rather bits and pieces emerge as children learn. At first, they use chunks of particular expressions they hear often, only gradually learning patterns and generalising to a fully fledged grammar. So grammars emerge from use, and the view of “language-as-instinct”, argues Evans, should be replaced by “language-as-use”.

The “innate” view also encounters a deep philosophical problem. If the rules of language are built into our genes, how is it that sentences mean something? How do they connect to our thoughts, concepts and to the outside world?

A solution from the language-as-instinct camp is that there is an internal language of thought called “mentalese”. In The Language Instinct, Pinker explains: “Knowing a language, then, is knowing how to translate mentalese into strings of words.” But philosophers are left arguing over the same question once removed: how does mentalese come to have meaning?

Read the entire article here.

 


The Italian Canary Sings

Those who decry benefits fraud in their own nations should look to the illustrious example of Italian “miner” Carlo Cani. His adventures in absconding from work over a period of 35 years (yes, years) would make a wonderful indie movie, and should be an inspiration to less ambitious slackers the world over.

From the Telegraph:

An Italian coal miner’s confession that he is drawing a pension despite hardly ever putting in a day’s work over a 35-year career has underlined the country’s problem with benefit fraud and its dysfunctional pension system.

Carlo Cani started work as a miner in 1980 but soon found that he suffered from claustrophobia and hated being underground.

He started doing everything he could to avoid hacking away at the coal face, inventing an imaginative range of excuses for not venturing down the mine in Sardinia where he was employed.

He pretended to be suffering from amnesia and haemorrhoids, rubbed coal dust into his eyes to feign an infection and on occasion staggered around pretending to be drunk.

The miner, now aged 60, managed to accumulate years of sick leave, apparently with the help of compliant doctors, and was able to stay at home to indulge his passion for jazz.

He also spent extended periods of time at home on reduced pay when demand for coal from the mine dipped, under an Italian system known as “cassa integrazione”, in which employees are kept on the payroll during periods of economic difficulty for their companies.

Despite his long periods of absence, he was still officially an employee of the mining company, Carbosulcis, and therefore eventually entitled to a pension.

“I invented everything – amnesia, pains, haemorrhoids, I used to lurch around as if I was drunk. I bumped my thumb on a wall and obviously you can’t work with a swollen thumb,” Mr Cani told La Stampa daily on Tuesday.

“Other times I would rub coal dust into my eyes. I just didn’t like the work – being a miner was not the job for me.”

But rather than find a different occupation, he managed to milk the system for 35 years, until retiring on a pension in 2006 at the age of just 52.

“I reached the pensionable age without hardly ever working. I hated being underground. Right from the start, I had no affinity for coal.”

He said he had “respect” for his fellow miners, who had earned their pensions after “years of sweat and back-breaking work”, while he had mostly rested at home.

The case only came to light this week but has caused such a furore in Italy that Mr Cani is now refusing to take telephone calls.

He could not be contacted but another Carlo Cani, who is no relation but lives in the same area of southern Sardinia and has his number listed in the phone book, said: “People round here are absolutely furious about this – to think that someone could skive off work for so long and still get his pension. He even seems to be proud of that fact.

“It’s shameful. This is a poor region and there is no work. All the young people are leaving and moving to England and Germany.”

The former miner’s work-shy ways have caused indignation in a country in which youth unemployment is more than 40 per cent.

Read the entire story here.

Image: Bituminous coal. The type of coal not mined by retired “miner” Carlo Cani. Courtesy of Wikipedia.


Cross-Connection Requires a Certain Daring

A previously unpublished essay by Isaac Asimov on the creative process shows us his well-reasoned thinking on the subject. While he believed that deriving new ideas could be done productively in a group, he seemed to gravitate more towards the notion of the lone creative genius. Both approaches, however, require the innovator(s) to cross-connect thoughts, often from disparate sources.

From Technology Review:

How do people get new ideas?

Presumably, the process of creativity, whatever it is, is essentially the same in all its branches and varieties, so that the evolution of a new art form, a new gadget, a new scientific principle, all involve common factors. We are most interested in the “creation” of a new scientific principle or a new application of an old one, but we can be general here.

One way of investigating the problem is to consider the great ideas of the past and see just how they were generated. Unfortunately, the method of generation is never clear even to the “generators” themselves.

But what if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.

There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”

Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).

Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.

Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.

That is the crucial point that is the rare characteristic that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”

But why didn’t he think of it? The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table. Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”

It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.

A person willing to fly in the face of reason, authority, and common sense must be a person of considerable self-assurance. Since he occurs only rarely, he must seem eccentric (in at least that respect) to the rest of us. A person eccentric in one respect is often eccentric in others.

Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)

Once you have the people you want, the next question is: Do you want to bring them together so that they may discuss the problem mutually, or should you inform each of the problem and allow them to work in isolation?

My feeling is that as far as creativity is concerned, isolation is required. The creative person is, in any case, continually working at it. His mind is shuffling his information at all times, even when he is not conscious of it. (The famous example of Kekule working out the structure of benzene in his sleep is well-known.)

The presence of others can only inhibit this process, since creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.

Nevertheless, a meeting of such people may be desirable for reasons other than the act of creation itself.

Read the entire article here.


Non-Adaptive Evolution of the Very Small

Is every feature that arises from evolution an adaptation? Some evolutionary biologists think not. That is, some traits arising over the course of evolution may be due to random occurrences that natural selection failed to discard. And, it seems that smaller organisms show this quite well. To many adaptationists this is heretical — but to some researchers it opens a new, fruitful avenue of inquiry, and may lead to a finer tuning of our understanding of the evolutionary process.
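
The core idea, that a trait with no selective advantage or disadvantage can simply persist, is easy to see in a toy simulation. The sketch below is a minimal Wright-Fisher-style model in Python with made-up parameters (it is not drawn from the essay excerpted next); it tracks two equally fit forms, say branched and unbranched, and shows their frequencies wandering at random rather than one being systematically weeded out, though either can still be lost by chance.

    import random

    def neutral_drift(pop_size=500, start_freq=0.5, generations=200, seed=7):
        """Wright-Fisher-style drift for a selectively neutral trait.

        Both forms ('branched' and 'unbranched') reproduce equally well,
        so each generation simply resamples the previous one at random.
        """
        rng = random.Random(seed)
        freq = start_freq
        history = [freq]
        for _ in range(generations):
            branched = sum(1 for _ in range(pop_size) if rng.random() < freq)
            freq = branched / pop_size
            history.append(freq)
        return history

    if __name__ == "__main__":
        traj = neutral_drift()
        print(f"branched-form frequency: start {traj[0]:.2f}, "
              f"after {len(traj) - 1} generations {traj[-1]:.2f}")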

From New Scientist:

I have spent my life working on slime moulds and they sent me a message that started me thinking. What puzzled me was that two different forms are found side-by-side in the soil everywhere from the tundra to the tropics. The obvious difference lies in the tiny stalks that disperse their spores. In one species this fruiting body is branched, in the other it is not.

I had assumed that the branched and the unbranched forms occupied separate ecological niches but I could not imagine what those niches might be. Perhaps there were none and neither shape had an advantage over the other, as far as natural selection was concerned.

I wrote this up and sent it to a wise and respected friend who responded with a furious letter saying that my conclusion was absurd: it was easy to imagine ways in which the two kinds of stalks might be separate adaptations and co-exist everywhere in the soil. This set me thinking again and I soon realised that both my position and his were guesses. They were hypotheses and neither could be proved.

There is no concept that is more central to evolution than natural selection, so adding this extra dimension of randomness was heresy. Because of the overwhelming success of Darwin’s natural selection, biologists – certainly all evolutionary biologists – find it hard to believe that a feature of any organism can have arisen (with minor exceptions) in any other way. Natural selection favours random genetic mutations that offer an advantage, therefore many people believe that all properties of an organism are an adaptation. If one cannot find the adaptive reason for a feature of an organism, one should just assume that there was once one, or that there is one that will be revealed in the future.

This matter has created some heated arguments. For example, the renowned biologists Stephen Jay Gould and Richard Lewontin wrote an inflammatory paper in 1979 attacking adaptionists for being like Dr Pangloss, the incurable optimist in Voltaire’s 1759 satire Candide. While their point was well taken, its aggressive tone produced counterattacks. Adaptionists assume that every feature of an organism arises as an adaption, but I assume that some features are the results of random mutations that escape being culled by natural selection. This is what I was suggesting for the branched and unbranched fruiting bodies of the slime moulds.

How can these organisms escape the stranglehold of selection? One explanation grabbed me and I have clung to it ever since; in fact it is the backbone of my new book. The reason that these organisms might have shapes that are not governed by natural selection is because they are so small. It turns out there are good reasons why this might be the case.

Development is a long, slow process for large organisms. Humans spend nine months in utero and keep growing in different ways for a long time after birth. An elephant’s gestation is even longer (about two years) and a mouse’s much shorter, but they are all vastly longer than a single-cell microorganism. Such small forms may divide every few hours; at most their development may span days, but whatever it is it will be a small fraction of that of a larger, more complex organism.

Large organisms develop in a series of steps usually beginning with the fertilisation of an egg that then goes through many cell divisions and an increase in size of the embryo, with many twists and turns as it progresses towards adulthood. These multitudinous steps involve the laying down of complex organs such as a heart or an eye.

Building a complex organism is an immense enterprise, and the steps are often interlocked in a sequence so that if an earlier step fails through a deleterious mutation, the result is very simple: the death of the embryo. I first came across this idea in a 1965 book by Lancelot Law Whyte called Internal Factors in Evolution and have been mystified ever since why the idea has been swallowed by oblivion. His thesis was straightforward. Not only is there selection of organisms in the environment – Darwinian natural selection, which is external – but there is also continuous internal selection during development. Maybe the idea was too simple and straightforward to have taken root.

This fits in neatly with my contention that the shape of microorganisms is more affected by randomness than for large, complex organisms. Being small means very few development steps, with little or no internal selection. The effect of a mutation is likely to be immediately evident in the external morphology, so adult variants are produced with large numbers of different shapes and there is an increased chance that some of these will be untouched by natural selection.

Compare this with what happens in a big, complex organism – a mammal, say. Only those mutations that occur at a late stage of development are likely to be viable – eye or hair colour in humans are obvious examples. Any unfavourable mutation that occurs earlier in development will likely be eliminated by internal selection.

Let us now examine the situation for microorganisms. What is the evidence that their shapes are less likely to be culled by natural selection? The best examples come from organisms that make mineral shells: Radiolaria (pictured) and diatoms with their silica skeletons and Foraminifera with their calciferous shells. About 50,000 species of radiolarians have been described, 100,000 species of diatoms and some 270,000 species among the Foraminifera – all with vastly different shapes. For example, radiolarian skeletons can be shaped like spiny balls, bells, crosses and octagonal pyramids, to name but a few.

If you are a strict adaptionist, you have to find a separate explanation for each shape. If you favour my suggestion that their shapes arose through random mutation and there is little or no selection, the problem vanishes. It turns out that this very problem concerned Darwin. In the third (and subsequent) editions of On the Origin of Species he has a passage that almost takes the wind out of my sails:

“If it were no advantage, these forms would be left by natural selection unimproved or but little improved; and might remain for indefinite ages in their present little advanced condition. And geology tells us that some of the lowest forms, as the infusoria and rhizopods, have remained for an enormous period in nearly their present state.”

Read the entire article here.


The Sandwich of Corporate Exploitation


If ever you needed a vivid example of corporate exploitation of the most vulnerable, this is it. So-called free-marketeers will sneer at any suggestion of corporate over-reach — they will chant that it’s just the free market at work. But the rules of this market, like those of many others, are written and enforced by the patricians and stacked firmly against the plebs.

From NYT:

If you are a chief executive of a large company, you very likely have a noncompete clause in your contract, preventing you from jumping ship to a competitor until some period has elapsed. Likewise if you are a top engineer or product designer, holding your company’s most valuable intellectual property between your ears.

And you also probably have a noncompete agreement if you assemble sandwiches at Jimmy John’s sub sandwich chain for a living.

But what’s most startling about that information, first reported by The Huffington Post, is that it really isn’t all that uncommon. As my colleague Steven Greenhouse reported this year, employers are now insisting that workers in a surprising variety of relatively low- and moderate-paid jobs sign noncompete agreements.

Indeed, while HuffPo has no evidence that Jimmy John’s, a 2,000-location sandwich chain, ever tried to enforce the agreement to prevent some $8-an-hour sandwich maker or delivery driver from taking a job at the Blimpie down the road, there are other cases where low-paid or entry-level workers have had an employer try to restrict their employability elsewhere. The Times article tells of a camp counselor and a hair stylist who faced such restrictions.

American businesses are paying out a historically low proportion of their income in the form of wages and salaries. But the Jimmy John’s employment agreement is one small piece of evidence that workers, especially those without advanced skills, are also facing various practices and procedures that leave them worse off, even apart from what their official hourly pay might be. Collectively they tilt the playing field toward the owners of businesses and away from the workers who staff them.

You see it in disputes like the one heading to the Supreme Court over whether workers at an Amazon warehouse in Nevada must be paid for the time they wait to be screened at the end of the workday to ensure they have no stolen goods on them.

It’s evident in continuing lawsuits against Federal Express claiming that its “independent contractors” who deliver packages are in fact employees who are entitled to benefits and reimbursements of costs they incur.

And it is shown in the way many retailers assign hourly workers inconvenient schedules that can change at the last minute, giving them little ability to plan their lives (my colleague Jodi Kantor wrote memorably about the human effects of those policies on a Starbucks coffee worker in August, and Starbucks rapidly said it would end many of them).

These stories all expose the subtle ways that employers extract more value from their entry-level workers, at the cost of their quality of life (or, in the case of the noncompete agreements, freedom to leave for a more lucrative offer).

What’s striking about some of these labor practices is the absence of reciprocity. When a top executive agrees to a noncompete clause in a contract, it is typically the product of a negotiation in which there is some symmetry: The executive isn’t allowed to quit for a competitor, but he or she is guaranteed to be paid for the length of the contract even if fired.

Read the entire story here.

Image courtesy of Google Search.


Frenemies: The Religious Beheading and The Secular Guillotine

Secular ideologues in the West believe they are on the moral high ground. The separation of church (and mosque or synagogue) from state is, they believe, the path to a more just, equal and less violent culture. They will cite example after example, from contemporary and recent history, of terrible violence in the name of religious extremism and fundamentalism.

And yet, step back for a minute from the horrendous stories and images of atrocities wrought by religious fanatics in Europe, Africa, Asia and the Middle East. Think of the recent histories of fledgling nations in Africa; the ethnic cleansings across much of Central and Eastern Europe — several times over; the egomaniacal tribal terrorists of Central Asia; the brutality of neo-fascists and their socialist bedfellows in Latin America. Delve deeper into these tragic histories — some still unfolding before our very eyes — and you will see a much more complex view of humanity. Our tribal rivalries know no bounds, and our violence towards others is certainly not limited to religion as its catalyst. Yes, we fight for our religion, but we also fight for territory, politics, resources, nationalism, revenge, poverty, ego. Soon the fights will be about water and food — these will make our wars over belief systems seem rather petty.

Scholar and author Karen Armstrong explores the complexities of religious and secular violence in the broader context of human struggle in her new book, Fields of Blood: Religion and the History of Violence.

From the Guardian:

As we watch the fighters of the Islamic State (Isis) rampaging through the Middle East, tearing apart the modern nation-states of Syria and Iraq created by departing European colonialists, it may be difficult to believe we are living in the 21st century. The sight of throngs of terrified refugees and the savage and indiscriminate violence is all too reminiscent of barbarian tribes sweeping away the Roman empire, or the Mongol hordes of Genghis Khan cutting a swath through China, Anatolia, Russia and eastern Europe, devastating entire cities and massacring their inhabitants. Only the wearily familiar pictures of bombs falling yet again on Middle Eastern cities and towns – this time dropped by the United States and a few Arab allies – and the gloomy predictions that this may become another Vietnam, remind us that this is indeed a very modern war.

The ferocious cruelty of these jihadist fighters, quoting the Qur’an as they behead their hapless victims, raises another distinctly modern concern: the connection between religion and violence. The atrocities of Isis would seem to prove that Sam Harris, one of the loudest voices of the “New Atheism”, was right to claim that “most Muslims are utterly deranged by their religious faith”, and to conclude that “religion itself produces a perverse solidarity that we must find some way to undercut”. Many will agree with Richard Dawkins, who wrote in The God Delusion that “only religious faith is a strong enough force to motivate such utter madness in otherwise sane and decent people”. Even those who find these statements too extreme may still believe, instinctively, that there is a violent essence inherent in religion, which inevitably radicalises any conflict – because once combatants are convinced that God is on their side, compromise becomes impossible and cruelty knows no bounds.

Despite the valiant attempts by Barack Obama and David Cameron to insist that the lawless violence of Isis has nothing to do with Islam, many will disagree. They may also feel exasperated. In the west, we learned from bitter experience that the fanatical bigotry which religion seems always to unleash can only be contained by the creation of a liberal state that separates politics and religion. Never again, we believed, would these intolerant passions be allowed to intrude on political life. But why, oh why, have Muslims found it impossible to arrive at this logical solution to their current problems? Why do they cling with perverse obstinacy to the obviously bad idea of theocracy? Why, in short, have they been unable to enter the modern world? The answer must surely lie in their primitive and atavistic religion.

But perhaps we should ask, instead, how it came about that we in the west developed our view of religion as a purely private pursuit, essentially separate from all other human activities, and especially distinct from politics. After all, warfare and violence have always been a feature of political life, and yet we alone drew the conclusion that separating the church from the state was a prerequisite for peace. Secularism has become so natural to us that we assume it emerged organically, as a necessary condition of any society’s progress into modernity. Yet it was in fact a distinct creation, which arose as a result of a peculiar concatenation of historical circumstances; we may be mistaken to assume that it would evolve in the same fashion in every culture in every part of the world.

We now take the secular state so much for granted that it is hard for us to appreciate its novelty, since before the modern period, there were no “secular” institutions and no “secular” states in our sense of the word. Their creation required the development of an entirely different understanding of religion, one that was unique to the modern west. No other culture has had anything remotely like it, and before the 18th century, it would have been incomprehensible even to European Catholics. The words in other languages that we translate as “religion” invariably refer to something vaguer, larger and more inclusive. The Arabic word din signifies an entire way of life, and the Sanskrit dharma covers law, politics, and social institutions as well as piety. The Hebrew Bible has no abstract concept of “religion”; and the Talmudic rabbis would have found it impossible to define faith in a single word or formula, because the Talmud was expressly designed to bring the whole of human life into the ambit of the sacred. The Oxford Classical Dictionary firmly states: “No word in either Greek or Latin corresponds to the English ‘religion’ or ‘religious’.” In fact, the only tradition that satisfies the modern western criterion of religion as a purely private pursuit is Protestant Christianity, which, like our western view of “religion”, was also a creation of the early modern period.

Traditional spirituality did not urge people to retreat from political activity. The prophets of Israel had harsh words for those who assiduously observed the temple rituals but neglected the plight of the poor and oppressed. Jesus’s famous maxim to “Render unto Caesar the things that are Caesar’s” was not a plea for the separation of religion and politics. Nearly all the uprisings against Rome in first-century Palestine were inspired by the conviction that the Land of Israel and its produce belonged to God, so that there was, therefore, precious little to “give back” to Caesar. When Jesus overturned the money-changers’ tables in the temple, he was not demanding a more spiritualised religion. For 500 years, the temple had been an instrument of imperial control and the tribute for Rome was stored there. Hence for Jesus it was a “den of thieves”. The bedrock message of the Qur’an is that it is wrong to build a private fortune but good to share your wealth in order to create a just, egalitarian and decent society. Gandhi would have agreed that these were matters of sacred import: “Those who say that religion has nothing to do with politics do not know what religion means.”

The myth of religious violence

Before the modern period, religion was not a separate activity, hermetically sealed off from all others; rather, it permeated all human undertakings, including economics, state-building, politics and warfare. Before 1700, it would have been impossible for people to say where, for example, “politics” ended and “religion” began. The Crusades were certainly inspired by religious passion but they were also deeply political: Pope Urban II let the knights of Christendom loose on the Muslim world to extend the power of the church eastwards and create a papal monarchy that would control Christian Europe. The Spanish inquisition was a deeply flawed attempt to secure the internal order of Spain after a divisive civil war, at a time when the nation feared an imminent attack by the Ottoman empire. Similarly, the European wars of religion and the thirty years war were certainly exacerbated by the sectarian quarrels of Protestants and Catholics, but their violence reflected the birth pangs of the modern nation-state.

Read the entire article here.


Past Experience is Good; Random Decision-Making is Better

We all know that making decisions from past experience is wise. We learn from the benefit of hindsight. We learn to make small improvements or radical shifts in our thinking and behaviors based on history and previous empirical evidence. Stock market gurus and investment mavens will tell you time after time that they have a proven method — based on empirical evidence and a lengthy, illustrious track record — for picking the next great stock or investing your hard-earned retirement funds.

Yet, empirical evidence shows that chimpanzees throwing darts at the WSJ stock pages are just as good at picking stocks as we humans (and the “masters of the universe”). So, it seems that random decision-making can be just as good as, if not better than, wisdom and experience.
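
The dart-throwing claim is easy to sanity-check with a toy simulation. The sketch below uses synthetic, independently drawn annual returns (none of these numbers come from any real study, and real markets are messier): when every stock's return is drawn from the same distribution, randomly chosen portfolios cluster around the market average and roughly half of them beat it, which is the spirit of Burton Malkiel's blindfolded-monkey quip quoted below.

    import random
    import statistics

    def synthetic_market(n_stocks=500, mean=0.07, sd=0.20, seed=42):
        """One year of made-up returns, one number per stock."""
        rng = random.Random(seed)
        return [rng.gauss(mean, sd) for _ in range(n_stocks)]

    def dart_portfolio(returns, picks=20, rng=None):
        """'Monkey with darts': equal-weight a random handful of stocks."""
        rng = rng or random
        return statistics.mean(rng.sample(returns, picks))

    if __name__ == "__main__":
        returns = synthetic_market()
        market_avg = statistics.mean(returns)
        rng = random.Random(0)
        darts = [dart_portfolio(returns, rng=rng) for _ in range(10_000)]

        beat = sum(r > market_avg for r in darts) / len(darts)
        print(f"market average return:          {market_avg:6.2%}")
        print(f"average random-dart portfolio:  {statistics.mean(darts):6.2%}")
        print(f"dart portfolios beating market: {beat:6.1%}")

On this synthetic data, stock-picking skill contributes nothing by construction, so random portfolios track the market average and about half come out ahead of it; the real-world studies mentioned in the excerpt make a stronger claim, but the basic intuition is the same.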

From the Guardian:

No matter how much time you spend reading the recent crop of books on How To Decide or How To Think Clearly, you’re unlikely to encounter glowing references to a decision-making system formerly used by the Azande of central Africa. Faced with a dilemma, tribespeople would force poison down the neck of a chicken while asking questions of the “poison oracle”; the chicken answered by surviving (“yes”) or expiring (“no”). Clearly, this was cruel to chickens. That aside, was it such a terrible way to choose among options? The anthropologist EE Evans-Pritchard, who lived with the Azande in the 1920s, didn’t think so. “I always kept a supply of poison [and] we regulated our affairs in accordance with the oracle’s decisions,” he wrote, adding drily: “I found this as satisfactory a way of running my home and affairs as any other I know of.” You could dismiss that as a joke. After all, chicken-poisoning is plainly superstition, delivering random results. But what if random results are sometimes exactly what you need?

The other day, US neuroscientists published details of experiments on rats, showing that in certain unpredictable situations, they stop trying to make decisions based on past experience. Instead, a circuit in their brains switches to “random mode”. The researchers’ hunch is that this serves a purpose: past experience is usually helpful, but when uncertainty levels are high, it can mislead, so randomness is in the rats’ best interests. When we’re faced with the unfamiliar, experience can mislead humans, too, partly because we filter it through various irrational biases. According to those books on thinking clearly, we should strive to overcome these biases, thus making more rational calculations. But there’s another way to bypass our biased brains: copy the rats, and choose randomly.

In certain walks of life, the usefulness of randomness is old news: the stock market, say, is so unpredictable that, to quote the economist Burton Malkiel, “a blindfolded monkey throwing darts at a newspaper’s financial pages could select a portfolio that would do as well as one carefully selected by experts”. (This has been tried, with simulated monkeys, and they beat the market.) But, generally, as Michael Schulson put it recently in an Aeon magazine essay, “We take it for granted that the best decisions stem from empirical analysis and informed choice.” Yet consider, he suggests, the ancient Greek tradition of filling some government positions by lottery. Randomness disinfects a process that might be dirtied by corruption.

Randomness can be similarly useful in everyday life. For tiny choices, it’s a time-saver: pick randomly from a menu, and you can get back to chatting with friends. For bigger ones, it’s an acknowledgment of how little one can ever know about the complex implications of a decision. Let’s be realistic: for the biggest decisions, such as whom to marry, trusting to randomness feels absurd. But if you can up the randomness quotient for marginally less weighty choices, especially when uncertainty prevails, you may find it pays off. Though kindly refrain from poisoning any chickens.

Read the entire article here.


UnDesign

The future of good design may actually lie in intentionally doing the wrong thing. While we are drawn to the beauty of symmetry — in faces, in objects — we are also drawn by the promise of imperfection.

From Wired:

In the late 1870s, Edgar Degas began work on what would become one of his most radical paintings, Jockeys Before the Race. Degas had been schooled in techniques of the neoclassicist and romanticist masters but had begun exploring subject matter beyond the portraits and historical events that were traditionally considered suitable for fine art, training his eye on café culture, common laborers, and—most famously—ballet dancers. But with Jockeys, Degas pushed past mild provocation. He broke some of the most established formulas of composition. The painting is technically exquisite, the horses vividly sculpted with confident brushstrokes, their musculature perfectly rendered. But while composing this beautifully balanced, impressionistically rendered image, Degas added a crucial, jarring element: a pole running vertically—and asymmetrically—in the immediate foreground, right through the head of one of the horses.

Degas wasn’t just “thinking outside of the box,” as the innovation cliché would have it. He wasn’t trying to overturn convention to find a more perfect solution. He was purposely creating something that wasn’t pleasing, intentionally doing the wrong thing. Naturally viewers were horrified. Jockeys was lampooned in the magazine Punch, derided as a “mistaken impression.” But over time, Degas’ transgression provided inspiration for other artists eager to find new ways to inject vitality and dramatic tension into work mired in convention. You can see its influence across art history, from Frederic Remington’s flouting of traditional compositional technique to the crackling photojournalism of Henri Cartier-Bresson.

Degas was engaged in a strategy that has shown up periodically for centuries across every artistic and creative field. Think of it as one step in a cycle: In the early stages, practitioners dedicate themselves to inventing and improving the rules—how to craft the most pleasing chord progression, the perfectly proportioned building, the most precisely rendered amalgamation of rhyme and meter. Over time, those rules become laws, and artists and designers dedicate themselves to excelling within these agreed-upon parameters, creating work of unparalleled refinement and sophistication—the Pantheon, the Sistine Chapel, the Goldberg Variations. But once a certain maturity has been reached, someone comes along who decides to take a different route. Instead of trying to create an ever more polished and perfect artifact, this rebel actively seeks out imperfection—sticking a pole in the middle of his painting, intentionally adding grungy feedback to a guitar solo, deliberately photographing unpleasant subjects. Eventually some of these creative breakthroughs end up becoming the foundation of a new set of aesthetic rules, and the cycle begins again.


For the past 30 years, the field of technology design has been working its way through the first two stages of this cycle, an industry-wide march toward more seamless experiences, more delightful products, more leverage over the world around us. Look at our computers: beige and boxy desktop machines gave way to bright and colorful iMacs, which gave way to sleek and sexy laptops, which gave way to addictively touchable smartphones. It’s hard not to look back at this timeline and see it as a great story of human progress, a joint effort to experiment and learn and figure out the path toward a more refined and universally pleasing design.

All of this has resulted in a world where beautifully constructed tech is more powerful and more accessible than ever before. It is also more consistent. That’s why all smartphones now look basically the same—gleaming black glass with handsomely cambered edges. Google, Apple, and Microsoft all use clean, sans-serif typefaces in their respective software. After years of experimentation, we have figured out what people like and settled on some rules.

But there’s a downside to all this consensus—it can get boring. From smartphones to operating systems to web page design, it can start to feel like the truly transformational moments have come and gone, replaced by incremental updates that make our devices and interactions faster and better.

This brings us to an important and exciting moment in the design of our technologies. We have figured out the rules of creating sleek sophistication. We know, more or less, how to get it right. Now, we need a shift in perspective that allows us to move forward. We need a pole right through a horse’s head. We need to enter the third stage of this cycle. It’s time to stop figuring out how to do things the right way, and start getting it wrong.

In late 2006, when I was creative director here at WIRED, I was working on the design of a cover featuring John Hodgman. We were far along in the process—Hodgman was styled and photographed, the cover lines written, our fonts selected, the layout firmed up. I had been aiming for a timeless design with a handsome monochromatic color palette, a cover that evoked a 1960s jet-set vibe. When I presented my finished design, WIRED’s editor at the time, Chris Anderson, complained that the cover was too drab. He uttered the prescriptive phrase all graphic designers hate hearing: “Can’t you just add more colors?”

I demurred. I felt the cover was absolutely perfect. But Chris did not, and so, in a spasm of designerly “fuck you,” I drew a small rectangle into my design, a little stripe coming off from the left side of the page, rudely breaking my pristine geometries. As if that weren’t enough, I filled it with the ugliest hue I could find: neon orange— Pantone 811, to be precise. My perfect cover was now ruined!

By the time I came to my senses a couple of weeks later, it was too late. The cover had already been sent to the printer. My anger morphed into regret. To the untrained eye, that little box might not seem so offensive, but I felt that I had betrayed one of the most crucial lessons I learned in design school—that every graphic element should serve a recognizable function. This stray dash of color was careless at best, a postmodernist deviation with no real purpose or value. It confused my colleagues and detracted from the cover’s clarity, unnecessarily making the reader more conscious of the design.

But you know what? I actually came to like that crass little neon orange bar. I ended up including a version of it on the next month’s cover, and again the month after that. It added something, even though I couldn’t explain what it was. I began referring to this idea—intentionally making “bad” design choices—as Wrong Theory, and I started applying it in little ways to all of WIRED’s pages. Pictures that were supposed to run large, I made small. Where type was supposed to run around graphics, I overlapped the two. Headlines are supposed to come at the beginning of stories? I put them at the end. I would even force our designers to ruin each other’s “perfect” layouts.

At the time, this represented a major creative breakthrough for me—the idea that intentional wrongness could yield strangely pleasing results. Of course I was familiar with the idea of rule-breaking innovation—that each generation reacts against the one that came before it, starting revolutions, turning its back on tired conventions. But this was different. I wasn’t just throwing out the rulebook and starting from scratch. I was following the rules, then selectively breaking one or two for maximum impact.

Read the entire article here.


MondayMap: Music


If you decide to do only one thing today, do this: visit everynoise, discover new music and have some expansive auditory fun.

Every Noise At Once is the brainchild of Glenn McDonald, a self-described data alchemist. He has sampled and categorized popular music into an astounding 1,264 genres — at current estimates.

You’re probably familiar with glam rock, emo punk, motown, ambient, garage, house, dub step, rap, metal and so on. But are you up on: neo-synthpop, fallen angel, deep orgcore, neurostep, death metal, skweee and cow punk? Well, here’s your chance to find out and expand your senses and your mind.

From the Guardian:

Music used to be easy. Some people liked rock. Some people liked pop. Some people liked jazz, blues or classical. And, basically, that was sort of it. However, musicians are a restless bunch and you can only play Smoke on the Water, Always Crashing in the Same Car or Roast Fish and Cornbread so many times before someone is bound to say: “Hang on a minute, what would happen if we played them all at the same time?” And so it is that new genres are born. Now imagine that happening for at least half a century or so – all over the world – and you reach a point at which, according to the engineer and “data alchemist” Glenn McDonald, there are now 1,264 genres of popular music; all you need to do is go directly to his startlingly clever EveryNoise.com website and look – well, listen – for yourself.

Every Noise at Once is an ongoing attempt to build an algorithmically generated map of the entire musical genre-space, based on data tracked and analysed by Spotify’s music-intelligence division, The Echo Nest. It is also – in truth – one of the greatest time-eating devices ever created. You thought you had some kind of idea of just how much music is out there? You don’t. I don’t. But McDonald does. So he’s covered those genres – such as death metal, techno or hip-hop, which you’ll have heard of. Others, such as electro trash, indietronica or hard glam you may only have the most passing acquaintance with. Then, rather wonderfully, there are the outliers, those genres that you almost certainly didn’t even know existed – much less ever explored – suomi rock, shimmer psych, fourth world – right there at your fingertips any time you please. But the question is: what lives even further out than the outliers? How odd can it all get? Well, here are 10 genres (we could have nominated about 50) that even mouth-breathing indie record-shop blowhards (full disclosure: I used to be a mouth-breathing indie-record shop blowhard) would be hard-pressed to help you find …

Read the entire story and sample some bands here.

Image: Every Noise at Once, screenshot. Courtesy of Glenn McDonald, Every Noise at Once.


Slow Reading is Catching on Fast (Again)

Pursuing a cherished activity, uninterrupted, with no distraction is one of life’s pleasures. Many who multi-task and brag about it have long forgotten the benefits of deep focus and immersion in one single, prolonged task. Reading can be such a process — and over the last several years researchers have found that distraction-free, thoughtful reading — slow reading — is beneficial.

So, please put down your tablet, laptop, smartphone and TV remote after you read this post, go find an unread book, shut out your daily distractions — kids, news, Facebook, boss, grocery lists, plumber — and immerse yourself in the words on a page, and nothing else. It will relieve you of stress and benefit your brain.

From WSJ:

Once a week, members of a Wellington, New Zealand, book club arrive at a cafe, grab a drink and shut off their cellphones. Then they sink into cozy chairs and read in silence for an hour.

The point of the club isn’t to talk about literature, but to get away from pinging electronic devices and read, uninterrupted. The group calls itself the Slow Reading Club, and it is at the forefront of a movement populated by frazzled book lovers who miss old-school reading.

Slow reading advocates seek a return to the focused reading habits of years gone by, before Google, smartphones and social media started fracturing our time and attention spans. Many of its advocates say they embraced the concept after realizing they couldn’t make it through a book anymore.

“I wasn’t reading fiction the way I used to,” said Meg Williams, a 31-year-old marketing manager for an annual arts festival who started the club. “I was really sad I’d lost the thing I used to really, really enjoy.”

Slow readers list numerous benefits to a regular reading habit, saying it improves their ability to concentrate, reduces stress levels and deepens their ability to think, listen and empathize. The movement echoes a resurgence in other old-fashioned, time-consuming pursuits that offset the ever-faster pace of life, such as cooking the “slow-food” way or knitting by hand.

The benefits of reading from an early age through late adulthood have been documented by researchers. A study of 300 elderly people published by the journal Neurology last year showed that regular engagement in mentally challenging activities, including reading, slowed rates of memory loss in participants’ later years.

A study published last year in Science showed that reading literary fiction helps people understand others’ mental states and beliefs, a crucial skill in building relationships. A piece of research published in Developmental Psychology in 1997 showed first-grade reading ability was closely linked to 11th grade academic achievements.

Yet reading habits have declined in recent years. In a survey this year, about 76% of Americans 18 and older said they read at least one book in the past year, down from 79% in 2011, according to the Pew Research Center.

Attempts to revive reading are cropping up in many places. Groups in Seattle, Brooklyn, Boston and Minneapolis have hosted so-called silent reading parties, with comfortable chairs, wine and classical music.

Diana La Counte of Orange County, Calif., set up what she called a virtual slow-reading group a few years ago, with members discussing the group’s book selection online, mostly on Facebook. “When I realized I read Twitter more than a book, I knew it was time for action,” she says.

Read the entire story here.


Cellphone Only Lanes


You’ve seen the high occupancy vehicle lane on select highways. You’ve seen pedestrian only zones. You’ve seen cycle friendly zones. Now, it’s time for the slow walking lane — for pedestrians using smartphones! Perhaps we’ll eventually see separate lanes for tourists with tablets, smartwatch users and, of course, a completely separate zone for texting t(w)eens.

From the Independent:

The Chinese city of Chongqing claims to have introduced the world’s first ‘slow-walking lane’ for smartphone users.

No more will the most efficient of pedestrians be forced to stare frustratedly at the occiput of their meandering counterparts.

Two 100-ft lanes have been painted on to a pavement in the city, with one side reserved for those wanting to stare into their handheld device and the other exclusively for those who can presumably spare five minutes without checking their latest Weibo update.

However, according to the Telegraph, officials in Chongqing only introduced the signage to make the point that “it is best not to play with your phone while walking”.

Read the entire story here.

Image: City of Chongqing. Courtesy of the Independent.

 


Texas and Its Textbooks: The Farce Continues

Just over a year ago I highlighted the plight of accepted scholarly fact in Texas. The state, through its infamous State Board of Education (SBOE), had just completed a lengthy effort to revise many textbooks for middle- and high-school curricula. The SBOE and its ideological supporters throughout the Texas political machine managed to insert numerous dubious claims, fictitious statements in place of agreed-upon facts and handfuls of slanted opinion into all manner of historical and social science texts. Many academics and experts in their respective fields raised alarms over the process. But the SBOE derided these “liberal elitists” and openly flaunted its distaste for fact, preferring to distort the historical record with undertones of conservative Christianity.

Many non-Texan progressives and believers-in-fact laughingly shook their heads, knowing that Texas could and should be left to its own devices. Unfortunately for the rest of the country, Texas has so much buying power that textbook publishers will often publish with Texas in mind but distribute their books throughout the entire nation.

So now it comes as no surprise to find that many newly published, or soon-to-be-published, Texas textbooks for grades 6-12 are riddled with errors. An academic review of 43 textbooks highlights the disaster waiting to happen to young minds in Texas and across many other states. The Texas SBOE will vote on which books to approve in November.

Some choice examples of the errors and half-truths below.

All of the world geography textbooks inaccurately downplay the role that conquest played in the spread of Christianity.

Discovery Education — Social Studies Techbook World Geography and Cultures

The text states: “When Europeans arrived, they brought Christianity with them and spread it among the indigenous people. Over time, Christianity became the main religion in Latin America.”

Pearson Education – Contemporary World Cultures

The text states: “Priests came to Mexico to convert Native Americans to the Roman Catholic religion. The Church became an important part of life in the new colony. Churches were built in the centers of towns and cities, and church officials became leaders in the colony.”

Houghton Mifflin Harcourt – World Geography

The text states: “The Spanish brought their language and Catholic religion, both of which dominate modern Mexico.”

Various

All but two of the world geography textbooks fail to mention the Spaniards’ forced conversions of the indigenous peoples to Christianity (e.g., the Spanish Requerimiento of 1513) and their often-systematic destruction of indigenous religious institutions. The two exceptions (Cengage Learning, Inc. – World Cultures and Geography and Houghton Mifflin Harcourt – World Geography) delay this grim news until a chapter on South America, and even there do not give it the prominence it deserves.

What’s Wrong?

The Christianization of the indigenous peoples of the Americas was most decidedly not benign. These descriptions provide a distorted picture of the spread of Christianity. An accurate account must include information about the forced conversion of native peoples and the often-systematic destruction of indigenous religious institutions and practices. (This error of omission is especially problematic when contrasted with the emphasis on conquest – often violent – to describe the spread of Islam in some textbooks.)

One world history textbook (by Worldview Software, Inc.) includes outdated – and possibly offensive – anthropological categories and racial terminology in describing African civilization.

WorldView Software – World History A: Early Civilizations to the Mid-1800s

The text states: “South of the Sahara Desert most of the people before the Age of Explorations were black Africans of the Negro race.”

 Elsewhere, the text states: “The first known inhabitants of Africa north of the Sahara in prehistory were Caucasoid Hamitic people of uncertain origin.”

What’s Wrong?

First, the term “Negro” is archaic and fraught with ulterior meaning. It should categorically not be used in a modern textbook. Further, the first passage is unforgivably misleading because it suggests that all black native Africans belong to a single “racial” group. This is typological thinking, which disappeared largely from texts after the 1940s. It harkens back to the racialization theory that all people could be classified as one of three “races”: Caucasoid, Mongoloid, or Negroid. Better to say: “…were natives of African origin.” Similarly, in the second passage, it is more accurate to simply omit reference to “Caucasoid.”

From the Washington Post:

When it comes to controversies about curriculum, textbook content and academic standards, Texas is the state that keeps on giving.

Back in 2010, we had an uproar over proposed changes to social studies standards by religious conservatives on the State Board of Education, which included a bid to recast the United States’ hideous slave trade history as the “Atlantic triangular trade.” There were other doozies, too, such as one proposal to remove Thomas Jefferson from the Enlightenment curriculum and replace him with John Calvin. Some were changed, but the board’s approved standards were roundly criticized as distorted history.

There’s a new fuss about proposed social studies textbooks for Texas public schools that are based on what are called the Texas Essential Knowledge and Skills. Scholarly reviews of 43 proposed history, geography and government textbooks for Grades 6-12 — undertaken by the Education Fund of the Texas Freedom Network, a watchdog and activist group that monitors far-right issues and organizations — found extensive problems in American Government textbooks, U.S. and World History textbooks, Religion in World History textbooks, and Religion in World Geography textbooks. The state board will vote on which books to approve in November.

Ideas promoted in various proposed textbooks include the notion that Moses and Solomon inspired American democracy, that in the era of segregation only “sometimes” were schools for black children “lower in quality” and that Jews view Jesus Christ as an important prophet.

Here are the broad findings of 10 scholars, who wrote four separate reports, taken from an executive summary, followed by the names of the scholars and a list of publishers who submitted textbooks.

The findings:

  • A number of government and world history textbooks exaggerate Judeo-Christian influence on the nation’s founding and Western political tradition.
  • Two government textbooks include misleading information that undermines the Constitutional concept of the separation of church and state.
  • Several world history and world geography textbooks include biased statements that inappropriately portray Islam and Muslims negatively.
  • All of the world geography textbooks inaccurately downplay the role that conquest played in the spread of Christianity.
  • Several world geography and history textbooks suffer from an incomplete – and often inaccurate – account of religions other than Christianity.
  • Coverage of key Christian concepts and historical events is lacking in a few textbooks, often due to the assumption that all students are Christians and already familiar with Christian events and doctrine.
  • A few government and U.S. history textbooks suffer from an uncritical celebration of the free enterprise system, both by ignoring legitimate problems that exist in capitalism and failing to include coverage of government’s role in the U.S. economic system.
  • One government textbook flirts with contemporary Tea Party ideology, particularly regarding the inclusion of anti-taxation and anti-regulation arguments.
  • One world history textbook includes outdated – and possibly offensive – anthropological categories and racial terminology in describing African civilization.

Read the entire article here and check out the academic report here.

 


MondayMap: Our New Address — Laniakea


Once upon a time we humans sat smugly at the center of the universe. Now, many of us (though not yet all) know better. Over the last several centuries we learned and accepted that the Earth revolves around the nearest star, our Sun, and not the converse. We then learned that the Sun forms part of an immense galaxy, the Milky Way, itself spinning in a vast cosmological dance. More recently, we learned that the Milky Way belongs to a larger grouping of galaxies known as the Local Group.

Now we find that our Local Group is a mere speck within an immense supercluster containing around 100,000 galaxies spanning half a billion light years. Researchers have dubbed this galactic supercluster, rather aptly, Laniakea, Hawaiian for “immense heaven”. Laniakea is your new address. And, fascinatingly, Laniakea is moving towards an even larger grouping of galaxies named the Shapley supercluster.

From the Guardian:

In what amounts to a back-to-school gift for pupils with nerdier leanings, researchers have added a fresh line to the cosmic address of humanity. No longer will a standard home address followed by “the Earth, the solar system, the Milky Way, the universe” suffice for aficionados of the extended astronomical location system.

The extra line places the Milky Way in a vast network of neighbouring galaxies or “supercluster” that forms a spectacular web of stars and planets stretching across 520m light years of our local patch of universe. Named Laniakea, meaning “immeasurable heaven” in Hawaiian, the supercluster contains 100,000 large galaxies that together have the mass of 100 million billion suns.

Our home galaxy, the Milky Way, lies on the far outskirts of Laniakea near the border with another supercluster of galaxies named Perseus-Pisces. “When you look at it in three dimensions, it looks like a sphere that’s been badly beaten up and we are over near the edge, being pulled towards the centre,” said Brent Tully, an astronomer at the University of Hawaii in Honolulu.

Astronomers have long known that just as the solar system is part of the Milky Way, so the Milky Way belongs to a cosmic structure that is much larger still. But their attempts to define the larger structure had been thwarted because it was impossible to work out where one cluster of galaxies ended and another began.

Tully’s team gathered measurements on the positions and movement of more than 8,000 galaxies and, after discounting the expansion of the universe, worked out which were being pulled towards us and which were being pulled away. This allowed the scientists to define superclusters of galaxies that all moved in the same direction (if you’re reading this story on a mobile device, click here to watch a video explaining the research).
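
For readers wondering what “discounting the expansion of the universe” involves in practice: a galaxy’s peculiar velocity is its observed recession velocity minus the Hubble flow, i.e. the Hubble constant times its distance. A minimal sketch of that bookkeeping, with invented galaxies and an assumed Hubble constant of roughly 70 km/s per megaparsec:

```python
# Minimal sketch of "discounting the expansion of the universe":
# peculiar velocity = observed recession velocity - H0 * distance.
# The galaxies and their numbers are invented for illustration; only the
# Hubble-constant value (~70 km/s per megaparsec) is a standard figure.
H0 = 70.0  # km/s per Mpc (assumed value)

galaxies = [
    # (name, distance in Mpc, observed recession velocity in km/s)
    ("galaxy A", 50.0, 3800.0),
    ("galaxy B", 120.0, 8100.0),
    ("galaxy C", 200.0, 13700.0),
]

for name, dist_mpc, v_obs in galaxies:
    v_hubble = H0 * dist_mpc        # expansion of the universe at that distance
    v_peculiar = v_obs - v_hubble   # leftover motion due to gravitational pulls
    direction = "towards us" if v_peculiar < 0 else "away from us"
    print(f"{name}: peculiar velocity {v_peculiar:+.0f} km/s ({direction})")
```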

The work published in Nature gives astronomers their first look at the vast group of galaxies to which the Milky Way belongs. A narrow arch of galaxies connects Laniakea to the neighbouring Perseus-Pisces supercluster, while two other superclusters called Shapley and Coma lie on the far side of our own.

Tully said the research will help scientists understand why the Milky Way is hurtling through space at 600km a second towards the constellation of Centaurus. Part of the reason is the gravitational pull of other galaxies in our supercluster.

“But our whole supercluster is being pulled in the direction of this other supercluster, Shapley, though it remains to be seen if that’s all that’s going on,” said Tully.

Read the entire article here or the nerdier paper here.

Image: Laniakea: Our Home Supercluster of Galaxies. The blue dot represents the location of the Milky Way. Courtesy: R. Brent Tully (U. Hawaii) et al., SDvision, DP, CEA/Saclay.


Theism Versus Spirituality

Prominent neo-atheist Sam Harris continues to reject theism, and does so thoughtfully and eloquently. In his latest book, Waking Up, he again argues the case against religion, but makes a powerful case for spirituality. Harris defines spirituality as an inner sense of a good and powerful reality, based on sound self-awareness and insightful questioning of one’s own consciousness. This type of spirituality, quite rightly, is devoid of theistic angels and demons. Harris reveals more in his interview with Gary Gutting, professor of philosophy at the University of Notre Dame.

From the NYT:

Sam Harris is a neuroscientist and prominent “new atheist,” who along with others like Richard Dawkins, Daniel Dennett and Christopher Hitchens helped put criticism of religion at the forefront of public debate in recent years. In two previous books, “The End of Faith” and “Letter to a Christian Nation,” Harris argued that theistic religion has no place in a world of science. In his latest book, “Waking Up,” his thought takes a new direction. While still rejecting theism, Harris nonetheless makes a case for the value of “spirituality,” which he bases on his experiences in meditation. I interviewed him recently about the book and some of the arguments he makes in it.

Gary Gutting: A common basis for atheism is naturalism — the view that only science can give a reliable account of what’s in the world. But in “Waking Up” you say that consciousness resists scientific description, which seems to imply that it’s a reality beyond the grasp of science. Have you moved away from an atheistic view?

Sam Harris: I don’t actually argue that consciousness is “a reality” beyond the grasp of science. I just think that it is conceptually irreducible — that is, I don’t think we can fully understand it in terms of unconscious information processing. Consciousness is “subjective”— not in the pejorative sense of being unscientific, biased or merely personal, but in the sense that it is intrinsically first-person, experiential and qualitative.

The only thing in this universe that suggests the reality of consciousness is consciousness itself. Many philosophers have made this argument in one way or another — Thomas Nagel, John Searle, David Chalmers. And while I don’t agree with everything they say about consciousness, I agree with them on this point.

The primary approach to understanding consciousness in neuroscience entails correlating changes in its contents with changes in the brain. But no matter how reliable these correlations become, they won’t allow us to drop the first-person side of the equation. The experiential character of consciousness is part of the very reality we are studying. Consequently, I think science needs to be extended to include a disciplined approach to introspection.

G.G.: But science aims at objective truth, which has to be verifiable: open to confirmation by other people. In what sense do you think first-person descriptions of subjective experience can be scientific?

S.H.: In a very strong sense. The only difference between claims about first-person experience and claims about the physical world is that the latter are easier for others to verify. That is an important distinction in practical terms — it’s easier to study rocks than to study moods — but it isn’t a difference that marks a boundary between science and non-science. Nothing, in principle, prevents a solitary genius on a desert island from doing groundbreaking science. Confirmation by others is not what puts the “truth” in a truth claim. And nothing prevents us from making objective claims about subjective experience.

Are you thinking about Margaret Thatcher right now? Well, now you are. Were you thinking about her exactly six minutes ago? Probably not. There are answers to questions of this kind, whether or not anyone is in a position to verify them.

And certain truths about the nature of our minds are well worth knowing. For instance, the anger you felt yesterday, or a year ago, isn’t here anymore, and if it arises in the next moment, based on your thinking about the past, it will quickly pass away when you are no longer thinking about it. This is a profoundly important truth about the mind — and it can be absolutely liberating to understand it deeply. If you do understand it deeply — that is, if you are able to pay clear attention to the arising and passing away of anger, rather than merely think about why you have every right to be angry — it becomes impossible to stay angry for more than a few moments at a time. Again, this is an objective claim about the character of subjective experience. And I invite our readers to test it in the laboratory of their own minds.

G. G.: Of course, we all have some access to what other people are thinking or feeling. But that access is through probable inference and so lacks the special authority of first-person descriptions. Suppose I told you that in fact I didn’t think of Margaret Thatcher when I read your comment, because I misread your text as referring to Becky Thatcher in “The Adventures of Tom Sawyer”? If that’s true, I have evidence for it that you can’t have. There are some features of consciousness that we will agree on. But when our first-person accounts differ, then there’s no way to resolve the disagreement by looking at one another’s evidence. That’s very different from the way things are in science.

S.H.: This difference doesn’t run very deep. People can be mistaken about the world and about the experiences of others — and they can even be mistaken about the character of their own experience. But these forms of confusion aren’t fundamentally different. Whatever we study, we are obliged to take subjective reports seriously, all the while knowing that they are sometimes false or incomplete.

For instance, consider an emotion like fear. We now have many physiological markers for fear that we consider quite reliable, from increased activity in the amygdala and spikes in blood cortisol to peripheral physiological changes like sweating palms. However, just imagine what would happen if people started showing up in the lab complaining of feeling intense fear without showing any of these signs — and they claimed to feel suddenly quite calm when their amygdalae lit up on fMRI, their cortisol spiked, and their skin conductance increased. We would no longer consider these objective measures of fear to be valid. So everything still depends on people telling us how they feel and our (usually) believing them.

However, it is true that people can be very poor judges of their inner experience. That is why I think disciplined training in a technique like “mindfulness,” apart from its personal benefits, can be scientifically important.

Read the entire story here.


An Ode to the Monopolist

Peter Thiel on why entrepreneurs should strive for monopoly and avoid competition. If only it were that simple for esoteric restaurants, innovative technology companies and all startup businesses in between.

From WSJ:

What valuable company is nobody building? This question is harder than it looks, because your company could create a lot of value without becoming very valuable itself. Creating value isn’t enough—you also need to capture some of the value you create.

This means that even very big businesses can be bad businesses. For example, U.S. airline companies serve millions of passengers and create hundreds of billions of dollars of value each year. But in 2012, when the average airfare each way was $178, the airlines made only 37 cents per passenger trip. Compare them to Google which creates less value but captures far more. Google brought in $50 billion in 2012 (versus $160 billion for the airlines), but it kept 21% of those revenues as profits—more than 100 times the airline industry’s profit margin that year. Google makes so much money that it is now worth three times more than every U.S. airline combined.
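
The margin comparison is easy to sanity-check from the figures quoted above; a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope check of the 2012 figures quoted in the excerpt.
airfare_per_trip = 178.00        # average one-way fare, dollars
airline_profit_per_trip = 0.37   # profit per passenger trip, dollars
google_margin = 0.21             # Google kept 21% of revenue as profit

airline_margin = airline_profit_per_trip / airfare_per_trip
print(f"Airline profit margin: {airline_margin:.2%}")           # ~0.21%
print(f"Google profit margin:  {google_margin:.0%}")            # 21%
print(f"Ratio: roughly {google_margin / airline_margin:.0f}x")  # ~100x
```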

The airlines compete with each other, but Google stands alone. Economists use two simplified models to explain the difference: perfect competition and monopoly.

“Perfect competition” is considered both the ideal and the default state in Economics 101. So-called perfectly competitive markets achieve equilibrium when producer supply meets consumer demand. Every firm in a competitive market is undifferentiated and sells the same homogeneous products. Since no firm has any market power, they must all sell at whatever price the market determines. If there is money to be made, new firms will enter the market, increase supply, drive prices down and thereby eliminate the profits that attracted them in the first place. If too many firms enter the market, they’ll suffer losses, some will fold, and prices will rise back to sustainable levels. Under perfect competition, in the long run no company makes an economic profit.

The opposite of perfect competition is monopoly. Whereas a competitive firm must sell at the market price, a monopoly owns its market, so it can set its own prices. Since it has no competition, it produces at the quantity and price combination that maximizes its profits.
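
Those two textbook models can be made concrete with a toy linear-demand example. All numbers below are invented; the point is only that entry pushes the competitive price down to unit cost, leaving zero economic profit, while a monopolist restricts quantity and prices well above cost:

```python
# Toy comparison of the two models described above, with linear demand
# P(q) = a - b*q and constant unit cost c (all numbers invented).
a, b, c = 100.0, 1.0, 20.0

# Perfect competition: entry drives price down to cost, so economic profit -> 0.
q_comp = (a - c) / b          # quantity at which price equals marginal cost
p_comp = a - b * q_comp       # equals c
profit_comp = (p_comp - c) * q_comp

# Monopoly: a single seller picks the quantity maximizing (P(q) - c) * q,
# which for linear demand is half the competitive quantity.
q_mono = (a - c) / (2 * b)
p_mono = a - b * q_mono
profit_mono = (p_mono - c) * q_mono

print(f"Competition: price {p_comp:.0f}, profit {profit_comp:.0f}")  # price 20, profit 0
print(f"Monopoly:    price {p_mono:.0f}, profit {profit_mono:.0f}")  # price 60, profit 1600
```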

To an economist, every monopoly looks the same, whether it deviously eliminates rivals, secures a license from the state or innovates its way to the top. I’m not interested in illegal bullies or government favorites: By “monopoly,” I mean the kind of company that is so good at what it does that no other firm can offer a close substitute. Google is a good example of a company that went from 0 to 1: It hasn’t competed in search since the early 2000s, when it definitively distanced itself from Microsoft and Yahoo!

Americans mythologize competition and credit it with saving us from socialist bread lines. Actually, capitalism and competition are opposites. Capitalism is premised on the accumulation of capital, but under perfect competition, all profits get competed away. The lesson for entrepreneurs is clear: If you want to create and capture lasting value, don’t build an undifferentiated commodity business.

How much of the world is actually monopolistic? How much is truly competitive? It is hard to say because our common conversation about these matters is so confused. To the outside observer, all businesses can seem reasonably alike, so it is easy to perceive only small differences between them. But the reality is much more binary than that. There is an enormous difference between perfect competition and monopoly, and most businesses are much closer to one extreme than we commonly realize.

The confusion comes from a universal bias for describing market conditions in self-serving ways: Both monopolists and competitors are incentivized to bend the truth.

Monopolists lie to protect themselves. They know that bragging about their great monopoly invites being audited, scrutinized and attacked. Since they very much want their monopoly profits to continue unmolested, they tend to do whatever they can to conceal their monopoly—usually by exaggerating the power of their (nonexistent) competition.

Think about how Google talks about its business. It certainly doesn’t claim to be a monopoly. But is it one? Well, it depends: a monopoly in what? Let’s say that Google is primarily a search engine. As of May 2014, it owns about 68% of the search market. (Its closest competitors, Microsoft and Yahoo! have about 19% and 10%, respectively.) If that doesn’t seem dominant enough, consider the fact that the word “google” is now an official entry in the Oxford English Dictionary—as a verb. Don’t hold your breath waiting for that to happen to Bing.

But suppose we say that Google is primarily an advertising company. That changes things. The U.S. search-engine advertising market is $17 billion annually. Online advertising is $37 billion annually. The entire U.S. advertising market is $150 billion. And global advertising is a $495 billion market. So even if Google completely monopolized U.S. search-engine advertising, it would own just 3.4% of the global advertising market. From this angle, Google looks like a small player in a competitive world.

What if we frame Google as a multifaceted technology company instead? This seems reasonable enough; in addition to its search engine, Google makes dozens of other software products, not to mention robotic cars, Android phones and wearable computers. But 95% of Google’s revenue comes from search advertising; its other products generated just $2.35 billion in 2012 and its consumer-tech products a mere fraction of that. Since consumer tech is a $964 billion market globally, Google owns less than 0.24% of it—a far cry from relevance, let alone monopoly. Framing itself as just another tech company allows Google to escape all sorts of unwanted attention.
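
The framing trick is pure arithmetic: the same revenue looks dominant or negligible depending on the denominator it is divided by. A small sketch using only the market sizes quoted above:

```python
# The same revenue looks dominant or tiny depending on which "market" divides it.
# Market sizes are the annual figures quoted in the excerpt.
google_search_ad_revenue = 17e9   # upper bound: all US search-engine advertising

markets = {
    "US search-engine advertising": 17e9,
    "US online advertising":        37e9,
    "US advertising (all media)":  150e9,
    "Global advertising":          495e9,
}
for market, size in markets.items():
    print(f"{market:30s} -> at most {google_search_ad_revenue / size:.1%}")

# Framed instead as a consumer-tech company: ~$2.35bn of non-search revenue
# against a $964bn global consumer-tech market.
print(f"{'Global consumer tech':30s} -> about {2.35e9 / 964e9:.2%}")
```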

Non-monopolists tell the opposite lie: “We’re in a league of our own.” Entrepreneurs are always biased to understate the scale of competition, but that is the biggest mistake a startup can make. The fatal temptation is to describe your market extremely narrowly so that you dominate it by definition.

Read the entire article here.


The Next (and Final) Doomsday Scenario

Personally, I love dystopian visions and apocalyptic nightmares. So, news that the famed Higgs boson may ultimately cause our demise, and incidentally the end of the entire cosmos, caught my attention.

Apparently theoreticians have calculated that the Higgs potential, of which the Higgs boson is a manifestation, has characteristics that make the universe unstable. (The Higgs boson was discovered in 2012 by teams at CERN’s Large Hadron Collider.) Luckily for those wishing to avoid the final catastrophe, the instability is expected to hold off for billions of years yet; and if the Higgs potential were suddenly to trigger the final apocalypse, it would sweep over us at the speed of light, with no warning at all.

From Popular Mechanics:

In July 2012, when scientists at CERN’s Large Hadron Collider culminated decades of work with their discovery of the Higgs boson, most physicists celebrated. Stephen Hawking did not. The famed theorist expressed his disappointment that nothing more unusual was found, calling the discovery “a pity in a way.” But did he ever say the Higgs could destroy the universe?

That’s what many reports in the media said earlier this week, quoting a preface Hawking wrote to a book called Starmus. According to The Australian, the preface reads in part: “The Higgs potential has the worrisome feature that it might become metastable at energies above 100 [billion] gigaelectronvolts (GeV). This could mean that the universe could undergo catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light. This could happen at any time and we wouldn’t see it coming.”

What Hawking is talking about here is not the Higgs boson but what’s called the Higgs potential, which are “totally different concepts,” says Katie Mack, a theoretical astrophysicist at Melbourne University. The Higgs field permeates the entire universe, and the Higgs boson is an excitation of that field, just like an electron is an excitation of an electric field. In this analogy, the Higgs potential is like the voltage, determining the value of the field.

Once physicists began to close in on the mass of the Higgs boson, they were able to work out the Higgs potential. That value seemed to reveal that the universe exists in what’s known as a meta-stable vacuum state, or false vacuum, a state that’s stable for now but could slip into the “true” vacuum at any time. This is the catastrophic vacuum decay in Hawking’s warning, though he is not the first to posit the idea.

Is he right?

“There are a couple of really good reasons to think that’s not the end of the story,” Mack says. There are two ways for a meta-stable state to fall off into the true vacuum—one classical way, and one quantum way. The first would occur via a huge energy boost, the 100 billion GeVs Hawking mentions. But, Mack says, the universe already experienced such high energies during the period of inflation just after the big bang. Particles in cosmic rays from space also regularly collide with these kinds of high energies, and yet the vacuum hasn’t collapsed (otherwise, we wouldn’t be here).

“Imagine that somebody hands you a piece of paper and says, ‘This piece of paper has the potential to spontaneously combust,’ and so you might be worried,” Mack says. “But then they tell you 20 years ago it was in a furnace.” If it didn’t combust in the furnace, it’s not likely to combust sitting in your hand.

Of course, there’s always the quantum world to consider, and that’s where things always get weirder. In the quantum world, where the smallest of particles interact, it’s possible for a particle on one side of a barrier to suddenly appear on the other side of the barrier without actually going through it, a phenomenon known as quantum tunneling. If our universe was in fact in a meta-stable state, it could quantum tunnel through the barrier to the vacuum on the other side with no warning, destroying everything in an instant. And while that is theoretically possible, predictions show that if it were to happen, it’s not likely for billions of billions of years. By then, the sun and Earth and you and I and Stephen Hawking will be a distant memory, so it’s probably not worth losing sleep over it.

What’s more likely, Mack says, is that there is some new physics not yet understood that makes our vacuum stable. Physicists know there are parts of the model missing; mysteries like quantum gravity and dark matter that still defy explanation. When two physicists published a paper documenting the Higgs potential conundrum in March, their conclusion was that an explanation lies beyond the Standard Model, not that the universe may collapse at any time.

Read the article here.


The Original Rolling Stones


Who or what has been moving these Death Valley boulders? Theories have persisted for quite some time: unknown inhabitants of the desert straddling California and Nevada; mischievous troglodytes from Middle Earth; aliens sending us cryptic, geologic messages; invisible demons; telepathic teenagers.

But now we know, and the mysterious forces at work are, unfortunately, rather mundane — the rocks are moved through a combination of rain, ice and wind. Oh well — time to focus on crop circles again!

From ars technica:

Mario is just a video game, and rocks don’t have legs. Both of these things are true. Yet, like the Mario ghosts that advance only when your back is turned, there are rocks that we know have been moving—even though no one has ever seen them do it.

The rocks in question occupy a spot called Racetrack Playa in Death Valley. Playas are desert mudflats that sometimes host shallow lakes when enough water is around. Racetrack Playa gets its name from long furrows extending from large rocks sitting on the playa bed—tracks that make it look as if the rocks had been dragged through the mud. The tracks of the various rocks run parallel to each other, sometimes suggesting that the rocks had made sharp turns in unison, like dehydrated synchronized swimmers.

Many potential explanations have been offered up (some going back to the 1940s) for this bizarre situation, as the rocks seem to only move occasionally and had never been caught in the act. One thing everyone could agree on was that it must occur when the playa is wet and the muddy bottom is slick. At first, suggestions revolved around especially strong winds. One geologist went as far as to bring out a propeller airplane to see how much wind it would take.

The other idea was that ice, which does occasionally form there, could be responsible. If the rocks were frozen into a sheet of ice, a little buoyancy might reduce the friction beneath them. And again, strong winds over the surface of the ice could drag the whole mess around, accounting for the synchronized nature of the tracks.

Over the years, a number of clever studies have attempted to test these possibilities. But to truly put the question to rest, the rocks were going to have to be observed while moving. A team led by Richard Norris and his engineer cousin James Norris set out to do just that. They set out 15 rocks with GPS loggers, a weather station, and some time-lapse cameras in 2011. Magnetic triggers were buried beneath the rocks so that the loggers would start recording when they began to move. And the Norrises waited.

They got what they were after last winter. A little rain and snow provided enough water to fill the lake to a depth of a few centimeters. At night, temperatures were low enough for ice to form. On a few sunny days, the rocks stirred.

By noon, the thin sheet of ice—just a few millimeters thick—would start breaking up. Light wind pushed the ice, and the water in the lake, to the northeast. The rocks, which weren’t frozen into the thin ice, went along for the ride. On one occasion, two rocks were recorded traveling 65 meters over 16 minutes, with a peak rate of 5 to 6 meters per minute.
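
For a sense of scale, those figures work out to only a few centimetres per second; a trivial check:

```python
# Quick check of the reported rock movement (figures from the excerpt above).
distance_m = 65.0      # metres travelled
duration_min = 16.0    # minutes
avg_m_per_min = distance_m / duration_min
avg_m_per_s = avg_m_per_min / 60.0

print(f"Average speed: {avg_m_per_min:.1f} m/min ({avg_m_per_s:.2f} m/s)")
# The reported peak of 5-6 m/min is still only ~0.1 m/s, slow enough that, as
# the researchers note, you might not notice it if you were standing there.
```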

These movements were detectable in the time-lapse images, but you might not actually notice it if you were standing there. The researchers note that the tracks carved in the mud aren’t immediately apparent due to the muddy water.

The total distances traveled by the instrumented rocks between November and February ranged from 15 to 225 meters. While all moving rocks travel in the direction of the prevailing wind, they didn’t all move together—motion depended on the way the ice broke up and the depth of the water around each rock.

While the proposed explanations weren’t far off, the thinness of the ice and the minimal wind speed that were needed were both surprises. There was no ice buoyancy lifting the rocks. They were just being pushed by loose sheets of thin ice that were themselves being pushed by wind and water.

In the end, there’s nothing extraordinary about the motion of these rocks, but the necessary conditions are rare enough that the results still shock us. Similar tracks have been found in a few playas elsewhere around the world, though, and ice-pushed rocks also leave marks in the shallows of Canada’s Great Slave Lake. There’s no need to worry about the rocks at Racetrack Playa coming to life and opening secretly ferocious jaws when you look away.

Read the entire story here.

Image: Rocks at Racetrack Playa, Death Valley. Courtesy of Arno Gourdol. Some Rights Reserved.


The Future of History

Take an impassioned history professor and a mediocre U.S. high school history curriculum, add Bill Gates, and you get an opportunity to inject fresh perspectives and new ideas into young minds.

Not too long ago Professor David Christian’s collection of Big History DVDs caught Gates’ attention, leading to a broad mission to overhaul the boring history lesson — one school at a time. Professor Christian takes a thoroughly holistic approach to the subject, spanning broad and interconnected topics such as culture, biochemistry, astronomy, agriculture and physics. The sweeping narrative fundamental to Christian’s delivery reminds me somewhat of Kenneth Clark’s Civilisation and Jacob Bronowski’s The Ascent of Man, two landmark U.K. television series.

From the New York Times:

In 2008, shortly after Bill Gates stepped down from his executive role at Microsoft, he often awoke in his 66,000-square-foot home on the eastern bank of Lake Washington and walked downstairs to his private gym in a baggy T-shirt, shorts, sneakers and black socks yanked up to the midcalf. Then, during an hour on the treadmill, Gates, a self-described nerd, would pass the time by watching DVDs from the Teaching Company’s “Great Courses” series. On some mornings, he would learn about geology or meteorology; on others, it would be oceanography or U.S. history.

As Gates was working his way through the series, he stumbled upon a set of DVDs titled “Big History” — an unusual college course taught by a jovial, gesticulating professor from Australia named David Christian. Unlike the previous DVDs, “Big History” did not confine itself to any particular topic, or even to a single academic discipline. Instead, it put forward a synthesis of history, biology, chemistry, astronomy and other disparate fields, which Christian wove together into nothing less than a unifying narrative of life on earth. Standing inside a small “Mr. Rogers”-style set, flanked by an imitation ivy-covered brick wall, Christian explained to the camera that he was influenced by the Annales School, a group of early-20th-century French historians who insisted that history be explored on multiple scales of time and space. Christian had subsequently divided the history of the world into eight separate “thresholds,” beginning with the Big Bang, 13 billion years ago (Threshold 1), moving through to the origin of Homo sapiens (Threshold 6), the appearance of agriculture (Threshold 7) and, finally, the forces that gave birth to our modern world (Threshold 8).

Christian’s aim was not to offer discrete accounts of each period so much as to integrate them all into vertiginous conceptual narratives, sweeping through billions of years in the span of a single semester. A lecture on the Big Bang, for instance, offered a complete history of cosmology, starting with the ancient God-centered view of the universe and proceeding through Ptolemy’s Earth-based model, through the heliocentric versions advanced by thinkers from Copernicus to Galileo and eventually arriving at Hubble’s idea of an expanding universe. In the worldview of “Big History,” a discussion about the formation of stars cannot help including Einstein and the hydrogen bomb; a lesson on the rise of life will find its way to Jane Goodall and Dian Fossey. “I hope by the end of this course, you will also have a much better sense of the underlying unity of modern knowledge,” Christian said at the close of the first lecture. “There is a unified account.”

As Gates sweated away on his treadmill, he found himself marveling at the class’s ability to connect complex concepts. “I just loved it,” he said. “It was very clarifying for me. I thought, God, everybody should watch this thing!” At the time, the Bill & Melinda Gates Foundation had donated hundreds of millions of dollars to educational initiatives, but many of these were high-level policy projects, like the Common Core Standards Initiative, which the foundation was instrumental in pushing through. And Gates, who had recently decided to become a full-time philanthropist, seemed to pine for a project that was a little more tangible. He was frustrated with the state of interactive coursework and classroom technology since before he dropped out of Harvard in the mid-1970s; he yearned to experiment with entirely new approaches. “I wanted to explore how you did digital things,” he told me. “That was a big issue for me in terms of where education was going — taking my previous skills and applying them to education.” Soon after getting off the treadmill, he asked an assistant to set a meeting with Christian.

A few days later, the professor, who was lecturing at San Diego State University, found himself in the lobby of a hotel, waiting to meet with the billionaire. “I was scared,” Christian recalled. “Someone took me along the corridor, knocks on a door, Bill opens it, invites me in. All I remember is that within five minutes, he had so put me at my ease. I thought, I’m a nerd, he’s a nerd and this is fun!” After a bit of small talk, Gates got down to business. He told Christian that he wanted to introduce “Big History” as a course in high schools all across America. He was prepared to fund the project personally, outside his foundation, and he wanted to be personally involved. “He actually gave me his email address and said, ‘Just think about it,’ ” Christian continued. ” ‘Email me if you think this is a good idea.’ ”

Christian emailed to say that he thought it was a pretty good idea. The two men began tinkering, adapting Christian’s college course into a high-school curriculum, with modules flexible enough to teach to freshmen and seniors alike. Gates, who insisted that the course include a strong digital component, hired a team of engineers and designers to develop a website that would serve as an electronic textbook, brimming with interactive graphics and videos. Gates was particularly insistent on the idea of digital timelines, which may have been a vestige of an earlier passion project, Microsoft Encarta, the electronic encyclopedia that was eventually overtaken by the growth of Wikipedia. Now he wanted to offer a multifaceted historical account of any given subject through a friendly user interface. The site, which is open to the public, would also feature a password-protected forum for teachers to trade notes and update and, in some cases, rewrite lesson plans based on their experiences in the classroom.

Read the entire article here.

Video: Clip from Threshold 1, The Big Bang. Courtesy of Big History Project, David Christian.


Measuring the Quantum Jitter

Some physicists are determined to find out if we are mere holograms. Perhaps not quite in the dystopian but romanticized manner fictionalized in The Matrix, but a fascinating idea nonetheless. Armed with a very precise measuring tool, known as the Holometer (more precisely, twin correlated Michelson holographic interferometers), researchers aim to find the scale at which the universe becomes jittery. In turn this will give a better picture of the fundamental units of space-time, well beyond the elementary particles themselves and somewhat closer to the Planck length.
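
For scale, the Planck length mentioned above follows from three fundamental constants, l_P = sqrt(hbar*G/c^3), roughly 1.6 x 10^-35 metres. A quick computation (standard physics, shown only for orientation; the holographic jitter scale the Holometer targets is related but not identical):

```python
# Planck length from fundamental constants: l_P = sqrt(hbar * G / c**3).
# Standard physics, shown only for scale; the holographic "jitter" the Holometer
# hunts for is related to, but not the same as, the Planck length itself.
import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8          # speed of light, m/s

l_planck = math.sqrt(hbar * G / c**3)
proton_radius = 0.84e-15  # approximate proton charge radius, m

print(f"Planck length: {l_planck:.2e} m")                                # ~1.6e-35 m
print(f"Proton radius / Planck length: {proton_radius / l_planck:.1e}")  # ~5e19
# That ratio is in the same ballpark as the article's "a hundred billion billion
# times smaller than a proton" (1e20).
```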

From the New Scientist:

The search for the fundamental units of space and time has officially begun. Physicists at the Fermi National Accelerator Laboratory near Chicago, Illinois, announced this week that the Holometer, a device designed to test whether we live in a giant hologram, has started taking data.

The experiment is testing the idea that the universe is actually made up of tiny “bits”, in a similar way to how a newspaper photo is actually made up of dots. These fundamental units of space and time would be unbelievably tiny: a hundred billion billion times smaller than a proton. And like the well-known quantum behaviour of matter and energy, these bits of space-time would behave more like waves than particles.

“The theory is that space is made of waves instead of points, that everything is a little jittery, and never sits still,” says Craig Hogan at the University of Chicago, who dreamed up the experiment.

The Holometer is designed to measure this “jitter”. The surprisingly simple device is operated from a shed in a field near Chicago, and consists of two powerful laser beams that are directed through tubes 40 metres long. The lasers precisely measure the positions of mirrors along their paths at two points in time.

If space-time is smooth and shows no quantum behaviour, then the mirrors should remain perfectly still. But if both lasers measure an identical, small difference in the mirrors’ position over time, that could mean the mirrors are being jiggled about by fluctuations in the fabric of space itself.

So what of the idea that the universe is a hologram? This stems from the notion that information cannot be destroyed, so for example the 2D event horizon of a black hole “records” everything that falls into it. If this is the case, then the boundary of the universe could also form a 2D representation of everything contained within the universe, like a hologram storing a 3D image in 2D.

Hogan cautions that the idea that the universe is a hologram is somewhat misleading because it suggests that our experience is some kind of illusion, a projection like a television screen. If the Holometer finds a fundamental unit of space, it won’t mean that our 3D world doesn’t exist. Rather it will change the way we understand its basic makeup. And so far, the machine appears to be working.

In a presentation given in Chicago on Monday at the International Conference on Particle Physics and Cosmology, Hogan said that the initial results show the Holometer is capable of measuring quantum fluctuations in space-time, if they are there.

“This was kind of an amazing moment,” says Hogan. “It’s just noise right now – we don’t know whether it’s space-time noise – but the machine is operating at that specification.”

Hogan expects that the Holometer will have gathered enough data to put together an answer to the quantum question within a year. If the space-time jitter is there, Hogan says it could underpin entirely new explanations for why the expansion of our universe is accelerating, something traditionally attributed to the little understood phenomenon of dark energy.

Read the entire article here.


Burning Man Bucket List


As this year’s Burning Man comes to an end in the eerily beautiful Black Rock Desert in Nevada, I am reminded that attending this life event should be on everyone’s bucket list, before they actually kick it.

That said, applying one or more of the Ten Principles that guide Burners should be a year-round quest — not a once-in-a-lifetime transient goal.

Read more about this year’s BM here.

See more BM visuals here.

Image: Super Pool art installation, Burning Man 2014. Courtesy of Jim Urquhart / Reuters.

 


How to Get Blazingly Fast Internet

It’s rather simple in theory, and it only requires two steps. Step 1: Follow the lead of a city like Chattanooga, Tennessee. Step 2: Tell your monopolistic cable company what to do with its cables. Done. Now you have a 1-gigabit internet connection — around 50-100 times faster than your mother’s Wi-Fi.

This experiment is fueling a renaissance of sorts in the Southern U.S. city and other metropolitan areas can only look on in awe. It comes as no surprise that the cable oligarchs at Comcast, Time Warner and AT&T are looking for any way to halt the city’s progress into the 21st Century.

From the Guardian:

Loveman’s department store on Market Street in Chattanooga closed its doors in 1993 after almost a century in business, another victim of a nationwide decline in downtowns that hollowed out so many US towns. Now the opulent building is buzzing again, this time with tech entrepreneurs taking advantage of the fastest internet in the western hemisphere.

Financed by the cash raised from the sale of logistics group Access America, a group of thirty-something local entrepreneurs have set up Lamp Post, an incubator for a new generation of tech companies, in the building. A dozen startups are currently working out of the glitzy downtown office.

“We’re not Silicon Valley. No one will ever replicate that,” says Allan Davis, one of Lamp Post’s partners. “But we don’t need to be and not everyone wants that. The expense, the hassle. You don’t need to be there to create great technology. You can do it here.”

He’s not alone in thinking so. Lamp Post is one of several tech incubators in this mid-sized Tennessee city. Money is flowing in. Chattanooga has gone from close to zero venture capital in 2009 to more than five organized funds with investable capital over $50m in 2014 – not bad for a city of 171,000 people.

The city’s go-getting mayor Andy Berke, a Democrat tipped for higher office, is currently reviewing plans for a city center tech zone specifically designed to meet the needs of its new workforce.

In large part the success is being driven by The Gig. Thanks to an ambitious roll-out by the city’s municipally owned electricity company, EPB, Chattanooga is one of the only places on Earth with internet at speeds as fast as 1 gigabit per second – about 50 times faster than the US average.
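
To put “about 50 times faster than the US average” in everyday terms, a rough calculation (the implied average speed and the example file size are back-of-the-envelope assumptions, not figures from the article):

```python
# What "1 gigabit per second, about 50x the US average" implies in practice.
# The implied average and the example file size are illustrative assumptions.
gig_mbps = 1000.0                         # 1 Gbps in megabits per second
implied_us_average_mbps = gig_mbps / 50   # ~20 Mbps

file_gigabytes = 5.0                      # e.g. a large HD movie download
file_megabits = file_gigabytes * 8 * 1000

for label, mbps in [("Chattanooga gig", gig_mbps),
                    ("implied US average", implied_us_average_mbps)]:
    minutes = file_megabits / mbps / 60
    print(f"{label}: ~{minutes:.1f} minutes for a {file_gigabytes:.0f} GB file")
```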

The tech buildup comes after more than a decade of reconstruction in Chattanooga that has regenerated the city with a world-class aquarium, 12 miles of river walks along the Tennessee River, an arts district built around the Hunter Museum of American Arts, high-end restaurants and outdoor activities.

But it’s the city’s tech boom that has sparked interest from other municipalities across the world. It also comes as the Federal Communications Commission (FCC) prepares to address some of the biggest questions the internet has faced when it returns from the summer break. And while the FCC discusses whether Comcast, the world’s biggest cable company, should take over Time Warner, the US’s second largest cable operator, and whether to allow those companies to set up fast lanes (and therefore slow lanes) for internet traffic, Chattanooga is proof that another path is possible.

It’s a story that is being watched very closely by Big Cable’s critics. “In DC there is often an attitude that the only way to solve our problems is to hand them over to big business. Chattanooga is a reminder that the best solutions are often local and work out better than handing over control to Comcast or AT&T to do whatever they want with us,” said Chris Mitchell, director of community broadband networks at advocacy group the Institute for Local Self-Reliance.

On Friday, the US cable industry called on the FCC to block Chattanooga’s plan to expand, as well as a similar plan for Wilson, North Carolina.

“The success of public broadband is a mixed record, with numerous examples of failures,” USTelecom said in a blog post. “With state taxpayers on the financial hook when a municipal broadband network goes under, it is entirely reasonable for state legislatures to be cautious in limiting or even prohibiting that activity.”

Mayor Berke has dealt with requests for visits from everyone from tiny rural communities to “humungous international cities”. “You don’t see many mid-sized cities that have the kind of activity that we have right now in Chattanooga,” he said. “What the Gig did was change the idea of what our city could be. Mid-sized southern cities are not generally seen as being ahead of the technological curve, the Gig changed that. We now have people coming in looking to us as a leader.”

It’s still early days but there have already been notable successes. In addition to Access America’s sale for an undisclosed sum, last year restaurant booking site OpenTable bought a local company, QuickCue, for $11.5m. “That’s a great example of a story that just doesn’t happen in other mid-sized southern cities,” said Berke.

But it’s what Chattanooga can do next that has the local tech community buzzed.

EPB’s high-speed network came about after it decided to set up a smart electric grid in order to cut power outages. EPB estimated it would take 10 years to build the system and raised $170m through a municipal bond to pay for it. In 2009 President Barack Obama launched the American Recovery and Reinvestment Act, a stimulus programme aimed at getting the US economy back on track amid the devastation of the recession. EPB was awarded $111m to get its smart grid up and running. Less than three years later the whole service territory was built.

The fibre-optic network uses IntelliRupter PulseClosers, made by S&C Electric, that can reroute power during outages. The University of California at Berkeley estimates that power outages cost the US economy $80bn a year through business disruption with manufacturers stopping their lines and restaurants closing. Chattanooga’s share of that loss was about $100m, EPB estimates. The smart grid can detect a fault in milliseconds and route power around problems. Since the system was installed the duration of power outages has been cut in half.

But it was the other uses of that fiber that fired up enthusiasm in Chattanooga. “When we first started talking about this and the uses of the smart grid we would say to customers and community groups ‘Oh and it can also offer very high-speed internet, TV and phone.’ The electric power stuff was no longer of interest. This is what people got excited about and it’s the same today,” said EPB vice president Danna Bailey.

Read the entire story here.

Image: Chattanooga, TN skyline. Courtesy of Wikipedia.


The IBM Songbook


It would be fascinating to see a Broadway or West End show based on lyrics penned in honor of IBM and Thomas Watson, Sr., its first president. Makes you wonder if faithful employees of, say, Facebook or Apple would ever write a songbook — not in jest — for their corporate alma mater. I think not.

From ars technica:

“For thirty-seven years,” reads the opening passage in the book, “the gatherings and conventions of our IBM workers have expressed in happy songs the fine spirit of loyal cooperation and good fellowship which has promoted the signal success of our great IBM Corporation in its truly International Service for the betterment of business and benefit to mankind.”

That’s a hell of a mouthful, but it’s only the opening volley in the war on self-respect and decency that is the 1937 edition of Songs of the IBM, a booklet of corporate ditties first published in 1927 on the order of IBM company founder Thomas Watson, Sr.

The 1937 edition of the songbook is a 54-page monument to glassy-eyed corporate inhumanity, with every page overflowing with trite praise to The Company and Its Men. The booklet reads like a terrible parody of a hymnal—one that praises not the traditional Christian trinity but the new corporate triumvirate of IBM the father, Watson the son, and American entrepreneurship as the holy spirit:

Thomas Watson is our inspiration,
Head and soul of our splendid I.B.M.
We are pledged to him in every nation,
Our President and most beloved man.
His wisdom has guided each division
In service to all humanity
We have grown and broadened with his vision,
None can match him or our great company.
T. J. Watson, we all honor you,
You’re so big and so square and so true,
We will follow and serve with you forever,
All the world must know what I. B. M. can do.

—from “To Thos. J. Watson, President, I.B.M. Our Inspiration”

The wording transcends sense and sanity—these aren’t songs that normal human beings would choose to line up and sing, are they? Have people changed so much in the last 70-80 years that these songs—which seem expressly designed to debase their singers and deify their subjects—would be joyfully sung in harmony without complaint at company meetings? Were workers in the 1920s and 1930s so dehumanized by the rampaging robber barons of high industry that the only way to keep a desirable corporate job at a place like IBM was to toe the line and sing for your paycheck?

Surely no one would stand for this kind of thing in the modern world—to us, company songs seem like relics of a less-enlightened age. If anything, the mindless overflowing trite words sound like the kind of praises one would find directed at a cult of personality dictator in a decaying wreck of a country like North Korea.

Indeed, some of the songs in the book wouldn’t be out of place venerating the Juche ideal instead of IBM:

We don’t pretend we’re gay.
We always feel that way,
Because we’re filling the world with sunshine.
With I.B.M. machines,
We’ve got the finest means,
For brightly painting the clouds with sunshine.

—from “Painting the Clouds with Sunshine”

Tie an onion to your belt

All right, time to come clean: it’s incredibly easy to cherry pick terrible examples out of a 77-year-old corporate songbook (though this songbook makes it easy because of how crazy it is to modern eyes). Moreover, to answer one of the rhetorical questions above, no—people have not changed so much over the past 80-ish years that they could sing mawkishly pro-IBM songs with an irony-free straight face. At least, not without some additional context.

There’s a decade-old writeup on NetworkWorld about the IBM corporate song phenomenon that provides a lot of the glue necessary to build a complete mental picture of what was going on in both employees’ and leadership’s heads. The key takeaway to deflate a lot of the looniness is that the majority of the songs came out of the Great Depression era, and employees lucky enough to be steadfastly employed by a company like IBM often were really that grateful.

The formal integration of singing as an aspect of IBM’s culture at the time was heavily encouraged by Thomas J. Watson Sr. Watson and his employees co-opted the era’s showtunes and popular melodies for their proto-filking, ensuring that everyone would know the way the song went, if not the exact wording. Employees belting out “To the International Ticketograph Division” to the tune of “My Bonnie Lies Over the Ocean” (“In I.B.M. There’s a division. / That’s known as the Ticketograph; / It’s peopled by men who have vision, / Progressive and hard-working staff”) really isn’t all that different from any other team-building exercise that modern companies do—in fact, in a lot of ways, it’s far less humiliating than a company picnic with Mandatory Interdepartmental Three-Legged Races.

Many of the songs mirror the kinds of things that university students of the same time period might sing in honor of their alma mater. When viewed from the perspective of the Depression and post-Depression era, the singing is still silly—but it also makes a lot more sense. Watson reportedly wanted to inspire loyalty and cohesion among employees—and, remember, this was also an era where “normal” employee behavior was to work at a single company for most of one’s professional life, and then retire with a pension. It’s certainly a lot easier to sing a company’s praises if there’s paid retirement at the end of the last verse.

Read the entire article and see more songs here.

Image: Pages 99-100 of the IBM Songbook, 1937. Courtesy of IBM / ars technica.

Send to Kindle

Syndrome X

DNA_Structure

The quest for immortality, or even great longevity, has probably driven humans since they first became self-aware. Entire cultural movements and industries are founded on the desire to enhance and extend our lives. Genetic research, of course, may eventually unlock some or all of life and death’s mysteries. In the meantime, groups of dedicated scientists continue to look for the foundation of aging with a view to understanding the process and eventually slowing (and perhaps stopping) it. Richard Walker is one of these singularly focused researchers.

From the BBC:

Richard Walker has been trying to conquer ageing since he was a 26-year-old free-loving hippie. It was the 1960s, an era marked by youth: Vietnam War protests, psychedelic drugs, sexual revolutions. The young Walker relished the culture of exultation, of joie de vivre, and yet was also acutely aware of its passing. He was haunted by the knowledge that ageing would eventually steal away his vitality – that with each passing day his body was slightly less robust, slightly more decayed. One evening he went for a drive in his convertible and vowed that by his 40th birthday, he would find a cure for ageing.

Walker became a scientist to understand why he was mortal. “Certainly it wasn’t due to original sin and punishment by God, as I was taught by nuns in catechism,” he says. “No, it was the result of a biological process, and therefore is controlled by a mechanism that we can understand.”

Scientists have published several hundred theories of ageing, and have tied it to a wide variety of biological processes. But no one yet understands how to integrate all of this disparate information.

Walker, now 74, believes that the key to ending ageing may lie in a rare disease that doesn’t even have a real name, “Syndrome X”. He has identified four girls with this condition, marked by what seems to be a permanent state of infancy, a dramatic developmental arrest. He suspects that the disease is caused by a glitch somewhere in the girls’ DNA. His quest for immortality depends on finding it.

It’s the end of another busy week and MaryMargret Williams is shuttling her brood home from school. She drives an enormous SUV, but her six children and their coats and bags and snacks manage to fill every inch. The three big kids are bouncing in the very back. Sophia, 10, with a mouth of new braces, is complaining about a boy-crazy friend. She sits next to Anthony, seven, and Aleena, five, who are glued to something on their mother’s iPhone. The three little kids squirm in three car seats across the middle row. Myah, two, is mining a cherry slushy, and Luke, one, is pawing a bag of fresh crickets bought for the family gecko.

Finally there’s Gabrielle, who’s the smallest child, and the second oldest, at nine years old. She has long, skinny legs and a long, skinny ponytail, both of which spill out over the edges of her car seat. While her siblings giggle and squeal, Gabby’s dusty-blue eyes roll up towards the ceiling. By the calendar, she’s almost an adolescent. But she has the buttery skin, tightly clenched fingers and hazy awareness of a newborn.

Back in 2004, when MaryMargret and her husband, John, went to the hospital to deliver Gabby, they had no idea anything was wrong. They knew from an ultrasound that she would have clubbed feet, but so had their other daughter, Sophia, who was otherwise healthy. And because MaryMargret was a week early, they knew Gabby would be small, but not abnormally so. “So it was such a shock to us when she was born,” MaryMargret says.

Gabby came out purple and limp. Doctors stabilised her in the neonatal intensive care unit and then began a battery of tests. Within days the Williamses knew their new baby had lost the genetic lottery. Her brain’s frontal lobe was smooth, lacking the folds and grooves that allow neurons to pack in tightly. Her optic nerve, which runs between the eyes and the brain, was atrophied, which would probably leave her blind. She had two heart defects. Her tiny fists couldn’t be pried open. She had a cleft palate and an abnormal swallowing reflex, which meant she had to be fed through a tube in her nose. “They started trying to prepare us that she probably wouldn’t come home with us,” John says. Their family priest came by to baptise her.

Day after day, MaryMargret and John shuttled between Gabby in the hospital and 13-month-old Sophia at home. The doctors tested for a few known genetic syndromes, but they all came back negative. Nobody had a clue what was in store for her. Her strong Catholic family put their faith in God. “MaryMargret just kept saying, ‘She’s coming home, she’s coming home’,” recalls her sister, Jennie Hansen. And after 40 days, she did.

Gabby cried a lot, loved to be held, and ate every three hours, just like any other newborn. But of course she wasn’t. Her arms would stiffen and fly up to her ears, in a pose that the family nicknamed her “Harley-Davidson”. At four months old she started having seizures. Most puzzling and problematic, she still wasn’t growing. John and MaryMargret took her to specialist after specialist: a cardiologist, a gastroenterologist, a geneticist, a neurologist, an ophthalmologist and an orthopaedist. “You almost get your hopes up a little – ’This is exciting! We’re going to the gastro doctor, and maybe he’ll have some answers’,” MaryMargret says. But the experts always said the same thing: nothing could be done.

The first few years with Gabby were stressful. When she was one and Sophia two, the Williamses drove from their home in Billings, Montana, to MaryMargret’s brother’s home outside of St Paul, Minnesota. For nearly all of those 850 miles, Gabby cried and screamed. This continued for months until doctors realised she had a run-of-the-mill bladder infection. Around the same period, she acquired a severe respiratory infection that left her struggling to breathe. John and MaryMargret tried to prepare Sophia for the worst, and even planned which readings and songs to use at Gabby’s funeral. But the tiny toddler toughed it out.

While Gabby’s hair and nails grew, her body wasn’t getting bigger. She was developing in subtle ways, but at her own pace. MaryMargret vividly remembers a day at work when she was pushing Gabby’s stroller down a hallway with skylights in the ceiling. She looked down at Gabby and was shocked to see her eyes reacting to the sunlight. “I thought, ‘Well, you’re seeing that light!’” MaryMargret says. Gabby wasn’t blind, after all.

Despite the hardships, the couple decided they wanted more children. In 2007 MaryMargret had Anthony, and the following year she had Aleena. By this time, the Williamses had stopped trudging to specialists, accepting that Gabby was never going to be fixed. “At some point we just decided,” John recalls, “it’s time to make our peace.”

Mortal questions

When Walker began his scientific career, he focused on the female reproductive system as a model of “pure ageing”: a woman’s ovaries, even in the absence of any disease, slowly but inevitably slide into the throes of menopause. His studies investigated how food, light, hormones and brain chemicals influence fertility in rats. But academic science is slow. He hadn’t cured ageing by his 40th birthday, nor by his 50th or 60th. His life’s work was tangential, at best, to answering the question of why we’re mortal, and he wasn’t happy about it. He was running out of time.

So he went back to the drawing board. As he describes in his book, Why We Age, Walker began a series of thought experiments to reflect on what was known and not known about ageing.

Ageing is usually defined as the slow accumulation of damage in our cells, organs and tissues, ultimately causing the physical transformations that we all recognise in elderly people. Jaws shrink and gums recede. Skin slackens. Bones turn brittle, cartilage thins and joints swell. Arteries stiffen and clog. Hair greys. Vision dims. Memory fades. The notion that ageing is a natural, inevitable part of life is so fixed in our culture that we rarely question it. But biologists have been questioning it for a long time.

It’s a harsh world out there, and even young cells are vulnerable. It’s like buying a new car: the engine runs perfectly but is still at risk of getting smashed on the highway. Our young cells survive only because they have a slew of trusty mechanics on call. Take DNA, which provides the all-important instructions for making proteins. Every time a cell divides, it makes a near-perfect copy of its three-billion-letter code. Copying mistakes happen frequently along the way, but we have specialised repair enzymes to fix them, like an automatic spellcheck. Proteins, too, are ever vulnerable. If it gets too hot, they twist into deviant shapes that keep them from working. But here again, we have a fixer: so-called ‘heat shock proteins’ that rush to the aid of their misfolded brethren. Our bodies are also regularly exposed to environmental poisons, such as the reactive and unstable ‘free radical’ molecules that come from the oxidisation of the air we breathe. Happily, our tissues are stocked with antioxidants and vitamins that neutralise this chemical damage. Time and time again, our cellular mechanics come to the rescue.

Which leads to the biologists’ longstanding conundrum: if our bodies are so well tuned, why, then, does everything eventually go to hell?

One theory is that it all boils down to the pressures of evolution. Humans reproduce early in life, well before ageing rears its ugly head. All of the repair mechanisms that are important in youth – the DNA editors, the heat shock proteins, the antioxidants – help the young survive until reproduction, and are therefore passed down to future generations. But problems that show up after we’re done reproducing cannot be weeded out by evolution. Hence, ageing.

Most scientists say that ageing is not caused by any one culprit but by the breakdown of many systems at once. Our sturdy DNA mechanics become less effective with age, meaning that our genetic code sees a gradual increase in mutations. Telomeres, the sequences of DNA that act as protective caps on the ends of our chromosomes, get shorter every year. Epigenetic messages, which help turn genes on and off, get corrupted with time. Heat shock proteins run down, leading to tangled protein clumps that muck up the smooth workings of a cell. Faced with all of this damage, our cells try to adjust by changing the way they metabolise nutrients and store energy. To ward off cancer, they even know how to shut themselves down. But eventually cells stop dividing and stop communicating with each other, triggering the decline we see from the outside.

Scientists trying to slow the ageing process tend to focus on one of these interconnected pathways at a time. Some researchers have shown, for example, that mice on restricted-calorie diets live longer than normal. Other labs have reported that giving mice rapamycin, a drug that targets an important cell-growth pathway, boosts their lifespan. Still other groups are investigating substances that restore telomeres, DNA repair enzymes and heat shock proteins.

During his thought experiments, Walker wondered whether all of these scientists were fixating on the wrong thing. What if all of these various types of cellular damages were the consequences of ageing, but not the root cause of it? He came up with an alternative theory: that ageing is the unavoidable fallout of our development.

The idea sat on the back burner of Walker’s mind until the evening of 23 October 2005. He was working in his home office when his wife called out to him to join her in the family room. She knew he would want to see what was on TV: an episode of Dateline about a young girl who seemed to be “frozen in time”. Walker watched the show and couldn’t believe what he was seeing. Brooke Greenberg was 12 years old, but just 13 pounds (6kg) and 27 inches (69cm) long. Her doctors had never seen anything like her condition, and suspected the cause was a random genetic mutation. “She literally is the Fountain of Youth,” her father, Howard Greenberg, said.

Walker was immediately intrigued. He had heard of other genetic diseases, such as progeria and Werner syndrome, which cause premature ageing in children and adults respectively. But this girl seemed to be different. She had a genetic disease that stopped her development and with it, Walker suspected, the ageing process. Brooke Greenberg, in other words, could help him test his theory.

Uneven growth

Brooke was born a few weeks premature, with many birth defects. Her paediatrician labeled her with Syndrome X, not knowing what else to call it.

After watching the show, Walker tracked down Howard Greenberg’s address. Two weeks went by before Walker heard back, and after much discussion he was allowed to test Brooke. He was sent Brooke’s medical records as well as blood samples for genetic testing. In 2009, his team published a brief report describing her case.

Walker’s analysis found that Brooke’s organs and tissues were developing at different rates. Her mental age, according to standardised tests, was between one and eight months. Her teeth appeared to be eight years old; her bones, 10 years. She had lost all of her baby fat, and her hair and nails grew normally, but she had not reached puberty. Her telomeres were considerably shorter than those of healthy teenagers, suggesting that her cells were ageing at an accelerated rate.

All of this was evidence of what Walker dubbed “developmental disorganisation”. Brooke’s body seemed to be developing not as a coordinated unit, he wrote, but rather as a collection of individual, out-of-sync parts. “She is not simply ‘frozen in time’,” Walker wrote. “Her development is continuing, albeit in a disorganised fashion.”

The big question remained: why was Brooke developmentally disorganised? It wasn’t nutritional and it wasn’t hormonal. The answer had to be in her genes. Walker suspected that she carried a glitch in a gene (or a set of genes, or some kind of complex genetic programme) that directed healthy development. There must be some mechanism, after all, that allows us to develop from a single cell to a system of trillions of cells. This genetic programme, Walker reasoned, would have two main functions: it would initiate and drive dramatic changes throughout the organism, and it would also coordinate these changes into a cohesive unit.

Ageing, he thought, comes about because this developmental programme, this constant change, never turns off. From birth until puberty, change is crucial: we need it to grow and mature. After we’ve matured, however, our adult bodies don’t need change, but rather maintenance. “If you’ve built the perfect house, you would want to stop adding bricks at a certain point,” Walker says. “When you’ve built a perfect body, you’d want to stop screwing around with it. But that’s not how evolution works.” Because natural selection cannot influence traits that show up after we have passed on our genes, we never evolved a “stop switch” for development, Walker says. So we keep adding bricks to the house. At first this doesn’t cause much damage – a sagging roof here, a broken window there. But eventually the foundation can’t sustain the additions, and the house topples. This, Walker says, is ageing.

Brooke was special because she seemed to have been born with a stop switch. But finding the genetic culprit turned out to be difficult. Walker would need to sequence Brooke’s entire genome, letter by letter.

That never happened. Much to Walker’s chagrin, Howard Greenberg abruptly severed their relationship. The Greenbergs have not publicly explained why they ended their collaboration with Walker, and declined to comment for this article.

Second chance

In August 2009, MaryMargret Williams saw a photo of Brooke on the cover of People magazine, just below the headline “Heartbreaking mystery: The 16-year-old baby”. She thought Brooke sounded a lot like Gabby, so she contacted Walker.

After reviewing Gabby’s details, Walker filled her in on his theory. Testing Gabby’s genes, he said, could help him in his mission to end age-related disease – and maybe even ageing itself.

This didn’t sit well with the Williamses. John, who works for the Montana Department of Corrections, often interacts with people facing the reality of our finite time on Earth. “If you’re spending the rest of your life in prison, you know, it makes you think about the mortality of life,” he says. What’s important is not how long you live, but rather what you do with the life you’re given. MaryMargret feels the same way. For years she has worked in a local dermatology office. She knows all too well the cultural pressures to stay young, and wishes more people would embrace the inevitability of getting older. “You get wrinkles, you get old, that’s part of the process,” she says.

But Walker’s research also had its upside. First and foremost, it could reveal whether the other Williams children were at risk of passing on Gabby’s condition.

For several months, John and MaryMargret hashed out the pros and cons. They were under no illusion that the fruits of Walker’s research would change Gabby’s condition, nor would they want it to. But they did want to know why. “What happened, genetically, to make her who she is?” John says. And more importantly: “Is there a bigger meaning for it?”

John and MaryMargret firmly believe that God gave them Gabby for a reason. Walker’s research offered them a comforting one: to help treat Alzheimer’s and other age-related diseases. “Is there a small piece that Gabby could present to help people solve these awful diseases?” John asks. “Thinking about it, it’s like, no, that’s for other people, that’s not for us.” But then he thinks back to the day Gabby was born. “I was in that delivery room, thinking the same thing – this happens to other people, not us.”

Still not entirely certain, the Williamses went ahead with the research.

Amassing evidence

Walker published his theory in 2011, but he’s only the latest of many researchers to think along the same lines. “Theories relating developmental processes to ageing have been around for a very long time, but have been somewhat under the radar for most researchers,” says Joao Pedro de Magalhaes, a biologist at the University of Liverpool. In 1932, for example, English zoologist George Parker Bidder suggested that mammals have some kind of biological “regulator” that stops growth after the animal reaches a specific size. Ageing, Bidder thought, was the continued action of this regulator after growth was done.

Subsequent studies showed that Bidder wasn’t quite right; there are lots of marine organisms, for example, that never stop growing but age anyway. Still, his fundamental idea of a developmental programme leading to ageing has persisted.

For several years, Stuart Kim’s group at Stanford University has been comparing which genes are expressed in young and old nematode worms. It turns out that some genes involved in ageing also help drive development in youth.

Kim suggested that the root cause of ageing is the “drift”, or mistiming, of developmental pathways during the ageing process, rather than an accumulation of cellular damage.

Other groups have since found similar patterns in mice and primates. One study, for example, found that many genes turned on in the brains of old monkeys and humans are the same as those expressed in young brains, suggesting that ageing and development are controlled by some of the same gene networks.

Perhaps most provocative of all, some studies of worms have shown that shutting down essential development genes in adults significantly prolongs life. “We’ve found quite a lot of genes in which this happened – several dozen,” de Magalhaes says.

Nobody knows whether the same sort of developmental-programme genes exist in people. But say that they do exist. If someone was born with a mutation that completely destroyed this programme, Walker reasoned, that person would undoubtedly die. But if a mutation only partially destroyed it, it might lead to a condition like what he saw in Brooke Greenberg or Gabby Williams. So if Walker could identify the genetic cause of Syndrome X, then he might also have a driver of the ageing process in the rest of us.

And if he found that, then could it lead to treatments that slow – or even end – ageing? “There’s no doubt about it,” he says.

Public stage

After agreeing to participate in Walker’s research, the Williamses, just like the Greenbergs before them, became famous. In January 2011, when Gabby was six, the television channel TLC featured her on a one-hour documentary. The Williams family also appeared on Japanese television and in dozens of newspaper and magazine articles.

Other than becoming a local celebrity, though, Gabby’s everyday life hasn’t changed much since getting involved in Walker’s research. She spends her days surrounded by her large family. She’ll usually lie on the floor, or in one of several cushions designed to keep her spine from twisting into a C shape. She makes noises that would make an outsider worry: grunting, gasping for air, grinding her teeth. Her siblings think nothing of it. They play boisterously in the same room, somehow always careful not to crash into her. Once a week, a teacher comes to the house to work with Gabby. She uses sounds and shapes on an iPad to try to teach cause and effect. When Gabby turned nine, last October, the family made her a birthday cake and had a party, just as they always do. Most of her gifts were blankets, stuffed animals and clothes, just as they are every year. Her aunt Jennie gave her make-up.

Walker teamed up with geneticists at Duke University and screened the genomes of Gabby, John and MaryMargret. This test looked at the exome, the 2% of the genome that codes for proteins. From this comparison, the researchers could tell that Gabby did not inherit any exome mutations from her parents – meaning that it wasn’t likely that her siblings would be able to pass on the condition to their kids. “It was a huge relief – huge,” MaryMargret says.

Still, the exome screening didn’t give any clues as to what was behind Gabby’s disease. Gabby carries several mutations in her exome, but none in a gene that would make sense of her condition. All of us have mutations littering our genomes. So it’s impossible to know, in any single individual, whether a particular mutation is harmful or benign – unless you can compare two people with the same condition.
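
As an aside from us, not the BBC piece: the trio comparison described above comes down to simple set logic. Take the child's variants and remove everything also seen in either parent; whatever is left was not inherited. A toy sketch in Python (the variant names are invented for illustration and have nothing to do with the actual Duke analysis) looks something like this:

    # Toy sketch of a trio (child vs parents) exome comparison.
    # Variant names below are invented for illustration only.
    child  = {"chr3:1234A>G", "chr7:5678C>T", "chrX:9012G>A"}
    mother = {"chr3:1234A>G", "chr11:3456T>C"}
    father = {"chr7:5678C>T", "chr2:7890G>T"}

    # De novo candidates: present in the child but in neither parent.
    de_novo = child - (mother | father)
    print(de_novo)  # {'chrX:9012G>A'}

Real pipelines work from millions of sequenced reads and apply heavy quality filtering, but the underlying question is the same set difference.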

All girls

Luckily for him, Walker’s continued presence in the media has led him to two other young girls who he believes have the same syndrome. One of them, Mackenzee Wittke, of Alberta, Canada, is now five years old, with long and skinny limbs, just like Gabby. “We have basically been stuck in a time warp,” says her mother, Kim Wittke. The fact that all of these possible Syndrome X cases are girls is intriguing – it could mean that the crucial mutation is on their X chromosome. Or it could just be a coincidence.

Walker is working with a commercial outfit in California to compare all three girls’ entire genome sequences – the exome plus the other 98% of DNA code, which is thought to be responsible for regulating the expression of protein-coding genes.

For his theory, Walker says, “this is do or die – we’re going to do every single bit of DNA in these girls. If we find a mutation that’s common to them all, that would be very exciting.”

But that seems like a very big if.

Most researchers agree that finding out the genes behind Syndrome X is a worthwhile scientific endeavour, as these genes will no doubt be relevant to our understanding of development. They’re far less convinced, though, that the girls’ condition has anything to do with ageing. “It’s a tenuous interpretation to think that this is going to be relevant to ageing,” says David Gems, a geneticist at University College London. It’s not likely that these girls will even make it to adulthood, he says, let alone old age.

It’s also not at all clear that these girls have the same condition. Even if they do, and even if Walker and his collaborators discover the genetic cause, there would still be a steep hill to climb. The researchers would need to silence the same gene or genes in laboratory mice, which typically have a lifespan of two or three years. “If that animal lives to be 10, then we’ll know we’re on the right track,” Walker says. Then they’d have to find a way to achieve the same genetic silencing in people, whether with a drug or some kind of gene therapy. And then they’d have to begin long and expensive clinical trials to make sure that the treatment was safe and effective. Science is often too slow, and life too fast.

End of life

On 24 October 2013, Brooke passed away. She was 20 years old. MaryMargret heard about it when a friend called after reading it in a magazine. The news hit her hard. “Even though we’ve never met the family, they’ve just been such a part of our world,” she says.

MaryMargret doesn’t see Brooke as a template for Gabby – it’s not as if she now believes that she only has 11 years left with her daughter. But she can empathise with the pain the Greenbergs must be feeling. “It just makes me feel so sad for them, knowing that there’s a lot that goes into a child like that,” she says. “You’re prepared for them to die, but when it finally happens, you can just imagine the hurt.”

Today Gabby is doing well. MaryMargret and John are no longer planning her funeral. Instead, they’re beginning to think about what would happen if Gabby outlives them. (Sophia has offered to take care of her sister.) John turned 50 this year, and MaryMargret will be 41. If there were a pill to end ageing, they say they’d have no interest in it. Quite the contrary: they look forward to getting older, because it means experiencing the new joys, new pains and new ways to grow that come along with that stage of life.

Richard Walker, of course, has a fundamentally different view of growing old. When asked why he’s so tormented by it, he says it stems from childhood, when he watched his grandparents physically and psychologically deteriorate. “There was nothing charming to me about sedentary old people, rocking chairs, hot houses with Victorian trappings,” he says. At his grandparents’ funerals, he couldn’t help but notice that they didn’t look much different in death than they did at the end of life. And that was heartbreaking. “To say I love life is an understatement,” he says. “Life is the most beautiful and magic of all things.”

If his hypothesis is correct – who knows? – it might one day help prevent disease and modestly extend life for millions of people. Walker is all too aware, though, that it would come too late for him. As he writes in his book: “I feel a bit like Moses who, after wandering in the desert for most years of his life, was allowed to gaze upon the Promised Land but not granted entrance into it.”

 Read the entire story here.

Story courtesy of BBC and Mosaic under Creative Commons License.

Image: DNA structure. Courtesy of Wikipedia.

Send to Kindle

The Idea Shower and The Strategic Staircase

Every now and then we visit the world of corporatespeak to see how business jargon is faring: which words are in, which phrases are out. Unfortunately, many of the most used and over-used phrases still find their way into common office parlance. With apologies to our stateside readers, some of the most popular British phrases follow, and, no surprise, many of these cringeworthy euphemisms seem to emanate from the U.S. Ugh!

From the Guardian:

I don’t know about you, but I’m a sucker for a bit of joined up, blue sky thinking. I love nothing more than the opportunity to touch base with my boss first thing on a Monday morning. It gives me that 24 carat feeling.

I apologise for the sarcasm, but management speak makes most people want to staple the boss’s tongue to the desk. A straw poll around my office found jargon is seen by staff as a tool for making something seem more impressive than it actually is.

The Plain English Campaign says that many staff working for big corporate organisations find themselves using management speak as a way of disguising the fact that they haven’t done their job properly. Some people think that it is easy to bluff their way through by using long, impressive-sounding words and phrases, even if they don’t know what they mean, which is telling in itself.

Furthermore, a recent survey by the Institute of Leadership & Management revealed that management speak is used in almost two thirds (64%) of offices, with nearly a quarter (23%) considering it to be a pointless irritation. “Thinking outside the box” (57%), “going forward” (55%) and “let’s touch base” (39%) were identified as the top three most overused pieces of jargon.

Walk through any office and you’ll hear this kind of thing going on every day. Here are some of the most irritating euphemisms doing the rounds:

Helicopter view – need a phrase that means broad overview of the business? Then why not say “a broad view of the business”?

Idea shower – brainstorm might be out of fashion, but surely we can thought cascade something better than this drivel.

Touch base offline – meaning let’s meet and talk. Because, contrary to popular belief, it is possible to communicate without a Wi-Fi signal. No, really, it is. Fancy a coffee?

Low hanging fruit – easy win business. This would be perfect for hungry children in orchards, but what is really happening is an admission that you don’t want to take the complicated route.

Look under the bonnet – analyse a situation. Most people wouldn’t have a clue about a car engine. When I look under a car bonnet I scratch my head, try not to look like I haven’t got a clue, jiggle a few pipes and kick the tyres before handing the job over to a qualified professional.

Get all your ducks in a row – be organised. Bert and Ernie from Sesame Street had an obsession with rubber ducks. You may think I’m disorganised, but there’s no need to talk to me like a five-year-old.

Don’t let the grass grow too long on this one – work fast. I’m looking for a polite way of suggesting that you get off your backside and get on with it.

Not enough bandwidth – too busy. Really? Try upgrading to fibre optics. I reckon I know a few people who haven’t been blessed with enough “bandwidth” and it’s got nothing to do with being busy.

Cascading relevant information – speaking to your colleagues. If anything, this is worse than touching base offline. From the flourish of cascading through to relevant, and onto information – this is complete nonsense.

The strategic staircase – business plan. Thanks, but I’ll take the lift.

Run it up the flagpole – try it out. Could you attach yourself while you’re at it?

Read the entire story here.

Send to Kindle

Sugar Is Bad For You, Really? Really!

 

sugar molecules

In case you may not have heard, sugar is bad for you. In fact, an increasing number of food scientists will tell you that sugar is a poison, and that it’s time to fight the sugar oligarchs in much the same way that health advocates resolved to take on big tobacco many decades ago.

From the Guardian:

If you have any interest at all in diet, obesity, public health, diabetes, epidemiology, your own health or that of other people, you will probably be aware that sugar, not fat, is now considered the devil’s food. Dr Robert Lustig’s book, Fat Chance: The Hidden Truth About Sugar, Obesity and Disease, for all that it sounds like a Dan Brown novel, is the difference between vaguely knowing something is probably true, and being told it as a fact. Lustig has spent the past 16 years treating childhood obesity. His meta-analysis of the cutting-edge research on large-cohort studies of what sugar does to populations across the world, alongside his own clinical observations, has him credited with starting the war on sugar. When it reaches the enemy status of tobacco, it will be because of Lustig.

“Politicians have to come in and reset the playing field, as they have with any substance that is toxic and abused, ubiquitous and with negative consequence for society,” he says. “Alcohol, cigarettes, cocaine. We don’t have to ban any of them. We don’t have to ban sugar. But the food industry cannot be given carte blanche. They’re allowed to make money, but they’re not allowed to make money by making people sick.”

Lustig argues that sugar creates an appetite for itself by a determinable hormonal mechanism – a cycle, he says, that you could no more break with willpower than you could stop feeling thirsty through sheer strength of character. He argues that the hormone related to stress, cortisol, is partly to blame. “When cortisol floods the bloodstream, it raises blood pressure; increases the blood glucose level, which can precipitate diabetes. Human research shows that cortisol specifically increases caloric intake of ‘comfort foods’.” High cortisol levels during sleep, for instance, interfere with restfulness, and increase the hunger hormone ghrelin the next day. This differs from person to person, but I was jolted by recognition of the outrageous deliciousness of doughnuts when I haven’t slept well.

“The problem in obesity is not excess weight,” Lustig says, in the central London hotel that he has made his anti-metabolic illness HQ. “The problem with obesity is that the brain is not seeing the excess weight.” The brain can’t see it because appetite is determined by a binary system. You’re either in anorexigenesis – “I’m not hungry and I can burn energy” – or you’re in orexigenesis – “I’m hungry and I want to store energy.” The flip switch is your leptin level (the hormone that regulates your body fat) but too much insulin in your system blocks the leptin signal.

It helps here if you have ever been pregnant or remember much of puberty and that savage hunger; the way it can trick you out of your best intentions, the lure of ridiculous foods: six-month-old Christmas cake, sweets from a bin. If you’re leptin resistant – that is, if your insulin is too high as a result of your sugar intake – you’ll feel like that all the time.

Telling people to simply lose weight, he tells me, “is physiologically impossible and it’s clinically dangerous. It’s a goal that’s not achievable.” He explains further in the book: “Biochemistry drives behaviour. You see a patient who drinks 10 gallons of water a day and urinates 10 gallons of water a day. What is wrong with him? Could he have a behavioural disorder and be a psychogenic water drinker? Could be. Much more likely he has diabetes.” To extend that, you could tell people with diabetes not to drink water, and 3% of them might succeed – the outliers. But that wouldn’t help the other 97% just as losing the weight doesn’t, long-term, solve the metabolic syndrome – the addiction to sugar – of which obesity is symptomatic.

Many studies have suggested that diets tend to work for two months, some for as long as six. “That’s what the data show. And then everybody’s weight comes roaring back.” During his own time working night shifts, Lustig gained 3st, which he never lost and now uses exuberantly to make two points. The first is that weight is extremely hard to lose, and the second – more important, I think – is that he’s no diet and fitness guru himself. He doesn’t want everybody to be perfect: he’s just a guy who doesn’t want to surrender civilisation to diseases caused by industry. “I’m not a fitness guru,” he says, puckishly. “I’m 45lb overweight!”

“Sugar causes diseases: unrelated to their calories and unrelated to the attendant weight gain. It’s an independent primary-risk factor. Now, there will be food-industry people who deny it until the day they die, because their livelihood depends on it.” And here we have the reason why he sees this as a crusade and not a diet book, the reason that Lustig is in London and not Washington. This is an industry problem; the obesity epidemic began in 1980. Back then, nobody knew about leptin. And nobody knew about insulin resistance until 1984.

“What they knew was, when they took the fat out they had to put the sugar in, and when they did that, people bought more. And when they added more, people bought more, and so they kept on doing it. And that’s how we got up to current levels of consumption.” Approximately 80% of the 600,000 packaged foods you can buy in the US have added calorific sweeteners (this includes bread, burgers, things you wouldn’t add sugar to if you were making them from scratch). Daily fructose consumption has doubled in the past 30 years in the US, a pattern also observable (though not identical) here, in Canada, Malaysia, India, right across the developed and developing world. World sugar consumption has tripled in the past 50 years, while the population has only doubled; it makes sense of the obesity pandemic.

“It would have happened decades earlier; the reason it didn’t was that sugar wasn’t cheap. The thing that made it cheap was high-fructose corn syrup. They didn’t necessarily know the physiology of it, but they knew the economics of it.” Adding sugar to everyday food has become as much about the industry prolonging the shelf life as it has about palatability; if you’re shopping from corner shops, you’re likely to be eating unnecessary sugar in pretty well everything. It is difficult to remain healthy in these conditions. “You here in Britain are light years ahead of us in terms of understanding the problem. We don’t get it in the US, we have this libertarian streak. You don’t have that. You’re going to solve it first. So it’s in my best interests to help you, because that will help me solve it back there.”

The problem has mushroomed all over the world in 30 years and is driven by the profits of the food and diet industries combined. We’re not looking at a global pandemic of individual greed and fecklessness: it would be impossible for the citizens of the world to coordinate their human weaknesses with that level of accuracy. Once you stop seeing it as a problem of personal responsibility it’s easier to accept how profound and serious the war on sugar is. Life doesn’t have to become wholemeal and joyless, but traffic-light systems and five-a-day messaging are under-ambitious.

“The problem isn’t a knowledge deficit,” an obesity counsellor once told me. “There isn’t a fat person on Earth who doesn’t know vegetables are good for you.” Lustig agrees. “I, personally, don’t have a lot of hope that those things will turn things around. Education has not solved any substance of abuse. This is a substance of abuse. So you need two things, you need personal intervention and you need societal intervention. Rehab and laws, rehab and laws. Education would come in with rehab. But we need laws.”

Read the entire article here.

Image: Molecular diagrams of sucrose (left) and fructose (right). Courtesy of Wikipedia.

 

Send to Kindle

National Extinction Coming Soon

Based on declining fertility rates in some Asian nations, a new study predicts complete national extinctions in the not-too-distant future.

From the Telegraph:

South Koreans will be ‘extinct’ by 2750 if nothing is done to halt the nation’s falling fertility rate, according to a study by The National Assembly Research Service in Seoul.

The fertility rate declined to a new low of 1.19 children per woman in 2013, the study showed, well below the fertility rate required to sustain South Korea’s current population of 50 million people, the Chosun Ilbo reported.

In a simulation, the NARS study suggests that the population will shrink to 40 million in 2056 and 10 million in 2136. The last South Korean, the report indicates, will die in 2750, making it the first national group in the world to become extinct.

The simulation is a worst-case scenario and does not consider possible changes in immigration policy, for example.

The study, carried out at the request of Yang Seung-jo, a member of the opposition New Politics Alliance for Democracy, underlines the challenges facing a number of nations in the Asia-Pacific region.

Japan, Taiwan, Singapore and increasingly China are all experiencing growing financial pressures caused by rising healthcare costs and pension payments for an elderly population.

The problem is particularly serious in South Korea, where more than 38 per cent of the population is predicted to be of retirement age by 2050, according to the National Statistics Office. The equivalent figure in Japan is an estimated 39.6 per cent by 2050.

According to a 2012 study conducted by Tohoku University, Japan will go extinct in about one thousand years, with the last Japanese child born in 3011.

David Coleman, a population expert at Oxford University, has previously warned that South Korea’s fertility rate is so low that it threatens the existence of the nation.

The NARS study suggests that the southern Korean port city of Busan is most at risk, largely because of a sharp decline in the number of young and middle-aged residents, and that the last person will be born in the city in 2413.
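
To make the arithmetic concrete, here is a minimal sketch of the sort of constant-fertility projection such a headline implies. It is our own illustration, not the NARS cohort model; the replacement rate and generation length are assumed round numbers.

    # Minimal sketch: geometric population decline under constant sub-replacement fertility.
    # Parameters are illustrative assumptions, not those of the NARS simulation.
    FERTILITY = 1.19      # children per woman (the 2013 figure quoted above)
    REPLACEMENT = 2.1     # children per woman needed for a stable population
    GENERATION = 30       # assumed years per generation

    def project(population, start_year, end_year):
        ratio = FERTILITY / REPLACEMENT   # each generation shrinks by roughly 43%
        year = start_year
        while year <= end_year:
            print(f"{year}: {population / 1e6:.2f} million")
            population *= ratio
            year += GENERATION

    project(50_000_000, 2014, 2750)   # dwindles to a handful of people by the 2700s

Under these toy assumptions the population drops below one million by the mid-2200s and to a few dozen people by the 2730s. It falls faster than the report's early figures because it ignores the existing age structure of people already alive, but it shows why sub-replacement fertility compounds towards zero.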

Read the entire article here.

Send to Kindle

Those 25,000 Unread Emails

Google-search-email

It may not be you. You may not be the person who has tens of thousands of unread emails scattered across various email accounts. However, you know someone just like this — buried in a virtual avalanche of unopened text, unable to extricate herself (or himself) and with no pragmatic plan to tackle the digital morass.

Washington Post writer Brigid Schulte has some ideas to help your friend (or you, of course — your secret is safe with us).

From the Washington Post:

I was drowning in e-mail. Overwhelmed. Overloaded. Spending hours a day, it seemed, roiling in an unending onslaught of info turds and falling further and further behind. The day I returned from a two-week break, I had 23,768 messages in my inbox. And 14,460 of them were unread.

I had to do something. I kept missing stuff. Forgetting stuff. Apologizing. And getting miffed and increasingly angry e-mails from friends and others who wondered why I was ignoring them. It wasn’t just vacation that put me so far behind. I’d been behind for more than a year. Vacation only made it worse. Every time I thought of my inbox, I’d start to hyperventilate.

I’d tried tackling it before: One night a few months ago, I was determined to stay at my desk until I’d powered through all the unread e-mails. At dawn, I was still powering through and nowhere near the end. And before long, the inbox was just as crammed as it had been before I lost that entire night’s sleep.

On the advice of a friend, I’d even hired a Virtual Assistant to help me with the backlog. But I had no idea how to use one. And though I’d read about people declaring e-mail bankruptcy when their inbox was overflowing — deleting everything and starting over from scratch — I was positive there were gems somewhere in that junk, and I couldn’t bear to lose them.

I knew I wasn’t alone. I’d get automatic response messages saying someone was on vacation and the only way they could relax was by telling me they’d never, ever look at my e-mail, so please send it again when they returned. My friend, Georgetown law professor Rosa Brooks, often sends out this auto response: “My inbox looks like Pompeii, post-volcano. Will respond as soon as I have time to excavate.” And another friend, whenever an e-mail is longer than one or two lines, sends a short note, “This sounds like a conversation,” and she won’t respond unless you call her.

E-mail made the late writer Nora Ephron’s list of the 22 things she won’t miss in life. Twice. In 2013, more than 182 billion e-mails were sent every day, no doubt clogging up millions of inboxes around the globe.

Bordering on despair, I sought help from four productivity gurus. And, following their advice, in two weeks of obsession-bordering-on-compulsion, my inbox was down to zero.

Here’s how.

*CREATE A SYSTEM. Julie Gray, a time coach who helps people dig out of e-mail overload all the time, said the first thing I had to change was my mind.

“This is such a pervasive problem. People think, ‘What am I doing wrong?’ They think they don’t have discipline or focus or that there’s some huge character flaw and they’re beating themselves up all the time. Which only makes it worse,” she said.

“So I first start changing their e-mail mindset from ‘This is an example of my failure,’ to ‘This just means I haven’t found the right system for me yet.’ It’s really all about finding your own path through the craziness.”

Do not spend another minute on e-mail, she admonished me, until you’ve begun to figure out a system. Otherwise, she said, I’d never dig out.

So we talked systems. It soon became clear that I’d created a really great e-mail system for when I was writing my book — ironically enough, on being overwhelmed — spending most of my time not at all overwhelmed in yoga pants in my home office working on my iMac. I was a follower of Randy Pausch who wrote, in “The Last Lecture,” to keep your e-mail inbox down to one page and religiously file everything once you’ve handled it. And I had for a couple years.

But now that I was traveling around the country to talk about the book, and back at work at The Washington Post, using my laptop, iPhone and iPad, that system was completely broken. I had six different e-mail accounts. And my main Verizon e-mail that I’d used for years and the Mac Mail inbox with meticulous file folders that I loved on my iMac didn’t sync across any of them.

Gray asked: “If everything just blew up today, and you had to start over, how would you set up your system?”

I wanted one inbox. One e-mail account. And I wanted the same inbox on all my devices. If I deleted an e-mail on my laptop, I wanted it deleted on my iMac. If I put an e-mail into a folder on my iMac, I wanted that same folder on my laptop.

So I decided to use Gmail, which does sync, as my main account. I set up an auto responder on my Verizon e-mail saying I was no longer using it and directing people to my Gmail account. I updated all my accounts to send to Gmail. And I spent hours on the phone with Apple one Sunday (thank you, Chazz) to get my Gmail account set up in my beloved Mac mail inbox that would sync. Then I transferred old files and created new ones on Gmail. I had to keep my Washington Post account separate, but that wasn’t the real problem.

All systems go.
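
If your own backlog is beyond hand-triage, the same "get it out of the inbox" idea can also be scripted. Here is a minimal sketch using Python's standard imaplib; it is our own illustration, not anything the Post writer or her coaches used, and the host, credentials, folder name and cutoff date are placeholders you would need to change.

    # Minimal sketch: sweep everything older than a cutoff out of the inbox into an archive.
    # Host, credentials, folder name and cutoff are placeholders, not a recommended setup.
    import imaplib

    HOST = "imap.example.com"
    USER = "you@example.com"
    PASSWORD = "app-password-here"
    CUTOFF = "01-Jan-2014"        # IMAP date format: DD-Mon-YYYY
    ARCHIVE = "Archive"           # assumes a folder with this name already exists

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        # Everything that arrived before the cutoff date.
        _, data = imap.search(None, f"(BEFORE {CUTOFF})")
        ids = data[0].split()
        print(f"{len(ids)} stale messages found")
        for msg_id in ids:
            imap.copy(msg_id, ARCHIVE)                  # keep a copy in the archive
            imap.store(msg_id, "+FLAGS", "\\Deleted")   # flag the inbox copy
        imap.expunge()   # removes the flagged messages from INBOX only

Nothing is lost: the messages remain searchable in the archive folder; the inbox simply stops doubling as a to-do list.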

Read the entire article here.

Image courtesy of Google Search.

 

Send to Kindle

Robin Williams You Will Be Missed

Google-search-robin-williams

Mork returned to Ork this week; sadly, Robin Williams, who brought him so memorably to life, passed away on August 11, 2014. He was 63. His unique comic genius will be sorely missed.

From NYT:

Some years ago, at a party at the Cannes Film Festival, I was leaning against a rail watching a fireworks display when I heard a familiar voice behind me. Or rather, at least a dozen voices, punctuating the offshore explosions with jokes, non sequiturs and off-the-wall pop-cultural, sexual and political references.

There was no need to turn around: The voices were not talking directly to me and they could not have belonged to anyone other than Robin Williams, who was extemporizing a monologue at least as pyrotechnically amazing as what was unfolding against the Mediterranean sky. I’m unable to recall the details now, but you can probably imagine the rapid-fire succession of accents and pitches — macho basso, squeaky girly, French, Spanish, African-American, human, animal and alien — entangling with curlicues of self-conscious commentary about the sheer ridiculousness of anyone trying to narrate explosions of colored gunpowder in real time.

Part of the shock of his death on Monday came from the fact that he had been on — ubiquitous, self-reinventing, insistently present — for so long. On Twitter, mourners dated themselves with memories of the first time they had noticed him. For some it was the movie “Aladdin.” For others “Dead Poets Society” or “Mrs. Doubtfire.” I go back even further, to the “Mork and Mindy” television show and an album called “Reality — What a Concept” that blew my eighth-grade mind.

Back then, it was clear that Mr. Williams was one of the most explosively, exhaustingly, prodigiously verbal comedians who ever lived. The only thing faster than his mouth was his mind, which was capable of breathtaking leaps of free-associative absurdity. Janet Maslin, reviewing his standup act in 1979, cataloged a tumble of riffs that ranged from an impression of Jacques Cousteau to “an evangelist at the Disco Temple of Comedy,” to Truman Capote Jr. at “the Kindergarten of the Stars” (whatever that was). “He acts out the Reader’s Digest condensed version of ‘Roots,’ ” Ms. Maslin wrote, “which lasts 15 seconds in its entirety. He improvises a Shakespearean-sounding epic about the Three Mile Island nuclear disaster, playing all the parts himself, including Einstein’s ghost.” (That, or something like it, was a role he would reprise more than 20 years later in Steven Spielberg’s “A.I.”)

Read the entire article here.

Image courtesy of Google Search.

Send to Kindle