Category Archives: Idea Soup

Answers to Life’s Big Questions

Do you gulp Pepsi or Coke? Are you a Mac or a PC? Do you side with MSNBC or Fox News? Do you sip tea or coffee? Do you prefer thin crust or deep pan pizza?

Hunch has compiled a telling infographic from millions of answers gathered via its online Teach Hunch About You (THAY) questions. Interestingly, it looks like 61 percent of respondents are “dog people” and 31 percent “cat people” (with 8 percent neither).

[div class=attrib]From Hunch:[end-div]

[div class=attrib]More from theSource here.[end-div]

Morality 1: Good without gods

[div class=attrib]From QualiaSoup:[end-div]

Some people claim that morality is dependent upon religion, that atheists cannot possibly be moral since god and morality are intertwined (well, in their minds). Unfortunately, this is one way that religious people dehumanise atheists who have a logical way of thinking about what constitutes moral social behaviour. More than simply being an (incorrect) definition in the Oxford dictionary, morality is actually the main subject of many philosophers’ intellectual lives. This video, the first of a multi-part series, begins the discussion by defining morality and then moves on to look at six hypothetical cultures and their beliefs.

[tube]T7xt5LtgsxQ[/tube]

Favela Futurism, Very Chic

[div class=attrib]From BigThink:[end-div]

The future of global innovation is the Brazilian favela, the Mumbai slum and the Nairobi shanty-town. At a time when countries across the world, from Latin America to Africa to Asia, are producing new mega-slums on an epic scale, when emerging mega-cities in China are pushing the limits of urban infrastructure by adding millions of new inhabitants each year, it is becoming increasingly likely that the lowly favela, slum or ghetto may hold the key to the future of human development.

Back in 2009, futurist and science fiction writer Bruce Sterling first introduced Favela Chic as a way of thinking about our modern world. What is favela chic? It’s what happens “when you’ve lost everything materially… but are wired to the gills and are big on Facebook.” Favela chic doesn’t have to be exclusively an emerging market notion, either. As Sterling has noted, it can be a hastily thrown-together high-rise in downtown Miami, covered over with weeds, without any indoor plumbing, filled with squatters.

Flash forward to the end of 2010, when the World Future Society named favela innovation one of the Top 10 trends to watch in 2011: “Dwellers of slums, favelas, and ghettos have learned to use and reuse resources and commodities more efficiently than their wealthier counterparts. The neighborhoods are high-density and walkable, mixing commercial and residential areas rather than segregating these functions. In many of these informal cities, participants play a role in communal commercial endeavors such as growing food or raising livestock.”

What’s fascinating is that the online digital communities we are busy creating in “developed” nations more closely resemble favelas than they do carefully planned urban cities. They are messy, emergent and always in beta. With few exceptions, there are no civil rights and no effective ways to organize. When asked how to define favela chic at this year’s SXSW event in Austin, Sterling referred to Facebook as the poster child of a digital favela. It’s quickly thrown up, in permanent beta, and easily disposed of. Apps and social games are the corrugated steel of our digital shanty-towns.

[div class=attrib]More from theSource here.[end-div]

Bad reasoning about reasoning

[div class=attrib]By Massimo Pigliucci at Rationally Speaking:[end-div]

A recent paper on the evolutionary psychology of reasoning has made mainstream news, with extensive coverage by the New York Times, among others. Too bad the “research” is badly flawed, and the lesson drawn by Patricia Cohen’s commentary in the Times is precisely the wrong one.

Readers of this blog and listeners to our podcast know very well that I tend to be pretty skeptical of evolutionary psychology in general. The reason isn’t because there is anything inherently wrong about thinking that (some) human behavioral traits evolved in response to natural selection. That’s just an uncontroversial consequence of standard evolutionary theory. The devil, rather, is in the details: it is next to impossible to test specific evopsych hypotheses because the crucial data are often missing. The fossil record hardly helps (if we are talking about behavior), there are precious few closely related species for comparison (and they are not all that closely related), and the current ecological-social environment is very different from the “ERE,” the Evolutionarily Relevant Environment (which means that measuring selection on a given trait in today’s humans is pretty much irrelevant).

That said, I was curious about Hugo Mercier and Dan Sperber’s paper, “Why do humans reason? Arguments for an argumentative theory,” published in Behavioral and Brain Sciences (volume 34, pp. 57-111, 2011), which is accompanied by an extensive peer commentary. My curiosity was piqued in particular because of the Times’ headline from the June 14 article: “Reason Seen More as Weapon Than Path to Truth.” Oh crap, I thought.

Mercier and Sperber’s basic argument is that reason did not evolve to allow us to seek truth, but rather to win arguments with our fellow human beings. We are natural lawyers, not natural philosophers. This, according to them, explains why people are so bad at reasoning, for instance why we tend to fall for basic mistakes such as the well known confirmation bias — a tendency to seek evidence in favor of one’s position and discount contrary evidence that is well on display in politics and pseudoscience. (One could immediately raise the obvious “so what?” objection to all of this: language possibly evolved to coordinate hunting and gossip about your neighbor. That doesn’t mean we can’t take writing and speaking courses and dramatically improve on our given endowment, natural selection be damned.)

The first substantive thing to notice about the paper is that there isn’t a single new datum to back up the central hypothesis. It is one (long) argument in which the authors review well known cognitive science literature and simply apply evopsych speculation to it. If that’s the way to get into the New York Times, I better increase my speculation quotient.

[div class=attrib]More from theSource here.[end-div]

Postcards from the Atomic Age

Remember the lowly tourist postcard? Undoubtedly, you will have sent one or two “Wish you were here!” missives to your parents or work colleagues while vacationing in the Caribbean or hiking in Austria. Or, you may still have some in a desk drawer: the ones you never mailed because you had neither the time nor the local currency to purchase a stamp. If not, someone in your extended family surely has a collection of old postcards with strangely saturated and slightly off-kilter colors, chronicling family travels to interesting and not-so-interesting places.

Then, there are postcards of a different kind, sent from places that wouldn’t normally spring to mind as departure points for a quick and trivial dispatch. Tom Vanderbilt over at Slate introduces us to a new book, Atomic Postcards:

“Having a great time,” reads the archetypical postcard. “Wish you were here.” But what about when the “here” is the blasted, irradiated wastes of Frenchman’s Flat, in the Nevada desert? Or the site of America’s worst nuclear disaster? John O’Brian and Jeremy Borsos’ new book, Atomic Postcards, fuses the almost inherently banal form of the canned tourist dispatch with the incipient peril, and nervously giddy promise, of the nuclear age. Collected within are two-sided curios spanning the vast range of the military-industrial complex—”radioactive messages from the Cold War,” as the book promises. They depict everything from haunting afterimages of atomic incineration on the Nagasaki streets to achingly prosaic sales materials from atomic suppliers to a gauzy homage to the “first atomic research reactor in Israel,” a concrete monolith jutting from the sand, looking at once futuristic and ancient. Taken as a whole, the postcards form a kind of de facto and largely cheery dissemination campaign for the wonder of atomic power (and weapons). And who’s to mind if that sunny tropical beach is flecked with radionuclides?

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image: Marshall Islands, 1955. Image courtesy of Atomic Postcards.[end-div]

“Spectacular nuclear explosion” reads a caption on the back (or “verso,” as postcard geeks would say) of this card—released by “Ray Helberg’s Pacific Service”—of a test in the Marshall Islands. The disembodied cloud—a ferocious funnel of water thrust upward, spreading into a toroid of vapor—recalls a Dutch sea painting with something new and alien in its center. “Quite a site [sic] to watch,” reads a laconic comment on the back. Outside the frame of the stylized blast cloud are its consequences. As Nathan Hodge and Sharon Weinberger write in A Nuclear Family Vacation, “[F]or the people of the Marshall Islands, the consequences of atomic testing in the Pacific were extraordinary. Traditional communities were displaced by the tests; prolonged exposure to radiation created a legacy of illness and disease.”

Is Anyone There?

[div class=attrib]From the New York Times:[end-div]

“WHEN people don’t answer my e-mails, I always think maybe something tragic happened,” said John Leguizamo, the writer and performer, whose first marriage ended when his wife asked him by e-mail for a divorce. “Like maybe they got hit by a meteorite.”

Betsy Rapoport, an editor and life coach, said: “I don’t believe I have ever received an answer from any e-mail I’ve ever sent my children, now 21 and 18. Unless you count ‘idk’ as a response.”

The British linguist David Crystal said that his wife recently got a reply to an e-mail she sent in 2006. “It was like getting a postcard from the Second World War,” he said.

The roaring silence. The pause that does not refresh. The world is full of examples of how the anonymity and remove of the Internet cause us to write and post things that we later regret. But what of the way that anonymity and remove sometimes leave us dangling like a cartoon character that has run off a cliff?

For every fiery screed or gushy, tear-streaked confession in the ethersphere, it seems there’s a big patch of grainy, unresolved black. Though it would comfort us to think that these long silences are the product of technical failure or mishap, the more likely culprits are lack of courtesy and passive aggression.

“The Internet is something very informal that happened to a society that was already very informal,” said P. M. Forni, an etiquette expert and the author of “Choosing Civility.” “We can get away with murder, so to speak. The endless amount of people we can contact means we are not as cautious or kind as we might be. Consciously or unconsciously we think of our interlocutors as disposable or replaceable.”

Judith Kallos, who runs a site on Internet etiquette called netmanners.com, said the No. 1 complaint is that “people feel they’re being ignored.”

[div class=attrib]More from theSource here.[end-div]

The five top regrets of dying people

Social scientists may have already examined the cross-cultural regrets of those nearing the end of life. If not, it would make fascinating reading to explore the differences and similarities. However, despite the many traits and beliefs that divide humanity, it’s likely that many of these regrets are common across cultures.

[div class=attrib]By Massimo Pigliucci at Rationally Speaking:[end-div]

Bronnie Ware is the author (a bit too much on the mystical-touchy-feely side for my taste) of the blog “Inspiration and Chai” (QED). But she has also worked for years in palliative care, thereby having the life-altering experience of sharing people’s last few weeks and listening to what they regretted the most about their soon-to-end lives. The result is this list of “top five” things people wished they had done differently:

1. I wish I’d had the courage to live a life true to myself, not the life others expected of me.
2. I wish I didn’t work so hard.
3. I wish I’d had the courage to express my feelings.
4. I wish I had stayed in touch with my friends.
5. I wish that I had let myself be happier.

This is, of course, anecdotal evidence from a single source, and as such it needs to be taken with a rather large grain of salt. But it is hard to read the list and not begin reflecting on your own life — even if you are (hopefully!) very far from the end.

Ware’s list, of course, is precisely why Socrates famously said that “the unexamined life is not worth living” (in Apology 38a, Plato’s rendition of Socrates’ speech at his trial), and why Aristotle considered the quest for eudaimonia (flourishing) a life-long commitment the success of which can be assessed only at the very end.

Let’s then briefly consider the list and see what we can learn from it. Beginning with the first entry, I’m not sure what it means for someone to be true to oneself, but I take it that the notion attempts to get at the fact that too many of us cave to societal forces early on and do not actually follow our aspirations. The practicalities of life have a way of imposing themselves on us, beginning with parental pressure to enter a remunerative career path and continuing with the fact that no matter what your vocation is you still have to somehow pay the bills and put dinner on the table every evening. And yet, you wouldn’t believe the number of people I’ve met in recent years who — about midway through their expected lifespan — suddenly decided that what they had been doing with their lives during the previous couple of decades was somewhat empty and needed to change. Almost without exception, these friends in their late ‘30s or early ‘40s contemplated — and many actually followed through — going back to (graduate) school and preparing for a new career in areas that they felt augmented the meaningfulness of their lives (often, but not always, that meant teaching). One could argue that such self-examination should have occurred much earlier, but we are often badly equipped, in terms of both education and life experience, to ask ourselves that sort of question when we are entering college. Better midway than at the end, though…

[div class=attrib]More from theSource here.[end-div]

Learning to learn

[div class=attrib]By George Blecher for Eurozine:[end-div]

Before I learned how to learn, I was full of bullshit. I exaggerate. But like any bright student, I spent a lot of time faking it, pretending to know things about which I had only vague generalizations and a fund of catch-words. Why do bright students need to fake it? I guess because if they’re considered “bright”, they’re caught in a tautology: bright students are supposed to know, so if they risk not knowing, they must not be bright.

In any case, I faked it. I faked it so well that even my teachers were afraid to contradict me. I faked it so well that I convinced myself that I wasn’t faking it. In the darkest corners of the bright student’s mind, the borders between real and fake knowledge are blurred, and he puts so much effort into faking it that he may not even recognize when he actually knows something.

Above all, he dreads that his bluff will be called – that an honest soul will respect him enough to pick apart his faulty reasoning and superficial grasp of a subject, and expose him for the fraud he believes himself to be. So he lives in a state of constant fear: fear of being exposed, fear of not knowing, fear of appearing afraid. No wonder that Plato in The Republic cautions against teaching the “dialectic” to future Archons before the age of 30: he knew that instead of using it to pursue “Truth”, they’d wield it like a weapon to appear cleverer than their fellows.

Sometimes the worst actually happens. The bright student gets caught with his intellectual pants down. I remember taking an exam when I was 12, speeding through it with great cockiness until I realized that I’d left out a whole section. I did what the bright student usually does: I turned it back on the teacher, insisting that the question was misleading, and that I should be granted another half hour to fill in the missing part. (Probably Mr Lipkin just gave in because he knew what a pain in the ass the bright student can be!)

So then I was somewhere in my early 30s. No more teachers or parents to impress; no more exams to ace: just the day-to-day toiling in the trenches, trying to build a life.

[div class=attrib]More from theSource here.[end-div]

The Pervasive Threat of Conformity: Peer Pressure Is Here to Stay

[div class=attrib]From BigThink:[end-div]

Today, I’d like to revisit one of the most well-known experiments in social psychology: Solomon Asch’s lines study. Let’s look once more at his striking findings on the power of group conformity and consider what they mean now, more than 50 years later, in a world that is much changed from Asch’s 1950s America.

How long are these lines? I don’t know until you tell me.

In the 1950s, Solomon Asch conducted a series of studies to examine the effects of peer pressure, in as clear-cut a setting as possible: visual perception. The idea was to see if, when presented with lines of differing lengths and asked questions about the lines (Which was the longest? Which corresponded to a reference line of a certain length?), participants would answer with the choice that was obviously correct – or would fall sway to the pressure of a group that gave an incorrect response. Here is a sample stimulus from one of the studies:

Which line matches the reference line? It seems obvious, no? Now, imagine that you were in a group with six other people – and they all said that it was, in fact, Line B. You would have no idea that you were the only actual participant and that the group was carefully arranged with confederates, who were instructed to give that answer and were seated in such a way that they would answer before you. You’d think that they, like you, were participants in the study – and that they all gave what appeared to you to be a patently wrong answer. Would you call their bluff and say, no, the answer is clearly Line A? Are you all blind? Or, would you start to question your own judgment? Maybe it really is Line B. Maybe I’m just not seeing things correctly. How could everyone else be wrong and I be the only person who is right?

We don’t like to be the lone voice of dissent

While we’d all like to imagine that we fall into the second camp, statistically speaking, we are three times more likely to be in the first: over 75% of Asch’s subjects (and far more in the actual condition given above) gave the wrong answer, going along with the group opinion.

[div class=attrib]More from theSource here.[end-div]

The Worst of States, the Best of States

Following on from our recent article showing the best of these United States, it’s time to look at the worst.

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

The United States of Shame again gets most of its data from health stats, detailing the deplorable firsts of 14 states (9). Eight states get worst marks for crime, from white-collar to violent (10), while four lead in road accidents (11). Six can be classed as economic worst cases (12), five as moral nadirs (13), two as environmental basket cases (14). In a category of one are states like Ohio (‘Nerdiest’), Maine (‘Dumbest’) and North Dakota (‘Ugliest’).

All claims are neatly backed up by references, some of them to reliable statistics, others to less scientific straw polls. In at least one case, to paraphrase Dickens, the best of stats really is the worst of stats. Ohio’s ‘shameful’ status as nerdiest state is based on its top ranking in library visits. Yet on the ‘awesome’ map, Ohio is listed as the state with… most library visits.

Juxtaposing each state’s best and worst leads to interesting statistical pairings. But with data as haphazardly corralled together as this, causal linkage should be avoided. Otherwise it could be concluded that:

A higher degree of equality leads to an increase in suicides (Alaska);
Sunny weather induces alcoholism (Arizona);
Breastfeeding raises the risk of homelessness (Oregon).

Yet in some cases, some kind of link can be inferred. New Yorkers use more public transit than other Americans, but are also stuck with the longest commutes.

[div class=attrib]More from theSource here.[end-div]

Culturally Specific Mental Disorders: A Bad Case of the Brain Fags

Is this man buff enough? Image courtesy of Slate

If you happen to have just read The Psychopath Test by Jon Ronson, this article in Slate is appropriately timely, and presents new fodder for continuing research (and a sequel). It would therefore come as no surprise to find Mr. Ronson trekking through Newfoundland in search of “Old Hag Syndrome”, a type of sleep paralysis, visiting art museums in Italy for “Stendhal Syndrome,” a delusional disorder experienced by Italians after studying artistic masterpieces, and checking on Nigerian college students afflicted by “Brain Fag Syndrome”. Then there are: “Wild Man Syndrome,” from New Guinea (a syndrome combining hyperactivity, clumsiness and forgetfulness), “Koro Syndrome” (a delusion of disappearing protruding body parts) first described in China over 2,000 years ago, “Jiko-shisen-kyofu” from Japan (a fear of offending others by glancing at them), and here in the west, “Muscle Dysmorphia Syndrome” (a delusion common in weight-lifters that one’s body is insufficiently ripped).

All of these and more can be found in the current version of the manual, the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition).

[div class=attrib]From Slate:[end-div]

In 1951, Hong Kong psychiatrist Pow-Meng Yap authored an influential paper in the Journal of Mental Sciences on the subject of “peculiar psychiatric disorders”—those that did not fit neatly into the dominant disease-model classification scheme of the time and yet appeared to be prominent, even commonplace, in certain parts of the world. Curiously these same conditions—which include “amok” in Southeast Asia and bouffée délirante in French-speaking countries—were almost unheard of outside particular cultural contexts. The American Psychiatric Association has conceded that certain mysterious mental afflictions are so common, in some places, that they do in fact warrant inclusion as “culture-bound syndromes” in the official Diagnostic and Statistical Manual of Mental Disorders.

The working version of this manual, the DSM-IV, specifies 25 such syndromes. Take “Old Hag Syndrome,” a type of sleep paralysis in Newfoundland in which one is visited by what appears to be a rather unpleasant old hag sitting on one’s chest at night. (If I were a bitter, divorced straight man, I’d probably say something diabolical about my ex-wife here.) Then there’s gururumba, or “Wild Man Syndrome,” in which New Guinean males become hyperactive, clumsy, kleptomaniacal, and conveniently amnesic, “Brain Fag Syndrome” (more on that in a moment), and “Stendhal Syndrome,” a delusional disorder experienced mostly by Italians after gazing upon artistic masterpieces. The DSM-IV defines culture-bound syndromes as “recurrent, locality-specific patterns of aberrant behavior and troubling experience that may or may not be linked to a particular diagnostic category.”

And therein lies the nosological pickle: The symptoms of culture-bound syndromes often overlap with more general, known psychiatric conditions that are universal in nature, such as schizophrenia, body dysmorphia, and social anxiety. What varies across cultures, and is presumably moulded by them, is the unique constellation of symptoms, or “idioms of distress.”

Some scholars believe that many additional distinct culture-bound syndromes exist. One that’s not in the manual but could be, argue psychiatrists Gen Kanayama and Harrison Pope in a short paper published earlier this year in the Harvard Review of Psychiatry, is “muscle dysmorphia.” The condition is limited to Western males, who suffer the delusion that they are insufficiently ripped. “As a result,” write the authors, “they may lift weights compulsively in the gym, often gain large amounts of muscle mass, yet still perceive themselves as too small.” Within body-building circles, in fact, muscle dysmorphia has long been recognized as a sort of reverse anorexia nervosa. But it’s almost entirely unheard of among Asian men. Unlike hypermasculine Western heroes such as Hercules, Thor, and the chiseled Arnold of yesteryear, the Japanese and Chinese have tended to prefer their heroes fully clothed, mentally acute, and lithe, argue Kanayama and Pope. In fact, they say anabolic steroid use is virtually nonexistent in Asian countries, even though the drugs are considerably easier to obtain, being available without a prescription at most neighborhood drugstores.

[div class=attrib]More from theSource here.[end-div]

Disconnected?

[div class=attrib]From Slate:[end-div]

Have you heard that divorce is contagious? A lot of people have. Last summer a study claiming to show that break-ups can propagate from friend to friend to friend like a marriage-eating bacillus spread across the news agar from CNN to CBS to ABC with predictable speed. “Think of this ‘idea’ of getting divorced, this ‘option’ of getting divorced like a virus, because it spreads more or less the same way,” explained University of California-San Diego professor James Fowler to the folks at Good Morning America.

It’s a surprising, quirky, and seemingly plausible finding, which explains why so many news outlets caught the bug. But one weird thing about the media outbreak was that the study on which it was based had never been published in a scientific journal. The paper had been posted to the Social Science Research Network web site, a sort of academic way station for working papers whose tagline is “Tomorrow’s Research Today.” But tomorrow had not yet come for the contagious divorce study: It had never actually passed peer review, and still hasn’t. “It is under review,” Fowler explained last week in an email. He co-authored the paper with his long-time collaborator, Harvard’s Nicholas Christakis, and lead author Rose McDermott.

A few months before the contagious divorce story broke, Slate ran an article I’d written based on a related, but also unpublished, scientific paper. The mathematician Russell Lyons had posted a dense treatise on his website suggesting that the methods employed by Christakis and Fowler in their social network studies were riddled with statistical errors at many levels. The authors were claiming—in the New England Journal of Medicine, in a popular book, in TED talks, in snappy PR videos—that everything from obesity to loneliness to poor sleep could spread from person to person to person like a case of the galloping crud. But according to Lyons and several other experts, their arguments were shaky at best. “It’s not clear that the social contagionists have enough evidence to be telling people that they owe it to their social network to lose weight,” I wrote last April. As for the theory that obesity and divorce and happiness contagions radiate from human beings through three degrees of friendship, I concluded “perhaps it’s best to flock away for now.”

The case against Christakis and Fowler has grown since then. The Lyons paper passed peer review and was published in the May issue of the journal Statistics, Politics, and Policy. Two other recent papers raise serious doubts about their conclusions. And now something of a consensus is forming within the statistics and social-networking communities that Christakis and Fowler’s headline-grabbing contagion papers are fatally flawed. Andrew Gelman, a professor of statistics at Columbia, wrote a delicately worded blog post in June noting that he’d “have to go with Lyons” and say that the claims of contagious obesity, divorce and the like “have not been convincingly demonstrated.” Another highly respected social-networking expert, Tom Snijders of Oxford, called the mathematical model used by Christakis and Fowler “not coherent.” And just a few days ago, Cosma Shalizi, a statistician at Carnegie Mellon, declared, “I agree with pretty much everything Snijders says.”

[div class=attrib]More from theSource here.[end-div]

The Best of States, the Worst of States

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

Are these maps cartograms or mere infographics?

An ‘information graphic’ is defined as any graphic representation of data. It follows from that definition that infographics are less determined by type than by purpose, which is to represent complex information in a readily graspable graphic format. Those formats often include, but are not limited to: diagrams, flow charts, and maps.

Although one definition of maps – the graphic representation of spatial data – is very similar to that of infographics, the two are easily distinguished by, among other things, the context of the latter, which are usually confined to and embedded in technical and journalistic writing.

Cartograms are a subset of infographics, limited to one type of graphic representation: maps. On these maps, one set of quantitative information (usually surface or distance) is replaced by another (often demographic data or electoral results). The result is an informative distortion of the map (1).

The distortion on these maps is not of the distance-bending or surface-stretching kind. It merely substitutes the names of US states with statistical information relevant to each of them (2). This substitution is non-quantitative, affecting the toponymy rather than the topography of the map. So is this a mere infographic? As the information presented is statistical (each label describes each state as first or last in a Top 50), I’d say this is – if you’ll excuse the pun – a borderline case.

What’s more relevant, from this blog’s perspective, is that it is an atypical, curious and entertaining use of cartography.

The first set of maps labels each and every one of the states as best and worst at something. All of those distinctions, both the favourable and the unfavourable kind, are backed up by some sort of evidence.

The first map, the United States of Awesome, charts fifty things that each state of the Union is best at. Most of those indicators, 12 in all, are related to health and well-being (3). Ten are economic (4), six environmental (5), five educational (6). Three can be classified as ‘moral’, even if these particular distinctions make for strange bedfellows (7).

The best thing that can be said about Missouri and Illinois, apparently, is that they’re extremely average (8). While that may excite few people, it will greatly interest political pollsters and anyone in need of a focus group. Virginia and Indiana are the states with the most birthplaces of presidents and vice-presidents, respectively. South Carolinians prefer to spend their time golfing, Pennsylvanians hunting. Violent crime is lowest in Maine, public corruption in Nebraska. The most bizarre distinctions, finally, are reserved for New Mexico (Spaceport Home), Oklahoma (Best Licence Plate) and Missouri (Bromine Production). If that’s the best thing about those states, what might be the worst?

[div class=attrib]More from theSource here.[end-div]

Scientific Evidence for Indeterminism

[div class=attrib]From Evolutionary Philosophy:[end-div]

The advantage of being a materialist is that so much of our experience seems to point to a material basis for reality. Idealists usually have to appeal to some inner knowing as the justification of their faith that mind, not matter, is the foundation of reality. Unfortunately the appeal to inner knowing is exactly what a materialist has trouble with in the first place.

Charles Sanders Peirce was a logician and a scientist first and a philosopher second. He thought like a scientist, and as he developed his evolutionary philosophy his reasons for believing in it were very logical and scientific. One of the early insights that led him to his understanding of an evolving universe was his realization that the state of our world or its future was not necessarily predetermined.

One conclusion that materialism tends to lead to is a belief that ‘nothing comes from nothing.’ Everything comes from some form of matter or interaction between material things. Nothing just emerges spontaneously. Everything is part of an ongoing chain of cause and effect. The question of how the chain of cause and effect started is one that is generally felt to be best left to the realm of metaphysics, unsuitable for scientific investigation.

And so the image of a materially based universe tends to lead to a deterministic account of reality. You start with something and then that something unravels according to immutable laws. As an image, picture this: a large bucket filled with pink and green tennis balls. Then imagine that there are two smaller buckets that are empty. This arrangement represents the starting point of the universe. The natural laws of this universe dictate that individual tennis balls will be removed from the large bucket and placed in one of the two smaller ones. If the ball that is removed is pink, it goes in the left-hand bucket; if it is green, it goes in the right-hand bucket. In this simple model, the end state of the universe is that the large bucket will be empty, the left-hand bucket will be filled with pink tennis balls and the right-hand bucket with green tennis balls. The outcome of the process is predetermined by the initial conditions and the laws governing the subsequent activity.

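For the technically inclined, here is a minimal sketch of this toy universe in Python (the bucket names, ball colours and counts are illustrative assumptions, not anything from the original post). It makes the deterministic point concrete: whatever order the balls start in, the fixed sorting law drives the system to the same end state.

```python
import random

def run_universe(initial_balls):
    """Apply the fixed sorting law until the large bucket is empty."""
    large = list(initial_balls)   # the large starting bucket
    left, right = [], []          # left collects pink, right collects green
    while large:
        ball = large.pop()        # the law acts on one ball at a time
        if ball == "pink":
            left.append(ball)
        else:
            right.append(ball)
    return left, right

# Arbitrary initial conditions: five pink and five green balls, shuffled.
start = ["pink"] * 5 + ["green"] * 5
random.shuffle(start)

left, right = run_universe(start)
print(len(left), "pink balls on the left,", len(right), "green balls on the right")
# However the balls were ordered to begin with, the end state is identical:
# initial conditions plus immutable laws fix the outcome in advance.
```
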
A belief in this kind of determinism seems to be constantly reinforced for us through our ongoing experience with the material universe. Go ahead: pick up a rock, hold it up and then let it go. It will fall. Every single time, it will fall. It is predetermined that a rock that is held up in the air and then dropped will fall. Punch a wall. It will hurt – every single time. Over and over again, our experience of everyday reality seems to reinforce the fact that we live in a universe which is governed exactly by immutable laws.

[div class=attrib]More from theSource here.[end-div]

The Arrow of Time

No, not a cosmologist’s convoluted hypothesis as to why time moves in only one direction (so far as we have discovered). The arrow of time here is a thoroughly personal look at the linearity of the 4th dimension and an homage to the family portrait in the process.

The family takes a “snapshot” of each member at the same time each year; we’ve just glimpsed the latest for 2011. And, in so doing, they give us much to ponder on the nature of change and the nature of stasis.

[div class=attrib]From Diego Goldberg and family:[end-div]

Catch all the intervening years between 1976 and 2011 at theSource here.

How your dad’s music influences your taste

[div class=attrib]From Sonos:[end-div]

There’s no end to the reasons why you listen to the music you do today, but we’re willing to bet that more than a few of you were subjected to your father’s music at some point in the past (or present). So that leads to the question: what do dear old dad’s listening habits say about the artists in your repertoire? In honor of Father’s Day, we tried our hand at finding out.

[div class=attrib]More from theSource here.[end-div]

Prophecy Fail

[div class=attrib]From Slate:[end-div]

What happens to a doomsday cult when the world doesn’t end?

Preacher and evangelical broadcaster Harold Camping has announced that Jesus Christ will return to Earth this Saturday, May 21, and many of his followers are traveling the country in preparation for the weekend Rapture. They’re undeterred, it seems, by Mr. Camping’s dodgy track record with end-of-the-world predictions. (Years ago, he argued at length that the reckoning would come in 1994.) We’ve yet to learn what motivates people like him to predict (and predict again) the end of the world, but there’s a long and unexpected psychological literature on how the faithful make sense of missed appointments with the apocalypse.

The most famous study into doomsday mix-ups was published in a 1956 book by renowned psychologist Leon Festinger and his colleagues called When Prophecy Fails. A fringe religious group called the Seekers had made the papers by predicting that a flood was coming to destroy the West Coast. The group was led by an eccentric but earnest lady called Dorothy Martin, given the pseudonym Marian Keech in the book, who believed that superior beings from the planet Clarion were communicating to her through automatic writing. They told her they had been monitoring Earth and would arrive to rescue the Seekers in a flying saucer before the cataclysm struck.

Festinger was fascinated by how we deal with information that fails to match up to our beliefs, and suspected that we are strongly motivated to resolve the conflict—a state of mind he called “cognitive dissonance.” He wanted a clear-cut case with which to test his fledgling ideas, so he decided to follow Martin’s group as the much-vaunted date came and went. Would they give up their closely held beliefs, or would they work to justify them even in the face of the most brutal contradiction?

The Seekers abandoned their jobs, possessions, and spouses to wait for the flying saucer, but neither the aliens nor the apocalypse arrived. After several uncomfortable hours on the appointed day, Martin received a “message” saying that the group “had spread so much light that God had saved the world from destruction.” The group responded by proselytizing with a renewed vigour. According to Festinger, they resolved the intense conflict between reality and prophecy by seeking safety in numbers. “If more people can be persuaded that the system of belief is correct, then clearly, it must, after all, be correct.”

[div class=attrib]More from theSource here.[end-div]

Are We Intelligent Matter or Incarnate Spirit?

[div class=attrib]From Evolutionary Philosophy:[end-div]

One of the most confounding philosophical questions involves our understanding of who we really are. Are we intelligent matter – stuff that got smart – or are we incarnate spirit – smarts that grew stuff around it? This question is inherent in the very nature of our experience of being human. We have bodies and we have the experience of consciousness – mind and matter, body and soul. Which one is more us, which came first, and which is really running the show?

The great religious traditions of the west have tended towards the outlook that we are spiritual beings who became flesh. First there was God, pure spirit, and from God came us. Our more recent scientific understanding of reality has led many to believe that we are matter that evolved into life and intelligence. Now, of course, there are always those who land somewhere in between these extremes – probably most people reading this blog, for instance – but still, this is the divide that has generally separated science from religion and idealists from materialists.

If, in fact, we are essentially spirit that has taken form it would mean that in some significant way human beings are separate from the universe. We have some source of intelligence and will that is free from the rest of nature, that acts in nature while maintaining a foothold in some transcendent outside reference point. In this view, the core of our being stands apart from and above the laws of nature and we are therefore uniquely autonomous and responsible as the source of our own action in the universe.

[div class=attrib]More from theSource here.[end-div]

Test-tube truths

[div class=attrib]From Eurozine:[end-div]

In his new book, American atheist Sam Harris argues that science can replace theology as the ultimate moral authority. Kenan Malik is sceptical of any such yearning for moral certainty, be it scientific or divine.

“If God does not exist, everything is permitted.” Dostoevsky never actually wrote that line, though so often is it attributed to him that he may as well have. It has become the almost reflexive response of believers when faced with an argument for a godless world. Without religious faith, runs the argument, we cannot anchor our moral truths or truly know right from wrong. Without belief in God we will be lost in a miasma of moral nihilism. In recent years, the riposte of many to this challenge has been to argue that moral codes are not revealed by God but instantiated in nature, and in particular in the brain. Ethics is not a theological matter but a scientific one. Science is not simply a means of making sense of facts about the world, but also about values, because values are in essence facts in another form.

Few people have expressed this argument more forcefully than the neuroscientist Sam Harris. Over the past few years, through books such as The End of Faith and Letter to a Christian Nation, Harris has gained a considerable reputation as a no-holds-barred critic of religion, in particular of Islam, and as an acerbic champion of science. In his new book, The Moral Landscape: How Science Can Determine Human Values, he sets out to demolish the traditional philosophical distinction between is and ought, between the way the world is and the way that it should be, a distinction we most associate with David Hume.

What Hume failed to understand, Harris argues, is that science can bridge the gap between ought and is, by turning moral claims into empirical facts. Values, he argues, are facts about the “states of the world” and “states of the human brain”. We need to think of morality, therefore, as “an undeveloped branch of science”: “Questions about values are really questions about the wellbeing of conscious creatures. Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, the effects of specific laws on human relationships, the neurophysiology of happiness and suffering, etc.” Science, and neuroscience in particular, does not simply explain why we might respond in particular ways to equality or to torture but also whether equality is a good, and torture morally acceptable. Where there are disagreements over moral questions, Harris believes, science will decide which view is right “because the discrepant answers people give to them translate into differences in our brains, in the brains of others and in the world at large.”

Harris is nothing if not self-confident. There is a voluminous philosophical literature that stretches back almost to the origins of the discipline on the relationship between facts and values. Harris chooses to ignore most of it. He does not wish to engage “more directly with the academic literature on moral philosophy”, he explains in a footnote, because he did not develop his arguments “by reading the work of moral philosophers” and because he is “convinced that every appearance of terms like ‘metaethics’, ‘deontology’, ‘noncognitivism’, ‘antirealism’, ’emotivism’, etc directly increases the amount of boredom in the universe.”

[div class=attrib]More from theSource here.[end-div]

America: Paradoxical icon of the new

[div class=attrib]From Eurozine:[end-div]

Blaming the American Way of Life for the ills of post-industrial European society is a poor excuse for Europeans’ own partiality to consumer pleasures, writes Petr Fischer. On a positive note, American individualism could teach Europe a thing or two about social solidarity.

“Business–Answer–Solution” reads the advertising banner of the subsidiary of a foreign company in the centre of Prague. At first sight, the banner is not particularly interesting, in this case meaning that it is not particularly surprising. Surprising things are those that capture our attention, that shock us in their particular way. This corporate motto repeats the famous, infinitely repeated mantra of aggressive global capitalism, its focus purely pragmatic: give us a problem and we will come up with a solution that profits both you and us. “Win-win capitalism”, one could say in today’s international newspeak.

What is interesting – in other words disconcerting – is the fact that the banner covers the window of a small shop situated directly behind the National Museum, a building that – as in every other European city – symbolizes a certain perception of historicity cultivated on the old continent at least since the nineteenth century. The National Museum preserves the history of the Czech nation, and the people who work in it analyse and reflect on Czech national existence, its peculiarity, uniqueness, difference or connectedness. This activity is not governed by the pragmatic slogan of performance, of completed things, of faits accomplis; rather, it is ruled by a different three words, directed at thinking and its incessant, uncertain movement: Discussion–Question–Searching.

Both slogans represent two sides of the same coin of western civilization, two sides that, so far, have been more or less separate. The first represents the straightforward American way, leveraging everything along the way, everything at hand that can help business; the latter represents the difficult, reflective way of the old continent, left by its American child so that it could later be changed according to America’s picture. The fact that the multinational company’s motto is located just “behind” the building that, synecdochically, expresses the basic historic orientation of all European nations, is symbolic. “Behind”, meta in Greek, describes, in the European tradition, something that transcends everything we can arrive at through normal reasoning. In Aristotle’s canon, so the philosophical legend has it, such was the name of the texts found in the library behind the thinker’s treatise on physics. However, metaphysics has since come to signify a system of thought that transcends the world of tangible facts and things, that represents some invisible internal order of the world. Business–Answer–Solution, the catchword of American pragmatism, is, as its location behind the National Museum suggests, perhaps the only really functioning metaphysics of today’s world.

New is always better

Since its discovery, America has been referred to as the New World. But what exactly is new about it for the Europeans? In De la démocratie en Amérique, Alexis de Tocqueville – one of the first to systematically analyse American institutions, republican political systems, and above all what today is called the “American way of life” – concluded that the newness of America consists mostly of a kind of neophilia, a love of all that is new.

“The Americans live in a country of wonders, everything around them is in incessant motion, and every motion seems to be progress,” says de Tocqueville. “The image of the new is closely connected with the image of the better. They see no limits set by Nature on man’s efforts; in American eyes, that which does not exist is what no one has yet tried.” In this extension of the purest Enlightenment optimism, the new is associated with a higher, moral quality. Man’s gaze turns toward the future; the past ceases to be important because, in the rush towards the new, the better, it loses its value and becomes inferior. The essential is what will be, or rather, what part of the future can be realized “now”.

[div class=attrib]More from theSource here.[end-div]

A radical pessimist’s guide to the next 10 years

[div class=attrib]From The Globe and Mail:[end-div]

The iconic writer reveals the shape of things to come, with 45 tips for survival and a matching glossary of the new words you’ll need to talk about your messed-up future.

1) It’s going to get worse

No silver linings and no lemonade. The elevator only goes down. The bright note is that the elevator will, at some point, stop.

2) The future isn’t going to feel futuristic

It’s simply going to feel weird and out-of-control-ish, the way it does now, because too many things are changing too quickly. The reason the future feels odd is because of its unpredictability. If the future didn’t feel weirdly unexpected, then something would be wrong.

3) The future is going to happen no matter what we do. The future will feel even faster than it does now

The next sets of triumphing technologies are going to happen, no matter who invents them or where or how. Not that technology alone dictates the future, but in the end it always leaves its mark. The only unknown factor is the pace at which new technologies will appear. This technological determinism, with its sense of constantly awaiting a new era-changing technology every day, is one of the hallmarks of the next decade.

4) Move to Vancouver, San Diego, Shannon or Liverpool

There’ll be just as much freaky extreme weather in these west-coast cities, but at least the west coasts won’t be broiling hot and cryogenically cold.

5) You’ll spend a lot of your time feeling like a dog leashed to a pole outside the grocery store – separation anxiety will become your permanent state

6) The middle class is over. It’s not coming back

Remember travel agents? Remember how they just kind of vanished one day?

That’s where all the other jobs that once made us middle-class are going – to that same, magical, class-killing, job-sucking wormhole into which travel-agency jobs vanished, never to return. However, this won’t stop people from self-identifying as middle-class, and as the years pass we’ll be entering a replay of the antebellum South, when people defined themselves by the social status of their ancestors three generations back. Enjoy the new monoclass!

7) Retail will start to resemble Mexican drugstores

In Mexico, if one wishes to buy a toothbrush, one goes to a drugstore where one of every item for sale is on display inside a glass display case that circles the store. One selects the toothbrush and one of an obvious surplus of staff runs to the back to fetch the toothbrush. It’s not very efficient, but it does offer otherwise unemployed people something to do during the day.

8) Try to live near a subway entrance

In a world of crazy-expensive oil, it’s the only real estate that will hold its value, if not increase.

9) The suburbs are doomed, especially those E.T., California-style suburbs

This is a no-brainer, but the former homes will make amazing hangouts for gangs, weirdoes and people performing illegal activities. The pretend gates at the entranceways to gated communities will become real, and the charred stubs of previous white-collar homes will serve only to make the still-standing structures creepier and more exotic.

10) In the same way you can never go backward to a slower computer, you can never go backward to a lessened state of connectedness

11) Old people won’t be quite so clueless

No more “the Google,” because they’ll be just that little bit younger.

12) Expect less

Not zero, just less.

13) Enjoy lettuce while you still can

And anything else that arrives in your life from a truck, for that matter. For vegetables, get used to whatever it is they served in railway hotels in the 1890s. Jams. Preserves. Pickled everything.

14) Something smarter than us is going to emerge

Thank you, algorithms and cloud computing.

15) Make sure you’ve got someone to change your diaper

Sponsor a Class of 2112 med student. Adopt up a storm around the age of 50.

16) “You” will be turning into a cloud of data that circles the planet like a thin gauze

While it’s already hard enough to tell how others perceive us physically, your global, phantom, information-self will prove equally vexing to you: your shopping trends, blog residues, CCTV appearances – it all works in tandem to create a virtual being that you may neither like nor recognize.

17) You may well burn out on the effort of being an individual

You’ve become a notch in the Internet’s belt. Don’t try to delude yourself that you’re a romantic lone individual. To the new order, you’re just a node. There is no escape.

18) Untombed landfills will glut the market with 20th-century artifacts

19) The Arctic will become like Antarctica – an everyone/no one space

Who owns Antarctica? Everyone and no one. It’s pie-sliced into unenforceable wedges. And before getting huffy, ask yourself, if you’re a Canadian: Could you draw an even remotely convincing map of all those islands in Nunavut and the Northwest Territories? Quick, draw Ellesmere Island.

20) North America can easily fragment quickly, as did the Eastern Bloc in 1989

Quebec will decide to quietly and quite pleasantly leave Canada. California contemplates splitting into two states, fiscal and non-fiscal. Cuba becomes a Club Med with weapons. The Hate States will form a coalition.

21) We will still be annoyed by people who pun, but we will be able to show them mercy because punning will be revealed to be some sort of connectopathic glitch: The punner, like someone with Tourette’s, has no medical ability not to pun

22) Your sense of time will continue to shred. Years will feel like hours

23) Everyone will be feeling the same way as you

There’s some comfort to be found there.

24) It is going to become much easier to explain why you are the way you are

Much of what we now consider “personality” will be explained away as structural and chemical functions of the brain.

25) Dreams will get better

26) Being alone will become easier

27) Hooking up will become ever more mechanical and binary

28) It will become harder to view your life as “a story”

The way we define our sense of self will continue to morph via new ways of socializing. The notion of your life needing to be a story will seem slightly corny and dated. Your life becomes however many friends you have online.

29) You will have more say in how long or short you wish your life to feel

Time perception is very much about how you sequence your activities, how many activities you layer overtop of others, and the types of gaps, if any, you leave in between activities.

30) Some existing medical conditions will be seen as sequencing malfunctions

The ability to create and remember sequences is an almost entirely human ability (some crows have been shown to sequence). Dogs, while highly intelligent, still cannot form sequences; it’s the reason why well-trained dogs at shows are still led from station to station by handlers instead of completing the course themselves.

Dysfunctional mental states stem from malfunctions in the brain’s sequencing capacity. One commonly known short-term sequencing dysfunction is dyslexia. People unable to sequence over a slightly longer term might be “not good with directions.” The ultimate sequencing dysfunction is the inability to look at one’s life as a meaningful sequence or story.

31) The built world will continue looking more and more like Microsoft packaging

“We were flying over Phoenix, and it looked like the crumpled-up packaging from a 2006 MS Digital Image Suite.”

32) Musical appreciation will shed all age barriers

33) People who shun new technologies will be viewed as passive-aggressive control freaks trying to rope people into their world, much like vegetarian teenage girls in the early 1980s

1980: “We can’t go to that restaurant. Karen’s vegetarian and it doesn’t have anything for her.”

2010: “What restaurant are we going to? I don’t know. Karen was supposed to tell me, but she doesn’t have a cell, so I can’t ask her. I’m sick of her crazy control-freak behaviour. Let’s go someplace else and not tell her where.”

34) You’re going to miss the 1990s more than you ever thought

35) Stupid people will be in charge, only to be replaced by ever-stupider people. You will live in a world without kings, only princes in whom our faith is shattered

36) Metaphor drift will become pandemic

Words adopted by technology will increasingly drift into new realms to the point where they observe different grammatical laws, e.g., “one mouse”/“three mouses;” “memory hog”/“delete the spam.”

37) People will stop caring how they appear to others

The number of tribal categories one can belong to will become infinite. To use a high-school analogy, 40 years ago you had jocks and nerds. Nowadays, there are Goths, emos, punks, metal-heads, geeks and so forth.

38) Knowing everything will become dull

It all started out so graciously: At a dinner for six, a question arises about, say, that Japanese movie you saw in 1997 (Tampopo), or whether or not Joey Bishop is still alive (no). And before long, you know the answer to everything.

39) IKEA will become an ever-more-spiritual sanctuary

40) We will become more matter-of-fact, in general, about our bodies

41) The future of politics is the careful and effective implanting into the minds of voters images that can never be removed

42) You’ll spend a lot of time shopping online from your jail cell

Over-criminalization of the populace, paired with the triumph of shopping as a dominant cultural activity, will create a world where the two poles of society are shopping and jail.

43) Getting to work will provide vibrant and fun new challenges

Gravel roads, potholes, outhouses, overcrowded buses, short-term hired bodyguards, highwaymen, kidnapping, overnight camping in fields, snaggle-toothed crazy ladies casting spells on you, frightened villagers, organ thieves, exhibitionists and lots of healthy fresh air.

44) Your dream life will increasingly look like Google Street View

45) We will accept the obvious truth that we brought this upon ourselves

Douglas Coupland is a writer and artist based in Vancouver, where he will deliver the first of five CBC Massey Lectures – a ‘novel in five hours’ about the future – on Tuesday.

[div class=attrib]More from theSource here.[end-div]

Contain this!

[div class=attrib]From Eurozine:[end-div]

WikiLeaks’ series of exposés is causing a very different news and informational landscape to emerge. Whilst acknowledging the structural leakiness of networked organisations, Felix Stalder finds deeper reasons for the crisis of information security and the new distribution of investigative journalism.

WikiLeaks is one of the defining stories of the Internet, which means by now, one of the defining stories of the present, period. At least four large-scale trends which permeate our societies as a whole are fused here into an explosive mixture whose fall-out is far from clear. First is a change in the materiality of communication. Communication becomes more extensive, more recorded, and the records become more mobile. Second is a crisis of institutions, particularly in western democracies, where moralistic rhetoric and the ugliness of daily practice are diverging ever more at the very moment when institutional personnel are being encouraged to think more for themselves. Third is the rise of new actors, “super-empowered” individuals, capable of intervening into historical developments at a systemic level. Finally, fourth is a structural transformation of the public sphere (through media consolidation at one pole, and the explosion of non-institutional publishers at the other), to an extent that rivals the one described by Habermas with the rise of mass media at the turn of the twentieth century.

Leaky containers

Imagine dumping nearly 400 000 paper documents into a dead drop located discreetly on the hard shoulder of a road. Impossible. Now imagine the same thing with digital records on a USB stick, or as an upload from any networked computer. No problem at all. Yet, the material differences between paper and digital records go much further than mere bulk. Digital records are the impulses travelling through the nervous systems of dynamic, distributed organisations of all sizes. They are intended, from the beginning, to circulate with ease. Otherwise such organisations would fall apart and dynamism would grind to a halt. The more flexible and distributed organisations become, the more records they need to produce and the faster these need to circulate. Due to their distributed aspect and the pressure for cross-organisational cooperation, it is increasingly difficult to keep records within particular organisations whose boundaries are blurring anyway. Surveillance researchers such as David Lyon have long been writing about the leakiness of “containers”, meaning the tendency for sensitive digital records to cross the boundaries of the institutions which produce them. This leakiness is often driven by commercial considerations (private data being sold), but it happens also out of incompetence (systems being secured insufficiently), or because insiders deliberately violate organisational policies for their own purposes. Either they are whistle-blowers motivated by conscience, as in the case of WikiLeaks, or individuals selling information for private gain, as in the case of the numerous employees of Swiss banks who recently copied the details of private accounts and sold them to tax authorities across Europe. Within certain organisations, such as banks and the military, virtually everything is classified and large numbers of people have access to this data, not least mid-level staff who handle the streams of raw data such as individuals’ records produced as part of daily procedure.

[div class=attrib]More from theSource here.[end-div]

Map of the World’s Countries Rearranged by Population

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

What if the world were rearranged so that the inhabitants of the country with the largest population would move to the country with the largest area? And the second-largest population would migrate to the second-largest country, and so on?

The result would be this disconcerting, disorienting map. In the world described by it, the differences in population density between countries would be less extreme than they are today. The world’s most densely populated country currently is Monaco, with 43,830 inhabitants/mi² (16,923 per km²) (1). On the other end of the scale is Mongolia, which is less densely populated by a factor of almost exactly 10,000, with a mere 4.4 inhabitants/mi² (1.7 per km²).

The averages per country would more closely resemble the global average of 34 per mi² (13 per km²). But those evened-out statistics would describe a very strange world indeed. The global population realignment would involve massive migrations, lead to a heap of painful demotions and triumphant promotions, and produce a few very weird new neighbourhoods.
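
As a rough illustration of the rank-matching rule behind the map, here is a minimal Python sketch: sort countries by area, sort them again by population, and pair the two lists rank for rank. The handful of area and population figures below are approximate 2010-era values included only to show the mechanics, not data from the map itself.

    # Minimal sketch of the map's rank-matching rule: the k-th most populous
    # country moves into the territory of the k-th largest country by area.
    # Figures are approximate (area in km^2, population), for illustration only.
    countries = {
        "Russia":    (17_098_000,   142_000_000),
        "Canada":    ( 9_985_000,    34_000_000),
        "USA":       ( 9_630_000,   310_000_000),
        "China":     ( 9_600_000, 1_340_000_000),
        "Brazil":    ( 8_515_000,   195_000_000),
        "India":     ( 3_287_000, 1_210_000_000),
        "Indonesia": ( 1_905_000,   240_000_000),
    }

    by_area = sorted(countries, key=lambda c: countries[c][0], reverse=True)
    by_pop  = sorted(countries, key=lambda c: countries[c][1], reverse=True)

    for territory, occupant in zip(by_area, by_pop):
        note = " (stays put)" if territory == occupant else ""
        print(f"{occupant} moves into {territory}{note}")
    # With just these seven countries: China moves into Russia, India into Canada,
    # and the USA and Brazil keep their own territory, as described below. The
    # remaining pairings only line up with the map once the full worldwide lists
    # are used (which is why, there, Russia ends up in Kazakhstan).
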

Take the world’s largest country: Russia. It would be taken over by its Asian neighbour and rival China, the country with the world’s largest population. Overcrowded China would not just occupy underpopulated Siberia – a long-time Russian fear – but also fan out all the way across the Urals to Russia’s westernmost borders. China would thus become a major European power. Russia itself would be relegated to Kazakhstan, which still is the largest landlocked country in the world, but with few hopes of a role on the world stage commensurate with Russia’s clout, which in no small part derives from its sheer size.

Canada, the world’s second-largest country, would be transformed into an Arctic, or at least quite chilly version of India, the country with the world’s second-largest population. The country would no longer be a thinly populated northern afterthought of the US. The billion Indians north of the Great Lakes would make Canada a very distinct, very powerful global player.

Strangely enough, the US itself would not have to swap its population with another country. With 310 million inhabitants, it is the third most populous nation in the world. And with an area of just over 3.7 million mi² (slightly more than 9.6 million km²), it is also the world’s third largest country (2). Brazil, at number five in both lists, is in the same situation. Other non-movers are Yemen and Ireland. Every other country moves house. A few interesting swaps:

  • Countries with relatively high population densities move to more spacious environments. This increases their visibility. Look at those 94 million Filipinos, for example, no longer confined to that small archipelago just south of China. They now occupy the sprawling Democratic Republic of the Congo, the 12th largest country in the world, and slap bang in the middle of Africa too.
  • The reverse is also true. Mongolia, that large, sparsely populated chunk of a country between Russia and China, is relegated to tiny Belgium, whose even tinier neighbour Luxembourg is populated by 320,000 Icelanders, no longer enjoying the instant recognition provided by their distinctly shaped North Atlantic island home.
  • Australia’s 22.5 million inhabitants would move to Spain, the world’s 51st largest country. This would probably be the furthest migration, as both countries are almost exactly antipodean to each other. But Australians would not have to adapt too much to the mainly hot and dry Spanish climate.
  • But spare a thought for those unfortunate Vietnamese. Used to a lush, tropical climate, the 85 million inhabitants of Vietnam would be shipped off to icy Greenland. Even though that Arctic dependency of Denmark has warmed up a bit due to recent climate changes, it would still be mainly snowy, empty and freezing. One imagines a giant group huddle, just to keep warm.
  • Jamaica would still be island-shaped – but landlocked, as the Jamaicans would move to Lesotho, an independent enclave completely surrounded by South Africa – or rather, in this strange new world, South Korea. Those South Koreans probably couldn’t believe their bad luck. Of all the potential new friends in the world, who gets to be their northern neighbour but their wacky cousin, North Korea? It seems the heavily militarised DMZ will move from the Korean peninsula to the South African-Botswanan border.
  • The UK migrates from its strategically advantageous island position off Europe’s western edge to a place smack in the middle of the Sahara desert, to one of those countries the name of which one always has to look up (3). No longer splendidly isolated, it will have to share the neighbourhood with such upstarts as Mexico, Myanmar, Thailand and – good heavens – Iran. Back home, its sceptered isles are taken over by the Tunisians. Even Enoch Powell didn’t see that one coming.
  • Some countries only move a few doors down, so to speak. El Salvador gets Guatemala, Honduras takes over Nicaragua, Nepal occupies Burma/Myanmar and Turkey sets up house in Iran. Others wake up in a whole new environment. Dusty, landlocked Central African Republic is moving to the luscious island of Sri Lanka, with its pristine, ocean-lapped shores. The mountain-dwelling Swiss will have to adapt to life in the flood-infested river delta of Bangladesh.
  • Geography, they say, is destiny (4). Some countries are plagued or blessed by their present location. How would they fare elsewhere? Take Iraq, brought down by wars both of the civil and the other kind, and burdened with enough oil to finance lavish dictatorships and arouse the avidity of superpowers. What if the 31.5 million Iraqis moved to the somewhat larger, equally sunny country of Zambia – getting a lot of nice, non-threatening neighbours in the process?

Rearranged maps that switch the labels of the countries depicted, as if in some parlour game, to represent some type of statistical data, are an interesting subcategory of curious cartography. The most popular example discussed on this blog is the map of the US, with the states’ names replaced by that of countries with an equivalent GDP (see #131). Somewhat related, if by topic rather than technique, is the cartogram discussed in blog post #96, showing the world’s countries shrunk or inflated to reflect the size of their population.

Many thanks to all who sent in this map: Matt Chisholm, Criggie, Roel Damiaans, Sebastian Dinjens, Irwin Hébert, Allard H., Olivier Muzerelle, Rodrigo Oliva, Rich Sturges, and John Thorne. The map is referenced on half a dozen websites where it can be seen in full resolution (this one among them), but it is unclear where it first originated, and who produced it (the map is signed, in the bottom right hand corner, by JPALMZ).

—–

(1) Most (dependent) territories and countries in the top 20 of Wikipedia’s population density ranking have tiny areas, with populations that are, in relation to those of other countries, quite negligible. The first country on the list with both a substantial surface and population is Bangladesh, in 9th place with a total population of over 162 million and a density of 2,916 inhabitants/mi² (1,126 per km²).

(2) Actually, the US contends with China for third place. Both countries are almost the same size, and there are varying definitions of how large each one is. Depending on whether or not you include Taiwan and (other) disputed areas in China, and overseas territories in the US, either country can be third or fourth on the list.

(3) Niger, not to be confused with nearby Nigeria. Nor with neighbouring Burkina Faso, which used to be Upper Volta (even though there never was a Lower Volta except, perhaps, Niger. Or Nigeria).

(4) The same is said of demography. And of a bunch of other stuff.

[div class=attrib]More from theSource here.[end-div]

Small Change. Why the Revolution will Not be Tweeted

[div class=attrib]From The New Yorker:[end-div]

At four-thirty in the afternoon on Monday, February 1, 1960, four college students sat down at the lunch counter at the Woolworth’s in downtown Greensboro, North Carolina. They were freshmen at North Carolina A. & T., a black college a mile or so away.

“I’d like a cup of coffee, please,” one of the four, Ezell Blair, said to the waitress.

“We don’t serve Negroes here,” she replied.

The Woolworth’s lunch counter was a long L-shaped bar that could seat sixty-six people, with a standup snack bar at one end. The seats were for whites. The snack bar was for blacks. Another employee, a black woman who worked at the steam table, approached the students and tried to warn them away. “You’re acting stupid, ignorant!” she said. They didn’t move. Around five-thirty, the front doors to the store were locked. The four still didn’t move. Finally, they left by a side door. Outside, a small crowd had gathered, including a photographer from the Greensboro Record. “I’ll be back tomorrow with A. & T. College,” one of the students said.

By next morning, the protest had grown to twenty-seven men and four women, most from the same dormitory as the original four. The men were dressed in suits and ties. The students had brought their schoolwork, and studied as they sat at the counter. On Wednesday, students from Greensboro’s “Negro” secondary school, Dudley High, joined in, and the number of protesters swelled to eighty. By Thursday, the protesters numbered three hundred, including three white women, from the Greensboro campus of the University of North Carolina. By Saturday, the sit-in had reached six hundred. People spilled out onto the street. White teen-agers waved Confederate flags. Someone threw a firecracker. At noon, the A. & T. football team arrived. “Here comes the wrecking crew,” one of the white students shouted.

By the following Monday, sit-ins had spread to Winston-Salem, twenty-five miles away, and Durham, fifty miles away. The day after that, students at Fayetteville State Teachers College and at Johnson C. Smith College, in Charlotte, joined in, followed on Wednesday by students at St. Augustine’s College and Shaw University, in Raleigh. On Thursday and Friday, the protest crossed state lines, surfacing in Hampton and Portsmouth, Virginia, in Rock Hill, South Carolina, and in Chattanooga, Tennessee. By the end of the month, there were sit-ins throughout the South, as far west as Texas. “I asked every student I met what the first day of the sitdowns had been like on his campus,” the political theorist Michael Walzer wrote in Dissent. “The answer was always the same: ‘It was like a fever. Everyone wanted to go.’ ” Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

The world, we are told, is in the midst of a revolution. The new tools of social media have reinvented social activism. With Facebook and Twitter and the like, the traditional relationship between political authority and popular will has been upended, making it easier for the powerless to collaborate, coördinate, and give voice to their concerns. When ten thousand protesters took to the streets in Moldova in the spring of 2009 to protest against their country’s Communist government, the action was dubbed the Twitter Revolution, because of the means by which the demonstrators had been brought together. A few months after that, when student protests rocked Tehran, the State Department took the unusual step of asking Twitter to suspend scheduled maintenance of its Web site, because the Administration didn’t want such a critical organizing tool out of service at the height of the demonstrations. “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy,” Mark Pfeifle, a former national-security adviser, later wrote, calling for Twitter to be nominated for the Nobel Peace Prize. Where activists were once defined by their causes, they are now defined by their tools. Facebook warriors go online to push for change. “You are the best hope for us all,” James K. Glassman, a former senior State Department official, told a crowd of cyber activists at a recent conference sponsored by Facebook, A. T. & T., Howcast, MTV, and Google. Sites like Facebook, Glassman said, “give the U.S. a significant competitive advantage over terrorists. Some time ago, I said that Al Qaeda was ‘eating our lunch on the Internet.’ That is no longer the case. Al Qaeda is stuck in Web 1.0. The Internet is now about interactivity and conversation.”

These are strong, and puzzling, claims. Why does it matter who is eating whose lunch on the Internet? Are people who log on to their Facebook page really the best hope for us all? As for Moldova’s so-called Twitter Revolution, Evgeny Morozov, a scholar at Stanford who has been the most persistent of digital evangelism’s critics, points out that Twitter had scant internal significance in Moldova, a country where very few Twitter accounts exist. Nor does it seem to have been a revolution, not least because the protests—as Anne Applebaum suggested in the Washington Post—may well have been a bit of stagecraft cooked up by the government. (In a country paranoid about Romanian revanchism, the protesters flew a Romanian flag over the Parliament building.) In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

Some of this grandiosity is to be expected. Innovators tend to be solipsists. They often want to cram every stray fact and experience into their new model. As the historian Robert Darnton has written, “The marvels of communication technology in the present have produced a false consciousness about the past—even a sense that communication has no history, or had nothing of importance to consider before the days of television and the Internet.” But there is something else at work here, in the outsized enthusiasm for social media. Fifty years after one of the most extraordinary episodes of social upheaval in American history, we seem to have forgotten what activism is.

[div class=attrib]More from theSource here.[end-div]

Google’s Earth

[div class=attrib]From The New York Times:[end-div]

“I ACTUALLY think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot.

We have yet to take Google’s measure. We’ve seen nothing like it before, and we already perceive much of our world through it. We would all very much like to be sagely and reliably advised by our own private genie; we would like the genie to make the world more transparent, more easily navigable. Google does that for us: it makes everything in the world accessible to everyone, and everyone accessible to the world. But we see everyone looking in, and blame Google.

Google is not ours. Which feels confusing, because we are its unpaid content-providers, in one way or another. We generate product for Google, our every search a minuscule contribution. Google is made of us, a sort of coral reef of human minds and their products. And still we balk at Mr. Schmidt’s claim that we want Google to tell us what to do next. Is he saying that when we search for dinner recommendations, Google might recommend a movie instead? If our genie recommended the movie, I imagine we’d go, intrigued. If Google did that, I imagine, we’d bridle, then begin our next search.

We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies. We also seldom imagined (in spite of ample evidence) that emergent technologies would leave legislation in the dust, yet they do. In a world characterized by technologically driven change, we necessarily legislate after the fact, perpetually scrambling to catch up, while the core architectures of the future, increasingly, are erected by entities like Google.

William Gibson is the author of the forthcoming novel “Zero History.”

[div class=attrib]More from theSource here.[end-div]

Sergey Brin’s Search for a Parkinson’s Cure

[div class=attrib]From Wired:[end-div]

Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.

Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)

The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”

There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.

Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.

That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.

“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.

Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries a LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
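
Restated as back-of-the-envelope arithmetic, the chain of adjustments looks roughly like the sketch below; the percentages are Brin's own rough estimates as quoted above, not clinical figures.

    # Brin's own rough risk arithmetic, as described in the article; none of
    # these numbers are clinical estimates.
    baseline = 0.50                          # his split of the 30-75% LRRK2 range
    after_lifestyle = baseline * 0.5         # diet, exercise, green tea: "down by half"
    after_research = after_lifestyle * 0.5   # expected neuroscience progress: another half
    print(f"{after_lifestyle:.0%} after lifestyle, {after_research:.1%} after research progress")
    # Prints "25% after lifestyle, 12.5% after research progress"; the article rounds
    # the latter to about 13 percent, and Brin credits his own funding of Parkinson's
    # research with pushing the figure to "somewhere under 10 percent".
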

It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place.

[div class=attrib]More from theSource here.[end-div]

Mind Over Mass Media

[div class=attrib]From the New York Times:[end-div]

NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.

So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.

But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.

For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.

Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your Blackberry at dinner time, ask your spouse to call you to bed at a designated hour.

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”

[div class=attrib]More from theSource here.[end-div]

The meaning of network culture

[div class=attrib]From Eurozine:[end-div]

Whereas in postmodernism, being was left in a free-floating fabric of emotional intensities, in contemporary culture the existence of the self is affirmed through the network. Kazys Varnelis discusses what this means for the democratic public sphere.

Not all at once but rather slowly, in fits and starts, a new societal condition is emerging: network culture. As digital computing matures and meshes with increasingly mobile networking technology, society is also changing, undergoing a cultural shift. Just as modernism and postmodernism served as crucial heuristic devices in their day, studying network culture as a historical phenomenon allows us to better understand broader sociocultural trends and structures, to give duration and temporality to our own, ahistorical time.

If more subtle than the much-talked about economic collapse of fall 2008, this shift in society is real and far more radical, underscoring even the logic of that collapse. During the space of a decade, the network has become the dominant cultural logic. Our economy, public sphere, culture, even our subjectivity are mutating rapidly and show little evidence of slowing down the pace of their evolution. The global economic crisis only demonstrated our faith in the network and its dangers. Over the last two decades, markets and regulators had increasingly placed their faith in the efficient market hypothesis, which posited that investors were fundamentally rational and, fed information by highly efficient data networks, would always make the right decision. The failure came when key parts of the network – the investors, regulators, and the finance industry – failed to think through the consequences of their actions and placed their trust in each other.

The collapse of the markets seems to have been sudden, but it was actually a long-term process, beginning with bad decisions made long before the collapse. Most of the changes in network culture are subtle and only appear radical in retrospect. Take our relationship with the press. One morning you noted with interest that your daily newspaper had established a website. Another day you decided to stop buying the paper and just read it online. Then you started reading it on a mobile Internet platform, or began listening to a podcast of your favourite column while riding a train. Perhaps you dispensed with official news entirely, preferring a collection of blogs and amateur content. Eventually the paper may well be distributed only on the net, directly incorporating user comments and feedback. Or take the way cell phones have changed our lives. When you first bought a mobile phone, were you aware of how profoundly it would alter your life? Soon, however, you found yourself abandoning the tedium of scheduling dinner plans with friends in advance, instead coordinating with them en route to a particular neighbourhood. Or if your friends or family moved away to university or a new career, you found that through a social networking site like Facebook and through the ever-present telematic links of the mobile phone, you did not lose touch with them.

If it is difficult to realize the radical impact of the contemporary, this is in part due to the hype about the near-future impact of computing on society in the 1990s. The failure of the near-future to be realized immediately, due to the limits of the technology of the day, made us jaded. The dot.com crash only reinforced that sense. But slowly, technology advanced and society changed, finding new uses for it, in turn spurring more change. Network culture crept up on us. Its impact on us today is radical and undeniable.

[div class=attrib]More from theSource here.[end-div]

Your Digital Privacy? It May Already Be an Illusion

[div class=attrib]From Discover:[end-div]

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
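
A minimal sketch of why that structure shrinks the guessing space so dramatically, using the pre-2011 area/group/serial format described above; the specific area and group numbers in the sketch are hypothetical placeholders, not values from Acquisti's study.

    # Sketch of the search-space reduction (pre-2011 SSN format: AAA-GG-SSSS).
    # The area and group numbers below are hypothetical, chosen only to show the idea.
    from itertools import product

    def candidate_ssns(area_numbers, group_numbers):
        """Enumerate plausible SSNs once birthplace has suggested the area number(s)
        and date of birth has narrowed the group number to a small band."""
        for area, group in product(area_numbers, group_numbers):
            for serial in range(1, 10_000):  # serial numbers were assigned consecutively
                yield f"{area:03d}-{group:02d}-{serial:04d}"

    # One plausible area number and two plausible group numbers leave roughly
    # 20,000 candidates, instead of a blind search over a billion possibilities.
    candidates = list(candidate_ssns(area_numbers=[212], group_numbers=[55, 57]))
    print(len(candidates))  # 19998

For birth dates and states where the assignment window is even narrower, a list like this can shrink toward the order of the 1,000 guesses mentioned above.
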

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

[div class=attrib]More from theSource here.[end-div]