Book Review: The First Detective

A new book by James Morton examines the life and times of cross-dressing burglar, prison-escapee and snitch turned super-detective Eugène-François Vidocq.

[div class=attrib]From The Barnes & Noble Review:[end-div]

The daring costumed escapes and bedsheet-rope prison breaks of the old romances weren’t merely creaky plot devices; they were also the objective correlatives of the lost politics of early modern Europe. Not yet susceptible to legislative amelioration, rules and customs that seemed both indefensible and unassailable had to be vaulted over like collapsing bridges or tunneled under like manor walls. Not only fictional musketeers but such illustrious figures as the young Casanova and the philosopher Jean-Jacques Rousseau spent their early years making narrow escapes from overlapping orthodoxies, swimming moats to marriages of convenience and digging their way out of prisons of privilege by dressing in drag or posing as noblemen’s sons. If one ran afoul of the local clergy or some aristocratic cuckold, there were always new bishops and magistrates to charm in the next diocese or département.

In 1775–roughly a generation after the exploits of Rousseau and Casanova–a prosperous baker’s son named Eugène-François Vidocq was born in Arras, in northern France. Indolent and adventuresome, he embarked upon a career that in its early phase looked even more hapless and disastrous than those of his illustrious forebears. An indifferent soldier in the chaotic, bloody interregnum of revolutionary France, Vidocq quickly fell into petty crime (for a time he even assumed the name Rousseau as an alias and nom de guerre). A hapless housebreaker and a credulous co-conspirator, he had criminal misadventures equaled only by his skill at escaping from the dungeons and bagnes that passed for a penal system in the pre-Napoleonic era.

By 1809, his canniness as an informer landed him a job with the police; with his old criminal comrades as willing foot soldiers, Vidocq organized a brigade de sûreté, a unit of plainclothes police, which in 1813 Napoleon made an official organ of state security. Throughout his subsequent career he would lay much of the foundation of modern policing, and may be considered a forebear not only of the Dupins and Holmeses of modern detective literature but also of swashbuckling, above-the-law policemen like Eliot Ness and J. Edgar Hoover.

[div class=attrib]More from theSource here.[end-div]

When the multiverse and many-worlds collide

[div class=attrib]From the New Scientist:[end-div]

Two of the strangest ideas in modern physics – that the cosmos constantly splits into parallel universes in which every conceivable outcome of every event happens, and the notion that our universe is part of a larger multiverse – have been unified into a single theory. This solves a bizarre but fundamental problem in cosmology and has set physics circles buzzing with excitement, as well as some bewilderment.

The problem is the observability of our universe. While most of us simply take it for granted that we should be able to observe our universe, it is a different story for cosmologists. When they apply quantum mechanics – which successfully describes the behaviour of very small objects like atoms – to the entire cosmos, the equations imply that it must exist in many different states simultaneously, a phenomenon called a superposition. Yet that is clearly not what we observe.

Cosmologists reconcile this seeming contradiction by assuming that the superposition eventually “collapses” to a single state. But they tend to ignore the problem of how or why such a collapse might occur, says cosmologist Raphael Bousso at the University of California, Berkeley. “We’ve no right to assume that it collapses. We’ve been lying to ourselves about this,” he says.

In an attempt to find a more satisfying way to explain the universe’s observability, Bousso, together with Leonard Susskind at Stanford University in California, turned to the work of physicists who have puzzled over the same problem but on a much smaller scale: why tiny objects such as electrons and photons exist in a superposition of states but larger objects like footballs and planets apparently do not.

This problem is captured in the famous thought experiment of Schrödinger’s cat. This unhappy feline is inside a sealed box containing a vial of poison that will break open when a radioactive atom decays. Being a quantum object, the atom exists in a superposition of states – so it has both decayed and not decayed at the same time. This implies that the vial must be in a superposition of states too – both broken and unbroken. And if that’s the case, then the cat must be both dead and alive as well.
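As a quick aside (this is the standard textbook formalism, not something spelled out in the New Scientist piece), the cat’s predicament can be written as an entangled superposition of atom, vial and cat, assuming for simplicity equal amplitudes for the two branches:

```latex
\left|\Psi\right\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(
  \left|\text{decayed}\right\rangle \otimes \left|\text{broken}\right\rangle \otimes \left|\text{dead}\right\rangle
  \;+\;
  \left|\text{not decayed}\right\rangle \otimes \left|\text{intact}\right\rangle \otimes \left|\text{alive}\right\rangle
\Bigr)
```

Nothing in the equation singles out one branch; something extra – a collapse, or the many-worlds account discussed above – has to be invoked to explain why we only ever observe one of them.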

[div class=attrib]More from theSource here.[end-div]

Dark energy spotted in the cosmic microwave background

[div class=attrib]From Institute of Physics:[end-div]

Astronomers studying the cosmic microwave background (CMB) have uncovered new direct evidence for dark energy – the mysterious substance that appears to be accelerating the expansion of the universe. Their findings could also help map the structure of dark matter on the universe’s largest length scales.

The CMB is the faint afterglow of the universe’s birth in the Big Bang. Around 400,000 years after its creation, the universe had cooled sufficiently to allow electrons to bind to atomic nuclei. This “recombination” set the CMB radiation free from the dense fog of plasma that was containing it. Space telescopes such as WMAP and Planck have charted the CMB and found its presence in all parts of the sky, with a temperature of 2.7 K. However, measurements also show tiny fluctuations in this temperature on the scale of one part in 100,000. These fluctuations follow a Gaussian distribution.

In the first of two papers, a team of astronomers including Sudeep Das at the University of California, Berkeley, has uncovered fluctuations in the CMB that deviate from this Gaussian distribution. The deviations, observed with the Atacama Cosmology Telescope in Chile, are caused by interactions with large-scale structures in the universe, such as galaxy clusters. “On average, a CMB photon will have encountered around 50 large-scale structures before it reaches our telescope,” Das told physicsworld.com. “The gravitational influence of these structures, which are dominated by massive clumps of dark matter, will each deflect the path of the photon,” he adds. This process, called “lensing”, eventually adds up to a total deflection of around 3 arc minutes – one-20th of a degree.
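A rough back-of-envelope check (my own arithmetic, not a calculation from the paper): if each of the roughly 50 structures gives the photon a small kick in a random direction, the kicks accumulate like a random walk, so the per-structure deflection implied by a 3-arcminute total is about 3/√50 of an arcminute.

```python
import math

# Back-of-envelope sketch under a random-walk assumption (not from the paper):
# ~50 randomly oriented deflections adding up to ~3 arcminutes in total.
n_structures = 50
total_arcmin = 3.0

per_structure_arcmin = total_arcmin / math.sqrt(n_structures)
print(f"Implied deflection per structure ~ {per_structure_arcmin:.2f} arcmin")
print(f"Total deflection = {total_arcmin / 60:.3f} degrees (one-twentieth of a degree)")
```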

Dark energy versus structure

In the second paper Das, along with Blake Sherwin of Princeton University and Joanna Dunkley of Oxford University, looks at how lensing could reveal dark energy. Dark energy acts to counter the emergence of structures within the universe. A universe with no dark energy would have a lot of structure. As a result, the CMB photons would undergo greater lensing and the fluctuations would deviate more from the original Gaussian distribution.

[div class=attrib]More from theSource here.[end-div]

Green Bootleggers and Baptists

[div class=attrib]Bjørn Lomborg for Project Syndicate:[end-div]

In May, the United Nations’ Intergovernmental Panel on Climate Change made media waves with a new report on renewable energy. As in the past, the IPCC first issued a short summary; only later would it reveal all of the data. So it was left up to the IPCC’s spin-doctors to present the take-home message for journalists.

The first line of the IPCC’s press release declared, “Close to 80% of the world’s energy supply could be met by renewables by mid-century if backed by the right enabling public policies.” That story was repeated by media organizations worldwide.

Last month, the IPCC released the full report, together with the data behind this startlingly optimistic claim. Only then did it emerge that it was based solely on the most optimistic of 164 modeling scenarios that researchers investigated. And this single scenario stemmed from a single study that was traced back to a report by the environmental organization Greenpeace. The author of that report – a Greenpeace staff member – was one of the IPCC’s lead authors.

The claim rested on the assumption of a large reduction in global energy use. Given the number of people climbing out of poverty in China and India, that is a deeply implausible scenario.

When the IPCC first made the claim, global-warming activists and renewable-energy companies cheered. “The report clearly demonstrates that renewable technologies could supply the world with more energy than it would ever need,” boasted Steve Sawyer, Secretary-General of the Global Wind Energy Council.

This sort of behavior – with activists and big energy companies uniting to applaud anything that suggests a need for increased subsidies to alternative energy – was famously captured by the so-called “bootleggers and Baptists” theory of politics.

The theory grew out of the experience of the southern United States, where many jurisdictions required stores to close on Sunday, thus preventing the sale of alcohol. The regulation was supported by religious groups for moral reasons, but also by bootleggers, because they had the market to themselves on Sundays. Politicians would adopt the Baptists’ pious rhetoric, while quietly taking campaign contributions from the criminals.

Of course, today’s climate-change “bootleggers” are not engaged in any illegal behavior. But the self-interest of energy companies, biofuel producers, insurance firms, lobbyists, and others in supporting “green” policies is a point that is often missed.

Indeed, the “bootleggers and Baptists” theory helps to account for other developments in global warming policy over the past decade or so. For example, the Kyoto Protocol would have cost trillions of dollars, but would have achieved a practically indiscernible difference in stemming the rise in global temperature. Yet activists claimed that there was a moral obligation to cut carbon-dioxide emissions, and were cheered on by businesses that stood to gain.

[div class=attrib]More from theSource here.[end-div]

Hello Internet; Goodbye Memory

Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.

Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet, it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one cited below from Columbia University, explore this in detail.

[div class=attrib]From Technology Review:[end-div]

A study says that we rely on external tools, including the Internet, to augment our memory.

The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.

Researchers and writers have debated over how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.

Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.

One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.

[div class=attrib]More from theSource here.[end-div]

The Good, the Bad and the Ugly – 40 years on

One of the most fascinating and (in)famous experiments in social psychology began in the bowels of Stanford University 40 years ago next month. The experiment was intended to evaluate how people react to being powerless. By its conclusion, however, it had become a broader look at role assignment and reactions to authority.

The Stanford Prison Experiment incarcerated male college student volunteers in a mock prison for six fateful days. Some of the students were selected to be prison guards; the remainder would be prisoners. The researchers, led by psychology professor Philip Zimbardo, encouraged the guards to think of themselves as actual guards in a real prison. What happened during those six days in “prison” is the stuff of social science legend. The results continue to shock psychologists to this day; many were not prepared for the outcome, which saw guards take their roles to the extreme, becoming overbearingly authoritarian and mentally abusive, and prisoners become downtrodden and eventually rebellious. A whistle-blower eventually brought the experiment to an abrupt end (it was to have continued for two weeks).

Forty years on, researchers went back to interview Professor Zimbardo and some of the participating guards and prisoners to probe their feelings now. Recollections from one of the guards are below.

[div class=attrib]From Stanford Magazine:[end-div]

I was just looking for some summer work. I had a choice of doing this or working at a pizza parlor. I thought this would be an interesting and different way of finding summer employment.

The only person I knew going in was John Mark. He was another guard and wasn’t even on my shift. That was critical. If there were prisoners in there who knew me before they encountered me, then I never would have been able to pull off anything I did. The act that I put on—they would have seen through it immediately.

What came over me was not an accident. It was planned. I set out with a definite plan in mind, to try to force the action, force something to happen, so that the researchers would have something to work with. After all, what could they possibly learn from guys sitting around like it was a country club? So I consciously created this persona. I was in all kinds of drama productions in high school and college. It was something I was very familiar with: to take on another personality before you step out on the stage. I was kind of running my own experiment in there, by saying, “How far can I push these things and how much abuse will these people take before they say, ‘knock it off?'” But the other guards didn’t stop me. They seemed to join in. They were taking my lead. Not a single guard said, “I don’t think we should do this.”

The fact that I ramped up the intimidation and the mental abuse without any real sense as to whether I was hurting anybody— I definitely regret that. But in the long run, no one suffered any lasting damage. When the Abu Ghraib scandal broke, my first reaction was, this is so familiar to me. I knew exactly what was going on. I could picture myself in the middle of that and watching it spin out of control. When you have little or no supervision as to what you’re doing, and no one steps in and says, “Hey, you can’t do this”—things just keep escalating. You think, how can we top what we did yesterday? How do we do something even more outrageous? I felt a deep sense of familiarity with that whole situation.

Sometimes when people know about the experiment and then meet me, it’s like, My God, this guy’s a psycho! But everyone who knows me would just laugh at that.

[div class=attrib]More from theSource here.[end-div]

3D Printing – A demonstration

Three-dimensional “printing” has been around for a few years now, but the technology continues to advance by leaps and bounds. It has already progressed to such an extent that some 3D printers can now “print” objects with moving parts and in color as well. And, we all thought those cool replicator machines in Star Trek were the stuff of science fiction.

[tube]LQfYm4ZVcVI[/tube]

Book Review: “Millennium People”: J.G. Ballard’s last hurrah

[div class=attrib]From Salon:[end-div]

In this, his last novel, the darkly comic “Millennium People,” J.G. Ballard returns to many of the themes that have established him as one of the 20th century’s principal chroniclers of modernity as dystopia. Throughout his career Ballard, who died in 2009, wrote many different variations on the same theme: A random act of violence propels a somewhat affectless protagonist into a violent pathology lurking just under the tissue-thin layer of postmodern civilization. As in “Crash” (1973) and “Concrete Island” (1974), the car parks, housing estates, motorways and suburban sprawl of London in “Millennium People” form a psychological geography. At its center, Heathrow Airport — a recurrent setting for Ballard — exerts its subtly malevolent pull on the bored lives and violent dreams of the alienated middle class.

“Millennium People” begins with the explosion of a bomb at Heathrow, which kills the ex-wife of David Markham, an industrial psychologist. The normally passive Markham sets out to investigate the anonymous bombing and the gated community of Chelsea Marina, a middle-class neighborhood that has become ground zero for a terrorist group and a burgeoning rebellion of London’s seemingly docile middle class. Exploited not so much for their labor as for their deeply ingrained and self-policing sense of social responsibility and good manners, the educated and professional residents of Chelsea Marina regard themselves as the “new proletariat,” with their exorbitant maintenance and parking fees as the new form of oppression, their careers, cultured tastes and education the new gulag.

In the company of a down-and-out priest and a film professor turned Che Guevara of the Volvo set, Markham quickly discovers that the line between amateur detective and amateur terrorist is not so clear, as he is drawn deeper into acts of sabotage and violence against the symbols and institutions of his own safe and sensible life. Targets include travel agencies, video stores, the Tate Modern, the BBC and National Film Theater — all “soporifics” designed to con people into believing their lives are interesting or going somewhere.

[div class=attrib]More from theSource here.[end-div]

Happy Birthday Neptune

One hundred and sixty-four years ago, or one Neptunian year, Neptune was first observed by telescope. Significantly, it was the first planet to be discovered deliberately; the existence and location of the gas giant were calculated mathematically. Subsequently, it was located by telescope, on 24 September 1846, and found to be within one degree of the mathematically predicted location. Astronomers hypothesized Neptune’s existence due to perturbations in the orbit of its planetary neighbor, Uranus, around the sun, which could only be explained by the presence of another object in nearby orbit. A triumph for the scientific method, and besides, it’s beautiful too.
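That “one Neptunian year” checks out with a line of Kepler’s third law (my own arithmetic, assuming a semi-major axis of roughly 30.1 astronomical units):

```latex
T \;\approx\; a^{3/2}\ \text{years} \;=\; (30.1)^{3/2} \;\approx\; 165\ \text{years},
```

where a is the orbital radius in astronomical units and T is the period for a body orbiting the Sun, so the planet has only just completed its first full orbit since its discovery.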

[div class=attrib]Image courtesy of NASA.[end-div]

Culturally Specific Mental Disorders: A Bad Case of the Brain Fags

Is this man buff enough? Image courtesy of Slate

If you happen to have just read The Psychopath Test by Jon Ronson, this article in Slate is appropriately timely, and presents new fodder for continuing research (and a sequel). It would therefore come as no surprise to find Mr. Ronson trekking through Newfoundland in search of “Old Hag Syndrome”, a type of sleep paralysis, visiting art museums in Italy for “Stendhal Syndrome,” a delusional disorder experienced by Italians after studying artistic masterpieces, and checking on Nigerian college students afflicted by “Brain Fag Syndrome”. Then there are “Wild Man Syndrome” from New Guinea (a syndrome combining hyperactivity, clumsiness and forgetfulness), “Koro Syndrome” (a delusion of disappearing protruding body parts) first described in China over 2,000 years ago, “Jiko-shisen-kyofu” from Japan (a fear of offending others by glancing at them), and here in the west, “Muscle Dysmorphia Syndrome” (a delusion common in weight-lifters that one’s body is insufficiently ripped).

All of these and more can be found in the current version of the manual, the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition).

[div class=attrib]From Slate:[end-div]

In 1951, Hong Kong psychiatrist Pow-Meng Yap authored an influential paper in the Journal of Mental Sciences on the subject of “peculiar psychiatric disorders”—those that did not fit neatly into the dominant disease-model classification scheme of the time and yet appeared to be prominent, even commonplace, in certain parts of the world. Curiously these same conditions—which include “amok” in Southeast Asia and bouffée délirante in French-speaking countries—were almost unheard of outside particular cultural contexts. The American Psychiatric Association has conceded that certain mysterious mental afflictions are so common, in some places, that they do in fact warrant inclusion as “culture-bound syndromes” in the official Diagnostic and Statistical Manual of Mental Disorders.

The working version of this manual, the DSM-IV, specifies 25 such syndromes. Take “Old Hag Syndrome,” a type of sleep paralysis in Newfoundland in which one is visited by what appears to be a rather unpleasant old hag sitting on one’s chest at night. (If I were a bitter, divorced straight man, I’d probably say something diabolical about my ex-wife here.) Then there’s gururumba, or “Wild Man Syndrome,” in which New Guinean males become hyperactive, clumsy, kleptomaniacal, and conveniently amnesic, “Brain Fag Syndrome” (more on that in a moment), and “Stendhal Syndrome,” a delusional disorder experienced mostly by Italians after gazing upon artistic masterpieces. The DSM-IV defines culture-bound syndromes as “recurrent, locality-specific patterns of aberrant behavior and troubling experience that may or may not be linked to a particular diagnostic category.”

And therein lies the nosological pickle: The symptoms of culture-bound syndromes often overlap with more general, known psychiatric conditions that are universal in nature, such as schizophrenia, body dysmorphia, and social anxiety. What varies across cultures, and is presumably moulded by them, is the unique constellation of symptoms, or “idioms of distress.”

Some scholars believe that many additional distinct culture-bound syndromes exist. One that’s not in the manual but could be, argue psychiatrists Gen Kanayama and Harrison Pope in a short paper published earlier this year in the Harvard Review of Psychiatry, is “muscle dysmorphia.” The condition is limited to Western males, who suffer the delusion that they are insufficiently ripped. “As a result,” write the authors, “they may lift weights compulsively in the gym, often gain large amounts of muscle mass, yet still perceive themselves as too small.” Within body-building circles, in fact, muscle dysmorphia has long been recognized as a sort of reverse anorexia nervosa. But it’s almost entirely unheard of among Asian men. Unlike hypermasculine Western heroes such as Hercules, Thor, and the chiseled Arnold of yesteryear, the Japanese and Chinese have tended to prefer their heroes fully clothed, mentally acute, and lithe, argue Kanayama and Pope. In fact, they say anabolic steroid use is virtually nonexistent in Asian countries, even though the drugs are considerably easier to obtain, being available without a prescription at most neighborhood drugstores.

[div class=attrib]More from theSource here.[end-div]

Disconnected?

[div class=attrib]From Slate:[end-div]

Have you heard that divorce is contagious? A lot of people have. Last summer a study claiming to show that break-ups can propagate from friend to friend to friend like a marriage-eating bacillus spread across the news agar from CNN to CBS to ABC with predictable speed. “Think of this ‘idea’ of getting divorced, this ‘option’ of getting divorced like a virus, because it spreads more or less the same way,” explained University of California-San Diego professor James Fowler to the folks at Good Morning America.

It’s a surprising, quirky, and seemingly plausible finding, which explains why so many news outlets caught the bug. But one weird thing about the media outbreak was that the study on which it was based had never been published in a scientific journal. The paper had been posted to the Social Science Research Network web site, a sort of academic way station for working papers whose tagline is “Tomorrow’s Research Today.” But tomorrow had not yet come for the contagious divorce study: It had never actually passed peer review, and still hasn’t. “It is under review,” Fowler explained last week in an email. He co-authored the paper with his long-time collaborator, Harvard’s Nicholas Christakis, and lead author Rose McDermott.

A few months before the contagious divorce story broke, Slate ran an article I’d written based on a related, but also unpublished, scientific paper. The mathematician Russell Lyons had posted a dense treatise on his website suggesting that the methods employed by Christakis and Fowler in their social network studies were riddled with statistical errors at many levels. The authors were claiming—in the New England Journal of Medicine, in a popular book, in TED talks, in snappy PR videos—that everything from obesity to loneliness to poor sleep could spread from person to person to person like a case of the galloping crud. But according to Lyons and several other experts, their arguments were shaky at best. “It’s not clear that the social contagionists have enough evidence to be telling people that they owe it to their social network to lose weight,” I wrote last April. As for the theory that obesity and divorce and happiness contagions radiate from human beings through three degrees of friendship, I concluded “perhaps it’s best to flock away for now.”

The case against Christakis and Fowler has grown since then. The Lyons paper passed peer review and was published in the May issue of the journal Statistics, Politics, and Policy. Two other recent papers raise serious doubts about their conclusions. And now something of a consensus is forming within the statistics and social-networking communities that Christakis and Fowler’s headline-grabbing contagion papers are fatally flawed. Andrew Gelman, a professor of statistics at Columbia, wrote a delicately worded blog post in June noting that he’d “have to go with Lyons” and say that the claims of contagious obesity, divorce and the like “have not been convincingly demonstrated.” Another highly respected social-networking expert, Tom Snijders of Oxford, called the mathematical model used by Christakis and Fowler “not coherent.” And just a few days ago, Cosma Shalizi, a statistician at Carnegie Mellon, declared, “I agree with pretty much everything Snijders says.”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: If You Forget Me

Pablo Neruda (1904–1973)

[div class=attrib]If You Forget Me, Pablo Neruda[end-div]

I want you to know
one thing.

You know how this is:
if I look
at the crystal moon, at the red branch
of the slow autumn at my window,
if I touch
near the fire
the impalpable ash
or the wrinkled body of the log,
everything carries me to you,
as if everything that exists,
aromas, light, metals,
were little boats
that sail
toward those isles of yours that wait for me.

Well, now,
if little by little you stop loving me
I shall stop loving you little by little.

If suddenly
you forget me
do not look for me,
for I shall already have forgotten you.

If you think it long and mad,
the wind of banners
that passes through my life,
and you decide
to leave me at the shore
of the heart where I have roots,
remember
that on that day,
at that hour,
I shall lift my arms
and my roots will set off
to seek another land.

But
if each day,
each hour,
you feel that you are destined for me
with implacable sweetness,
if each day a flower
climbs up to your lips to seek me,
ah my love, ah my own,
in me all that fire is repeated,
in me nothing is extinguished or forgotten,
my love feeds on your love, beloved,
and as long as you live it will be in your arms
without leaving mine.

The Allure of Steampunk Videotelephony and the Telephonoscope

Video telephony as imagined in 1910

A concept for the videophone surfaced just a couple of years after the telephone was patented in the United States. The telephonoscope, as it was called, first appeared in Victorian journals and early French science fiction in 1878.

In 1891 Alexander Graham Bell recorded his concept of an electrical radiophone, which discussed “…the possibility of seeing by electricity”. He later went on to predict that “…the day would come when the man at the telephone would be able to see the distant person to whom he was speaking”.

The world’s first videophone entered service in 1934, in Germany. The service was offered in select post offices linking several major German cities, and provided bi-directional voice and image on 8 inch square displays. In the U.S., AT&T launched the Picturephone in the mid-1960s. However, the costly equipment, high cost per call, and inconveniently located public video-telephone booths ensured that the service would never gain public acceptance. Similar to the U.S. experience, major telephone companies in France, Japan and Sweden had limited success with video-telephony during the 1970s-80s.

Major improvements in video technology, telecommunications deregulation and increases in bandwidth during the 1980s-90s brought the price point down considerably. However, significant usage remained mostly within the realm of major corporations due to the still not insignificant investment in equipment and cost of bandwidth.

Fast forward to the 21st century. Skype and other IP (internet protocol) based services have made videochat commonplace and affordable, and in most cases free. It now seems that videochat has become almost ubiquitous. Recent moves into this space by tech heavyweights like Apple with Facetime, Microsoft with its acquisition of Skype, Google with its Google Plus social network video calling component, and Facebook’s new video calling service will in all likelihood add further momentum.

Of course, while videochat is an effective communication tool it does have a cost in terms of personal and social consequences over its non-video cousin, the telephone. Next time you videochat rather than make a telephone call you will surely be paying greater attention to your bad hair and poor grooming, your crumpled clothes, uncoordinated pajamas or lack thereof, the unwanted visitors in the background shot, and the not so subtle back-lighting that focuses attention on the clutter in your office or bedroom. Doesn’t it make you hark back to the days of the simple telephone? Either that or perhaps you are drawn to the more alluring and elegant steampunk form of videochat as imagined by the Victorians, in the image above.

The Best of States, the Worst of States

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

Are these maps cartograms or mere infographics?

An ‘information graphic’ is defined as any graphic representation of data. It follows from that definition that infographics are less determined by type than by purpose, which is to represent complex information in a readily graspable graphic format. Those formats often include, but are not limited to, diagrams, flow charts and maps.

Although one definition of maps – the graphic representation of spatial data – is very similar to that of infographics, the two are easily distinguished by, among other things, the context of the latter, which are usually confined to and embedded in technical and journalistic writing.

Cartograms are a subset of infographics, limited to one type of graphic representation: maps. On these maps, one set of quantitative information (usually surface or distance) is replaced by another (often demographic data or electoral results). The result is an informative distortion of the map (1).

The distortion on these maps is not of the distance-bending or surface-stretching kind. It merely substitutes the names of US states with statistical information relevant to each of them (2). This substitution is non-quantitative, affecting the toponymy rather than the topography of the map. So is this a mere infographic? As the information presented is statistical (each label describes each state as first or last in a Top 50), I’d say this is – if you’ll excuse the pun – a borderline case.

What’s more relevant, from this blog’s perspective, is that it is an atypical, curious and entertaining use of cartography.

The first set of maps labels each and every one of the states as best and worst at something. All of those distinctions, both the favourable and the unfavourable kind, are backed up by some sort of evidence.

The first map, the United States of Awesome, charts fifty things that each state of the Union is best at. Most of those indicators, 12 in all, are related to health and well-being (3). Ten are economic (4), six environmental (5), five educational (6). Three can be classified as ‘moral’, even if these particular distinctions make for strange bedfellows (7).

The best thing that can be said about Missouri and Illinois, apparently, is that they’re extremely average (8). While that may excite few people, it will greatly interest political pollsters and anyone in need of a focus group. Virginia and Indiana are the states with the most birthplaces of presidents and vice-presidents, respectively. South Carolinians prefer to spend their time golfing, Pennsylvanians hunting. Violent crime is lowest in Maine, public corruption in Nebraska. The most bizarre distinctions, finally, are reserved for New Mexico (Spaceport Home), Oklahoma (Best Licence Plate) and Missouri (Bromine Production). If that’s the best thing about those states, what might be the worst?

[div class=attrib]More from theSource here.[end-div]

Cy Twombly, Idiosyncratic Painter, Dies at 83

Cy Twombly. Image courtesy of Sundance Channel

[div class=attrib]From the New York Times:[end-div]

Cy Twombly, whose spare childlike scribbles and poetic engagement with antiquity left him stubbornly out of step with the movements of postwar American art even as he became one of the era’s most important painters, died in Rome Tuesday. He was 83.

The cause was not immediately known, although Mr. Twombly had suffered from cancer. His death was announced by the Gagosian Gallery, which represents his work.

In a career that slyly subverted Abstract Expressionism, toyed briefly with Minimalism, seemed barely to acknowledge Pop Art and anticipated some of the concerns of Conceptualism, Mr. Twombly was a divisive artist almost from the start. The curator Kirk Varnedoe, on the occasion of a 1994 retrospective at the Museum of Modern Art, wrote that his work was “influential among artists, discomfiting to many critics and truculently difficult not just for a broad public, but for sophisticated initiates of postwar art as well.” The critic Robert Hughes called him “the Third Man, a shadowy figure, beside that vivid duumvirate of his friends Jasper Johns and Robert Rauschenberg.”

Mr. Twombly’s decision to settle permanently in southern Italy in 1957 as the art world shifted decisively in the other direction, from Europe to New York, was only the most symbolic of his idiosyncrasies. He avoided publicity throughout his life and mostly ignored his critics, who questioned constantly whether his work deserved a place at the forefront of 20th-century abstraction, though he lived long enough to see it arrive there. It didn’t help that his paintings, because of their surface complexity and whirlwinds of tiny detail – scratches, erasures, drips, penciled fragments of Italian and classical verse amid scrawled phalluses and buttocks – lost much of their power in reproduction.

But Mr. Twombly, a tall, rangy Virginian who once practiced drawing in the dark to make his lines less purposeful, steadfastly followed his own program and looked to his own muses: often literary ones like Catullus, Rumi, Pound and Rilke. He seemed to welcome the privacy that came with unpopularity.

“I had my freedom and that was nice,” he said in a rare interview, with Nicholas Serota, the director of the Tate, before a 2008 survey of his career at the Tate Modern.

The critical low point probably came after a 1964 exhibition at the Leo Castelli Gallery in New York that was widely panned. The artist and writer Donald Judd, who was hostile toward painting in general, was especially damning even so, calling the show a fiasco. “There are a few drips and splatters and an occasional pencil line,” he wrote in a review. “There isn’t anything to these paintings.”

[div class=attrib]More from theSource here.[end-div]

Book Review: The Psychopath Test. Jon Ronson

Hilarious and disturbing. I suspect Jon Ronson would strike a couple of checkmarks in the Hare PCL-R Checklist against my name for finding his latest work both hilarious and disturbing. Would this, perhaps, make me a psychopath?

Jon Ronson is the author of The Psychopath Test. The Hare PCL-R, named for its inventor, Canadian psychologist Bob Hare, is the gold standard in personality-trait measurement for psychopathic disorder (officially known as Antisocial Personality Disorder).

Ronson’s book is a fascinating journey through the “madness industry” covering psychiatrists, clinical psychologists, criminal scientists, criminal profilers, and of course their clients: patients, criminals and the “insane” at large. Fascinated by the psychopathic traits that the industry applied to the criminally insane, Ronson goes on to explore these behavior and personality traits in the general population. And, perhaps to no surprise, he finds that a not insignificant proportion of business leaders and others in positions of authority could be classified as “psychopaths” based on the standard PCL-R checklist.

Ronson’s stories are poignant. He tells us the tale of Tony, who feigned madness to avoid what he believed would have been a harsher prison sentence for a violent crime. Instead, Tony found himself in Broadmoor, a notorious maximum security institution for the criminally insane. Twelve years on, Tony, still incarcerated, finds it impossible to convince anyone of his sanity, despite behaving quite normally. His doctors now admit that he was sane at the time of admission, but agree that he must have been nuts to feign insanity in the first place, and furthermore only someone who is insane could behave so “sanely” while surrounded by the insane!

Tony’s story and the other characters that Ronson illuminates in this work are thoroughly memorable, especially Al Dunlap, empathy-poor former CEO of Sunbeam — perhaps one of the high-functioning psychopaths who lives in our midst. Peppered throughout Ronson’s interviews with madmen and madwomen are his perpetual anxiety and self-reflection; armed with such tools as the PCL-R checklist, he now has considerable diagnostic power and insight. As a result, Ronson begins seeing “psychopaths” everywhere.

My only criticism of the book is that Jon Ronson should have made it 200 pages longer and focused much more on the “psychopathic” personalities that roam amongst us, not just those who live behind bars, and on the madness industry itself, now seemingly led by the major pharmaceutical companies.

The Cutting-Edge Physics of Jackson Pollock

 

Untitled, ca. 1948-49. Jackson Pollock

[div class=attrib]From Wired:[end-div]

Jackson Pollock, famous for his deceptively random-seeming drip paintings, took advantage of certain features of fluid dynamics years before physicists thought to study them.

“His particular painting technique essentially lets physics be a player in the creative process,” said physicist Andrzej Herczynski of Boston College, coauthor of a new paper in Physics Today that analyzes the physics in Pollock’s art. “To the degree that he lets physics take a role in the painting process, he is inviting physics to be a coauthor of his pieces.”

Pollock’s unique technique — letting paint drip and splatter on the floor rather than spreading it on a vertical canvas — revolutionized the art world in the 1940s. The resulting streaks and blobs look haphazard, but art historians and, more recently, physicists argue they’re anything but. Some have suggested that the snarls of paint have lasting appeal because they reflect fractal geometry that shows up in clouds and coast lines.
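As an aside, the fractal claim is the sort of thing that can be tested numerically. Below is a minimal box-counting sketch of the kind of analysis those earlier fractal studies relied on (my own illustration in Python, not the method of the Physics Today paper); it assumes the painting has already been reduced to a 2D boolean array marking painted pixels.

```python
import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate a fractal dimension by box counting over a boolean image."""
    counts = []
    for size in box_sizes:
        # Trim so the image tiles evenly, then group pixels into size x size boxes.
        h = (image.shape[0] // size) * size
        w = (image.shape[1] // size) * size
        blocks = image[:h, :w].reshape(h // size, size, w // size, size)
        # A box is "occupied" if any pixel inside it is painted.
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Dimension ~ slope of log(count) versus log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Toy sanity check: a completely filled canvas should come out near 2.
print(box_counting_dimension(np.ones((256, 256), dtype=bool)))  # ~2.0
```

For a genuinely fractal drip pattern the fitted slope would sit somewhere between 1 and 2 and stay roughly constant across scales.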

Now, Boston College art historian Claude Cernuschi, Harvard mathematician Lakshminarayanan Mahadevan and Herczynski have turned the tools of physics on Pollock’s painting process. In what they believe is the first quantitative analysis of drip painting, the researchers derived an equation for how Pollock spread paint.

The team focused on the painting Untitled 1948-49, which features wiggling lines and curlicues of red paint. Those loops formed through a fluid instability called coiling, in which thick fluids fold onto themselves like coils of rope.

“People thought perhaps Pollock created this effect by wiggling his hand in a sinusoidal way, but he didn’t,” Herczynski said.

Coiling is familiar to anyone who’s ever squeezed honey on toast, but it’s only recently grabbed the attention of physicists. Recent studies have shown that the patterns fluids form as they fall depends on their viscosity and their speed. Viscous liquids fall in straight lines when moving quickly, but form loops, squiggles and figure eights when poured slowly, as seen in this video of honey falling on a conveyor belt.

The first physics papers that touched on this phenomenon appeared in the late 1950s, but Pollock knew all about it in 1948. Pollock was famous for seeking out different kinds of paints than anyone else in the art world was using, and for mixing his paints with solvents to make them thicker or thinner. Instead of using a brush or pouring paint directly from a can, he lifted paint with a rod and let it dribble onto the canvas in continuous streams. By moving his arm at different speeds and using paints of different thicknesses, he could control how much coiling showed up in the final painting.

[div class=attrib]More from theSource here.[end-div]

The Homogenous Culture of “Like”

[div class=attrib]Echo and Narcissus, John William Waterhouse [Public domain], via Wikimedia Commons[end-div]

About 12 months ago I committed suicide — internet suicide that is. I closed my personal Facebook account after recognizing several important issues. First, it was a colossal waste of time; time that I could and should be using more productively. Second, it became apparent that following, belonging and agreeing with others through the trivial “wall” status-in-a-can postings and now pervasive “like button” was nothing other than a declaration of mindless group-think and a curious way to maintain social standing. So, my choice was clear: become part of a group that had similar interests, like-minded activities, same politics, parallel beliefs, common likes and dislikes; or revert to my own weirdly independent path. I chose the latter, rejecting the road towards a homogeneity of ideas and a points-based system of instant self-esteem.

This facet of the Facebook ecosystem has an effect similar to the filter bubble that I described in a previous post, The Technology of Personalization and the Bubble Syndrome. In both cases my explicit choices on Facebook, such as which friends I follow or which content I “like”, and my implicit browsing behaviors increasingly filter what I see and don’t see, narrowing the world of ideas to which I am exposed. This cannot be good.

So, although I may incur the wrath of author Neil Strauss for including an excerpt of his recent column below, I cannot help but “like” what he has to say. More importantly, he does a much more eloquent job of describing how this culture commoditizes social relationships and, dare I say it, lowers the barrier to entry for narcissists to grow and fine-tune their skills.

[div class=attrib]By Neil Strauss for the Wall Street Journal:[end-div]

If you happen to be reading this article online, you’ll notice that right above it, there is a button labeled “like.” Please stop reading and click on “like” right now.

Thank you. I feel much better. It’s good to be liked.

Don’t forget to comment on, tweet, blog about and StumbleUpon this article. And be sure to “+1” it if you’re on the newly launched Google+ social network. In fact, if you don’t want to read the rest of this article, at least stay on the page for a few minutes before clicking elsewhere. That way, it will appear to the site analytics as if you’ve read the whole thing.

Once, there was something called a point of view. And, after much strife and conflict, it eventually became a commonly held idea in some parts of the world that people were entitled to their own points of view.

Unfortunately, this idea is becoming an anachronism. When the Internet first came into public use, it was hailed as a liberation from conformity, a floating world ruled by passion, creativity, innovation and freedom of information. When it was hijacked first by advertising and then by commerce, it seemed like it had been fully co-opted and brought into line with human greed and ambition.

But there was one other element of human nature that the Internet still needed to conquer: the need to belong. The “like” button began on the website FriendFeed in 2007, appeared on Facebook in 2009, began spreading everywhere from YouTube to Amazon to most major news sites last year, and has now been officially embraced by Google as the agreeable, supportive and more status-conscious “+1.” As a result, we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.

Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.

Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.

“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Let America Be America Again

[div class=attrib]Let America Be America Again, Langston Hughes[end-div]

Let America be America again.
Let it be the dream it used to be.
Let it be the pioneer on the plain
Seeking a home where he himself is free.

(America never was America to me.)

Let America be the dream the dreamers dreamed–
Let it be that great strong land of love
Where never kings connive nor tyrants scheme
That any man be crushed by one above.

(It never was America to me.)

O, let my land be a land where Liberty
Is crowned with no false patriotic wreath,
But opportunity is real, and life is free,
Equality is in the air we breathe.

(There’s never been equality for me,
Nor freedom in this “homeland of the free.”)

Say, who are you that mumbles in the dark?
And who are you that draws your veil across the stars?

I am the poor white, fooled and pushed apart,
I am the Negro bearing slavery’s scars.
I am the red man driven from the land,
I am the immigrant clutching the hope I seek–
And finding only the same old stupid plan
Of dog eat dog, of mighty crush the weak.

I am the young man, full of strength and hope,
Tangled in that ancient endless chain
Of profit, power, gain, of grab the land!
Of grab the gold! Of grab the ways of satisfying need!
Of work the men! Of take the pay!
Of owning everything for one’s own greed!

I am the farmer, bondsman to the soil.
I am the worker sold to the machine.
I am the Negro, servant to you all.
I am the people, humble, hungry, mean–
Hungry yet today despite the dream.
Beaten yet today–O, Pioneers!
I am the man who never got ahead,
The poorest worker bartered through the years.

Yet I’m the one who dreamt our basic dream
In the Old World while still a serf of kings,
Who dreamt a dream so strong, so brave, so true,
That even yet its mighty daring sings
In every brick and stone, in every furrow turned
That’s made America the land it has become.
O, I’m the man who sailed those early seas
In search of what I meant to be my home–
For I’m the one who left dark Ireland’s shore,
And Poland’s plain, and England’s grassy lea,
And torn from Black Africa’s strand I came
To build a “homeland of the free.”

The free?

Who said the free? Not me?
Surely not me? The millions on relief today?
The millions shot down when we strike?
The millions who have nothing for our pay?
For all the dreams we’ve dreamed
And all the songs we’ve sung
And all the hopes we’ve held
And all the flags we’ve hung,
The millions who have nothing for our pay–
Except the dream that’s almost dead today.

O, let America be America again–
The land that never has been yet–
And yet must be–the land where every man is free.
The land that’s mine–the poor man’s, Indian’s, Negro’s, ME–
Who made America,
Whose sweat and blood, whose faith and pain,
Whose hand at the foundry, whose plow in the rain,
Must bring back our mighty dream again.

Sure, call me any ugly name you choose–
The steel of freedom does not stain.
From those who live like leeches on the people’s lives,
We must take back our land again,
America!

O, yes,
I say it plain,
America never was America to me,
And yet I swear this oath–
America will be!

Out of the rack and ruin of our gangster death,
The rape and rot of graft, and stealth, and lies,
We, the people, must redeem
The land, the mines, the plants, the rivers.
The mountains and the endless plain–
All, all the stretch of these great green states–
And make America again!

Undiscovered

[div class=attrib]From Eurozine:[end-div]

Neurological and Darwinistic strands in the philosophy of consciousness see human beings as no more than our evolved brains. Avoiding naturalistic explanations of human beings’ fundamental difference from other animals requires openness to more expansive approaches, argues Raymond Tallis.

For several decades I have been arguing against what I call biologism. This is the idea, currently dominant within secular humanist circles, that humans are essentially animals (or at least much more beastly than has been hitherto thought) and that we need therefore to look to the biological sciences, and only there, to advance our understanding of human nature. As a result of my criticism of this position I have been accused of being a Cartesian dualist, who thinks that the mind is some kind of a ghost in the machinery of the brain. Worse, it has been suggested that I am opposed to Darwinism, to neuroscience or to science itself. Worst of all, some have suggested that I have a hidden religious agenda. For the record, I regard neuroscience (which was my own area of research) as one of the greatest monuments of the human intellect; I think Cartesian dualism is a lost cause; and I believe that Darwin’s theory is supported by overwhelming evidence. Nor do I have a hidden religious agenda: I am an atheist humanist. And this is in fact the reason why I have watched the rise of biologism with such dismay: it is a consequence of the widespread assumption that the only alternative to a supernatural understanding of human beings is a strictly naturalistic one that sees us as just another kind of beast and, ultimately, as being less conscious agents than pieces of matter stitched into the material world.

This is to do humanity a gross disservice, as I think we are so much more than gifted chimps. Unpacking the most “ordinary” moment of human life reveals knowledge, skills, emotions, intuitions, a sense of past and future and of an infinitely elaborated world, that are not to be found elsewhere in the living world.

Biologism has two strands: “Neuromania” and “Darwinitis”. Neuromania arises out of the belief that human consciousness is identical with neural activity in certain parts of the brain. It follows from this that the best way to investigate what we humans truly are, to understand the origins of our beliefs, our predispositions, our morality and even our aesthetic pleasures, will be to peer into the brains of human subjects using the latest scanning technology. This way we shall know what is really going on when we are having experiences, thinking thoughts, feeling emotions, remembering memories, making decisions, being wise or silly, breaking the law, falling in love and so on.

The other strand is Darwinitis, rooted in the belief that evolutionary theory not only explains the origin of the species H. sapiens – which it does, of course – but also explains humans as they are today; that people are at bottom the organisms forged by the processes of natural selection and nothing more.

[div class=attrib]More from theSource here.[end-div]

Scientific Evidence for Indeterminism

[div class=attrib]From Evolutionary Philosophy:[end-div]

The advantage of being a materialist is that so much of our experience seems to point to a material basis for reality. Idealists usually have to appeal to some inner knowing as the justification of their faith that mind, not matter, is the foundation of reality. Unfortunately the appeal to inner knowing is exactly what a materialist has trouble with in the first place.

Charles Sanders Peirce was a logician and a scientist first and a philosopher second. He thought like a scientist, and as he developed his evolutionary philosophy his reasons for believing in it were very logical and scientific. One of the early insights that led him to his understanding of an evolving universe was his realization that the state of our world or its future was not necessarily predetermined.

One conclusion that materialism tends to lead to is a belief that ‘nothing comes from nothing.’ Everything comes from some form of matter or interaction between material things. Nothing just emerges spontaneously. Everything is part of an ongoing chain of cause and effect. The question of how the chain of cause and effect started is generally felt to be best left to the realm of metaphysics, unsuitable for scientific investigation.

And so the image of a materially based universe tends to lead to a deterministic account of reality. You start with something and then that something unravels according to immutable laws. As an image, picture this: a large bucket filled with pink and green tennis balls. Then imagine that there are two smaller buckets that are empty. This arrangement represents the starting point of the universe. The natural laws of this universe dictate that individual tennis balls will be removed from the large bucket and placed in one of the two smaller ones. If the ball that is removed is pink, it goes in the left-hand bucket; if it is green, it goes in the right-hand bucket. In this simple model the end state of the universe is going to be that the large bucket will be empty, the left-hand bucket will be filled with pink tennis balls and the right-hand bucket will be filled with green tennis balls. The outcome of the process is predetermined by the initial conditions and the laws governing the subsequent activity.
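A toy rendering of that bucket universe in code makes the point plain (my own sketch, not from the Evolutionary Philosophy post): however the order of events is shuffled, the final state never changes.

```python
import random

# The bucket universe: pink balls always go left, green balls always go right.
def run_universe(large_bucket):
    left, right = [], []               # the two smaller, initially empty buckets
    balls = list(large_bucket)
    random.shuffle(balls)              # the order of events can vary freely...
    for ball in balls:
        if ball == "pink":
            left.append(ball)
        else:
            right.append(ball)
    return sorted(left), sorted(right)

start = ["pink"] * 5 + ["green"] * 5
# ...but the end state is fixed by the initial conditions and the sorting law.
print(run_universe(start) == run_universe(start))  # True, every single time
```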

A belief in this kind of determinism seems to be constantly reinforced for us through our ongoing experience with the material universe. Go ahead: pick up a rock, hold it up and then let it go. It will fall. Every single time it will fall. It is predetermined that a rock that is held up in the air and then dropped will fall. Punch a wall. It will hurt – every single time. Over and over again our experience of everyday reality seems to reinforce the fact that we live in a universe which is governed exactly by immutable laws.

[div class=attrib]More from theSource here.[end-div]

Brilliant, but Distant: Most Far-Flung Known Quasar Offers Glimpse into Early Universe

[div class=attrib]From Scientific American:[end-div]

Peering far across space and time, astronomers have located a luminous beacon aglow when the universe was still in its infancy. That beacon, a bright astrophysical object known as a quasar, shines with the luminosity of 63 trillion suns as gas falling into a supermassive black hole compresses, heats up and radiates brightly. It is farther from Earth than any other known quasar—so distant that its light, emitted 13 billion years ago, is only now reaching Earth. Because of its extreme luminosity and record-setting distance, the quasar offers a unique opportunity to study the conditions of the universe as it underwent an important transition early in cosmic history.

By the time the universe was one billion years old, the once-neutral hydrogen gas atoms in between galaxies had been almost completely stripped of their electrons (ionized) by the glow of the first massive stars. But the full timeline of that process, known as re-ionization because it separated protons and electrons, as they had been in the first 380,000 years post–big bang, is somewhat uncertain. Quasars, with their tremendous intrinsic brightness, should make for excellent markers of the re-ionization process, acting as flashlights to illuminate the intergalactic medium. But quasar hunters working with optical telescopes had only been able to see back as far as 870 million years after the big bang, when the intergalactic medium’s transition from neutral to ionized was almost complete. (The universe is now 13.75 billion years old.) Beyond that point, a quasar’s light has been so stretched, or redshifted, by cosmic expansion that it no longer falls in the visible portion of the electromagnetic spectrum but rather in the longer-wavelength infrared.

Daniel Mortlock, an astrophysicist at Imperial College London, and his colleagues used that fact to their advantage. The researchers looked for objects that showed up in a large-area infrared sky survey but not in a visible-light survey covering the same area of sky, essentially isolating the high-redshift objects. They could thus discover a quasar, known as ULAS J1120+0641, at redshift 7.085, corresponding to a time just 770 million years after the big bang. That places the newfound quasar about 100 million years earlier in cosmic history than the previous record holder, which was at redshift 6.44. Mortlock and his colleagues report their finding in the June 30 issue of Nature. (Scientific American is part of Nature Publishing Group.)
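The quoted ages can be reproduced to a good approximation with a few lines of numerical integration (a rough check under generic flat ΛCDM parameters of my own choosing, roughly H0 = 70 km/s/Mpc and Ωm = 0.3, not the values used in the Nature paper):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat-LambdaCDM parameters (assumed, not from the paper).
H0 = 70.0                          # Hubble constant, km/s/Mpc
omega_m, omega_lambda = 0.3, 0.7
H0_per_gyr = H0 * 1.022e-3         # 1 km/s/Mpc is roughly 1.022e-3 per Gyr

def age_at_redshift(z):
    """Cosmic age at redshift z: t(z) = integral_z^inf dz' / [(1+z') H(z')]."""
    integrand = lambda zp: 1.0 / ((1.0 + zp) * H0_per_gyr *
                                  np.sqrt(omega_m * (1.0 + zp) ** 3 + omega_lambda))
    t, _ = quad(integrand, z, np.inf)
    return t

print(f"Age at z = 7.085 (new quasar):      {age_at_redshift(7.085):.2f} Gyr")  # ~0.75
print(f"Age at z = 6.44  (previous record): {age_at_redshift(6.44):.2f} Gyr")   # ~0.84
print(f"Age today (z = 0):                  {age_at_redshift(0.0):.1f} Gyr")    # ~13.5
```

With these inputs the new quasar’s light was emitted roughly 0.75 billion years after the big bang, consistent with the 770 million years quoted above once small differences in the assumed cosmological parameters are allowed for.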

[div class=attrib]More from theSource here.[end-div]

New Tevatron collider result may help explain the matter-antimatter asymmetry in the universe

[div class=attrib]From Symmetry Breaking:[end-div]

About a year ago, the DZero collaboration at Fermilab published a tantalizing result in which the universe unexpectedly showed a preference for matter over antimatter. Now the collaboration has more data, and the evidence for this effect has grown stronger.

The result is extremely exciting: The question of why our universe should consist solely of matter is one of the burning scientific questions of our time. Theory predicts that matter and antimatter were made in equal quantities. If something hadn’t slightly favored matter over antimatter, our universe would consist of a bath of photons and little else. Matter wouldn’t exist.

The Standard Model predicts a value near zero for one of the parameters that is associated with the difference between the production of muons and antimuons in B meson decays. The DZero results from 2010 and 2011 differ from zero and are consistent with each other. The vertical bars of the measurements indicate their uncertainty. 

The 2010 measurement looked at muons and antimuons emerging from the decays of neutral mesons containing bottom quarks, which is a source that scientists have long expected to be a fruitful place to study the behavior of matter and antimatter under high-energy conditions. DZero scientists found a 1 percent difference between the production of pairs of muons and pairs of antimuons in B meson decays at Fermilab’s Tevatron collider. Like all measurements, that measurement had an uncertainty associated with it. Specifically, there was about a 0.07 percent chance that the measurement could come from a random fluctuation of the data recorded. That’s a tiny probability, but since DZero makes thousands of measurements, scientists expect to see the occasional rare fluctuation that turns out to be nothing.

During the last year, the DZero collaboration has taken more data and refined its analysis techniques. In addition, other scientists have raised questions and requested additional cross-checks. One concern was whether the muons and antimuons are actually coming from the decay of B mesons, rather than some other source.

Now, after incorporating almost 50 percent more data and dozens of cross-checks, DZero scientists are even more confident in the strength of their result. The probability that the observed effect is from a random fluctuation has dropped quite a bit and now is only 0.005 percent. DZero scientists will present the details of their analysis in a seminar geared toward particle physicists later today.

Scientists are a cautious bunch and require a high level of certainty to claim a discovery. For a measurement at the level of certainty achieved in the summer of 2010, particle physicists claim that they have evidence for an unexpected phenomenon. A claim of discovery requires a higher level of certainty.

If the earlier measurement were a fluctuation, scientists would expect the uncertainty of the new result to grow, not get smaller. Instead, the improvement is exactly what scientists expect if the effect is real. But the uncertainty associated with the new result is still too high to claim a discovery. For a discovery, particle physicists require an uncertainty of less than 0.00005 percent.
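For readers who want to translate those probabilities into the “sigma” language particle physicists usually use, a one-sided Gaussian conversion looks like this (my own conversion; the article does not spell out which convention DZero uses):

```python
from scipy.stats import norm

# Converting the quoted probabilities into one-sided Gaussian significances.
quoted = [
    ("2010 result         (0.07%)",     0.0007),
    ("2011 result         (0.005%)",    0.00005),
    ("discovery threshold (0.00005%)",  5e-7),
]
for label, p in quoted:
    print(f"{label}: p = {p:g}  ->  {norm.isf(p):.1f} sigma")
```

The 2010 number lands near the conventional three-sigma “evidence” mark, while the quoted discovery threshold corresponds to roughly five sigma.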

The new result suggests that DZero is hot on the trail of a crucial clue in one of the defining questions of all time: Why are we here at all?

[div class=attrib]More from theSource here.[end-div]