Hello Internet; Goodbye Memory

Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.

Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one from Columbia University cited below, explore this in detail.

[div class=attrib]From Technology Review:[end-div]

A study says that we rely on external tools, including the Internet, to augment our memory.

The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.

Researchers and writers have debated how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.

Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.

One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.

[div class=attrib]More from theSource here.[end-div]

The Good, the Bad and the Ugly – 40 years on

One of the most fascinating and (in)famous experiments in social psychology began in the bowels of Stanford University 40 years ago next month. The experiment set out to evaluate how people react to being powerless; by its conclusion, however, it had become a broader study of role assignment and reaction to authority.

The Stanford Prison Experiment incarcerated male college student volunteers in a mock prison for six fateful days. Some of the students were selected to be prison guards; the remainder would be prisoners. The researchers, led by psychology professor Philip Zimbardo, encouraged the guards to think of themselves as actual guards in a real prison. What happened during these six days in “prison” is the stuff of social science legend. The results continue to shock psychologists to this day; many were not prepared for an outcome in which guards took their roles to the extreme, becoming authoritarian and mentally abusive, while prisoners became downtrodden and eventually rebellious. A whistle-blower brought the experiment to an abrupt end (it was to have continued for two weeks).

Forty years on, researchers went back to interview Professor Zimbardo and some of the participating guards and prisoners to probe their feelings today. Recollections from one of the guards are below.

[div class=attrib]From Stanford Magazine:[end-div]

I was just looking for some summer work. I had a choice of doing this or working at a pizza parlor. I thought this would be an interesting and different way of finding summer employment.

The only person I knew going in was John Mark. He was another guard and wasn’t even on my shift. That was critical. If there were prisoners in there who knew me before they encountered me, then I never would have been able to pull off anything I did. The act that I put on—they would have seen through it immediately.

What came over me was not an accident. It was planned. I set out with a definite plan in mind, to try to force the action, force something to happen, so that the researchers would have something to work with. After all, what could they possibly learn from guys sitting around like it was a country club? So I consciously created this persona. I was in all kinds of drama productions in high school and college. It was something I was very familiar with: to take on another personality before you step out on the stage. I was kind of running my own experiment in there, by saying, “How far can I push these things and how much abuse will these people take before they say, ‘knock it off?'” But the other guards didn’t stop me. They seemed to join in. They were taking my lead. Not a single guard said, “I don’t think we should do this.”

The fact that I ramped up the intimidation and the mental abuse without any real sense as to whether I was hurting anybody—I definitely regret that. But in the long run, no one suffered any lasting damage. When the Abu Ghraib scandal broke, my first reaction was, this is so familiar to me. I knew exactly what was going on. I could picture myself in the middle of that and watching it spin out of control. When you have little or no supervision as to what you’re doing, and no one steps in and says, “Hey, you can’t do this”—things just keep escalating. You think, how can we top what we did yesterday? How do we do something even more outrageous? I felt a deep sense of familiarity with that whole situation.

Sometimes when people know about the experiment and then meet me, it’s like, My God, this guy’s a psycho! But everyone who knows me would just laugh at that.

[div class=attrib]More from theSource here.[end-div]

3D Printing – A demonstration

Three-dimensional “printing” has been around for a few years now, but the technology continues to advance by leaps and bounds. It has already progressed to such an extent that some 3D print machines can now “print” objects with moving parts, and in color as well. And we all thought those cool replicator machines in Star Trek were the stuff of science fiction.

[tube]LQfYm4ZVcVI[/tube]

Book Review: “Millennium People”: J.G. Ballard’s last hurrah

[div class=attrib]From Salon:[end-div]

In this, his last novel, the darkly comic “Millennium People,” J.G. Ballard returns to many of the themes that have established him as one of the 20th century’s principal chroniclers of modernity as dystopia. Throughout his career Ballard, who died in 2009, wrote many different variations on the same theme: A random act of violence propels a somewhat affectless protagonist into a violent pathology lurking just under the tissue-thin layer of postmodern civilization. As in “Crash” (1973) and “Concrete Island” (1974), the car parks, housing estates, motorways and suburban sprawl of London in “Millennium People” form a psychological geography. At its center, Heathrow Airport — a recurrent setting for Ballard — exerts its subtly malevolent pull on the bored lives and violent dreams of the alienated middle class.

“Millennium People” begins with the explosion of a bomb at Heathrow, which kills the ex-wife of David Markham, an industrial psychologist. The normally passive Markham sets out to investigate the anonymous bombing and the gated community of Chelsea Marina, a middle-class neighborhood that has become ground zero for a terrorist group and a burgeoning rebellion of London’s seemingly docile middle class. Exploited not so much for their labor as for their deeply ingrained and self-policing sense of social responsibility and good manners, the educated and professional residents of Chelsea Marina regard themselves as the “new proletariat,” with their exorbitant maintenance and parking fees as the new form of oppression, their careers, cultured tastes and education the new gulag.

In the company of a down-and-out priest and a film professor turned Che Guevara of the Volvo set, Markham quickly discovers that the line between amateur detective and amateur terrorist is not so clear, as he is drawn deeper into acts of sabotage and violence against the symbols and institutions of his own safe and sensible life. Targets include travel agencies, video stores, the Tate Modern, the BBC and National Film Theater — all “soporifics” designed to con people into believing their lives are interesting or going somewhere.

[div class=attrib]More from theSource here.[end-div]

Happy Birthday Neptune

One hundred and sixty-four years ago, roughly one Neptunian year, Neptune was first observed by telescope. Significantly, it was the first planet to be discovered deliberately: the existence and location of the gas giant were calculated mathematically. It was subsequently located by telescope, on 24 September 1846, within one degree of the mathematically predicted position. Astronomers had hypothesized Neptune’s existence from perturbations in the orbit of its planetary neighbor, Uranus, around the sun, which could only be explained by the presence of another object in a nearby orbit. A triumph for the scientific method, and besides, it’s beautiful too.

[div class=attrib]Image courtesy of NASA.[end-div]

Culturally Specific Mental Disorders: A Bad Case of the Brain Fags

Is this man buff enough? Image courtesy of Slate

If you happen to have just read The Psychopath Test by Jon Ronson, this article in Slate is appropriately timely, and presents new fodder for continuing research (and a sequel). It would therefore come as no surprise to find Mr. Ronson trekking through Newfoundland in search of “Old Hag Syndrome”, a type of sleep paralysis; visiting art museums in Italy for “Stendhal Syndrome”, a delusional disorder experienced by Italians after studying artistic masterpieces; or checking on Nigerian college students afflicted by “Brain Fag Syndrome”. Then there are “Wild Man Syndrome” from New Guinea (a syndrome combining hyperactivity, clumsiness and forgetfulness); “Koro Syndrome” (a delusion of disappearing protruding body parts), first described in China over 2,000 years ago; “Jiko-shisen-kyofu” from Japan (a fear of offending others by glancing at them); and, here in the West, “Muscle Dysmorphia Syndrome” (a delusion, common in weight-lifters, that one’s body is insufficiently ripped).

All of these and more can be found in the latest version of the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition).

[div class=attrib]From Slate:[end-div]

In 1951, Hong Kong psychiatrist Pow-Meng Yap authored an influential paper in the Journal of Mental Sciences on the subject of “peculiar psychiatric disorders”—those that did not fit neatly into the dominant disease-model classification scheme of the time and yet appeared to be prominent, even commonplace, in certain parts of the world. Curiously, these same conditions—which include “amok” in Southeast Asia and bouffée délirante in French-speaking countries—were almost unheard of outside particular cultural contexts. The American Psychiatric Association has conceded that certain mysterious mental afflictions are so common, in some places, that they do in fact warrant inclusion as “culture-bound syndromes” in the official Diagnostic and Statistical Manual of Mental Disorders.

The working version of this manual, the DSM-IV, specifies 25 such syndromes. Take “Old Hag Syndrome,” a type of sleep paralysis in Newfoundland in which one is visited by what appears to be a rather unpleasant old hag sitting on one’s chest at night. (If I were a bitter, divorced straight man, I’d probably say something diabolical about my ex-wife here.) Then there’s gururumba, or “Wild Man Syndrome,” in which New Guinean males become hyperactive, clumsy, kleptomaniacal, and conveniently amnesic, “Brain Fag Syndrome” (more on that in a moment), and “Stendhal Syndrome,” a delusional disorder experienced mostly by Italians after gazing upon artistic masterpieces. The DSM-IV defines culture-bound syndromes as “recurrent, locality-specific patterns of aberrant behavior and troubling experience that may or may not be linked to a particular diagnostic category.”

And therein lies the nosological pickle: The symptoms of culture-bound syndromes often overlap with more general, known psychiatric conditions that are universal in nature, such as schizophrenia, body dysmorphia, and social anxiety. What varies across cultures, and is presumably moulded by them, is the unique constellation of symptoms, or “idioms of distress.”

Some scholars believe that many additional distinct culture-bound syndromes exist. One that’s not in the manual but could be, argue psychiatrists Gen Kanayama and Harrison Pope in a short paper published earlier this year in the Harvard Review of Psychiatry, is “muscle dysmorphia.” The condition is limited to Western males, who suffer the delusion that they are insufficiently ripped. “As a result,” write the authors, “they may lift weights compulsively in the gym, often gain large amounts of muscle mass, yet still perceive themselves as too small.” Within body-building circles, in fact, muscle dysmorphia has long been recognized as a sort of reverse anorexia nervosa. But it’s almost entirely unheard of among Asian men. Unlike hypermasculine Western heroes such as Hercules, Thor, and the chiseled Arnold of yesteryear, the Japanese and Chinese have tended to prefer their heroes fully clothed, mentally acute, and lithe, argue Kanayama and Pope. In fact, they say anabolic steroid use is virtually nonexistent in Asian countries, even though the drugs are considerably easier to obtain, being available without a prescription at most neighborhood drugstores.

[div class=attrib]More from theSource here.[end-div]

Disconnected?

[div class=attrib]From Slate:[end-div]

Have you heard that divorce is contagious? A lot of people have. Last summer a study claiming to show that break-ups can propagate from friend to friend to friend like a marriage-eating bacillus spread across the news agar from CNN to CBS to ABC with predictable speed. “Think of this ‘idea’ of getting divorced, this ‘option’ of getting divorced like a virus, because it spreads more or less the same way,” explained University of California-San Diego professor James Fowler to the folks at Good Morning America.

It’s a surprising, quirky, and seemingly plausible finding, which explains why so many news outlets caught the bug. But one weird thing about the media outbreak was that the study on which it was based had never been published in a scientific journal. The paper had been posted to the Social Science Research Network web site, a sort of academic way station for working papers whose tagline is “Tomorrow’s Research Today.” But tomorrow had not yet come for the contagious divorce study: It had never actually passed peer review, and still hasn’t. “It is under review,” Fowler explained last week in an email. He co-authored the paper with his long-time collaborator, Harvard’s Nicholas Christakis, and lead author Rose McDermott.

A few months before the contagious divorce story broke, Slate ran an article I’d written based on a related, but also unpublished, scientific paper. The mathematician Russell Lyons had posted a dense treatise on his website suggesting that the methods employed by Christakis and Fowler in their social network studies were riddled with statistical errors at many levels. The authors were claiming—in the New England Journal of Medicine, in a popular book, in TED talks, in snappy PR videos—that everything from obesity to loneliness to poor sleep could spread from person to person to person like a case of the galloping crud. But according to Lyons and several other experts, their arguments were shaky at best. “It’s not clear that the social contagionists have enough evidence to be telling people that they owe it to their social network to lose weight,” I wrote last April. As for the theory that obesity and divorce and happiness contagions radiate from human beings through three degrees of friendship, I concluded “perhaps it’s best to flock away for now.”

The case against Christakis and Fowler has grown since then. The Lyons paper passed peer review and was published in the May issue of the journal Statistics, Politics, and Policy. Two other recent papers raise serious doubts about their conclusions. And now something of a consensus is forming within the statistics and social-networking communities that Christakis and Fowler’s headline-grabbing contagion papers are fatally flawed. Andrew Gelman, a professor of statistics at Columbia, wrote a delicately worded blog post in June noting that he’d “have to go with Lyons” and say that the claims of contagious obesity, divorce and the like “have not been convincingly demonstrated.” Another highly respected social-networking expert, Tom Snijders of Oxford, called the mathematical model used by Christakis and Fowler “not coherent.” And just a few days ago, Cosma Shalizi, a statistician at Carnegie Mellon, declared, “I agree with pretty much everything Snijders says.”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: If You Forget Me

Pablo Neruda (1904–1973)

[div class=attrib]If You Forget Me, Pablo Neruda[end-div]

I want you to know
one thing.

You know how this is:
if I look
at the crystal moon, at the red branch
of the slow autumn at my window,
if I touch
near the fire
the impalpable ash
or the wrinkled body of the log,
everything carries me to you,
as if everything that exists,
aromas, light, metals,
were little boats
that sail
toward those isles of yours that wait for me.

Well, now,
if little by little you stop loving me
I shall stop loving you little by little.

If suddenly
you forget me
do not look for me,
for I shall already have forgotten you.

If you think it long and mad,
the wind of banners
that passes through my life,
and you decide
to leave me at the shore
of the heart where I have roots,
remember
that on that day,
at that hour,
I shall lift my arms
and my roots will set off
to seek another land.

But
if each day,
each hour,
you feel that you are destined for me
with implacable sweetness,
if each day a flower
climbs up to your lips to seek me,
ah my love, ah my own,
in me all that fire is repeated,
in me nothing is extinguished or forgotten,
my love feeds on your love, beloved,
and as long as you live it will be in your arms
without leaving mine.

The Allure of Steampunk Videotelephony and the Telephonoscope

Video telephony as imagined in 1910

A concept for the videophone surfaced just a couple of years after the telephone was patented in the United States. The telephonoscope, as it was called, first appeared in Victorian journals and early French science fiction in 1878.

In 1891 Alexander Graham Bell recorded his concept of an electrical radiophone, discussing “…the possibility of seeing by electricity”. He later went on to predict that “…the day would come when the man at the telephone would be able to see the distant person to whom he was speaking”.

The world’s first videophone service entered operation in 1934, in Germany. It was offered in select post offices linking several major German cities, and provided bi-directional voice and image on 8-inch square displays. In the U.S., AT&T launched the Picturephone in the mid-1960s. However, the costly equipment, high cost per call, and inconveniently located public video-telephone booths ensured that the service would never gain public acceptance. Similar to the U.S. experience, major telephone companies in France, Japan and Sweden had limited success with video-telephony during the 1970s and ’80s.

Major improvements in video technology, telecommunications deregulation and increases in bandwidth during the 1980s-90s brought the price point down considerably. However, significant usage remained mostly within the realm of major corporations due to the still not insignificant investment in equipment and cost of bandwidth.

Fast forward to the 21st century. Skype and other IP (internet protocol) based services have made videochat commonplace and affordable, and in most cases free. It now seems that videochat has become almost ubiquitous. Recent moves into this space by tech heavyweights, like Apple with FaceTime, Microsoft with its acquisition of Skype, Google with the video calling component of its Google Plus social network, and Facebook’s new video calling service, will in all likelihood add further momentum.

Of course, while videochat is an effective communication tool, it does have a cost in terms of personal and social consequences compared with its non-video cousin, the telephone. Next time you videochat rather than make a telephone call, you will surely be paying greater attention to your bad hair and poor grooming, your crumpled clothes, uncoordinated pajamas or lack thereof, the unwanted visitors in the background shot, and the not so subtle back-lighting that focuses attention on the clutter in your office or bedroom. Doesn’t it make you hark back to the days of the simple telephone? Either that, or perhaps you are drawn to the more alluring and elegant steampunk form of videochat as imagined by the Victorians, in the image above.

The Best of States, the Worst of States

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

Are these maps cartograms or mere infographics?

An ‘information graphic’ is defined as any graphic representation of data. It follows from that definition that infographics are less determined by type than by purpose, which is to represent complex information in a readily graspable graphic format. Those formats are often, but not exclusively, diagrams, flow charts, and maps.

Although one definition of maps – the graphic representation of spatial data – is very similar to that of infographics, the two are easily distinguished by, among other things, the context of the latter, which are usually confined to and embedded in technical and journalistic writing.

Cartograms are a subset of infographics, limited to one type of graphic representation: maps. On these maps, one set of quantitative information (usually surface or distance) is replaced by another (often demographic data or electoral results). The result is an informative distortion of the map (1).

The distortion on these maps is not of the distance-bending or surface-stretching kind. It merely replaces the names of US states with statistical information relevant to each of them (2). This substitution is non-quantitative, affecting the toponymy rather than the topography of the map. So is this a mere infographic? As the information presented is statistical (each label describes each state as first or last in a Top 50), I’d say this is – if you’ll excuse the pun – a borderline case.

What’s more relevant, from this blog’s perspective, is that it is an atypical, curious and entertaining use of cartography.

The first set of maps labels each and every one of the states as best and worst at something. All of those distinctions, both the favourable and the unfavourable kind, are backed up by some sort of evidence.

The first map, the United States of Awesome, charts fifty things that each state of the Union is best at. Most of those indicators, 12 in all, are related to health and well-being (3). Ten are economic (4), six environmental (5), five educational (6). Three can be classified as ‘moral’, even if these particular distinctions make for strange bedfellows (7).

The best thing that can be said about Missouri and Illinois, apparently, is that they’re extremely average (8). While that may excite few people, it will greatly interest political pollsters and anyone in need of a focus group. Virginia and Indiana are the states with the most birthplaces of presidents and vice-presidents, respectively. South Carolinians prefer to spend their time golfing, Pennsylvanians hunting. Violent crime is lowest in Maine, public corruption in Nebraska. The most bizarre distinctions, finally, are reserved for New Mexico (Spaceport Home), Oklahoma (Best Licence Plate) and Missouri (Bromine Production). If that’s the best thing about those states, what might be the worst?

[div class=attrib]More from theSource here.[end-div]

Cy Twombly, Idiosyncratic Painter, Dies at 83

Cy Twombly. Image courtesy of Sundance Channel

[div class=attrib]From the New York Times:[end-div]

Cy Twombly, whose spare childlike scribbles and poetic engagement with antiquity left him stubbornly out of step with the movements of postwar American art even as he became one of the era’s most important painters, died in Rome Tuesday. He was 83.

The cause was not immediately known, although Mr. Twombly had suffered from cancer. His death was announced by the Gagosian Gallery, which represents his work.

In a career that slyly subverted Abstract Expressionism, toyed briefly with Minimalism, seemed barely to acknowledge Pop Art and anticipated some of the concerns of Conceptualism, Mr. Twombly was a divisive artist almost from the start. The curator Kirk Varnedoe, on the occasion of a 1994 retrospective at the Museum of Modern Art, wrote that his work was “influential among artists, discomfiting to many critics and truculently difficult not just for a broad public, but for sophisticated initiates of postwar art as well.” The critic Robert Hughes called him “the Third Man, a shadowy figure, beside that vivid duumvirate of his friends Jasper Johns and Robert Rauschenberg.”

Mr. Twombly’s decision to settle permanently in southern Italy in 1957 as the art world shifted decisively in the other direction, from Europe to New York, was only the most symbolic of his idiosyncrasies. He avoided publicity throughout his life and mostly ignored his critics, who questioned constantly whether his work deserved a place at the forefront of 20th-century abstraction, though he lived long enough to see it arrive there. It didn’t help that his paintings, because of their surface complexity and whirlwinds of tiny detail – scratches, erasures, drips, penciled fragments of Italian and classical verse amid scrawled phalluses and buttocks – lost much of their power in reproduction.

But Mr. Twombly, a tall, rangy Virginian who once practiced drawing in the dark to make his lines less purposeful, steadfastly followed his own program and looked to his own muses: often literary ones like Catullus, Rumi, Pound and Rilke. He seemed to welcome the privacy that came with unpopularity.

“I had my freedom and that was nice,” he said in a rare interview, with Nicholas Serota, the director of the Tate, before a 2008 survey of his career at the Tate Modern.

The critical low point probably came after a 1964 exhibition at the Leo Castelli Gallery in New York that was widely panned. The artist and writer Donald Judd, who was hostile toward painting in general, was especially damning even so, calling the show a fiasco. “There are a few drips and splatters and an occasional pencil line,” he wrote in a review. “There isn’t anything to these paintings.”

[div class=attrib]More from theSource here.[end-div]

Book Review: The Psychopath Test, by Jon Ronson

Hilarious and disturbing. I suspect Jon Ronson would strike a couple of checkmarks in the Hare PCL-R Checklist against my name for finding his latest work both hilarious and disturbing. Would this, perhaps, make me a psychopath?

Jon Ronson is the author of The Psychopath Test. The Hare PCL-R, named for its inventor, Canadian psychologist Bob Hare, is the gold standard in personality trait measurement for psychopathic disorder (officially known as Antisocial Personality Disorder).

Ronson’s book is a fascinating journey through the “madness industry”, covering psychiatrists, clinical psychologists, criminal scientists, criminal profilers, and of course their clients: patients, criminals and the “insane” at large. Fascinated by the psychopathic traits that the industry applied to the criminally insane, Ronson goes on to explore these behavior and personality traits in the general population. And, perhaps unsurprisingly, he finds that a not insignificant proportion of business leaders and others in positions of authority could be classified as “psychopaths” based on the standard PCL-R checklist.

Ronson’s stories are poignant. He tells us the tale of Tony, who feigned madness to avoid what he believed would have been a harsher prison sentence for a violent crime. Instead, Tony found himself in Broadmoor, a notorious maximum-security institution for the criminally insane. Twelve years on, Tony, still incarcerated, finds it impossible to convince anyone of his sanity, despite behaving quite normally. His doctors now admit that he was sane at the time of admission, but agree that he must have been nuts to feign insanity in the first place; furthermore, only someone insane could behave so “sanely” while surrounded by the insane!

Tony’s story and the other characters that Ronson illuminates in this work are thoroughly memorable, especially Al Dunlap, the empathy-poor former CEO of Sunbeam, perhaps one of the high-functioning psychopaths who live in our midst. Peppered throughout Ronson’s interviews with madmen and madwomen are his perpetual anxiety and self-reflection; now versed in diagnostic tools such as the PCL-R checklist, Ronson begins seeing “psychopaths” everywhere.

My only criticism of the book is that Jon Ronson should have made it 200 pages longer and focused much more on the “psychopathic” personalities that roam amongst us, not just those who live behind bars, and on the madness industry itself, now seemingly led by the major pharmaceutical companies.

The Cutting-Edge Physics of Jackson Pollock

Untitled, ca. 1948-49. Jackson Pollock

[div class=attrib]From Wired:[end-div]

Jackson Pollock, famous for his deceptively random-seeming drip paintings, took advantage of certain features of fluid dynamics years before physicists thought to study them.

“His particular painting technique essentially lets physics be a player in the creative process,” said physicist Andrzej Herczynski of Boston College, coauthor of a new paper in Physics Today that analyzes the physics in Pollock’s art. “To the degree that he lets physics take a role in the painting process, he is inviting physics to be a coauthor of his pieces.”

Pollock’s unique technique — letting paint drip and splatter on the floor rather than spreading it on a vertical canvas — revolutionized the art world in the 1940s. The resulting streaks and blobs look haphazard, but art historians and, more recently, physicists argue they’re anything but. Some have suggested that the snarls of paint have lasting appeal because they reflect fractal geometry that shows up in clouds and coast lines.

Now, Boston College art historian Claude Cernuschi, Harvard mathematician Lakshminarayanan Mahadevan and Herczynski have turned the tools of physics on Pollock’s painting process. In what they believe is the first quantitative analysis of drip painting, the researchers derived an equation for how Pollock spread paint.

The team focused on the painting Untitled 1948-49, which features wiggling lines and curlicues of red paint. Those loops formed through a fluid instability called coiling, in which thick fluids fold onto themselves like coils of rope.

“People thought perhaps Pollock created this effect by wiggling his hand in a sinusoidal way, but he didn’t,” Herczynski said.

Coiling is familiar to anyone who’s ever squeezed honey on toast, but it has only recently grabbed the attention of physicists. Recent studies have shown that the patterns fluids form as they fall depend on their viscosity and their speed. Viscous liquids fall in straight lines when moving quickly, but form loops, squiggles and figure eights when poured slowly, as seen in this video of honey falling on a conveyor belt.

The first physics papers that touched on this phenomenon appeared in the late 1950s, but Pollock knew all about it in 1948. Pollock was famous for seeking out different kinds of paints than anyone else in the art world, and for mixing his paints with solvents to make them thicker or thinner. Instead of using a brush or pouring paint directly from a can, he lifted paint with a rod and let it dribble onto the canvas in continuous streams. By moving his arm at different speeds and using paints of different thicknesses, he could control how much coiling showed up in the final painting.

[div class=attrib]More from theSource here.[end-div]

The Homogenous Culture of “Like”

[div class=attrib]Echo and Narcissus, John William Waterhouse [Public domain], via Wikimedia Commons[end-div]

About 12 months ago I committed suicide — internet suicide that is. I closed my personal Facebook account after recognizing several important issues. First, it was a colossal waste of time; time that I could and should be using more productively. Second, it became apparent that following, belonging and agreeing with others through the trivial “wall” status-in-a-can postings and now pervasive “like button” was nothing other than a declaration of mindless group-think and a curious way to maintain social standing. So, my choice was clear: become part of a group that had similar interests, like-minded activities, same politics, parallel beliefs, common likes and dislikes; or revert to my own weirdly independent path. I chose the latter, rejecting the road towards a homogeneity of ideas and a points-based system of instant self-esteem.

This facet of the Facebook ecosystem has an effect similar to the filter bubble that I described in a previous post, The Technology of Personalization and the Bubble Syndrome. In both cases, my explicit choices on Facebook, such as which friends I follow or which content I “like”, and my implicit browsing behaviors increasingly filter what I see and don’t see, narrowing the world of ideas to which I am exposed. This cannot be good.

So, although I may incur the wrath of author Neil Strauss for including an excerpt of his recent column below, I cannot help but “like” what he has to say. More importantly, he does a much more eloquent job of describing this issue, which commoditizes social relationships and, dare I say it, lowers the barrier to entry for narcissists looking to grow and fine-tune their skills.

[div class=attrib]By Neil Strauss for the Wall Street Journal:[end-div]

If you happen to be reading this article online, you’ll notice that right above it, there is a button labeled “like.” Please stop reading and click on “like” right now.

Thank you. I feel much better. It’s good to be liked.

Don’t forget to comment on, tweet, blog about and StumbleUpon this article. And be sure to “+1” it if you’re on the newly launched Google+ social network. In fact, if you don’t want to read the rest of this article, at least stay on the page for a few minutes before clicking elsewhere. That way, it will appear to the site analytics as if you’ve read the whole thing.

Once, there was something called a point of view. And, after much strife and conflict, it eventually became a commonly held idea in some parts of the world that people were entitled to their own points of view.

Unfortunately, this idea is becoming an anachronism. When the Internet first came into public use, it was hailed as a liberation from conformity, a floating world ruled by passion, creativity, innovation and freedom of information. When it was hijacked first by advertising and then by commerce, it seemed like it had been fully co-opted and brought into line with human greed and ambition.

But there was one other element of human nature that the Internet still needed to conquer: the need to belong. The “like” button began on the website FriendFeed in 2007, appeared on Facebook in 2009, began spreading everywhere from YouTube to Amazon to most major news sites last year, and has now been officially embraced by Google as the agreeable, supportive and more status-conscious “+1.” As a result, we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.

Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.

Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.

“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Let America Be America Again

[div class=attrib]Let America Be America Again, Langston Hughes[end-div]

Let America be America again.
Let it be the dream it used to be.
Let it be the pioneer on the plain
Seeking a home where he himself is free.

(America never was America to me.)

Let America be the dream the dreamers dreamed–
Let it be that great strong land of love
Where never kings connive nor tyrants scheme
That any man be crushed by one above.

(It never was America to me.)

O, let my land be a land where Liberty
Is crowned with no false patriotic wreath,
But opportunity is real, and life is free,
Equality is in the air we breathe.

(There’s never been equality for me,
Nor freedom in this “homeland of the free.”)

Say, who are you that mumbles in the dark?
And who are you that draws your veil across the stars?

I am the poor white, fooled and pushed apart,
I am the Negro bearing slavery’s scars.
I am the red man driven from the land,
I am the immigrant clutching the hope I seek–
And finding only the same old stupid plan
Of dog eat dog, of mighty crush the weak.

I am the young man, full of strength and hope,
Tangled in that ancient endless chain
Of profit, power, gain, of grab the land!
Of grab the gold! Of grab the ways of satisfying need!
Of work the men! Of take the pay!
Of owning everything for one’s own greed!

I am the farmer, bondsman to the soil.
I am the worker sold to the machine.
I am the Negro, servant to you all.
I am the people, humble, hungry, mean–
Hungry yet today despite the dream.
Beaten yet today–O, Pioneers!
I am the man who never got ahead,
The poorest worker bartered through the years.

Yet I’m the one who dreamt our basic dream
In the Old World while still a serf of kings,
Who dreamt a dream so strong, so brave, so true,
That even yet its mighty daring sings
In every brick and stone, in every furrow turned
That’s made America the land it has become.
O, I’m the man who sailed those early seas
In search of what I meant to be my home–
For I’m the one who left dark Ireland’s shore,
And Poland’s plain, and England’s grassy lea,
And torn from Black Africa’s strand I came
To build a “homeland of the free.”

The free?

Who said the free? Not me?
Surely not me? The millions on relief today?
The millions shot down when we strike?
The millions who have nothing for our pay?
For all the dreams we’ve dreamed
And all the songs we’ve sung
And all the hopes we’ve held
And all the flags we’ve hung,
The millions who have nothing for our pay–
Except the dream that’s almost dead today.

O, let America be America again–
The land that never has been yet–
And yet must be–the land where every man is free.
The land that’s mine–the poor man’s, Indian’s, Negro’s, ME–
Who made America,
Whose sweat and blood, whose faith and pain,
Whose hand at the foundry, whose plow in the rain,
Must bring back our mighty dream again.

Sure, call me any ugly name you choose–
The steel of freedom does not stain.
From those who live like leeches on the people’s lives,
We must take back our land again,
America!

O, yes,
I say it plain,
America never was America to me,
And yet I swear this oath–
America will be!

Out of the rack and ruin of our gangster death,
The rape and rot of graft, and stealth, and lies,
We, the people, must redeem
The land, the mines, the plants, the rivers.
The mountains and the endless plain–
All, all the stretch of these great green states–
And make America again!

Undiscovered

[div class=attrib]From Eurozine:[end-div]

Neurological and Darwinistic strands in the philosophy of consciousness see human beings as no more than our evolved brains. Avoiding naturalistic explanations of human beings’ fundamental difference from other animals requires openness to more expansive approaches, argues Raymond Tallis.

For several decades I have been arguing against what I call biologism. This is the idea, currently dominant within secular humanist circles, that humans are essentially animals (or at least much more beastly than has been hitherto thought) and that we need therefore to look to the biological sciences, and only there, to advance our understanding of human nature. As a result of my criticism of this position I have been accused of being a Cartesian dualist, who thinks that the mind is some kind of a ghost in the machinery of the brain. Worse, it has been suggested that I am opposed to Darwinism, to neuroscience or to science itself. Worst of all, some have suggested that I have a hidden religious agenda. For the record, I regard neuroscience (which was my own area of research) as one of the greatest monuments of the human intellect; I think Cartesian dualism is a lost cause; and I believe that Darwin’s theory is supported by overwhelming evidence. Nor do I have a hidden religious agenda: I am an atheist humanist. And this is in fact the reason why I have watched the rise of biologism with such dismay: it is a consequence of the widespread assumption that the only alternative to a supernatural understanding of human beings is a strictly naturalistic one that sees us as just another kind of beast and, ultimately, as being less conscious agents than pieces of matter stitched into the material world.

This is to do humanity a gross disservice, as I think we are so much more than gifted chimps. Unpacking the most “ordinary” moment of human life reveals knowledge, skills, emotions, intuitions, a sense of past and future and of an infinitely elaborated world, that are not to be found elsewhere in the living world.

Biologism has two strands: “Neuromania” and “Darwinitis”. Neuromania arises out of the belief that human consciousness is identical with neural activity in certain parts of the brain. It follows from this that the best way to investigate what we humans truly are, to understand the origins of our beliefs, our predispositions, our morality and even our aesthetic pleasures, will be to peer into the brains of human subjects using the latest scanning technology. This way we shall know what is really going on when we are having experiences, thinking thoughts, feeling emotions, remembering memories, making decisions, being wise or silly, breaking the law, falling in love and so on.

The other strand is Darwinitis, rooted in the belief that evolutionary theory not only explains the origin of the species H. sapiens – which it does, of course – but also explains humans as they are today; that people are at bottom the organisms forged by the processes of natural selection and nothing more.

[div class=attrib]More from theSource here.[end-div]

Scientific Evidence for Indeterminism

[div class=attrib]From Evolutionary Philosophy:[end-div]

The advantage of being a materialist is that so much of our experience seems to point to a material basis for reality. Idealists usually have to appeal to some inner knowing as the justification of their faith that mind, not matter, is the foundation of reality. Unfortunately the appeal to inner knowing is exactly what a materialist has trouble with in the first place.

Charles Sanders Peirce was a logician and a scientist first and a philosopher second. He thought like a scientist, and as he developed his evolutionary philosophy his reasons for believing in it were very logical and scientific. One of the early insights that led him to his understanding of an evolving universe was his realization that the state of our world, and its future, is not necessarily predetermined.

One conclusion that materialism tends to lead to is a belief that ‘nothing comes from nothing.’ Everything comes from some form of matter or interaction between material things. Nothing just emerges spontaneously. Everything is part of an ongoing chain of cause and effect. The question of how the chain of cause and effect started is one generally felt best left to the realm of metaphysics, unsuitable for scientific investigation.

And so the image of a materially based universe tends to lead to a deterministic account of reality. You start with something, and then that something unravels according to immutable laws. As an image, picture this: a large bucket filled with pink and green tennis balls, and two smaller buckets that are empty. This arrangement represents the starting point of the universe. The natural laws of this universe dictate that individual tennis balls will be removed from the large bucket and placed in one of the two smaller ones. If the ball that is removed is pink, it goes in the left-hand bucket; if it is green, it goes in the right-hand bucket. In this simple model the end state of the universe is predetermined: the large bucket will be empty, the left-hand bucket will be filled with pink tennis balls, and the right-hand bucket will be filled with green tennis balls. The outcome of the process is fixed by the initial conditions and the laws governing the subsequent activity, as the sketch below illustrates.
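
To make that determinism concrete, here is a minimal Python sketch of the bucket model (my own illustration, not from the source). However the order of events is shuffled, the end state is always the same, fixed entirely by the initial conditions and the sorting rule.

```python
import random

# Toy deterministic universe: a large bucket of pink and green tennis balls
# is emptied one ball at a time; pink goes left, green goes right.
initial_state = ["pink"] * 50 + ["green"] * 50

def run_universe(balls):
    remaining = list(balls)
    random.shuffle(remaining)  # the order of individual events may vary...
    left_bucket, right_bucket = [], []
    for ball in remaining:
        (left_bucket if ball == "pink" else right_bucket).append(ball)
    return len(left_bucket), len(right_bucket)

# ...but the end state never does: every run finishes with all pink balls
# on the left and all green balls on the right.
for _ in range(3):
    print(run_universe(initial_state))  # always prints (50, 50)
```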

A belief in this kind of determinism seems to be constantly reinforced for us through our ongoing experience of the material universe. Go ahead: pick up a rock, hold it up, and then let it go. It will fall. Every single time, it will fall. It is predetermined that a rock held up in the air and then dropped will fall. Punch a wall. It will hurt – every single time. Over and over again, our experience of everyday reality seems to reinforce the fact that we live in a universe which is exactly governed by immutable laws.

[div class=attrib]More from theSource here.[end-div]

Brilliant, but Distant: Most Far-Flung Known Quasar Offers Glimpse into Early Universe

[div class=attrib]From Scientific American:[end-div]

Peering far across space and time, astronomers have located a luminous beacon aglow when the universe was still in its infancy. That beacon, a bright astrophysical object known as a quasar, shines with the luminosity of 63 trillion suns as gas falling into a supermassive black hole compresses, heats up and radiates brightly. It is farther from Earth than any other known quasar—so distant that its light, emitted 13 billion years ago, is only now reaching Earth. Because of its extreme luminosity and record-setting distance, the quasar offers a unique opportunity to study the conditions of the universe as it underwent an important transition early in cosmic history.

By the time the universe was one billion years old, the once-neutral hydrogen gas atoms in between galaxies had been almost completely stripped of their electrons (ionized) by the glow of the first massive stars. But the full timeline of that process, known as re-ionization because it separated protons and electrons, as they had been in the first 380,000 years post–big bang, is somewhat uncertain. Quasars, with their tremendous intrinsic brightness, should make for excellent markers of the re-ionization process, acting as flashlights to illuminate the intergalactic medium. But quasar hunters working with optical telescopes had only been able to see back as far as 870 million years after the big bang, when the intergalactic medium’s transition from neutral to ionized was almost complete. (The universe is now 13.75 billion years old.) Beyond that point, a quasar’s light has been so stretched, or redshifted, by cosmic expansion that it no longer falls in the visible portion of the electromagnetic spectrum but rather in the longer-wavelength infrared.

Daniel Mortlock, an astrophysicist at Imperial College London, and his colleagues used that fact to their advantage. The researchers looked for objects that showed up in a large-area infrared sky survey but not in a visible-light survey covering the same area of sky, essentially isolating the high-redshift objects. They could thus discover a quasar, known as ULAS J1120+0641, at redshift 7.085, corresponding to a time just 770 million years after the big bang. That places the newfound quasar about 100 million years earlier in cosmic history than the previous record holder, which was at redshift 6.44. Mortlock and his colleagues report their finding in the June 30 issue of Nature. (Scientific American is part of Nature Publishing Group.)
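
As an aside, the quoted mapping from redshift to cosmic age can be estimated in a few lines of code. Below is a minimal sketch (my own illustration, not from the article), assuming a flat Lambda-CDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, matter density 0.27); it roughly reproduces the article’s figures of about 770 million years at redshift 7.085 and about 870 million years at redshift 6.44.

```python
import numpy as np
from scipy.integrate import quad

H0 = 70.0        # Hubble constant in km/s/Mpc (assumed value)
OMEGA_M = 0.27   # matter density (assumed value)
OMEGA_L = 0.73   # dark-energy density (flat universe)

KM_PER_MPC = 3.0857e19   # kilometres in one megaparsec
SEC_PER_GYR = 3.156e16   # seconds in one billion years

def hubble(z):
    """H(z) in km/s/Mpc for a flat Lambda-CDM universe."""
    return H0 * np.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def age_at_redshift(z):
    """Age of the universe at redshift z, in billions of years."""
    integrand = lambda zp: 1.0 / ((1 + zp) * hubble(zp))
    t, _ = quad(integrand, z, np.inf)
    return t * KM_PER_MPC / SEC_PER_GYR

print(f"z = 7.085: {age_at_redshift(7.085):.2f} Gyr")  # ~0.78 Gyr
print(f"z = 6.44:  {age_at_redshift(6.44):.2f} Gyr")   # ~0.88 Gyr
print(f"z = 0:     {age_at_redshift(0.0):.2f} Gyr")    # ~13.9 Gyr
```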

[div class=attrib]More from theSource here.[end-div]

New Tevatron collider result may help explain the matter-antimatter asymmetry in the universe

[div class=attrib]From Symmetry Breaking:[end-div]

About a year ago, the DZero collaboration at Fermilab published a tantalizing result in which the universe unexpectedly showed a preference for matter over antimatter. Now the collaboration has more data, and the evidence for this effect has grown stronger.

The result is extremely exciting: The question of why our universe should consist solely of matter is one of the burning scientific questions of our time. Theory predicts that matter and antimatter were made in equal quantities. If something hadn’t slightly favored matter over antimatter, our universe would consist of a bath of photons and little else. Matter wouldn’t exist.

The Standard Model predicts a value near zero for one of the parameters that is associated with the difference between the production of muons and antimuons in B meson decays. The DZero results from 2010 and 2011 differ from zero and are consistent with each other. The vertical bars of the measurements indicate their uncertainty. 

The 2010 measurement looked at muons and antimuons emerging from the decays of neutral mesons containing bottom quarks, which is a source that scientists have long expected to be a fruitful place to study the behavior of matter and antimatter under high-energy conditions. DZero scientists found a 1 percent difference between the production of pairs of muons and pairs of antimuons in B meson decays at Fermilab’s Tevatron collider. Like all measurements, that measurement had an uncertainty associated with it. Specifically, there was about a 0.07 percent chance that the measurement could come from a random fluctuation of the data recorded. That’s a tiny probability, but since DZero makes thousands of measurements, scientists expect to see the occasional rare fluctuation that turns out to be nothing.

During the last year, the DZero collaboration has taken more data and refined its analysis techniques. In addition, other scientists have raised questions and requested additional cross-checks. One concern was whether the muons and antimuons are actually coming from the decay of B mesons, rather than some other source.

Now, after incorporating almost 50 percent more data and dozens of cross-checks, DZero scientists are even more confident in the strength of their result. The probability that the observed effect is from a random fluctuation has dropped quite a bit and now is only 0.005 percent. DZero scientists will present the details of their analysis in a seminar geared toward particle physicists later today.

Scientists are a cautious bunch and require a high level of certainty to claim a discovery. At the level of certainty achieved in the summer of 2010, particle physicists claim only “evidence” for an unexpected phenomenon. A claim of discovery requires a higher level of certainty.

If the earlier measurement were a fluctuation, scientists would expect the uncertainty of the new result to grow, not get smaller. Instead, the improvement is exactly what scientists expect if the effect is real. But the uncertainty associated with the new result is still too high to claim a discovery. For a discovery, particle physicists require an uncertainty of less than 0.00005 percent.
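
For readers unused to fluctuation probabilities, the convention can be made concrete with a short sketch (my own illustration, not from the article) that converts each quoted probability into the “sigma” units physicists use, treating it as a one-sided Gaussian tail:

```python
from scipy.stats import norm

# Chance that the observed effect is a random fluctuation, as quoted above,
# converted to Gaussian standard deviations (the physicists' "sigma").
results = [
    ("DZero 2010 result", 0.0007),        # 0.07 percent
    ("DZero 2011 result", 0.00005),       # 0.005 percent
    ("'evidence' threshold", 0.00135),    # ~3 sigma by convention
    ("discovery threshold", 0.0000005),   # 0.00005 percent
]

for label, p in results:
    sigma = norm.isf(p)  # inverse survival function: tail probability -> z
    print(f"{label}: p = {p:.7%} -> {sigma:.1f} sigma")
```

With these numbers, the 2010 result sits near 3.2 sigma, the new one near 3.9 sigma, and the discovery bar near 5 sigma, which is why DZero can claim evidence but not yet a discovery.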

The new result suggests that DZero is hot on the trail of a crucial clue in one of the defining questions of all time: Why are we here at all?

[div class=attrib]More from theSource here.[end-div]

Banned and Challenged Books: A Summer Reading List

Each year the American Library Association publishes a list of attempts by groups and individuals to have books banned from classrooms, libraries and other public places in the United States. The list includes classics such as Ulysses, 1984, Beloved, Gone With the Wind, and The Lord of the Rings. So, if you’re at a loss this summer for a good book in which to get lost, pick one (or three) from the list below and mark one up for the freedom of ideas.

[div class=attrib]From American Library Association:[end-div]

The titles below represent banned or challenged books on that list (see the entire list here). For more information on why these books were challenged, visit challenged classics and the Banned Books Week Web site.

1. The Great Gatsby, by F. Scott Fitzgerald
2. The Catcher in the Rye, by J.D. Salinger
3. The Grapes of Wrath, by John Steinbeck
4. To Kill a Mockingbird, by Harper Lee
5. The Color Purple, by Alice Walker
6. Ulysses, by James Joyce
7. Beloved, by Toni Morrison
8. The Lord of the Flies, by William Golding
9. 1984, by George Orwell

11. Lolita, by Vladmir Nabokov
12. Of Mice and Men, by John Steinbeck

15. Catch-22, by Joseph Heller
16. Brave New World, by Aldous Huxley
17. Animal Farm, by George Orwell
18. The Sun Also Rises, by Ernest Hemingway
19. As I Lay Dying, by William Faulkner
20. A Farewell to Arms, by Ernest Hemingway

23. Their Eyes Were Watching God, by Zora Neale Hurston
24. Invisible Man, by Ralph Ellison
25. Song of Solomon, by Toni Morrison
26. Gone with the Wind, by Margaret Mitchell
27. Native Son, by Richard Wright
28. One Flew Over the Cuckoo’s Nest, by Ken Kesey
29. Slaughterhouse-Five, by Kurt Vonnegut
30. For Whom the Bell Tolls, by Ernest Hemingway

33. The Call of the Wild, by Jack London

36. Go Tell it on the Mountain, by James Baldwin

38. All the King’s Men, by Robert Penn Warren

40. The Lord of the Rings, by J.R.R. Tolkien

45. The Jungle, by Upton Sinclair

48. Lady Chatterley’s Lover, by D.H. Lawrence
49. A Clockwork Orange, by Anthony Burgess
50. The Awakening, by Kate Chopin

53. In Cold Blood, by Truman Capote

55. The Satanic Verses, by Salman Rushdie

57. Sophie’s Choice, by William Styron

64. Sons and Lovers, by D.H. Lawrence

66. Cat’s Cradle, by Kurt Vonnegut
67. A Separate Peace, by John Knowles

73. Naked Lunch, by William S. Burroughs
74. Brideshead Revisited, by Evelyn Waugh
75. Women in Love, by D.H. Lawrence

80. The Naked and the Dead, by Norman Mailer

84. Tropic of Cancer, by Henry Miller

88. An American Tragedy, by Theodore Dreiser

97. Rabbit, Run, by John Updike

The Arrow of Time

No, not a cosmologist’s convoluted hypothesis as to why time moves in only one (so far discovered) direction. The arrow of time here is a thoroughly personal look at the linearity of the 4th dimension, and an homage to the family portrait in the process.

The family takes a “snapshot” of each member at the same time each year; we’ve just glimpsed the latest, for 2011. And in so doing, they give us much to ponder on the nature of change and the nature of stasis.

[div class=attrib]From Diego Goldberg and family:[end-div]

Catch all the intervening years between 1976 and 2011 at theSource here.

More subatomic spot changing

[div class=attrib]From the Economist:[end-div]

IN THIS week’s print edition we report a recent result from the T2K collaboration in Japan, which has found strong hints that neutrinos, the elusive particles theorists believe to be as abundant in the universe as photons but which almost never interact with anything, are as fickle as they are coy.

It has been known for some time that neutrinos switch between three types, or flavours, as they zip through space at a smidgen below the speed of light. The flavours are distinguished by the particles which emerge on the rare occasions a neutrino does bump into something. And so an electron-neutrino conjures up an electron; a muon-neutrino, a muon; and a tau-neutrino, a tau particle (muons and taus are a lot like electrons, but heavier and less stable). Researchers at T2K observed, for the first time, muon-neutrinos transmuting into the electron variety—the one sort of spot-changing that had not been seen before. But their result, with a 0.7% chance of being a fluke, was, by the elevated standards of particle physics, tenuous.
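
In the standard picture (background I am adding; the Economist piece gives no formulas), the probability that a neutrino of energy E has changed flavour after travelling a distance L is well approximated, for two flavours, by

\[ P(\nu_\mu \to \nu_e) \approx \sin^2(2\theta)\,\sin^2\!\left(\frac{1.27\,\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right), \]

where θ is a mixing angle and Δm² is the difference between the squared masses of two neutrino states; the delta parameter discussed below enters only in the full three-flavour version.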

Now, T2K’s rival across the Pacific has made it less so. MINOS beams muon-neutrinos from Fermilab, America’s biggest particle-physics lab, located near Chicago, to a 5,000-tonne detector sitting in the Soudan mine in Minnesota, 735km (450 miles) to the north-west. On June 24th its researchers announced that they, too, had witnessed some of their muon-neutrinos change to the electron variety along the way. To be precise, the experiment recorded 62 events which could have been caused by electron-neutrinos. If the proposed transmutation does not occur in nature, it ought to have seen no more than 49 (the result of electron-neutrinos streaming in from space or radioactive rocks on Earth). Were the T2K figures spot on, as it were, it should have seen 71.
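
As a rough sanity check (my own back-of-the-envelope sketch in Python, not the collaboration’s analysis, which also folds in systematic uncertainties), one can ask how often a background of 49 expected events would fluctuate up to 62 or more:

```python
# Back-of-the-envelope significance check using Poisson counting statistics.
from scipy.stats import poisson

background = 49  # events expected if no muon-to-electron transmutation occurs
observed = 62    # events MINOS actually recorded

# One-sided p-value: the probability of counting 62 or more events by chance
p_value = poisson.sf(observed - 1, background)
print(f"p-value ~ {p_value:.3f}")  # roughly 0.04, i.e. around two sigma
```

That is suggestive rather than conclusive, which is why the result firms up the hypothesis without settling it.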

As such, the result from MINOS, which uses different methods to study the same phenomenon, puts the transmutation hypothesis on a firmer footing. This advances the search for a number known as delta (δ). This is one of the parameters of the formula which physicists think describes neutrinos’ spot-changing antics. Physicists are keen to pin it down, since it also governs the description of the putative asymmetry between matter and antimatter that left matter as the dominant feature of the universe after the Big Bang.

In light of the latest result, it remains unclear whether either the American or the Japanese experiment is precise enough to measure delta. In 2013, however, MINOS will be supplanted by NOvA, a fancier device located in another Minnesota mine 810km from Fermilab’s muon-neutrino cannon. That ought to do the trick. Then again, nature has the habit of springing surprises.

And in more ways than one. Days after T2K’s run was cut short by the earthquake that shook Japan in March, devastating the muon-neutrino source at J-PARC, the country’s main particle-accelerator complex, MINOS had its own share of woe when the Soudan mine sustained significant flooding. Fortunately, the experiment itself escaped relatively unscathed. But the eerie coincidence spurred some boffins, not a particularly superstitious bunch, to speak of a neutrino curse. Fingers crossed that isn’t the case.

[div class=attrib]More from theSource here.[end-div]

[div]Image courtesy of Fermilab.[end-div]

Solar power from space: Beam it down, Scotty

[div class=attrib]From the Economist:[end-div]

THE idea of collecting solar energy in space and beaming it to Earth has been around for at least 70 years. In “Reason”, a short story by Isaac Asimov that was published in 1941, a space station transmits energy collected from the sun to various planets using microwave beams.

The advantage of intercepting sunlight in space, instead of letting it find its own way through the atmosphere, is that so much of it is otherwise absorbed by the air. By converting it to the right frequency first (one of the so-called windows in the atmosphere, in which little energy is absorbed) a space-based collector could, enthusiasts claim, yield on average five times as much power as one located on the ground.

The disadvantage is cost. Launching and maintaining suitable satellites would be ludicrously expensive. But perhaps not, if the satellites were small and the customers specialised. Military expeditions, rescuers in disaster zones, remote desalination plants and scientific-research bases might be willing to pay for such power from the sky. And a research group based at the University of Surrey, in England, hopes that in a few years it will be possible to offer it to them.

This summer, Stephen Sweeney and his colleagues will test a laser that would do the job which Asimov assigned to microwaves. Certainly, microwaves would work: a test carried out in 2008 transmitted useful amounts of microwave energy between two Hawaiian islands 148km (92 miles) apart, so penetrating the 100km of the atmosphere would be a doddle. But microwaves spread out as they propagate. A collector on Earth that was picking up power from a geostationary satellite orbiting at an altitude of 35,800km would need to be spread over hundreds of square metres. Using a laser means the collector need be only tens of square metres in area.
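
The physics behind that difference is diffraction: the minimum size of a beam’s footprint grows in proportion to its wavelength, so a laser (wavelength around a micrometre) can be kept vastly tighter than a microwave beam (wavelength of centimetres). Here is a rough illustration, with apertures and wavelengths of my own choosing since the article specifies neither:

```python
# Diffraction-limited spot size at the receiver, using the Airy-disk
# approximation: d_spot ~ 2.44 * wavelength * distance / aperture.
ALTITUDE = 35_800e3  # geostationary orbit, in metres

def spot_diameter(wavelength_m: float, aperture_m: float,
                  distance_m: float = ALTITUDE) -> float:
    """Approximate beam footprint diameter, in metres, at the receiver."""
    return 2.44 * wavelength_m * distance_m / aperture_m

# Assumed hardware: a 100 m microwave dish versus a 1 m laser telescope.
print(f"microwave (5.8 GHz): ~{spot_diameter(0.052, 100.0) / 1e3:.0f} km across")
print(f"laser (1.5 um):      ~{spot_diameter(1.5e-6, 1.0):.0f} m across")
```

Even with a far smaller transmitter, the laser’s footprint comes out hundreds of times narrower, which is why the ground collector can shrink to tens of square metres.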

[div class=attrib]More from theSource here.[end-div]

Largest cosmic structures ‘too big’ for theories

[div class=attrib]From New Scientist:[end-div]

Space is festooned with vast “hyperclusters” of galaxies, a new cosmic map suggests. It could mean that gravity or dark energy – or perhaps something completely unknown – is behaving very strangely indeed.

We know that the universe was smooth just after its birth. Measurements of the cosmic microwave background radiation (CMB), the light emitted 370,000 years after the big bang, reveal only very slight variations in density from place to place. Gravity then took hold and amplified these variations into today’s galaxies and galaxy clusters, which in turn are arranged into big strings and knots called superclusters, with relatively empty voids in between.

On even larger scales, though, cosmological models say that the expansion of the universe should trump the clumping effect of gravity. That means there should be very little structure on scales larger than a few hundred million light years.

But the universe, it seems, did not get the memo. Shaun Thomas of University College London (UCL), and colleagues have found aggregations of galaxies stretching for more than 3 billion light years. The hyperclusters are not very sharply defined, with only a couple of per cent variation in density from place to place, but even that density contrast is twice what theory predicts.

“This is a challenging result for the standard cosmological models,” says Francesco Sylos Labini of the University of Rome, Italy, who was not involved in the work.

Colour guide

The clumpiness emerges from an enormous catalogue of galaxies called the Sloan Digital Sky Survey, compiled with a telescope at Apache Point, New Mexico. The survey plots the 2D positions of galaxies across a quarter of the sky. “Before this survey people were looking at smaller areas,” says Thomas. “As you look at more of the sky, you start to see larger structures.”

A 2D picture of the sky cannot reveal the true large-scale structure in the universe. To get the full picture, Thomas and his colleagues also used the colour of galaxies recorded in the survey.

More distant galaxies look redder than nearby ones because their light has been stretched to longer wavelengths while travelling through an expanding universe. By selecting a variety of bright, old elliptical galaxies whose natural colour is well known, the team calculated approximate distances to more than 700,000 objects. The upshot is a rough 3D map of one quadrant of the universe, showing the hazy outlines of some enormous structures.
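
As a toy illustration of that last step (my own sketch; the team’s photometric-redshift pipeline is far more sophisticated), once a galaxy’s redshift z has been estimated from its colour, an assumed cosmological model converts it into a distance:

```python
# Convert estimated redshifts into comoving distances under an assumed
# cosmology (Planck18 parameters; the 2011 survey would have used an
# earlier parameter set, but the idea is the same).
from astropy import units as u
from astropy.cosmology import Planck18

for z in (0.1, 0.3, 0.5):
    d = Planck18.comoving_distance(z)   # astropy Quantity, in megaparsecs
    glyr = d.to(u.lyr).value / 1e9      # billions of light years
    print(f"z = {z}: ~{glyr:.1f} billion light years away")
```

Repeating that conversion for 700,000 galaxies turns a 2D sky map into the rough 3D one described above.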

[div class=attrib]More from theSource here.[end-div]

Life of a Facebook Photo

Before photo-sharing, photo blogs, photo friending, “PhotoShopping” and countless other photo-enabled apps and services, there was compose, point, focus, click, develop, print. The process seemed a lot simpler way back then. Perhaps this was due to the lack of options for both input and output. Input? Simple. Go buy a real camera. Output? Simple. Slides or prints. The end.

The options for input and output have exploded by orders of magnitude over the last couple of decades. Nowadays, even my toaster can take pictures and I can output them on my digital refrigerator, sans, of course, real photographs with that limp, bendable magnetic backing. The entire end-to-end process of taking a photograph and sharing it with someone else is now replete with so many choices and options that it seems to have become inordinately more complex.

So, to help all prehistoric photographers like me, here’s an interesting process flow for your digital images in the age of Facebook.

[div class=attrib]From Pixable:[end-div]

Evolution machine: Genetic engineering on fast forward

[div class=attrib]From the New Scientist:[end-div]

Automated genetic tinkering is just the start – this machine could be used to rewrite the language of life and create new species of humans

IT IS a strange combination of clumsiness and beauty. Sitting on a cheap-looking worktop is a motley ensemble of flasks, trays and tubes squeezed onto a home-made frame. Arrays of empty pipette tips wait expectantly. Bunches of black and grey wires adorn its corners. On the top, robotic arms slide purposefully back and forth along metal tracks, dropping liquids from one compartment to another in an intricately choreographed dance. Inside, bacteria are shunted through slim plastic tubes, and alternately coddled, chilled and electrocuted. The whole assembly is about a metre and a half across, and controlled by an ordinary computer.

Say hello to the evolution machine. It can achieve in days what takes genetic engineers years. So far it is just a prototype, but if its proponents are to be believed, future versions could revolutionise biology, allowing us to evolve new organisms or rewrite whole genomes with ease. It might even transform humanity itself.

These days everything from your food and clothes to the medicines you take may well come from genetically modified plants or bacteria. The first generation of engineered organisms has been a huge hit with farmers and manufacturers – if not consumers. And this is just the start. So far organisms have only been changed in relatively crude and simple ways, often involving just one or two genes. To achieve their grander ambitions, such as creating algae capable of churning out fuel for cars, genetic engineers are now trying to make far more sweeping changes.

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Morning In The Burned House

[div class=attrib]Morning In The Burned House, Margaret Atwood[end-div]

In the burned house I am eating breakfast.
You understand: there is no house, there is no breakfast,
yet here I am.

The spoon which was melted scrapes against
the bowl which was melted also.
No one else is around.

Where have they gone to, brother and sister,
mother and father? Off along the shore,
perhaps. Their clothes are still on the hangers,

their dishes piled beside the sink,
which is beside the woodstove
with its grate and sooty kettle,

every detail clear,
tin cup and rippled mirror.
The day is bright and songless,

the lake is blue, the forest watchful.
In the east a bank of cloud
rises up silently like dark bread.

I can see the swirls in the oilcloth,
I can see the flaws in the glass,
those flares where the sun hits them.

I can’t see my own arms and legs
or know if this is a trap or blessing,
finding myself back here, where everything

in this house has long been over,
kettle and mirror, spoon and bowl,
including my own body,

including the body I had then,
including the body I have now
as I sit at this morning table, alone and happy,

bare child’s feet on the scorched floorboards
(I can almost see)
in my burning clothes, the thin green shorts

and grubby yellow T-shirt
holding my cindery, non-existent,
radiant flesh. Incandescent.

Nick Risinger’s Photopic Sky Survey

Big science, covering scales from the microscopic to the vastness of the universe, continues to deliver stunning new insights on a near-daily basis. It takes huge machines such as the Tevatron at Fermilab, CERN’s Large Hadron Collider, NASA’s Hubble Telescope and the myriad other detectors, arrays, spectrometers and particle smashers to probe some of our ultimate questions. The results from these machines bring us fantastic new perspectives and often show us remarkable pictures of the very small and the very large.

Then there is Nick Risinger’s Photopic Sky Survey. No big science, no vast machines — just Nick Risinger, accompanied by his retired father, camera equipment and 45,000 miles of travel, capturing our beautiful night sky as never before.

[div class=attrib]From Nick Risinger:[end-div]

The Photopic Sky Survey is a 5,000 megapixel photograph of the entire night sky stitched together from 37,440 exposures. Large in size and scope, it portrays a world far beyond the one beneath our feet and reveals our familiar Milky Way with unfamiliar clarity.

It was clear that such a survey would be quite difficult if done by visually hopping from one area of the sky to the next—not to mention possible lapses in coverage—so this called for a more systematic approach. I divided the sky into 624 uniformly spaced areas and entered their coordinates into the computer, which gave me assurance that I was on target and would finish without any gaps. Each frame received a total of 60 exposures: 4 short, 4 medium, and 4 long shots for each camera, which would help to reduce the amount of noise, overhead satellite trails and other unwanted artifacts.

And so it was with this blueprint that I worked my way through the sky, frame by frame, night after night. The click-clack of the shutters opening and closing became a staccato soundtrack for the many nights spent under the stars. Occasionally, the routine would be pierced by a bright meteor or the cry of a jackal, each compelling a feeling of eerie beauty that seemed to hang in the air. It was an experience that will stay with me a lifetime.
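
The numbers work out: 624 areas at 60 exposures apiece gives the 37,440 frames quoted above. And the stacking step pays off. Here is a minimal sketch (my own illustration, not Risinger’s actual pipeline) of why combining repeated exposures suppresses both noise and one-off artifacts such as satellite trails:

```python
# Median stacking: a satellite trail appears in only one exposure, so the
# per-pixel median of the stack rejects it while averaging down random noise.
import numpy as np

rng = np.random.default_rng(0)
stack = rng.normal(100.0, 5.0, size=(12, 256, 256))  # 12 exposures of one frame
stack[3, 128, :] += 500.0  # simulated satellite trail across one exposure

combined = np.median(stack, axis=0)
print(combined[128, :5])   # values near 100: the trail has been rejected
```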

A truly remarkable and beautiful achievement. This is what focus and passion can achieve.

[div class=attrib]More from theSource here.[end-div]

Susan Wolf and Meaningfulness

[div class=attrib]From PEA Soup:[end-div]

A lot of interesting work has been done recently on what makes lives meaningful. One brilliant example of this is Susan Wolf’s recent wonderful book Meaning in Life and Why It Matters. It consists of two short lectures, critical commentaries by John Koethe, Robert M. Adams, Nomy Arpaly, and Jonathan Haidt, and responses by Wolf herself. What I want to do here is to quickly introduce Wolf’s ‘Fitting Fulfillment’ View, and then raise a potential objection to it.

According to Wolf, all meaningful lives have both a ‘subjective’ and an ‘objective’ element to them. These elements can make lives meaningful only together. Wolf’s view of the subjective side is highly complex. The starting point is the idea that an agent’s projects and activities ultimately make her life meaningful. However, this happens only when the projects and activities satisfy two conditions on the subjective side and one on the objective side.

Firstly, in order for one’s projects and activities to make one’s life meaningful, one must be at least somewhat successful in carrying them out. This does not mean that one must fully complete one’s projects and excel in the activities; but, other things being equal, the more successful one is in one’s projects and activities, the more they can contribute to the meaningfulness of one’s life.

Secondly, one must have a special relation to one’s projects and activities. This special relation has several overlapping features, which seem to fall under two main aspects. I’ll call the first the ‘loving relation’. Thus, Wolf often seems to claim that one must love the relevant projects and activities, experience subjective attraction towards them, and be gripped and excited by them. This seems to imply that one must be passionate about the relevant projects and activities. It also seems to entail that our willingness to pursue the relevant projects must be diachronically stable (and even constitute ‘volitional necessities’).

The second aspect could be called the ‘fulfilment side’. This means that, when one is successfully engaged in one’s projects and activities, one must experience some positive sensations: fulfilment, satisfaction, feeling good and happy, and the like. Wolf is careful to emphasise that there need not be a single felt quality present in all cases. Rather, there is a range of positive experiences, some of which need to be present in each case.

Finally, on the objective side, one’s projects and activities must be objectively worthwhile. One way to think about this is to start from the idea that one can be more or less successful in the relevant projects and activities. This seems to entail that the relevant projects and activities are difficult to complete and master in the beginning. As a result, one can become better in them through practice.

The objective element of Wolf’s view requires that some objective values are promoted either during this process or as a consequence of completion. There are some basic reasons to take part in the activities and to try to succeed in the relevant projects. These reasons are neither purely prudential nor necessarily universal moral reasons. Wolf is a pluralist about which projects and activities are objectively worthwhile (she takes no substantial stand in order to avoid any criticism of elitism). She also emphasises that saying all of this is fairly neutral metaethically.

[div class=attrib]More from theSource here.[end-div]