All posts by Mike

Using An Antimagnet to Build an Invisibility Cloak

The invisibility cloak of science fiction takes another step further into science fact this week. Researchers report on the physics arXiv a practical method for building an "antimagnet", a device that shields magnetic fields. Alvaro Sanchez and colleagues at Spain’s Universitat Autonoma de Barcelona describe the design of such a device, which exploits the bizarre properties of metamaterials.

[div class=attrib]From Technology Review:[end-div]

A metamaterial is a bizarre substance with properties that physicists can fine tune as they wish. Tuned in a certain way, a metamaterial can make light perform all kinds of gymnastics, steering it round objects to make them seem invisible.

This phenomenon, known as cloaking, is set to revolutionise various areas of electromagnetic science.

But metamaterials can do more. One idea is that as well as electromagnetic fields, metamaterials ought to be able to manipulate plain old magnetic fields too. After all, a static magnetic field is merely an electromagnetic wave with a frequency of zero.

So creating a magnetic invisibility cloak isn’t such a crazy idea.

Today, Alvaro Sanchez and friends at Universitat Autonoma de Barcelona in Spain reveal the design of a cloak that can do just this.

The basic ingredients are two materials: one with a permeability smaller than 1 in one direction, and one with a permeability greater than 1 in a perpendicular direction.

Materials with these permeabilities are easy to find. Superconductors have a permeability of 0 and ordinary ferromagnets have a permeability greater than 1.

The difficulty is creating a material with both these properties at the same time. Sanchez and co solve the problem with a design consisting of ferromagnetic shells coated with a superconducting layer.

The result is a device that can completely shield the outside world from a magnet inside it.
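As a rough illustration of why pairing these two materials produces the required anisotropy, here is the textbook mixing rule for a layered (laminate) composite. This is a generic effective-medium sketch, not the authors' actual design equations; the layer fractions and permeability values below are invented purely for illustration.

```python
# Toy effective-medium estimate for a stack of alternating superconducting
# and ferromagnetic layers (illustrative only, not the authors' model).
# For a laminate, the effective permeability parallel to the layers is the
# volume-weighted arithmetic mean; perpendicular to the layers it is the
# harmonic mean (the same algebra as capacitors in parallel vs. in series).

def laminate_permeability(mu_sc, mu_fm, f_sc):
    """Return (mu_parallel, mu_perpendicular) for layer fractions f_sc and 1 - f_sc."""
    f_fm = 1.0 - f_sc
    mu_parallel = f_sc * mu_sc + f_fm * mu_fm
    mu_perpendicular = 1.0 / (f_sc / mu_sc + f_fm / mu_fm)
    return mu_parallel, mu_perpendicular

# A superconductor expels magnetic fields (mu ~ 0); use a tiny value to avoid
# dividing by zero. A soft ferromagnet has mu >> 1.
mu_par, mu_perp = laminate_permeability(mu_sc=1e-6, mu_fm=1000.0, f_sc=0.5)
print(mu_par)   # ~500   -> greater than 1 along the layers
print(mu_perp)  # ~2e-6  -> far less than 1 across the layers
```

The point of the sketch is simply that one composite can look "more than 1" in one direction and "less than 1" in the perpendicular direction, which is the property the cloak design needs.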

[div class=attrib]More from theSource here.[end-div]

Nuclear Fission in the Kitchen

theDiagonal usually does not report on the news, though we do make a few worthy exceptions based on the import or surreal nature of an event. A case in point below.

Humans do have a curious way of repeating history. In a less meticulous attempt to re-enact the late-90s true story, which eventually led to the book “The Radioactive Boy Scout”, a Swedish man was recently arrested for trying to set up a nuclear reactor in his kitchen.

[div class=attrib]From the AP:[end-div]

A Swedish man who was arrested after trying to split atoms in his kitchen said Wednesday he was only doing it as a hobby.

Richard Handl told The Associated Press that he had the radioactive elements radium, americium and uranium in his apartment in southern Sweden when police showed up and arrested him on charges of unauthorized possession of nuclear material.

The 31-year-old Handl said he had tried for months to set up a nuclear reactor at home and kept a blog about his experiments, describing how he created a small meltdown on his stove.

Only later did he realize it might not be legal and sent a question to Sweden’s Radiation Authority, which answered by sending the police.

“I have always been interested in physics and chemistry,” Handl said, adding he just wanted to “see if it’s possible to split atoms at home.”

[div class=attrib]More from theSource here.[end-div]

A Reason for Reason

[div class=attrib]From Wilson Quarterly:[end-div]

For all its stellar achievements, human reason seems particularly ill suited to, well, reasoning. Study after study demonstrates reason’s deficiencies, such as the oft-noted confirmation bias (the tendency to recall, select, or interpret evidence in a way that supports one’s preexisting beliefs) and people’s poor performance on straightforward logic puzzles. Why is reason so defective?

To the contrary, reason isn’t defective in the least, argue cognitive scientists Hugo Mercier of the University of Pennsylvania and Dan Sperber of the Jean Nicod Institute in Paris. The problem is that we’ve misunderstood why reason exists and measured its strengths and weaknesses against the wrong standards.

Mercier and Sperber argue that reason did not evolve to allow individuals to think through problems and make brilliant decisions on their own. Rather, it serves a fundamentally social purpose: It promotes argument. Research shows that people solve problems more effectively when they debate them in groups—and the interchange also allows people to hone essential social skills. Supposed defects such as the confirmation bias are well fitted to this purpose because they enable people to efficiently marshal the evidence they need in arguing with others.

[div class=attrib]More from theSource here.[end-div]

Ultimate logic: To infinity and beyond

[div class=attrib]From the New Scientist:[end-div]

WHEN David Hilbert left the podium at the Sorbonne in Paris, France, on 8 August 1900, few of the assembled delegates seemed overly impressed. According to one contemporary report, the discussion following his address to the second International Congress of Mathematicians was “rather desultory”. Passions seem to have been more inflamed by a subsequent debate on whether Esperanto should be adopted as mathematics’ working language.

Yet Hilbert’s address set the mathematical agenda for the 20th century. It crystallised into a list of 23 crucial unanswered questions, including how to pack spheres to make best use of the available space, and whether the Riemann hypothesis, which concerns how the prime numbers are distributed, is true.

Today many of these problems have been resolved, sphere-packing among them. Others, such as the Riemann hypothesis, have seen little or no progress. But the first item on Hilbert’s list stands out for the sheer oddness of the answer supplied by generations of mathematicians since: that mathematics is simply not equipped to provide an answer.

This curiously intractable riddle is known as the continuum hypothesis, and it concerns that most enigmatic quantity, infinity. Now, 140 years after the problem was formulated, a respected US mathematician believes he has cracked it. What’s more, he claims to have arrived at the solution not by using mathematics as we know it, but by building a new, radically stronger logical structure: a structure he dubs “ultimate L”.

The journey to this point began in the early 1870s, when the German Georg Cantor was laying the foundations of set theory. Set theory deals with the counting and manipulation of collections of objects, and provides the crucial logical underpinnings of mathematics: because numbers can be associated with the size of sets, the rules for manipulating sets also determine the logic of arithmetic and everything that builds on it.
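For readers who want the riddle itself, the continuum hypothesis can be stated in one line: there is no set whose size sits strictly between that of the natural numbers and that of the real numbers. In symbols (the equivalence on the right assumes the axiom of choice):

```latex
\nexists\, S :\; \aleph_0 = |\mathbb{N}| \;<\; |S| \;<\; |\mathbb{R}| = 2^{\aleph_0}
\qquad\Longleftrightarrow\qquad 2^{\aleph_0} = \aleph_1 .
```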

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Wikipedia / Creative Commons.[end-div]

Are You Real, Or Are You a Hologram?

The principle of a holographic universe, not to be confused with the Holographic Universe, an album by Swedish death metal band Scar Symmetry, continues to hold serious sway among a not insignificant group of even more serious cosmologists.

Originally proposed by noted physicists Gerard ‘t Hooft and Leonard Susskind in the mid-1990s, the holographic theory of the universe suggests that our entire universe can be described as an informational 3-D projection painted in two dimensions on a cosmological boundary. This is analogous to the flat hologram printed on a credit card creating the illusion of a 3-D object.

While current mathematical theory and experimental verification are lagging, the theory has garnered much interest and forward momentum — so this area warrants a brief status check, courtesy of the New Scientist.

[div class=attrib]From the New Scientist:[end-div]

TAKE a look around you. The walls, the chair you’re sitting in, your own body – they all seem real and solid. Yet there is a possibility that everything we see in the universe – including you and me – may be nothing more than a hologram.

It sounds preposterous, yet there is already some evidence that it may be true, and we could know for sure within a couple of years. If it does turn out to be the case, it would turn our common-sense conception of reality inside out.

The idea has a long history, stemming from an apparent paradox posed by Stephen Hawking’s work in the 1970s. He discovered that black holes slowly radiate their mass away. This Hawking radiation appears to carry no information, however, raising the question of what happens to the information that described the original star once the black hole evaporates. It is a cornerstone of physics that information cannot be destroyed.

In 1972 Jacob Bekenstein at the Hebrew University of Jerusalem, Israel, showed that the information content of a black hole is proportional to the two-dimensional surface area of its event horizon – the point-of-no-return for in-falling light or matter. Later, string theorists managed to show how the original star’s information could be encoded in tiny lumps and bumps on the event horizon, which would then imprint it on the Hawking radiation departing the black hole.

This solved the paradox, but theoretical physicists Leonard Susskind and Gerard ‘t Hooft decided to take the idea a step further: if a three-dimensional star could be encoded on a black hole’s 2D event horizon, maybe the same could be true of the whole universe. The universe does, after all, have a horizon 42 billion light years away, beyond which point light would not have had time to reach us since the big bang. Susskind and ‘t Hooft suggested that this 2D “surface” may encode the entire 3D universe that we experience – much like the 3D hologram that is projected from your credit card.
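For reference, the area scaling Bekenstein identified is captured by what is now called the Bekenstein–Hawking entropy, which counts the information a horizon of area A can hold in units of the Planck area:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4 G \hbar} \;=\; \frac{k_B\, A}{4\,\ell_P^2},
\qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}} \approx 1.6\times10^{-35}\ \mathrm{m}.
```

That the entropy grows with area rather than volume is the kernel of the holographic idea described above.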

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Computerarts.[end-div]

Flowing Water on Mars?

NASA’s latest spacecraft to visit Mars, the Mars Reconnaissance Orbiter, has made some stunning observations that show the possibility of flowing water on the red planet. Intriguingly,  repeated observations of the same regions over several Martian seasons show visible changes attributable to some kind of dynamic flow.

[div class=attrib]From NASA / JPL:[end-div]

Observations from NASA’s Mars Reconnaissance Orbiter have revealed possible flowing water during the warmest months on Mars.

“NASA’s Mars Exploration Program keeps bringing us closer to determining whether the Red Planet could harbor life in some form,” NASA Administrator Charles Bolden said, “and it reaffirms Mars as an important future destination for human exploration.”

Dark, finger-like features appear and extend down some Martian slopes during late spring through summer, fade in winter, and return during the next spring. Repeated observations have tracked the seasonal changes in these recurring features on several steep slopes in the middle latitudes of Mars’ southern hemisphere.

“The best explanation for these observations so far is the flow of briny water,” said Alfred McEwen of the University of Arizona, Tucson. McEwen is the principal investigator for the orbiter’s High Resolution Imaging Science Experiment (HiRISE) and lead author of a report about the recurring flows published in Thursday’s edition of the journal Science.

Some aspects of the observations still puzzle researchers, but flows of liquid brine fit the features’ characteristics better than alternate hypotheses. Saltiness lowers the freezing temperature of water. Sites with active flows get warm enough, even in the shallow subsurface, to sustain liquid water that is about as salty as Earth’s oceans, while pure water would freeze at the observed temperatures.
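The freezing-point argument is ordinary freezing-point depression. A back-of-the-envelope sketch for water roughly as salty as Earth's oceans follows; the NaCl molality and the ideal dilute-solution formula are simplifying assumptions for illustration, not the study's model.

```python
# Rough freezing-point depression for seawater-strength brine
# (ideal dilute-solution approximation, illustrative only).
K_f = 1.86          # K*kg/mol, cryoscopic constant of water
molality = 0.6      # mol NaCl per kg water (~35 g/kg, roughly Earth seawater)
van_t_hoff_i = 2    # NaCl dissociates into two ions

delta_T = van_t_hoff_i * K_f * molality
print(f"Freezing point lowered by about {delta_T:.1f} K")  # ~2.2 K, so such brine freezes near -2 C
# Martian brines rich in perchlorate salts can depress the freezing point much further.
```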

[div class=attrib]More from theSource here.[end-div]

The End of 140

Five years in internet time is analogous to several entire human lifespans. So, it’s no surprise that Twitter seems to have been with us forever. Despite the near ubiquity of the little blue bird, most of the service’s tweeters have no idea why they are constrained to using a mere 140 characters to express themselves.

Farhad Manjoo over at Slate has a well-reasoned plea to increase this upper character limit for the more garrulous amongst us.

Perhaps more important, though, is the effect of this truncated form of messaging on our broader mechanisms of expression and communication. Time will tell whether our patterns of speech and the written word will adjust accordingly.

[div class=attrib]From Slate:[end-div]

Five years ago this month, Twitter opened itself up to the public. The new service, initially called Twttr, was born out of software engineer Jack Dorsey’s fascination with an overlooked corner of the modern metropolis—the central dispatch systems that track delivery trucks, taxis, emergency vehicles, and bike messengers as they’re moving about town. As Dorsey once told the Los Angeles Times, the logs of central dispatchers contained “this very rich sense of what’s happening right now in the city.” For a long time, Dorsey tried to build a public version of that log. It was only around 2005, when text messaging began to take off in America, that his dream became technically feasible. There was only one problem with building Twittr on mobile carriers’ SMS system, though—texts were limited to 160 characters, and if you included space for a user’s handle, that left only 140 characters per message.
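The arithmetic behind that last sentence is easy to make concrete. A minimal sketch, assuming a 20-character reservation for the sender's handle (the difference between 160 and 140; the exact reservation scheme here is hypothetical):

```python
# Illustrative only: how a 160-character SMS budget shrinks once a
# username prefix is reserved, leaving ~140 characters per tweet.
SMS_LIMIT = 160
HANDLE_RESERVE = 20                        # room for "username: " at the front
TWEET_LIMIT = SMS_LIMIT - HANDLE_RESERVE   # 140

def split_into_tweets(text, limit=TWEET_LIMIT):
    """Chop a long message into limit-sized chunks (word-blind, for illustration)."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

message = "x" * 300
print(TWEET_LIMIT)                                    # 140
print([len(t) for t in split_into_tweets(message)])   # [140, 140, 20]
```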

What could you say in 140 characters? Not a whole lot—and that was the point. Dorsey believed that Twitter would be used for status updates—his prototypical tweets were “in bed” and “going to park,” and his first real tweet was “inviting coworkers.” That’s not how we use Twitter nowadays. In 2009, the company acknowledged that its service had “outgrown the concept of personal status updates,” and it changed its home-screen prompt from “What are you doing?” to the more open-ended “What’s happening?”

As far as I can tell, though, Twitter has never considered removing the 140-character limit, and Twitter’s embrace of this constraint has been held up as one of the key reasons for the service’s success. But I’m hoping Twitter celebrates its fifth birthday by rethinking this stubborn stance. The 140-character limit now feels less like a feature than a big, obvious bug. I don’t want Twitter to allow messages of unlimited length, as that would encourage people to drone on interminably. But since very few Twitter users now access the system through SMS, it’s technically possible for the network to accommodate longer tweets. I suggest doubling the ceiling—give me 280 characters, Jack, and I’ll give you the best tweets you’ve ever seen!

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Life Cycle of Common Man

Twice Poet Laureate of the United States, Howard Nemerov, catalogs the human condition in his work “Life Cycle of Common Man”.

[div class=attrib]By Howard Nemerov, courtesy of Poetry Foundation:[end-div]

Life Cycle of Common Man

Roughly figured, this man of moderate habits,
This average consumer of the middle class,
Consumed in the course of his average life span
Just under half a million cigarettes,
Four thousand fifths of gin and about
A quarter as much vermouth; he drank
Maybe a hundred thousand cups of coffee,
And counting his parents’ share it cost
Something like half a million dollars
To put him through life. How many beasts
Died to provide him with meat, belt and shoes
Cannot be certainly said.
But anyhow,
It is in this way that a man travels through time,
Leaving behind him a lengthening trail
Of empty bottles and bones, of broken shoes,
Frayed collars and worn out or outgrown
Diapers and dinnerjackets, silk ties and slickers.

Given the energy and security thus achieved,
He did . . . ? What? The usual things, of course,
The eating, dreaming, drinking and begetting,
And he worked for the money which was to pay
For the eating, et cetera, which were necessary
If he were to go on working for the money, et cetera,
But chiefly he talked. As the bottles and bones
Accumulated behind him, the words proceeded
Steadily from the front of his face as he
Advanced into the silence and made it verbal.
Who can tally the tale of his words? A lifetime
Would barely suffice for their repetition;
If you merely printed all his commas the result
Would be a very large volume, and the number of times
He said “thank you” or “very little sugar, please,”
Would stagger the imagination. There were also
Witticisms, platitudes, and statements beginning
“It seems to me” or “As I always say.”
Consider the courage in all that, and behold the man
Walking into deep silence, with the ectoplastic
Cartoon’s balloon of speech proceeding
Steadily out of the front of his face, the words
Borne along on the breath which is his spirit
Telling the numberless tale of his untold Word
Which makes the world his apple, and forces him to eat.

[div class=attrib]Source: The Collected Poems of Howard Nemerov (The University of Chicago Press, 1977).[end-div]

The Prospect of Immortality

A recently opened solo art show takes a fascinating inside peek at the cryonics industry. Entitled “The Prospect of Immortality”, the show features photography by Murray Ballard. Ballard’s collection of images follows a five-year investigation of cryonics in England, the United States and Russia. Cryonics is the practice of freezing the human body just after death in the hope that future science will one day have the capability of restoring it to life.

Ballard presents the topic in a fair and balanced way, leaving viewers to question and weigh the process of cryonics for themselves.

[div class=attrib]From Impressions Gallery:[end-div]

The result of five years’ unprecedented access and international investigation, Murray Ballard offers an amazing photographic insight into the practice of cryonics: the process of freezing a human body after death in the hope that scientific advances may one day bring it back to life. Premiering at Impressions Gallery, this is Murray Ballard’s first major solo show.

Ballard’s images take the viewer on a journey through the tiny but dedicated international cryonics community, from the English seaside retirement town of Peacehaven; to the high-tech laboratories of Arizona; to the rudimentary facilities of Kriorus, just outside Moscow.  Worldwide there are approximately 200 ‘patients’ stored permanently in liquid nitrogen, with a further thousand people signed up for cryonics after death.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Images courtesy of Impressions Gallery / Murray Ballard.[end-div]

The Science Behind Dreaming

[div class=attrib]From Scientific American:[end-div]

For centuries people have pondered the meaning of dreams. Early civilizations thought of dreams as a medium between our earthly world and that of the gods. In fact, the Greeks and Romans were convinced that dreams had certain prophetic powers. While there has always been a great interest in the interpretation of human dreams, it wasn’t until the end of the nineteenth century that Sigmund Freud and Carl Jung put forth some of the most widely-known modern theories of dreaming. Freud’s theory centred around the notion of repressed longing — the idea that dreaming allows us to sort through unresolved, repressed wishes. Carl Jung (who studied under Freud) also believed that dreams had psychological importance, but proposed different theories about their meaning.

Since then, technological advancements have allowed for the development of other theories. One prominent neurobiological theory of dreaming is the “activation-synthesis hypothesis,” which states that dreams don’t actually mean anything: they are merely electrical brain impulses that pull random thoughts and imagery from our memories. Humans, the theory goes, construct dream stories after they wake up, in a natural attempt to make sense of it all. Yet, given the vast documentation of realistic aspects to human dreaming as well as indirect experimental evidence that other mammals such as cats also dream, evolutionary psychologists have theorized that dreaming really does serve a purpose. In particular, the “threat simulation theory” suggests that dreaming should be seen as an ancient biological defence mechanism that provided an evolutionary advantage because of  its capacity to repeatedly simulate potential threatening events – enhancing the neuro-cognitive mechanisms required for efficient threat perception and avoidance.

So, over the years, numerous theories have been put forth in an attempt to illuminate the mystery behind human dreams, but, until recently, strong tangible evidence has remained largely elusive.

Yet, new research published in the Journal of Neuroscience provides compelling insights into the mechanisms that underlie dreaming and the strong relationship our dreams have with our memories. Cristina Marzano and her colleagues at the University of Rome have succeeded, for the first time, in explaining how humans remember their dreams. The scientists predicted the likelihood of successful dream recall based on a signature pattern of brain waves. In order to do this, the Italian research team invited 65 students to spend two consecutive nights in their research laboratory.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image: The Knight’s Dream by Antonio de Pereda. Courtesy of Wikipedia / Creative Commons.[end-div]

Rate This Article: What’s Wrong with the Culture of Critique

[div class=attrib]From Wired:[end-div]

You don’t have to read this essay to know whether you’ll like it. Just go online and assess how provocative it is by the number of comments at the bottom of the web version. (If you’re already reading the web version, done and done.) To find out whether it has gone viral, check how many people have hit the little thumbs-up, or tweeted about it, or liked it on Facebook, or dug it on Digg. These increasingly ubiquitous mechanisms of assessment have some real advantages: In this case, you could save 10 minutes’ reading time. Unfortunately, life is also getting a little ruined in the process.

A funny thing has quietly accompanied our era’s eye-gouging proliferation of information, and by funny I mean not very funny. For every ocean of new data we generate each hour—videos, blog posts, VRBO listings, MP3s, ebooks, tweets—an attendant ocean’s worth of reviewage follows. The Internet-begotten abundance of absolutely everything has given rise to a parallel universe of stars, rankings, most-recommended lists, and other valuations designed to help us sort the wheat from all the chaff we’re drowning in. I’ve never been to Massimo’s pizzeria in Princeton, New Jersey, but thanks to the Yelpers I can already describe the personality of Big Vince, a man I’ve never met. (And why would I want to? He’s surly and drums his fingers while you order, apparently.) Everything exists to be charted and evaluated, and the charts and evaluations themselves grow more baroque by the day. Was this review helpful to you? We even review our reviews.

Technoculture critic and former Wired contributor Erik Davis is concerned about the proliferation of reviews, too. “Our culture is afflicted with knowingness,” he says. “We exalt in being able to know as much as possible. And that’s great on many levels. But we’re forgetting the pleasures of not knowing. I’m no Luddite, but we’ve started replacing actual experience with someone else’s already digested knowledge.”

Of course, Yelpification of the universe is so thorough as to be invisible. I scarcely blinked the other day when, after a Skype chat with my mother, I was asked to rate the call. (I assumed they were talking about connection quality, but if they want to hear about how Mom still pronounces it noo-cu-lar, I’m happy to share.) That same afternoon, the UPS guy delivered a guitar stand I’d ordered. Even before I could weigh in on the product, or on the seller’s expeditiousness, I was presented with a third assessment opportunity. It was emblazoned on the cardboard box: “Rate this packaging.”

[div class=attrib]More from theSource here.[end-div]

Communicating Meaning in Cyberspace

Clarifying intent, emotion, wishes and meaning is a rather tricky and cumbersome process that we all navigate each day. Online in the digital world this is even more challenging, if not sometimes impossible. The pre-digital method of exchanging information in a social context would have been face-to-face. Such a method provides the full gamut of verbal and non-verbal dialogue between two or more parties. Importantly, it also provides a channel for the exchange of unconscious cues between people, which researchers are increasingly finding to be of critical importance during communication.

So, now replace the face-to-face interaction with email, texting, instant messaging, video chat, and other forms of digital communication and you have a new playground for researchers in the cognitive and social sciences. The intriguing question for researchers, and for all of us for that matter, is: how do we ensure our meaning, motivations and intent are expressed clearly through digital communications?

There are some partial answers over at Anthropology in Practice, which looks at how users of digital media express emotion, resolve ambiguity and communicate cross-culturally.

[div class=attrib]Anthropology in Practice:[end-div]

The ability to interpret social data is rooted in our theory of mind—our capacity to attribute mental states (beliefs, intents, desires, knowledge, etc.) to the self and to others. This cognitive development reflects some understanding of how other individuals relate to the world, allowing for the prediction of behaviors.1 As social beings we require consistent and frequent confirmation of our social placement. This confirmation is vital to the preservation of our networks—we need to be able to gauge the state of our relationships with others.

Research has shown that children whose capacity to mentalize is diminished find other ways to successfully interpret nonverbal social and visual cues 2-6, suggesting that the capacity to mentalize is necessary to social life. Digitally-mediated communication, such as text messaging and instant messaging, does not readily permit social biofeedback. However cyber communicators still find ways of conveying beliefs, desires, intent, deceit, and knowledge online, which may reflect an effort to preserve the capacity to mentalize in digital media.

The Challenges of Digitally-Mediated Communication

In its most basic form DMC is text-based, although the growth of video conferencing technology indicates DMC is still evolving. One of the biggest criticisms of DMC has been the lack of nonverbal cues which are an important indicator to the speaker’s meaning, particularly when the message is ambiguous.

Email communicators are all too familiar with this issue. After all, in speech the same statement can have multiple meanings depending on tone, expression, emphasis, inflection, and gesture. Speech conveys not only what is said, but how it is said—and consequently, reveals a bit of the speaker’s mind to interested parties. In a plain-text environment like email only the typist knows whether a statement should be read with sarcasm.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Wikipedia / Creative Commons.[end-div]

The Slow Food – Fast Food Debate

For watchers of the human condition, dissecting and analyzing our food culture is both fascinating and troubling. The global agricultural-industrial complex, with its enormous efficiencies and finely engineered end-products, churns out mountains of foodstuffs that help feed a significant proportion of the world. And yet, many argue that the same over-refined, highly processed, preservative-doped, high-fructose-enriched, sugar- and salt-laden, color-saturated foods are to blame for many of our modern ills. The catalog of dangers from that box of “fish” sticks, orange “cheese” and Twinkies goes something like this: heart disease, cancer, diabetes, and obesity.

To counterbalance the fast/processed food juggernaut, the grassroots International Slow Food movement established its manifesto in 1989. Its stated vision is:

We envision a world in which all people can access and enjoy food that is good for them, good for those who grow it and good for the planet.

They go on to say:

We believe that everyone has a fundamental right to the pleasure of good food and consequently the responsibility to protect the heritage of food, tradition and culture that make this pleasure possible. Our association believes in the concept of neo-gastronomy – recognition of the strong connections between plate, planet, people and culture.

These are lofty ideals. Many would argue that the goals of the Slow Food movement, while worthy, are somewhat elitist and totally impractical in current times on our over-crowded, resource constrained little blue planet.

Krystal D’Costa over at Anthropology in Practice has a fascinating analysis and takes a more pragmatic view.

[div class=attrib]From Krystal D’Costa over at Anthropology in Practice:[end-div]

There’s a sign hanging in my local deli that offers customers some tips on what to expect in terms of quality and service. It reads:

Your order:

Can be fast and good, but it won’t be cheap.
Can be fast and cheap, but it won’t be good.
Can be good and cheap, but it won’t be fast.
Pick two—because you aren’t going to get it good, cheap, and fast.

The Good/Fast/Cheap Model is certainly not new. It’s been a longstanding principle in design, and has been applied to many other things. The idea is a simple one: we can’t have our cake and eat it too. But that doesn’t mean we can’t or won’t try—and nowhere does this battle rage more fiercely than when it comes to fast food.

In a landscape dominated by golden arches, dollar menus, and value meals serving up to 2,150 calories, fast food has been much maligned. It’s fast, it’s cheap, but we know it’s generally not good for us. And yet, well-touted statistics report that Americans are spending more than ever on fast food:

In 1970, Americans spent about $6 billion on fast food; in 2000, they spent more than $110 billion. Americans now spend more money on fast food than on higher education, personal computers, computer software, or new cars. They spend more on fast food than on movies, books, magazines, newspapers, videos, and recorded music—combined.[i]

With waistlines growing at an alarming rate, fast food has become an easy target. Concern has spurred the emergence of healthier chains (where it’s good and fast, but not cheap), half servings, and posted calorie counts. We talk about awareness and “food prints” enthusiastically, aspire to incorporate more organic produce in our diets, and struggle to encourage others to do the same even while we acknowledge that differing economic means may be a limiting factor.

In short, we long to return to a simpler food time—when local harvests were common and more than adequately provided the sustenance we needed, and we relied less on processed, industrialized foods. We long for a time when home-cooked meals, from scratch, were the norm—and any number of cooking shows on the American airways today work to convince us that it’s easy to do. We’re told to shun fast food, and while it’s true that modern, fast, processed foods represent an extreme in portion size and nutrition, it is also true that our nostalgia is misguided: raw, unprocessed foods—the “natural” that we yearn for—were a challenge for our ancestors. In fact, these foods were downright dangerous.

Step back in time to when fresh meat rotted before it could be consumed and you still consumed it, to when fresh fruits were sour, vegetables were bitter, and when roots and tubers were poisonous. Nature, ever fickle, could withhold her bounty as easily as she could share it: droughts wreaked havoc on produce, storms hampered fishing, cows stopped giving milk, and hens stopped laying.[ii] What would you do then?

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Images courtesy of International Slow Food Movement / Fred Meyer store by lyzadanger.[end-div]

Graduate Job Picture

Encouraging news for the class of 2011. The National Association of Colleges and Employers (NACE) released results from a recent survey showing a slightly improved job picture for 2011 college graduates.

[div class=attrib]From Course Hero:[end-div]

[div class=attrib]More from theSource here.[end-div]

QR Codes as Art

It’s only a matter of time before someone has a cool looking QR code tattooed to their eyelid.

A QR or Quick Response code is a two-dimensional matrix that looks like a scrambled barcode and behaves much like one, with one important difference: the QR code has a high tolerance for errors. QR codes carry built-in Reed-Solomon error correction, and at the highest correction level roughly 30 percent of the code can be obscured or altered without affecting its ability to be scanned correctly. Try scanning a regular barcode that has some lines missing or altered and your scanner is likely to give you a warning beep; a QR code, however, still scans correctly even if specific areas are missing or changed. This matters because a QR code does not require a high-end, dedicated barcode scanner to be read, and it also makes the codes suitable for outdoor use.
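As a concrete illustration, the third-party Python `qrcode` package (one of several QR libraries, shown here only as an example) lets you select the highest error-correction level, the roughly 30 percent setting that gives designers the most room to restyle the matrix. The URL and filenames below are hypothetical.

```python
# Requires the third-party packages "qrcode" and "Pillow".
import qrcode

qr = qrcode.QRCode(
    version=None,  # let the library pick the smallest matrix that fits the data
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # ~30% of modules recoverable
    box_size=10,
    border=4,
)
qr.add_data("https://example.com/promo")  # hypothetical URL
qr.make(fit=True)

img = qr.make_image(fill_color="black", back_color="white")
img.save("artistic-canvas.png")  # the saved matrix can now be partially restyled
```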

A QR code can be scanned, actually photographed, with a regular smartphone (or other device) equipped with a camera and a QR code reading app. This makes it possible for QR codes to take up residence anywhere, not just on product packages, and to be scanned by anyone with a smartphone. In fact you may have seen QR codes displayed on street corners, posters, doors, billboards, websites, vehicles and magazines.

Of course, once you snap a picture of a code, your smartphone app will deliver more details about the object on which the QR code resides. For instance, take a picture of a code placed on a billboard advertising a new BMW model, and you’ll be linked to the BMW website with special promotions for your region. QR codes not only link to websites, but also can be used to send pre-defined text messages, provide further textual information, and deliver location maps.

Since parts of a QR code can be changed without reducing its ability to be scanned correctly, artists and designers now have the leeway to customize the matrix with some creative results.

Some favorites below.

[div]Images courtesy of Duncan Robertson, BBC; Louis Vuitton, SET; Ayara Thai Cuisine Restaurant.[end-div]

The science behind disgust

[div class=attrib]From Salon:[end-div]

We all have things that disgust us irrationally, whether it be cockroaches or chitterlings or cotton balls. For me, it’s fruit soda. It started when I was 3; my mom offered me a can of Sunkist after inner ear surgery. Still woozy from the anesthesia, I gulped it down, and by the time we made it to the cashier, all of it managed to come back up. Although it is nearly 30 years later, just the smell of this “fun, sun and the beach” drink is enough to turn my stomach.

But what, exactly, happens when we feel disgust? As Daniel Kelly, an assistant professor of philosophy at Purdue University, explains in his new book, “Yuck!: The Nature and Moral Significance of Disgust,” it’s not just a physical sensation, it’s a powerful emotional warning sign. Although disgust initially helped keep us away from rotting food and contagious disease, the defense mechanism changed over time to affect the distance we keep from one another. When allowed to play a role in the creation of social policy, Kelly argues, disgust might actually cause more harm than good.

Salon spoke with Kelly about the science behind disgust, why we’re captivated by things we find revolting, and how it can be a very dangerous thing.

What exactly is disgust?

Simply speaking, disgust is the response we have to things we find repulsive. Some of the things that trigger disgust are innate, like the smell of sewage on a hot summer day. No one has to teach you to feel disgusted by garbage, you just are. Other things that are automatically disgusting are rotting food and visible cues of infection or illness. We have this base layer of core disgusting things, and a lot of them don’t seem like they’re learned.

[div class=attrib]More from theSource here.[end-div]

Mr. Carrier, Thanks for Inventing the Air Conditioner

It’s #$% hot in the southern plains of the United States, with high temperatures constantly above 100 degrees F, and lows never dipping below 80. For that matter, it’s hotter than average this year in most parts of the country. So, a timely article over at Slate gives a great overview of the history of the air conditioning system, courtesy of inventor Willis Carrier.

[div class=attrib]From Slate:[end-div]

Anyone tempted to yearn for a simpler time must reckon with a few undeniable unpleasantries of life before modern technology: abscessed teeth, chamber pots, the bubonic plague—and a lack of air conditioning in late July. As temperatures rise into the triple digits across the eastern United States, it’s worth remembering how we arrived at the climate-controlled summer environments we have today.

Until the 20th century, Americans dealt with the hot weather as many still do around the world: They sweated and fanned themselves. Primitive air-conditioning systems have existed since ancient times, but in most cases, these were so costly and inefficient as to preclude their use by any but the wealthiest people. In the United States, things began to change in the early 1900s, when the first electric fans appeared in homes. But cooling units have only spread beyond American borders in the last couple of decades, with the confluence of a rising global middle class and breakthroughs in energy-efficient technology. . . .

The big breakthrough, of course, was electricity. Nikola Tesla’s development of alternating current motors made possible the invention of oscillating fans in the early 20th century. And in 1902, a 25-year-old engineer from New York named Willis Carrier invented the first modern air-conditioning system. The mechanical unit, which sent air through water-cooled coils, was not aimed at human comfort, however; it was designed to control humidity in the printing plant where he worked.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image of Willis Carrier courtesy of Wikipedia / Creative Commons.[end-div]

Answers to Life’s Big Questions

Do you gulp Pepsi or Coke? Are you a Mac or a PC? Do you side with MSNBC or Fox News? Do you sip tea or coffee? Do you prefer thin crust or deep pan pizza?

Hunch has compiled a telling infographic from millions of answers gathered via its online Teach Hunch About You (THAY) questions. Interestingly, it looks like 61 percent of respondents are “dog people” and 31 percent “cat people” (with 8 percent neither).

[div class=attrib]From Hunch:[end-div]

[div class=attrib]More from theSource here.[end-div]

Art Makes Your Body Tingle

The next time you wander through an art gallery and feel lightheaded after seeing a Monroe silkscreen by Warhol, or feel reflective and soothed by a scene from Monet’s garden, you’ll be in good company. New research shows that the body reacts to art, not just our grey matter.

The study, by Wolfgang Tschacher and colleagues and published by the American Psychological Association, found that:

. . . physiological responses during perception of an artwork were significantly related to aesthetic-emotional experiencing. The dimensions “Aesthetic Quality,” “Surprise/Humor,” “Dominance,” and “Curatorial Quality” were associated with cardiac measures (heart rate variability, heart rate level) and skin conductance variability.

In other words, art makes your pulse race, your skin perspire and your body tingle.

[div class=attrib]From Miller-McCune:[end-div]

Art exhibits are not generally thought of as opportunities to get our pulses racing and skin tingling. But newly published research suggests aesthetic appreciation is, in fact, a full-body experience.

Three hundred and seventy-three visitors to a Swiss museum agreed to wear special gloves measuring four physiological responses as they strolled through an art exhibit. Researchers found an association between the gallery-goers’ reported responses to the artworks and three of the four measurements of bodily stimulation.

“Our findings suggest that an idiosyncratically human property — finding aesthetic pleasure in viewing artistic artifacts — is linked to biological markers,” researchers led by psychologist Wolfgang Tschacher of the University of Bern, Switzerland, write in the journal Psychology of Aesthetics, Creativity and the Arts.

Their study, the first of its kind conducted in an actual art gallery, provides evidence for what Tschacher and his colleagues call “the embodiment of aesthetics.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Wikipedia / Creative Commons.[end-div]

Higgs Particle Collides with Modern Art

Jonathan Jones over at the Guardian puts a creative spin (pun intended) on the latest developments in the world of particle physics. He suggests that we might borrow from the world of modern and contemporary art to help us take the vast imaginative leaps necessary to understand our physical world and its underlying quantum mechanical nature, bound up in uncertainty and paradox.

Jones makes a good point that many leading artists of recent times broke new ground by presenting us with an alternate reality that demanded a fresh perspective of the world and what lies beneath. Think Picasso and Dali and Miro and Twombly.

[div class=attrib]From Jonathan Jones for the Guardian:[end-div]

The experiments currently being performed in the LHC are enigmatic, mind-boggling and imaginative. But are they science – or art? In his renowned television series The Ascent of Man, the polymath Jacob Bronowski called the discovery of the invisible world within the atom the great collective achievement of science in the 20th century. Then he went further. “No – it is a great, collective work of art.”

Niels Bohr, who was at the heart of the new sub-atomic physics in the early 20th century, put the mystery of what he and others were finding into provocative sayings. He was very quotable, and every quote stresses the ambiguity of the new realm he was opening up, the realm of the smallest conceivable things in the universe. “If quantum mechanics hasn’t profoundly shocked you, you haven’t understood it yet,” ran one of his remarks. According to Bronowski, Bohr also said that to think about the paradoxical truths of quantum mechanics is to think in images, because the only way to know anything about the invisible is to create an image of it that is by definition a human construct, a model, a half-truth trying to hint at the real truth.

. . .

We won’t understand what those guys at Cern are up to until our idea of science catches up with the greatest minds of the 20th century who blew apart all previous conventions of thought. One guide offers itself to those of us who are not physicists: modern art. Bohr, explained Bronowski, collected Cubist paintings. Cubism was invented by Pablo Picasso and Georges Braque at the same time modern physics was being created: its crystalline structures and opaque surfaces suggest the astonishment of a reality whose every microcosmic particle is sublimely complex.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Wikipedia / CERN / Creative Commons.[end-div]

Tour de France and the Higgs Particle

Two exciting races tracked through Grenoble, France this past week. First, the Tour de France held one of the definitive stages of the 2011 race in Grenoble, the individual time trial. Second, Grenoble hosted the European Physical Society conference on High-Energy Physics. Fans of professional cycling and high-energy physics would not be disappointed.

In cycling, Cadel Evans set a blistering pace in his solo effort on stage 20 to ensure the Yellow Jersey and an overall win in this year’s Tour.

In the world of high energy physics, physicists from Fermilab and CERN presented updates on their competing searches to discover (or not) the Higgs boson. The two main experiments at Fermilab, CDF and DZero, are looking for traces of the Higgs particle in the debris of Tevatron collider’s proton-antiproton collisions. At CERN’s Large Hadron Collider scientists working at the two massive detectors, Atlas and CMS, are sifting through vast mountains of data accumulated from proton-proton collisions.

Both colliders have been smashing particles together in their ongoing quest to refine our understanding of the building blocks of matter, and to determine the existence of the Higgs particle. The Higgs is believed to convey mass to other particles, and it remains one of the last undiscovered components of the Standard Model of physics.

The latest results presented in Grenoble show excess particle events, above a chance distribution, across the search range where the Higgs particle is predicted to be found. There is a surplus of unusual events at a mass of 140-145 GeV (gigaelectronvolts), which is at the low end of the range allowed for the particle. Tantalizingly, physicists’ theories predict that this is the most likely region where the Higgs is to be found.

[div class=attrib]Further details from Symmetry Breaking:[end-div]

Physicists could be on their way to discovering the Higgs boson, if it exists, by next year. Scientists in two experiments at the Large Hadron Collider pleasantly surprised attendees at the European Physical Society conference this afternoon by both showing small hints of what could be the prized particle in the same area.

“This is what we expect to find on the road to the Higgs,” said Gigi Rolandi, physics coordinator for the CMS experiment.

Both experiments found excesses in the 130-150 GeV mass region. But the excesses did not have enough statistical significance to count as evidence of the Higgs.

If the Higgs really is lurking in this region, it is still in reach of experiments at Fermilab’s Tevatron. Although the accelerator will shut down for good at the end of September, Fermilab’s CDF and DZero experiments will continue to collect data up until that point and to improve their analyses.

“This should give us the sensitivity to make a new statement about the 114-180 mass range,” said Rob Roser, CDF spokesperson. Read more about the differences between Higgs searches at the Tevatron and at the LHC here.

The CDF and DZero experiments announced expanded exclusions in the search for their specialty, the low-mass Higgs, this morning. On Wednesday, the two experiments will announce their combined Higgs results.

Scientists measure statistical significance in units called sigma, written as the Greek letter σ. These high-energy experiments usually require a 3σ level of confidence, about 99.7 percent certainty, to claim they’ve seen evidence of something. They need 5σ to claim a discovery. The ATLAS experiment reported excesses at confidence levels between 2 and 2.8σ, and the CMS experiment found similar excesses at close to 3σ.
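For readers who want to see where figures like "99.7 percent" come from, here is the generic Gaussian conversion between sigma and probability. This is the textbook relation, not the experiments' full statistical machinery.

```python
# Convert a significance in sigma to probabilities, assuming a Gaussian.
from scipy.stats import norm

for n_sigma in (2, 3, 5):
    tail = norm.sf(n_sigma)               # one-sided chance of a fluke at least this large
    coverage = 1 - 2 * norm.sf(n_sigma)   # two-sided "percent certainty"
    print(f"{n_sigma} sigma: coverage {coverage:.5%}, fluke probability {tail:.2e}")

# 3 sigma -> ~99.73% coverage (the "evidence" threshold)
# 5 sigma -> fluke probability ~2.9e-07 (the "discovery" threshold)
```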

After the two experiments combine their results — a mathematical process much more arduous than simple addition — they could find themselves on new ground. They hope to do this in the next few months, at the latest by the winter conferences, said Kyle Cranmer, an assistant professor at New York University who presented the results for the ATLAS collaboration.

“The fact that these two experiments with different issues, different approaches and different modeling found similar results leads you to believe it might not be just a fluke,” Cranmer said. “This is what it would look like if it were real.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]CERN photograph courtesy Fabrice Coffrini/AFP/Getty Images. Tour de France image courtesy of NBCSports.[end-div]

Book Review: Linchpin. Seth Godin

Phew! Another heartfelt call to action from business blogger Seth Godin to become indispensable.

Author, public speaker, orthogonal thinker and internet marketing maven, Seth Godin makes a compelling case to the artist within us all to get off our backsides, ignore the risk averse “lizard brain” as he puts it, get creative, and give the gift of art. After all there is no way to win the “race to the bottom” wrought by commoditization of both product and labor.

Bear in mind, Godin uses “art” in its most widely used sense, not merely a canvas or a sculpture. Here, art is anything that its maker so creates; it may be a service just as well as an object. Importantly also, to be art it has to be given with the correct intent — as a gift (a transcendent, unexpected act that surpasses expectation).

Critics maintain that his latest bestseller is short on specifics, but indeed it should be. After all, if the process of creating art could be decomposed to an instruction manual it wouldn’t deliver art, it would deliver a Big Mac. So while we do not get a “7 point plan” that leads to creative nirvana, Godin does a good job, through his tireless combination of anecdote, repetition, historical analysis and social science, of convincing the “anonymous cogs in the machine” to think and act more like the insightful innovators that we can all become.

Godin rightly believes that the new world of work is rife with opportunity to add value through creativity, human connection and generosity, and this is the area where the indispensable artist gets to create his or her art, and to become a linchpin in the process. Godin’s linchpin is a rule-breaker, not a follower; a map-maker, not an order taker; a doer not a whiner.

In reading Linchpin we are reminded of the other side of the economy, in which we all unfortunately participate as well: the domain of commoditization, homogeneity and anonymity. This is the domain that artists do their utmost to avoid, and better still, subvert. Of course, this economy provides a benefit too – lower prices. However, a “Volkswagen-sized jar of pickles for $3” can only go so far. Commoditization undermines our very social fabric: it undermines our desire for uniqueness and special connection in a service or product that we purchase; it removes our dignity and respect when we allow ourselves to become a disposable part, a human cog, in the job machine. So, jettison the bland, the average, and the subservient; learn to take risks, face fear and become an indispensable, passionate, discerning artist – one who creates and one who gives.

The Allure of Steampunk Videotelephony and the Telephonoscope

Video telephony as imagined in 1910

A concept for the videophone surfaced just a couple of years after the telephone was patented in the United States. The telephonoscope, as it was called, first appeared in Victorian journals and early French science fiction in 1878.

In 1891 Alexander Graham Bell recorded his concept of an electrical radiophone, which discussed, “…the possibility of seeing by electricity”. He later went on to predict that, “…the day would come when the man at the telephone would be able to see the distant person to whom he was speaking”.

The world’s first videophone service entered operation in 1934, in Germany. The service was offered in select post offices linking several major German cities, and provided bi-directional voice and image on 8-inch square displays. In the U.S., AT&T launched the Picturephone in the mid-1960s. However, the costly equipment, high cost per call, and inconveniently located public video-telephone booths ensured that the service would never gain public acceptance. Similar to the U.S. experience, major telephone companies in France, Japan and Sweden had limited success with video-telephony during the 1970s and '80s.

Major improvements in video technology, telecommunications deregulation and increases in bandwidth during the 1980s-90s brought the price point down considerably. However, significant usage remained mostly within the realm of major corporations due to the still not insignificant investment in equipment and cost of bandwidth.

Fast forward to the 21st century. Skype and other IP (internet protocol) based services have made videochat commonplace and affordable, and in most cases free. It now seems that videochat has become almost ubiquitous. Recent moves into this space by tech heavyweights like Apple with FaceTime, Microsoft with its acquisition of Skype, Google with the video calling component of its Google Plus social network, and Facebook with its new video calling service will in all likelihood add further momentum.

Of course, while videochat is an effective communication tool it does have a cost in terms of personal and social consequences over its non-video cousin, the telephone. Next time you videochat rather than make a telephone call you will surely be paying greater attention to your bad hair and poor grooming, your crumpled clothes, uncoordinated pajamas or lack thereof, the unwanted visitors in the background shot, and the not so subtle back-lighting that focuses attention on the clutter in your office or bedroom. Doesn’t it make you harken back to the days of the simple telephone? Either that or perhaps you are drawn to the more alluring and elegant steampunk form of videochat as imagined by the Victorians, in the image above.

Book Review: The Psychopath Test. Jon Ronson

Hilarious and disturbing. I suspect Jon Ronson would strike a couple of checkmarks in the Hare PCL-R Checklist against my name for finding his latest work both hilarious and disturbing. Would this, perhaps, make me a psychopath?

Jon Ronson is the author of The Psychopath Test. The Hare PCL-R, named for its inventor, Canadian psychologist Bob Hare, is the gold standard in personality trait measurement for psychopathic disorder (officially known as Antisocial Personality Disorder).

Ronson’s book is a fascinating journey through the “madness industry”, covering psychiatrists, clinical psychologists, criminal scientists, criminal profilers, and of course their clients: patients, criminals and the “insane” at large. Fascinated by the psychopathic traits that the industry applies to the criminally insane, Ronson goes on to explore these behavior and personality traits in the general population. And, perhaps to no surprise, he finds that a not insignificant proportion of business leaders and others in positions of authority could be classified as “psychopaths” based on the standard PCL-R checklist.

Ronson’s stories are poignant. He tells us the tale of Tony, who feigned madness to avoid what he believed would have been a harsher prison sentence for a violent crime. Instead, Tony found himself in Broadmoor, a notorious maximum-security institution for the criminally insane. Twelve years on, Tony, still incarcerated, finds it impossible to convince anyone of his sanity, despite behaving quite normally. His doctors now admit that he was sane at the time of admission, but agree that he must have been nuts to feign insanity in the first place, and furthermore only someone who is insane could behave so “sanely” while surrounded by the insane!

Tony’s story and the other characters that Ronson illuminates in this work are thoroughly memorable, especially Al Dunlap, the empathy-poor former CEO of Sunbeam — perhaps one of the high-functioning psychopaths who live in our midst. Peppered throughout Ronson’s interviews with madmen and madwomen are his perpetual anxiety and self-reflection; he now has considerable diagnostic power and insight, versed in such tools as the PCL-R checklist. As a result, Ronson begins seeing “psychopaths” everywhere.

My only criticism of the book is that Jon Ronson should have made it 200 pages longer and focused much more on the “psychopathic” personalities that roam amongst us, not just those who live behind bars, and on the madness industry itself, now seemingly led by the major pharmaceutical companies.

The Cutting-Edge Physics of Jackson Pollock

Untitled, ca. 1948-49. Jackson Pollock

[div class=attrib]From Wired:[end-div]

Jackson Pollock, famous for his deceptively random-seeming drip paintings, took advantage of certain features of fluid dynamics years before physicists thought to study them.

“His particular painting technique essentially lets physics be a player in the creative process,” said physicist Andrzej Herczynski of Boston College, coauthor of a new paper in Physics Today that analyzes the physics in Pollock’s art. “To the degree that he lets physics take a role in the painting process, he is inviting physics to be a coauthor of his pieces.”

Pollock’s unique technique — letting paint drip and splatter on the floor rather than spreading it on a vertical canvas — revolutionized the art world in the 1940s. The resulting streaks and blobs look haphazard, but art historians and, more recently, physicists argue they’re anything but. Some have suggested that the snarls of paint have lasting appeal because they reflect the fractal geometry that shows up in clouds and coastlines.
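That fractal claim is, in principle, testable: the usual tool is the box-counting dimension, which measures how the number of grid boxes touched by paint grows as the boxes shrink. Below is a minimal sketch of the idea, not the method used in any of the published analyses; it assumes the painting has already been reduced to a binary NumPy array in which 1 marks paint and 0 marks bare canvas.

```python
# Minimal box-counting sketch for estimating the fractal dimension of a
# binary image of a drip painting. Loading and thresholding the image file
# are left out for brevity.
import numpy as np

def box_count(img, box_size):
    """Count boxes of side `box_size` containing at least one painted pixel."""
    h, w = img.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if img[i:i + box_size, j:j + box_size].any():
                count += 1
    return count

def fractal_dimension(img, box_sizes=(2, 4, 8, 16, 32, 64)):
    counts = [box_count(img, s) for s in box_sizes]
    # Slope of log(count) vs log(1/box size) estimates the box-counting dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled canvas should come out close to dimension 2.
print(fractal_dimension(np.ones((256, 256), dtype=int)))
```

A filled square comes out near dimension 2 and a straight line near 1; drip patterns land somewhere in between.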

Now, Boston College art historian Claude Cernuschi, Harvard mathematician Lakshminarayanan Mahadevan and Herczynski have turned the tools of physics on Pollock’s painting process. In what they believe is the first quantitative analysis of drip painting, the researchers derived an equation for how Pollock spread paint.

The team focused on the painting Untitled 1948-49, which features wiggling lines and curlicues of red paint. Those loops formed through a fluid instability called coiling, in which thick fluids fold onto themselves like coils of rope.

“People thought perhaps Pollock created this effect by wiggling his hand in a sinusoidal way, but he didn’t,” Herczynski said.

Coiling is familiar to anyone who’s ever squeezed honey onto toast, but it has only recently grabbed the attention of physicists. Recent studies have shown that the patterns fluids form as they fall depend on their viscosity and their speed. Viscous liquids fall in straight lines when moving quickly, but form loops, squiggles and figure eights when poured slowly, as seen in this video of honey falling on a conveyor belt.

The first physics papers that touched on this phenomenon appeared in the late 1950s, but Pollock knew all about it in 1948. Pollock was famous for seeking out different kinds of paints than anyone else in the art world was using, and for mixing his paints with solvents to make them thicker or thinner. Instead of using a brush or pouring paint directly from a can, he lifted paint with a rod and let it dribble onto the canvas in continuous streams. By moving his arm at different speeds and using paints of different thicknesses, he could control how much coiling showed up in the final painting.
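The speed dependence has a simple kinematic illustration. If a stream lays down coils of some radius and frequency while the pour point (the belt in the video, or the painter’s hand) translates sideways, the deposited trace is a cycloid-like curve: nearly straight when the translation speed exceeds the coiling speed, looping back on itself when it does not. The toy sketch below ignores the fluid dynamics entirely and is only meant to make that geometric point.

```python
# Toy kinematic sketch of a coiling stream deposited by a moving source:
# the trace is x(t) = V*t + r*cos(w*t), y(t) = r*sin(w*t). When V > r*w the
# curve is nearly straight; when V < r*w it loops, much like slow pouring.
import numpy as np

def trace(V, r=1.0, w=2.0 * np.pi, duration=5.0, steps=2000):
    t = np.linspace(0.0, duration, steps)
    x = V * t + r * np.cos(w * t)
    y = r * np.sin(w * t)
    return x, y

for V in (10.0, 2.0):                      # fast vs. slow translation of the source
    x, _ = trace(V)
    loops = np.any(np.diff(x) < 0)         # x reverses direction only if the trace loops
    print(f"V = {V}: loops back on itself? {loops}")
```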

[div class=attrib]More from theSource here.[end-div]

The Homogenous Culture of “Like”

[div class=attrib]Echo and Narcissus, John William Waterhouse [Public domain], via Wikimedia Commons[end-div]

About 12 months ago I committed suicide — internet suicide, that is. I closed my personal Facebook account after recognizing a couple of important issues. First, it was a colossal waste of time; time that I could and should be using more productively. Second, it became apparent that following, belonging and agreeing with others through trivial “wall” status-in-a-can postings and the now-pervasive “like” button was nothing other than a declaration of mindless group-think and a curious way to maintain social standing. So, my choice was clear: become part of a group with similar interests, like-minded activities, the same politics, parallel beliefs, common likes and dislikes; or revert to my own weirdly independent path. I chose the latter, rejecting the road towards a homogeneity of ideas and a points-based system of instant self-esteem.

This facet of the Facebook ecosystem has an effect similar to the filter bubble that I described in a previous post, The Technology of Personalization and the Bubble Syndrome. In both cases, my explicit choices on Facebook, such as which friends I follow or which content I “like”, and my implicit browsing behaviors increasingly filter what I see and don’t see, narrowing the world of ideas to which I am exposed. This cannot be good.
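For what it’s worth, the narrowing mechanism is easy to caricature in a few lines: any feed that ranks items by similarity to what you have already “liked” will, by construction, keep serving you more of the same. The ranking rule and topics below are invented purely for illustration; they are not Facebook’s actual algorithm.

```python
# Toy illustration of like-driven narrowing: items whose topic matches things
# already "liked" are boosted, so the feed drifts toward more of the same.
from collections import Counter

liked_topics = Counter({"politics_a": 5, "cats": 2})   # past "likes", by topic

feed = [
    {"title": "Another take you already agree with", "topic": "politics_a"},
    {"title": "A view from the other side", "topic": "politics_b"},
    {"title": "Yet more cats", "topic": "cats"},
]

def score(item):
    # Rank purely by how often this topic has been liked before.
    return liked_topics.get(item["topic"], 0)

for item in sorted(feed, key=score, reverse=True):
    print(score(item), item["title"])
```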

So, although I may incur the wrath of author Neil Strauss for including an excerpt of his recent column below, I cannot help but “like” what he has to say. More importantly, he does a much more eloquent job of describing how this culture commoditizes social relationships and, dare I say it, lowers the barrier to entry for narcissists to grow and fine-tune their skills.

[div class=attrib]By Neil Strauss for the Wall Street Journal:[end-div]

If you happen to be reading this article online, you’ll notice that right above it, there is a button labeled “like.” Please stop reading and click on “like” right now.

Thank you. I feel much better. It’s good to be liked.

Don’t forget to comment on, tweet, blog about and StumbleUpon this article. And be sure to “+1” it if you’re on the newly launched Google+ social network. In fact, if you don’t want to read the rest of this article, at least stay on the page for a few minutes before clicking elsewhere. That way, it will appear to the site analytics as if you’ve read the whole thing.

Once, there was something called a point of view. And, after much strife and conflict, it eventually became a commonly held idea in some parts of the world that people were entitled to their own points of view.

Unfortunately, this idea is becoming an anachronism. When the Internet first came into public use, it was hailed as a liberation from conformity, a floating world ruled by passion, creativity, innovation and freedom of information. When it was hijacked first by advertising and then by commerce, it seemed like it had been fully co-opted and brought into line with human greed and ambition.

But there was one other element of human nature that the Internet still needed to conquer: the need to belong. The “like” button began on the website FriendFeed in 2007, appeared on Facebook in 2009, began spreading everywhere from YouTube to Amazon to most major news sites last year, and has now been officially embraced by Google as the agreeable, supportive and more status-conscious “+1.” As a result, we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.

Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.

Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.

“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.

[div class=attrib]More from theSource here.[end-div]

Undiscovered

[div class=attrib]From Eurozine:[end-div]

Neurological and Darwinistic strands in the philosophy of consciousness see human beings as no more than our evolved brains. Avoiding naturalistic explanations of human beings’ fundamental difference from other animals requires openness to more expansive approaches, argues Raymond Tallis.

For several decades I have been arguing against what I call biologism. This is the idea, currently dominant within secular humanist circles, that humans are essentially animals (or at least much more beastly than has been hitherto thought) and that we need therefore to look to the biological sciences, and only there, to advance our understanding of human nature. As a result of my criticism of this position I have been accused of being a Cartesian dualist, who thinks that the mind is some kind of a ghost in the machinery of the brain. Worse, it has been suggested that I am opposed to Darwinism, to neuroscience or to science itself. Worst of all, some have suggested that I have a hidden religious agenda. For the record, I regard neuroscience (which was my own area of research) as one of the greatest monuments of the human intellect; I think Cartesian dualism is a lost cause; and I believe that Darwin’s theory is supported by overwhelming evidence. Nor do I have a hidden religious agenda: I am an atheist humanist. And this is in fact the reason why I have watched the rise of biologism with such dismay: it is a consequence of the widespread assumption that the only alternative to a supernatural understanding of human beings is a strictly naturalistic one that sees us as just another kind of beast and, ultimately, as being less conscious agents than pieces of matter stitched into the material world.

This is to do humanity a gross disservice, as I think we are so much more than gifted chimps. Unpacking the most “ordinary” moment of human life reveals knowledge, skills, emotions, intuitions, a sense of past and future and of an infinitely elaborated world, that are not to be found elsewhere in the living world.

Biologism has two strands: “Neuromania” and “Darwinitis”. Neuromania arises out of the belief that human consciousness is identical with neural activity in certain parts of the brain. It follows from this that the best way to investigate what we humans truly are, to understand the origins of our beliefs, our predispositions, our morality and even our aesthetic pleasures, will be to peer into the brains of human subjects using the latest scanning technology. This way we shall know what is really going on when we are having experiences, thinking thoughts, feeling emotions, remembering memories, making decisions, being wise or silly, breaking the law, falling in love and so on.

The other strand is Darwinitis, rooted in the belief that evolutionary theory not only explains the origin of the species H. sapiens – which it does, of course – but also explains humans as they are today; that people are at bottom the organisms forged by the processes of natural selection and nothing more.

[div class=attrib]More from theSource here.[end-div]

Scientific Evidence for Indeterminism

[div class=attrib]From Evolutionary Philosophy:[end-div]

The advantage of being a materialist is that so much of our experience seems to point to a material basis for reality. Idealists usually have to appeal to some inner knowing as the justification of their faith that mind, not matter, is the foundation of reality. Unfortunately the appeal to inner knowing is exactly what a materialist has trouble with in the first place.

Charles Sanders Peirce was a logician and a scientist first and a philosopher second. He thought like a scientist, and as he developed his evolutionary philosophy his reasons for believing in it were very logical and scientific. One of the early insights that led him to his understanding of an evolving universe was his realization that the state of our world, and its future, was not necessarily predetermined.

One conclusion that materialism tends to lead to is a belief that ‘nothing comes from nothing.’ Everything comes from some form of matter or interaction between material things. Nothing just emerges spontaneously. Everything is part of an ongoing chain of cause and effect. The question of how the chain of cause and effect started is generally felt to be best left to the realm of metaphysics and unsuitable for scientific investigation.

And so the image of a materially based universe tends to lead to a deterministic account of reality. You start with something and then that something unravels according to immutable laws. As an image, picture this: a large bucket filled with pink and green tennis balls, and two smaller buckets that are empty. This arrangement represents the starting point of the universe. The natural laws of this universe dictate that individual tennis balls will be removed from the large bucket and placed in one of the two smaller ones. If the ball that is removed is pink it goes in the left-hand bucket, and if it is green it goes in the right-hand bucket. In this simple model the end state of the universe is going to be that the large bucket will be empty, the left-hand bucket will be filled with pink tennis balls and the right-hand bucket will be filled with green tennis balls. The outcome of the process is predetermined by the initial conditions and the laws governing the subsequent activity.
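A few lines of code make the point about this toy universe explicit: however the balls happen to be arranged at the start, the fixed sorting rule always produces the same end state. This is just a restatement of the paragraph above, not a claim about any real physical system.

```python
# The tennis-ball universe from the paragraph above: a fixed starting state
# plus an immutable sorting rule yields exactly one possible end state.
import random

def run_universe(seed):
    rng = random.Random(seed)
    large = ["pink"] * 10 + ["green"] * 10
    rng.shuffle(large)                      # an arbitrary initial arrangement
    left, right = [], []
    while large:
        ball = large.pop()
        (left if ball == "pink" else right).append(ball)
    return len(left), len(right)

# However the balls start out arranged, the outcome is identical.
print({run_universe(seed) for seed in range(100)})   # prints {(10, 10)}
```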

A belief in this kind of determinism seems to be constantly reinforced for us through our ongoing experience with the material universe. Go ahead: pick up a rock, hold it up and then let it go. It will fall. Every single time, it will fall. It is predetermined that a rock that is held up in the air and then dropped will fall. Punch a wall. It will hurt, every single time. Over and over again, our experience of everyday reality seems to reinforce the fact that we live in a universe governed exactly by immutable laws.

[div class=attrib]More from theSource here.[end-div]

Brilliant, but Distant: Most Far-Flung Known Quasar Offers Glimpse into Early Universe

[div class=attrib]From Scientific American:[end-div]

Peering far across space and time, astronomers have located a luminous beacon aglow when the universe was still in its infancy. That beacon, a bright astrophysical object known as a quasar, shines with the luminosity of 63 trillion suns as gas falling into a supermassive black hole compresses, heats up and radiates brightly. It is farther from Earth than any other known quasar—so distant that its light, emitted 13 billion years ago, is only now reaching Earth. Because of its extreme luminosity and record-setting distance, the quasar offers a unique opportunity to study the conditions of the universe as it underwent an important transition early in cosmic history.

By the time the universe was one billion years old, the once-neutral hydrogen gas atoms in between galaxies had been almost completely stripped of their electrons (ionized) by the glow of the first massive stars. But the full timeline of that process, known as re-ionization because it separated protons and electrons, as they had been in the first 380,000 years post–big bang, is somewhat uncertain. Quasars, with their tremendous intrinsic brightness, should make for excellent markers of the re-ionization process, acting as flashlights to illuminate the intergalactic medium. But quasar hunters working with optical telescopes had only been able to see back as far as 870 million years after the big bang, when the intergalactic medium’s transition from neutral to ionized was almost complete. (The universe is now 13.75 billion years old.) Beyond that point, a quasar’s light has been so stretched, or redshifted, by cosmic expansion that it no longer falls in the visible portion of the electromagnetic spectrum but rather in the longer-wavelength infrared.

Daniel Mortlock, an astrophysicist at Imperial College London, and his colleagues used that fact to their advantage. The researchers looked for objects that showed up in a large-area infrared sky survey but not in a visible-light survey covering the same area of sky, essentially isolating the high-redshift objects. They could thus discover a quasar, known as ULAS J1120+0641, at redshift 7.085, corresponding to a time just 770 million years after the big bang. That places the newfound quasar about 100 million years earlier in cosmic history than the previous record holder, which was at redshift 6.44. Mortlock and his colleagues report their finding in the June 30 issue of Nature. (Scientific American is part of Nature Publishing Group.)
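The wavelength stretching described above is just the cosmological redshift relation $\lambda_{\text{obs}} = (1 + z)\,\lambda_{\text{rest}}$. As a rough worked example, taking the hydrogen Lyman-alpha line at 121.6 nm (a standard quasar feature) as the assumed tracer:

$$
\lambda_{\text{obs}} \approx (1 + 7.085) \times 121.6\ \text{nm} \approx 983\ \text{nm},
$$

well beyond the roughly 700 nm red edge of the visible band, which is why ULAS J1120+0641 shows up in an infrared survey but not in a visible-light one.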

[div class=attrib]More from theSource here.[end-div]

New Tevatron collider result may help explain the matter-antimatter asymmetry in the universe

[div class=attrib]From Symmetry Breaking:[end-div]

About a year ago, the DZero collaboration at Fermilab published a tantalizing result in which the universe unexpectedly showed a preference for matter over antimatter. Now the collaboration has more data, and the evidence for this effect has grown stronger.

The result is extremely exciting: The question of why our universe should consist solely of matter is one of the burning scientific questions of our time. Theory predicts that matter and antimatter were made in equal quantities. If something hadn’t slightly favored matter over antimatter, our universe would consist of a bath of photons and little else. Matter wouldn’t exist.

The Standard Model predicts a value near zero for one of the parameters that is associated with the difference between the production of muons and antimuons in B meson decays. The DZero results from 2010 and 2011 differ from zero and are consistent with each other. The vertical bars of the measurements indicate their uncertainty. 

The 2010 measurement looked at muons and antimuons emerging from the decays of neutral mesons containing bottom quarks, which is a source that scientists have long expected to be a fruitful place to study the behavior of matter and antimatter under high-energy conditions. DZero scientists found a 1 percent difference between the production of pairs of muons and pairs of antimuons in B meson decays at Fermilab’s Tevatron collider. Like all measurements, that measurement had an uncertainty associated with it. Specifically, there was about a 0.07 percent chance that the measurement could come from a random fluctuation of the data recorded. That’s a tiny probability, but since DZero makes thousands of measurements, scientists expect to see the occasional rare fluctuation that turns out to be nothing.

During the last year, the DZero collaboration has taken more data and refined its analysis techniques. In addition, other scientists have raised questions and requested additional cross-checks. One concern was whether the muons and antimuons are actually coming from the decay of B mesons, rather than some other source.

Now, after incorporating almost 50 percent more data and dozens of cross-checks, DZero scientists are even more confident in the strength of their result. The probability that the observed effect is from a random fluctuation has dropped quite a bit and now is only 0.005 percent. DZero scientists will present the details of their analysis in a seminar geared toward particle physicists later today.

Scientists are a cautious bunch and require a high level of certainty to claim a discovery. At the level of certainty achieved in the summer of 2010, particle physicists can claim evidence for an unexpected phenomenon. A claim of discovery requires a higher level of certainty.

If the earlier measurement had been a random fluctuation, scientists would expect the significance of the new result to shrink, not grow. Instead, the improvement is exactly what scientists expect if the effect is real. But the probability that the observed effect is a fluctuation is still too high to claim a discovery. For a discovery, particle physicists require that probability to be less than 0.00005 percent.
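The probabilities quoted here map onto the “sigma” language particle physicists usually use. Assuming the standard one-sided Gaussian convention (an assumption on my part, since the article quotes only percentages), the three numbers correspond to roughly 3.2, 3.9 and 4.9 standard deviations, which is why the first counts as “evidence” (the traditional 3-sigma bar) while only the last sits at the 5-sigma discovery threshold. A quick way to check the correspondence:

```python
# Convert the quoted fluctuation probabilities into Gaussian "sigma"
# equivalents, using the one-sided tail convention.
from scipy.stats import norm

quoted = [
    ("2010 result", 7e-4),            # 0.07 percent
    ("2011 result", 5e-5),            # 0.005 percent
    ("discovery threshold", 5e-7),    # 0.00005 percent
]

for label, p in quoted:
    # isf(p) returns the z-value whose upper tail probability equals p.
    print(f"{label}: p = {p:.0e}  ->  {norm.isf(p):.1f} sigma")
```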

The new result suggests that DZero is hot on the trail of a crucial clue in one of the defining questions of all time: Why are we here at all?

[div class=attrib]More from theSource here.[end-div]