All posts by Mike

Quantum Computer Leap

The practical science behind quantum computers continues to make exciting progress. Quantum computers promise, in theory, immense gains in power and speed through the use of atomic scale parallel processing.

[div class=attrib]From the Observer:[end-div]

The reality of the universe in which we live is an outrage to common sense. Over the past 100 years, scientists have been forced to abandon a theory in which the stuff of the universe constitutes a single, concrete reality in exchange for one in which a single particle can be in two (or more) places at the same time. This is the universe as revealed by the laws of quantum physics and it is a model we are forced to accept – we have been battered into it by the weight of the scientific evidence. Without it, we would not have discovered and exploited the tiny switches present in their billions on every microchip, in every mobile phone and computer around the world. The modern world is built using quantum physics: through its technological applications in medicine, global communications and scientific computing it has shaped the world in which we live.

Although modern computing relies on the fidelity of quantum physics, the action of those tiny switches remains firmly in the domain of everyday logic. Each switch can be either “on” or “off”, and computer programs are implemented by controlling the flow of electricity through a network of wires and switches: the electricity flows through closed switches and is blocked by open ones. The result is a plethora of extremely useful devices that process information in a fantastic variety of ways.
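
A minimal illustration of the point above (my own sketch, not from the article): a program really is just a network of on/off switches, and the elementary logic gates fall out of how those switches are wired together.

```python
# Toy model of switch-based computation: two switches wired in series
# conduct only when both are on (AND); two wired in parallel conduct
# when either is on (OR). Classical programs reduce to networks of
# gates like these.

def series(a: bool, b: bool) -> bool:
    """Current flows only if both switches conduct."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Current flows if either branch conducts."""
    return a or b

for a in (False, True):
    for b in (False, True):
        print(f"a={a!s:<5} b={b!s:<5} series (AND) = {series(a, b)!s:<5} "
              f"parallel (OR) = {parallel(a, b)}")
```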

Modern “classical” computers seem to have almost limitless potential – there is so much we can do with them. But there is an awful lot we cannot do with them too. There are problems in science that are of tremendous importance but which we have no hope of solving, not ever, using classical computers. The trouble is that some problems require so much information processing that there simply aren’t enough atoms in the universe to build a switch-based computer to solve them. This isn’t an esoteric matter of mere academic interest – classical computers can’t ever hope to model the behaviour of some systems that contain even just a few tens of atoms. This is a serious obstacle to those who are trying to understand the way molecules behave or how certain materials work – without the ability to build computer models, they are hampered in their efforts. One example is the field of high-temperature superconductivity. Certain materials are able to conduct electricity “for free” at surprisingly high temperatures (still pretty cold, though, at well below -100 degrees Celsius). The trouble is, nobody really knows how they work and that seriously hinders any attempt to make a commercially viable technology. The difficulty in simulating physical systems of this type arises whenever quantum effects are playing an important role and that is the clue we need to identify a possible way to make progress.
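
A rough back-of-the-envelope sketch (mine, not the article's) of why such simulations defeat switch-based machines: describing the joint quantum state of n two-level particles takes 2^n complex amplitudes, and the memory required outruns any conceivable hardware long before n reaches the few hundred particles mentioned below.

```python
# Memory needed to store the full state vector of n two-level quantum
# systems on a classical computer: 2**n complex amplitudes at 16 bytes
# each. The universe-scale comparison is an order-of-magnitude figure.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80

def state_vector_bytes(n: int) -> int:
    return (2 ** n) * 16

for n in (10, 30, 50, 300):
    b = state_vector_bytes(n)
    if b < 1e12:
        verdict = "manageable on today's hardware"
    elif b < ATOMS_IN_OBSERVABLE_UNIVERSE:
        verdict = "hopeless for any real machine"
    else:
        verdict = "more bytes than atoms in the observable universe"
    print(f"{n:>3} particles -> {b:.3e} bytes ({verdict})")
```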

It was American physicist Richard Feynman who, in 1981, first recognised that nature evidently does not need to employ vast computing resources to manufacture complicated quantum systems. That means if we can mimic nature then we might be able to simulate these systems without the prohibitive computational cost. Simulating nature is already done every day in science labs around the world – simulations allow scientists to play around in ways that cannot be realised in an experiment, either because the experiment would be too difficult or expensive or even impossible. Feynman’s insight was that simulations that inherently include quantum physics from the outset have the potential to tackle those otherwise impossible problems.

Quantum simulations have, in the past year, really taken off. The ability to delicately manipulate and measure systems containing just a few atoms is a requirement of any attempt at quantum simulation and it is thanks to recent technical advances that this is now becoming possible. Most recently, in an article published in the journal Nature last week, physicists from the US, Australia and South Africa have teamed up to build a device capable of simulating a particular type of magnetism that is of interest to those who are studying high-temperature superconductivity. Their simulator is esoteric. It is a small pancake-like layer less than 1 millimetre across made from 300 beryllium atoms that is delicately disturbed using laser beams… and it paves the way for future studies into quantum magnetism that will be impossible using a classical computer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A crystal of beryllium ions confined by a large magnetic field at the US National Institute of Standards and Technology’s quantum simulator. The outermost electron of each ion is a quantum bit (qubit), and here they are fluorescing blue, which indicates they are all in the same state. Photograph courtesy of Britton/NIST, Observer.[end-div]

Nanotech: Bane and Boon

An insightful opinion on the benefits and perils of nanotechnology from essayist and naturalist, Diane Ackerman.

[div class=attrib]From the New York Times:[end-div]

“I SING the body electric,” Walt Whitman wrote in 1855, inspired by the novelty of useful electricity, which he would live to see power streetlights and telephones, locomotives and dynamos. In “Leaves of Grass,” his ecstatic epic poem of American life, he depicted himself as a live wire, a relay station for all the voices of the earth, natural or invented, human or mineral. “I have instant conductors all over me,” he wrote. “They seize every object and lead it harmlessly through me… My flesh and blood playing out lightning to strike what is hardly different from myself.”

Electricity equipped Whitman and other poets with a scintillation of metaphors. Like inspiration, it was a lightning flash. Like prophetic insight, it illuminated the darkness. Like sex, it tingled the flesh. Like life, it energized raw matter. Whitman didn’t know that our cells really do generate electricity, that the heart’s pacemaker relies on such signals and that billions of axons in the brain create their own electrical charge (equivalent to about a 60-watt bulb). A force of nature himself, he admired the range and raw power of electricity.

Deeply as he believed the vow “I sing the body electric” — a line sure to become a winning trademark — I suspect one of nanotechnology’s recent breakthroughs would have stunned him. A team at the University of Exeter in England has invented the lightest, supplest, most diaphanous material ever made for conducting electricity, a dream textile named GraphExeter, which could revolutionize electronics by making it fashionable to wear your computer, cellphone and MP3 player. Only one atom thick, it’s an ideal fabric for street clothes and couture lines alike. You could start your laptop by plugging it into your jeans, recharge your cellphone by plugging it into your T-shirt. Then, not only would your cells sizzle with electricity, but even your clothing would chime in.

I don’t know if a fully electric suit would upset flight electronics, pacemakers, airport security monitors or the brain’s cellular dispatches. If you wore an electric coat in a lightning storm, would the hairs on the back of your neck stand up? Would you be more likely to fall prey to a lightning strike? How long will it be before a jokester plays the sound of one-hand-clapping from a mitten? How long before late-night hosts riff about electric undies? Will people tethered to recharging poles haunt the airport waiting rooms? Will it become hip to wear flashing neon ads, quotes and designs — maybe a name in a luminous tattoo?

Another recent marvel of nanotechnology promises to alter daily life, too, but this one, despite its silver lining, strikes me as wickedly dangerous, though probably inevitable. As a result, it’s bound to inspire labyrinthine laws and a welter of patents and to ignite bioethical debates.

Nano-engineers have developed a way to coat both hard surfaces (like hospital bed rails, doorknobs and furniture) and also soft surfaces (sheets, gowns and curtains) with microscopic nanoparticles of silver, an element known to kill microbes. You’d think the new nano-coating would offer a silver bullet, be a godsend to patients stricken with hospital-acquired sepsis and pneumonia, and to doctors fighting what has become a nightmare of antibiotic-resistant micro-organisms that can kill tens of thousands of people a year.

It does, and it is. That’s the problem. It’s too effective. Most micro-organisms are harmless, many are beneficial, but some are absolutely essential for the environment and human life. Bacteria were the first life forms on the planet, and we owe them everything. Our biochemistry is interwoven with theirs. Swarms of bacteria blanket us on the outside, other swarms colonize our insides. Kill all the gut bacteria, essential for breaking down large molecules, and digestion slows.

Friendly bacteria aid the immune system. They release biotin, folic acid and vitamin K; help eliminate heavy metals from the body; calm inflammation; and prevent cancers. During childbirth, a baby picks up beneficial bacteria in the birth canal. Nitrogen-fixing bacteria ensure healthy plants and ecosystems. We use bacteria to decontaminate sewage and also to create protein-rich foods like kefir and yogurt.

How tempting for nanotechnology companies, capitalizing on our fears and fetishes, to engineer superbly effective nanosilver microbe-killers, deodorants and sanitizers of all sorts for home and industry.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Technorati.[end-div]

The Wantologist

This may sound like another job from the future, but “wantologists” wander among us in 2012.

[div class=attrib]From the New York Times:[end-div]

IN the sprawling outskirts of San Jose, Calif., I find myself at the apartment door of Katherine Ziegler, a psychologist and wantologist. Could it be, I wonder, that there is such a thing as a wantologist, someone we can hire to figure out what we want? Have I arrived at some final telling moment in my research on outsourcing intimate parts of our lives, or at the absurdist edge of the market frontier?

A willowy woman of 55, Ms. Ziegler beckons me in. A framed Ph.D. degree in psychology from the University of Illinois hangs on the wall, along with an intricate handmade quilt and a collage of images clipped from magazines — the back of a child’s head, a gnarled tree, a wandering cat — an odd assemblage that invites one to search for a connecting thread.

After a 20-year career as a psychologist, Ms. Ziegler expanded her practice to include executive coaching, life coaching and wantology. Originally intended to help business managers make purchasing decisions, wantology is the brainchild of Kevin Kreitman, an industrial engineer who set up a two-day class to train life coaches to apply this method to individuals in private life. Ms. Ziegler took the course and was promptly certified in the new field.

Ms. Ziegler explains that the first step in thinking about a “want,” is to ask your client, “ ‘Are you floating or navigating toward your goal?’ A lot of people float. Then you ask, ‘What do you want to feel like once you have what you want?’ ”

She described her experience with a recent client, a woman who lived in a medium-size house with a small garden but yearned for a bigger house with a bigger garden. She dreaded telling her husband, who had long toiled at renovations on their present home, and she feared telling her son, who she felt would criticize her for being too materialistic.

Ms. Ziegler took me through the conversation she had with this woman: “What do you want?”

“A bigger house.”

“How would you feel if you lived in a bigger house?”

“Peaceful.”

“What other things make you feel peaceful?”

“Walks by the ocean.” (The ocean was an hour’s drive away.)

“Do you ever take walks nearer where you live that remind you of the ocean?”

“Certain ones, yes.”

“What do you like about those walks?”

“I hear the sound of water and feel surrounded by green.”

This gentle line of questions nudged the client toward a more nuanced understanding of her own desire. In the end, the woman dedicated a small room in her home to feeling peaceful. She filled it with lush ferns. The greenery encircled a bubbling slate-and-rock tabletop fountain. Sitting in her redesigned room in her medium-size house, the woman found the peace for which she’d yearned.

I was touched by the story. Maybe Ms. Ziegler’s client just needed a good friend who could listen sympathetically and help her work out her feelings. Ms. Ziegler provided a service — albeit one with a wacky name — for a fee. Still, the mere existence of a paid wantologist indicates just how far the market has penetrated our intimate lives. Can it be that we are no longer confident enough to identify even our most ordinary desires without a professional to guide us?

Is the wantologist the tail end of a larger story? Over the last century, the world of services has changed greatly.

A hundred — or even 40 — years ago, human eggs and sperm were not for sale, nor were wombs for rent. Online dating companies, nameologists, life coaches, party animators and paid graveside visitors did not exist.

Nor had a language developed that so seamlessly melded village and market — as in “Rent-a-Mom,” “Rent-a-Dad,” “Rent-a-Grandma,” “Rent-a-Friend” — insinuating itself, half joking, half serious, into our culture. The explosion in the number of available personal services says a great deal about changing ideas of what we can reasonably expect from whom. In the late 1940s, there were 2,500 clinical psychologists licensed in the United States. By 2010, there were 77,000 — and an additional 50,000 marriage and family therapists.

[div class=attrib]Read the entire article after the jump.[end-div]

How Religions Are Born: Church of Jedi

May the Fourth was Star Wars Day. Why? Say, “May the Fourth” slowly while pretending to lisp slightly, and you’ll understand. Appropriately, Matt Cresswen over at the Guardian took this day to review the growing Jedi religion in the UK.

Would that make George Lucas God?

[div class=attrib]From the Guardian:[end-div]

Today [May 4] is Star Wars Day, being May the Fourth. (Say the date slowly, several times.) Around the world, film buffs, storm troopers and Jedi are gathering to celebrate one of the greatest science fiction romps of all time. It would be easy to let the fan boys enjoy their day and be done with it. However, Jediism is a growing religion in the UK. Although the results of the 2001 census, in which 390,000 recipients stated their religion as Jedi, have been widely interpreted as a pop at the government, the UK does actually have serious Jedi.

For those of you who, like BBC producer Bill Dare, have never seen Star Wars, the Jedi are “good” characters from the films. They draw from a mystical entity binding the universe, called “the Force”. Sporting hoodies, the Jedi are generally altruistic, swift-footed and handy with a light sabre. Their enemies, Emperor Palpatine, Darth Vader and other cohorts, use the dark side of the Force. By tapping into its powers, they command armies of demented droids, kill Jedi and are capable of wiping out entire planets.

This week, Chi-Pa Amshe from the Church of Jediism in Anglesey, Wales, emailed me with some responses to questions. He said Jediism was growing and that they were gaining hundreds of members each month. The church made the news three years ago, after its founder, Daniel Jones, had a widely reported run-in with Tesco.

Chi-Pa Amshe, speaking as a spokesperson for the Jedi council (Falkna Kar, Anzai Kooji Cutpa and Daqian Xiong), believes that Jediism can merge with other belief systems, rather like a bolt-on accessory.

“Many of our members are in fact both Christian and Jedi,” he says. “We can no more understand the Force and our place within it than a gear in a clock could comprehend its function in moving the hands across the face. I’d like to point out that each of our members interprets their beliefs through the prism of their own lives and although we offer guidance and support, ultimately like with the Qur’an, it is up to them to find what they need and choose their own path.”

Meeting up as a church is hard, the council explained, and members rely heavily on Skype and Facebook. They have an annual physical meeting, “where the church council is available for face-to-face questions and guidance”. They also support charity events and attend computer gaming conventions.

Meanwhile, in New Zealand, a web-based group called the Jedi Church believes that Jediism has always been around.

It states: “The Jedi religion is just like the sun, it existed before a popular movie gave it a name, and now that it has a name, people all over the world can share their experiences of the Jedi religion, here in the Jedi church.”

There are many other Jedi groups on the web, although Chi-Pa Amshe said some were “very unpleasant”. The dark side, perhaps.

[div class=attrib]Read the entire article after the jump.[end-div]

Google: Please Don’t Be Evil

Google has been variously praised and derided for its corporate mantra, “Don’t Be Evil”. For those who like to believe that Google has good intentions, recent events strain that assumption. The company was found to have been snooping on and collecting data from personal Wi-Fi routers. Is this the case of a lone wolf or of a corporate strategy?

[div class=attrib]From Slate:[end-div]

Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would assuage your fears about the company that manages so much of our personal data?

These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.

While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use the system to speed up their location-monitoring systems. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
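
To make the wardriving payoff concrete, here is a minimal sketch (my own, with hypothetical BSSIDs and coordinates) of how a device without GPS could estimate its position from a Wi-Fi location database: take the routers it can hear and compute a signal-strength-weighted centroid of their known positions.

```python
# Approximate a device's location as the signal-strength-weighted
# centroid of the known positions of nearby routers. All BSSIDs,
# coordinates and RSSI readings below are made up for illustration.

from typing import Dict, List, Tuple

KNOWN_ROUTERS: Dict[str, Tuple[float, float]] = {
    "aa:bb:cc:00:00:01": (39.2904, -76.6122),   # BSSID -> (lat, lon)
    "aa:bb:cc:00:00:02": (39.2910, -76.6130),
    "aa:bb:cc:00:00:03": (39.2898, -76.6115),
}

def estimate_position(scan: List[Tuple[str, int]]) -> Tuple[float, float]:
    """scan: (bssid, rssi_dbm) pairs; stronger signals get larger weights."""
    total = lat = lon = 0.0
    for bssid, rssi in scan:
        if bssid not in KNOWN_ROUTERS:
            continue
        weight = 10 ** (rssi / 10.0)        # dBm -> linear scale
        r_lat, r_lon = KNOWN_ROUTERS[bssid]
        lat += weight * r_lat
        lon += weight * r_lon
        total += weight
    if total == 0:
        raise ValueError("no known routers in scan")
    return lat / total, lon / total

print(estimate_position([("aa:bb:cc:00:00:01", -45),
                         ("aa:bb:cc:00:00:02", -60)]))
```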

But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was within the vicinity, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.

Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.

Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?

Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.

Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Fastcompany.[end-div]

Creativity and Immorality

[div class=attrib]From Scientific American:[end-div]

In the mid-1990s, Apple Computer was a dying company.  Microsoft’s Windows operating system was overwhelmingly favored by consumers, and Apple’s attempts to win back market share by improving the Macintosh operating system were unsuccessful.  After several years of debilitating financial losses, the company chose to purchase a fledgling software company called NeXT.  Along with purchasing the rights to NeXT’s software, this move allowed Apple to regain the services of one of the company’s founders, the late Steve Jobs.  Under the guidance of Jobs, Apple returned to profitability and is now the largest technology company in the world, with the creativity of Steve Jobs receiving much of the credit.

However, despite the widespread positive image of Jobs as a creative genius, he also has a dark reputation for encouraging censorship, “losing sight of honesty and integrity”, belittling employees, and engaging in other morally questionable actions. These harshly contrasting images of Jobs raise the question of why a CEO held in such near-universal positive regard could also be the same one accused of engaging in such contemptible behavior.  The answer, it turns out, may have something to do with the aspect of Jobs which is so admired by so many.

In a recent paper published in the Journal of Personality and Social Psychology, researchers at Harvard and Duke Universities demonstrate that creativity can lead people to behave unethically.  In five studies, the authors show that creative individuals are more likely to be dishonest, and that individuals induced to think creatively were more likely to be dishonest. Importantly, they showed that this effect is not explained by any tendency for creative people to be more intelligent, but rather that creativity leads people to more easily come up with justifications for their unscrupulous actions.

In one study, the authors administered a survey to employees at an advertising agency.  The survey asked the employees how likely they were to engage in various kinds of unethical behaviors, such as taking office supplies home or inflating business expense reports.  The employees were also asked to report how much creativity was required for their job.  Further, the authors asked the executives of the company to provide creativity ratings for each department within the company.

Those who said that their jobs required more creativity also tended to self-report a greater likelihood of unethical behavior.  And if the executives said that a particular department required more creativity, the individuals in that department tended to report greater likelihoods of unethical behavior.

The authors hypothesized that it is creativity which causes unethical behavior by allowing people the means to justify their misdeeds, but it is hard to say for certain whether this is correct given the correlational nature of the study.  It could just as easily be true, after all, that unethical behavior leads people to be more creative, or that there is something else which causes both creativity and dishonesty, such as intelligence.  To explore this, the authors set up an experiment in which participants were induced into a creative mindset and then given the opportunity to cheat.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock.[end-div]

Your Tween Online

Many parents with children in the pre-teenage years probably have a containment policy restricting them from participating on adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.

[div class=attrib]From the WSJ:[end-div]

Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.

Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.

It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.

Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.

Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.

As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.

Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.

Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.

“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.

Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.

[div class=attrib]Read the entire article after the jump.[end-div]

Job of the Future: Personal Data Broker

Pause for a second, and think of all the personal data that companies have amassed about you. Then think about the billions that these companies make in trading this data to advertisers, information researchers and data miners. There are credit bureaus with details of your financial history since birth; social networks with details of everything you and your friends say and (dis)like; GPS-enabled services that track your every move; search engines that trawl your searches; medical companies with your intimate health data; security devices that monitor your movements; and online retailers with all your purchase transactions and wish-lists.

Now think of a business model that puts you in charge of your own personal data. This may not be as far-fetched as it seems, especially as the backlash grows against the increasing consolidation of personal data in the hands of an ever-smaller cadre of increasingly powerful players.

[div class=attrib]From Technology Review:[end-div]

Here’s a job title made for the information age: personal data broker.

Today, people have no choice but to give away their personal information—sometimes in exchange for free networking on Twitter or searching on Google, but other times to third-party data-aggregation firms without realizing it at all.

“There’s an immense amount of value in data about people,” says Bernardo Huberman, senior fellow at HP Labs. “That data is being collected all the time. Anytime you turn on your computer, anytime you buy something.”

Huberman, who directs HP Labs’ Social Computing Research Group, has come up with an alternative—a marketplace for personal information—that would give individuals control of and compensation for the private tidbits they share, rather than putting it all in the hands of companies.

In a paper posted online last week, Huberman and coauthor Christina Aperjis propose something akin to a New York Stock Exchange for personal data. A trusted market operator could take a small cut of each transaction and help arrive at a realistic price for a sale.

“There are two kinds of people. Some people who say, ‘I’m not going to give you my data at all, unless you give me a million bucks.’ And there are a lot of people who say, ‘I don’t care, I’ll give it to you for little,’ ” says Huberman. He’s tested this the academic way, through experiments that involved asking men and women to share how much they weigh for a payment.

On his proposed market, a person who highly values her privacy might choose an option to sell her shopping patterns for $10, but at a big risk of not finding a buyer. Alternatively, she might sell the same data for a guaranteed payment of 50 cents. Or she might opt out and keep her privacy entirely.
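
The two options above amount to a choice between a risky high price and a certain low one. As a purely illustrative sketch (the 3% sale probability is my assumption, not a figure from the paper), the trade-off can be framed as an expected-value comparison:

```python
# Compare the two pricing options as expected values. The sale
# probability for the risky option is an assumed, illustrative figure.

def expected_payout(price: float, p_sale: float) -> float:
    return price * p_sale

risky = expected_payout(price=10.00, p_sale=0.03)     # high price, few buyers
guaranteed = expected_payout(price=0.50, p_sale=1.0)  # low price, always sells

print(f"risky option, expected value:      ${risky:.2f}")
print(f"guaranteed option, expected value: ${guaranteed:.2f}")
# A seller who still picks the risky option is revealing how much
# she values her privacy relative to the certain 50-cent payment.
```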

You won’t find any kind of opportunity like this today. But with Internet companies making billions of dollars selling our information, fresh ideas and business models that promise users control over their privacy are gaining momentum. Startups like Personal and Singly are working on these challenges already. The World Economic Forum recently called an individual’s data an emerging “asset class.”

Huberman is not the first to investigate a personal data marketplace, and there would seem to be significant barriers—like how to get companies that already collect data for free to participate. But, he says, since the pricing options he outlines gauge how a person values privacy and risk, they address at least two big obstacles to making such a market function.

[div class=attrib]Read the entire article after the jump.[end-div]

Spacetime as an Emergent Phenomenon

A small, but growing, idea in theoretical physics and cosmology is that spacetime may be emergent. That is, spacetime emerges from something much more fundamental, in much the same way that our perception of temperature emerges from the motion and characteristics of underlying particles.

[div class=attrib]More on this new front in our quest to answer the most basic of questions from FQXi:[end-div]

Imagine if nothing around you was real. And, no, not in a science-fiction Matrix sense, but in an actual science-fact way.

Technically, our perceived reality is a gigantic series of approximations: The tables, chairs, people, and cell phones that we interact with every day are actually made up of tiny particles—as all good schoolchildren learn. From the motion and characteristics of those particles emerge the properties that we see and feel, including color and temperature. Though we don’t see those particles, because they are so much smaller than the phenomena our bodies are built to sense, they govern our day-to-day existence.

Now, what if spacetime is emergent too? That’s the question that Joanna Karczmarek, a string theorist at the University of British Columbia, Vancouver, is attempting to answer. As a string theorist, Karczmarek is familiar with imagining invisible constituents of reality. String theorists posit that at a fundamental level, matter is made up of unthinkably tiny vibrating threads of energy that underlie subatomic particles, such as quarks and electrons. Most string theorists, however, assume that such strings dance across a pre-existing and fundamental stage set by spacetime. Karczmarek is pushing things a step further, by suggesting that spacetime itself is not fundamental, but made of more basic constituents.

Having carried out early research in atomic, molecular and optical physics, Karczmarek shifted into string theory because she “was more excited by areas where less was known”—and looking for the building blocks from which spacetime arises certainly fits that criterion. The project, funded by a $40,000 FQXi grant, is “high risk but high payoff,” Karczmarek says.

Although one of only a few string theorists to address the issue, Karczmarek is part of a growing movement in the wider physics community to create a theory that shows spacetime is emergent. (See, for instance, “Breaking the Universe’s Speed Limit.”) The problem really comes into focus for those attempting to combine quantum mechanics with Einstein’s theory of general relativity and thus is traditionally tackled directly by quantum gravity researchers, rather than by string theorists, Karczmarek notes.

That may change though. Nathan Seiberg, a string theorist at the Institute for Advanced Study (IAS) in Princeton, New Jersey, has found good reasons for his stringy colleagues to believe that at least space—if not spacetime—is emergent. “With space we can sort of imagine how it might work,” Seiberg says. To explain how, Seiberg uses an everyday example—the emergence of an apparently smooth surface of water in a bowl. “If you examine the water at the level of particles, there is no smooth surface. It looks like there is, but this is an approximation,” Seiberg says. Similarly, he has found examples in string theory where some spatial dimensions emerge when you take a step back from the picture (arXiv:hep-th/0601234v1). “At shorter distances it doesn’t look like these dimensions are there because they are quantum fluctuations that are very rapid,” Seiberg explains. “In fact, the notion of space ceases to make sense, and eventually if you go to shorter and shorter distances you don’t even need it for the formulation of the theory.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Nature.[end-div]

Bike+GPS=Map Art

Frank Jacobs over at Strange Maps has found another “out in leftfield” map. This cartographic invention is courtesy of an artist who “paints” using his GPS-enabled bicycle.

[div class=attrib]From Strange Maps:[end-div]

GPS technology is opening up exciting new hybrid forms of mapping and art. Or in this case: cycling, mapping and art. The maps on this page are the product of Michael Wallace, a Baltimore-based artist who uses his bike as a paintbrush, and the city as his canvas.

As Wallace traces shapes and forms across Baltimore’s street grid, the GPS technology that tracks his movements fixes the fluid pattern of his pedal strokes onto a map. The results are what Wallace calls GPX images, or ‘virtual geoglyphs’ [1].

These massive images, created over the course of three riding seasons so far, “continue to generate happiness, fitness and imagination through planning the physical activity of ‘digital spray-painting’ my ‘local canvas’ with the help of tracking satellites 12,500 miles above.”
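
For readers curious how a ride turns into a picture, here is a minimal sketch (mine, not Wallace's actual workflow) of the mechanics: a GPS track exported as GPX is just an ordered list of latitude/longitude points, and plotting them reproduces the route as a line drawing. The file name is a placeholder.

```python
# Read track points from a GPX file and plot longitude vs latitude,
# turning the recorded route into a line drawing. "ride.gpx" is a
# hypothetical file; any GPX export uses the same <trkpt lat=".." lon=".."> tags.

import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def load_track(path: str):
    root = ET.parse(path).getroot()
    points = root.findall(".//gpx:trkpt", NS)
    lats = [float(p.get("lat")) for p in points]
    lons = [float(p.get("lon")) for p in points]
    return lons, lats

lons, lats = load_track("ride.gpx")
plt.plot(lons, lats, linewidth=1)
plt.axis("equal")   # preserve the drawing's proportions
plt.axis("off")     # hide the axes: the route itself is the image
plt.savefig("gpx_image.png", dpi=200)
```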

Wallace’s portfolio by now is filled with dozens of GPX images, ranging from pictures of a toilet to the Titanic. They even include a map of the US – traced on the map of Baltimore. How’s that for self-reference? Or for Bawlmer [2] hubris?

[div class=attrib]Read the entire article after the jump.[end-div]

Vampire Wedding and the Moral Molecule

Attend a wedding. Gather the hundred or so guests, and take their blood. Take samples, that is. Then, measure the levels of a hormone called oxytocin. This is where neuroeconomist Paul Zak’s story begins — with a molecular messenger thought to be responsible for facilitating trust and empathy in all our intimate relationships.

[div class=attrib]From “The Moral Molecule” by Paul J. Zak, to be published May 10, courtesy of the Wall Street Journal:[end-div]

Could a single molecule—one chemical substance—lie at the very center of our moral lives?

Research that I have done over the past decade suggests that a chemical messenger called oxytocin accounts for why some people give freely of themselves and others are coldhearted louts, why some people cheat and steal and others you can trust with your life, why some husbands are more faithful than others, and why women tend to be nicer and more generous than men. In our blood and in the brain, oxytocin appears to be the chemical elixir that creates bonds of trust not just in our intimate relationships but also in our business dealings, in politics and in society at large.

Known primarily as a female reproductive hormone, oxytocin controls contractions during labor, which is where many women encounter it as Pitocin, the synthetic version that doctors inject in expectant mothers to induce delivery. Oxytocin is also responsible for the calm, focused attention that mothers lavish on their babies while breast-feeding. And it is abundant, too, on wedding nights (we hope) because it helps to create the warm glow that both women and men feel during sex, a massage or even a hug.

Since 2001, my colleagues and I have conducted a number of experiments showing that when someone’s level of oxytocin goes up, he or she responds more generously and caringly, even with complete strangers. As a benchmark for measuring behavior, we relied on the willingness of our subjects to share real money with others in real time. To measure the increase in oxytocin, we took their blood and analyzed it. Money comes in conveniently measurable units, which meant that we were able to quantify the increase in generosity by the amount someone was willing to share. We were then able to correlate these numbers with the increase in oxytocin found in the blood.
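
The analysis described above boils down to correlating two columns of numbers, one pair per subject. A minimal sketch, with invented figures purely to show the shape of the calculation (these are not the study's data):

```python
# Correlate each subject's rise in blood oxytocin with the amount of
# money they chose to share. All numbers are invented for illustration.

import numpy as np

oxytocin_increase_pct = np.array([5, 12, 30, 45, 8, 60, 22, 38])
dollars_shared        = np.array([1,  2,  5,  7, 1,  9,  4,  6])

r = np.corrcoef(oxytocin_increase_pct, dollars_shared)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # near +1: more oxytocin, more sharing
```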

Later, to be certain that what we were seeing was true cause and effect, we sprayed synthetic oxytocin into our subjects’ nasal passages—a way to get it directly into their brains. Our conclusion: We could turn the behavioral response on and off like a garden hose. (Don’t try this at home: Oxytocin inhalers aren’t available to consumers in the U.S.)

More strikingly, we found that you don’t need to shoot a chemical up someone’s nose, or have sex with them, or even give them a hug in order to create the surge in oxytocin that leads to more generous behavior. To trigger this “moral molecule,” all you have to do is give someone a sign of trust. When one person extends himself to another in a trusting way—by, say, giving money—the person being trusted experiences a surge in oxytocin that makes her less likely to hold back and less likely to cheat. Which is another way of saying that the feeling of being trusted makes a person more…trustworthy. Which, over time, makes other people more inclined to trust, which in turn…

If you detect the makings of an endless loop that can feed back onto itself, creating what might be called a virtuous circle—and ultimately a more virtuous society—you are getting the idea.

Obviously, there is more to it, because no one chemical in the body functions in isolation, and other factors from a person’s life experience play a role as well. Things can go awry. In our studies, we found that a small percentage of subjects never shared any money; analysis of their blood indicated that their oxytocin receptors were malfunctioning. But for everyone else, oxytocin orchestrates the kind of generous and caring behavior that every culture endorses as the right way to live—the cooperative, benign, pro-social way of living that every culture on the planet describes as “moral.” The Golden Rule is a lesson that the body already knows, and when we get it right, we feel the rewards immediately.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]CPK model of the Oxytocin molecule C43H66N12O12S2. Courtesy of Wikipedia.[end-div]

Corporatespeak: Lingua Franca of the Internet

Author Lewis Lapham reminds us of the phrase made (in)famous by Emperor Charles V:

“I speak Spanish to God, Italian to women, French to men, and German to my horse.”

So, what of the language of the internet? Again, Lapham offers a fitting and damning summary, this time courtesy of a lesser mortal, critic George Steiner:

“The true catastrophe of Babel is not the scattering of tongues. It is the reduction of human speech to a handful of planetary, ‘multinational’ tongues…Anglo-American standardized vocabularies” and grammar shaped by “military technocratic megalomania” and “the imperatives of commercial greed.”

More from the keyboard of Lewis Lapham on how the communicative promise of the internet is being usurped by commerce and the “lowest common denominator”.

[div class=attrib]From TomDispatch:[end-div]

But in which language does one speak to a machine, and what can be expected by way of response? The questions arise from the accelerating datastreams out of which we’ve learned to draw the breath of life, posed in consultation with the equipment that scans the flesh and tracks the spirit, cues the ATM, the GPS, and the EKG, arranges the assignations on Match.com and the high-frequency trades at Goldman Sachs, catalogs the pornography and drives the car, tells us how and when and where to connect the dots and thus recognize ourselves as human beings.

Why then does it come to pass that the more data we collect—from Google, YouTube, and Facebook—the less likely we are to know what it means?

The conundrum is in line with the late Marshall McLuhan’s noticing 50 years ago the presence of “an acoustic world,” one with “no continuity, no homogeneity, no connections, no stasis,” a new “information environment of which humanity has no experience whatever.” He published Understanding Media in 1964, proceeding from the premise that “we become what we behold,” that “we shape our tools, and thereafter our tools shape us.”

Media were to be understood as “make-happen agents” rather than as “make-aware agents,” not as art or philosophy but as systems comparable to roads and waterfalls and sewers. Content follows form; new means of communication give rise to new structures of feeling and thought.

To account for the transference of the idioms of print to those of the electronic media, McLuhan examined two technological revolutions that overturned the epistemological status quo. First, in the mid-15th century, Johannes Gutenberg’s invention of moveable type, which deconstructed the illuminated wisdom preserved on manuscript in monasteries, encouraged people to organize their perceptions of the world along the straight lines of the printed page. Second, in the 19th and 20th centuries, the applications of electricity (telegraph, telephone, radio, movie camera, television screen, eventually the computer), favored a sensibility that runs in circles, compressing or eliminating the dimensions of space and time, narrative dissolving into montage, the word replaced with the icon and the rebus.

Within a year of its publication, Understanding Media acquired the standing of Holy Scripture and made of its author the foremost oracle of the age. The New York Herald Tribune proclaimed him “the most important thinker since Newton, Darwin, Freud, Einstein, and Pavlov.” Although never at a loss for Delphic aphorism—”The electric light is pure information”; “In the electric age, we wear all mankind as our skin”—McLuhan assumed that he had done nothing more than look into the window of the future at what was both obvious and certain.

[div class=attrib]Read the entire article following the jump.[end-div]

Language as a Fluid Construct

Peter Ludlow, professor of philosophy at Northwestern University, has authored a number of fascinating articles on the philosophy of language and linguistics. Here he discusses his view of language as a dynamic, living organism. Literalists take note.

[div class=attrib]From the New York Times:[end-div]

There is a standard view about language that one finds among philosophers, language departments, pundits and politicians.  It is the idea that a language like English is a semi-stable abstract object that we learn to some degree or other and then use in order to communicate or express ideas and perform certain tasks.  I call this the static picture of language, because, even though it acknowledges some language change, the pace of change is thought to be slow, and what change there is, is thought to be the hard fought product of conflict.  Thus, even the “revisionist” picture of language sketched by Gary Gutting in a recent Stone column counts as static on my view, because the change is slow and it must overcome resistance.

Recent work in philosophy, psychology and artificial intelligence has suggested an alternative picture that rejects the idea that languages are stable abstract objects that we learn and then use.  According to the alternative “dynamic” picture, human languages are one-off things that we build “on the fly” on a conversation-by-conversation basis; we can call these one-off fleeting languages microlanguages.  Importantly, this picture rejects the idea that words are relatively stable things with fixed meanings that we come to learn. Rather, word meanings themselves are dynamic — they shift from microlanguage to microlanguage.

Shifts of meaning do not merely occur between conversations; they also occur within conversations — in fact conversations are often designed to help this shifting take place.  That is, when we engage in conversation, much of what we say does not involve making claims about the world but involves instructing our communicative partners how to adjust word meanings for the purposes of our conversation.

Say I tell my friend that I don’t care where I teach so long as the school is in a city.  My friend suggests that I apply to the University of Michigan and I reply “Ann Arbor is not a city.”  In doing this, I am not making a claim about the world so much as instructing my friend (for the purposes of our conversation) to adjust the meaning of “city” from official definitions to one in which places like Ann Arbor do not count as cities.

Word meanings are dynamic, but they are also underdetermined.  What this means is that there is no complete answer to what does and doesn’t fall within the range of a term like “red” or “city” or “hexagonal.”  We may sharpen the meaning and we may get clearer on what falls in the range of these terms, but we never completely sharpen the meaning.

This isn’t just the case for words like “city” but for all words, ranging from words for things, like “person” and “tree,” words for abstract ideas, like “art” and “freedom,” and words for crimes, like “rape” and “murder.” Indeed, I would argue that this is also the case with mathematical and logical terms like “parallel line” and “entailment.”  The meanings of these terms remain open to some degree or other, and are sharpened as needed when we make advances in mathematics and logic.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Leif Parsons / New York Times.[end-div]

Your Brain Today

Progress in neuroscience continues to accelerate, and one of the principal catalysts of this progress is neuroscientist David Eagleman. We excerpt a recent article about Eagleman’s research into, amongst other things, synaesthesia, sensory substitution, time perception, the neurochemical basis for attraction, and consciousness.

[div class=attrib]From the Telegraph:[end-div]

It ought to be quite intimidating, talking to David Eagleman. He is one of the world’s leading neuroscientists, after all, known for his work on time perception, synaesthesia and the use of neurology in criminal justice. But as anyone who has read his best-selling books or listened to his TED talks online will know, he has a gift for communicating complicated ideas in an accessible and friendly way — Brian Cox with an American accent.

He lives in Houston, Texas, with his wife and their two-month-old baby. When we Skype each other, he is sitting in a book-lined study and he doesn’t look as if his nights are being too disturbed by mewling. No bags under his eyes. In fact, with his sideburns and black polo shirt he looks much younger than his 41 years, positively boyish. His enthusiasm for his subject is boyish, too; as he warns me, he “speaks fast”.

He sure does. And he waves his arms around. We are talking about the minute calibrations and almost instantaneous assessments the brain makes when members of the opposite sex meet, one of many brain-related subjects covered in his book Incognito: The Secret Lives of the Brain, which is about to be published in paperback.

“Men are consistently more attracted to women with dilated eyes,” he says. “Because that corresponds with sexual excitement.”

Still, I say, not exactly a romantic discovery, is it? How does this theory go down with his wife? “Well she’s a neuroscientist like me so we joke about it all the time, like when I grow a beard. Women will always say they don’t like beards, but when you do the study it turns out they do, and the reason is it’s a secondary sex characteristic that indicates sexual development, the thing that separates the men from the boys.”

Indeed, according to Eagleman, we mostly run on unconscious autopilot. Our neural systems have been carved by natural selection to solve problems that were faced by our ancestors. Which brings me to another of his books, Why The Net Matters. As the father of children who spend a great deal of their time on the internet, I want to know if he thinks it is changing their brains.

“It certainly is,” he says, “especially in the way we seek information. When we were growing up it was all about ‘just in case’ information, the Battle of Hastings and so on. Now it is ‘just in time’ learning, where a kid looks something up online if he needs to know about it. This means kids today are becoming less good at memorising, but in other ways their method of learning is superior to ours because it targets neurotransmitters in the brain, ones that are related to curiosity, emotional salience and interactivity. So I think there might be some real advantages to where this is going. Kids are becoming faster at searching for information. When you or I read, our eyes scan down the page, but for a Generation-Y kid, their eyes will have a different set of movements, top, then side, then bottom and that is the layout of webpages.”

In many ways Eagleman’s current status as “the poster boy of science’s most fashionable field” (as the neuroscientist was described in a recent New Yorker profile) seems entirely apt given his own upbringing. His mother was a biology teacher, his father a psychiatrist who was often called upon to evaluate insanity pleas. Yet Eagleman says he wasn’t drawn to any of this. “Growing up, I didn’t see my career path coming at all, because in tenth grade I always found biology gross, dissecting rats and frogs. But in college I started reading about the brain and then I found myself consuming anything I could on the subject. I became hooked.”

Eagleman’s mother has described him as an “unusual child”. He wrote his first words at two, and at 12 he was explaining Einstein’s theory of relativity to her. He also liked to ask for a list of 400 random objects then repeat them back from memory, in reverse order. At Rice University, Houston, he majored in electrical engineering, but then took a sabbatical, joined the Israeli army as a volunteer, spent a semester at Oxford studying political science and literature and finally moved to LA to try and become a stand-up comedian. It didn’t work out and so he returned to Rice, this time to study neurolinguistics. After this came his doctorate and his day job as a professor running a laboratory at Baylor College of Medicine, Houston (he does his book writing at night, doesn’t have hobbies and has never owned a television).

I ask if he has encountered any snobbery within the scientific community for being an academic who has “dumbed down” by writing popular science books that spend months on the New York Times bestseller list? “I have to tell you, that was one of my concerns, and I can definitely find evidence of that. Online, people will sometimes say terrible things about me, but they are the exceptions that illustrate a more benevolent rule. I give talks on university campuses and the students there tell me they read my books because they synthesise large swathes of data in a readable way.”

He actually thinks there is an advantage for scientists in making their work accessible to non-scientists. “I have many tens of thousands of neuroscience details in my head and the process of writing about them and trying to explain them to an eighth grader makes them become clearer in my own mind. It crystallises them.”

I tell him that my copy of Incognito is heavily annotated and there is one passage where I have simply written a large exclamation mark. It concerns Eric Weihenmayer who, in 2001, became the first blind person to climb Mount Everest. Today he climbs with a grid of more than six hundred tiny electrodes in his mouth. This device allows him to see with his tongue. Although the tongue is normally a taste organ, its moisture and chemical environment make it a good brain-machine interface when a tingly electrode grid is laid on its surface. The grid translates a video input into patterns of electrical pulses, allowing the tongue to discern qualities usually ascribed to vision such as distance, shape, direction of movement and size.
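
As a rough sketch of the encoding idea (my own simplification, not the device's actual signal processing), each video frame can be average-pooled down to a coarse grid of pulse intensities matching the roughly six hundred electrodes described above; the 20 x 30 grid and the linear brightness-to-pulse mapping are assumptions.

```python
# Downsample a grayscale camera frame to a 20 x 30 grid of pulse
# intensities (0 = no pulse, 1 = strongest). Grid size and the linear
# mapping are illustrative assumptions.

import numpy as np

def frame_to_pulses(frame: np.ndarray, rows: int = 20, cols: int = 30) -> np.ndarray:
    """Average-pool a (H x W) frame with values 0-255 onto the electrode grid."""
    h, w = frame.shape
    pulses = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            block = frame[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            pulses[r, c] = block.mean() / 255.0
    return pulses

# A fake 240 x 320 frame with a bright square in the top-left corner:
frame = np.zeros((240, 320))
frame[:60, :80] = 255
print(frame_to_pulses(frame)[:5, :8].round(2))   # peek at the corner of the grid
```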

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of ALAMY / Telegraph.[end-div]

Cocktail Party Science and Multitasking


The hit drama Mad Men shows us that cocktail parties can be fun — colorful drinks and colorful conversations with a host of very colorful characters. Yet cocktail parties also highlight one of our limitations: the inability to multitask. We are single-threaded animals, despite the constant, simultaneous demands on our attention from every direction and on every one of our senses.

Melinda Beck over at the WSJ Health Journal summarizes recent research that shows the deleterious effects of our attempts to multitask — why it’s so hard and why it’s probably not a good idea anyway, especially while driving.

[div class=attrib]From the Wall Street Journal:[end-div]

You’re at a party. Music is playing. Glasses are clinking. Dozens of conversations are driving up the decibel level. Yet amid all those distractions, you can zero in on the one conversation you want to hear.

This ability to hyper-focus on one stream of sound amid a cacophony of others is what researchers call the “cocktail-party effect.” Now, scientists at the University of California, San Francisco have pinpointed where that sound-editing process occurs in the brain—in the auditory cortex just behind the ear, not in areas of higher thought. The auditory cortex boosts some sounds and turns down others so that when the signal reaches the higher brain, “it’s as if only one person was speaking alone,” says principal investigator Edward Chang.

These findings, published in the journal Nature last week, underscore why people aren’t very good at multitasking—our brains are wired for “selective attention” and can focus on only one thing at a time. That innate ability has helped humans survive in a world buzzing with visual and auditory stimulation. But we keep trying to push the limits with multitasking, sometimes with tragic consequences. Drivers talking on cellphones, for example, are four times as likely to get into traffic accidents as those who aren’t.

Many of those accidents are due to “inattentional blindness,” in which people can, in effect, turn a blind eye to things they aren’t focusing on. Images land on our retinas and are either boosted or played down in the visual cortex before being passed to the brain, just as the auditory cortex filters sounds, as shown in the Nature study last week. “It’s a push-pull relationship—the more we focus on one thing, the less we can focus on others,” says Diane M. Beck, an associate professor of psychology at the University of Illinois.

That people can be completely oblivious to things in their field of vision was demonstrated famously in the “Invisible Gorilla experiment” devised at Harvard in the 1990s. Observers are shown a short video of youths tossing a basketball and asked to count how often the ball is passed by those wearing white. Afterward, the observers are asked several questions, including, “Did you see the gorilla?” Typically, about half the observers failed to notice that someone in a gorilla suit walked through the scene. They’re usually flabbergasted because they’re certain they would have noticed something like that.

“We largely see what we expect to see,” says Daniel Simons, one of the study’s creators and now a professor of psychology at the University of Illinois. As he notes in his subsequent book, “The Invisible Gorilla,” the more attention a task demands, the less attention we can pay to other things in our field of vision. That’s why pilots sometimes fail to notice obstacles on runways and radiologists may overlook anomalies on X-rays, especially in areas they aren’t scrutinizing.

And it isn’t just that sights and sounds compete for the brain’s attention. All the sensory inputs vie to become the mind’s top priority.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Getty Images / Wall Street Journal.[end-div]

Tilt: The World in Miniature

Tilt-shift photography has been around for quite a while, primarily as a tool in high-end architectural photography. More recently, with the advent of affordable lens attachments for consumer cameras and of software post-processing in tools such as Photoshop and Instagram, tilt-shift has become more mainstream.

Tilt-shift combines two lens movements. Photographers tilt, or rotate, the lens plane relative to the image plane to control which part of the image stays in focus. They also shift the lens parallel to the image plane to re-position the subject in the frame (which usually reduces the convergence of parallel lines). Used appropriately, tilt-shift delivers a highly selective, narrow band of focus, and the resulting images give the illusion of a miniaturized landscape.
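
Much of the miniature look can also be faked in post-processing, which is essentially what the phone apps do. The Python sketch below, using the Pillow library, is only my approximation of that shortcut; the band position, blur radius and colour boost are arbitrary choices, not any particular app’s recipe. The idea: keep a horizontal band sharp, blur progressively away from it, and saturate the colours slightly.

from PIL import Image, ImageEnhance, ImageFilter

def fake_tilt_shift(path, focus_centre=0.6, focus_height=0.15, max_blur=8):
    """Approximate the tilt-shift 'miniature' look with a graduated blur.
    focus_centre: vertical position of the sharp band (0 = top, 1 = bottom).
    focus_height: half-height of the fully sharp band, as a fraction of the image.
    max_blur:     Gaussian blur radius used far from the band."""
    img = Image.open(path).convert("RGB")
    blurred = img.filter(ImageFilter.GaussianBlur(max_blur))

    # Vertical gradient mask: 0 (keep sharp) inside the band,
    # rising towards 255 (fully blurred) above and below it.
    width, height = img.size
    mask = Image.new("L", (width, height))
    for y in range(height):
        distance = abs(y / height - focus_centre)
        amount = max(0.0, distance - focus_height) / (1.0 - focus_height)
        mask.paste(int(min(255, 510 * amount)), (0, y, width, y + 1))

    composite = Image.composite(blurred, img, mask)
    # Toy-town scenes look the part with slightly exaggerated colour.
    return ImageEnhance.Color(composite).enhance(1.3)

# Example (hypothetical file names):
# fake_tilt_shift("beach_scene.jpg").save("beach_scene_miniature.jpg")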

[div class=attrib]More tilt-shift photographs from the Telegraph after the jump.[end-div]

[div class=attrib]Image: Brighton beach, on the south coast of Sussex, England. Courtesy of the Telegraph.[end-div]

Religious Art: From Faith or For Money?

Over the centuries many notable artists have painted religious scenes prompted or shaped by deep religious conviction; some painted to give voice to their own spirituality, others to mirror the faith of their time and community. Still others simply painted for fame or fortune, or both, or to stay in the good graces of their wealthy patrons and landlords.

This brings us to another thoughtful article from Jonathan Jones over at the Guardian.

[div class=attrib]From the Guardian:[end-div]

“To paint the things of Christ you must live with Christ,” said the 15th-century artist Fra Angelico. He knew what he was talking about – he was a Dominican monk of such exemplary virtue that in 1982 he was officially beatified by Pope John Paul II. He was also a truly great religious artist whose frescoes at San Marco in Florence have influenced modern artists such as Mark Rothko. But is all holy art that holy?

From the dark ages to the end of the 17th century, the vast majority of artistic commissions in Europe were religious. Around 1700 this somehow stopped, at least when it came to art anyone cares to look at now. The great artists of the 18th century, and since, worked for secular patrons and markets. But in all those centuries when Christianity defined art, its genres, its settings, its content, was every painter and sculptor totally sincerely faithful in every work of art? Or were some of them just doing what they had to do and finding pleasure in the craft?

This question relates to another. What is it like to live in a world where everyone is religious? It is often said it was impossible to even imagine atheism in the middle ages and the Renaissance. This is so different from modern times that people do not even try to imagine it. Modern Christians blithely imagine a connection when actually a universal church meant a mentality so different from modern “faith” that today’s believers are as remote from it as today’s non-believers. Among other things it meant that while some artists “lived with Christ” and made art that searched their souls, others enjoyed the colours, the drama, the rich effects of religious paintings without thinking too deeply about the meaning.

Here are two contrasting examples from the National Gallery. Zurbarán’s painting of St Francis in Meditation (1635-9) is a harrowing and profoundly spiritual work. The face of a kneeling friar is barely glimpsed in a darkness that speaks of inner searching, of the long night of the soul. This is a true Christian masterpiece. But compare it to Carlo Crivelli’s painting The Annunciation (1486) in the same museum. Crivelli’s picture is a feast for the eye. Potted plants, a peacock, elaborately decorated classical buildings – and is that a gherkin just added in at the front of the scene? – add up to a materialistic cornucopia of visual interest. What is the religious function of such detail? Art historians, who sometimes seem to be high on piety, will point to the allegorical meaning of everyday objects in Renaissance art. But that’s all nonsense. I am not saying the allegories do not exist – I am saying they do not matter much to the artist, his original audience or us. In reality, Crivelli is enjoying himself, enjoying the world, and he paints religious scenes because that’s what he got paid to paint.

By smothering the art of the past in a piety that in some cases may be woefully misplaced, its guardians do it a disservice. Is Crivelli a Christian artist? Not in any sense that is meaningful today. He loves the things of this life, not the next.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Annunciation with St Emidius, Crivelli Carlo, 1486. National Gallery, London. Courtesy of Wikipedia / National Gallery.[end-div]

Loneliness in the Age of Connectedness

Online social networks are a boon to researchers. As never before, social scientists are probing our connections, our innermost thoughts now made public, our networks of friends, and our loneliness. Some academics blame the likes of Facebook for turning our increasingly shallow “friendships” into a disposable, tradable commodity, and for ironically fostering isolation from deeper, more intimate connections. Others see Facebook merely as a mirror — we have, quite simply, made ourselves lonely, and our social networks instantly and starkly expose our isolation for all to see and “like”.

An insightful article by novelist Stephen Marche over at The Atlantic examines our self-imposed loneliness.

[div class=attrib]From the Atlantic:[end-div]

Yvette Vickers, a former Playboy playmate and B-movie star, best known for her role in Attack of the 50 Foot Woman, would have been 83 last August, but nobody knows exactly how old she was when she died. According to the Los Angeles coroner’s report, she lay dead for the better part of a year before a neighbor and fellow actress, a woman named Susan Savage, noticed cobwebs and yellowing letters in her mailbox, reached through a broken window to unlock the door, and pushed her way through the piles of junk mail and mounds of clothing that barricaded the house. Upstairs, she found Vickers’s body, mummified, near a heater that was still running. Her computer was on too, its glow permeating the empty space.

The Los Angeles Times posted a story headlined “Mummified Body of Former Playboy Playmate Yvette Vickers Found in Her Benedict Canyon Home,” which quickly went viral. Within two weeks, by Technorati’s count, Vickers’s lonesome death was already the subject of 16,057 Facebook posts and 881 tweets. She had long been a horror-movie icon, a symbol of Hollywood’s capacity to exploit our most basic fears in the silliest ways; now she was an icon of a new and different kind of horror: our growing fear of loneliness. Certainly she received much more attention in death than she did in the final years of her life. With no children, no religious group, and no immediate social circle of any kind, she had begun, as an elderly woman, to look elsewhere for companionship. Savage later told Los Angeles magazine that she had searched Vickers’s phone bills for clues about the life that led to such an end. In the months before her grotesque death, Vickers had made calls not to friends or family but to distant fans who had found her through fan conventions and Internet sites.

Vickers’s web of connections had grown broader but shallower, as has happened for many of us. We are living in an isolation that would have been unimaginable to our ancestors, and yet we have never been more accessible. Over the past three decades, technology has delivered to us a world in which we need not be out of contact for a fraction of a moment. In 2010, at a cost of $300 million, 800 miles of fiber-optic cable was laid between the Chicago Mercantile Exchange and the New York Stock Exchange to shave three milliseconds off trading times. Yet within this world of instant and absolute communication, unbounded by limits of time or space, we suffer from unprecedented alienation. We have never been more detached from one another, or lonelier. In a world consumed by ever more novel modes of socializing, we have less and less actual society. We live in an accelerating contradiction: the more connected we become, the lonelier we are. We were promised a global village; instead we inhabit the drab cul-de-sacs and endless freeways of a vast suburb of information.

At the forefront of all this unexpectedly lonely interactivity is Facebook, with 845 million users and $3.7 billion in revenue last year. The company hopes to raise $5 billion in an initial public offering later this spring, which will make it by far the largest Internet IPO in history. Some recent estimates put the company’s potential value at $100 billion, which would make it larger than the global coffee industry—one addiction preparing to surpass the other. Facebook’s scale and reach are hard to comprehend: last summer, Facebook became, by some counts, the first Web site to receive 1 trillion page views in a month. In the last three months of 2011, users generated an average of 2.7 billion “likes” and comments every day. On whatever scale you care to judge Facebook—as a company, as a culture, as a country—it is vast beyond imagination.

Despite its immense popularity, or more likely because of it, Facebook has, from the beginning, been under something of a cloud of suspicion. The depiction of Mark Zuckerberg, in The Social Network, as a bastard with symptoms of Asperger’s syndrome, was nonsense. But it felt true. It felt true to Facebook, if not to Zuckerberg. The film’s most indelible scene, the one that may well have earned it an Oscar, was the final, silent shot of an anomic Zuckerberg sending out a friend request to his ex-girlfriend, then waiting and clicking and waiting and clicking—a moment of superconnected loneliness preserved in amber. We have all been in that scene: transfixed by the glare of a screen, hungering for response.

When you sign up for Google+ and set up your Friends circle, the program specifies that you should include only “your real friends, the ones you feel comfortable sharing private details with.” That one little phrase, Your real friends—so quaint, so charmingly mothering—perfectly encapsulates the anxieties that social media have produced: the fears that Facebook is interfering with our real friendships, distancing us from each other, making us lonelier; and that social networking might be spreading the very isolation it seemed designed to conquer.

Facebook arrived in the middle of a dramatic increase in the quantity and intensity of human loneliness, a rise that initially made the site’s promise of greater connection seem deeply attractive. Americans are more solitary than ever before. In 1950, less than 10 percent of American households contained only one person. By 2010, nearly 27 percent of households had just one person. Solitary living does not guarantee a life of unhappiness, of course. In his recent book about the trend toward living alone, Eric Klinenberg, a sociologist at NYU, writes: “Reams of published research show that it’s the quality, not the quantity of social interaction, that best predicts loneliness.” True. But before we begin the fantasies of happily eccentric singledom, of divorcées dropping by their knitting circles after work for glasses of Drew Barrymore pinot grigio, or recent college graduates with perfectly articulated, Steampunk-themed, 300-square-foot apartments organizing croquet matches with their book clubs, we should recognize that it is not just isolation that is rising sharply. It’s loneliness, too. And loneliness makes us miserable.

We know intuitively that loneliness and being alone are not the same thing. Solitude can be lovely. Crowded parties can be agony. We also know, thanks to a growing body of research on the topic, that loneliness is not a matter of external conditions; it is a psychological state. A 2005 analysis of data from a longitudinal study of Dutch twins showed that the tendency toward loneliness has roughly the same genetic component as other psychological problems such as neuroticism or anxiety.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Photograph courtesy of Phillip Toledano / The Atlantic.[end-div]

Hitchcock

Alfred Hitchcock was a pioneer of modern cinema. His finely crafted movies introduced audiences to new levels of suspense, sexuality and violence. His work raised cinema to the level of great art.

This summer in London, the British Film Institute (BFI) is celebrating all things Hitchcockian by showing all 58 of his works, including newly restored prints of his early silent films, such as Blackmail.

[div class=attrib]From the Guardian:[end-div]

Alfred Hitchcock is to be celebrated like never before this summer, with a retrospective of all his surviving films and the premieres of his newly restored silent films – including Blackmail, which will be shown outside the British Museum.

The BFI on Tuesday announced details of its biggest ever project: celebrating the genius of a man who, it said, was as important to modern cinema as Picasso to modern art or Le Corbusier to modern architecture. Heather Stewart, the BFI’s creative director, said: “The idea of popular cinema somehow being capable of being great art at the same time as being entertaining is still a problem for some people. Shakespeare is on the national curriculum, Hitchcock is not.”

One of the highlights of the season will be the culmination of a three-year project to fully restore nine of the director’s silent films. It will involve The Pleasure Garden, Hitchcock’s first, being shown at Wilton’s Music Hall; The Ring at Hackney Empire, and Blackmail outside the British Museum, where the film’s climactic chase scene was filmed in 1929, both inside the building and on the roof.

Stewart said the restorations were spectacular and overdue. “We would find it very strange if we could not see Shakespeare’s early plays performed, or read Dickens’s early novels. But we’ve been quite satisfied as a nation that Hitchcock’s early films have not been seen in good quality prints on the big screen, even though – like Shakespearean and Dickensian – Hitchcockian has entered our language.”

The films, with new scores by composers including Nitin Sawhney, Daniel Patrick Cohen and Soweto Kinch, will be shown as part of the London 2012 Festival, the finale of the Cultural Olympiad.

Between August and October the BFI will show all 58 surviving Hitchcock films including his many films made in the UK – The 39 Steps, for example, and The Lady Vanishes – and those from his Hollywood years, from Rebecca in 1940 to Vertigo in 1958, The Birds in 1963 and his penultimate film, Frenzy, in 1972.

[div class=attrib]See more stills here, and read the entire article after the jump.[end-div]

[div class=attrib]Image: Robert Donat in The 39 Steps (1935), often hailed as the best of four film versions of John Buchan’s novel. Courtesy of BFI / Guardian.[end-div]

Wedding Photography

If you’ve been through a marriage or other formal ceremony you probably have an album of images that beautifully captured the day. You, your significant other, family and select friends will browse through the visual memories every so often. Doubtless you will have hired, for quite a handsome sum, a professional photographer and/or videographer to record all the important instants. However, somewhere you, or your photographer, will have a selection of “outtakes” that should never see the light of day, such as those described below.

[div class=attrib]From the Daily Telegraph:[end-div]

Thomas and Anneka Geary paid professional photographers Ian McCloskey and Nikki Carter £750 to cover what should have been the best day of their lives.

But they were stunned when the pictures arrived and included out of focus shots of the couple, the back of guests’ heads and a snap of the bride’s mother whose face was completely obscured by her hat.

Astonishingly, the photographers even failed to take a single frame of the groom’s parents.

One snap of the couple signing the marriage register also appears to feature a ghostly hand clutching a toy motorbike where the snappers tried to edit out Anneka’s three-year-old nephew Harry who was standing in the background.

The pictures of the evening do, attended by 120 guests, were also taken without flash because one of the photographers complained of being epileptic.

[div class=attrib]Read the entire article and browse through more images after the jump.[end-div]

[div class=attrib]Image: Tom, 32, a firefighter for Warwickshire Fire Service, said: “We received a CD from the wedding photographers but at first we thought it was a joke. Just about all of the pictures were out of focus or badly lit or just plain weird.” Courtesy of Daily Telegraph, Westgate Photography / SWNS.[end-div]

The Evolutionary Benefits of Middle Age

David Bainbridge, author of “Middle Age: A Natural History”, examines the benefits of middle age. Yes, really. For those of us in “middle age” it’s not surprising to see that this period is not limited to decline, disease and senility. Rather, it’s a pre-programmed redistribution of physical and mental resources designed to cope with our ever-increasing life spans.

[div class=attrib]From David Bainbridge over at New Scientist:[end-div]

As a 42-year-old man born in England, I can expect to live for about another 38 years. In other words, I can no longer claim to be young. I am, without doubt, middle-aged.

To some people that is a depressing realization. We are used to dismissing our fifth and sixth decades as a negative chapter in our lives, perhaps even a cause for crisis. But recent scientific findings have shown just how important middle age is for every one of us, and how crucial it has been to the success of our species. Middle age is not just about wrinkles and worry. It is not about getting old. It is an ancient, pivotal episode in the human life span, preprogrammed into us by natural selection, an exceptional characteristic of an exceptional species.

Compared with other animals, humans have a very unusual pattern to our lives. We take a very long time to grow up, we are long-lived, and most of us stop reproducing halfway through our life span. A few other species have some elements of this pattern, but only humans have distorted the course of their lives in such a dramatic way. Most of that distortion is caused by the evolution of middle age, which adds two decades that most other animals simply do not get.

An important clue that middle age isn’t just the start of a downward spiral is that it does not bear the hallmarks of general, passive decline. Most body systems deteriorate very little during this stage of life. Those that do deteriorate do so in ways that are very distinctive, rarely seen in other species and often abrupt.

For example, our ability to focus on nearby objects declines in a predictable way: Farsightedness is rare at 35 but universal at 50. Skin elasticity also decreases reliably and often surprisingly abruptly in early middle age. Patterns of fat deposition change in predictable, stereotyped ways. Other systems, notably cognition, barely change.

Each of these changes can be explained in evolutionary terms. In general, it makes sense to invest in the repair and maintenance only of body systems that deliver an immediate fitness benefit — that is, those that help to propagate your genes. As people get older, they no longer need spectacular visual acuity or mate-attracting, unblemished skin. Yet they do need their brains, and that is why we still invest heavily in them during middle age.

As for fat — that wonderfully efficient energy store that saved the lives of many of our hard-pressed ancestors — its role changes when we are no longer gearing up to produce offspring, especially in women. As the years pass, less fat is stored in depots ready to meet the demands of reproduction — the breasts, hips and thighs — or under the skin, where it gives a smooth, youthful appearance. Once our babymaking days are over, fat is stored in larger quantities and also stored more centrally, where it is easiest to carry about. That way, if times get tough we can use it for our own survival, thus freeing up food for our younger relatives.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Middle Age Couple Laughing. Courtesy of Cindi Matthews / Flickr.[end-div]

Heavy Metal Density

Heavy metal in the musical sense, not as in elements such as iron or manganese, is hugely popular in Finland and Iceland. It even pops up in Iran and Saudi Arabia.

[div class=attrib]Frank Jacobs over at Strange Maps tells us more.[end-div]

This map reflects the number of heavy metal bands per 100,000 inhabitants for each country in the world. It codes the result on a colour temperature scale, with blue indicating low occurrence, and red high occurrence. The data for this map is taken from the extensive Encyclopaedia Metallum, an online archive of metal music that lists bands per country, and provides some background by listing their subgenre (Progressive Death Metal, Symphonic Gothic Metal, Groove Metal, etc).

Even if you barely know your Def Leppard from your Deep Purple, you won’t be surprised by the obvious point of this map: Scandinavia is the world capital of heavy metal music. Leaders of the pack are Finland and Sweden, coloured with the hottest shade of red. With 2,825 metal bands listed in the Encyclopaedia Metallum, the figure for Finland works out to 54.3 bands per 100,000 Finns (for a total of 5.2 million inhabitants). Second is Sweden, with a whopping 3,398 band entries. For 9.1 million Swedes, that amounts to 37.3 metal bands per 100,000 inhabitants.

The next-hottest shade of red is coloured in by Norway and Iceland. The Icelandic situation is interesting: with only 71 bands listed, the country seems not particularly metal-oriented. But the total population of the North Atlantic island is a mere 313,000, which produces a result of 22.6 metal bands per 100,000 inhabitants. That’s almost double, relatively speaking, the score of Denmark, which comes in at 12.9 (708 metal bands for 5.5 million Danes).
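
A quick aside on the arithmetic, which is easy to reproduce for any country you care about: bands per 100,000 inhabitants is simply the band count divided by the population, multiplied by 100,000. A minimal Python sketch using the figures quoted above (the Encyclopaedia Metallum’s counts change constantly, so treat them as a snapshot):

# Band counts and rough populations as quoted in the article.
countries = {
    "Finland": (2825, 5_200_000),
    "Sweden":  (3398, 9_100_000),
    "Iceland": (71,     313_000),
    "Denmark": (708,  5_500_000),
}

for name, (bands, population) in sorted(countries.items(),
                                        key=lambda kv: kv[1][0] / kv[1][1],
                                        reverse=True):
    per_100k = bands / population * 100_000
    print(f"{name:8s} {per_100k:5.1f} metal bands per 100,000 inhabitants")

# Prints roughly: Finland 54.3, Sweden 37.3, Iceland 22.7, Denmark 12.9
# (Iceland's 71 bands / 313,000 people works out at 22.7; the article
# quotes it as 22.6).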

The following shades of colour, from dark orange to light yellow, are almost all found in North America, Europe and Australasia. Notable additions to this list of usual suspects are Israel and the three countries of Latin America’s Southern Cone: Chile, Argentina and Uruguay.

Some interesting variations in Europe: Portugal is much darker – i.e. much more metal-oriented – than its Iberian neighbour Spain, and Greece is a solid southern outpost of metal on an otherwise wishy-washy Balkan Peninsula.

On the other side of the scale, light blue indicates the worst – or at least loneliest – places to be a metal fan: Papua New Guinea, North Korea, Cambodia, Afghanistan, Yemen, and most of Africa outside its northern and southern fringe. According to the Encyclopaedia Metallum, there isn’t a single metal band in any of those countries.

[div class=attrib]Read the entire article after the jump.[end-div]

Why Do Some Videos Go Viral, and Others Not?

Some online videos and stories are seen by tens or hundreds of millions, yet others never see the light of day. Advertisers and reality-star wannabes search daily for the secret sauce that determines the huge success of one internet meme over many others. However, much to the frustration of the many agents of the “next big thing”, several fascinating new studies point to nothing more than simple randomness.

[div class=attrib]From the New Scientist:[end-div]

WHAT causes some photos, videos, and Twitter posts to spread across the internet like wildfire while others fall by the wayside? The answer may have little to do with the quality of the information. What goes viral may be completely arbitrary, according to a controversial new study of online social networks.

By analysing 120 million retweets – repostings of users’ messages on Twitter – by 12.5 million users of the social network, researchers at Indiana University, Bloomington, learned the mechanisms by which memes compete for user interest, and how information spreads.

Using this insight, the team built a computer simulation designed to mimic Twitter. In the simulation, each tweet or message was assigned the same value and retweets were performed at random. Despite this, some tweets became incredibly popular and were persistently reposted, while others were quickly forgotten.

The reason for this, says team member Filippo Menczer, is that the simulated users had a limited attention span and could only view a portion of the total number of tweets – as is the case in the real world. Tweets selected for retweeting would be more likely to be seen by a user and re-posted. After a few iterations, a tweet becomes significantly more prevalent than those not retweeted. Many users see the message and retweet it further.
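
That mechanism is simple enough to caricature in a few lines of code. The sketch below is emphatically not the Indiana group’s model; it is a toy version of the idea described above: every tweet is identical in quality, each simulated user sees only a short feed of recent tweets, and retweets are picked at random from that feed. The follower fan-out, attention span and new-tweet probability are invented parameters, yet a small number of tweets still ends up vastly more reposted than the rest.

import random
from collections import Counter

def simulate(n_agents=200, n_steps=20_000, attention_span=10,
             p_new=0.3, fanout=10, seed=1):
    """Toy meme competition: identical tweets, limited attention, random retweets.
    Returns a Counter of how many times each tweet was (re)posted."""
    random.seed(seed)
    feeds = [[] for _ in range(n_agents)]   # each agent sees only its own short feed
    counts = Counter()
    next_id = 0

    for _ in range(n_steps):
        author = random.randrange(n_agents)
        if random.random() < p_new or not feeds[author]:
            tweet, next_id = next_id, next_id + 1     # compose a brand-new tweet
        else:
            tweet = random.choice(feeds[author])      # retweet something seen recently
        counts[tweet] += 1

        # The post lands in a few random followers' feeds; older items drop out of view.
        for follower in random.sample(range(n_agents), fanout):
            feeds[follower].append(tweet)
            del feeds[follower][:-attention_span]

    return counts

counts = simulate()
print(counts.most_common(5))   # a handful of tweets dominate the counts
print(sum(1 for c in counts.values() if c == 1),
      "tweets were posted once and never picked up")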

“When a meme starts to get popular it displaces other memes; you start to pay attention to the popular meme and don’t pay attention to other things because you have only so much attention,” Menczer says. “It’s similar to when a big news story breaks, you don’t hear about other things that happened on that day.”

Katherine Milkman of the University of Pennsylvania in Philadelphia disagrees. “[Menczer’s study] says that all of the things that catch on could be truly random but it doesn’t say they have to be,” says Milkman, who co-authored a paper last year examining how emotions affect meme sharing.

Milkman’s study analysed 7000 articles that appeared in the New York Times over a three-month period. It found that articles that aroused readers’ emotions were more likely to end up on the website’s “most emailed” list. “Anything that gets you fired up, whether positive or negative, will lead you to share it more,” Milkman says.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets is a book by Nassim Nicholas Taleb. Courtesy of Wikipedia.[end-div]

Childhood Memory

[div class=attrib]From Slate:[end-div]

Last August, I moved across the country with a child who was a few months shy of his third birthday. I assumed he’d forget his old life—his old friends, his old routine—within a couple of months. Instead, over a half-year later, he remembers it in unnerving detail: the Laundromat below our apartment, the friends he ran around naked with, my wife’s co-workers. I just got done with a stint pretending to be his long-abandoned friend Iris—at his direction.

We assume children don’t remember much, because we don’t remember much about being children. As far as I can tell, I didn’t exist before the age of 5 or so—which is how old I am in my earliest memory, wandering around the Madison, Wis. farmers market in search of cream puffs. But developmental research now tells us that Isaiah’s memory isn’t extraordinary. It’s ordinary. Children remember.

Up until the 1980s, almost no one would have believed that Isaiah still remembers Iris. It was thought that babies and young toddlers lived in a perpetual present: All that existed was the world in front of them at that moment. When Jean Piaget conducted his famous experiments on object permanence—in which once an object was covered up, the baby seemed to forget about it—Piaget concluded that the baby had been unable to store the memory of the object: out of sight, out of mind.

The paradigm of the perpetual present has now itself been forgotten. Even infants are aware of the past, as many remarkable experiments have shown. Babies can’t speak but they can imitate, and if shown a series of actions with props, even 6-month-old infants will repeat a three-step sequence a day later. Nine-month-old infants will repeat it a month later.

The conventional wisdom for older children has been overturned, too. Once, children Isaiah’s age were believed to have memories of the past but nearly no way to organize those memories. According to Patricia Bauer, a professor of psychology at Emory who studies early memory, the general consensus was that a 3-year-old child’s memory was a jumble of disorganized information, like your email inbox without any sorting function: “You can’t sort them by name, you can’t sort them by date, it’s just all your email messages.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Summer school memories. Retouched New York World-Telegram photograph by Walter Albertin. Courtesy of Wikimedia.[end-div]

Creativity and Failure at School

[div class=attrib]From the Wall Street Journal:[end-div]

Most of our high schools and colleges are not preparing students to become innovators. To succeed in the 21st-century economy, students must learn to analyze and solve problems, collaborate, persevere, take calculated risks and learn from failure. To find out how to encourage these skills, I interviewed scores of innovators and their parents, teachers and employers. What I learned is that young Americans learn how to innovate most often despite their schooling—not because of it.

Though few young people will become brilliant innovators like Steve Jobs, most can be taught the skills needed to become more innovative in whatever they do. A handful of high schools, colleges and graduate schools are teaching young people these skills—places like High Tech High in San Diego, the New Tech high schools (a network of 86 schools in 16 states), Olin College in Massachusetts, the Institute of Design (d.school) at Stanford and the MIT Media Lab. The culture of learning in these programs is radically at odds with the culture of schooling in most classrooms.

In most high-school and college classes, failure is penalized. But without trial and error, there is no innovation. Amanda Alonzo, a 32-year-old teacher at Lynbrook High School in San Jose, Calif., who has mentored two Intel Science Prize finalists and 10 semifinalists in the last two years—more than any other public school science teacher in the U.S.—told me, “One of the most important things I have to teach my students is that when you fail, you are learning.” Students gain lasting self-confidence not by being protected from failure but by learning that they can survive it.

The university system today demands and rewards specialization. Professors earn tenure based on research in narrow academic fields, and students are required to declare a major in a subject area. Though expertise is important, Google’s director of talent, Judy Gilbert, told me that the most important thing educators can do to prepare students for work in companies like hers is to teach them that problems can never be understood or solved in the context of a single academic discipline. At Stanford’s d.school and MIT’s Media Lab, all courses are interdisciplinary and based on the exploration of a problem or new opportunity. At Olin College, half the students create interdisciplinary majors like “Design for Sustainable Development” or “Mathematical Biology.”

Learning in most conventional education settings is a passive experience: The students listen. But at the most innovative schools, classes are “hands-on,” and students are creators, not mere consumers. They acquire skills and knowledge while solving a problem, creating a product or generating a new understanding. At High Tech High, ninth graders must develop a new business concept—imagining a new product or service, writing a business and marketing plan, and developing a budget. The teams present their plans to a panel of business leaders who assess their work. At Olin College, seniors take part in a yearlong project in which students work in teams on a real engineering problem supplied by one of the college’s corporate partners.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of NY Daily News.[end-div]

Science and Politics

The tension between science, religion and politics that began several millennia ago continues unabated.

[div class=attrib]From ars technica:[end-div]

In the US, science has become a bit of a political punching bag, with a number of presidential candidates accusing climatologists of fraud, even as state legislators seek to inject phony controversies into science classrooms. It’s enough to make one long for the good old days when science was universally respected. But did those days ever actually exist?

A new look at decades of survey data suggests that there was never a time when science was universally respected, but one political group in particular—conservative voters—has seen its confidence in science decline dramatically over the last 30 years.

The researcher behind the new work, North Carolina’s Gordon Gauchat, figures there are three potential trajectories for the public’s view of science. One possibility is that the public, appreciating the benefits of the technological advances that science has helped to provide, would show a general increase in its affinity for science. An alternative prospect is that this process will inevitably peak, either because there are limits to how admired a field can be, or because a more general discomfort with modernity spills over to a field that helped bring it about.

The last prospect Gauchat considers is that there has been a change in views about science among a subset of the population. He cites previous research that suggests some view the role of science as having changed from one where it enhances productivity and living standards to one where it’s the primary justification for regulatory policies. “Science has always been politicized,” Gauchat writes. “What remains unclear is how political orientations shape public trust in science.”

To figure out which of these trends might apply, he turned to the General Social Survey, which has been gathering information on the US public’s views since 1972. During that time, the survey consistently contained a series of questions about confidence in US institutions, including the scientific community. The answers are divided pretty crudely—”a great deal,” “only some,” and “hardly any”—but they do provide a window into the public’s views on science. (In fact, “hardly any” was the choice of less than 7 percent of the respondents, so Gauchat simply lumped it in with “only some” for his analysis.)
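
The General Social Survey microdata are public, so Gauchat’s basic recoding is easy to approximate at home. The Python/pandas sketch below assumes a CSV extract containing the GSS items YEAR, POLVIEWS (the seven-point liberal-to-conservative scale) and CONSCI (confidence in the scientific community); the file name is hypothetical, and the exact variable codes should be checked against the codebook of whichever release you download.

import pandas as pd

# Assumed extract: one row per respondent with YEAR, POLVIEWS (1 = extremely
# liberal ... 7 = extremely conservative) and CONSCI (1 = a great deal,
# 2 = only some, 3 = hardly any). Verify against the GSS codebook.
gss = pd.read_csv("gss_extract.csv")   # hypothetical file name

# Collapse the sparse "hardly any" answers into "only some", as in the study.
gss["confidence"] = gss["CONSCI"].replace({3: 2})

# Crude ideology buckets from the 7-point scale.
gss["ideology"] = pd.cut(gss["POLVIEWS"], bins=[0, 3, 4, 7],
                         labels=["liberal", "moderate", "conservative"])

# Share of each group answering "a great deal", year by year.
trend = (gss.assign(great_deal=gss["confidence"].eq(1))
            .groupby(["YEAR", "ideology"])["great_deal"]
            .mean()
            .unstack())
print(trend.tail())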

The data showed a few general trends. For much of the study period, moderates actually had the lowest levels of confidence in science, with liberals typically having the highest; the levels of trust for both these groups were fairly steady across the 34 years of data. Conservatives were the odd one out. At the very start of the survey in 1974, they actually had the highest confidence in scientific institutions. By the 1980s, however, they had dropped so that they had significantly less trust than liberals did; in recent years, they’ve become the least trusting of science of any political affiliation.

Examining other demographic trends, Gauchat noted that the only other group to see a significant decline over time is regular churchgoers. Crunching the data, he states, indicates that “The growing force of the religious right in the conservative movement is a chief factor contributing to conservatives’ distrust in science.” This decline in trust occurred even among those who had college or graduate degrees, despite the fact that advanced education typically correlated with enhanced trust in science.

[div class=attrib]Read the entire article after the jump.[end-div]

You Are What You Share

The old maxim goes something like “you are what you eat”. Well, in the early 21st century it has been usurped by “you are what you share online (knowingly or not)”.

[div class=attrib]From the Wall Street Journal:[end-div]

Not so long ago, there was a familiar product called software. It was sold in stores, in shrink-wrapped boxes. When you bought it, all that you gave away was your credit card number or a stack of bills.

Now there are “apps”—stylish, discrete chunks of software that live online or in your smartphone. To “buy” an app, all you have to do is click a button. Sometimes they cost a few dollars, but many apps are free, at least in monetary terms. You often pay in another way. Apps are gateways, and when you buy an app, there is a strong chance that you are supplying its developers with one of the most coveted commodities in today’s economy: personal data.

Some of the most widely used apps on Facebook—the games, quizzes and sharing services that define the social-networking site and give it such appeal—are gathering volumes of personal information.

A Wall Street Journal examination of 100 of the most popular Facebook apps found that some seek the email addresses, current location and sexual preference, among other details, not only of app users but also of their Facebook friends. One Yahoo service powered by Facebook requests access to a person’s religious and political leanings as a condition for using it. The popular Skype service for making online phone calls seeks the Facebook photos and birthdays of its users and their friends.

Yahoo and Skype say that they seek the information to customize their services for users and that they are committed to protecting privacy. “Data that is shared with Yahoo is managed carefully,” a Yahoo spokeswoman said.

The Journal also tested its own app, “WSJ Social,” which seeks data about users’ basic profile information and email and requests the ability to post an update when a user reads an article. A Journal spokeswoman says that the company asks only for information required to make the app work.

This appetite for personal data reflects a fundamental truth about Facebook and, by extension, the Internet economy as a whole: Facebook provides a free service that users pay for, in effect, by providing details about their lives, friendships, interests and activities. Facebook, in turn, uses that trove of information to attract advertisers, app makers and other business opportunities.

Up until a few years ago, such vast and easily accessible repositories of personal information were all but nonexistent. Their advent is driving a profound debate over the definition of privacy in an era when most people now carry information-transmitting devices with them all the time.

Capitalizing on personal data is a lucrative enterprise. Facebook is in the midst of planning for an initial public offering of its stock in May that could value the young company at more than $100 billion on the Nasdaq Stock Market.

Facebook requires apps to ask permission before accessing a user’s personal details. However, a user’s friends aren’t notified if information about them is used by a friend’s app. An examination of the apps’ activities also suggests that Facebook occasionally isn’t enforcing its own rules on data privacy.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Facebook is watching and selling you. Courtesy of Daily Mail.[end-div]