Retire at 30

No tricks. No Ponzi scheme. No lottery win. No grand inheritance. It’s rather straightforward: a handful of simple lifestyle choices made at an early age. We excerpt part of Mister Money Mustache’s fascinating story below.

From the Washington Post:

To hundreds of thousands of devotees, he is Mister Money Mustache. And he is here to tell you that early retirement doesn’t only happen to Powerball winners and those who luck into a big inheritance. He and his wife retired from middle-income jobs before they had their son. Exasperated, as he puts it, by “a barrage of skeptical questions from high-income peers who were still in debt years after we were free from work,” he created a no-nonsense personal finance blog and started spilling his secrets. I was eager to know more. He is Pete (just Pete, for the sake of his family’s privacy). He lives in Longmont, Colo. He is ridiculously happy. And he’s sure his life could be yours. Our conversation was edited for length and clarity.


So you retired at 30. How did that happen?

I was probably born with a desire for efficiency — the desire to get the most fun out of any possible situation, with no resources being wasted. This applied to money too, and by age 10, I was ironing my 20 dollar bills and keeping them in a photo album, just because they seemed like such powerful and intriguing little rectangles.

But I didn’t start saving and investing particularly early; I just maintained this desire not to waste anything. So I got through my engineering degree debt-free — by working a lot and not owning a car — and worked pretty hard early on to move up a bit in my career, relocating from Canada to the United States, attracted by the higher salaries and lower cost of living.

Then my future wife and I moved in together and DIY-renovated a junky house into a nice one, kept old cars while our friends drove fancy ones, biked to work instead of driving, cooked at home and went out to restaurants less, and it all just added up to saving more than half of what we earned. We invested this surplus as we went, never inflating our already-luxurious lives, and eventually the passive income from stock dividends and a rental house was more than enough to pay for our needs (about $25,000 per year for our family of three, with a paid-off house and no other debt).
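
The arithmetic behind this approach is worth spelling out. Below is a minimal back-of-the-envelope sketch in Python, assuming a 5 percent real annual return while saving and a 4 percent withdrawal rate in retirement; those rates are our illustrative figures, not numbers from the interview.

```python
# Back-of-the-envelope sketch of the "save a big share of your income" math.
# Assumed numbers (not from the interview): 5% real annual return while
# saving, and a 4% withdrawal rate once retired.

def years_to_financial_independence(savings_rate, real_return=0.05,
                                     withdrawal_rate=0.04):
    """Years of saving until investment income covers annual spending."""
    spending = 1.0 - savings_rate          # fraction of income spent each year
    target = spending / withdrawal_rate    # portfolio needed, in years of income
    portfolio, years = 0.0, 0
    while portfolio < target:
        portfolio = portfolio * (1 + real_return) + savings_rate
        years += 1
    return years

for rate in (0.10, 0.25, 0.50, 0.65):
    print(f"savings rate {rate:.0%}: ~{years_to_financial_independence(rate)} years of work")
```

Push the savings rate well past half, as described above, and the required working career shrinks to roughly a decade; at a typical 10 percent savings rate it stretches to about half a century.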

What sort of retirement income do you have?

Our bread-and-butter living expenses are paid for by a single rental house we own, which generates about $25,000 per year after expenses. We also have stock index funds and 401(k) plans, which could boost that by about 50 percent without depleting principal if we ever needed it, but, so far, we can’t seem to spend more than $25,000 no matter how much we let loose. So the dividends just keep reinvesting.

You describe the typical middle-class life as an “exploding volcano of wastefulness.” Seems like lots of personal finance folks obsess about lattes. Are you just talking about the lattes here?

The latte is just the foamy figurehead of an entire spectrum of sloppy “I deserve it” luxury spending that consumes most of our gross domestic product these days. Among my favorite targets: commuting to an office job in an F-150 pickup truck, anything involving a drive-through, paying $100 per month for the privilege of wasting four hours a night watching cable TV, and the whole yoga industry. There are better, and free, ways to meet these needs, but everyone always chooses the expensive ones and then complains that life is hard these days.

Read the entire article following the jump or visit Mr. Money Mustache’s blog.

Image courtesy of Google Search.


General Relativity Lives On, For Now

Since Einstein first published his elegant theory of General Relativity almost 100 years ago, it has proved to be one of the most powerful and enduring cornerstones of modern science. Yet theorists and researchers the world over know that it cannot possibly remain the sole answer to our cosmological questions. It answers questions about the very, very large — galaxies, stars and planets and the gravitational relationships between them. But it fails to tackle the science of the very, very small — atoms, their constituents and the forces that unite and repel them, which is addressed by the elegant and complex, but mutually incompatible, Quantum Theory.

So, scientists continue to push their measurements to ever greater levels of precision across both greater and smaller distances with one aim in mind — to test the limits of each theory and to see which one breaks down first.

A recent, highly precise and very long-distance experiment confirmed that Einstein’s theory still rules the heavens.

From ars technica:

The general theory of relativity is a remarkably successful model for gravity. However, many of the best tests for it don’t push its limits: they measure phenomena where gravity is relatively weak. Some alternative theories predict different behavior in areas subject to very strong gravity, like near the surface of a pulsar—the compact, rapidly rotating remnant of a massive star (also called a neutron star). For that reason, astronomers are very interested in finding a pulsar paired with another high-mass object. One such system has now provided an especially sensitive test of strong gravity.

The system is a binary consisting of a high-mass pulsar and a bright white dwarf locked in mutual orbit with a period of about 2.5 hours. Using optical and radio observations, John Antoniadis and colleagues measured its properties as it spirals toward merger by emitting gravitational radiation. After monitoring the system for a number of orbits, the researchers determined its behavior is in complete agreement with general relativity to a high level of precision.

The binary system was first detected in a survey of pulsars by the Green Bank Telescope (GBT). The pulsar in the system, memorably labeled PSR J0348+0432, emits radio pulses about once every 39 milliseconds (0.039 seconds). Fluctuations in the pulsar’s output indicated that it is in a binary system, though its companion lacked radio emissions. However, the GBT’s measurements were precise enough to pinpoint its location in the sky, which enabled the researchers to find the system in the archives of the Sloan Digital Sky Survey (SDSS). They determined the companion object was a particularly bright white dwarf, the remnant of the core of a star similar to our Sun. It and the pulsar are locked in a mutual orbit about 2.46 hours in length.

Following up with the Very Large Telescope (VLT) in Chile, the astronomers built up enough data to model the system. Pulsars are extremely dense, packing a star’s worth of mass into a sphere roughly 10 kilometers in radius—far too small to see directly. White dwarfs are less extreme, but they still involve stellar masses in a volume roughly equivalent to Earth’s. That means the objects in the PSR J0348+0432 system can orbit much closer to each other than stars could—as little as 0.5 percent of the average Earth-Sun separation, or 1.2 times the Sun’s radius.

The pulsar itself was interesting because of its relatively high mass: about 2.0 times that of the Sun (most observed pulsars are about 1.4 solar masses). Unlike more mundane objects, a pulsar’s size doesn’t grow with mass; according to some models, a higher-mass pulsar may actually be smaller than one with lower mass. As a result, the gravity at the surface of PSR J0348+0432 is far more intense than at a lower-mass counterpart, providing a laboratory for testing general relativity (GR). The gravitational intensity near PSR J0348+0432 is about twice that of other pulsars in binary systems, creating a more extreme environment than previously measured.

According to GR, a binary emits gravitational waves that carry energy away from the system, causing the size of the orbit to shrink. For most binaries, the effect is small, but for compact systems like the one containing PSR J0348+0432, it is measurable. The first such system was found by Russell Hulse and Joseph Taylor; its discovery won the two astronomers the Nobel Prize.
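
For the technically curious, the prediction itself is compact. Here is a small Python sketch of the standard general-relativity (quadrupole) formula for the orbital decay of a circular binary. The 2.0-solar-mass pulsar and the roughly 2.5-hour period come from the article; the companion white dwarf’s mass of about 0.17 solar masses is an assumed illustrative value, not a figure quoted above.

```python
import math

# GR quadrupole (Peters) formula for the orbital period decay of a circular
# binary. Pulsar mass and orbital period are from the article; the companion
# mass is an assumed illustrative value.

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI units

def orbital_period_decay(m1, m2, period_s):
    """dP/dt (seconds per second) predicted by GR for a circular orbit."""
    return -(192 * math.pi / 5) * (2 * math.pi * G / period_s) ** (5 / 3) \
           * m1 * m2 / (m1 + m2) ** (1 / 3) / c ** 5

pdot = orbital_period_decay(2.0 * M_SUN, 0.17 * M_SUN, 2.46 * 3600)
print(f"predicted orbital decay: {pdot * 3.156e7 * 1e6:.1f} microseconds per year")
```

With these assumptions the orbit is predicted to shrink by only a few microseconds of period per year: tiny, but detectable once it accumulates over years of precise pulse timing.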

The shrinking of the orbit results in a decrease in the orbital period as the two objects revolve around each other more quickly. In this case, the researchers measured the effect by studying the change in the spectrum of light emitted by the white dwarf, as well as fluctuations in the emissions from the pulsar. (This study also helped demonstrate the two objects were in mutual orbit, rather than being coincidentally in the same part of the sky.)

To test agreement with GR, physicists established a set of observable quantities. These include the rate of orbital decay (which is a reflection of the energy lost to gravitational radiation) and something called the Shapiro delay. The latter phenomenon occurs because light emitted from the pulsar must travel through the intense gravitational field of its companion when exiting the system. The size of the effect depends on the orientation of the orbit relative to our line of sight, and alternative models predict different observable results.
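
The Shapiro delay also has a compact standard form in pulsar timing: pulses arrive late by an amount set by the companion’s mass and by how close the line of sight passes to it as the pulsar swings behind. A hedged sketch, with an assumed companion mass and orbital inclination:

```python
import math

# Standard pulsar-timing form of the Shapiro delay for a circular orbit:
# dt = -(2 G m_c / c^3) * ln(1 - sin(i) * sin(phi)), where m_c is the
# companion mass, i the orbital inclination and phi the orbital phase.
# The 0.17-solar-mass companion and 40-degree inclination are assumed
# illustrative values, not figures quoted above.

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def shapiro_delay(companion_mass, inclination_rad, phase_rad):
    """Extra light-travel time (seconds) at a given orbital phase."""
    s = math.sin(inclination_rad)
    return -2 * G * companion_mass / c ** 3 * math.log(1 - s * math.sin(phase_rad))

m_c = 0.17 * M_SUN
for deg in (0, 45, 90):   # a phase of 90 degrees puts the pulsar behind its companion
    dt = shapiro_delay(m_c, math.radians(40), math.radians(deg))
    print(f"orbital phase {deg:3d} deg: delay = {dt * 1e6:.2f} microseconds")
```

The delay peaks when the pulsar sits on the far side of its companion and vanishes half an orbit later, which is exactly the orientation dependence the timing analysis exploits.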

In the case of the PSR J0348+0432 system, the change in orbital period and the Shapiro delay agreed with the predictions of GR, placing strong constraints on alternative theories. The researchers were also able to rule out energy loss from other, non-gravitational sources (rotation or electromagnetic phenomena). If the system continues as models predict, the white dwarf and pulsar will merge in about 400 million years—we don’t know what the product of that merger will be, so astronomers are undoubtedly marking their calendars now.

The results are of potential use for the Laser Interferometer Gravitational-wave Observatory (LIGO) and other ground-based gravitational-wave detectors. These instruments are sensitive to the final death spiral of binaries like the one containing PSR J0348+0432. The current detection and observation strategies involve “templates,” or theoretical models of the gravitational wave signal from binaries. All information about the behavior of close pulsar binaries helps gravitational-wave astronomers refine those templates, which should improve the chances of detection.

Of course, no theory can be “proven right” by experiment or observation—data provides evidence in support of or against the predictions of a particular model. However, the PSR J0348+0432 binary results placed stringent constraints on any alternative model to GR in the strong-gravity regime. (Certain other alternative models focus on altering gravity on large scales to explain dark energy and the accelerating expansion of the Universe.) Based on this new data, only theories that agree with GR to high precision are still standing—leaving general relativity the continuing champion theory of gravity.

Read the entire article after the jump.

Image: Artist’s impression of the PSR J0348+0432 system. The compact pulsar (with beams of radio emission) produces a strong distortion of spacetime (illustrated by the green mesh). Courtesy of Science Mag.


Google’s AI

The collective IQ of Google, the company, inched up a few notches in January 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and his many colleagues pays off, the company’s intelligence may surge significantly. This time, though, it will be thanks to its work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
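
To make that description concrete, here is a minimal Python sketch of one such simulated neuron: random connection weights, a digitized input, and a response squashed into the 0-to-1 range. The four “pixel” values are invented for illustration.

```python
import math
import random

# One simulated neuron: random connection weights plus a sigmoid that squashes
# its response into the 0..1 range, as described above.

random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

class Neuron:
    def __init__(self, n_inputs):
        # weights start out random and are later refined by training
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = random.uniform(-1, 1)

    def respond(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return sigmoid(total)   # output between 0 and 1

pixels = [0.0, 0.8, 0.9, 0.1]   # a made-up four-pixel "edge-like" pattern
print(Neuron(len(pixels)).respond(pixels))
```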

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
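
The training loop itself can be sketched just as briefly: show the network labelled examples and nudge the weights whenever its guess misses. The single-neuron “network”, the made-up three-feature examples and the gradient-style update rule below are illustrative simplifications, not the production-scale methods the article describes.

```python
import math
import random

# Toy training loop: repeatedly present labelled examples and adjust the
# weights in proportion to how wrong the network's guess was.

random.seed(1)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# invented three-feature examples: label 1 = "dog-like", 0 = "not dog-like"
examples = [([0.9, 0.8, 0.1], 1), ([0.8, 0.9, 0.2], 1),
            ([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.8], 0)]

weights = [random.uniform(-1, 1) for _ in range(3)]
bias, rate = 0.0, 0.5

for _ in range(1000):
    for features, label in examples:
        guess = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = label - guess                      # how far off was the guess?
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        bias += rate * error                       # nudge toward the right answer

for features, label in examples:
    guess = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
    print(f"label {label}: network now says {guess:.2f}")
```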

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
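
That layer-by-layer recipe can also be sketched in a few dozen lines. In the sketch below each layer is trained as a tiny autoencoder that learns to reconstruct its own input, and its outputs then become the training data for the next layer; the layer sizes, learning rate and autoencoder objective are our illustrative choices rather than the details of Hinton’s actual 2006 method.

```python
import numpy as np

# Greedy layer-wise training in miniature: train one layer of features at a
# time (here, as a small autoencoder), then feed its outputs to the next layer.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def train_autoencoder_layer(data, n_hidden, epochs=200, lr=0.5):
    """Learn one layer of features by reconstructing the layer's own input."""
    n_in = data.shape[1]
    w_enc = rng.normal(0, 0.1, (n_in, n_hidden))
    w_dec = rng.normal(0, 0.1, (n_hidden, n_in))
    for _ in range(epochs):
        hidden = sigmoid(data @ w_enc)           # this layer's features
        recon = sigmoid(hidden @ w_dec)          # attempt to rebuild the input
        err = recon - data
        delta_out = err * recon * (1 - recon)
        delta_hidden = (delta_out @ w_dec.T) * hidden * (1 - hidden)
        w_dec -= lr * (hidden.T @ delta_out) / len(data)
        w_enc -= lr * (data.T @ delta_hidden) / len(data)
    return w_enc

data = rng.random((100, 8))                      # toy 8-"pixel" patterns
activations = data
for size in (6, 4, 2):                           # successively more abstract layers
    w = train_autoencoder_layer(activations, size)
    activations = sigmoid(activations @ w)       # features become the next layer's input
print("high-level feature vector for the first example:", activations[0])
```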

Read the entire fascinating article following the jump.

Image courtesy of Wired.


Corporate-Speak 101

We believe that corporate-speak is a dangerous starting point that may eventually lead us to Orwellian doublethink. After all, what could possibly be the purpose of using the words “going forward” in place of “in the future”, if not to convince employees that the past never happened? Some of our favorite management buzzwords and euphemisms are below.

From the Guardian:

Among the most spirit-sapping indignities of office life is the relentless battering of workers’ ears by the strangled vocabulary of management-speak. It might even seem to some innocent souls as though all you need to do to acquire a high-level job is to learn its stultifying jargon. Bureaucratese is a maddeningly viral kind of Unspeak engineered to deflect blame, complicate simple ideas, obscure problems, and perpetuate power relations. Here are some of its most dismaying manifestations.

1 Going forward

Top of many people’s hate list is this now-venerable way of saying “from now on” or “in future”. It has the rhetorical virtue of wiping clean the slate of the past (perhaps because “mistakes were made”), and implying a kind of thrustingly strategic progress, even though none is likely to be made as long as the working day is made up of funereal meetings where people say things like “going forward”.

2 Drill down

Far be it from me to suggest that managers prefer metaphors that evoke huge pieces of phallic machinery, but why else say “drill down” when you just mean “look at in detail”?

3 Action

Some people despise verbings (where a noun begins to be used as a verb) on principle, though who knows what they say instead of “texting”. In his Dictionary of Weasel Words, the doyen of management-jargon mockery Don Watson defines “to action” simply as “do”. This is not quite right, but “action” can probably always be replaced with a more specific verb, such as “reply” or “fulfil”, even if they sound less excitingly action-y. The less said of the mouth-full-of-pebbles construction “actionables”, the better.

4 End of play

The curious strain of kiddy-talk in bureaucratese perhaps stems from a hope that infantilised workers are more docile. A manager who tells you to do something “by end of play” – in other words, today – is trying to hypnotise you into thinking you are having fun. This is not a game of cricket.

5 Deliver

What you do when you’ve actioned something. “Delivering” (eg “results”) borrows the dynamic, space-traversing connotations of a postal service — perhaps a post-apocalyptic one such as that started by Kevin Costner in The Postman. Inevitably, as with “actionables”, we also have “deliverables” (“key deliverables,” Don Watson notes thoughtfully, “are the most important ones”), though by this point more sensitive subordinates might be wishing instead for deliverance.

6 Issues

Calling something a “problem” is bound to scare the horses and focus responsibility on the bosses, so let’s deploy the counselling-speak of “issues”. The critic (and managing editor of the TLS) Robert Potts translates “there are some issues around X” as “there is a problem so big that we are scared to even talk about it directly”. Though it sounds therapeutically nonjudgmental, “issues” can also be a subtly vicious way to imply personal deficiency. If you have “issues” with a certain proposal, maybe you just need to go away and work on your issues.

Read the entire article following the jump.


The Advantages of Shyness

Behavioral scientists have confirmed what shy people of the world have known for quite some time — that timidity and introversion can be beneficial traits. Yes, shyness is not a disorder!

Several studies of humans and animals show that shyness and assertiveness are both beneficial, depending on the situational context. Researchers have shown that evolution favors both types of personality and, in fact, often rewards adaptability over pathological extremes at either end of the behavioral spectrum.

From the New Scientist:

“Don’t be shy!” It’s an oft-heard phrase in modern western cultures where go-getters and extroverts appear to have an edge and where raising confident, assertive children sits high on the priority list for many parents. Such attitudes are understandable. Timidity really does hold individuals back. “Shy people start dating later, have sex later, get married later, have children later and get promoted later,” says Bernardo Carducci, director of the Shyness Research Institute at Indiana University Southeast in New Albany. In extreme cases shyness can even be pathological, resulting in anxiety attacks and social phobia.

In recent years it has emerged that we are not the only creatures to experience shyness. In fact, it is one of the most obvious character traits in the animal world, found in a wide variety of species from sea anemones and spiders to birds and sheep. But it is also becoming clear that in the natural world fortune doesn’t always favour the bold. Sometimes the shy, cautious individuals are luckier in love and lifespan. The inescapable conclusion is that there is no one “best” personality – each has benefits in different situations – so evolution favours both.

Should we take a lesson from these findings and re-evaluate what it means to be a shy human? Does shyness have survival value for us too? Some researchers think so and are starting to find that people who are shy, sensitive and even anxious have some surprising advantages over more go-getting types. Perhaps it is time to ditch our negative attitude to shyness and accept that it is as valuable as extroversion. Carducci certainly thinks so. “Think about what it would be like if everybody was very bold,” he says. “What would your daily life be like if everybody you encountered was like Lady Gaga?”

One of the first steps in the rehabilitation of shyness came in the 1990s, from work on salamanders. An interest in optimality – the idea that animals are as efficient as possible in their quest for food, mates and resources – led Andrew Sih at the University of California, Davis, to study the behaviour of sunfish and their prey, larval salamanders. In his experiments, he couldn’t help noticing differences between individual salamanders. Some were bolder and more active than others. They ate more and grew faster than their shyer counterparts, but there was a downside. When sunfish were around, the bold salamanders were just “blundering out there and not actually doing the sort of smart anti-predator behaviour that simple optimality theory predicted they would do”, says Sih. As a result, they were more likely to be gobbled up than their shy counterparts.

Until then, the idea that animals have personalities – consistent differences in behaviour between individuals – was considered controversial. Sih’s research forced a rethink. It also spurred further studies, to the extent that today the so-called “shy-bold continuum” has been identified in more than 100 species. In each of these, individuals range from highly “reactive” to highly “proactive”: reactive types being shy, timid, risk-averse and slow to explore novel environments, whereas proactive types are bold, aggressive, exploratory and risk-prone.

Why would these two personality types exist in nature? Sih’s study holds the key. Bold salamander larvae may risk being eaten, but their fast growth is a distinct advantage in the small streams they normally inhabit, which may dry up before more cautious individuals can reach maturity. In other words, each personality has advantages and disadvantages depending on the circumstances. Since natural environments are complex and constantly changing, natural selection may favour first one and then the other or even both simultaneously.

The idea is illustrated even more convincingly by studies of a small European bird, the great tit. The research, led by John Quinn at University College Cork in Ireland, involved capturing wild birds and putting each separately into a novel environment to assess how proactive or reactive it was. Some hunkered down in the fake tree provided and stayed there for the entire 8-minute trial; others immediately began exploring every nook and cranny of the experimental room. The birds were then released back into the wild, to carry on with the business of surviving and breeding. “If you catch those same individuals a year later, they tend to do more or less the same thing,” says Quinn. In other words, exploration is a consistent personality trait. What’s more, by continuously monitoring the birds, a team led by Niels Dingemanse at the Max Planck Institute for Ornithology in Seewiesen, Germany, observed that in certain years the environment favours bold individuals – more survive and they produce more chicks than other birds – whereas in other years the shy types do best.

A great tit’s propensity to explore is usually similar to that of its parents and a genetic component of risk-taking behaviour has been found in this and other species. Even so, nurture seems to play a part in forming animal personalities too (see “Nurturing Temperament”). Quinn’s team has also identified correlations between exploring and key survival behaviours: the more a bird likes to explore, the more willing it is to disperse, take risks and act aggressively. In contrast, less exploratory individuals were better at solving problems to find food.

Read the entire article following the jump.

Image courtesy of Psychology Today.


Totalitarianism in the Age of the Internet

Google chairman Eric Schmidt is in a very elite group. Not only does he help run a major and very profitable U.S. corporation, and is thus, by extension, a “googillionaire”, he has also been to North Korea.

Below we excerpt Schmidt’s recent essay, co-authored with Jared Cohen, about freedom in both the real and digital worlds.

From the Wall Street Journal:

How do you explain to people that they are a YouTube sensation, when they have never heard of YouTube or the Internet? That’s a question we faced during our January visit to North Korea, when we attempted to engage with the Pyongyang traffic police. You may have seen videos on the Web of the capital city’s “traffic cops,” whose ballerina-like street rituals, featured in government propaganda videos, have made them famous online. The men and women themselves, however—like most North Koreans—have never seen a Web page, used a desktop computer, or held a tablet or smartphone. They have never even heard of Google (or Bing, for that matter).

Even the idea of the Internet has not yet permeated the public’s consciousness in North Korea. When foreigners visit, the government stages Internet browsing sessions by having “students” look at pre-downloaded and preapproved content, spending hours (as they did when we were there) scrolling up and down their screens in totalitarian unison. We ended up trying to describe the Internet to North Koreans we met in terms of its values: free expression, freedom of assembly, critical thinking, meritocracy. These are uncomfortable ideas in a society where the “Respected Leader” is supposedly the source of all information and where the penalty for defying him is the persecution of you and your family for three generations.

North Korea is at the beginning of a cat-and-mouse game that’s playing out all around the world between repressive regimes and their people. In most of the world, the spread of connectivity has transformed people’s expectations of their governments. North Korea is one of the last holdouts. Until only a few years ago, the price for being caught there with an unauthorized cellphone was the death penalty. Cellphones are now more common in North Korea since the government decided to allow one million citizens to have them; and in parts of the country near the border, the Internet is sometimes within reach as citizens can sometimes catch a signal from China. None of this will transform the country overnight, but one thing is certain: Though it is possible to curb and monitor technology, once it is available, even the most repressive regimes are unable to put it back in the box.

What does this mean for governments and would-be revolutionaries? While technology has great potential to bring about change, there is a dark side to the digital revolution that is too often ignored. There is a turbulent transition ahead for autocratic regimes as more of their citizens come online, but technology doesn’t just help the good guys pushing for democratic reform—it can also provide powerful new tools for dictators to suppress dissent.

Fifty-seven percent of the world’s population still lives under some sort of autocratic regime. In the span of a decade, the world’s autocracies will go from having a minority of their citizens online to a majority. From Tehran to Beijing, autocrats are building the technology and training the personnel to suppress democratic dissent, often with the help of Western companies.

Of course, this is no easy task—and it isn’t cheap. The world’s autocrats will have to spend a great deal of money to build systems capable of monitoring and containing dissident energy. They will need cell towers and servers, large data centers, specialized software, legions of trained personnel and reliable supplies of basic resources like electricity and Internet connectivity. Once such an infrastructure is in place, repressive regimes then will need supercomputers to manage the glut of information.

Despite the expense, everything a regime would need to build an incredibly intimidating digital police state—including software that facilitates data mining and real-time monitoring of citizens—is commercially available right now. What’s more, once one regime builds its surveillance state, it will share what it has learned with others. We know that autocratic governments share information, governance strategies and military hardware, and it’s only logical that the configuration that one state designs (if it works) will proliferate among its allies and assorted others. Companies that sell data-mining software, surveillance cameras and other products will flaunt their work with one government to attract new business. It’s the digital analog to arms sales, and like arms sales, it will not be cheap. Autocracies rich in natural resources—oil, gas, minerals—will be able to afford it. Poorer dictatorships might be unable to sustain the state of the art and find themselves reliant on ideologically sympathetic patrons.

And don’t think that the data being collected by autocracies is limited to Facebook posts or Twitter comments. The most important data they will collect in the future is biometric information, which can be used to identify individuals through their unique physical and biological attributes. Fingerprints, photographs and DNA testing are all familiar biometric data types today. Indeed, future visitors to repressive countries might be surprised to find that airport security requires not just a customs form and passport check, but also a voice scan. In the future, software for voice and facial recognition will surpass all the current biometric tests in terms of accuracy and ease of use.

Today’s facial-recognition systems use a camera to zoom in on an individual’s eyes, mouth and nose, and extract a “feature vector,” a set of numbers that describes key aspects of the image, such as the precise distance between the eyes. (Remember, in the end, digital images are just numbers.) Those numbers can be fed back into a large database of faces in search of a match. The accuracy of this software is limited today (by, among other things, pictures shot in profile), but the progress in this field is remarkable. A team at Carnegie Mellon demonstrated in a 2011 study that the combination of “off-the-shelf” facial recognition software and publicly available online data (such as social-network profiles) can match a large number of faces very quickly. With cloud computing, it takes just seconds to compare millions of faces. The accuracy improves with people who have many pictures of themselves available online—which, in the age of Facebook, is practically everyone.
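
The matching step boils down to comparing lists of numbers. Here is a bare-bones Python sketch of a nearest-neighbour lookup against a small database; the three-number feature vectors and the names are invented for illustration, and real systems use far longer vectors and cleverer indexes so that millions of faces can be compared in seconds.

```python
import math

# Nearest-neighbour matching of a face's "feature vector" against a database.
# The vectors (e.g. eye spacing, eye-to-mouth distance, nose width) and the
# names are invented for illustration.

database = {
    "person_a": [62.0, 41.5, 18.2],
    "person_b": [58.3, 44.0, 21.7],
    "person_c": [65.1, 39.8, 17.5],
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_match(query, db):
    """Return the database entry whose feature vector is closest to the query."""
    return min(db.items(), key=lambda item: euclidean(query, item[1]))

query_vector = [61.5, 41.0, 18.0]   # features extracted from a new photo
name, features = best_match(query_vector, database)
print(f"closest match: {name} (distance {euclidean(query_vector, features):.2f})")
```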

Dictators, of course, are not the only beneficiaries from advances in technology. In recent years, we have seen how large numbers of young people in countries such as Egypt and Tunisia, armed with little more than mobile phones, can fuel revolutions. Their connectivity has helped them to challenge decades of authority and control, hastening a process that, historically, has often taken decades. Still, given the range of possible outcomes in these situations—brutal crackdown, regime change, civil war, transition to democracy—it is also clear that technology is not the whole story.

Observers and participants alike have described the recent Arab Spring as “leaderless”—but this obviously has a downside to match its upside. In the day-to-day process of demonstrating, it was possible to retain a decentralized command structure (safer too, since the regimes could not kill the movement simply by capturing the leaders). But, over time, some sort of centralized authority must emerge if a democratic movement is to have any direction. Popular uprisings can overthrow dictators, but they’re only successful afterward if opposition forces have a plan and can execute it. Building a Facebook page does not constitute a plan.

History suggests that opposition movements need time to develop. Consider the African National Congress in South Africa. During its decades of exile from the apartheid state, the organization went through multiple iterations, and the men who would go on to become South African presidents (Nelson Mandela, Thabo Mbeki and Jacob Zuma) all had time to build their reputations, credentials and networks while honing their operational skills. Likewise with Lech Walesa and his Solidarity trade union in Eastern Europe. A decade passed before Solidarity leaders could contest seats in the Polish parliament, and their victory paved the way for the fall of communism.

Read the entire essay after the jump.

Image: North Korean students work in a computer lab. Courtesy of AP Photo/David Guttenfelder / Washington Post.


Your Genes. But Are They Your Intellectual Property?

The genetic code buried deep within your cells, described in a unique sequence encoded in your DNA, defines who you are at the most fundamental level. The 20,000 or so genes in your genome establish how you are constructed and how you function (and malfunction). These genes are common to many, but their expression belongs to only you.

Yet companies are out to patent strings of this genetic code. While many would argue that patent ownership is a sound business strategy in most industries, it is morally indefensible in this case. Rafts of bio-ethicists have argued the pros and cons of patenting animal and human genetic information for decades, and as we speak a case has made it to the U.S. Supreme Court. Can a company claim ownership of your genetic code? While the claims of business over an individual’s genetic code are dubious at best, it is clear that public consensus, a coherent ethical framework and, consequently, a sound legal doctrine lag far behind the actual science.

From the Guardian:

Tracey Barraclough made a grim discovery in 1998. She found she possessed a gene that predisposed her to cancer. “I was told I had up to an 85% chance of developing breast cancer and an up to 60% chance of developing ovarian cancer,” she recalls. The piece of DNA responsible for her grim predisposition is known as the BRCA1 gene.

Tracey was devastated, but not surprised. She had sought the gene test because her mother, grandmother and great-grandmother had all died of ovarian cancer in their 50s. Four months later Tracey had her womb and ovaries removed to reduce her cancer risk. A year later she had a double mastectomy.

“Deciding to embark on that was the loneliest and most agonising journey of my life,” Tracey says. “My son, Josh, was five at the time and I wanted to live for him. I didn’t want him to grow up without a mum.” Thirteen years later, Tracey describes herself as “100% happy” with her actions. “It was the right thing for me. I feel that losing my mother, grandmother and great-grandmother hasn’t been in vain.”

The BRCA1 gene that Tracey inherited is expressed in breast tissue where it helps repair damaged DNA. In its mutated form, found in a small percentage of women, damaged DNA cannot be repaired and carriers become highly susceptible to cancers of the breast and ovaries.

The discovery of BRCA1 in 1994, and a second version, BRCA2, discovered a year later, remains one of the greatest triumphs of modern genetics. It allows doctors to pinpoint women at high risk of breast or ovarian cancer in later life. Stars such as Sharon Osbourne and Christina Applegate have been among those who have had BRCA1 diagnoses and subsequent mastectomies. BRCA technology has saved many lives over the years. However, it has also triggered a major division in the medical community, a split that last week ended up before the nine justices of the US supreme court. At issue is the simple but fundamental question: should the law allow companies to patent human genes? It is a battle that has profound implications for genetic research and has embroiled scientists on both sides of the Atlantic in a major argument about the nature of scientific inquiry.

On one side, US biotechnology giant Myriad Genetics is demanding that the US supreme court back the patents it has taken out on the BRCA genes. The company believes it should be the only producer of tests to detect mutations in these genes, a business it has carried out in the United States for more than a decade.

On the other side, a group of activists, represented by lawyers from the American Civil Liberties Union, argues that it is fundamentally absurd and immoral to claim ownership of humanity’s shared genetic heritage and demands that the court ban the patents. How can anyone think that any individual or company should enjoy exclusive use of naturally occurring DNA sequences pertinent to human diseases, they ask?

It is a point stressed by Gilda Witte, head of Ovarian Cancer Action in the UK. “The idea that you can hold a patent to a piece of human DNA is just wrong. More and more genes that predispose individuals to cancers and other conditions are being discovered by scientists all the time. If companies like Myriad are allowed to hold more and more patents like the ones they claim for BRCA1 and BRCA2, the cost of diagnosing disease is going to soar.”

For its part, Myriad denies it has tried to patent human DNA on its own. Instead, the company argues that its patents cover the techniques it has developed to isolate the BRCA1 and BRCA2 genes and the chemical methods it has developed to make it possible to analyse the genes in the laboratory. Mark Capone, the president of Myriad, says his company has invested $500m in developing its BRCA tests.

“It is certainly true that people will not invest in medicine unless there is some return on that investment,” said Justin Hitchcock, a UK expert on patent law and medicine. “That is why Myriad has sought these patents.”

In Britain, women such as Tracey Barraclough have been given BRCA tests for free on the NHS. In the US, where Myriad holds patents, those seeking such tests have to pay the company $4,000. It might therefore seem to be a peculiarly American debate based on the nation’s insistence on having a completely privatised health service. Professor Alan Ashworth, director of the Institute for Cancer Research, disagreed, however.

“I think that, if Myriad win this case, the impact will be retrograde for the whole of genetic research across the globe,” he said. “The idea that you can take a piece of DNA and claim that only you are allowed to test for its existence is wrong. It stinks, morally and intellectually. People are becoming easier about using and exchanging genetic information at present. Any move to back Myriad would take us back decades.”

Issuing patents is a complicated business, of course, a point demonstrated by the story of monoclonal antibodies. Developed in British university labs in the 1970s, these artificial versions of natural antibodies won a Nobel prize in 1984 for their inventors, a team led by César Milstein at Cambridge University. Monoclonal antibodies target disease sites in the human body and can be fitted with toxins to be sent like tiny Exocet missiles to carry their lethal payloads straight to a tumour.

When Milstein and his team finished their research, they decided to publish their results straight away. Once in the public domain, the work could no longer claim patent protection, a development that enraged the newly elected prime minister, Margaret Thatcher, a former patent lawyer. She, and many others, viewed the monoclonal story as a disaster that could have cost Britain billions.

But over the years this view has become less certain. “If you look at medicines based on monoclonal antibodies today, it is clear these are some of the most valuable on the market,” said Hitchcock. “But that value is based on the layers of inventiveness that have since been added to the basic concept of the monoclonal antibody and has nothing to do with the actual technique itself.”

Read the entire article following the jump.

Image: A museum visitor views a digital representation of the human genome in New York City in 2001. Courtesy of Mario Tama, Getty Images / National Geographic.


One-Way Ticket to Mars

You could be forgiven for thinking this might be a lonesome bus trip to Mars, Pennsylvania, or to the North American headquarters of Mars, purveyor of many things chocolaty, including M&Ms, Mars Bars and Snickers, in New Jersey. This one-way ticket is further afield, to the Red Planet, and comes from a company known as Mars One — estimated time of departure: 2023.

From the Guardian:

A few months before he died, Carl Sagan recorded a message of hope to would-be Mars explorers, telling them: “Whatever the reason you’re on Mars is, I’m glad you’re there. And I wish I was with you.”

On Monday, 17 years after the pioneering astronomer set out his hopeful vision of the future in 1996, a company from the Netherlands is proposing to turn Sagan’s dreams of reaching Mars into reality. The company, Mars One, plans to send four astronauts on a trip to the Red Planet to set up a human colony in 2023. But there are a couple of serious snags.

Firstly, when on Mars their bodies will have to adapt to surface gravity that is 38% of that on Earth. It is thought that this would cause such a total physiological change in their bone density, muscle strength and circulation that voyagers would no longer be able to survive in Earth’s conditions. Secondly, and directly related to the first, they will have to say goodbye to all their family and friends, as the deal doesn’t include a return ticket.

The Mars One website states that a return “cannot be anticipated nor expected”. To return, they would need a fully assembled and fuelled rocket capable of escaping the gravitational field of Mars, on-board life support systems capable of up to a seven-month voyage and the capacity either to dock with a space station orbiting Earth or perform a safe re-entry and landing.

“Not one of these is a small endeavour,” the site notes, requiring “substantial technical capacity, weight and cost”.

Nevertheless, the project has already had 10,000 applicants, according to the company’s medical director, Norbert Kraft. When the official search is launched on Monday at the Hotel Pennsylvania in New York, they expect tens of thousands more hopefuls to put their names forward.

Kraft told the Guardian that the applicants so far ranged in age from 18 to at least 62 and, though they include women, they tended to be men.

The reasons they gave for wanting to go were varied, he said. One of three examples Kraft forwarded by email to the Guardian cited Sagan.

An American woman called Cynthia, who gave her age as 32, told the company that it was a “childhood imagining” of hers to go to Mars. She described a trip her mother had taken her on in the early 1990s to a lecture at the University of Wisconsin.

In a communication to Mars One, she said the lecturer had been Sagan and she had asked him if he thought humans would land on Mars in her lifetime. Cynthia said: “He in turn asked me if I wanted to be trapped in a ‘tin can spacecraft’ for the two years it would take to get there. I told him yes, he smiled, and told me in all seriousness, that yes, he absolutely believed that humans would reach Mars in my lifetime.”

She told the project: “When I first heard about the Mars One project I thought, this is my chance – that childhood dream could become a reality. I could be one of the pioneers, building the first settlement on Mars and teaching people back home that there are still uncharted territories that humans can reach for.”

The prime attributes Mars One is looking for in astronaut-settlers are resilience, adaptability, curiosity, the ability to trust and resourcefulness, according to Kraft. They must also be over 18.

Professor Gerard ‘t Hooft, winner of the 1999 Nobel prize in physics and lecturer in theoretical physics at the University of Utrecht, Holland, is an ambassador for the project. ‘t Hooft admits there are unknown health risks. The radiation is “of quite a different nature” from anything that has been tested on Earth, he told the BBC.

Founded in 2010 by Bas Lansdorp, an engineer, Mars One says it has developed a realistic road map and financing plan for the project based on existing technologies and that the mission is perfectly feasible. The website states that the basic elements required for life are already present on the planet. For instance, water can be extracted from ice in the soil and Mars has sources of nitrogen, the primary element in the air we breathe. The colony will be powered by specially adapted solar panels, it says.

In March, Mars One said it had signed a contract with the American firm Paragon Space Development Corporation to take the first steps in developing the life support system and spacesuits fit for the mission.

The project will cost a reported $6bn (£4bn), a sum Lansdorp has said he hopes will be met partly by selling broadcasting rights. “The revenue garnered by the London Olympics was almost enough to finance a mission to Mars,” Lansdorp said, in an interview with ABC News in March.

Another ambassador to the project is Paul Römer, the co-creator of Big Brother, one of the first reality TV shows and one of the most successful.

On the website, Römer gave an indication of how the broadcasting of the project might proceed: “This mission to Mars can be the biggest media event in the world,” said Römer. “Reality meets talent show with no ending and the whole world watching. Now there’s a good pitch!”

The aim is to establish a permanent human colony, according to the company’s website. The first team would land on the surface of Mars in 2023 to begin constructing the colony, with a team of four astronauts every two years after that.

The project is not without its sceptics, however, and concerns have been raised about how astronauts might get to the surface and establish a colony with all the life support and other requirements needed. There were also concerns over the health implications for the applicants.

Dr Veronica Bray, from the University of Arizona’s lunar and planetary laboratory, told BBC News that Earth was protected from solar winds by a strong magnetic field, without which it would be difficult to survive. The Martian surface is very hostile to life. There is no liquid water, the atmospheric pressure is “practically a vacuum”, radiation levels are higher and temperatures vary wildly. High radiation levels can lead to increased cancer risk, a lowered immune system and possibly infertility, she said.

To minimise radiation, the project team will cover the domes they plan to build with several metres of soil, which the colonists will have to dig up.

The mission hopes to inspire generations to “believe that all things are possible, that anything can be achieved,” much like the Apollo moon landings.

“Mars One believes it is not only possible, but imperative that we establish a permanent settlement on Mars in order to accelerate our understanding of the formation of the solar system, the origins of life, and of equal importance, our place in the universe,” it says.

Read the entire article following the jump.

Image: Panoramic View From ‘Rocknest’ Position of Curiosity Mars Rover. Courtesy of JPL / NASA.


Moist and Other Words We Hate

Some words give us the creeps; they raise the hair on the backs of our necks, make us squirm and give us an internal shudder. “Moist” is such a word.

From Slate:

The George Saunders story “Escape From Spiderhead,” included in his much praised new book Tenth of December, is not for the squeamish or the faint of heart. The sprawling, futuristic tale delves into several potentially unnerving topics: suicide, sex, psychotropic drugs. It includes graphic scenes of self-mutilation. It employs the phrases “butt-squirm,” “placental blood,” and “thrusting penis.” At one point, Saunders relates a conversation between two characters about the application of medicinal cream to raw, chafed genitals.

Early in the story, there is a brief passage in which the narrator, describing a moment of postcoital amorousness, says, “Everything seemed moist, permeable, sayable.” This sentence doesn’t really stand out from the rest—in fact, it’s one of the less conspicuous sentences in the story. But during a recent reading of “Escape From Spiderhead” in Austin, Texas, Saunders says he encountered something unexpected. “I’d texted a cousin of mine who was coming with her kids (one of whom is in high school) just to let her know there was some rough language,” he recalls. “Afterwards she said she didn’t mind fu*k, but hated—wait for it—moist. Said it made her a little physically ill. Then I went on to Jackson, read there, and my sister Jane was in the audience—and had the same reaction. To moist.”

Mr. Saunders, say hello to word aversion.

It’s about to get really moist in here. But first, some background is in order. The phenomenon of word aversion—seemingly pedestrian, inoffensive words driving some people up the wall—has garnered increasing attention over the past decade or so. In a recent post on Language Log, University of Pennsylvania linguistics professor Mark Liberman defined the concept as “a feeling of intense, irrational distaste for the sound or sight of a particular word or phrase, not because its use is regarded as etymologically or logically or grammatically wrong, nor because it’s felt to be over-used or redundant or trendy or non-standard, but simply because the word itself somehow feels unpleasant or even disgusting.”

So we’re not talking about hating how some people say laxadaisical instead of lackadaisical or wanting to vigorously shake teenagers who can’t avoid using the word like between every other word of a sentence. If you can’t stand the word tax because you dislike paying taxes, that’s something else, too. (When recently asked about whether he harbored any word aversions, Harvard University cognition and education professor Howard Gardner offered up webinar, noting that these events take too much time to set up, often lack the requisite organization, and usually result in “a singularly unpleasant experience.” All true, of course, but that sort of antipathy is not what word aversion is all about.)

Word aversion is marked by strong reactions triggered by the sound, sight, and sometimes even the thought of certain words, according to Liberman. “Not to the things that they refer to, but to the word itself,” he adds. “The feelings involved seem to be something like disgust.”

Participants on various message boards and online forums have noted serious aversions to, for instance, squab, cornucopia, panties, navel, brainchild, crud, slacks, crevice, and fudge, among numerous others. Ointment, one Language Log reader noted in 2007, “has the same mouth-feel as moist, yet it’s somehow worse.” In response to a 2009 post on the subject by Ben Zimmer, one commenter confided: “The word meal makes me wince. Doubly so when paired with hot.” (Nineteen comments later, someone agreed, declaring: “Meal is a repulsive word.”) In many cases, real-life word aversions seem no less bizarre than when the words mattress and tin induce freak-outs on Monty Python’s Flying Circus. (The Monty Python crew knew a thing or two about annoying sounds.)

Jason Riggle, a professor in the department of linguistics at the University of Chicago, says word aversions are similar to phobias. “If there is a single central hallmark to this, it’s probably that it’s a more visceral response,” he says. “The [words] evoke nausea and disgust rather than, say, annoyance or moral outrage. And the disgust response is triggered because the word evokes a highly specific and somewhat unusual association with imagery or a scenario that people would typically find disgusting—but don’t typically associate with the word.” These aversions, Riggle adds, don’t seem to be elicited solely by specific letter combinations or word characteristics. “If we collected enough of [these words], it might be the case that the words that fall in this category have some properties in common,” he says. “But it’s not the case that words with those properties in common always fall in the category.”

So back to moist. If pop cultural references, Internet blog posts, and social media are any indication, moist reigns supreme in its capacity to disgust a great many of us. Aversion to the word has popped up on How I Met Your Mother and Dead Like Me. VH1 declared that using the word moist is enough to make a man “undateable.” In December, Huffington Post’s food section published a piece suggesting five alternatives to the word moist so the site could avoid its usage when writing about various cakes. Readers of The New Yorker flocked to Facebook and Twitter to choose moist as the one word they would most like to be eliminated from the English language. In a survey of 75 Mississippi State University students from 2009, moist placed second only to vomit as the ugliest word in the English language. In a 2011 follow-up survey of 125 students, moist pulled into the ugly-word lead—vanquishing a greatest hits of gross that included phlegm, ooze, mucus, puke, scab, and pus. Meanwhile, there are 7,903 people on Facebook who like the “interest” known as “I Hate the Word Moist.” (More than 5,000 other Facebook users give the thumbs up to three different moist-hatred Facebook pages.)

Being grossed out by the word moist is not beyond comprehension. It’s squishy-seeming, and, to some, specifically evocative of genital regions and undergarments. These qualities are not unusual when it comes to word aversion. Many hated words refer to “slimy things, or gross things, or names for garments worn in potentially sexual areas, or anything to do with food, or suckling, or sexual overtones,” says Riggle. But other averted words are more confounding, notes Liberman. “There is a list of words that seem to have sexual connotations that are among the words that elicit this kind of reaction—moist being an obvious one,” he says. “But there are other words like luggage, and pugilist, and hardscrabble, and goose pimple, and squab, and so on, which I guess you could imagine phonic associations between those words and something sexual, but it certainly doesn’t seem obvious.”

So then the question becomes: What is it about certain words that makes certain people want to hurl?

Riggle thinks the phenomenon may be dependent on social interactions and media coverage. “Given that, as far back as the aughts, there were comedians making jokes about hating [moist], people who were maybe prone to have that kind of reaction to one of these words, surely have had it pointed out to them that it’s an icky word,” he says. “So, to what extent is it really some sort of innate expression that is independently arrived at, and to what extent is it sort of socially transmitted? Disgust is really a very social emotion.”

And in an era of YouTube, Twitter, Vine, BuzzFeed top-20 gross-out lists, and so on, trends, even the most icky ones, spread fast. “There could very well be a viral aspect to this, where either through the media or just through real-world personal connections, the reaction to some particular word—for example, moist—spreads,” says Liberman. “But that’s the sheerest speculation.”

Words do have the power to disgust and repulse, though—that, at least, has been demonstrated in scholarly investigations. Natasha Fedotova, a Ph.D. student studying psychology at the University of Pennsylvania, recently conducted research examining the extent to which individuals connect the properties of an especially repellent thing to the word that represents it. “For instance,” she says, “the word rat, which stands for a disgusting animal, can contaminate an edible object [such as water] if the two touch. This result cannot be explained solely in terms of the tendency of the word to act as a reminder of the disgusting entity because the effect depends on direct physical contact with the word.” Put another way, if you serve people who are grossed out by rats Big Macs on plates that have the word rat written on them, some people will be less likely to want to eat the portion of the burger that touched the word. Humans, in these instances, go so far as to treat gross-out words “as though they can transfer negative properties through physical contact,” says Fedotova.

Product marketers and advertisers are, not surprisingly, well aware of these tendencies, even if they haven’t read about word aversion (and even though they’ve been known to slip up on the word usage front from time to time, to disastrous effect). George Tannenbaum, an executive creative director at the advertising agency R/GA, says those responsible for creating corporate branding strategies know that consumers are an easily skeeved-out bunch. “Our job as communicators and agents is to protect brands from their own linguistic foibles,” he says. “Obviously there are some words that are just ugly sounding.”

Sometimes, because the stakes are so high, Tannenbaum says clients can be risk averse to an extreme. He recalled working on an ad for a health club that included the word pectoral, which the client deemed to be dangerously close to the word pecker. In the end, after much consideration, they didn’t want to risk any pervy connotations. “We took it out,” he says.

Read the entire article following the jump.

Image courtesy of keep-calm-o-matic.


Idyllic Undeveloped Land: Only 1,200 Light Years Away

Humans may soon make their only home irreversibly uninhabitable. Fortunately, astronomers have recently discovered a couple of exoplanets capable of sustaining life. Unfortunately, they are a little too distant: with current technology it would take humans around 26 million years to reach them. But we can still dream.

From the New York Times:

Astronomers said Thursday that they had found the most Earth-like worlds yet known in the outer cosmos, a pair of planets that appear capable of supporting life and that orbit a star 1,200 light-years from here, in the northern constellation Lyra.

They are the two outermost of five worlds circling a yellowish star slightly smaller and dimmer than our Sun, heretofore anonymous and now destined to be known in the cosmic history books as Kepler 62, after NASA’s Kepler spacecraft, which discovered them. These planets are roughly half again as large as Earth and are presumably balls of rock, perhaps covered by oceans with humid, cloudy skies, although that is at best a highly educated guess.

Nobody will probably ever know if anything lives on these planets, and the odds are that humans will travel there only in their faster-than-light dreams, but the news has sent astronomers into heavenly raptures. William Borucki of NASA’s Ames Research Center, head of the Kepler project, described one of the new worlds as the best site for Life Out There yet found in Kepler’s four-years-and-counting search for other Earths. He treated his team to pizza and beer on his own dime to celebrate the find (this being the age of sequestration). “It’s a big deal,” he said.

Looming brightly in each other’s skies, the two planets circle their star at distances of 37 million and 65 million miles, about as far apart as Mercury and Venus in our solar system. Most significantly, their orbits place them both in the “Goldilocks” zone of lukewarm temperatures suitable for liquid water, the crucial ingredient for Life as We Know It.

Goldilocks would be so jealous.

Previous claims of Goldilocks planets with “just so” orbits snuggled up to red dwarf stars much dimmer and cooler than the Sun have had uncertainties in the size and mass and even the existence of these worlds, said David Charbonneau of the Harvard-Smithsonian Center for Astrophysics, an exoplanet hunter and member of the Kepler team.

“This is the first planet that ticks both boxes,” Dr. Charbonneau said, speaking of the outermost planet, Kepler 62f. “It’s the right size and the right temperature.” Kepler 62f is 40 percent bigger than Earth and smack in the middle of the habitable zone, with a 267-day year. In an interview, Mr. Borucki called it the best planet Kepler has found.

Its mate, known as Kepler 62e, is slightly larger — 60 percent bigger than Earth — and has a 122-day orbit, placing it on the inner edge of the Goldilocks zone. It is warmer but also probably habitable, astronomers said.

The Kepler 62 system resembles our own solar system, which also has two planets in the habitable zone: Earth — and Mars, which once had water and would still be habitable today if it were more massive and had been able to hang onto its primordial atmosphere.

The Kepler 62 planets continue a string of breakthroughs in the last two decades in which astronomers have gone from detecting the first known planets belonging to other stars, or exoplanets, broiling globs of gas bigger than Jupiter, to being able to discern smaller and smaller more moderate orbs — iceballs like Neptune and, now, bodies only a few times the mass of Earth, known technically as super-Earths. Size matters in planetary affairs because we can’t live under the crushing pressure of gas clouds on a world like Jupiter. Life as We Know It requires solid ground and liquid water — a gentle terrestrial environment, in other words.

Kepler 62’s newfound worlds are not quite small enough to be considered strict replicas of Earth, but the results have strengthened the already strong conviction among astronomers that the galaxy is littered with billions of Earth-size planets, perhaps as many as one per star, and that astronomers will soon find Earth 2.0, as they call it — our lost twin bathing in the rays of an alien sun.

“Kepler and other experiments are finding planets that remind us more and more of home,” said Geoffrey Marcy, a longtime exoplanet hunter at the University of California, Berkeley, and Kepler team member. “It’s an amazing moment in science. We haven’t found Earth 2.0 yet, but we can taste it, smell it, right there on our technological fingertips.”

Read the entire article following the jump.

Image: The Kepler 62 system: homes away from home. Courtesy of JPL-Caltech/Ames/NASA.


Science and Art of the Brain

Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.

From the New York Times:

This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.

Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.

As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.

The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.

This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.

Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.

Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?

Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”

Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.

This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.

Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.

Read the entire article following the jump.


Financial Apocalypse and Economic Collapse via Excel

It’s long been known that Microsoft PowerPoint fuels corporate mediocrity and causes brain atrophy when used by creative individuals. Now we discover that another flagship product from the Redmond software maker, this time Excel, is to blame for some significant stresses on the global financial system.

From ars technica:

An economics paper claiming that high levels of national debt led to low or negative economic growth could turn out to be deeply flawed as a result of, among other things, an incorrect formula in an Excel spreadsheet. Microsoft’s PowerPoint has been considered evil thanks to the proliferation of poorly presented data and dull slides that are created with it. Might Excel also deserve such hyperbolic censure?

The paper, Growth in a Time of Debt, was written by economists Carmen Reinhart and Kenneth Rogoff and published in 2010. Since publication, it has been cited abundantly by the world’s press and by politicians, including one-time vice-presidential nominee Paul Ryan (R-WI). The link it draws between high levels of debt and negative average economic growth has been used by right-leaning politicians to justify austerity budgets: slashing government expenditure and reducing budget deficits in a bid to curtail the growth of debt.

This link was always controversial, with many economists proposing that the correlation between high debt and low growth was just as likely to have a causal link in the other direction to that proposed by Reinhart and Rogoff: it’s not that high debt causes low growth, but rather that low growth leads to high debt.

However, the underlying numbers and the existence of the correlation were broadly accepted, due in part to the fact that Reinhart and Rogoff’s paper did not include the source data behind its inferences, leaving others little means to check them.

A new paper, however, suggests that the data itself is in error. Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst, tried to reproduce the Reinhart and Rogoff result with their own data, but they couldn’t. So they asked for the original spreadsheets that Reinhart and Rogoff used to better understand what they were doing. Their results, published as “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff,” suggest that the pro-austerity paper was flawed. A comprehensive assessment of the new paper can be found at the Rortybomb economics blog.

It turns out that the Reinhart and Rogoff spreadsheet contained a simple coding error. The spreadsheet was supposed to average values across twenty countries in rows 30 to 49, but in fact it averaged only the 15 countries in rows 30 to 44: instead of the correct formula AVERAGE(L30:L49), the incorrect AVERAGE(L30:L44) was used.
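
To make the failure mode concrete, here is a minimal sketch (in Python, with made-up growth figures rather than the Reinhart-Rogoff data) of how a truncated range silently drops the last five countries from the average.

```python
# Illustration only: hypothetical growth figures for 20 countries.
# Truncating the range, as AVERAGE(L30:L44) did, drops the last five values.
growth = [
    -7.6, 2.4, 1.0, 3.1, 0.5,
     2.0, 1.5, -0.3, 2.2, 1.8,
     0.9, 2.6, 1.1, 3.0, 0.7,
     2.9, 1.4, 2.1, 0.2, 1.6,
]

avg_all_20 = sum(growth) / len(growth)    # analogous to AVERAGE(L30:L49)
avg_first_15 = sum(growth[:15]) / 15      # analogous to the erroneous AVERAGE(L30:L44)

print(f"Average over all 20 countries:  {avg_all_20:.2f}%")
print(f"Average over the first 15 only: {avg_first_15:.2f}%")
```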

There was also a pair of important, but arguably more subjective, errors in the way the data was processed. Reinhart and Rogoff excluded data for some countries in the years immediately after World War II. There might be a reason for this; there might not. The original paper doesn’t justify the exclusion.

The original paper also used an unusual scheme for weighting data. The UK’s 19-year stretch of high debt and moderate growth (between 1946 and 1964 the debt-to-GDP ratio was above 90 percent, and growth averaged 2.4 percent) is collapsed into a single data point and treated as equivalent to New Zealand’s single year of debt above 90 percent, during which it experienced growth of -7.6 percent. Some kind of weighting system might be justified, with Herndon, Ash, and Pollin speculating that there is serial correlation between years.
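
A quick sketch of why the weighting choice matters, using only the two figures quoted above: counting each country once versus counting each country-year once gives very different averages.

```python
# Only the two figures quoted above are used here; everything else is arithmetic.
uk_years = [2.4] * 19   # 19 country-years of ~2.4% growth above the 90% debt threshold
nz_years = [-7.6]       # a single country-year at -7.6% growth

# One data point per country (the original paper's weighting):
per_country = (2.4 + -7.6) / 2

# One data point per country-year (closer to the critique's reweighting):
all_years = uk_years + nz_years
per_country_year = sum(all_years) / len(all_years)

print(f"Equal weight per country:      {per_country:+.1f}%")        # -2.6%
print(f"Equal weight per country-year: {per_country_year:+.1f}%")   # +1.9%
```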

Recalculating the data to remove these three issues turns out to provide much weaker evidence for austerity. Although growth is higher in countries with a debt ratio of less than 30 percent (averaging 4.2 percent), there’s no point at which it falls off a cliff and inevitably turns negative. For countries with a debt of between 30 and 60 percent, average growth was 3.1 percent; between 60 and 90 percent it was 3.2 percent; and above 90 percent it was 2.2 percent. That is lower than the growth of the low-debt countries, but far from the -0.1 percent growth the original paper claimed.

As such, the argument that high levels of debt should be avoided, and with it the justification for austerity budgets, substantially evaporates. Whether politicians actually used this paper to shape their beliefs or merely used its findings to give cover for their own pre-existing beliefs is hard to judge.

Excel, of course, isn’t the only thing to blame here. But it played a role. Excel is used extensively in fields such as economics and finance, because it’s an extremely useful tool that can be deceptively simple to use, making it apparently perfect for ad hoc calculations. However, spreadsheet formulae are notoriously fiddly to work with and debug, and Excel has long-standing deficiencies when it comes to certain kinds of statistical analysis.

It’s unlikely that this is the only occasion on which improper use of Excel has produced a bad result with far-reaching consequences. Bruno Iksil, better known as the “London Whale,” racked up billions of dollars of losses for bank JPMorgan. The post mortem of his trades revealed extensive use of Excel, including manual copying and pasting between workbooks and a number of formula errors that resulted in underestimation of risk.

Read the entire article following the jump.

Image: Default Screen of Microsoft Excel 2013, component of Microsoft Office 2013. Courtesy of Microsoft / Wikipedia.


Off World Living

Will humanity ever transcend gravity to become a space-faring race? A simple back-of-the-napkin calculation will give you the answer.

From Scientific American:

Optimistic visions of a human future in space seem to have given way to a confusing mix of possibilities, maybes, ifs, and buts. It’s not just the fault of governments and space agencies; basic physics is in part the culprit. Hoisting mass away from Earth is tremendously difficult, and in fifty years we’ve barely managed a total equivalent to a large oil tanker. But there’s hope.

Back in the 1970s the physicist Gerard O’Neill and his students investigated concepts of vast orbital structures capable of sustaining entire human populations. It was the tail end of the Apollo era, and despite the looming specter of budget restrictions and terrestrial pessimism there was still a sense of what might be, what could be, and what was truly within reach.

The result was a series of blueprints for habitats that solved all manner of problems for space life, from artificial gravity (spin up giant cylinders), to atmospheres, and radiation (let the atmosphere shield you). They’re pretty amazing, and they’ve remained perhaps one of the most optimistic visions of a future where we expand beyond the Earth.

But there’s a lurking problem, and it comes down to basic physics. It is awfully hard to move stuff from the surface of our planet into orbit or beyond. O’Neill knew this, as does anyone else who’s thought of grand space schemes. The solution is to ‘live off the land’, extracting raw materials either from the Moon, with its shallower gravity well, or from asteroids. To get to that point, though, we’d still have to loft an awful lot of stuff into space – the basic tools and infrastructure have to start somewhere.

And there’s the rub. To put it into perspective, I took a look at the amount of ‘stuff’ we’ve managed to get off Earth in the past 50-60 years. It’s actually pretty hard to evaluate: lots of the mass we send up comes back down in short order – either as spent rocket stages or as short-lived low-altitude satellites. But we can still get a feel for it.

To start with, a lower limit on the mass hoisted to space is the present-day artificial satellite population. Altogether there are more than 3,000 satellites up there, plus vast amounts of small debris. Current estimates suggest this amounts to a total of around 6,000 metric tons. The biggest single structure is the International Space Station, currently coming in at about 450 metric tons (about 992,000 lb for reference).

These numbers don’t reflect launch mass – the total of a rocket + payload + fuel. To put that into context, a fully loaded Saturn V was about 2,000 metric tons, but most of that was fuel.

When the Space Shuttle flew, about 115 metric tons (Shuttle + payload) made it into low-Earth orbit on each flight. Since there were 135 launches of the Shuttle, that amounts to a total hoisted mass of about 15,000 metric tons over a 30-year period.
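
Restating that back-of-the-envelope arithmetic as a short sketch, using only the rounded figures quoted above:

```python
# Rounded figures quoted in the text above; nothing here is new data.
shuttle_mass_to_orbit_t = 115    # Shuttle + payload reaching low-Earth orbit, per flight
shuttle_flights = 135
satellite_population_t = 6_000   # estimated mass of the current satellite population
iss_mass_t = 450                 # International Space Station

shuttle_total_t = shuttle_mass_to_orbit_t * shuttle_flights
print(f"Shuttle programme, 30 years: ~{shuttle_total_t:,} metric tons")   # ~15,525
print(f"Satellites in orbit today:   ~{satellite_population_t:,} metric tons "
      f"(ISS alone ~{iss_mass_t} metric tons)")
```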

Read the entire article after the jump.

Image: A pair of O’Neill cylinders. NASA ID number AC75-1085. Courtesy of NASA / Wikipedia.


Getting to the Bottom of It: Crimes of Fashion

Living in the West we are generally at liberty to wear what we wish, certainly in private, and usually in public — subject to public norms of course. That said, one can make a good case for punishing offenders of all genders who enact “crimes of fashion”.

From the Telegraph:

One of the lesser-known effects of the double-dip recession is that young men have been unable to afford belts. All over the Western world we have had to witness exposed bottoms, thanks to lack of funds to pop out and buy a belt or a pair of braces, although many people have tried to convince me that this is actually a conscious “fashion’” choice.

A town in Louisiana has fought back against this practice and is now imposing fines for those who choose to fly their trousers at half-mast.  What a shame this new law is, as these poor chaps are exactly that – poor.  They can’t afford a belt!  Fining them isn’t going to help their finances, is it?

These weird people who try to tell me boys actually choose to wear their trousers in this style have said that it harks back to the American prisons, when fashion accessories such as belts were whipped off the inmates in case they did anything foolish with them.  Like wearing a brown one with black shoes.

There is also a school of thought that showing the posterior was a sign to others that you were open to “advances”.  I cited this to a group of boys at a leading school recently and the look of horror that came over their faces was interesting to note.

It’s not just the chaps and belt-makers that are suffering from this recession. Women seem to be unable to afford tops that cover their bra straps. You only have to walk down any high street: you may as well be in a lingerie department.  Showing your underwear is clearly a sign that you are poor – in need of charity, sympathy and probably state-funded assistance.

To play devil’s advocate for one second: if these economic sufferers are actually making a conscious choice to show the rest of us their pants, then maybe Louisiana has the right idea. Fines are perhaps the best way to go. Here is a suggested menu of fines, which you’ll be pleased to know I have submitted to local councils the length and breadth of the nation.

For him

Trousers around bottom – £25 [$37.50]

Brown shoes with a suit – £35 [$52.50]

Tie length too short – £15 [$22.50]

Top button undone when wearing a tie – £20 [$30]

For her

Open toed shoes at formal evening events – £15 [$22.50]

Bra straps on show – £25 [$37.50]

Skirts that are shorter than the eyelashes – £20 [$30]

Too much cleavage as well as too much leg on display – £25 [$37.50]

Wearing heels that you haven’t learned to walk in yet – £12 [$18]

Read the entire article after the jump.


Ray Kurzweil and Living a Googol Years

By all accounts serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the flatbed scanner, optical character recognition and pioneering music synthesizers. As a futurist, the role for which he is now better known in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’ “

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.

Read the entire article after the jump.


Cheap Hydrogen

Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scales. Typically, hydrogen is produced by reacting high-temperature steam with methane from natural gas. A small share, less than five percent of annual production, is also made through electrolysis, which passes an electric current through water.

This new method of production appears to be less costly, less dangerous and also more environmentally sound.

From the Independent:

Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.

The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.

Unlike fossil fuels, hydrogen can be burned to produce energy without releasing carbon emissions. It is also the most abundant element in the universe.

Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.

In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.

Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.

Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”

Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.

But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.

Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.

Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”

Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”

Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”

Read the entire article following the jump.


The Digital Afterlife and i-Death

Leave it to Google to help you auto-euthanize and die digitally. The presence of our online selves after death was of limited concern until recently. However, with the explosion of online media and social networks our digital tracks remain preserved and scattered across drives and backups in distributed, anonymous data centers. Physical death does not change this.

[A case in point: your friendly editor at theDiagonal was recently asked to befriend a colleague via LinkedIn. All well and good, except that the colleague had passed away two years earlier.]

So, armed with Google’s new Inactive Account Manager, death, at least online, may be just a couple of clicks away. As a corollary, it would be a small leap indeed to imagine an enterprising company charging an annual fee to maintain a dearly departed member’s digital afterlife ad infinitum.

From the Independent:

The search engine giant Google has announced a new feature designed to allow users to decide what happens to their data after they die.

The feature, which applies to the Google-run email system Gmail as well as Google Plus, YouTube, Picasa and other tools, represents an attempt by the company to be the first to deal with the sensitive issue of data after death.

In a post on the company’s Public Policy Blog Andreas Tuerk, Product Manager, writes: “We hope that this new feature will enable you to plan your digital afterlife – in a way that protects your privacy and security – and make life easier for your loved ones after you’re gone.”

Google says that the new account management tool will allow users to opt to have their data deleted after three, six, nine or 12 months of inactivity. Alternatively users can arrange for certain contacts to be sent data from some or all of their services.

The California-based company did however stress that individuals listed to receive data in the event of ‘inactivity’ would be warned by text or email before the information was sent.

Social networking site Facebook already has a function that allows friends and family to “memorialize” an account once its owner has died.

Read the entire article following the jump.


Tracking and Monetizing Your Every Move

Your movements are valuable — but not in the way you may think. Mobile technology companies are moving rapidly to exploit the vast amount of data collected from the billions of mobile devices. This data is extremely valuable to an array of organizations, including urban planners, retailers, and travel and transportation marketers. And, of course, this raises significant privacy concerns. Many believe that when the data is used collectively it preserves user anonymity. However, if correlated with other data sources it could be used to discover a range of unintended and previously private information, relating both to individuals and to groups.

From MIT Technology Review:

Wireless operators have access to an unprecedented volume of information about users’ real-world activities, but for years these massive data troves were put to little use other than for internal planning and marketing.

This data is under lock and key no more. Under pressure to seek new revenue streams (see “AT&T Looks to Outside Developers for Innovation”), a growing number of mobile carriers are now carefully mining, packaging, and repurposing their subscriber data to create powerful statistics about how people are moving about in the real world.

More comprehensive than the data collected by any app, this is the kind of information that, experts believe, could help cities plan smarter road networks, businesses reach more potential customers, and health officials track diseases. But even if shared with the utmost of care to protect anonymity, it could also present new privacy risks for customers.

Verizon Wireless, the largest U.S. carrier with more than 98 million retail customers, shows how such a program could come together. In late 2011, the company changed its privacy policy so that it could share anonymous and aggregated subscriber data with outside parties. That made possible the launch of its Precision Market Insights division last October.

The program, still in its early days, is creating a natural extension of what already happens online, with websites tracking clicks and getting a detailed breakdown of where visitors come from and what they are interested in.

Similarly, Verizon is working to sell demographic data about the people who attend an event, for example, along with how they got there and the kinds of apps they use once they arrive. In a recent case study, says program spokeswoman Debra Lewis, Verizon showed that fans from Baltimore outnumbered fans from San Francisco by three to one inside the Super Bowl stadium. That information might have been expensive or difficult to obtain in other ways, such as through surveys, because not all the people in the stadium purchased their own tickets and had credit card information on file, nor had they all downloaded the Super Bowl’s app.

Other telecommunications companies are exploring similar ideas. In Europe, for example, Telefonica launched a similar program last October, and the head of this new business unit gave the keynote address at a new industry conference on “big data monetization in telecoms” in January.

“It doesn’t look to me like it’s a big part of their [telcos’] business yet, though at the same time it could be,” says Vincent Blondel, an applied mathematician who is now working on a research challenge from the operator Orange to analyze two billion anonymous records of communications between five million customers in Africa.

The concerns about making such data available, Blondel says, are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups (see “How Access to Location Data Could Trample Your Privacy”).

Already, some startups are building businesses by aggregating this kind of data in useful ways, beyond what individual companies may offer. For example, AirSage, an Atlanta, Georgia, company founded in 2000, has spent much of the last decade negotiating what it says are exclusive rights to put its hardware inside the firewalls of two of the top three U.S. wireless carriers and collect, anonymize, encrypt, and analyze cellular tower signaling data in real time. Since AirSage solidified the second of these major partnerships about a year ago (it won’t specify which carriers it works with), it has been processing 15 billion locations a day and can account for movement of about a third of the U.S. population in some places to within less than 100 meters, says marketing vice president Andrea Moe.

As users’ mobile devices ping cellular towers in different locations, AirSage’s algorithms look for patterns in that location data—mostly to help transportation planners and traffic reports, so far. For example, the software might infer that the owners of devices that spend time in a business park from nine to five are likely at work, so a highway engineer might be able to estimate how much traffic on the local freeway exit is due to commuters.
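
As a purely hypothetical sketch of that kind of inference (this is not AirSage’s actual algorithm; the function name, zone labels and thresholds are invented for illustration), one could flag a device as a likely commuter when its anonymized tower pings cluster in a business-park zone during office hours across several weekdays:

```python
from collections import Counter
from datetime import datetime

def likely_commuter(pings, work_zone, min_weekdays=3):
    """Hypothetical heuristic: pings is a list of (unix_timestamp, zone_id)
    tuples from anonymized tower data for a single device."""
    office_hour_days = Counter()
    for ts, zone in pings:
        t = datetime.fromtimestamp(ts)
        if zone == work_zone and t.weekday() < 5 and 9 <= t.hour < 17:
            office_hour_days[t.date()] += 1
    # Flag the device if it shows up in the zone during office hours
    # (at least twice a day) on `min_weekdays` or more distinct weekdays.
    return sum(1 for hits in office_hour_days.values() if hits >= 2) >= min_weekdays
```

Aggregating such flags over many devices, rather than tracking any one of them, is what yields the commuter estimates described above.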

Other companies are starting to add layers of information beyond cellular network data. One customer of AirSage is a relatively small San Francisco startup, Streetlight Data, which recently raised $3 million in financing backed partly by the venture capital arm of Deutsche Telekom.

Streetlight buys both cellular network and GPS navigation data that can be mined for useful market research. (The cellular data covers a larger number of people, but the GPS data, collected by mapping software providers, can improve accuracy.) Today, many companies already build massive demographic and behavioral databases on top of U.S. Census information about households to help retailers choose where to build new stores and plan marketing budgets. But Streetlight’s software, with interactive, color-coded maps of neighborhoods and roads, offers more practical information. It can be tied to the demographics of people who work nearby, commute through on a particular highway, or are just there for a visit, rather than just supplying information about who lives in the area.

Read the entire article following the jump.

Image: mobile devices. Courtesy of W3.org


Dark Lightning

It’s fascinating how a seemingly well-understood phenomenon, such as lightning, can still yield enormous surprises. Researchers have found that visible flashes of lightning can also be accompanied by non-visible, and more harmful, radiation such as x- and gamma-rays.

From the Washington Post:

A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.

But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.

Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.

Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.

The only way to determine whether an airplane had been struck by dark lightning, Dwyer says, “would be to use a radiation detector. Right in the middle of [a flash], a very brief bluish-purple glow around the plane might be perceptible. Inside an aircraft, a passenger would probably not be able to feel or hear much of anything, but the radiation dose could be significant.”

However, because there’s only about one dark lightning occurrence for every thousand visible flashes and because pilots take great pains to avoid thunderstorms, Dwyer says, the risk of injury is quite limited. No one knows for sure if anyone has ever been hit by dark lightning.

About 25 million visible thunderbolts hit the United States every year, killing about 30 people and many farm animals, says John Jensenius, a lightning safety specialist with the National Weather Service in Gray, Maine. Worldwide, thunderstorms produce about a billion or so lightning bolts annually.

Read the entire article after the jump.

Image: Lightning in Foshan, China. Courtesy of Telegraph.


The Dangerous World of Pseudo-Academia

Pseudoscience can be fun, for comedic purposes only of course. But when it is taken seriously and dogmatically, as it often is by a significant number of people, it imperils rational dialogue and threatens real scientific and cultural progress. There is no end to the lengthy list of fake scientific claims and theories; some of our favorites include the moon “landing” conspiracy, hollow Earth, the Bermuda Triangle, crop circles, psychic surgery, body earthing, room-temperature fusion, and perpetual motion machines.

Fun aside, pseudoscience can also be harmful and dangerous particularly when those duped by the dubious practice are harmed physically, medically or financially. Which brings us to a recent, related development aimed at duping academics. Welcome to the world of pseudo-academia.

From the New York Times:

The scientists who were recruited to appear at a conference called Entomology-2013 thought they had been selected to make a presentation to the leading professional association of scientists who study insects.

But they found out the hard way that they were wrong. The prestigious, academically sanctioned conference they had in mind has a slightly different name: Entomology 2013 (without the hyphen). The one they had signed up for featured speakers who were recruited by e-mail, not vetted by leading academics. Those who agreed to appear were later charged a hefty fee for the privilege, and pretty much anyone who paid got a spot on the podium that could be used to pad a résumé.

“I think we were duped,” one of the scientists wrote in an e-mail to the Entomological Society.

Those scientists had stumbled into a parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them. Many of the journals and meetings have names that are nearly identical to those of established, well-known publications and events.

Steven Goodman, a dean and professor of medicine at Stanford and the editor of the journal Clinical Trials, which has its own imitators, called this phenomenon “the dark side of open access,” the movement to make scholarly publications freely available.

The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.

Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”

Researchers also say that universities are facing new challenges in assessing the résumés of academics. Are the publications they list in highly competitive journals or ones masquerading as such? And some academics themselves say they have found it difficult to disentangle themselves from these journals once they mistakenly agree to serve on their editorial boards.

The phenomenon has caught the attention of Nature, one of the most competitive and well-regarded scientific journals. In a news report published recently, the journal noted “the rise of questionable operators” and explored whether it was better to blacklist them or to create a “white list” of those open-access journals that meet certain standards. Nature included a checklist on “how to perform due diligence before submitting to a journal or a publisher.”

Jeffrey Beall, a research librarian at the University of Colorado in Denver, has developed his own blacklist of what he calls “predatory open-access journals.” There were 20 publishers on his list in 2010, and now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 percent of the total number of open-access journals.

“It’s almost like the word is out,” he said. “This is easy money, very little work, a low barrier start-up.”

Journals on what has become known as “Beall’s list” generally do not post the fees they charge on their Web sites and may not even inform authors of them until after an article is submitted. They barrage academics with e-mail invitations to submit articles and to be on editorial boards.

One publisher on Beall’s list, Avens Publishing Group, even sweetened the pot for those who agreed to be on the editorial board of The Journal of Clinical Trails & Patenting, offering 20 percent of its revenues to each editor.

One of the most prolific publishers on Beall’s list, Srinubabu Gedela, the director of the Omics Group, has about 250 journals and charges authors as much as $2,700 per paper. Dr. Gedela, who lists a Ph.D. from Andhra University in India, says on his Web site that he “learnt to devise wonders in biotechnology.”

Read the entire article following the jump.

Image courtesy of University of Texas.


Looking for Alien Engineering Work

We haven’t yet found any aliens inhabiting exoplanets orbiting distant stars. We haven’t received any intelligently manufactured radio signals from deep space. And, unless you subscribe to the conspiracy theories surrounding Roswell and Area 51, it’s unlikely that we’ve been visited by an extra-terrestrial intelligence.

Most reasonable calculations suggest that the universe should be teeming with life beyond our small, blue planet. So, where are all the aliens and why haven’t we been contacted yet? Not content to wait, some astronomers believe we should be looking for evidence of distant alien engineering projects.

From the New Scientist:

ALIENS: where are you? Our hopes of finding intelligent companionship seem to be constantly receding. Mars and Venus are not the richly populated realms we once guessed at. The icy seas of the outer solar system may hold life, but almost certainly no more than microbes. And the search for radio signals from more distant extraterrestrials has so frustrated some astronomers that they are suggesting we shout out an interstellar “Hello”, in the hope of prodding the dozy creatures into a response.

So maybe we need to think along different lines. Rather than trying to intercept alien communications, perhaps we should go looking for alien artefacts.

There have already been a handful of small-scale searches, but now three teams of astronomers are setting out to scan a much greater volume of space. Two groups hope to see the shadows of alien industry in fluctuating starlight. The third, like archaeologists sifting through a midden heap on Earth, is hunting for alien waste.

What they’re after is something rather grander than flint arrowheads or shards of pottery. Something big. Planet-sized power stations. Star-girdling rings or spheres. Computers the size of a solar system. Perhaps even an assembly of hardware so vast it can darken an entire galaxy.

It might seem crazy to even entertain the notion of such stupendous celestial edifices, let alone go and look for them. Yet there is a simple rationale. Unless tool-users are always doomed to destroy themselves, any civilisation out there is likely to be far older and far more advanced than ours.

Humanity has already covered vast areas of Earth’s surface with roads and cities, and begun sending probes to other planets. If we can do all this in a matter of centuries, what could more advanced civilisations do over many thousands or even millions of years?

In 1960, the physicist Freeman Dyson pointed out that if alien civilisations keep growing and expanding, they will inevitably consume ever more energy – and the biggest source of energy in any star system is the star itself. Our total power consumption today is equivalent to about 0.01 per cent of the sunlight falling on Earth, so solar power could easily supply all our needs. If energy demand keeps growing at 1 per cent a year, however, then in 1000 years we’d need more energy than strikes the surface of the planet. Other energy sources, such as nuclear fusion, cannot solve the problem because the waste heat would fry the planet.
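
The compound-growth arithmetic behind that claim, sketched using just the two figures quoted above:

```python
# ~0.01 per cent of incident sunlight today, growing 1 per cent a year for 1000 years.
fraction_of_sunlight_now = 0.0001
annual_growth = 0.01
years = 1000

fraction_then = fraction_of_sunlight_now * (1 + annual_growth) ** years
print(f"Fraction of Earth's sunlight needed after {years} years: {fraction_then:.1f}")
# Prints roughly 2.1: about double all the sunlight striking the planet,
# consistent with the claim that demand would outgrow the surface supply.
```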

In a similar position, alien civilisations could start building solar power plants, factories and even habitats in space. With material mined from asteroids, then planets, and perhaps even the star itself, they could really spread out. Dyson’s conclusion was that after thousands or millions of years, the star might be entirely surrounded by a vast artificial sphere of solar panels.

The scale of a Dyson sphere is almost unimaginable. A sphere with a radius similar to that of Earth’s orbit would have more than a hundred million times the surface area of Earth. Nobody thinks building it would be easy. A single shell is almost certainly out, as it would be under extraordinary stresses and gravitationally unstable. A more plausible option is a swarm: many huge power stations on orbits that do not intersect, effectively surrounding the star. Dyson himself does not like to speculate on the details, or on the likelihood of a sphere being built. “We have no way of judging,” he says. The crucial point is that if any aliens have built Dyson spheres, there is a chance we could spot them.

A sphere would block the sun’s light, making it invisible to our eyes, but the sphere would still emit waste heat in the form of infrared radiation. So, as Carl Sagan pointed out in 1966, if infrared telescopes spot a warm object but nothing shows up at visible wavelengths, it could be a Dyson sphere.

Some natural objects can produce the same effect. Very young and very old stars are often surrounded by dust and gas, which blocks their light and radiates infrared. But the infrared spectrum of these objects should be a giveaway. Silicate minerals in dust produce a distinctive broad peak in the spectrum, and molecules in a warm gas would produce bright or dark spectral lines at specific wavelengths. By contrast, waste heat from a sphere should have a smooth, featureless thermal spectrum. “We would be hoping that the spectrum looks boring,” says Matt Povich at the California State Polytechnic University in Pomona. “The more boring the better.”
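
For illustration only, here is a minimal sketch of my own (standard physical constants and an assumed shell temperature of about 300 K, nothing taken from the search teams) of the "boring" signature they are after: waste heat should follow a smooth Planck blackbody curve, with no silicate bump or spectral lines.

import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    # Blackbody spectral radiance B_lambda in W sr^-1 m^-3: smooth and featureless.
    return (2.0 * H * C ** 2 / wavelength_m ** 5) / (
        math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0)

# Sample the mid-infrared, where a shell near 300 K peaks (around 10 micrometres).
for um in (5, 10, 15, 20, 25):
    print(f"{um:>3} um: {planck(um * 1e-6, 300.0):.3e} W sr^-1 m^-3")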

Our first good view of the sky at the appropriate wavelengths came when the Infrared Astronomical Satellite surveyed the skies for 10 months in 1983, and a few astronomers have sifted through its data. Vyacheslav Slysh at the Space Research Institute in Moscow made the first attempt in 1985, and Richard Carrigan at Fermilab in Illinois published the latest search in 2009. “I wanted to get into the mode of the British Museum, to go and look for artefacts,” he says.

Carrigan found no persuasive sources, but the range of his search was limited. It would have detected spheres around sunlike stars only within 1000 light years of Earth. This is a very small part of the Milky Way, which is 100,000 light years across.

One reason few have joined Carrigan in the hunt for artefacts is the difficulty of getting funding for such projects. Then last year, the Templeton Foundation – an organisation set up by a billionaire to fund research into the “big questions” – invited proposals for its New Frontiers programme, specifically requesting research that would not normally be funded because of its speculative nature. A few astronomers jumped at the chance to look for alien contraptions and, in October, the programme approved three separate searches. The grants are just a couple of hundred thousand dollars each, but they do not have to fund new telescopes, only new analysis.

One team, led by Jason Wright at Pennsylvania State University in University Park, will look for the waste heat of Dyson spheres by analysing data from two space-based infrared observatories, the Wide-field Infrared Survey Explorer (WISE) and the Spitzer space telescope, launched in 2009 and 2003. Povich, a member of this team, is looking specifically within the Milky Way. Thanks to the data from Spitzer and WISE, Povich should be able to scan a volume of space thousands of times larger than previous searches like Carrigan’s. “For example, if you had a sun-equivalent star, fully enclosed in a Dyson sphere, we should be able to detect it almost anywhere in the galaxy.”

Even such a wide-ranging hunt may not be ambitious enough, according to Wright. He suspects that interstellar travel will prove no harder than constructing a sphere. An alien civilisation with such a high level of technology would spread out and colonise the galaxy in a few million years, building spheres as they go. “I would argue that it’s very hard for a spacefaring civilisation to die out. There are too many lifeboats,” says Wright. “Once you have self-sufficient colonies, you will take over the galaxy – you can’t even try to stop it because you can’t coordinate the actions of the colonies.”

If this had happened in the Milky Way, there should be spheres everywhere. “To find one or a few Dyson spheres in our galaxy would be very strange,” says Wright.

Read the entire article after the jump.

Image: 2001: A Space Odyssey, The Monolith. Courtesy of Daily Galaxy.


The Cycle of Dispossession and Persecution

In 2010, novelist Iain Banks delivered his well-crafted and heartfelt view of a very human problem: our inability to learn from past mistakes. Courageously for someone in the public eye, he also did something non-trivial, however small, about one all-too-common injustice. We excerpt his essay below.

From the Guardian:

I support the Boycott, Divestment and Sanctions (BDS) campaign because, especially in our instantly connected world, an injustice committed against one, or against one group of people, is an injustice against all, against every one of us; a collective injury.

My particular reason for participating in the cultural boycott of Israel is that, first of all, I can; I’m a writer, a novelist, and I produce works that are, as a rule, presented to the international market. This gives me a small extra degree of power over that which I possess as a (UK) citizen and a consumer. Secondly, where possible when trying to make a point, one ought to be precise, and hit where it hurts. The sports boycott of South Africa when it was still run by the racist apartheid regime helped to bring the country to its senses because the ruling Afrikaner minority put so much store in their sporting prowess. Rugby and cricket in particular mattered to them profoundly, and their teams’ generally elevated position in the international league tables was a matter of considerable pride. When they were eventually isolated by the sporting boycott – as part of the wider cultural and trade boycott – they were forced that much more persuasively to confront their own outlaw status in the world.

A sporting boycott of Israel would make relatively little difference to the self-esteem of Israelis in comparison to South Africa; an intellectual and cultural one might help make all the difference, especially now that the events of the Arab spring and the continuing repercussions of the attack on the Gaza-bound flotilla peace convoy have threatened both Israel’s ability to rely on Egypt’s collusion in the containment of Gaza, and Turkey’s willingness to engage sympathetically with the Israeli regime at all. Feeling increasingly isolated, Israel is all the more vulnerable to further evidence that it, in turn, like the racist South African regime it once supported and collaborated with, is increasingly regarded as an outlaw state.

I was able to play a tiny part in South Africa’s cultural boycott, ensuring that – once it thundered through to me that I could do so – my novels weren’t sold there (while subject to an earlier contract, under whose terms the books were sold in South Africa, I did a rough calculation of royalties earned each year and sent that amount to the ANC). Since the 2010 attack on the Turkish-led convoy to Gaza in international waters, I’ve instructed my agent not to sell the rights to my novels to Israeli publishers. I don’t buy Israeli-sourced products or food, and my partner and I try to support Palestinian-sourced products wherever possible.

It doesn’t feel like much, and I’m not completely happy doing even this; it can sometimes feel like taking part in collective punishment (although BDS is, by definition, aimed directly at the state and not the people), and that’s one of the most damning charges that can be levelled at Israel itself: that it engages in the collective punishment of the Palestinian people within Israel, and the occupied territories, that is, the West Bank and – especially – the vast prison camp that is Gaza. The problem is that constructive engagement and reasoned argument demonstrably have not worked, and the relatively crude weapon of boycott is pretty much all that’s left. (To the question, “What about boycotting Saudi Arabia?” – all I can claim is that cutting back on my consumption of its most lucrative export was a peripheral reason for giving up the powerful cars I used to drive, and for stopping flying, some years ago. I certainly wouldn’t let a book of mine be published there either, although – unsurprisingly, given some of the things I’ve said about that barbaric excuse for a country, not to mention the contents of the books themselves – the issue has never arisen, and never will with anything remotely resembling the current regime in power.)

As someone who has always respected and admired the achievements of the Jewish people – they’ve probably contributed even more to world civilisation than the Scots, and we Caledonians are hardly shy about promoting our own wee-but-influential record and status – and has felt sympathy for the suffering they experienced, especially in the years leading up to and then during the second world war and the Holocaust, I’ll always feel uncomfortable taking part in any action that – even if only thanks to the efforts of the Israeli propaganda machine – may be claimed by some to target them, despite the fact that the state of Israel and the Jewish people are not synonymous. Israel and its apologists can’t have it both ways, though: if they’re going to make the rather hysterical claim that any and every criticism of Israeli domestic or foreign policy amounts to antisemitism, they have to accept that this claimed, if specious, indivisibility provides an opportunity for what they claim to be the censure of one to function as the condemnation of the other.

Read the entire essay after the jump.


Technology and the Exploitation of Children

Many herald the forward motion of technological innovation as progress. In many cases the momentum does genuinely seem to carry us towards a better place: it broadly alleviates pain and suffering, and it delivers more and better nutrition to our bodies and our minds. Yet for all the positive steps, this progress is often accompanied by retrograde, and often paradoxical, leaps. Particularly disturbing is the relative ease with which technology allows us, the responsible adults, to sexualise and exploit children. This is certainly not a new phenomenon, but our technical prowess makes the problem more pervasive. A case in point: the Instagram beauty pageant. Move over, Honey Boo-Boo.

From the Washington Post:

The photo-sharing site Instagram has become wildly popular as a way to trade pictures of pets and friends. But a new trend on the site is making parents cringe: beauty pageants, in which thousands of young girls — many appearing no older than 12 or 13 — submit photographs of themselves for others to judge.

In one case, the mug shots of four girls, middle-school-age or younger, have been pitted against each other. One is all dimples, wearing a hair bow and a big, toothy grin. Another is trying out a pensive, sultry look.

Any of Instagram’s 30 million users can vote on the appearance of the girls in a comments section of the post. Once a girl’s photo receives a certain number of negative remarks, the pageant host, who can remain anonymous, can update it with a big red X or the word “OUT” scratched across her face.

“U.G.L.Y,” wrote one user about a girl, who submitted her photo to one of the pageants identified on Instagram by the keyword “#beautycontest.”

The phenomenon has sparked concern among parents and child safety advocates who fear that young girls are making themselves vulnerable to adult strangers and participating in often cruel social interactions at a sensitive period of development.

But the contests are the latest example of how technology is pervading the lives of children in ways that parents and teachers struggle to understand or monitor.

“What started out as just a photo-sharing site has become something really pernicious for young girls,” said Rachel Simmons, author of “Odd Girl Out” and a speaker on youth and girls. “What happened was, like most social media experiences, girls co-opted it and imposed their social life on it to compete for attention and in a very exaggerated way.”

It’s difficult to track when the pageants began and who initially set them up. A keyword search of #beautycontest turned up 8,757 posts, while #rateme had 27,593 photo posts. Experts say those two terms represent only a fraction of the activity. Contests are also appearing on other social media sites, including Tumblr and Snapchat — mobile apps that have grown in popularity among youth.

Facebook, which bought Instagram last year, declined to comment. The company has a policy of not allowing anyone under the age of 13 to create an account or share photos on Instagram. But Facebook has been criticized for allowing pre-teens to get around the rule — two years ago, Consumer Reports estimated their presence on Facebook was 7.5 million. (Washington Post Co. Chairman Donald Graham sits on Facebook’s board of directors.)

Read the entire article after the jump.

Image: Instagram. Courtesy of Wired.


Shedding Light on Dark Matter

Scientists are cautiously optimistic that results from a particle experiment circling the Earth onboard the International Space Station (ISS) hint at the existence of dark matter.

From Symmetry:

The space-based Alpha Magnetic Spectrometer experiment could be building toward evidence of dark matter, judging by its first result.

The AMS detector does its work more than 200 miles above Earth, latched to the side of the International Space Station. It detects charged cosmic rays, high-energy particles that for the most part originate outside our solar system.

The experiment’s first result, released today, showed an excess of antimatter particles—over the number expected to come from cosmic-ray collisions—in a certain energy range.

There are two competing explanations for this excess. Extra antimatter particles called positrons could be forming in collisions between unseen dark-matter particles and their antiparticles in space. Or an astronomical object such as a pulsar could be firing them into our solar system.

Luckily, there are a couple of ways to find out which explanation is correct.

If dark-matter particles are the culprits, the excess of positrons should sink suddenly above a certain energy. But if a pulsar is responsible, at higher energies the excess will only gradually disappear.

“The way they drop off tells you everything,” said AMS Spokesperson and Nobel laureate Sam Ting, in today’s presentation at CERN, the European center for particle physics.

The AMS result, to be published in Physical Review Letters on April 5, includes data from the energy range between 0.5 and 350 GeV. A graph of the flux of positrons over the flux of electrons and positrons takes the shape of a valley, dipping in the energy range between 0.5 and 10 GeV and then increasing steadily between 10 and 250 GeV. After that point, it begins to dip again—but the graph cuts off just before one can tell whether this is the great drop-off expected in dark matter models or the gradual fade-out expected in pulsar models. This confirms previous results from the PAMELA experiment, with greater precision.

Ting smiled slightly while presenting this cliffhanger, pointing to the empty edge of the graph. “In here, what happens is of great interest,” he said.

“We, of course, have a feeling what is happening,” he said. “But probably it is too early to discuss that.”

Ting kept mum about any data collected so far above that energy, telling curious audience members to wait until the experiment had enough information to present a statistically significant result.

“I’ve been working at CERN for many years. I’ve never made a mistake on an experiment,” he said. “And this is a very difficult experiment.”

A second way to determine the origin of the excess of positrons is to consider where they’re coming from. If positrons are hitting the detector from all directions at random, they could be coming from something as diffuse as dark matter. But if they are arriving from one preferred direction, they might be coming from a pulsar.

So far, the result leans toward the dark-matter explanation, with positrons coming from all directions. But AMS scientists will need to collect more data to say this for certain.

Read the entire article following the jump.

Image: Alpha Magnetic Spectrometer (AMS) detector latched on to the International Space Station. Courtesy of NASA / AMS-02.


The Filter Bubble Eats the Book World

Last week Amazon purchased Goodreads, the online book review site. Since 2007 Goodreads has grown to become home to over 16 million members who share a passion for discovering and sharing great literature. Now, with Amazon’s acquisition, many are concerned that this represents another step towards a monolithic and monopolistic enterprise that controls vast swathes of the market. While Amazon’s innovation has upended the bricks-and-mortar worlds of publishing and retailing, its increasingly dominant market power raises serious concerns over access, distribution and choice. This is another worrying example of the so-called filter bubble, in which increasingly edited selections and personalized recommendations limit and dumb down content.

From the Guardian:

“Truly devastating” for some authors but “like finding out my mom is marrying that cool dude next door that I’ve been palling around with” for another, Amazon’s announcement late last week that it was buying the hugely popular reader review site Goodreads has sent shockwaves through the book industry.

The acquisition, terms of which Amazon.com did not reveal, will close in the second quarter of this year. Goodreads, founded in 2007, has more than 16m members, who have added more than four books per second to their “want to read” shelves over the past 90 days, according to Amazon. The internet retailer’s vice president of Kindle content, Russ Grandinetti, said the two sites “share a passion for reinventing reading”.

“Goodreads has helped change how we discover and discuss books and, with Kindle, Amazon has helped expand reading around the world. In addition, both Amazon and Goodreads have helped thousands of authors reach a wider audience and make a better living at their craft. Together we intend to build many new ways to delight readers and authors alike,” said Grandinetti, announcing the buy. Goodreads co-founder Otis Chandler said the deal with Amazon meant “we’re now going to be able to move faster in bringing the Goodreads experience to millions of readers around the world”, adding on his blog that “we have no plans to change the Goodreads experience and Goodreads will continue to be the wonderful community we all cherish”.

But despite Chandler’s reassurances, many readers and authors reacted negatively to the news. American writers’ organisation the Authors’ Guild called the acquisition a “truly devastating act of vertical integration” which meant that “Amazon’s control of online bookselling approaches the insurmountable”. Bestselling legal thriller author Scott Turow, president of the Guild, said it was “a textbook example of how modern internet monopolies can be built”.

“The key is to eliminate or absorb competitors before they pose a serious threat,” said Turow. “With its 16 million subscribers, Goodreads could easily have become a competing online bookseller, or played a role in directing buyers to a site other than Amazon. Instead, Amazon has scuttled that potential and also squelched what was fast becoming the go-to venue for online reviews, attracting far more attention than Amazon for those seeking independent assessment and discussion of books. As those in advertising have long known, the key to driving sales is controlling information.”

Turow was joined in his concerns by members of Goodreads, many of whom expressed their fears about what the deal would mean on Chandler’s blog. “I have to admit I’m not entirely thrilled by this development,” wrote one of the more level-headed commenters. “As a general rule I like Amazon, but unless they take an entirely 100% hands-off attitude toward Goodreads I find it hard to believe this will be in the best interest for the readers. There are simply too many ways they can interfere with the neutral Goodreads experience and/or try to profit from the strictly volunteer efforts of Goodreads users.”

But not all authors were against the move. Hugh Howey, author of the smash hit dystopian thriller Wool – which took off after he self-published it via Amazon – said it was “like finding out my mom is marrying that cool dude next door that I’ve been palling around with”. While Howey predicted “a lot of hand-wringing over the acquisition”, he said there were “so many ways this can be good for all involved. I’m still trying to think of a way it could suck.”

Read the entire article following the jump.

Image: Amazon.com screen. Courtesy of New York Times.


Iain (M.) Banks

Where is the technology of the Culture when it’s most needed? Nothing more to add.

From the Guardian:

In Iain M Banks’s finest creation, the universe of the Culture, death is largely optional. It’s an option most people take in the end: they take it after three or four centuries, after living on a suitably wide variety of planets and in a suitably wide variety of bodies, and after a life of hedonism appropriate to the anarcho-communist Age of Plenty galactic civilisation in which they live; they take it in partial, reversible forms. But they take it. It’s an option.

Sadly, and obviously, that’s not true for us. Banks himself has released a statement on his website, saying that he has terminal cancer. He tells us as much with his usual eye for technical detail and stark impact:

I have cancer. It started in my gall bladder, has infected both lobes of my liver and probably also my pancreas and some lymph nodes, plus one tumour is massed around a group of major blood vessels in the same volume, effectively ruling out any chance of surgery to remove the tumours… The bottom line, now, I’m afraid, is that as a late stage gall bladder cancer patient, I’m expected to live for ‘several months’ and it’s extremely unlikely I’ll live beyond a year.

So there you have it.

Anything I write about Banks and his work, both as Iain Banks and Iain M Banks (for the uninitiated, Iain Banks is the name he publishes his non-genre novels under; Iain M Banks is for his sci-fi stuff), will ultimately be about me, I realise. I can’t pretend to say What His Work Meant for Literature or for Sci-Fi, because I don’t know what it meant; I can’t speak about him as a human being, beyond what I thought I could detect of his personality through his work (humane and witty and fascinated by the new, for the record), because I haven’t met him.

With that in mind, I just wanted to talk a bit about why I love his books, why I think he is one – or two, really – of our finest living writers, and how his work has had probably more impact on me than any other fiction writer.

I first read The Wasp Factory in about 1996, when my mum, keen to get me reading pretty much anything that wasn’t Terry Pratchett, heard of this “enfant explosif” of Scottish literature. It’s a slightly tricky admission to make in a hagiographical piece like this one, but I wasn’t all that taken with it: it felt a little bleak and soulless, and the literary pyrotechnics and grand gothic sequences didn’t rescue it. But then I read Excession, one of his M Banks sci-fi novels, set in the Culture; and then I read The Crow Road, his hilarious and moving madcap family-history-murder-mystery set in the Scottish wilds; and I was hooked.

Since then I’ve read literally everything he’s published under M Banks, and most of the stuff under Banks. There are hits and misses, but the misses are never bad and the hits are spectacular. He creates vivid characters; he paints scenes in sparkling detail; he has plots that rollick along like Dan Brown’s are supposed to, but don’t.

And what’s most brilliant, at least for me as a lifelong fan of both sci-fi and “proper” literature, is that he takes the same simple but vital skills – well-drawn characters, clever writing, believable dialogue – from his non-genre novels and applies them to his sci-fi, allied to dizzying imagination and serious knowledge.

Read the entire article after the jump.

Image: Iain Banks. Courtesy of the Guardian.


Blame (Or Hug) Martin Cooper

Martin Cooper. You may not know that name, but you and a fair proportion of the world’s 7 billion inhabitants have surely held or dropped or prodded or cursed his offspring.

You see, forty years ago Martin Cooper used his baby to make the first public mobile phone call. Martin Cooper invented the cell phone.

From the Guardian:

It is 40 years this week since the first public mobile phone call. On 3 April, 1973, Martin Cooper, a pioneering inventor working for Motorola in New York, called a rival engineer from the pavement of Sixth Avenue to brag and was met with a stunned, defeated silence. The race to make the first portable phone had been won. The Pandora’s box containing txt-speak, pocket-dials and pig-hating suicidal birds was open.

Many people at Motorola, however, felt mobile phones would never be a mass-market consumer product. They wanted the firm to focus on business carphones. But Cooper and his team persisted. Ten years after that first boastful phonecall they brought the portable phone to market, at a retail price of around $4,000.

Thirty years on, the number of mobile phone subscribers worldwide is estimated at six and a half billion. And Angry Birds games have been downloaded 1.7bn times.

This is the story of the mobile phone in 40 facts:

1 That first portable phone was called a DynaTAC. The original model had 35 minutes of battery life and weighed one kilogram.

2 Several prototypes of the DynaTAC were created just 90 days after Cooper had first suggested the idea. He held a competition among Motorola engineers from various departments to design it and ended up choosing “the least glamorous”.

3 The DynaTAC’s weight was reduced to 794g before it came to market. It was still heavy enough to beat someone to death with, although this fact was never used as a selling point.

4 Nonetheless, people cottoned on. DynaTAC became the phone of choice for fictional psychopaths, including Wall Street’s Gordon Gekko, American Psycho’s Patrick Bateman and Saved by the Bell’s Zack Morris.

5 The UK’s first public mobile phone call was made by comedian Ernie Wise in 1985 from St Katharine dock to the Vodafone head offices over a curry house in Newbury.

6 Vodafone’s 1985 monopoly of the UK mobile market lasted just nine days before Cellnet (now O2) launched its rival service. A Vodafone spokesperson was probably all like: “Aw, shucks!”

7 Cellnet and Vodafone were the only UK mobile providers until 1993.

8 It took Vodafone just less than nine years to reach the one million customers mark. They reached two million just 18 months later.

9 The first smartphone was IBM’s Simon, which debuted at the Wireless World Conference in 1993. It had an early LCD touchscreen and also functioned as an email device, electronic pager, calendar, address book and calculator.

10 The first cameraphone was created by French entrepreneur Philippe Kahn. He took the first photograph with a mobile phone, of his newborn daughter Sophie, on 11 June, 1997.

Read the entire article after the jump.

Image: Dr. Martin Cooper, the inventor of the cell phone, with DynaTAC prototype from 1973 (in the year 2007). Courtesy of Wikipedia.


The Benefits of Human Stupidity

Human intelligence is a wonderful thing. At both the individual and collective level it drives our complex communication, our fundamental discoveries and inventions, and impressive and accelerating progress. Intelligence allows us to innovate, to design, to build; and it underlies our superior capacity, over other animals, for empathy, altruism, art, and social and cultural evolution. Yet, despite our intellectual abilities and seemingly limitless potential, we humans still do lots of stupid things. Why is this?

From New Scientist:

“EARTH has its boundaries, but human stupidity is limitless,” wrote Gustave Flaubert. He was almost unhinged by the fact. Colourful fulminations about his fatuous peers filled his many letters to Louise Colet, the French poet who inspired his novel Madame Bovary. He saw stupidity everywhere, from the gossip of middle-class busybodies to the lectures of academics. Not even Voltaire escaped his critical eye. Consumed by this obsession, he devoted his final years to collecting thousands of examples for a kind of encyclopedia of stupidity. He died before his magnum opus was complete, and some attribute his sudden death, aged 58, to the frustration of researching the book.

Documenting the extent of human stupidity may itself seem a fool’s errand, which could explain why studies of human intellect have tended to focus on the high end of the intelligence spectrum. And yet, the sheer breadth of that spectrum raises many intriguing questions. If being smart is such an overwhelming advantage, for instance, why aren’t we all uniformly intelligent? Or are there drawbacks to being clever that sometimes give slower thinkers the upper hand? And why are even the smartest people prone to – well, stupidity?

It turns out that our usual measures of intelligence – particularly IQ – have very little to do with the kind of irrational, illogical behaviours that so enraged Flaubert. You really can be highly intelligent, and at the same time very stupid. Understanding the factors that lead clever people to make bad decisions is beginning to shed light on many of society’s biggest catastrophes, including the recent economic crisis. More intriguingly, the latest research may suggest ways to evade a condition that can plague us all.

The idea that intelligence and stupidity are simply opposing ends of a single spectrum is a surprisingly modern one. The Renaissance theologian Erasmus painted Folly – or Stultitia in Latin – as a distinct entity in her own right, descended from the god of wealth and the nymph of youth; others saw it as a combination of vanity, stubbornness and imitation. It was only in the middle of the 18th century that stupidity became conflated with mediocre intelligence, says Matthijs van Boxsel, a Dutch historian who has written many books about stupidity. “Around that time, the bourgeoisie rose to power, and reason became a new norm with the Enlightenment,” he says. “That put every man in charge of his own fate.”

Modern attempts to study variations in human ability tended to focus on IQ tests that put a single number on someone’s mental capacity. They are perhaps best recognised as a measure of abstract reasoning, says psychologist Richard Nisbett at the University of Michigan in Ann Arbor. “If you have an IQ of 120, calculus is easy. If it’s 100, you can learn it but you’ll have to be motivated to put in a lot of work. If your IQ is 70, you have no chance of grasping calculus.” The measure seems to predict academic and professional success.

Various factors will determine where you lie on the IQ scale. Possibly a third of the variation in our intelligence is down to the environment in which we grow up – nutrition and education, for example. Genes, meanwhile, contribute more than 40 per cent of the differences between two people.

These differences may manifest themselves in our brain’s wiring. Smarter brains seem to have more efficient networks of connections between neurons. That may determine how well someone is able to use their short-term “working” memory to link disparate ideas and quickly access problem-solving strategies, says Jennie Ferrell, a psychologist at the University of the West of England in Bristol. “Those neural connections are the biological basis for making efficient mental connections.”

This variation in intelligence has led some to wonder whether superior brain power comes at a cost – otherwise, why haven’t we all evolved to be geniuses? Unfortunately, evidence is in short supply. For instance, some proposed that depression may be more common among more intelligent people, leading to higher suicide rates, but no studies have managed to support the idea. One of the only studies to report a downside to intelligence found that soldiers with higher IQs were more likely to die during the second world war. The effect was slight, however, and other factors might have skewed the data.

Intellectual wasteland

Alternatively, the variation in our intelligence may have arisen from a process called “genetic drift”, after human civilisation eased the challenges driving the evolution of our brains. Gerald Crabtree at Stanford University in California is one of the leading proponents of this idea. He points out that our intelligence depends on around 2000 to 5000 constantly mutating genes. In the distant past, people whose mutations had slowed their intellect would not have survived to pass on their genes; but Crabtree suggests that as human societies became more collaborative, slower thinkers were able to piggyback on the success of those with higher intellect. In fact, he says, someone plucked from 1000 BC and placed in modern society would be “among the brightest and most intellectually alive of our colleagues and companions” (Trends in Genetics, vol 29, p 1).

This theory is often called the “idiocracy” hypothesis, after the eponymous film, which imagines a future in which the social safety net has created an intellectual wasteland. Although it has some supporters, the evidence is shaky. We can’t easily estimate the intelligence of our distant ancestors, and the average IQ has in fact risen slightly in the immediate past. At the very least, “this disproves the fear that less intelligent people have more children and therefore the national intelligence will fall”, says psychologist Alan Baddeley at the University of York, UK.

In any case, such theories on the evolution of intelligence may need a radical rethink in the light of recent developments, which have led many to speculate that there are more dimensions to human thinking than IQ measures. Critics have long pointed out that IQ scores can easily be skewed by factors such as dyslexia, education and culture. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux Indian,” says Nisbett. Additionally, people with scores as low as 80 can still speak multiple languages and even, in the case of one British man, engage in complex financial fraud. Conversely, high IQ is no guarantee that a person will act rationally – think of the brilliant physicists who insist that climate change is a hoax.

It was this inability to weigh up evidence and make sound decisions that so infuriated Flaubert. Unlike the French writer, however, many scientists avoid talking about stupidity per se – “the term is unscientific”, says Baddeley. However, Flaubert’s understanding that profound lapses in logic can plague the brightest minds is now getting attention. “There are intelligent people who are stupid,” says Dylan Evans, a psychologist and author who studies emotion and intelligence.

Read the entire article after the jump.


Next Up: Apple TV

Robert Hof argues that the time is ripe for Steve Jobs’ corporate legacy to reinvent the TV. Apple transformed the personal computer industry, the mobile phone market and the music business. Clearly the company has all the components in place to assemble another innovation.

From Technology Review:

Steve Jobs couldn’t hide his frustration. Asked at a technology conference in 2010 whether Apple might finally turn its attention to television, he launched into an exasperated critique of TV. Cable and satellite TV companies make cheap, primitive set-top boxes that “squash any opportunity for innovation,” he fumed. Viewers are stuck with “a table full of remotes, a cluster full of boxes, a bunch of different [interfaces].” It was the kind of technological mess that cried out for Apple to clean it up with an elegant product. But Jobs professed to have no idea how his company could transform the TV.

Scarcely a year later, however, he sounded far more confident. Before he died on October 5, 2011, he told his biographer, ­Walter Isaacson, that Apple wanted to create an “integrated television set that is completely easy to use.” It would sync with other devices and Apple’s iCloud online storage service and provide “the simplest user interface you could imagine.” He added, tantalizingly, “I finally cracked it.”

Precisely what he cracked remains hidden behind Apple’s shroud of secrecy. Apple has had only one television-related product—the black, hockey-puck-size Apple TV device, which streams shows and movies to a TV. For years, Jobs and Tim Cook, his successor as CEO, called that device a “hobby.” But under the guise of this hobby, Apple has been steadily building hardware, software, and services that make it easier for people to watch shows and movies in whatever way they wish. Already, the company has more of the pieces for a compelling next-generation TV experience than people might realize.

And as Apple showed with the iPad and iPhone, it doesn’t have to invent every aspect of a product in order for it to be disruptive. Instead, it has become the leader in consumer electronics by combining existing technologies with some of its own and packaging them into products that are simple to use. TV seems to be at that moment now. People crave something better than the fusty, rigidly controlled cable TV experience, and indeed, the technologies exist for something better to come along. Speedier broadband connections, mobile TV apps, and the availability of some shows and movies on demand from Netflix and Hulu have made it easier to watch TV anytime, anywhere. The number of U.S. cable and satellite subscribers has been flat since 2010.

Apple would not comment. But it’s clear from two dozen interviews with people close to Apple suppliers and partners, and with people Apple has spoken to in the TV industry, that television—the medium and the device—is indeed its next target.

The biggest question is not whether Apple will take on TV, but when. The company must eventually come up with another breakthrough product; with annual revenue already topping $156 billion, it needs something very big to keep growth humming after the next year or two of the iPad boom. Walter Price, managing director of Allianz Global Investors, which holds nearly $1 billion in Apple shares, met with Apple executives in September and came away convinced that it would be years before Apple could get a significant share of the $345 billion worldwide market for televisions. But at $1,000, the bare minimum most analysts expect an Apple television to cost, such a product would eventually be a significant revenue generator. “You sell 10 million of those, it can move the needle,” he says.

Cook, who replaced Jobs as CEO in August 2011, could use a boost, too. He has presided over missteps such as a flawed iPhone mapping app that led to a rare apology and a major management departure. Seen as a peerless operations whiz, Cook still needs a revolutionary product of his own to cement his place next to Saint Steve. Corey Ferengul, a principal at the digital media investment firm Apace Equities and a former executive at Rovi, which provided TV programming guide services to Apple and other companies, says an Apple TV will be that product: “This will be Tim Cook’s first ‘holy shit’ innovation.”

What Apple Already Has

Rapt attention would be paid to whatever round-edged piece of brushed-aluminum hardware Apple produced, but a television set itself would probably be the least important piece of its television strategy. In fact, many well-connected people in technology and television, from TV and online video maven Mark Cuban to venture capitalist and former Apple executive Jean-Louis Gassée, can’t figure out why Apple would even bother with the machines.

For one thing, selling televisions is a low-margin business. No one subsidizes the purchase of a TV the way your wireless carrier does with the iPhone (an iPhone might cost you $200, but Apple’s revenue from it is much higher than that). TVs are also huge and difficult to stock in stores, let alone ship to homes. Most of all, the upgrade cycle that powers Apple’s iPhone and iPad profit engine doesn’t apply to television sets—no one replaces them every year or two.

But even though TVs don’t line up neatly with the way Apple makes money on other hardware, they are likely to remain central to people’s ever-increasing consumption of video, games, and other forms of media. Apple at least initially could sell the screens as a kind of Trojan horse—a way of entering or expanding its role in lines of business that are more profitable, such as selling movies, shows, games, and other Apple hardware.

Read the entire article following the jump.

Image courtesy of Apple, Inc.


Mars: 2030

Dennis Tito, the world’s first space tourist, would like to send a private space mission to Mars in 2018. He has pots of money and has founded a non-profit to gather the partners and donors needed to get the mission off the ground. NASA has other plans. The U.S. space agency has been tasked by the current administration with planning a human mission to Mars for the mid-2030s. However, given budgetary issues, fiscal cliffs, and possible debt and deficit reduction, few believe it will actually happen. Still, many in NASA, and lay explorers at heart, continue to hope.

From Technology Review:

In August, NASA used a series of precise and daring maneuvers to put a one-ton robotic rover named Curiosity on Mars. A capsule containing the rover parachuted through the Martian atmosphere and then unfurled a “sky crane” that lowered Curiosity safely into place. It was a thrilling moment: here were people communicating with a large and sophisticated piece of equipment 150 million miles away as it began to carry out experiments that should enhance our understanding of whether the planet has or has ever had life. So when I visited NASA’s Johnson Space Center in Houston a few days later, I expected to find people still basking in the afterglow. To be sure, the Houston center, where astronauts get directions from Mission Control, didn’t play the leading role in Curiosity. That project was centered at the Jet Propulsion Laboratory, which Caltech manages for NASA in Pasadena. Nonetheless, the landing had been a remarkable event for the entire U.S. space program. And yet I found that Mars wasn’t an entirely happy subject in Houston—especially among people who believe that humans, not only robots, should be exploring there.

In his long but narrow office in the main building of the sprawling Houston center, Bret Drake has compiled an outline explaining how six astronauts could be sent on six-month flights to Mars and what they would do there for a year and a half before their six-month flights home. Drake, 51, has been thinking about this since 1988, when he began working on what he calls the “exploration beyond low Earth orbit dream.” Back then, he expected that people would return to the moon in 2004 and be on the brink of traveling to Mars by now. That prospect soon got ruled out, but Drake pressed on: in the late 1990s he was crafting plans for human Mars missions that could take place around 2018. Today the official goal is for it to happen in the 2030s, but funding cuts have inhibited NASA’s ability to develop many of the technologies that would be required. In fact, progress was halted entirely in 2008 when Congress, in an effort to impose frugality on NASA, prohibited it from using any money to further the human exploration of Mars. “Mars was a four-letter dirty word,” laments Drake, who is deputy chief architect for NASA’s human spaceflight architecture team. Even though that rule was rescinded after a year, Drake knows NASA could perpetually remain 20 years away from a manned Mars mission.

If putting men on the moon signified the extraordinary things that technology made possible in the middle of the 20th century, sending humans to Mars would be the 21st-century version. The flight would be much more arduous and isolating for the astronauts: whereas the Apollo crews who went to the moon were never more than three days from home and could still make out its familiar features, a Mars crew would see Earth shrink into just one of billions of twinkles in space. Once they landed, the astronauts would have to survive in a freezing, windswept world with unbreathable air and 38 percent of Earth’s gravity. But if Drake is right, we can make this journey happen. He and other NASA engineers know what will be required, from a landing vehicle that could get humans through the Martian atmosphere to systems for feeding them, sheltering them, and shuttling them around once they’re there.

The problem facing Drake and other advocates for human exploration of Mars is that the benefits are mostly intangible. Some of the justifications that have been floated—including the idea that people should colonize the planet to improve humanity’s odds of survival—don’t stand up to an economic analysis. Until we have actually tried to keep people alive there, permanent human settlements on Mars will remain a figment of science fiction.

A better argument is that exploring Mars might have scientific benefits, because basic questions about the planet remain unanswered. “We know Mars was once wet and warm,” Drake says. “So did life ever arise there? If so, is it any different than life here on Earth? Where did it all go? What happened to Mars? Why did it become so cold and dry? How can we learn from that and what it may mean for Earth?” But right now Curiosity is exploring these very questions, firing lasers at rocks to determine their composition and hunting for signs of microbial life. Because of such robotic missions, our knowledge of Mars has improved so much in the past 15 years that it’s become harder to make the case for sending humans. People are far more adaptable and ingenious than robots and surely would find things drones can’t, but sending them would jack up the cost of a mission exponentially. “There’s just no real way to justify human exploration solely on the basis of science,” says Cynthia Phillips, a senior research scientist at the SETI Institute, which hunts for evidence of life elsewhere in the universe. “For the cost of sending one human to Mars, you could send an entire flotilla of robots.”

And yet human exploration of Mars has a powerful allure. No planet in our solar system is more like Earth. Our neighbor has rhythms we recognize as our own, with days slightly longer than 24 hours and polar ice caps that grow in the winter and shrink in the summer. Human explorers on Mars would profoundly expand the boundaries of human experience—providing, in the minds of many space advocates, an immeasurable benefit beyond science. “There have always been explorers in our society,” says Phillips. “If space exploration is only robots, you lose something, and you lose something really valuable.”

The Apollo Hangover

Mars was proposed as a place to explore even before the space program existed. In the 1950s, scientists such as Wernher von Braun (who had developed Nazi Germany’s combat rockets and later oversaw work on missiles and rockets for the United States) argued in magazines and on TV that as space became mankind’s next frontier, Mars would be an obvious point of interest. “Will man ever go to Mars?” von Braun wrote in Collier’s magazine in 1954. “I am sure he will—but it will be a century or more before he’s ready.”

Read the entire article after the jump.

Image: Artist’s conception of the Mars Excursion Module (MEM) proposed in a NASA study in 1964. Courtesy of Dixon, Franklin P. Proceedings of the Symposium on Manned Planetary Missions: 1963/1964, Aeronutronic Division of Philco Corp.
