All posts by Mike

Technology and the Exploitation of Children

Many herald the forward motion of technological innovation as progress. In many cases the momentum does genuinely seem to carry us towards a better place: it broadly alleviates pain and suffering, and it delivers more and better nutrition to our bodies and our minds. Yet for all the positive steps, this progress is often accompanied by retrograde, and frequently paradoxical, leaps. Particularly disturbing is the relative ease with which technology allows us, the responsible adults, to sexualise and exploit children. This is certainly not a new phenomenon, but our technical prowess makes the problem far more pervasive. A case in point: the Instagram beauty pageant. Move over, Honey Boo-Boo.

From the Washington Post:

The photo-sharing site Instagram has become wildly popular as a way to trade pictures of pets and friends. But a new trend on the site is making parents cringe: beauty pageants, in which thousands of young girls — many appearing no older than 12 or 13 — submit photographs of themselves for others to judge.

In one case, the mug shots of four girls, middle-school-age or younger, have been pitted against each other. One is all dimples, wearing a hair bow and a big, toothy grin. Another is trying out a pensive, sultry look.

Any of Instagram’s 30 million users can vote on the appearance of the girls in a comments section of the post. Once a girl’s photo receives a certain number of negative remarks, the pageant host, who can remain anonymous, can update it with a big red X or the word “OUT” scratched across her face.

“U.G.L.Y,” wrote one user about a girl, who submitted her photo to one of the pageants identified on Instagram by the keyword “#beautycontest.”

The phenomenon has sparked concern among parents and child safety advocates who fear that young girls are making themselves vulnerable to adult strangers and participating in often cruel social interactions at a sensitive period of development.

But the contests are the latest example of how technology is pervading the lives of children in ways that parents and teachers struggle to understand or monitor.

“What started out as just a photo-sharing site has become something really pernicious for young girls,” said Rachel Simmons, author of “Odd Girl Out” and a speaker on youth and girls. “What happened was, like most social media experiences, girls co-opted it and imposed their social life on it to compete for attention and in a very exaggerated way.”

It’s difficult to track when the pageants began and who initially set them up. A keyword search of #beautycontest turned up 8,757 posts, while #rateme had 27,593 photo posts. Experts say those two terms represent only a fraction of the activity. Contests are also appearing on other social media sites, including Tumblr and Snapchat — mobile apps that have grown in popularity among youth.

Facebook, which bought Instagram last year, declined to comment. The company has a policy of not allowing anyone under the age of 13 to create an account or share photos on Instagram. But Facebook has been criticized for allowing pre-teens to get around the rule — two years ago, Consumer Reports estimated their presence on Facebook was 7.5 million. (Washington Post Co. Chairman Donald Graham sits on Facebook’s board of directors.)

Read the entire article after the jump.

Image: Instagram. Courtesy of Wired.

 

Shedding Light on Dark Matter

Scientists are cautiously optimistic that results from a particle experiment circling the Earth aboard the International Space Station (ISS) hint at the existence of dark matter.

From Symmetry:

The space-based Alpha Magnetic Spectrometer experiment could be building toward evidence of dark matter, judging by its first result.

The AMS detector does its work more than 200 miles above Earth, latched to the side of the International Space Station. It detects charged cosmic rays, high-energy particles that for the most part originate outside our solar system.

The experiment’s first result, released today, showed an excess of antimatter particles—over the number expected to come from cosmic-ray collisions—in a certain energy range.

There are two competing explanations for this excess. Extra antimatter particles called positrons could be forming in collisions between unseen dark-matter particles and their antiparticles in space. Or an astronomical object such as a pulsar could be firing them into our solar system.

Luckily, there are a couple of ways to find out which explanation is correct.

If dark-matter particles are the culprits, the excess of positrons should sink suddenly above a certain energy. But if a pulsar is responsible, at higher energies the excess will only gradually disappear.

“The way they drop off tells you everything,” said AMS Spokesperson and Nobel laureate Sam Ting, in today’s presentation at CERN, the European center for particle physics.

The AMS result, to be published in Physical Review Letters on April 5, includes data from the energy range between 0.5 and 350 GeV. A graph of the flux of positrons over the flux of electrons and positrons takes the shape of a valley, dipping in the energy range between 0.5 and 10 GeV and then increasing steadily between 10 and 250 GeV. After that point, it begins to dip again—but the graph cuts off just before one can tell whether this is the great drop-off expected in dark matter models or the gradual fade-out expected in pulsar models. This confirms previous results from the PAMELA experiment, with greater precision.
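As an aside, the quantity being plotted is easy to state (the notation below is mine, not the paper's): the positron fraction at energy E is the positron flux divided by the combined electron-plus-positron flux,

f(E) = \frac{\Phi_{e^+}(E)}{\Phi_{e^-}(E) + \Phi_{e^+}(E)}, \qquad 0.5\ \mathrm{GeV} \le E \le 350\ \mathrm{GeV}.

The two explanations then differ in the behaviour of f(E) at the top of the energy range: dark-matter annihilation should produce a sharp drop-off, while a pulsar should produce a gradual fade.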

Ting smiled slightly while presenting this cliffhanger, pointing to the empty edge of the graph. “In here, what happens is of great interest,” he said.

“We, of course, have a feeling what is happening,” he said. “But probably it is too early to discuss that.”

Ting kept mum about any data collected so far above that energy, telling curious audience members to wait until the experiment had enough information to present a statistically significant result.

“I’ve been working at CERN for many years. I’ve never made a mistake on an experiment,” he said. “And this is a very difficult experiment.”

A second way to determine the origin of the excess of positrons is to consider where they’re coming from. If positrons are hitting the detector from all directions at random, they could be coming from something as diffuse as dark matter. But if they are arriving from one preferred direction, they might be coming from a pulsar.

So far, the result leans toward the dark-matter explanation, with positrons coming from all directions. But AMS scientists will need to collect more data to say this for certain.

Read the entire article following the jump.

Image: Alpha Magnetic Spectrometer (AMS) detector latched on to the International Space Station. Courtesy of NASA / AMS-02.

The Filter Bubble Eats the Book World

Last week Amazon purchased Goodreads, the online book review site. Since 2007 Goodreads has grown to become home to over 16 million members who share a passion for discovering and sharing great literature. Now many are concerned that Amazon's acquisition represents another step towards a monolithic and monopolistic enterprise that controls vast swathes of the market. While Amazon's innovation has upended the bricks-and-mortar worlds of publishing and retailing, its increasingly dominant market power raises serious concerns over access, distribution and choice. This is another worrying example of the so-called filter bubble — where increasingly edited selections and personalized recommendations act to limit and dumb down content.

From the Guardian:

“Truly devastating” for some authors but “like finding out my mom is marrying that cool dude next door that I’ve been palling around with” for another, Amazon’s announcement late last week that it was buying the hugely popular reader review site Goodreads has sent shockwaves through the book industry.

The acquisition, terms of which Amazon.com did not reveal, will close in the second quarter of this year. Goodreads, founded in 2007, has more than 16m members, who have added more than four books per second to their “want to read” shelves over the past 90 days, according to Amazon. The internet retailer’s vice president of Kindle content, Russ Grandinetti, said the two sites “share a passion for reinventing reading”.

“Goodreads has helped change how we discover and discuss books and, with Kindle, Amazon has helped expand reading around the world. In addition, both Amazon and Goodreads have helped thousands of authors reach a wider audience and make a better living at their craft. Together we intend to build many new ways to delight readers and authors alike,” said Grandinetti, announcing the buy. Goodreads co-founder Otis Chandler said the deal with Amazon meant “we’re now going to be able to move faster in bringing the Goodreads experience to millions of readers around the world”, adding on his blog that “we have no plans to change the Goodreads experience and Goodreads will continue to be the wonderful community we all cherish”.

But despite Chandler’s reassurances, many readers and authors reacted negatively to the news. American writers’ organisation the Authors’ Guild called the acquisition a “truly devastating act of vertical integration” which meant that “Amazon’s control of online bookselling approaches the insurmountable”. Bestselling legal thriller author Scott Turow, president of the Guild, said it was “a textbook example of how modern internet monopolies can be built”.

“The key is to eliminate or absorb competitors before they pose a serious threat,” said Turow. “With its 16 million subscribers, Goodreads could easily have become a competing online bookseller, or played a role in directing buyers to a site other than Amazon. Instead, Amazon has scuttled that potential and also squelched what was fast becoming the go-to venue for online reviews, attracting far more attention than Amazon for those seeking independent assessment and discussion of books. As those in advertising have long known, the key to driving sales is controlling information.”

Turow was joined in his concerns by members of Goodreads, many of whom expressed their fears about what the deal would mean on Chandler’s blog. “I have to admit I’m not entirely thrilled by this development,” wrote one of the more level-headed commenters. “As a general rule I like Amazon, but unless they take an entirely 100% hands-off attitude toward Goodreads I find it hard to believe this will be in the best interest for the readers. There are simply too many ways they can interfere with the neutral Goodreads experience and/or try to profit from the strictly volunteer efforts of Goodreads users.”

But not all authors were against the move. Hugh Howey, author of the smash hit dystopian thriller Wool – which took off after he self-published it via Amazon – said it was “like finding out my mom is marrying that cool dude next door that I’ve been palling around with”. While Howey predicted “a lot of hand-wringing over the acquisition”, he said there were “so many ways this can be good for all involved. I’m still trying to think of a way it could suck.”

Read the entire article following the jump.

Image: Amazon.com screen. Courtesy of New York Times.

Iain (M.) Banks

Where is the technology of the Culture when it’s most needed? Nothing more to add.

From the Guardian:

In Iain M Banks’s finest creation, the universe of the Culture, death is largely optional. It’s an option most people take in the end: they take it after three or four centuries, after living on a suitably wide variety of planets and in a suitably wide variety of bodies, and after a life of hedonism appropriate to the anarcho-communist Age of Plenty galactic civilisation in which they live; they take it in partial, reversible forms. But they take it. It’s an option.

Sadly, and obviously, that’s not true for us. Banks himself has released a statement on his website, saying that he has terminal cancer. He tells us as much with his usual eye for technical detail and stark impact:

I have cancer. It started in my gall bladder, has infected both lobes of my liver and probably also my pancreas and some lymph nodes, plus one tumour is massed around a group of major blood vessels in the same volume, effectively ruling out any chance of surgery to remove the tumours… The bottom line, now, I’m afraid, is that as a late stage gall bladder cancer patient, I’m expected to live for ‘several months’ and it’s extremely unlikely I’ll live beyond a year.

So there you have it.

Anything I write about Banks and his work, both as Iain Banks and Iain M Banks (for the uninitiated, Iain Banks is the name he publishes his non-genre novels under; Iain M Banks is for his sci-fi stuff), will ultimately be about me, I realise. I can’t pretend to say What His Work Meant for Literature or for Sci-Fi, because I don’t know what it meant; I can’t speak about him as a human being, beyond what I thought I could detect of his personality through his work (humane and witty and fascinated by the new, for the record), because I haven’t met him.

With that in mind, I just wanted to talk a bit about why I love his books, why I think he is one – or two, really – of our finest living writers, and how his work has had probably more impact on me than any other fiction writer.

I first read The Wasp Factory in about 1996, when my mum, keen to get me reading pretty much anything that wasn’t Terry Pratchett, heard of this “enfant explosif” of Scottish literature. It’s a slightly tricky admission to make in a hagiographical piece like this one, but I wasn’t all that taken with it: it felt a little bleak and soulless, and the literary pyrotechnics and grand gothic sequences didn’t rescue it. But then I read Excession, one of his M Banks sci-fi novels, set in the Culture; and then I read The Crow Road, his hilarious and moving madcap family-history-murder-mystery set in the Scottish wilds; and I was hooked.

Since then I’ve read literally everything he’s published under M Banks, and most of the stuff under Banks. There are hits and misses, but the misses are never bad and the hits are spectacular. He creates vivid characters; he paints scenes in sparkling detail; he has plots that rollick along like Dan Brown’s are supposed to, but don’t.

And what’s most brilliant, at least for me as a lifelong fan of both sci-fi and “proper” literature, is that he takes the same simple but vital skills – well-drawn characters, clever writing, believable dialogue – from his non-genre novels and applies them to his sci-fi, allied to dizzying imagination and serious knowledge.

Read the entire article after the jump.

Image: Iain Banks. Courtesy of the Guardian.

Blame (Or Hug) Martin Cooper

Martin Cooper. You may not know that name, but you and a fair proportion of the world’s 7 billion inhabitants have surely held or dropped or prodded or cursed his offspring.

You see, forty years ago Martin Cooper used his baby to make the first public mobile phone call. Martin Cooper invented the cell phone.

From the Guardian:

It is 40 years this week since the first public mobile phone call. On 3 April, 1973, Martin Cooper, a pioneering inventor working for Motorola in New York, called a rival engineer from the pavement of Sixth Avenue to brag and was met with a stunned, defeated silence. The race to make the first portable phone had been won. The Pandora’s box containing txt-speak, pocket-dials and pig-hating suicidal birds was open.

Many people at Motorola, however, felt mobile phones would never be a mass-market consumer product. They wanted the firm to focus on business carphones. But Cooper and his team persisted. Ten years after that first boastful phonecall they brought the portable phone to market, at a retail price of around $4,000.

Thirty years on, the number of mobile phone subscribers worldwide is estimated at six and a half billion. And Angry Birds games have been downloaded 1.7bn times.

This is the story of the mobile phone in 40 facts:

1 That first portable phone was called a DynaTAC. The original model had 35 minutes of battery life and weighed one kilogram.

2 Several prototypes of the DynaTAC were created just 90 days after Cooper had first suggested the idea. He held a competition among Motorola engineers from various departments to design it and ended up choosing “the least glamorous”.

3 The DynaTAC’s weight was reduced to 794g before it came to market. It was still heavy enough to beat someone to death with, although this fact was never used as a selling point.

4 Nonetheless, people cottoned on. DynaTAC became the phone of choice for fictional psychopaths, including Wall Street’s Gordon Gekko, American Psycho’s Patrick Bateman and Saved by the Bell’s Zack Morris.

5 The UK’s first public mobile phone call was made by comedian Ernie Wise in 1985 from St Katharine dock to the Vodafone head offices over a curry house in Newbury.

6 Vodafone’s 1985 monopoly of the UK mobile market lasted just nine days before Cellnet (now O2) launched its rival service. A Vodafone spokesperson was probably all like: “Aw, shucks!”

7 Cellnet and Vodafone were the only UK mobile providers until 1993.

8 It took Vodafone just less than nine years to reach the one million customers mark. They reached two million just 18 months later.

9 The first smartphone was IBM’s Simon, which debuted at the Wireless World Conference in 1993. It had an early LCD touchscreen and also functioned as an email device, electronic pager, calendar, address book and calculator.

10 The first cameraphone was created by French entrepreneur Philippe Kahn. He took the first photograph with a mobile phone, of his newborn daughter Sophie, on 11 June, 1997.

Read the entire article after the jump.

Image: Dr. Martin Cooper, the inventor of the cell phone, photographed in 2007 with a DynaTAC prototype from 1973. Courtesy of Wikipedia.

The Benefits of Human Stupidity

Human intelligence is a wonderful thing. At both the individual and the collective level it drives our complex communication, our fundamental discoveries and inventions, and our impressive and accelerating progress. Intelligence allows us to innovate, to design and to build; and it underlies our superior capacity, relative to other animals, for empathy, altruism, art, and social and cultural evolution. Yet, despite our intellectual abilities and seemingly limitless potential, we humans still do lots of stupid things. Why is this?

From New Scientist:

“EARTH has its boundaries, but human stupidity is limitless,” wrote Gustave Flaubert. He was almost unhinged by the fact. Colourful fulminations about his fatuous peers filled his many letters to Louise Colet, the French poet who inspired his novel Madame Bovary. He saw stupidity everywhere, from the gossip of middle-class busybodies to the lectures of academics. Not even Voltaire escaped his critical eye. Consumed by this obsession, he devoted his final years to collecting thousands of examples for a kind of encyclopedia of stupidity. He died before his magnum opus was complete, and some attribute his sudden death, aged 58, to the frustration of researching the book.

Documenting the extent of human stupidity may itself seem a fool’s errand, which could explain why studies of human intellect have tended to focus on the high end of the intelligence spectrum. And yet, the sheer breadth of that spectrum raises many intriguing questions. If being smart is such an overwhelming advantage, for instance, why aren’t we all uniformly intelligent? Or are there drawbacks to being clever that sometimes give slower thinkers the upper hand? And why are even the smartest people prone to – well, stupidity?

It turns out that our usual measures of intelligence – particularly IQ – have very little to do with the kind of irrational, illogical behaviours that so enraged Flaubert. You really can be highly intelligent, and at the same time very stupid. Understanding the factors that lead clever people to make bad decisions is beginning to shed light on many of society’s biggest catastrophes, including the recent economic crisis. More intriguingly, the latest research may suggest ways to evade a condition that can plague us all.

The idea that intelligence and stupidity are simply opposing ends of a single spectrum is a surprisingly modern one. The Renaissance theologian Erasmus painted Folly – or Stultitia in Latin – as a distinct entity in her own right, descended from the god of wealth and the nymph of youth; others saw it as a combination of vanity, stubbornness and imitation. It was only in the middle of the 18th century that stupidity became conflated with mediocre intelligence, says Matthijs van Boxsel, a Dutch historian who has written many books about stupidity. “Around that time, the bourgeoisie rose to power, and reason became a new norm with the Enlightenment,” he says. “That put every man in charge of his own fate.”

Modern attempts to study variations in human ability tended to focus on IQ tests that put a single number on someone’s mental capacity. They are perhaps best recognised as a measure of abstract reasoning, says psychologist Richard Nisbett at the University of Michigan in Ann Arbor. “If you have an IQ of 120, calculus is easy. If it’s 100, you can learn it but you’ll have to be motivated to put in a lot of work. If your IQ is 70, you have no chance of grasping calculus.” The measure seems to predict academic and professional success.

Various factors will determine where you lie on the IQ scale. Possibly a third of the variation in our intelligence is down to the environment in which we grow up – nutrition and education, for example. Genes, meanwhile, contribute more than 40 per cent of the differences between two people.

These differences may manifest themselves in our brain’s wiring. Smarter brains seem to have more efficient networks of connections between neurons. That may determine how well someone is able to use their short-term “working” memory to link disparate ideas and quickly access problem-solving strategies, says Jennie Ferrell, a psychologist at the University of the West of England in Bristol. “Those neural connections are the biological basis for making efficient mental connections.”

This variation in intelligence has led some to wonder whether superior brain power comes at a cost – otherwise, why haven’t we all evolved to be geniuses? Unfortunately, evidence is in short supply. For instance, some proposed that depression may be more common among more intelligent people, leading to higher suicide rates, but no studies have managed to support the idea. One of the only studies to report a downside to intelligence found that soldiers with higher IQs were more likely to die during the second world war. The effect was slight, however, and other factors might have skewed the data.

Intellectual wasteland

Alternatively, the variation in our intelligence may have arisen from a process called “genetic drift”, after human civilisation eased the challenges driving the evolution of our brains. Gerald Crabtree at Stanford University in California is one of the leading proponents of this idea. He points out that our intelligence depends on around 2000 to 5000 constantly mutating genes. In the distant past, people whose mutations had slowed their intellect would not have survived to pass on their genes; but Crabtree suggests that as human societies became more collaborative, slower thinkers were able to piggyback on the success of those with higher intellect. In fact, he says, someone plucked from 1000 BC and placed in modern society, would be “among the brightest and most intellectually alive of our colleagues and companions” (Trends in Genetics, vol 29, p 1).

This theory is often called the “idiocracy” hypothesis, after the eponymous film, which imagines a future in which the social safety net has created an intellectual wasteland. Although it has some supporters, the evidence is shaky. We can’t easily estimate the intelligence of our distant ancestors, and the average IQ has in fact risen slightly in the immediate past. At the very least, “this disproves the fear that less intelligent people have more children and therefore the national intelligence will fall”, says psychologist Alan Baddeley at the University of York, UK.

In any case, such theories on the evolution of intelligence may need a radical rethink in the light of recent developments, which have led many to speculate that there are more dimensions to human thinking than IQ measures. Critics have long pointed out that IQ scores can easily be skewed by factors such as dyslexia, education and culture. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux Indian,” says Nisbett. Additionally, people with scores as low as 80 can still speak multiple languages and even, in the case of one British man, engage in complex financial fraud. Conversely, high IQ is no guarantee that a person will act rationally – think of the brilliant physicists who insist that climate change is a hoax.

It was this inability to weigh up evidence and make sound decisions that so infuriated Flaubert. Unlike the French writer, however, many scientists avoid talking about stupidity per se – “the term is unscientific”, says Baddeley. However, Flaubert’s understanding that profound lapses in logic can plague the brightest minds is now getting attention. “There are intelligent people who are stupid,” says Dylan Evans, a psychologist and author who studies emotion and intelligence.

Read the entire article after the jump.

Next Up: Apple TV

Robert Hof argues that the time is ripe for Steve Jobs’ corporate legacy to reinvent the TV. Apple transformed the personal computer industry, the mobile phone market and the music business. Clearly the company has all the components in place to assemble another innovation.

From Technology Review:

Steve Jobs couldn’t hide his frustration. Asked at a technology conference in 2010 whether Apple might finally turn its attention to television, he launched into an exasperated critique of TV. Cable and satellite TV companies make cheap, primitive set-top boxes that “squash any opportunity for innovation,” he fumed. Viewers are stuck with “a table full of remotes, a cluster full of boxes, a bunch of different [interfaces].” It was the kind of technological mess that cried out for Apple to clean it up with an elegant product. But Jobs professed to have no idea how his company could transform the TV.

Scarcely a year later, however, he sounded far more confident. Before he died on October 5, 2011, he told his biographer, ­Walter Isaacson, that Apple wanted to create an “integrated television set that is completely easy to use.” It would sync with other devices and Apple’s iCloud online storage service and provide “the simplest user interface you could imagine.” He added, tantalizingly, “I finally cracked it.”

Precisely what he cracked remains hidden behind Apple’s shroud of secrecy. Apple has had only one television-related product—the black, hockey-puck-size Apple TV device, which streams shows and movies to a TV. For years, Jobs and Tim Cook, his successor as CEO, called that device a “hobby.” But under the guise of this hobby, Apple has been steadily building hardware, software, and services that make it easier for people to watch shows and movies in whatever way they wish. Already, the company has more of the pieces for a compelling next-generation TV experience than people might realize.

And as Apple showed with the iPad and iPhone, it doesn’t have to invent every aspect of a product in order for it to be disruptive. Instead, it has become the leader in consumer electronics by combining existing technologies with some of its own and packaging them into products that are simple to use. TV seems to be at that moment now. People crave something better than the fusty, rigidly controlled cable TV experience, and indeed, the technologies exist for something better to come along. Speedier broadband connections, mobile TV apps, and the availability of some shows and movies on demand from Netflix and Hulu have made it easier to watch TV anytime, anywhere. The number of U.S. cable and satellite subscribers has been flat since 2010.

Apple would not comment. But it’s clear from two dozen interviews with people close to Apple suppliers and partners, and with people Apple has spoken to in the TV industry, that television—the medium and the device—is indeed its next target.

The biggest question is not whether Apple will take on TV, but when. The company must eventually come up with another breakthrough product; with annual revenue already topping $156 billion, it needs something very big to keep growth humming after the next year or two of the iPad boom. Walter Price, managing director of Allianz Global Investors, which holds nearly $1 billion in Apple shares, met with Apple executives in September and came away convinced that it would be years before Apple could get a significant share of the $345 billion worldwide market for televisions. But at $1,000, the bare minimum most analysts expect an Apple television to cost, such a product would eventually be a significant revenue generator. “You sell 10 million of those, it can move the needle,” he says.
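A quick back-of-envelope check on that remark, using only the figures quoted above (the arithmetic is mine, not Price's):

10{,}000{,}000 \text{ units} \times \$1{,}000 \text{ per unit} \approx \$10 \text{ billion},

or roughly 6 per cent of the $156 billion in annual revenue cited earlier; on those numbers, 10 million sets would indeed move the needle.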

Cook, who replaced Jobs as CEO in August 2011, could use a boost, too. He has presided over missteps such as a flawed iPhone mapping app that led to a rare apology and a major management departure. Seen as a peerless operations whiz, Cook still needs a revolutionary product of his own to cement his place next to Saint Steve. Corey Ferengul, a principal at the digital media investment firm Apace Equities and a former executive at Rovi, which provided TV programming guide services to Apple and other companies, says an Apple TV will be that product: “This will be Tim Cook’s first ‘holy shit’ innovation.”

What Apple Already Has

Rapt attention would be paid to whatever round-edged piece of brushed-aluminum hardware Apple produced, but a television set itself would probably be the least important piece of its television strategy. In fact, many well-connected people in technology and television, from TV and online video maven Mark Cuban to venture capitalist and former Apple executive Jean-Louis Gassée, can’t figure out why Apple would even bother with the machines.

For one thing, selling televisions is a low-margin business. No one subsidizes the purchase of a TV the way your wireless carrier does with the iPhone (an iPhone might cost you $200, but Apple’s revenue from it is much higher than that). TVs are also huge and difficult to stock in stores, let alone ship to homes. Most of all, the upgrade cycle that powers Apple’s iPhone and iPad profit engine doesn’t apply to television sets—no one replaces them every year or two.

But even though TVs don’t line up neatly with the way Apple makes money on other hardware, they are likely to remain central to people’s ever-increasing consumption of video, games, and other forms of media. Apple at least initially could sell the screens as a kind of Trojan horse—a way of entering or expanding its role in lines of business that are more profitable, such as selling movies, shows, games, and other Apple hardware.

Read the entire article following the jump.

Image courtesy of Apple, Inc.

Mars: 2030

Dennis Tito, the world’s first space tourist, would like to send a private space mission to Mars in 2018. He has pots of money and has founded a non-profit to gather the partners and donors needed to get the mission off the ground. NASA has other plans. The U.S. space agency has been tasked by the current administration with planning a human mission to Mars for the mid-2030s. However, amid budgetary issues, fiscal cliffs, and possible debt and deficit reduction, few believe it will actually happen. Still, many in NASA, and lay explorers at heart, continue to hope.

From Technology Review:

In August, NASA used a series of precise and daring maneuvers to put a one-ton robotic rover named Curiosity on Mars. A capsule containing the rover parachuted through the Martian atmosphere and then unfurled a “sky crane” that lowered Curiosity safely into place. It was a thrilling moment: here were people communicating with a large and sophisticated piece of equipment 150 million miles away as it began to carry out experiments that should enhance our understanding of whether the planet has or has ever had life. So when I visited NASA’s Johnson Space Center in Houston a few days later, I expected to find people still basking in the afterglow. To be sure, the Houston center, where astronauts get directions from Mission Control, didn’t play the leading role in Curiosity. That project was centered at the Jet Propulsion Laboratory, which Caltech manages for NASA in Pasadena. Nonetheless, the landing had been a remarkable event for the entire U.S. space program. And yet I found that Mars wasn’t an entirely happy subject in Houston—especially among people who believe that humans, not only robots, should be exploring there.

In his long but narrow office in the main building of the sprawling Houston center, Bret Drake has compiled an outline explaining how six astronauts could be sent on six-month flights to Mars and what they would do there for a year and a half before their six-month flights home. Drake, 51, has been thinking about this since 1988, when he began working on what he calls the “exploration beyond low Earth orbit dream.” Back then, he expected that people would return to the moon in 2004 and be on the brink of traveling to Mars by now. That prospect soon got ruled out, but Drake pressed on: in the late 1990s he was crafting plans for human Mars missions that could take place around 2018. Today the official goal is for it to happen in the 2030s, but funding cuts have inhibited NASA’s ability to develop many of the technologies that would be required. In fact, progress was halted entirely in 2008 when Congress, in an effort to impose frugality on NASA, prohibited it from using any money to further the human exploration of Mars. “Mars was a four-letter dirty word,” laments Drake, who is deputy chief architect for NASA’s human spaceflight architecture team. Even though that rule was rescinded after a year, Drake knows NASA could perpetually remain 20 years away from a manned Mars mission.

If putting men on the moon signified the extraordinary things that technology made possible in the middle of the 20th century, sending humans to Mars would be the 21st-century version. The flight would be much more arduous and isolating for the astronauts: whereas the Apollo crews who went to the moon were never more than three days from home and could still make out its familiar features, a Mars crew would see Earth shrink into just one of billions of twinkles in space. Once they landed, the astronauts would have to survive in a freezing, windswept world with unbreathable air and 38 percent of Earth’s gravity. But if Drake is right, we can make this journey happen. He and other NASA engineers know what will be required, from a landing vehicle that could get humans through the Martian atmosphere to systems for feeding them, sheltering them, and shuttling them around once they’re there.

The problem facing Drake and other advocates for human exploration of Mars is that the benefits are mostly intangible. Some of the justifications that have been floated—including the idea that people should colonize the planet to improve humanity’s odds of survival—don’t stand up to an economic analysis. Until we have actually tried to keep people alive there, permanent human settlements on Mars will remain a figment of science fiction.

A better argument is that exploring Mars might have scientific benefits, because basic questions about the planet remain unanswered. “We know Mars was once wet and warm,” Drake says. “So did life ever arise there? If so, is it any different than life here on Earth? Where did it all go? What happened to Mars? Why did it become so cold and dry? How can we learn from that and what it may mean for Earth?” But right now Curiosity is exploring these very questions, firing lasers at rocks to determine their composition and hunting for signs of microbial life. Because of such robotic missions, our knowledge of Mars has improved so much in the past 15 years that it’s become harder to make the case for sending humans. People are far more adaptable and ingenious than robots and surely would find things drones can’t, but sending them would jack up the cost of a mission exponentially. “There’s just no real way to justify human exploration solely on the basis of science,” says Cynthia Phillips, a senior research scientist at the SETI Institute, which hunts for evidence of life elsewhere in the universe. “For the cost of sending one human to Mars, you could send an entire flotilla of robots.”

And yet human exploration of Mars has a powerful allure. No planet in our solar system is more like Earth. Our neighbor has rhythms we recognize as our own, with days slightly longer than 24 hours and polar ice caps that grow in the winter and shrink in the summer. Human explorers on Mars would profoundly expand the boundaries of human experience—providing, in the minds of many space advocates, an immeasurable benefit beyond science. “There have always been explorers in our society,” says Phillips. “If space exploration is only robots, you lose something, and you lose something really valuable.”

The Apollo Hangover

Mars was proposed as a place to explore even before the space program existed. In the 1950s, scientists such as Wernher von Braun (who had developed Nazi Germany’s combat rockets and later oversaw work on missiles and rockets for the United States) argued in magazines and on TV that as space became mankind’s next frontier, Mars would be an obvious point of interest. “Will man ever go to Mars?” von Braun wrote in Collier’s magazine in 1954. “I am sure he will—but it will be a century or more before he’s ready.”

Read the entire article after the jump.

Image: Artist’s conception of the Mars Excursion Module (MEM) proposed in a NASA study in 1964. Courtesy of Dixon, Franklin P. Proceeding of the Symposium on Manned Planetary Missions: 1963/1964, Aeronutronic Division of Philco Corp.

Helplessness and Intelligence Go Hand in Hand

From the Wall Street Journal:

Why are children so, well, so helpless? Why did I spend a recent Sunday morning putting blueberry pancake bits on my 1-year-old grandson’s fork and then picking them up again off the floor? And why are toddlers most helpless when they’re trying to be helpful? Augie’s vigorous efforts to sweep up the pancake detritus with a much-too-large broom (“I clean!”) were adorable but not exactly effective.

This isn’t just a caregiver’s cri de coeur—it’s also an important scientific question. Human babies and young children are an evolutionary paradox. Why must big animals invest so much time and energy just keeping the little ones alive? This is especially true of our human young, helpless and needy for far longer than the young of other primates.

One idea is that our distinctive long childhood helps to develop our equally distinctive intelligence. We have both a much longer childhood and a much larger brain than other primates. Restless humans have to learn about more different physical environments than stay-at-home chimps, and with our propensity for culture, we constantly create new social environments. Childhood gives us a protected time to master new physical and social tools, from a whisk broom to a winning comment, before we have to use them to survive.

The usual museum diorama of our evolutionary origins features brave hunters pursuing a rearing mammoth. But a Pleistocene version of the scene in my kitchen, with ground cassava roots instead of pancakes, might be more accurate, if less exciting.

Of course, many scientists are justifiably skeptical about such “just-so stories” in evolutionary psychology. The idea that our useless babies are really useful learners is appealing, but what kind of evidence could support (or refute) it? There’s still controversy, but two recent studies at least show how we might go about proving the idea empirically.

One of the problems with much evolutionary psychology is that it just concentrates on humans, or sometimes on humans and chimps. To really make an evolutionary argument, you need to study a much wider variety of animals. Is it just a coincidence that we humans have both needy children and big brains? Or will we find the same evolutionary pattern in animals who are very different from us? In 2010, Vera Weisbecker of Cambridge University and a colleague found a correlation between brain size and dependence across 52 different species of marsupials, from familiar ones like kangaroos and opossums to more exotic ones like quokkas.

Quokkas are about the same size as Virginia opossums, but baby quokkas nurse for three times as long, their parents invest more in each baby, and their brains are twice as big.

Read the entire article after the jump.

Startup Ideas

For technologists the barriers to developing a new product have never been lower. The costs of the tools needed to develop, integrate and distribute software apps are, to all intents and purposes, negligible. Of course, most would recognize that development is often the easy part. The real difficulty lies in building an effective and sustainable marketing and communication strategy, and in getting the product adopted.

The recent headlines about 17-year-old British app developer Nick D’Aloisio selling his Summly app to Yahoo! for the tidy sum of $30 million have lots of young and seasoned developers scratching their heads. After all, if a schoolkid can do it, why not anybody? Why not me?

Paul Graham may have some of the answers. He sold his first company to Yahoo in 1998 and now runs Y Combinator, a successful startup incubator. We excerpt his recent, observant and insightful essay below.

From Paul Graham:

The way to get startup ideas is not to try to think of startup ideas. It’s to look for problems, preferably problems you have yourself.

The very best startup ideas tend to have three things in common: they’re something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.

Problems

Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.

I made it myself. In 1995 I started a company to put art galleries online. But galleries didn’t want to be online. It’s not how the art business works. So why did I spend 6 months working on this stupid idea? Because I didn’t pay attention to users. I invented a model of the world that didn’t correspond to reality, and worked from that. I didn’t notice my model was wrong until I tried to convince users to pay for what we’d built. Even then I took embarrassingly long to catch on. I was attached to my model of the world, and I’d spent a lot of time on the software. They had to want it!

Why do so many founders build things no one wants? Because they begin by trying to think of startup ideas. That m.o. is doubly dangerous: it doesn’t merely yield few good ideas; it yields bad ideas that sound plausible enough to fool you into working on them.

At YC we call these “made-up” or “sitcom” startup ideas. Imagine one of the characters on a TV show was starting a startup. The writers would have to invent something for it to do. But coming up with good startup ideas is hard. It’s not something you can do for the asking. So (unless they got amazingly lucky) the writers would come up with an idea that sounded plausible, but was actually bad.

For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.

The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.

Well

When a startup launches, there have to be at least some users who really need what they’re making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.

Imagine a graph whose x axis represents all the people who might want what you’re making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can’t expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that’s broad but shallow, or one that’s narrow and deep, like a well.
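One way to put Graham's picture into symbols (the formulation is mine, not his): if want(u) measures how much a given person u wants the product, then the volume of the hole is roughly

\text{total demand} \;\approx\; \int_{\text{potential users}} \mathrm{want}(u)\, \mathrm{d}u.

Google's crater has both a huge domain and a large integrand; a startup's opening move is to find a small set of users over which want(u) is very large, and to worry about widening the hole later.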

Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.

Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.

When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they’ll use it even when it’s a crappy version one made by a two-person startup they’ve never heard of? If you can’t answer that, the idea is probably bad.

You don’t need the narrowness of the well per se. It’s depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it’s a good sign when you know that an idea will appeal strongly to a specific group or type of user.

But while demand shaped like a well is almost a necessary condition for a good startup idea, it’s not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.

Similarly for Microsoft: Basic for the Altair; Basic for other machines; other languages besides Basic; operating systems; applications; IPO.

Self

How do you tell whether there’s a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can’t. The founders of Airbnb didn’t realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn’t foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That’s probably as much as Bill Gates or Mark Zuckerberg knew at first.

Occasionally it’s obvious from the beginning when there’s a path out of the initial niche. And sometimes I can see a path that’s not immediately obvious; that’s one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.

So if you can’t predict whether there’s a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you’re the right sort of person, you have the right sort of hunches. If you’re at the leading edge of a field that’s changing fast, when you have a hunch that something is worth doing, you’re more likely to be right.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig says:

You want to know how to paint a perfect painting? It’s easy. Make yourself perfect and then just paint naturally.

I’ve wondered about that passage since I read it in high school. I’m not sure how useful his advice is for painting specifically, but it fits this situation well. Empirically, the way to have good startup ideas is to become the sort of person who has them.

Being at the leading edge of a field doesn’t mean you have to be one of the people pushing it forward. You can also be at the leading edge as a user. It was not so much because he was a programmer that Facebook seemed a good idea to Mark Zuckerberg as because he used computers so much. If you’d asked most 40 year olds in 2004 whether they’d like to publish their lives semi-publicly on the Internet, they’d have been horrified at the idea. But Mark already lived online; to him it seemed natural.

Paul Buchheit says that people at the leading edge of a rapidly changing field “live in the future.” Combine that with Pirsig and you get:

Live in the future, then build what’s missing.

That describes the way many if not most of the biggest startups got started. Neither Apple nor Yahoo nor Google nor Facebook were even supposed to be companies at first. They grew out of things their founders built because there seemed a gap in the world.

If you look at the way successful founders have had their ideas, it’s generally the result of some external stimulus hitting a prepared mind. Bill Gates and Paul Allen hear about the Altair and think “I bet we could write a Basic interpreter for it.” Drew Houston realizes he’s forgotten his USB stick and thinks “I really need to make my files live online.” Lots of people heard about the Altair. Lots forgot USB sticks. The reason those stimuli caused those founders to start companies was that their experiences had prepared them to notice the opportunities they represented.

The verb you want to be using with respect to startup ideas is not “think up” but “notice.” At YC we call ideas that grow naturally out of the founders’ own experiences “organic” startup ideas. The most successful startups almost all begin this way.

That may not have been what you wanted to hear. You may have expected recipes for coming up with startup ideas, and instead I’m telling you that the key is to have a mind that’s prepared in the right way. But disappointing though it may be, this is the truth. And it is a recipe of a sort, just one that in the worst case takes a year rather than a weekend.

If you’re not at the leading edge of some rapidly changing field, you can get to one. For example, anyone reasonably smart can probably get to an edge of programming (e.g. building mobile apps) in a year. Since a successful startup will consume at least 3-5 years of your life, a year’s preparation would be a reasonable investment. Especially if you’re also looking for a cofounder.

You don’t have to learn programming to be at the leading edge of a domain that’s changing fast. Other domains change fast. But while learning to hack is not necessary, it is for the foreseeable future sufficient. As Marc Andreessen put it, software is eating the world, and this trend has decades left to run.

Knowing how to hack also means that when you have ideas, you’ll be able to implement them. That’s not absolutely necessary (Jeff Bezos couldn’t) but it’s an advantage. It’s a big advantage, when you’re considering an idea like putting a college facebook online, if instead of merely thinking “That’s an interesting idea,” you can think instead “That’s an interesting idea. I’ll try building an initial version tonight.” It’s even better when you’re both a programmer and the target user, because then the cycle of generating new versions and testing them on users can happen inside one head.

Noticing

Once you’re living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you’re really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won’t be obvious is that they’re startup ideas. So if you want to find startup ideas, don’t merely turn on the filter “What’s missing?” Also turn off every other filter, particularly “Could this be a big company?” There’s plenty of time to apply that test later. But if you’re thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones.

Most things that are missing will take some time to see. You almost have to trick yourself into seeing the ideas around you.

But you know the ideas are out there. This is not one of those problems where there might not be an answer. It’s impossibly unlikely that this is the exact moment when technological progress stops. You can be sure people are going to build things in the next few years that will make you think “What did I do before x?”

And when these problems get solved, they will probably seem flamingly obvious in retrospect. What you need to do is turn off the filters that usually prevent you from seeing them. The most powerful is simply taking the current state of the world for granted. Even the most radically open-minded of us mostly do that. You couldn’t get from your bed to the front door if you stopped to question everything.

But if you’re looking for startup ideas you can sacrifice some of the efficiency of taking the status quo for granted and start to question things. Why is your inbox overflowing? Because you get a lot of email, or because it’s hard to get email out of your inbox? Why do you get so much email? What problems are people trying to solve by sending you email? Are there better ways to solve them? And why is it hard to get emails out of your inbox? Why do you keep emails around after you’ve read them? Is an inbox the optimal tool for that?

Pay particular attention to things that chafe you. The advantage of taking the status quo for granted is not just that it makes life (locally) more efficient, but also that it makes life more tolerable. If you knew about all the things we’ll get in the next 50 years but don’t have yet, you’d find present day life pretty constraining, just as someone from the present would if they were sent back 50 years in a time machine. When something annoys you, it could be because you’re living in the future.

When you find the right sort of problem, you should probably be able to describe it as obvious, at least to you. When we started Viaweb, all the online stores were built by hand, by web designers making individual HTML pages. It was obvious to us as programmers that these sites would have to be generated by software.

Which means, strangely enough, that coming up with startup ideas is a question of seeing the obvious. That suggests how weird this process is: you’re trying to see things that are obvious, and yet that you hadn’t seen.

Since what you need to do here is loosen up your own mind, it may be best not to make too much of a direct frontal attack on the problem—i.e. to sit down and try to think of ideas. The best plan may be just to keep a background process running, looking for things that seem to be missing. Work on hard problems, driven mainly by curiosity, but have a second self watching over your shoulder, taking note of gaps and anomalies.

Give yourself some time. You have a lot of control over the rate at which you turn yours into a prepared mind, but you have less control over the stimuli that spark ideas when they hit it. If Bill Gates and Paul Allen had constrained themselves to come up with a startup idea in one month, what if they’d chosen a month before the Altair appeared? They probably would have worked on a less promising idea. Drew Houston did work on a less promising idea before Dropbox: an SAT prep startup. But Dropbox was a much better idea, both in the absolute sense and also as a match for his skills.

A good way to trick yourself into noticing ideas is to work on projects that seem like they’d be cool. If you do that, you’ll naturally tend to build things that are missing. It wouldn’t seem as interesting to build something that already existed.

Just as trying to think up startup ideas tends to produce bad ones, working on things that could be dismissed as “toys” often produces good ones. When something is described as a toy, that means it has everything an idea needs except being important. It’s cool; users love it; it just doesn’t matter. But if you’re living in the future and you build something cool that users love, it may matter more than outsiders think. Microcomputers seemed like toys when Apple and Microsoft started working on them. I’m old enough to remember that era; the usual term for people with their own microcomputers was “hobbyists.” BackRub seemed like an inconsequential science project. The Facebook was just a way for undergrads to stalk one another.

At YC we’re excited when we meet startups working on things that we could imagine know-it-alls on forums dismissing as toys. To us that’s positive evidence an idea is good.

If you can afford to take a long view (and arguably you can’t afford not to), you can turn “Live in the future and build what’s missing” into something even better:

Live in the future and build what seems interesting.

School

That’s what I’d advise college students to do, rather than trying to learn about “entrepreneurship.” “Entrepreneurship” is something you learn best by doing it. The examples of the most successful founders make that clear. What you should be spending your time on in college is ratcheting yourself into the future. College is an incomparable opportunity to do that. What a waste to sacrifice an opportunity to solve the hard part of starting a startup—becoming the sort of person who can have organic startup ideas—by spending time learning about the easy part. Especially since you won’t even really learn about it, any more than you’d learn about sex in a class. All you’ll learn is the words for things.

The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you’ll probably see problems that software could solve. In fact, you’re doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don’t even know what the status quo is to take it for granted.

So if you’re a CS major and you want to start a startup, instead of taking a class on entrepreneurship you’re better off taking a class on, say, genetics. Or better still, go work for a biotech company. CS majors normally get summer jobs at computer hardware or software companies. But if you want to find startup ideas, you might do better to get a summer job in some unrelated field.

Or don’t take any extra classes, and just build things. It’s no coincidence that Microsoft and Facebook both got started in January. At Harvard that is (or was) Reading Period, when students have no classes to attend because they’re supposed to be studying for finals.

But don’t feel like you have to build things that will become startups. That’s premature optimization. Just build things. Preferably with other students. It’s not just the classes that make a university such a good place to crank oneself into the future. You’re also surrounded by other people trying to do the same thing. If you work together with them on projects, you’ll end up producing not just organic ideas, but organic ideas with organic founding teams—and that, empirically, is the best combination.

Beware of research. If an undergrad writes something all his friends start using, it’s quite likely to represent a good startup idea. Whereas a PhD dissertation is extremely unlikely to. For some reason, the more a project has to count as research, the less likely it is to be something that could be turned into a startup. [10] I think the reason is that the subset of ideas that count as research is so narrow that it’s unlikely that a project that satisfied that constraint would also satisfy the orthogonal constraint of solving users’ problems. Whereas when students (or professors) build something as a side-project, they automatically gravitate toward solving users’ problems—perhaps even with an additional energy that comes from being freed from the constraints of research.

Competition

Because a good idea should seem obvious, when you have one you’ll tend to feel that you’re late. Don’t let that deter you. Worrying that you’re late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you’re probably not too late. It’s exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don’t discard the idea.

If you’re uncertain, ask users. The question of whether you’re too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.

The question then is whether that beachhead is big enough. Or more importantly, who’s in it: if the beachhead consists of people doing something lots more people will be doing in the future, then it’s probably big enough no matter how small it is. For example, if you’re building something differentiated from competitors by the fact that it works on phones, but it only works on the newest phones, that’s probably a big enough beachhead.

Err on the side of doing things where you’ll face competitors. Inexperienced founders usually give competitors more credit than they deserve. Whether you succeed depends far more on you than on your competitors. So better a good idea with competitors than a bad one without.

You don’t need to worry about entering a “crowded market” so long as you have a thesis about what everyone else in it is overlooking. In fact that’s a very promising starting point. Google was that type of idea. Your thesis has to be more precise than “we’re going to make an x that doesn’t suck” though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn’t have the courage of their convictions, and that your plan is what they’d have done if they’d followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.

A crowded market is actually a good sign, because it means both that there’s demand and that none of the existing solutions are good enough. A startup can’t hope to enter a market that’s obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).

Filters

There are two more filters you’ll need to turn off if you want to notice startup ideas: the unsexy filter and the schlep filter.

Most programmers wish they could start a startup by just writing some brilliant code, pushing it to a server, and having users pay them lots of money. They’d prefer not to deal with tedious problems or get involved in messy ways with the real world. Which is a reasonable preference, because such things slow you down. But this preference is so widespread that the space of convenient startup ideas has been stripped pretty clean. If you let your mind wander a few blocks down the street to the messy, tedious ideas, you’ll find valuable ones just sitting there waiting to be implemented.

The schlep filter is so dangerous that I wrote a separate essay about the condition it induces, which I called schlep blindness. I gave Stripe as an example of a startup that benefited from turning off this filter, and a pretty striking example it is. Thousands of programmers were in a position to see this idea; thousands of programmers knew how painful it was to process payments before Stripe. But when they looked for startup ideas they didn’t see this one, because unconsciously they shrank from having to deal with payments. And dealing with payments is a schlep for Stripe, but not an intolerable one. In fact they might have had net less pain; because the fear of dealing with payments kept most people away from this idea, Stripe has had comparatively smooth sailing in other areas that are sometimes painful, like user acquisition. They didn’t have to try very hard to make themselves heard by users, because users were desperately waiting for what they were building.

The unsexy filter is similar to the schlep filter, except it keeps you from working on problems you despise rather than ones you fear. We overcame this one to work on Viaweb. There were interesting things about the architecture of our software, but we weren’t interested in ecommerce per se. We could see the problem was one that needed to be solved though.

Turning off the schlep filter is more important than turning off the unsexy filter, because the schlep filter is more likely to be an illusion. And even to the degree it isn’t, it’s a worse form of self-indulgence. Starting a successful startup is going to be fairly laborious no matter what. Even if the product doesn’t entail a lot of schleps, you’ll still have plenty dealing with investors, hiring and firing people, and so on. So if there’s some idea you think would be cool but you’re kept away from by fear of the schleps involved, don’t worry: any sufficiently good idea will have as many.

The unsexy filter, while still a source of error, is not as entirely useless as the schlep filter. If you’re at the leading edge of a field that’s changing rapidly, your ideas about what’s sexy will be somewhat correlated with what’s valuable in practice. Particularly as you get older and more experienced. Plus if you find an idea sexy, you’ll work on it more enthusiastically.

Recipes

While the best way to discover startup ideas is to become the sort of person who has them and then build whatever interests you, sometimes you don’t have that luxury. Sometimes you need an idea now. For example, if you’re working on a startup and your initial idea turns out to be bad.

For the rest of this essay I’ll talk about tricks for coming up with startup ideas on demand. Although empirically you’re better off using the organic strategy, you could succeed this way. You just have to be more disciplined. When you use the organic method, you don’t even notice an idea unless it’s evidence that something is truly missing. But when you make a conscious effort to think of startup ideas, you have to replace this natural constraint with self-discipline. You’ll see a lot more ideas, most of them bad, so you need to be able to filter them.

One of the biggest dangers of not using the organic method is the example of the organic method. Organic ideas feel like inspirations. There are a lot of stories about successful startups that began when the founders had what seemed a crazy idea but “just knew” it was promising. When you feel that about an idea you’ve had while trying to come up with startup ideas, you’re probably mistaken.

When searching for ideas, look in areas where you have some expertise. If you’re a database expert, don’t build a chat app for teenagers (unless you’re also a teenager). Maybe it’s a good idea, but you can’t trust your judgment about that, so ignore it. There have to be other ideas that involve databases, and whose quality you can judge. Do you find it hard to come up with good ideas involving databases? That’s because your expertise raises your standards. Your ideas about chat apps are just as bad, but you’re giving yourself a Dunning-Kruger pass in that domain.

The place to start looking for ideas is things you need. There must be things you need.

One good trick is to ask yourself whether in your previous job you ever found yourself saying “Why doesn’t someone make x? If someone made x we’d buy it in a second.” If you can think of any x people said that about, you probably have an idea. You know there’s demand, and people don’t say that about things that are impossible to build.

More generally, try asking yourself whether there’s something unusual about you that makes your needs different from most other people’s. You’re probably not the only one. It’s especially good if you’re different in a way people will increasingly be.

If you’re changing ideas, one unusual thing about you is the idea you’d previously been working on. Did you discover any needs while working on it? Several well-known startups began this way. Hotmail began as something its founders wrote to talk about their previous startup idea while they were working at their day jobs. [15]

A particularly promising way to be unusual is to be young. Some of the most valuable new ideas take root first among people in their teens and early twenties. And while young founders are at a disadvantage in some respects, they’re the only ones who really understand their peers. It would have been very hard for someone who wasn’t a college student to start Facebook. So if you’re a young founder (under 23 say), are there things you and your friends would like to do that current technology won’t let you?

The next best thing to an unmet need of your own is an unmet need of someone else. Try talking to everyone you can about the gaps they find in the world. What’s missing? What would they like to do that they can’t? What’s tedious or annoying, particularly in their work? Let the conversation get general; don’t be trying too hard to find startup ideas. You’re just looking for something to spark a thought. Maybe you’ll notice a problem they didn’t consciously realize they had, because you know how to solve it.

When you find an unmet need that isn’t your own, it may be somewhat blurry at first. The person who needs something may not know exactly what they need. In that case I often recommend that founders act like consultants—that they do what they’d do if they’d been retained to solve the problems of this one user. People’s problems are similar enough that nearly all the code you write this way will be reusable, and whatever isn’t will be a small price to start out certain that you’ve reached the bottom of the well.

One way to ensure you do a good job solving other people’s problems is to make them your own. When Rajat Suri of E la Carte decided to write software for restaurants, he got a job as a waiter to learn how restaurants worked. That may seem like taking things to extremes, but startups are extreme. We love it when founders do such things.

In fact, one strategy I recommend to people who need a new idea is not merely to turn off their schlep and unsexy filters, but to seek out ideas that are unsexy or involve schleps. Don’t try to start Twitter. Those ideas are so rare that you can’t find them by looking for them. Make something unsexy that people will pay you for.

A good trick for bypassing the schlep and to some extent the unsexy filter is to ask what you wish someone else would build, so that you could use it. What would you pay for right now?

Since startups often garbage-collect broken companies and industries, it can be a good trick to look for those that are dying, or deserve to, and try to imagine what kind of company would profit from their demise. For example, journalism is in free fall at the moment. But there may still be money to be made from something like journalism. What sort of company might cause people in the future to say “this replaced journalism” on some axis?

But imagine asking that in the future, not now. When one company or industry replaces another, it usually comes in from the side. So don’t look for a replacement for x; look for something that people will later say turned out to be a replacement for x. And be imaginative about the axis along which the replacement occurs. Traditional journalism, for example, is a way for readers to get information and to kill time, a way for writers to make money and to get attention, and a vehicle for several different types of advertising. It could be replaced on any of these axes (it has already started to be on most).

When startups consume incumbents, they usually start by serving some small but important market that the big players ignore. It’s particularly good if there’s an admixture of disdain in the big players’ attitude, because that often misleads them. For example, after Steve Wozniak built the computer that became the Apple I, he felt obliged to give his then-employer Hewlett-Packard the option to produce it. Fortunately for him, they turned it down, and one of the reasons they did was that it used a TV for a monitor, which seemed intolerably déclassé to a high-end hardware company like HP was at the time.

Are there groups of scruffy but sophisticated users like the early microcomputer “hobbyists” that are currently being ignored by the big players? A startup with its sights set on bigger things can often capture a small market easily by expending an effort that wouldn’t be justified by that market alone.

Similarly, since the most successful startups generally ride some wave bigger than themselves, it could be a good trick to look for waves and ask how one could benefit from them. The prices of gene sequencing and 3D printing are both experiencing Moore’s Law-like declines. What new things will we be able to do in the new world we’ll have in a few years? What are we unconsciously ruling out as impossible that will soon be possible?

Organic

But talking about looking explicitly for waves makes it clear that such recipes are plan B for getting startup ideas. Looking for waves is essentially a way to simulate the organic method. If you’re at the leading edge of some rapidly changing field, you don’t have to look for waves; you are the wave.

Finding startup ideas is a subtle business, and that’s why most people who try fail so miserably. It doesn’t work well simply to try to think of startup ideas. If you do that, you get bad ones that sound dangerously plausible. The best approach is more indirect: if you have the right sort of background, good startup ideas will seem obvious to you. But even then, not immediately. It takes time to come across situations where you notice something missing. And often these gaps won’t seem to be ideas for companies, just things that would be interesting to build. Which is why it’s good to have the time and the inclination to build things just because they’re interesting.

Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.

Read the entire article after the jump.

Image: Nick D’Aloisio with his Summly app. Courtesy of Telegraph.

Farmscrapers

No, the drawing is not a construction from the mind of sci-fi illustrator extraordinaire Michael Whelan. This is reality. Or, to be more precise, an architectural rendering of buildings to come — in China, of course.

From the Independent:

A French architecture firm has unveiled their new ambitious ‘farmscraper’ project – six towering structures which promise to change the way that we think about green living.

Vincent Callebaut Architects’ innovative Asian Cairns was planned specifically for Chinese city Shenzhen in response to the growing population, increasing CO2 emissions and urban development.

The structures will consist of a series of pebble-shaped levels – each connected by a central spinal column – which will contain residential areas, offices, and leisure spaces.

Sustainability is key to the innovative project – wind turbines will cover the roof of each tower, water recycling systems will be in place to recycle waste water, and solar panels will be installed on the buildings, providing renewable energy. The structures will also have gardens on the exterior, further adding to the project’s green credentials.

Vincent Callebaut, the Belgian architect behind the firm, is well-known for his ambitious, eco-friendly projects, winning many awards over the years.

His self-sufficient amphibious city Lilypad – ‘a floating ecopolis for climate refugees’ – is perhaps his most famous design. The model has been proposed as a long-term solution to rising water levels, and successfully meets the four challenges of climate, biodiversity, water, and health, that the OECD laid out in 2008.

Vincent Callebaut Architects said: “It is a prototype to build a green, dense, smart city connected by technology and eco-designed from biotechnologies.”

Read the entire article and see more illustrations after the jump.

Image: “Farmscrapers” take eco-friendly architecture to dizzying heights in China. Courtesy of Vincent Callebaut Architects / Independent.

Custom Does Not Freedom Make

Those of us who live relatively comfortable lives in the West are confronted with numerous and not insignificant stresses on a daily basis. There are the stresses of politics, parenting, work-life balance, intolerance and finances, to name but a few.

Yet, for all the negatives it is often useful to put our toils and troubles into a clearer perspective. Sometimes a simple story is quite enough. This story is about a Saudi woman who dared to drive. In Saudi Arabia it is not illegal for women to drive, but it is against custom. May Manal al-Sharif and other “custom fighters” like her live long and prosper.

From the Wall Street Journal:

“You know when you have a bird, and it’s been in a cage all its life? When you open the cage door, it doesn’t want to leave. It was that moment.”

This is how Manal al-Sharif felt the first time she sat behind the wheel of a car in Saudi Arabia. The kingdom’s taboo against women driving is only rarely broken. To hear her recount the experience is as thrilling as it must have been to sit in the passenger seat beside her. Well, almost.

Ms. Sharif says her moment of hesitation didn’t last long. She pressed the gas pedal and in an instant her Cadillac SUV rolled forward. She spent the next hour circling the streets of Khobar, in the kingdom’s eastern province, while a friend used an iPhone camera to record the journey.

It was May 2011, when much of the Middle East was convulsed with popular uprisings. Saudi women’s-rights activists were stirring, too. They wondered if the Arab Spring would mark the end of the kingdom’s ban on women driving. “Everyone around me was complaining about the ban but no one was doing anything,” Ms. Sharif says. “The Arab Spring was happening all around us, so that inspired me to say, ‘Let’s call for an action instead of complaining.’ “

The campaign started with a Facebook page urging Saudi women to drive on a designated day, June 17, 2011. At first the page created great enthusiasm among activists. But then critics began injecting fear on and off the page. “The opponents were saying that ‘there are wolves in the street, and they will rape you if you drive,’ ” Ms. Sharif recalls. “There needed to be one person who could break that wall, to make the others understand that ‘it’s OK, you can drive in the street. No one will rape you.’ “

Ms. Sharif resolved to be that person, and the video she posted of herself driving around Khobar on May 17 became an instant YouTube hit. The news spread across Saudi media, too, and not all of the reactions were positive. Ms. Sharif received threatening phone calls and emails. “You have just opened the gates of hell on yourself,” said an Islamist cleric. “Your grave is waiting,” read one email.

Aramco, the national oil company where she was working as a computer-security consultant at the time, wasn’t pleased, either. Ms. Sharif recalls that her manager scolded her: “What the hell are you doing?” In response, Ms. Sharif requested two weeks off. Before leaving on vacation, however, she wrote a message to her boss on an office blackboard: “2011. Mark this year. It will change every single rule that you know. You cannot lecture me about what I’m doing.”

It was a stunning act of defiance in a country that takes very seriously the Quran’s teaching: “Men are in charge of women.” But less than a week after her first outing, Ms. Sharif got behind the wheel again, this time accompanied by her brother and his wife and child. “Where are the traffic police?” she recalls asking her brother as she put pedal to the metal once more. A rumor had been circulating that, since the driving ban isn’t codified in law, the police wouldn’t confront female drivers. “I wanted to test this,” she says.

The rumor was wrong. As she recounts, a traffic officer stopped the car, and soon members of the Committee for the Promotion of Virtue and Prevention of Vice, the Saudi morality police, surrounded the car. “Girl!” screamed one. “Get out! We don’t allow women to drive!” Ms. Sharif and her brother were arrested and detained for six hours, during which time she stood her ground.

“Sir, what law did I break?” she recalls repeatedly asking her interrogators. “You didn’t break any law,” they’d say. “You violated orf”—custom.

Read the entire article after the jump.

Image: Manal al-Sharif (Manal Abd Masoud Almnami al-Sharif). Courtesy of Wikipedia.

Chomsky

Chomsky. It’s highly likely that the mere sound of his name will polarize you. You will find yourself either for Noam Chomsky or adamantly against him. You will either stand with him on the Arab-Israeli conflict or you won’t; you either support his libertarian-socialist views or you’re firmly against them; you either agree with him on issues of privacy and authority or you don’t. However, regardless of your position on the Chomsky support scale, you have to recognize that once he’s gone — he’s 84 years old — he’ll be remembered as one of the world’s great contemporary thinkers and writers. In the same mold as George Orwell, who was one of his early influences, Chomsky speaks truth to power. Whether the topic is political criticism, mass media, analytic philosophy, the military-industrial complex, computer science or linguistics, the range of Chomsky’s discourse is astonishing, and his opinion is not to be ignored.

From the Guardian:

It may have been pouring with rain, water overrunning the gutters and spreading fast and deep across London’s Euston Road, but this did not stop a queue forming, and growing until it snaked almost all the way back to Euston station. Inside Friends House, a Quaker-run meeting hall, the excitement was palpable. People searched for friends and seats with thinly disguised anxiety; all watched the stage until, about 15 minutes late, a short, slightly top-heavy old man climbed carefully on to the stage and sat down. The hall filled with cheers and clapping, with whoops and with whistles.

Noam Chomsky, said two speakers (one of them Mariam Said, whose late husband, Edward, this lecture honours) “needs no introduction”. A tired turn of phrase, but they had a point: in a bookshop down the road the politics section is divided into biography, reference, the Clintons, Obama, Thatcher, Marx, and Noam Chomsky. He topped the first Foreign Policy/Prospect Magazine list of global thinkers in 2005 (the most recent, however, perhaps reflecting a new editorship and a new rubric, lists him not at all). One study of the most frequently cited academic sources of all time found that he ranked eighth, just below Plato and Freud. The list included the Bible.

When he starts speaking, it is in a monotone that makes no particular rhetorical claim on the audience’s attention; in fact, it’s almost soporific. Last October, he tells his audience, he visited Gaza for the first time. Within five minutes many of the hallmarks of Chomsky’s political writing, and speaking, are displayed: his anger, his extraordinary range of reference and experience – journalism from inside Gaza, personal testimony, detailed knowledge of the old Egyptian government, its secret service, the new Egyptian government, the historical context of the Israeli occupation, recent news reports (of sewage used by the Egyptians to flood tunnels out of Gaza, and by Israelis to spray non-violent protesters). Fact upon fact upon fact, but also a withering, sweeping sarcasm – the atrocities are “tolerated politely by Europe as usual”. Harsh, vivid phrases – the “hideously charred corpses of murdered infants”; bodies “writhing in agony” – unspool until they become almost a form of punctuation.

You could argue that the latter is necessary, simply a description of atrocities that must be reported, but it is also a method that has diminishing returns. The facts speak for themselves; the adjectives and the sarcasm have the counterintuitive effect of cheapening them, of imposing on the world a disappointingly crude and simplistic argument. “The sentences,” wrote Larissa MacFarquhar in a brilliant New Yorker profile of Chomsky 10 years ago, “are accusations of guilt, but not from a position of innocence or hope for something better: Chomsky’s sarcasm is the scowl of a fallen world, the sneer of hell’s veteran to its appalled naifs” – and thus, in an odd way, static and ungenerative.

Chomsky first came to prominence in 1959, with the argument, detailed in a book review (but already present in his first book, published two years earlier), that contrary to the prevailing idea that children learned language by copying and by reinforcement (ie behaviourism), basic grammatical arrangements were already present at birth. The argument revolutionised the study of linguistics; it had fundamental ramifications for anyone studying the mind. It also has interesting, even troubling ramifications for his politics. If we are born with innate structures of linguistic and by extension moral thought, isn’t this a kind of determinism that denies political agency? What is the point of arguing for any change at all?

“The most libertarian positions accept the same view,” he answers. “That there are instincts, basic conditions of human nature that lead to a preferred social order. In fact, if you’re in favour of any policy – reform, revolution, stability, regression, whatever – if you’re at least minimally moral, it’s because you think it’s somehow good for people. And good for people means conforming to their fundamental nature. So whoever you are, whatever your position is, you’re making some tacit assumptions about fundamental human nature … The question is: what do we strive for in developing a social order that is conducive to fundamental human needs? Are human beings born to be servants to masters, or are they born to be free, creative individuals who work with others to inquire, create, develop their own lives? I mean, if humans were totally unstructured creatures, they would be … a tool which can properly be shaped by outside forces. That’s why if you look at the history of what’s called radical behaviourism, [where] you can be completely shaped by outside forces – when [the advocates of this] spell out what they think society ought to be, it’s totalitarian.”

Chomsky, now 84, has been politically engaged all his life; his first published article, in fact, was against fascism, and written when he was 10. Where does the anger come from? “I grew up in the Depression. My parents had jobs, but a lot of the family were unemployed working class, so they had no jobs at all. So I saw poverty and repression right away. People would come to the door trying to sell rags – that was when I was four years old. I remember riding with my mother in a trolley car and passing a textile worker’s strike where the women were striking outside and the police were beating them bloody.”

He met Carol, who would become his wife, at about the same time, when he was five years old. They married when she was 19 and he 21, and were together until she died nearly 60 years later, in 2008. He talks about her constantly, given the chance: how she was so strict about his schedule when they travelled (she often accompanied him on lecture tours) that in Latin America they called her El Comandante; the various bureaucratic scrapes they got into, all over the world. By all accounts, she also enforced balance in his life: made sure he watched an hour of TV a night, went to movies and concerts, encouraged his love of sailing (at one point, he owned a small fleet of sailboats, plus a motorboat); she water-skied until she was 75.

But she was also politically involved: she took her daughters (they had three children: two girls and a boy) to demonstrations; he tells me a story about how, when they were protesting against the Vietnam war, they were once both arrested on the same day. “And you get one phone call. So my wife called our older daughter, who was at that time 12, I guess, and told her, ‘We’re not going to come home tonight, can you take care of the two kids?’ That’s life.” At another point, when it looked like he would be jailed for a long time, she went back to school to study for a PhD, so that she could support the children alone. It makes no sense, he told an interviewer a couple of years ago, for a woman to die before her husband, “because women manage so much better, they talk and support each other. My oldest and closest friend is in the office next door to me; we haven’t once talked about Carol.” His eldest daughter often helps him now. “There’s a transition point, in some way.”

Does he think that in all these years of talking and arguing and writing, he has ever changed one specific thing? “I don’t think any individual changes anything alone. Martin Luther King was an important figure but he couldn’t have said: ‘This is what I changed.’ He came to prominence on a groundswell that was created by mostly young people acting on the ground. In the early years of the antiwar movement we were all doing organising and writing and speaking and gradually certain people could do certain things more easily and effectively, so I pretty much dropped out of organising – I thought the teaching and writing was more effective. Others, friends of mine, did the opposite. But they’re not less influential. Just not known.”

Read the entire article following the jump.

Old Masters or Dirty Old Men?

A recent proposal to ban all pornography across Europe has raised some interesting questions, not least of which is how to classify the numerous canvases featuring nudes — mostly women, of course — and sexual fantasies that hang prominently in most of Europe’s museums and galleries. Are Europe’s old masters, such as Titian, Botticelli, Rubens, Rousseau and Manet, pornographers?

From the Guardian:

A proposal to ban all pornography in Europe, recently unearthed by freedom of information campaigners in an EU report, raises an intriguing question. Would this only apply to photography and video, or do reformers also plan to rid Europe of all those lewd paintings by Titian and his contemporaries that joyously celebrate sex in the continent’s most civilised art galleries?

Europe’s great artists were making pornography long before the invention of the camera, let alone the internet. In my new book The Loves of the Artists, I argue that sexual gratification – of both the viewers of art, and artists themselves – was a fundamental drive of high European culture in the age of the old masters. Paintings were used as sexual stimuli, as visual lovers’ guides, as aids to fantasy. This was considered one of the most serious uses of art by no less a thinker than Leonardo da Vinci, who claimed images are better than words because pictures can directly arouse the senses. He was proud that he once painted a Madonna so sexy the owner asked for all its religious trappings to be removed, out of shame for the inappropriate lust it inspired. His painting of St John the Baptist is similarly ambiguous.

This was not a new attitude to art in the Renaissance. As the upcoming exhibition of ancient Pompeii at the British Museum will doubtless show, the ancient Romans also delighted in pornography. Some pornographic paintings now kept in the famous “Secret Museum” of ancient erotica in Naples came from Pompeii’s brothels – which makes their function very clear. In the Renaissance, which revered everything classical, ancient Roman sexual imagery was well known to collectors and artists. A notorious classical erotic statue owned by the plutocrat Agostino Chigi caused the 16th-century writer Pietro Aretino to remark, “why should the eyes be denied what delights them most?”

Aretino was a libertarian campaigner long before today’s ethical and political conflicts over pornography. He helped get the engraver Marcantonio Raimondi released from prison after the artist was jailed for publishing a series of erotic prints called The Positions – they depict various sexual positions – then wrote a set of obscene verses to accompany a new edition of what became a European bestseller. Aretino was a close friend of Titian, whose paintings share his licentious delight in sexuality.

Read the entire article following the jump.

Image: Venus of Urbino (Venere di Urbino), 1538, by Titian. Courtesy of Uffizi, Florence / Wikipedia.

MondayMap: Quiet News Day = Map of the Universe

It was surely a quiet news day on March 21, 2013 — most major online news outlets showed a fresh map of the Cosmic Microwave Background (CMB) on the front page. The map was captured by the Planck Telescope, operated by the European Space Agency, over a period of 15 months. The image shows a landscape of primordial cosmic microwaves, often referred to as “first light”, from when the universe was only around 380,000 years old.

From ESA:

Acquired by ESA’s Planck space telescope, the most detailed map ever created of the cosmic microwave background – the relic radiation from the Big Bang – was released today revealing the existence of features that challenge the foundations of our current understanding of the Universe.

The image is based on the initial 15.5 months of data from Planck and is the mission’s first all-sky picture of the oldest light in our Universe, imprinted on the sky when it was just 380 000 years old.

At that time, the young Universe was filled with a hot dense soup of interacting protons, electrons and photons at about 2700ºC. When the protons and electrons joined to form hydrogen atoms, the light was set free. As the Universe has expanded, this light today has been stretched out to microwave wavelengths, equivalent to a temperature of just 2.7 degrees above absolute zero.

This ‘cosmic microwave background’ – CMB – shows tiny temperature fluctuations that correspond to regions of slightly different densities at very early times, representing the seeds of all future structure: the stars and galaxies of today.

According to the standard model of cosmology, the fluctuations arose immediately after the Big Bang and were stretched to cosmologically large scales during a brief period of accelerated expansion known as inflation.

Planck was designed to map these fluctuations across the whole sky with greater resolution and sensitivity than ever before. By analysing the nature and distribution of the seeds in Planck’s CMB image, we can determine the composition and evolution of the Universe from its birth to the present day.

Overall, the information extracted from Planck’s new map provides an excellent confirmation of the standard model of cosmology at an unprecedented accuracy, setting a new benchmark in our manifest of the contents of the Universe.

But because the precision of Planck’s map is so high, it also made it possible to reveal some peculiar unexplained features that may well require new physics to be understood.

“The extraordinary quality of Planck’s portrait of the infant Universe allows us to peel back its layers to the very foundations, revealing that our blueprint of the cosmos is far from complete. Such discoveries were made possible by the unique technologies developed for that purpose by European industry,” says Jean-Jacques Dordain, ESA’s Director General.

“Since the release of Planck’s first all-sky image in 2010, we have been carefully extracting and analysing all of the foreground emissions that lie between us and the Universe’s first light, revealing the cosmic microwave background in the greatest detail yet,” adds George Efstathiou of the University of Cambridge, UK.

One of the most surprising findings is that the fluctuations in the CMB temperatures at large angular scales do not match those predicted by the standard model – their signals are not as strong as expected from the smaller scale structure revealed by Planck.
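
As a rough sanity check on the two temperatures quoted above, the standard result that the radiation’s temperature falls in proportion to how much the universe has stretched lets you recover the expansion factor directly from the ESA figures. A minimal Python sketch (the 2,700 ºC and 2.7 K values come from the excerpt; the cooling law is the assumption):

# Temperatures quoted in the ESA release above.
t_then_kelvin = 2700 + 273.15   # roughly 2,700 degrees C when the light was set free
t_now_kelvin = 2.7              # cosmic microwave background temperature today

# Blackbody radiation cools in proportion to cosmic expansion: T_now = T_then / stretch,
# so the ratio of the two temperatures gives the factor by which the light has stretched.
stretch = t_then_kelvin / t_now_kelvin
print(f"First light stretched by a factor of roughly {stretch:,.0f}")   # about 1,100

That factor of around 1,100 is why radiation that started out as visible light now reaches us as microwaves.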

Read the entire article after the jump.

Image: Cosmic microwave background (CMB) seen by Planck. Courtesy of ESA (European Space Agency).

Jim’ll Paint It

Art can make you think; art can make you smile. Falling more towards the latter category is “Jim’ll Paint It”. Microsoft’s arcane Paint program seems positively antiquated compared with more recent and powerful drawing apps. However, in the hands of an accomplished artist Paint still shines. In the hands of Jim it radiates. At his Jim’ll Paint It tumblr account Jim takes requests — however crazy — and renders them beautifully and with humor. In his own words:

I am here to make your wildest dreams a reality using nothing but Microsoft Paint (no tablets, no touch ups). Ask me to paint anything you wish and I will try no matter how specific or surreal your demands. While there aren’t enough hours in the day to physically paint every suggestion I will consider them all. Bonus points for originality and humour. Use your imagination!

From the Guardian:

Is all art nostalgic? Is it only when something is in the past, however recent, that it becomes interesting artistically?

I say this after perusing Jim’ll Paint It, where a guy called Jim offers to depict people’s craziest suggestions using Microsoft Paint, the graphics software included with all versions of Windows that now looks limited and “old-fashioned” compared with iPad art.

For anyone who is really trapped in the past, daddy-o, I am talking here about “painting” on a computer screen, not making a mess with gooey colours and real brushes. Using his archaically primitive Paint software, Jim has recently created scenes that include Jesus riding a motorbike into Hitler’s bunker, Nigella Lawson eating a plate of processors and Brian Blessed riding a vacuum cleaner.

His style is like a South Park storyboard, which I suppose tells us about how South Park is drawn. In fact, Jim reveals how familiar the visual lexicon of Microsoft Paint actually is in contemporary culture. By being simplified and unrealistic, it is arguably wittier, more imaginative and therefore more arty than paintings made on a tablet computer or smart phone that look like … well, like paintings.

Digital culture is as saturated in nostalgia as any previous form of culture. In a world where gadgets and software packages are constantly being reinvented, earlier phases of modernity are relegated to a sentimental past. MS Paint is still current but one day it will be as archaic as Pong.

Read the entire article following the jump.

Image: One of our favorites from Jim’ll Paint It — “Please paint me Jimi Hendrix explaining to an owl on his shoulder what a stick of chalk is, near a forest”. Courtesy of Jim’ll Paint It.

You Are a Google Datapoint

At first glance Google’s aim to make all known information accessible and searchable seems to be a fundamentally worthy goal, and in keeping with its “Don’t be evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.

However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.

Google continues to extend its technology boundaries and its vast indexed database of information. Now, with the introduction of Google Glass, the company extends its domain to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would allow a user a fair degree of control over how and what data is collected and shared with Google, they do not address those being observed.

So beware: the next time you are sitting in a Starbucks, shopping in a mall or riding the subway, you may be being recorded and your digital essence distributed over the internet. Perhaps someone somewhere will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization is no less troubling. Remember, to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).

From the Telegraph:

In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.

So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?

There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.

Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.

The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.

Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.

Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.

More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.

It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.

They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.

The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.

Read the entire article after the jump.

Image: Google’s Sergey Brin wearing Google Glass. Courtesy of CBS News.

Heard the One About the Physicist and the Fashion Model?

You could be forgiven for mistakenly assuming this story to be a work of pop fiction from the colorful and restless minds of Quentin Tarantino or the Coen brothers. But in another example of life mirroring art, it’s all true.

From the New York Times:

In November 2011, Paul Frampton, a theoretical particle physicist, met Denise Milani, a Czech bikini model, on the online dating site Mate1.com. She was gorgeous — dark-haired and dark-eyed, with a supposedly natural DDD breast size. In some photos, she looked tauntingly steamy; in others, she offered a warm smile. Soon, Frampton and Milani were chatting online nearly every day. Frampton would return home from campus — he’d been a professor in the physics and astronomy department at the University of North Carolina at Chapel Hill for 30 years — and his computer would buzz. “Are you there, honey?” They’d chat on Yahoo Messenger for a while, and then he’d go into the other room to take care of something. A half-hour later, there was the familiar buzz. It was always Milani. “What are you doing now?”

Frampton had been very lonely since his divorce three years earlier; now it seemed those days were over. Milani told him she was longing to change her life. She was tired, she said, of being a “glamour model,” of posing in her bikini on the beach while men ogled her. She wanted to settle down, have children. But she worried what he thought of her. “Do you think you could ever be proud of someone like me?” Of course he could, he assured her.

Frampton tried to get Milani to talk on the phone, but she always demurred. When she finally agreed to meet him in person, she asked him to come to La Paz, Bolivia, where she was doing a photo shoot. On Jan. 7, 2012, Frampton set out for Bolivia via Toronto and Santiago, Chile. At 68, he dreamed of finding a wife to bear him children — and what a wife. He pictured introducing her to his colleagues. One thing worried him, though. She had told him that men hit on her all the time. How did that acclaim affect her? Did it go to her head? But he remembered how comforting it felt to be chatting with her, like having a companion in the next room. And he knew she loved him. She’d said so many times.

Frampton didn’t plan on a long trip. He needed to be back to teach. So he left his car at the airport. Soon, he hoped, he’d be returning with Milani on his arm. The first thing that went wrong was that the e-ticket Milani sent Frampton for the Toronto-Santiago leg of his journey turned out to be invalid, leaving him stranded in the Toronto airport for a full day. Frampton finally arrived in La Paz four days after he set out. He hoped to meet Milani the next morning, but by then she had been called away to another photo shoot in Brussels. She promised to send him a ticket to join her there, so Frampton, who had checked into the Eva Palace Hotel, worked on a physics paper while he waited for it to arrive. He and Milani kept in regular contact. A ticket to Buenos Aires eventually came, with the promise that another ticket to Brussels was on the way. All Milani asked was that Frampton do her a favor: bring her a bag that she had left in La Paz.

While in Bolivia, Frampton corresponded with an old friend, John Dixon, a physicist and lawyer who lives in Ontario. When Frampton explained what he was up to, Dixon became alarmed. His warnings to Frampton were unequivocal, Dixon told me not long ago, still clearly upset: “I said: ‘Well, inside that suitcase sewn into the lining will be cocaine. You’re in big trouble.’ Paul said, ‘I’ll be careful, I’ll make sure there isn’t cocaine in there and if there is, I’ll ask them to remove it.’ I thought they were probably going to kidnap him and torture him to get his money. I didn’t know he didn’t have money. I said, ‘Well, you’re going to be killed, Paul, so whom should I contact when you disappear?’ And he said, ‘You can contact my brother and my former wife.’ ” Frampton later told me that he shrugged off Dixon’s warnings about drugs as melodramatic, adding that he rarely pays attention to the opinions of others.

On the evening of Jan. 20, nine days after he arrived in Bolivia, a man Frampton describes as Hispanic but whom he didn’t get a good look at handed him a bag out on the dark street in front of his hotel. Frampton was expecting to be given an Hermès or a Louis Vuitton, but the bag was an utterly commonplace black cloth suitcase with wheels. Once he was back in his room, he opened it. It was empty. He wrote to Milani, asking why this particular suitcase was so important. She told him it had “sentimental value.” The next morning, he filled it with his dirty laundry and headed to the airport.

Frampton flew from La Paz to Buenos Aires, crossing the border without incident. He says that he spent the next 40 hours in Ezeiza airport, without sleeping, mainly “doing physics” and checking his e-mail regularly in hopes that an e-ticket to Brussels would arrive. But by the time the ticket materialized, Frampton had gotten a friend to send him a ticket to Raleigh. He had been gone for 15 days and was ready to go home. Because there was always the chance that Milani would come to North Carolina and want her bag, he checked two bags, his and hers, and went to the gate. Soon he heard his name called over the loudspeaker. He thought it must be for an upgrade to first class, but when he arrived at the airline counter, he was greeted by several policemen. Asked to identify his luggage — “That’s my bag,” he said, “the other one’s not my bag, but I checked it in” — he waited while the police tested the contents of a package found in the “Milani” suitcase. Within hours, he was under arrest.

Read the entire article following the jump.

Image: Paul Frampton, theoretical physicist. Courtesy of Wikipedia.

Electronic Tattoos

Forget wearable electronics, like Google Glass. That’s so, well, 2012. Welcome to the new world of epidermal electronics — electronic tattoos that contain circuits and sensors printed directly on to the body.

From MIT Technology Review:

Taking advantage of recent advances in flexible electronics, researchers have devised a way to “print” devices directly onto the skin so people can wear them for an extended period while performing normal daily activities. Such systems could be used to track health and monitor healing near the skin’s surface, as in the case of surgical wounds.

So-called “epidermal electronics” were demonstrated previously in research from the lab of John Rogers, a materials scientist at the University of Illinois at Urbana-Champaign; the devices consist of ultrathin electrodes, electronics, sensors, and wireless power and communication systems. In theory, they could attach to the skin and record and transmit electrophysiological measurements for medical purposes. These early versions of the technology, which were designed to be applied to a thin, soft elastomer backing, were “fine for an office environment,” says Rogers, “but if you wanted to go swimming or take a shower they weren’t able to hold up.” Now, Rogers and his coworkers have figured out how to print the electronics right on the skin, making the device more durable and rugged.

“What we’ve found is that you don’t even need the elastomer backing,” Rogers says. “You can use a rubber stamp to just deliver the ultrathin mesh electronics directly to the surface of the skin.” The researchers also found that they could use commercially available “spray-on bandage” products to add a thin protective layer and bond the system to the skin in a “very robust way,” he says.

Eliminating the elastomer backing makes the device one-thirtieth as thick, and thus “more conformal to the kind of roughness that’s present naturally on the surface of the skin,” says Rogers. It can be worn for up to two weeks before the skin’s natural exfoliation process causes it to flake off.

During the two weeks that it’s attached, the device can measure things like temperature, strain, and the hydration state of the skin, all of which are useful in tracking general health and wellness. One specific application could be to monitor wound healing: if a doctor or nurse attached the system near a surgical wound before the patient left the hospital, it could take measurements and transmit the information wirelessly to the health-care providers.

Read the entire article after the jump.

Image: Epidermal electronic sensor printed on the skin. Courtesy of MIT.

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers, or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, the etymology of polysyllabic words, and so it goes.

So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer, smartphone, or pair of spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter, we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.

Yet all this information will not directly make us any smarter; it is neither applied knowledge nor experiential wisdom, and it will not, by itself, make us more creative or insightful. It is more likely to influence our cognition indirectly: freed from the need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

The War on Apostrophes

No, we don’t mean a war on apostasy, for which many have been hanged, drawn, quartered, burned and beheaded. And no, “apostrophes” are not a new sect of fundamentalist terrorists.

Apostrophes are punctuation marks, and a local council in Britain has moved to outlaw them. Why?

From the Guardian:

The sometimes vexing question of where and when to add an apostrophe appears to have been solved in one corner of Devon: the local authority is planning to do away with them altogether.

Later this month members of Mid Devon district council’s cabinet will discuss formally banning the pesky little punctuation marks from its (no apostrophe needed) street signs, apparently to avoid “confusion”.

The news of the Tory-controlled council’s (apostrophe required) decision provoked howls of condemnation on Friday from champions of plain English, fans of grammar, and politicians. Even the government felt the need to join the campaign to save the apostrophe.

The Plain English Campaign led the criticism. “It’s nonsense,” said Steve Jenner, spokesperson and radio presenter. “Where’s it going to stop? Are we going to declare war on commas, outlaw full stops?”

Jenner was puzzled over why the council appeared to think it a good idea not to have punctuation on signs. “If it’s to try to make things clearer, it’s not going to work. The whole purpose of punctuation is to make language easier to understand. Is it because someone at the council doesn’t understand how it works?”

Jenner suggested the council was providing a bad example to children who were – hopefully – being taught punctuation at school only to not see it being used correctly on street signs. “It seems a bit hypocritical,” he added.

Sian Harris, lecturer in English literature at Exeter University, said the proposals were likely to lead to greater confusion. She said: “Usually the best way to teach about punctuation is to show practical examples of it – removing [apostrophes] from everyday life would be a terrible shame and make that understanding increasingly difficult. English is a complicated language as it is — removing apostrophes is not going to help with that at all.”

Ben Bradshaw, the former culture secretary and Labour MP for Exeter, condemned the plans on Twitter. He wrote a precisely punctuated tweet: “Tory Mid Devon Council bans the apostrophe to ‘avoid confusion’ … Whole point of proper grammar is to avoid confusion!”

The council’s plans caused a stir 200 miles away in Whitehall, where the Department for Communities and Local Government came out in defence of punctuation. A spokesman said: “Whilst this is ultimately a matter for the local council, ministers’ view is that England’s apostrophes should be cherished.”

To be fair to modest Mid Devon, it is not the only authority to pick on the apostrophe. Birmingham did the same three years ago (the Mail went with the headline The city where apostrophes arent welcome).

The book retailer Waterstones caused a bit of a stir last year when it ditched the mark.

The council’s communications manager, Andrew Lacey, attempted to dampen down the controversy. Lacey said: “Our proposed policy on street naming and numbering covers a whole host of practical issues, many of which are aimed at reducing potential confusion over street names.

“Although there is no national guidance that stops apostrophes being used, for many years the convention we’ve followed here is for new street names not to be given apostrophes.”

He said there were only three official street names in Mid Devon which include them: Beck’s Square and Blundell’s Avenue, both in Tiverton, and St George’s Well in Cullompton. All were named many, many years ago.

“No final decision has yet been made and the proposed policy will be discussed at cabinet,” he said.

Read the entire story after the jump.

Image: Mid Devon District Council’s plan is presumably to avoid errors such as this (from Hackney, London). Courtesy of Guardian / Andy Drysdale / Rex Features.

Exoplanet Exploration

It wasn’t too long ago that astronomers found the first indirect evidence of a planet beyond our solar system. They inferred the presence of an exoplanet (extrasolar planet) from the periodic dimming or wiggle of its parent star, rather than from the much more difficult direct observation. Since the first confirmed exoplanet orbiting a Sun-like star was discovered in 1995 (51 Pegasi b), researchers have definitively catalogued around 800 and identified another 18,000 candidates. And the list seems to grow daily.

If that weren’t amazing enough, researchers have now directly observed several exoplanets and even measured the composition of their atmospheres.
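
The “dimming” mentioned above refers to the transit method: a planet passing in front of its star blocks a tiny, regular fraction of the starlight, and folding the brightness measurements at trial orbital periods reveals the period at which those dips line up. The sketch below is a minimal toy illustration in plain NumPy with entirely made-up numbers, not the machinery actual surveys use (they rely on far more careful statistics, such as box least squares fitting); it simulates a noisy light curve with periodic transits and recovers the period by brute force:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate ~90 days of brightness measurements, one every 30 minutes.
t = np.arange(0.0, 90.0, 0.5 / 24.0)                # time in days
flux = 1.0 + rng.normal(0.0, 2e-4, t.size)          # noisy baseline brightness

true_period, duration, depth = 3.52, 0.12, 1e-3     # days, days, fractional dip
flux[(t % true_period) < duration] -= depth         # inject the transit dips

def folded_dip(period):
    """Fold the light curve at `period` and return the mean brightness
    deficit inside the assumed transit window versus outside it."""
    inside = (t % period) < duration
    return flux[~inside].mean() - flux[inside].mean()

# Brute-force search over trial periods; the true period lines the dips up
# and maximizes the deficit. (Integer multiples of the true period also fold
# cleanly, which is why the search stops at 5 days; real pipelines handle
# such aliases, uneven sampling and varying transit durations explicitly.)
trial_periods = np.arange(0.5, 5.0, 0.001)
dips = np.array([folded_dip(p) for p in trial_periods])
best = trial_periods[dips.argmax()]
print(f"recovered period: {best:.3f} days (true value: {true_period} days)")
```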

[div class=attrib]From ars technica:[end-div]

The star system HR 8799 is a sort of Solar System on steroids: a beefier star, four possible planets that are much bigger than Jupiter, and signs of asteroids and cometary bodies, all spread over a bigger region. Additionally, the whole system is younger and hotter, making it one of only a few cases where astronomers can image the planets themselves. However, HR 8799 is very different from our Solar System, as astronomers are realizing thanks to two detailed studies released this week.

The first study was an overview of the four exoplanet candidates, covered by John Timmer. The second set of observations focused on one of the four planet candidates, HR 8799c. Quinn Konopacky, Travis Barman, Bruce Macintosh, and Christian Marois performed a detailed spectral analysis of the atmosphere of the possible exoplanet. They compared their findings to the known properties of a brown dwarf and concluded that they don’t match—it is indeed a young planet. Chemical differences between HR 8799c and its host star led the researchers to conclude the system likely formed in the same way the Solar System did.

The HR 8799 system was one of the first where direct imaging of the exoplanets was possible; in most cases, the evidence for a planet’s presence is indirect. (See the Ars overview of exoplanet science for more.) This serendipity is possible for two major reasons: the system is very young, and the planet candidates orbit far from their host star.

The young age means the bodies orbiting the system still retain heat from their formation and so are glowing in the infrared; older planets emit much less light. That makes it possible to image these planets at these wavelengths. (We mostly image planets in the Solar System using reflected sunlight, but that’s not a viable detection strategy at these distances). A large planet-star separation means that the star’s light doesn’t overwhelm the planets’ warm glow. Astronomers are also assisted by HR 8799’s relative closeness to us—it’s only about 130 light-years away.

However, the brightness of the exoplanet candidates also obscures their identity. They are all much larger than Jupiter—each is more than 5 times Jupiter’s mass, and the largest could be 35 times greater. That, combined with their large infrared emission, could mean that they are not planets but brown dwarfs: star-like objects with insufficient mass to engage in hydrogen fusion. Since brown dwarfs can overlap in size and mass with the largest planets, we haven’t been certain that the objects observed in the HR 8799 system are planets.

For this reason, the two recent studies aimed at measuring the chemistry of these bodies using their spectral emissions. The Palomar study described yesterday provided a broad, big-picture view of the whole HR 8799 system. By contrast, the second study used one of the 10-meter Keck telescopes for a focused, in-depth view of one object: HR 8799c, the second-farthest out of the four.

The researchers measured relatively high levels of carbon monoxide (CO) and water (H2O, just in case you forgot the formula), which were present at levels well above the abundance measured in the spectrum of the host star. According to the researchers, this difference in chemical composition indicated that the planet likely formed via “core accretion”— the gradual, bottom-up accumulation of materials to make a planet—rather than a top-down fragmentation of the disk surrounding the newborn star. The original disk in this scenario would have contained a lot of ice fragments, which merged to make a world relatively high in water content.

In many respects, HR 8799c seemed to have properties between brown dwarfs and other exoplanets, but the chemical and gravitational analyses pushed the object more toward the planet side. In particular, the size and chemistry of HR 8799c placed its surface gravity lower than expected for a brown dwarf, especially when considered with the estimated age of the star system. While this analysis says nothing about whether the other bodies in the system are planets, it does provide further hints about the way the system formed.

One final surprise was the lack of methane (CH4) in HR 8799c’s atmosphere. Methane is a chemical component present in all the Jupiter-like planets in our Solar System. The authors argued that this could be due to vigorous mixing of the atmosphere, which is expected because the exoplanet has higher temperatures and pressures than seen on Jupiter or Neptune. This mixing could enable reactions that limit methane formation. Since the HR 8799 system is much younger than the Solar System—roughly 30 million years compared with 4.5 billion years—it’s uncertain how much this chemical balance may change over time.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]One of the discovery images of the system obtained at the Keck II telescope using the adaptive optics system and NIRC2 Near-Infrared Imager. The rectangle indicates the field-of-view of the OSIRIS instrument for planet C. Courtesy of NRC-HIA, C. Marois and Keck Observatory.[end-div]

RIP: Fare Thee Well

With smartphones and tweets taking over our planet, the art of letter writing is fast becoming a subject of history lessons. Our written communications are now modulated by the keypad, emoticons, acronyms and the backspace; our attention is ever more fractured by the noise of the digital world and the dumbed-down 24/7 media monster.

So, as Matthew Malady over at Slate argues, it’s time for the few remaining Luddites, pen still in hand, to join the trend towards curtness and ditch the signoffs. You know, the words that anyone over the age of 50 once put at the end of a hand-written letter, and which can still be found at the close of an email and, less frequently, a text: “Best regards”, “Warmest wishes”, “Most sincerely”, “Cheers”, “Faithfully yours”.

Your friendly editor, for now, refuses to join the tidal wave of signoff slayers, and continues to take solace from his ink (fountain, if you please!) pens. There is still room for well-crafted prose in a sea of txt-speak.

[div class=attrib]From Slate:[end-div]

For the 20 years that I have used email, I have been a fool. For two decades, I never called bullshit when burly, bearded dudes from places like Pittsburgh and Park Slope bid me email adieu with the vaguely British “Cheers!” And I never batted an eye at the hundreds of “XOXO” email goodbyes from people I’d never met, much less hugged or kissed. When one of my best friends recently ended an email to me by using the priggish signoff, “Always,” I just rolled with it.

But everyone has a breaking point. For me, it was the ridiculous variations on “Regards” that I received over the past holiday season. My transition from signoff submissive to signoff subversive began when a former colleague ended an email to me with “Warmest regards.”

Were these scalding hot regards superior to the ordinary “Regards” I had been receiving on a near-daily basis? Obviously they were better than the merely “Warm Regards” I got from a co-worker the following week. Then I received “Best Regards” in a solicitation email from the New Republic. Apparently when urging me to attend a panel discussion, the good people at the New Republic were regarding me in a way that simply could not be topped.

After 10 or 15 more “Regards” of varying magnitudes, I could take no more. I finally realized the ridiculousness of spending even one second thinking about the totally unnecessary words that we tack on to the end of emails. And I came to the following conclusion: It’s time to eliminate email signoffs completely. Henceforth, I do not want—nay, I will not accept—any manner of regards. Nor will I offer any. And I urge you to do the same.

Think about it. Email signoffs are holdovers from a bygone era when letter writing—the kind that required ink and paper—was a major means of communication. The handwritten letters people sent included information of great import and sometimes functioned as the only communication with family members and other loved ones for months. In that case, it made sense to go to town, to get flowery with it. Then, a formal signoff was entirely called for. If you were, say, a Boston resident writing to his mother back home in Ireland in the late 19th century, then ending a correspondence with “I remain your ever fond son in Christ Our Lord J.C.,” as James Chamberlain did in 1891, was entirely reasonable and appropriate.

But those times have long since passed. And so has the era when individuals sought to win the favor of the king via dedication letters and love notes ending with “Your majesty’s Most bounden and devoted,” or “Fare thee as well as I fare.” Also long gone are the days when explorers attempted to ensure continued support for their voyages from monarchs and benefactors via fawning formal correspondence related to the initial successes of this or that expedition. Francisco Vázquez de Coronado had good reason to end his 1541 letter to King Charles I of Spain, relaying details about parts of what is now the southwestern United States, with a doozy that translates to “Your Majesty’s humble servant and vassal, who would kiss the royal feet and hands.”

But in 2013, when bots outnumber benefactors by a wide margin, the continued and consistent use of antiquated signoffs in email is impossible to justify. At this stage of the game, we should be able to interact with one another in ways that reflect the precise manner of communication being employed, rather than harkening back to old standbys popular during the age of the Pony Express.

I am not an important person. Nonetheless, each week, on average, I receive more than 300 emails. I send out about 500. These messages do not contain the stuff of old-timey letters. They’re about the pizza I had for lunch (horrendous) and must-see videos of corgis dressed in sweaters (delightful). I’m trading thoughts on various work-related matters with people who know me and don’t need to be “Best”-ed. Emails, over time, have become more like text messages than handwritten letters. And no one in their right mind uses signoffs in text messages.

What’s more, because no email signoff is exactly right for every occasion, it’s not uncommon for these add-ons to cause affirmative harm. Some people take offense to different iterations of “goodbye,” depending on the circumstances. Others, meanwhile, can’t help but wonder, “What did he mean by that?” or spend entire days worrying about the implications of a sudden shift from “See you soon!” in one email, to “Best wishes” in the next. So, naturally, we consider, and we overthink, and we agonize about how best to close out our emails. We ask others for advice on the matter, and we give advice on it when asked.

[div class=attrib]Read the entire article after the jump.[end-div]

Who Doesn’t Love and Hate a Dalek?

Over the decades Hollywood has remade movie monsters and aliens into ever more terrifying, nightmarish, and often slimier versions of ourselves. In the Britain of the 1960s, kids grew up with the thoroughly scary and evil Daleks from the sci-fi series Doctor Who. Their raspy electronic voices proclaiming “Exterminate! Exterminate!” and their death-rays would often consign children to a restless sleep in the comfort of their parents’ beds. Nowadays the Daleks would be dismissed as laughable and amateurish constructions — after all, how could malevolent, otherworldly beings be made from what looked too much like discarded egg cartons and toilet plungers? But they remain iconic — a fixture of our pop culture.

[div class=attrib]From the Guardian:[end-div]

The Daleks are a masterpiece of pop art. The death of their designer Raymond Cusick is rightly national news: it was Cusick who in the early 1960s gave a visual shape to this new monster invented by Doctor Who writer Terry Nation. But in the 50th anniversary of Britain’s greatest television show, the Daleks need to be seen in historical perspective. It is all too tempting to imagine Cusick and Nation sitting in the BBC canteen looking at a pepper pot on their lunch table and realising it could be a terrifying alien cyborg. In reality, the Daleks are a living legacy of the British pop art movement.

With Roy Lichtenstein whaaming ’em at London’s Tate Modern, it is all too easy to forget that pop art began in Britain – and our version of it started as science fiction. When Eduardo Paolozzi made his collage Dr Pepper in 1948, he was not portraying the real lives of austerity-burdened postwar Britons. He was imagining a future world of impossible consumer excess – a world that already existed in America, whose cultural icons from flash cars to electric cookers populate his collage of magazine clippings. But that seemed very far from reality in war-wounded Europe. Pop art began as an ironically utopian futuristic fantasy by British artists trapped in a monochrome reality.

The exhibition that brought pop art to a wider audience was This Is Tomorrow at the Whitechapel Gallery in 1956. As its title implies, This Is Tomorrow presented pop as visual sci-fi. It included a poster for the science fiction film Forbidden Planet and was officially opened by the star of the film, Robbie the Robot.

The layout of This Is Tomorrow created a futuristic landscape from fragments of found material and imagery, just as Doctor Who would fabricate alien worlds from silver foil and plastic bottles. The reason the series would face a crisis by the end of the 1970s was that its effects were deemed old-fashioned compared with Star Wars: but the whole point of Dr Who was that it demanded imagination of its audience and presented not fetishised perfect illusions, but a kind of kitchen sink sci-fi that shared the playfulness of pop art.

The Daleks are a wonder of pop art’s fantastic vision, at once absurd and marvellous. Most of all, they share the ironic juxtaposition of real and unreal one finds in the art of Richard Hamilton. Like a Hoover collaged into an ideal home, the Daleks are at their best gliding through an unexpected setting such as central London – a metal menace invading homely old Britain.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Daleks in the 1966 Doctor Who serial The Power of the Daleks. Courtesy of BBC.[end-div]

The Richest Person in the Solar System

[tube]Bs6rCxU_IHY[/tube]

Forget Warren Buffett, Bill Gates and Carlos Slim, or the Russian oligarchs and the emirs of the Persian Gulf. These guys are merely multi-billionaires. Their fortunes — combined — account for less than half of 1 percent of the net worth of Dennis Hope, the world’s first trillionaire. In fact, you could describe Dennis as the solar system’s first trillionaire, with an estimated wealth of $100 trillion.

So, why have you never heard of Dennis Hope, trillionaire? Where does he invest his money? And how did he amass this jaw-dropping uber-fortune? The answer to the first question is that he lives a relatively ordinary and quiet life in Nevada. The answer to the second question is: property. The answer to the third, and most fascinating, question: well, he owns most of the Moon. He also owns the majority of the planets Mars, Venus and Mercury, and 90 or so other celestial plots. You too could become an interplanetary property investor for the starting and very modest sum of $19.99. Please write your check to… Dennis Hope.

The New York Times has a recent story and documentary on Mr. Hope, here.

[div class=attrib]From Discover:[end-div]

Dennis Hope, self-proclaimed Head Cheese of the Lunar Embassy, will promise you the moon. Or at least a piece of it. Since 1980, Hope has raked in over $9 million selling acres of lunar real estate for $19.99 a pop. So far, 4.25 million people have purchased a piece of the moon, including celebrities like Barbara Walters, George Lucas, Ronald Reagan, and even the first President Bush. Hope says he exploited a loophole in the 1967 United Nations Outer Space Treaty, which prohibits nations from owning the moon.

Because the law says nothing about individual holders, he says, his claim—which he sent to the United Nations—has some clout. “It was unowned land,” he says. “For private property claims, 197 countries at one time or another had a basis by which private citizens could make claims on land and not make payment. There are no standardized rules.”

Hope is right that the rules are somewhat murky—both Japan and the United States have plans for moon colonies—and lunar property ownership might be a powder keg waiting to spark. But Ram Jakhu, law professor at the Institute of Air and Space Law at McGill University in Montreal, says that Hope’s claims aren’t likely to hold much weight. Nor, for that matter, would any nation’s. “I don’t see a loophole,” Jakhu says. “The moon is a common property of the international community, so individuals and states cannot own it. That’s very clear in the U.N. treaty. Individuals’ rights cannot prevail over the rights and obligations of a state.”

Jakhu, a director of the International Institute for Space Law, believes that entrepreneurs like Hope have misread the treaty and that the 1967 legislation came about to block property claims in outer space. Historically, “the ownership of private property has been a major cause of war,” he says. “No one owns the moon. No one can own any property in outer space.”

Hope refuses to be discouraged. And he’s focusing on expansion. “I own about 95 different planetary bodies,” he says. “The total amount of property I currently own is about 7 trillion acres. The value of that property is about $100 trillion. And that doesn’t even include mineral rights.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Video courtesy of the New York Times.[end-div]

The United States: Land of the Creative and the Crazy

It’s unlikely that you would find many people who would argue against the notion that the United States is truly the most creative and innovative nation: from art to basic scientific research, from music to engineering, from theoretical physics to food science, from genetic studies and medicine to movies. And yet, perplexingly, the nation continues to yearn for its wild, pioneering past rather than inventing a brighter and more civilized future. To many outsiders, the many contradictions that make up the United States are a source of laughter and much incredulity. The recent news out of South Dakota shows why.

[div class=attrib]From the New York Times:[end-div]

Gov. Dennis Daugaard of South Dakota on Friday signed into law a bill that would allow teachers to carry guns in the classroom.

While some other states have provisions in their gun laws that make it possible for teachers to be armed, South Dakota is believed to be the first state to pass a law that specifically allows teachers to carry firearms.

About two dozen states have proposed similar bills since the shootings in December at Sandy Hook Elementary School in Newtown, Conn., but all of them have stalled.

Supporters say that the measure signed by Mr. Daugaard, a Republican, is important in a rural state like South Dakota, where some schools are many miles away from emergency responders.

Opponents, which have included the state school board association and teachers association, say this is a rushed measure that does not make schools safer.

The law says that school districts may choose to allow a school employee, hired security officer or volunteer to serve as a “sentinel” who can carry a firearm in the school. The law does not require school districts to do this.

Mr. Daugaard said he was comfortable with the law because it gave school districts the right to choose whether they wanted armed individuals in schools, and that those who were armed would have to undergo firearms training similar to what law enforcement officers received.

“I think it does provide the same safety precautions that a citizen expects when a law enforcement officer enters onto a premises,” Mr. Daugaard said in an interview. But he added that he did not think that many school districts would end up taking advantage of the measure.

[div class=attrib]Read the entire article after the jump.[end-div]

MondayMap: New Jersey Under Water

We love maps here at theDiagonal. So much so that we’ve begun a new feature: MondayMap. As the name suggests, we plan to feature fascinating new maps on Mondays. For our readers who prefer their plots served up on a Saturday, sorry. Usually we like to highlight maps that cause us to look at our world differently or provide a degree of welcome amusement, such as the wonderful trove of maps over at Strange Maps curated by Frank Jacobs.

However, this first MondayMap is a little different and serious. It’s an interactive map that shows the impact of estimated sea level rise on the streets of New Jersey. Obviously, such a tool would be a great boon for emergency services and urban planners. For the rest of us, whether we live in New Jersey or not, maps like this one — of extreme weather events and projections — are likely to become much more common over the coming decades. Kudos to researchers at Rutgers University for developing the NJ Flood Mapper.
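
For the curious, the simplest mental model for this kind of map is a “bathtub” calculation: compare each location’s ground elevation with a chosen water level and flag everything at or below it as inundated. The sketch below illustrates only that toy idea, on made-up elevation data; the actual NJ Flood Mapper is built on real elevation and coastal data and is considerably more sophisticated:

```python
import numpy as np

# A deliberately crude "bathtub" inundation sketch on toy data. A real flood
# map would start from lidar-derived elevation grids and account for tides,
# storm surge dynamics, drainage and whether low spots connect to the sea.
rng = np.random.default_rng(0)
elevation_ft = rng.uniform(-2.0, 20.0, size=(50, 80))   # made-up ground heights (feet)

def flooded_mask(elevation_ft, sea_level_rise_ft, storm_surge_ft=0.0):
    """Return a boolean grid that is True wherever the water level
    (sea-level rise plus any storm surge) reaches the ground."""
    water_level = sea_level_rise_ft + storm_surge_ft
    return elevation_ft <= water_level

# The 6-foot scenario discussed in the article below: 3 feet of sea-level
# rise combined with a 3-foot storm surge.
mask = flooded_mask(elevation_ft, sea_level_rise_ft=3.0, storm_surge_ft=3.0)
print(f"{mask.mean():.0%} of toy grid cells flooded in the 6 ft scenario")
```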

[div class=attrib]From Wall Street Journal:[end-div]

While superstorm Sandy revealed the Northeast’s vulnerability, a new map by New Jersey scientists suggests how rising seas could make future storms even worse.

The map shows ocean waters surging more than a mile into communities along Raritan Bay, engulfing nearly all of New Jersey’s barrier islands and covering northern sections of the New Jersey Turnpike and land surrounding the Port Newark Container Terminal.

Such damage could occur under a scenario in which sea levels rise 6 feet—or a 3-foot rise in tandem with a powerful coastal storm, according to the map produced by Rutgers University researchers.

The satellite-based tool, one of the first comprehensive, state-specific maps of its kind, uses a Google-maps-style interface that allows viewers to zoom into street-level detail.

“We are not trying to unduly frighten people,” said Rick Lathrop, director of the Grant F. Walton Center for Remote Sensing and Spatial Analysis at Rutgers, who led the map’s development. “This is providing people a look at where our vulnerability is.”

Still, the implications of the Rutgers project unnerve residents of Surf City, on Long Beach Island, where the map shows water pouring over nearly all of the barrier island’s six municipalities with a 6-foot increase in sea levels.

“The water is going to come over the island and there will be no island,” said Barbara Epstein, a 73-year-old resident of nearby Barnegat Light, who added that she is considering moving after 12 years there. “The storms are worsening.”

To be sure, not everyone agrees that climate change will make sea-level rise more pronounced.

Politically, climate change remains an issue of debate. New York Gov. Andrew Cuomo has said Sandy showed the need to address the issue, while New Jersey Gov. Chris Christie has declined to comment on whether Sandy was linked to climate change.

Scientists have gone ahead and started to map sea-level-rise scenarios in New Jersey, New York City and flood-prone communities along the Gulf of Mexico to help guide local development and planning.

Sea levels have risen by 1.3 feet near Atlantic City and 0.9 feet by Battery Park between 1911 and 2006, according to data from the National Oceanic and Atmospheric Administration.

A serious storm could add at least another 3 feet, with historic storm surges—Sandy-scale—registering at 9 feet. So when planning for future coastal flooding, 6 feet or higher isn’t far-fetched when combining sea-level rise with high tides and storm surges, Mr. Lathrop said.

NOAA estimated in December that increasing ocean temperatures could cause sea levels to rise by 1.6 feet in 100 years, and by 3.9 feet if considering some level of Arctic ice-sheet melt.

Such an increase amounts to 0.16 inches per year, but the eventual impact could mean that a small storm could “do the same damage that Sandy did,” said Peter Howd, co-author of a 2012 U.S. Geological Survey report that found the rate of sea level rise had increased in the northeast.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: NJ Flood Mapper. Courtesy of Grant F. Walton Center for Remote Sensing and Spatial Analysis (CRSSA), Rutgers University, in partnership with the Jacques Cousteau National Estuarine Research Reserve (JCNERR), and in collaboration with the NOAA Coastal Services Center (CSC).[end-div]

Ziggy Stardust and the Spiders from the Moon?

In honor of the brilliant new album by the Thin White Duke, we bring you the article excerpted below, which at first glance seems to come directly from the songbook of Ziggy Stardust him- or herself. Closer inspection reveals that NASA may have designs on deploying giant manufacturing robots to construct a base on the Moon. Can you hear me, Major Tom?

[tube]gH7dMBcg-gE[/tube]

Once you’ve had your fill of Bowie, read on about NASA’s spiders.

[div class=attrib]From ars technica:[end-div]

The first lunar base on the Moon may not be built by human hands, but rather by a giant spider-like robot built by NASA that can bind the dusty soil into giant bubble structures where astronauts can live, conduct experiments, relax or perhaps even cultivate crops.

We’ve already covered the European Space Agency’s (ESA) work with architecture firm Foster + Partners on a proposal for a 3D-printed moonbase, and there are similarities between the two bases—both would be located in Shackleton Crater near the Moon’s south pole, where sunlight (and thus solar energy) is nearly constant due to the Moon’s inclination on the crater’s rim, and both use lunar dust as their basic building material. However, while the ESA’s building would be constructed almost exactly the same way a house would be 3D-printed on Earth, this latest wheeze—SinterHab—uses NASA technology for something a fair bit more ambitious.

The product of joint research first started between space architects Tomas Rousek, Katarina Eriksson and Ondrej Doule and scientists from NASA’s Jet Propulsion Laboratory (JPL), SinterHab is so-named because it involves sintering lunar dust—that is, heating it up to just below its melting point, where the fine nanoparticle powders fuse and become one solid block a bit like a piece of ceramic. To do this, the JPL engineers propose using microwaves no more powerful than those found in a kitchen unit, with tiny particles easily reaching between 1200 and 1500 degrees Celsius.

Nanoparticles of iron within lunar soil are heated at certain microwave frequencies, enabling efficient heating and binding of the dust to itself. Not having to fly binding agent from Earth along with a 3D printer is a major advantage over the ESA/Foster + Partners plan. The solar panels to power the microwaves would, like the moon base itself, be based near or on the rim of Shackleton Crater in near-perpetual sunlight.

“Bubbles” of bound dust could be built by a huge six-legged robot (OK, so it’s not technically a spider) and then assembled into habitats large enough for astronauts to use as a base. This “Sinterator system” would use the JPL’s Athlete rover, a half-scale prototype of which has already been built and tested. It’s a human-controlled robotic space rover with wheels at the end of its 8.2m limbs and a detachable habitable capsule mounted at the top.

Athlete’s arms have several different functions, dependent on what it needs to do at any point. It has 48 3D cameras that stream video to its operator either inside the capsule, elsewhere on the Moon or back on Earth, it’s got a payload capacity of 300kg in Earth gravity, and it can scoop, dig, grab at and generally poke around in the soil fairly easily, giving it the combined abilities of a normal rover and a construction vehicle. It can even split into two smaller three-legged rovers at any time if needed. In the Sinterator system, a microwave 3D printer would be mounted on one of the Athlete’s legs and used to build the base.

Rousek explained the background of the idea to Wired.co.uk: “Since many of my buildings have advanced geometry that you can’t cut easily from sheet material, I started using 3D printing for rapid prototyping of my architecture models. The construction industry is still lagging several decades behind car and electronics production. The buildings now are terribly wasteful and imprecise—I have always dreamed about creating a factory where the buildings would be robotically mass-produced with parametric personalization, using composite materials and 3D printing. It would be also great to use local materials and precise manufacturing on-site.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Giant NASA spider robots could 3D print lunar base using microwaves, courtesy of Wired UK. Video: The Stars (Are Out Tonight), courtesy of David Bowie, ISO Records / Columbia Records.[end-div]