Category Archives: Technica

Warp Factor

To date, the fastest speed ever traveled by humans is just under 25,000 miles per hour. This milestone was reached by the reentry capsule from the Apollo 10 moon mission, which hit 24,961 mph as it hurtled through Earth’s upper atmosphere. Yet this pales in comparison to the speed of light, which clocks in at 186,282 miles per second in a vacuum. A quick visit to the calculator puts Apollo 10 at 6.93 miles per second, or just 0.0037 percent of the speed of light!
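
For anyone who wants to check the arithmetic, here is a minimal sketch in Python; the two input figures are simply the ones quoted above:

```python
# Back-of-envelope check of the speeds quoted above.
APOLLO_10_MPH = 24_961       # Apollo 10 reentry speed, miles per hour
LIGHT_MI_PER_S = 186_282     # speed of light in a vacuum, miles per second

apollo_mi_per_s = APOLLO_10_MPH / 3600            # mph -> miles per second
fraction_of_c = apollo_mi_per_s / LIGHT_MI_PER_S  # dimensionless fraction of c

print(f"Apollo 10: {apollo_mi_per_s:.2f} miles per second")   # ~6.93
print(f"Fraction of light speed: {fraction_of_c:.4%}")         # ~0.0037%
```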

Despite our very pedestrian speeds, many dream of a future where humans might reach the stars, powered by some kind of “warp drive” (yes, Star Trek comes to mind). A handful of researchers at NASA are actively pondering this today. Still, our modest technology, combined with our limited understanding of the workings of the universe, suggests that an Alcubierre-like approach remains centuries beyond our grasp.

From the New York Times:

Beyond the security gate at the Johnson Space Center’s 1960s-era campus here, inside a two-story glass and concrete building with winding corridors, there is a floating laboratory.

Harold G. White, a physicist and advanced propulsion engineer at NASA, beckoned toward a table full of equipment there on a recent afternoon: a laser, a camera, some small mirrors, a ring made of ceramic capacitors and a few other objects.

He and other NASA engineers have been designing and redesigning these instruments, with the goal of using them to slightly warp the trajectory of a photon, changing the distance it travels in a certain area, and then observing the change with a device called an interferometer. So sensitive is their measuring equipment that it was picking up myriad earthly vibrations, including people walking nearby. So they recently moved into this lab, which floats atop a system of underground pneumatic piers, freeing it from seismic disturbances.

The team is trying to determine whether faster-than-light travel — warp drive — might someday be possible.

Warp drive. Like on “Star Trek.”

“Space has been expanding since the Big Bang 13.7 billion years ago,” said Dr. White, 43, who runs the research project. “And we know that when you look at some of the cosmology models, there were early periods of the universe where there was explosive inflation, where two points would’ve went receding away from each other at very rapid speeds.”

“Nature can do it,” he said. “So the question is, can we do it?”

Einstein famously postulated that, as Dr. White put it, “thou shalt not exceed the speed of light,” essentially setting a galactic speed limit. But in 1994, a Mexican physicist, Miguel Alcubierre, theorized that faster-than-light speeds were possible in a way that did not contradict Einstein, though Dr. Alcubierre did not suggest anyone could actually construct the engine that could accomplish that.

His theory involved harnessing the expansion and contraction of space itself. Under Dr. Alcubierre’s hypothesis, a ship still couldn’t exceed light speed in a local region of space. But a theoretical propulsion system he sketched out manipulated space-time by generating a so-called “warp bubble” that would expand space on one side of a spacecraft and contract it on another.

“In this way, the spaceship will be pushed away from the Earth and pulled towards a distant star by space-time itself,” Dr. Alcubierre wrote. Dr. White has likened it to stepping onto a moving walkway at an airport.
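
For readers who want a peek at the underlying math, Alcubierre's idea is usually written as the spacetime metric below (a sketch for context; this equation comes from the 1994 paper, not from the Times article):

```latex
% Alcubierre's warp-bubble metric (1994), in units where the speed of light c = 1.
% x_s(t) is the trajectory of the bubble's center, v_s = dx_s/dt its speed,
% and r_s the distance from that center.
ds^2 = -dt^2 + \bigl(dx - v_s\, f(r_s)\, dt\bigr)^2 + dy^2 + dz^2
% f(r_s) is a smooth shaping function, roughly 1 inside the bubble and 0 far outside,
% so spacetime stays flat both inside the bubble and far away from it; only the thin
% bubble wall is curved, and sustaining that wall is what requires the "exotic matter"
% discussed below.
```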

But Dr. Alcubierre’s paper was purely theoretical, and suggested insurmountable hurdles. Among other things, it depended on large amounts of a little understood or observed type of “exotic matter” that violates typical physical laws.

Dr. White believes that advances he and others have made render warp speed less implausible. Among other things, he has redesigned the theoretical warp-traveling spacecraft — and in particular a ring around it that is key to its propulsion system — in a way that he believes will greatly reduce the energy requirements.

Read the entire article here.

Sounds of Extinction

Camera aficionados will find themselves lamenting the demise of the film advance. Now that the world has moved on from film to digital, you will no longer hear that distinctive mechanical sound as you wind on the film, hoping the teeth on the spool engage the plastic of the film.

Hardcore computer buffs will no doubt miss the beep-beep-hiss sound of the 56K modem — that now seemingly ancient box that once connected us to… well, who knows what it actually connected us to at that speed.

Our favorite arcane sounds, soon to be relegated to the audio graveyard: the telephone handset slam, the click and carriage return of the typewriter, the whir of reel-to-reel tape, the crackle of the diamond stylus as it first hits an empty groove on a 33 rpm record.

More sounds you may (or may not) miss below.

From Wired:

The forward march of technology has a drum beat. These days, it’s custom text-message alerts, or your friend saying “OK, Glass” every five minutes like a tech-drunk parrot. And meanwhile, some of the most beloved sounds are falling out of the marching band.

The boops and beeps of bygone technology can be used to chart its evolution. From the zzzzzzap of the Tesla coil to the tap-tap-tap of Morse code being sent via telegraph, what were once the most important nerd sounds in the world are now just historical signposts. But progress marches forward, and for every irritatingly smug Angry Pigs grunt we have to listen to, we move further away from the sound of the Defender ship exploding.

Let’s celebrate the dying cries of technology’s past. The following sounds are either gone forever or definitely on their way out. Bow your heads in silence and bid them a fond farewell.

The Telephone Slam

Ending a heated telephone conversation by slamming the receiver down in anger was so incredibly satisfying. There was no better way to punctuate your frustration with the person on the other end of the line. And when that receiver hit the phone, the clack of plastic against plastic was accompanied by a slight ringing of the phone’s internal bell. That’s how you knew you were really pissed — when you slammed the phone so hard, it rang.

There are other sounds we’ll miss from the phone. The busy signal died with the rise of voicemail (although my dad refuses to get voicemail or call waiting, so he’s still OG), and the rapid click-click-click of the dial on a rotary phone is gone. But none of those compare with hanging up the phone with a forceful slam.

Tapping a touchscreen just does not cut it. So the closest thing we have now is throwing the pitifully fragile smartphone against the wall.

The CRT Television

The only TVs left that still use cathode-ray tubes are stashed in the most depressing places — the waiting rooms of hospitals, used car dealerships, and the dusty guest bedroom at your grandparents’ house. But before we all fell prey to the magical resolution of zeros and ones, boxy CRT televisions warmed (literally) the living rooms of every home in America. The sounds they made when you turned them on warmed our hearts, too — the gentle whoosh of the degaussing coil as the set was brought to life with the heavy tug of a pull-switch, or the satisfying mechanical clunk of a power button. As the tube warmed up, you’d see the visuals slowly brighten on the screen, giving you ample time to settle into the couch to enjoy the latest episode of Seinfeld.

Read the entire article here.

Image courtesy of Wired.

Gnarly Names

By most accounts the internet is home to around 650 million websites, of which around 200 million are active. About 8,000 new websites go live every hour of every day.

These are big numbers and the continued phenomenal growth means that it’s increasingly difficult to find a unique and unused domain name (think website). So, web entrepreneurs are getting creative with website and company names, with varying degrees of success.

From Wall Street Journal:

The New York cousins who started a digital sing-along storybook business have settled on the name Mibblio.

The Australian founder of a startup connecting big companies to big-data scientists has dubbed his service Kaggle.

The former toy executive behind a two-year-old mobile screen-sharing platform is going with the name Shodogg.

And the Missourian who founded a website giving customers access to local merchants and service providers? He thinks it should be called Zaarly.

Quirky names for startups first surfaced about 20 years ago in Silicon Valley, with the birth of search engines such as Yahoo, which stands for “Yet Another Hierarchical Officious Oracle,” and Google, a misspelling of googol, the almost unfathomably high number represented by a 1 followed by 100 zeroes.

By the early 2000s, the trend had spread to startups outside the Valley, including the Vancouver-based photo-sharing site Flickr and New York-based blogging platform Tumblr, to name just two.

The current crop of startups boasts even wackier spellings. The reason, they say, is that practically every new business—be it a popsicle maker or a furniture retailer—needs its own website. With about 252 million domain names currently registered across the Internet, the short, recognizable dot-com Web addresses, or URLs, have long been taken.

The only practical solution, some entrepreneurs say, is to invent words, like Mibblio, Kaggle, Shodogg and Zaarly, to avoid paying as much as $2 million for a concise, no-nonsense dot-com URL.

The rights to Investing.com, for example, sold for about $2.5 million last year.

Choosing a name that’s a made-up word also helps entrepreneurs steer clear of trademark entanglements.

The challenge is to come up with something that conveys meaning, is memorable, and isn’t just alphabet soup. Most founders don’t have the budget to hire naming advisers.

Founders tend to favor short names of five to seven letters, because they worry that potential customers might forget longer ones, according to Steve Manning, founder of Igor, a name-consulting company.

Linguistically speaking, there are only a few methods of forming new words. They include misspelling, compounding, blending and scrambling.

At Mibblio, the naming process was “the length of a human gestation period,” says the company’s 28-year-old co-founder David Leiberman, “but only more painful,” adds fellow co-founder Sammy Rubin, 35.

The two men made several trips back to the drawing board; early contenders included Babethoven, Yipsqueak and Canarytales, but none was a perfect fit. One they both loved, Squeakbox, was taken.

Read the entire article here.

Hyperloop: Not Your Father’s High-Speed Rail

Europe and Japan have been leading the way with their 200-300 mph bullet trains for several decades. While the United States still tries to play catch-up, one serial entrepreneur has other ideas. For Elon Musk, the bullet train is so, well, yesterday. He has in mind a ground-based system that would hurtle people around at speeds on the order of 800 mph. Welcome to Hyperloop.

From Slate:

High-speed rail is so 20th century. Well, perhaps not in the United States, where we still haven’t gotten around to building any true bullet trains. After 30 years of dithering, California is finally working on one that would get people from Los Angeles to San Francisco in a little under 2 1/2 hours, but it could cost on the order of $100 billion and won’t be ready until at least 2028.

Enter Tesla and SpaceX visionary Elon Musk with one of the craziest-sounding ideas in transportation history. For a while now, Musk has been hinting at an idea he calls the Hyperloop—a ground-based transportation technology that would get people from Los Angeles to San Francisco in under half an hour, for less than 1/10 the cost of building the high-speed rail line. Oh, and this 800-mph system would be self-powered, immune to weather, and would never crash.

What is the Hyperloop? So far Musk hasn’t gotten very specific, though he once called it “a cross between a Concorde and a railgun and an air hockey table.” But we’ll soon find out more. On Monday, Musk tweeted that he will publish an “alpha design” for the Hyperloop by Aug. 12. Responding to questions on Twitter, he indicated that the plans would be open-source, and that he would consider a partnership with someone who shared his vision. Perhaps the best clue came when he responded to an engineer named John Gardi, who published a diagram of his best guess as to how the Hyperloop might work:

It sounds fanciful, and maybe it is. But Musk is not the only one working on ultra-fast land-based transportation systems. And if anyone can turn an idea like this into reality, it might just be the man who has spent the past decade revolutionizing electric cars and space transport. Don’t be surprised if the biggest obstacles to the Hyperloop turn out to be bureaucratic rather than technological. After all, we’ve known how to build bullet trains for half a century, and look how far that has gotten us. Still, a nation can dream—and as long as we’re dreaming, why not dream about something way cooler than what Japan and China are already working on?

Read the entire article here.

Atlas Shrugs

She or he is 6 feet 2 inches tall, weighs 330 pounds, and goes by the name Atlas.

[Video: Atlas humanoid robot demonstration, YouTube ID zkBnFPBV3f0]

Surprisingly, this person is not the new draft pick for the Denver Broncos or Ronaldo’s replacement at Real Madrid. Well, it’s not really a person, not yet anyway. Atlas is a humanoid robot. Its primary “parents” are Boston Dynamics and DARPA (the Defense Advanced Research Projects Agency), a unit of the U.S. Department of Defense. The collaboration unveiled Atlas to the public on July 11, 2013.

From the New York Times:

Moving its hands as if it were dealing cards and walking with a bit of a swagger, a Pentagon-financed humanoid robot named Atlas made its first public appearance on Thursday.

C3PO it’s not. But its creators have high hopes for the hydraulically powered machine. The robot — which is equipped with both laser and stereo vision systems, as well as dexterous hands — is seen as a new tool that can come to the aid of humanity in natural and man-made disasters.

Atlas is being designed to perform rescue functions in situations where humans cannot survive. The Pentagon has devised a challenge in which competing teams of technologists program it to do things like shut off valves or throw switches, open doors, operate power equipment and travel over rocky ground. The challenge comes with a $2 million prize.

Some see Atlas’s unveiling as a giant — though shaky — step toward the long-anticipated age of humanoid robots.

“People love the wizards in Harry Potter or ‘Lord of the Rings,’ but this is real,” said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. “A new species, Robo sapiens, are emerging,” he said.

The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world.

Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans — and the dreams of science fiction lovers — sooner rather than later.

Walking on two legs, they have the potential to serve as department store guides, assist the elderly with daily tasks or carry out nuclear power plant rescue operations.

“Two weeks ago 19 brave firefighters lost their lives,” said Gill Pratt, a program manager at the Defense Advanced Research Projects Agency, part of the Pentagon, which oversaw Atlas’s design and financing. “A number of us who are in the robotics field see these events in the news, and the thing that touches us very deeply is a single kind of feeling which is, can’t we do better? All of this technology that we work on, can’t we apply that technology to do much better? I think the answer is yes.”

Dr. Pratt equated the current version of Atlas to a 1-year-old.

“A 1-year-old child can barely walk, a 1-year-old child falls down a lot,” he said. “As you see these machines and you compare them to science fiction, just keep in mind that this is where we are right now.”

But he added that the robot, which has a brawny chest with a computer and is lit by bright blue LEDs, would learn quickly and would soon have the talents that are closer to those of a 2-year-old.

The event on Thursday was a “graduation” ceremony for the Atlas walking robot at the office of Boston Dynamics, the robotics research firm that led the design of the system. The demonstration began with Atlas shrouded under a bright red sheet. After Dr. Pratt finished his remarks, the sheet was pulled back, revealing a machine that looked like a metallic bodybuilder, with an oversized chest and powerful long arms.

Read the entire article here.

UnGoogleable: The Height of Cool

So, it is no longer a surprise — our digital lives are tracked, correlated, stored and examined. The NSA (National Security Agency) does it to determine if you are an unsavory type; Google does it to serve you better information and ads; and a whole host of other companies do it to sell you more things that you probably don’t need, at prices you can’t afford. This of course raises deep and troubling questions about privacy. With this in mind, some are taking ownership of the issue and seeking to erase themselves from the vast digital Orwellian eye. To others, however, being untraceable online is a fashion statement rather than a victory for privacy.

From the Guardian:

“The chicest thing,” said fashion designer Phoebe Philo recently, “is when you don’t exist on Google. God, I would love to be that person!”

Philo, creative director of Céline, is not that person. As the London Evening Standard put it: “Unfortunately for the famously publicity-shy London designer – Paris born, Harrow-on-the-Hill raised – who has reinvented the way modern women dress, privacy may well continue to be a luxury.” Nobody who is oxymoronically described as “famously publicity-shy” will ever be unGoogleable. And if you’re not unGoogleable then, if Philo is right, you can never be truly chic, even if you were born in Paris. And if you’re not truly chic, then you might as well die – at least if you’re in fashion.

If she truly wanted to disappear herself from Google, Philo could start by changing her superb name to something less diverting. Prize-winning novelist AM Homes is an outlier in this respect. Google “am homes” and you’re in a world of blah US real estate rather than cutting-edge literature. But then Homes has thought a lot about privacy, having written a play about the most famously private person in recent history, JD Salinger, and had him threaten to sue her as a result.

And Homes isn’t the only one to make herself difficult to detect online. UnGoogleable bands are 10 a penny. The New York-based band !!! (known verbally as “chick chick chick” or “bang bang bang” – apparently “Exclamation point, exclamation point, exclamation point” proved too verbose for their meagre fanbase) must drive their business manager nuts. As must the band Merchandise, whose name – one might think – is a nominalist satire of commodification by the music industry. Nice work, Brad, Con, John and Rick.

 

If Philo renamed herself online as Google Maps or @, she might make herself more chic.

Welcome to anonymity chic – the antidote to an online world of exhibitionism. But let’s not go crazy: anonymity may be chic, but it is no business model. For years XXX Porn Site, my confusingly named alt-folk combo, has remained undiscovered. There are several bands called Girls (at least one of them including, confusingly, dudes) and each one has worried – after a period of chic iconoclasm – that such a putatively cool name means no one can find them online.

But still, maybe we should all embrace anonymity, given this week’s revelations that technology giants cooperated in Prism, a top-secret system at the US National Security Agency that collects emails, documents, photos and other material for secret service agents to review. It has also been a week in which Lindsay Mills, girlfriend of NSA whistleblower Edward Snowden, has posted on her blog (entitled: “Adventures of a world-traveling, pole-dancing super hero” with many photos showing her performing with the Waikiki Acrobatic Troupe) her misery that her fugitive boyfriend has fled to Hong Kong. Only a cynic would suggest that this blog post might help the Waikiki Acrobatic Troupe veteran’s career at this – serious face – difficult time. Better the dignity of silent anonymity than using the internet for that.

Furthermore, as social media diminishes us with not just information overload but the 24/7 servitude of liking, friending and status updating, this going under the radar reminds us that we might benefit from withdrawing the labour on which the founders of Facebook, Twitter and Instagram have built their billions. “Today our intense cultivation of a singular self is tied up in the drive to constantly produce and update,” argues Geert Lovink, research professor of interactive media at the Hogeschool van Amsterdam and author of Networks Without a Cause: A Critique of Social Media. “You have to tweet, be on Facebook, answer emails,” says Lovink. “So the time pressure on people to remain present and keep up their presence is a very heavy load that leads to what some call the psychopathology of online.”

Internet evangelists such as Clay Shirky and Charles Leadbeater hoped for something very different from this pathologised reality. In Shirky’s Here Comes Everybody and Leadbeater’s We-Think, both published in 2008, the nascent social media were to echo the anti-authoritarian, democratising tendencies of the 60s counterculture. Both men revelled in the fact that new web-based social tools helped single mothers looking online for social networks and pro-democracy campaigners in Belarus. Neither sufficiently realised that these tools could just as readily be co-opted by The Man. Or, if you prefer, Mark Zuckerberg.

Not that Zuckerberg is the devil in this story. Social media have changed the way we interact with other people in line with what the sociologist Zygmunt Bauman wrote in Liquid Love. For us “liquid moderns”, who have lost faith in the future, cannot commit to relationships and have few kinship ties, Zuckerberg created a new way of belonging, one in which we use our wits to create provisional bonds loose enough to stop suffocation, but tight enough to give a needed sense of security now that the traditional sources of solace (family, career, loving relationships) are less reliable than ever.

Read the entire article here.

Technology and Kids

There is no doubting that technology’s grasp finds us at increasingly younger ages. No longer is it just our teens who are constantly mesmerized by status updates on their mobiles, or just our “in-betweeners” addicted to “facetiming” with their BFFs. Now our technologies are fast becoming the tools of choice for our kindergarteners and pre-K kids. Some parents lament.

From the New York Times:

A few months ago, I attended my daughter Josie’s kindergarten open house, the highlight of which was a video slide show featuring our moppets using iPads to practice their penmanship. Parental cooing ensued.

I happened to be sitting next to the teacher, and I asked her about the rumor I’d heard: that next year, every elementary-school kid in town would be provided his or her own iPad. She said this pilot program was being introduced only at the newly constructed school three blocks from our house, which Josie will attend next year. “You’re lucky,” she observed wistfully.

This seemed to be the consensus around the school-bus stop. The iPads are coming! Not only were our kids going to love learning, they were also going to do so on the cutting edge of innovation. Why, in the face of this giddy chatter, was I filled with dread?

It’s not because I’m a cranky Luddite. I swear. I recognize that iPads, if introduced with a clear plan, and properly supervised, can improve learning and allow students to work at their own pace. Those are big ifs in an era of overcrowded classrooms. But my hunch is that our school will do a fine job. We live in a town filled with talented educators and concerned parents.

Frankly, I find it more disturbing that a brand-name product is being elevated to the status of mandatory school supply. I also worry that iPads might transform the classroom from a social environment into an educational subway car, each student fixated on his or her personalized educational gadget.

But beneath this fretting is a more fundamental beef: the school system, without meaning to, is subverting my parenting, in particular my fitful efforts to regulate my children’s exposure to screens. These efforts arise directly from my own tortured history as a digital pioneer, and the war still raging within me between harnessing the dazzling gifts of technology versus fighting to preserve the slower, less convenient pleasures of the analog world.

What I’m experiencing is, in essence, a generational reckoning, that queasy moment when those of us whose impatient desires drove the tech revolution must face the inheritors of this enthusiasm: our children.

It will probably come as no surprise that I’m one of those annoying people fond of boasting that I don’t own a TV. It makes me feel noble to mention this — I am feeling noble right now! — as if I’m taking a brave stand against the vulgar superficiality of the age. What I mention less frequently is the reason I don’t own a TV: because I would watch it constantly.

My brothers and I were so devoted to television as kids that we created an entire lexicon around it. The brother who turned on the TV, and thus controlled the channel being watched, was said to “emanate.” I didn’t even know what “emanate” meant. It just sounded like the right verb.

This was back in the ’70s. We were latchkey kids living on the brink of a brave new world. In a few short years, we’d hurtled from the miraculous calculator (turn it over to spell out “boobs”!) to arcades filled with strobing amusements. I was one of those guys who spent every spare quarter mastering Asteroids and Defender, who found in video games a reliable short-term cure for the loneliness and competitive anxiety that plagued me. By the time I graduated from college, the era of personal computers had dawned. I used mine to become a closet Freecell Solitaire addict.

Midway through my 20s I underwent a reformation. I began reading, then writing, literary fiction. It quickly became apparent that the quality of my work rose in direct proportion to my ability to filter out distractions. I’ve spent the past two decades struggling to resist the endless pixelated enticements intended to capture and monetize every spare second of human attention.

Has this campaign succeeded? Not really. I’ve just been a bit slower on the uptake than my contemporaries. But even without a TV or smartphones, our household can feel dominated by computers, especially because my wife (also a writer) and I work at home. We stare into our screens for hours at a stretch, working and just as often distracting ourselves from work.

Read the entire article here.

Image courtesy of Wired.

Technology and Employment

Technology is altering the lives of us all. Often it is a positive influence, offering its users tremendous benefits from time-saving to life-extension. However, the relationship of technology to our employment is more complex and usually detrimental.

Many traditional forms of employment have already disappeared thanks to our technological tools; still many other jobs have changed beyond recognition, requiring new skills and knowledge. And this may be just the beginning.

From Technology Review:

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

It’s a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. “It’s the great paradox of our era,” he says. “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren’t keeping up.”

Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.

Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.

A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.

It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.

Read the entire article here.

Image: Industrial robots. Courtesy of Techjournal.

Amazon All the Time and Google Toilet Paper

Soon, courtesy of Amazon, Google and other retail giants, and of course lubricated by the likes of the ubiquitous UPS and FedEx trucks, you may be able to dispense with the weekly or even daily trip to the grocery store. Amazon is expanding a trial of its same-day grocery delivery service, and others are following suit in select local and regional tests.

You may recall the spectacular implosion of the online grocery delivery service Webvan — a dot-com darling — that came and went in the blink of an internet eye, finally going bankrupt in 2001. Well, times have changed, and now avaricious Amazon and its peers have their eyes trained on your groceries.

So now all you need to do is find a service to deliver your kids to and from school, an employer who will let you work from home, convince your spouse that “staycations” are cool, use Google Street View to become a virtual tourist, and you will never, ever, ever, EVER need to leave your house again!

From Slate:

The other day I ran out of toilet paper. You know how that goes. The last roll in the house sets off a ticking clock; depending on how many people you live with and their TP profligacy, you’re going to need to run to the store within a few hours, a day at the max, or you’re SOL. (Unless you’re a man who lives alone, in which case you can wait till the next equinox.) But it gets worse. My last roll of toilet paper happened to coincide with a shortage of paper towels, a severe run on diapers (you know, for kids!), and the last load of dishwashing soap. It was a perfect storm of household need. And, as usual, I was busy and in no mood to go to the store.

This quotidian catastrophe has a happy ending. In April, I got into the “pilot test” for Google Shopping Express, the search company’s effort to create an e-commerce service that delivers goods within a few hours of your order. The service, which is currently being offered in the San Francisco Bay Area, allows you to shop online at Target, Walgreens, Toys R Us, Office Depot, and several smaller, local stores, like Blue Bottle Coffee. Shopping Express combines most of those stores’ goods into a single interface, which means you can include all sorts of disparate items in the same purchase. Shopping Express also offers the same prices you’d find at the store. After you choose your items, you select a delivery window—something like “Anytime Today” or “Between 2 p.m. and 6 p.m.”—and you’re done. On the fateful day that I’d run out of toilet paper, I placed my order at around noon. Shortly after 4, a green-shirted Google delivery guy strode up to my door with my goods. I was back in business, and I never left the house.

Google is reportedly thinking about charging $60 to $70 a year for the service, making it a competitor to Amazon’s Prime subscription plan. But at this point the company hasn’t finalized pricing, and during the trial period, the whole thing is free. I’ve found it easy to use, cheap, and reliable. Similar to my experience when I first got Amazon Prime, it has transformed how I think about shopping. In fact, in the short time I’ve been using it, Shopping Express has replaced Amazon as my go-to source for many household items. I used to buy toilet paper, paper towels, and diapers through Amazon’s Subscribe & Save plan, which offers deep discounts on bulk goods if you choose a regular delivery schedule. I like that plan when it works, but subscribing to items whose use is unpredictable—like diapers for a newborn—is tricky. I often either run out of my Subscribe & Save items before my next delivery, or I get a new delivery while I still have a big load of the old stuff. Shopping Express is far simpler. You get access to low-priced big-box-store goods without all the hassle of big-box stores—driving, parking, waiting in line. And you get all the items you want immediately.

After using it for a few weeks, it’s hard to escape the notion that a service like Shopping Express represents the future of shopping. (Also the past of shopping—the return of profitless late-1990s’ services like Kozmo and WebVan, though presumably with some way of making money this time.) It’s not just Google: Yesterday, Reuters reported that Amazon is expanding AmazonFresh, its grocery delivery service, to big cities beyond Seattle, where it has been running for several years. Amazon’s move confirms the theory I floated a year ago, that the e-commerce giant’s long-term goal is to make same-day shipping the norm for most of its customers.

Amazon’s main competitive disadvantage, today, is shipping delays. While shopping online makes sense for many purchases, the vast majority of the world’s retail commerce involves stuff like toilet paper and dishwashing soap—items that people need (or think they need) immediately. That explains why Wal-Mart sells half a trillion dollars worth of goods every year, and Amazon sells only $61 billion. Wal-Mart’s customers return several times a week to buy what they need for dinner, and while they’re there, they sometimes pick up higher-margin stuff, too. By offering same-day delivery on groceries and household items, Amazon and Google are trying to edge in on that market.

As I learned while using Shopping Express, the plan could be a hit. If done well, same-day shipping erases the distinctions between the kinds of goods we buy online and those we buy offline. Today, when you think of something you need, you have to go through a mental checklist: Do I need it now? Can it wait two days? Is it worth driving for? With same-day shipping, you don’t have to do that. All shopping becomes online shopping.

Read the entire article here.

Image: Webvan truck. Courtesy of Wikipedia.

Law, Common Sense and Your DNA

Paradoxically the law and common sense often seem to be at odds. Justice may still be blind, at least in most open democracies, but there seems to be no question as to the stupidity of much of our law.

Some examples: in Missouri, it’s illegal to drive with an uncaged bear in the car; in Maine, it’s illegal to keep Christmas decorations up after January 14th; in New Jersey, it’s illegal to wear a bulletproof vest while committing murder; in Connecticut, a pickle is not an official, legal pickle unless it can bounce; in Louisiana, you can be fined $500 for having a pizza delivered to a friend without their knowledge.

So, today we celebrate a victory for common sense and justice over thoroughly ill-conceived and badly written law — the U.S. Supreme Court unanimously ruled that corporations cannot patent naturally occurring human genes.

Unfortunately though, due to the extremely high financial stakes this is not likely to be the last we hear about big business seeking to patent or control the building blocks to life.

From the WSJ:

The Supreme Court unanimously ruled Thursday that human genes isolated from the body can’t be patented, a victory for doctors and patients who argued that such patents interfere with scientific research and the practice of medicine.

The court was handing down one of its most significant rulings in the age of molecular medicine, deciding who may own the fundamental building blocks of life.

The case involved Myriad Genetics Inc., which holds patents related to two genes, known as BRCA1 and BRCA2, that can indicate whether a woman has a heightened risk of developing breast cancer or ovarian cancer.

Justice Clarence Thomas, writing for the court, said the genes Myriad isolated are products of nature, which aren’t eligible for patents.

“Myriad did not create anything,” Justice Thomas wrote in an 18-page opinion. “To be sure, it found an important and useful gene, but separating that gene from its surrounding genetic material is not an act of invention.”

Even if a discovery is brilliant or groundbreaking, that doesn’t necessarily mean it’s patentable, the court said.

However, the ruling wasn’t a complete loss for Myriad. The court said that DNA molecules synthesized in a laboratory were eligible for patent protection. Myriad’s shares soared after the court’s ruling.

The court adopted the position advanced by the Obama administration, which argued that isolated forms of naturally occurring DNA weren’t patentable, but artificial DNA molecules were.

Myriad also has patent claims on artificial genes, known as cDNA.

The high court’s ruling was a win for a coalition of cancer patients, medical groups and geneticists who filed a lawsuit in 2009 challenging Myriad’s patents. Thanks to those patents, the Salt Lake City company has been the exclusive U.S. commercial provider of genetic tests for breast cancer and ovarian cancer.

“Today, the court struck down a major barrier to patient care and medical innovation,” said Sandra Park of the American Civil Liberties Union, which represented the groups challenging the patents. “Because of this ruling, patients will have greater access to genetic testing and scientists can engage in research on these genes without fear of being sued.”

Myriad didn’t immediately respond to a request for comment.

The challengers argued the patents have allowed Myriad to dictate the type and terms of genetic screening available for the diseases, while also dissuading research by other laboratories.

Read the entire article here.

Image: Gene showing the coding region in a segment of eukaryotic DNA. Courtesy of Wikipedia.

The Death of Photojournalism

Really, it was only a matter of time. First, digital cameras killed off their film-dependent predecessors and sounded the death knell for Kodak. Now social media and the #hashtag are doing the same to the professional photographer.

Camera-enabled smartphones are ubiquitous, making everyone a photographer. And, with almost everyone jacked into at least one social network or photo-sharing site it takes only one point and a couple of clicks to get a fresh image posted to the internet. Ironically, the newsprint media, despite being in the business of news, have failed to recognize this news until recently.

So, now with an eye to cutting costs, and making images more immediate and compelling — via citizens — news organizations are re-tooling their staffs in four ways: first, fire the photographers; second, re-train reporters to take photographs with their smartphones; third, video, video, video; fourth, rely on the ever willing public to snap images, post, tweet, #hashtag and like — for free of course.

From Cult of Mac:

The Chicago Sun-Times, one of the remnants of traditional paper journalism, has let go its entire photography staff of 28 people. Now its reporters will start receiving “iPhone photography basics” training to start producing their own photos and videos.

The move is part of a growing trend towards publications using the iPhone as a replacement for fancy, expensive DSLRs. It’s also a sign of how traditional journalism is being changed by technology like the iPhone and the advent of digital publishing.


When Hurricane Sandy hit New York City, reporters for Time used the iPhone to take photos in the field and upload them to the publication’s Instagram account. Even the cover photo used on the corresponding issue of Time was taken on an iPhone.

Sun-Times photographer Alex Garcia argues that the “idea that freelancers and reporters could replace a photo staff with iPhones is idiotic at worst, and hopelessly uninformed at best.” Garcia believes that reporters cannot both write articles and produce quality media, but is fighting an uphill battle.

Big newspaper companies aren’t making anywhere near the amount of money they used to due to the popularity of online publications and blogs. Free news is a click away nowadays. Getting rid of professional photographers and equipping reporters with iPhones is another way to cut costs.

The iPhone has a better camera than most digital point-and-shoots, and more importantly, it is in everyone’s pocket. It’s a great camera that’s always with you, and that makes it an invaluable tool for any journalist. There will always be a need for videographers and pro photographers that can make studio-level work, but the iPhone is proving to be an invaluable tool for reporters in the modern world.

Read the entire article here.

Image: Kodak 1949-56 Retina IIa 35mm Camera. Courtesy of Wikipedia / Kodak.

Beware! RoboBee May Be Watching You

History will probably show that humans are the cause of the mass disappearance and death of honey bees around the world.

So, while ecologists try to understand why and how to reverse bee death and colony collapse, engineers are busy building alternatives to our once nectar-loving friends. Meet RoboBee, also known as the Micro Air Vehicles Project.

From Scientific American:

We take for granted the effortless flight of insects, thinking nothing of swatting a pesky fly and crushing its wings. But this insect is a model of complexity. After 12 years of work, researchers at the Harvard School of Engineering and Applied Sciences have succeeded in creating a fly-like robot. And in early May, they announced that their tiny RoboBee (yes, it’s called a RoboBee even though it’s based on the mechanics of a fly) took flight. In the future, that could mean big things for everything from disaster relief to colony collapse disorder.

The RoboBee isn’t the only miniature flying robot in existence, but the 80-milligram, quarter-sized robot is certainly one of the smallest. “The motivations are really thinking about this as a platform to drive a host of really challenging open questions and drive new technology and engineering,” says Harvard professor Robert Wood, the engineering team lead for the project.

When Wood and his colleagues first set out to create a robotic fly, there were no off-the-shelf parts for them to use. “There were no motors small enough, no sensors that could fit on board. The microcontrollers, the microprocessors–everything had to be developed fresh,” says Wood. As a result, the RoboBee project has led to numerous innovations, including vision sensors for the bot, high-power-density piezoelectric actuators (ceramic strips that expand and contract when exposed to an electrical field), and a new kind of rapid manufacturing that involves layering laser-cut materials that fold like a pop-up book. The actuators assist with the bot’s wing-flapping, while the vision sensors monitor the world in relation to the RoboBee.

“Manufacturing took us quite a while. Then it was control, how do you design the thing so we can fly it around, and the next one is going to be power, how we develop and integrate power sources,” says Wood. In a paper recently published by Science, the researchers describe the RoboBee’s power quandary: it can fly for just 20 seconds–and that’s while it’s tethered to a power source. “Batteries don’t exist at the size that we would want,” explains Wood. The researchers explain further in the report: “If we implement on-board power with current technologies, we estimate no more than a few minutes of untethered, powered flight. Long duration power autonomy awaits advances in small, high-energy-density power sources.”

The RoboBees don’t last a particularly long time–Wood says the flight time is “on the order of tens of minutes”–but they can keep flapping their wings long enough for the Harvard researchers to learn everything they need to know from each successive generation of bots. For commercial applications, however, the RoboBees would need to be more durable.

Read the entire article here.

Image courtesy of Micro Air Vehicles Project, Harvard.

Leadership and the Tyranny of Big Data

“There are three kinds of lies: lies, damned lies, and statistics”, goes the adage popularized by author Mark Twain.

Most people take for granted that numbers can be persuasive — just take a look at your bank balance. Also, most accept the notion that data can be used, misused, misinterpreted, re-interpreted and distorted to support or counter almost any argument. Just listen to a politician quote polling numbers and then hear an opposing politician make a contrary argument using the very same statistics. Or, better still, familiarize yourself with the pseudo-science of economics.

Authors Kenneth Cukier (data editor for The Economist) and Viktor Mayer-Schönberger (professor of Internet governance) examine this phenomenon in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think. They eloquently present the example of Robert McNamara, U.S. defense secretary during the Vietnam war, who (in)famously used his detailed spreadsheets — including the daily body count — to manage and measure progress. After the war, many U.S. generals described this over-reliance on numbers as a misguided dictatorship of data that led many to make ill-informed decisions — based solely on the numbers — and to fudge their figures.

This classic example leads them to a timely and important caution: as the range and scale of big data grow ever greater, it may offer us great benefits, but it can and will also be used to mislead.

From Technology Review:

Big data is poised to transform society, from how we diagnose illness to how we educate children, even making it possible for a car to drive itself. Information is emerging as a new economic input, a vital resource. Companies, governments, and even individuals will be measuring and optimizing everything possible.

But there is a dark side. Big data erodes privacy. And when it is used to make predictions about what we are likely to do but haven’t yet done, it threatens freedom as well. Yet big data also exacerbates a very old problem: relying on the numbers when they are far more fallible than we think. Nothing underscores the consequences of data analysis gone awry more than the story of Robert McNamara.

McNamara was a numbers guy. Appointed the U.S. secretary of defense when tensions in Vietnam rose in the early 1960s, he insisted on getting data on everything he could. Only by applying statistical rigor, he believed, could decision makers understand a complex situation and make the right choices. The world in his view was a mass of unruly information that—if delineated, denoted, demarcated, and quantified—could be tamed by human hand and fall under human will. McNamara sought Truth, and that Truth could be found in data. Among the numbers that came back to him was the “body count.”

McNamara developed his love of numbers as a student at Harvard Business School and then as its youngest assistant professor at age 24. He applied this rigor during the Second World War as part of an elite Pentagon team called Statistical Control, which brought data-driven decision making to one of the world’s largest bureaucracies. Before this, the military was blind. It didn’t know, for instance, the type, quantity, or location of spare airplane parts. Data came to the rescue. Just making armament procurement more efficient saved $3.6 billion in 1943. Modern war demanded the efficient allocation of resources; the team’s work was a stunning success.

At war’s end, the members of this group offered their skills to corporate America. The Ford Motor Company was floundering, and a desperate Henry Ford II handed them the reins. Just as they knew nothing about the military when they helped win the war, so too were they clueless about making cars. Still, the so-called “Whiz Kids” turned the company around.

McNamara rose swiftly up the ranks, trotting out a data point for every situation. Harried factory managers produced the figures he demanded—whether they were correct or not. When an edict came down that all inventory from one car model must be used before a new model could begin production, exasperated line managers simply dumped excess parts into a nearby river. The joke at the factory was that a fellow could walk on water—atop rusted pieces of 1950 and 1951 cars.

McNamara epitomized the hyper-rational executive who relied on numbers rather than sentiments, and who could apply his quantitative skills to any industry he turned them to. In 1960 he was named president of Ford, a position he held for only a few weeks before being tapped to join President Kennedy’s cabinet as secretary of defense.

As the Vietnam conflict escalated and the United States sent more troops, it became clear that this was a war of wills, not of territory. America’s strategy was to pound the Viet Cong to the negotiation table. The way to measure progress, therefore, was by the number of enemy killed. The body count was published daily in the newspapers. To the war’s supporters it was proof of progress; to critics, evidence of its immorality. The body count was the data point that defined an era.

McNamara relied on the figures, fetishized them. With his perfectly combed-back hair and his flawlessly knotted tie, McNamara felt he could comprehend what was happening on the ground only by staring at a spreadsheet—at all those orderly rows and columns, calculations and charts, whose mastery seemed to bring him one standard deviation closer to God.

In 1977, two years after the last helicopter lifted off the rooftop of the U.S. embassy in Saigon, a retired Army general, Douglas Kinnard, published a landmark survey called The War Managers that revealed the quagmire of quantification. A mere 2 percent of America’s generals considered the body count a valid way to measure progress. “A fake—totally worthless,” wrote one general in his comments. “Often blatant lies,” wrote another. “They were grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara,” said a third.

Read the entire article after the jump.

Image: Robert McNamara at a cabinet meeting, 22 Nov 1967. Courtesy of Wikipedia / Public domain.

MondayMap: Your Taxes and Google Street View

The fear of an annual tax audit brings many people to their knees. It’s one of many techniques that government authorities use to milk their citizens of every last penny of tax. Well, authorities now have an even more powerful weapon to add to their tax-collecting arsenal — Google Street View. And if you are reading this from Lithuania, you will know what we are talking about.

From the Wall Street Journal:

One day last summer, a woman was about to climb into a hammock in the front yard of a suburban house here when a photographer for the Google Inc. Street View service snapped her picture.

The apparently innocuous photograph is now being used as evidence in a tax-evasion case brought by Lithuanian authorities against the undisclosed owners of the home.

Some European countries have been going after Google, complaining that the search giant is invading the privacy of their citizens. But tax inspectors here have turned to the prying eyes of Street View for their own purposes.

After Google’s car-borne cameras were driven through the Vilnius area last year, the tax men in this small Baltic nation got busy. They have spent months combing through footage looking for unreported taxable wealth.

“We were very impressed,” said Modestas Kaseliauskas, head of the State Tax Authority. “We realized that we could do more with less and in shorter time.”

More than 100 people have been identified so far after investigators compared Street View images of about 500 properties with state property registries looking for undeclared construction.

Two recent cases netted $130,000 in taxes and penalties after investigators found houses photographed by Google that weren’t on official maps.

From aerial surveillance to dedicated iPhone apps, cash-strapped governments across Europe are employing increasingly unconventional measures against tax cheats to raise revenue. In some countries, authorities have tried to enlist citizens to help keep watch. Customers in Greece, for instance, are insisting on getting receipts for what they buy.

For Lithuania, which only two decades ago began its transition away from communist central planning and remains one of the poorest countries in the European Union, Street View has been a big help. After the global financial crisis struck in 2008, belt tightening cut the tax authority’s budget by a third. A quarter of its employees were let go, leaving it with fewer resources just as it was being asked to do more.

“We were pressured to increase tax revenue,” said the authority’s Mr. Kaseliauskas.

Street View has let Mr. Kaseliauskas’s team see things it would have otherwise missed. Its images are better—and cheaper—than aerial photos, which authorities complain often aren’t clear enough to be useful.

Sitting in their city office 10 miles away, they were able to detect that, contrary to official records, the house with the hammock existed and that, in one photograph, three cars were parked in the driveway.

An undeclared semidetached house owned by the former board chairman of Bank Snoras, Raimundas Baranauskas, was recently identified using Street View and is estimated by the government to be worth about $260,000. Authorities knew Mr. Baranauskas owned land there, but not buildings. A quick look online led to the discovery of several houses on his land, in a quiet residential street of Vilnius.

Read the entire article here.

Image courtesy of (who else?), Google Maps.

Big Data and Even Bigger Problems

First a definition. Big data: typically a collection of large and complex datasets that are too cumbersome to process and analyze using traditional computational approaches and database applications. Usually the big data moniker is accompanied by an IT vendor’s pitch for a shiny new software (and possibly hardware) solution able to crunch through petabytes (one petabyte is a million gigabytes) of data and produce a visualizable result that mere mortals can decipher.

Many companies see big data and related solutions as a panacea for a range of business challenges: customer service, medical diagnostics, product development, shipping and logistics, climate change studies, genomic analysis and so on. A great example was the last U.S. election. Many political wonks — from both sides of the aisle — agreed that President Obama’s re-election was aided significantly by big data. So, with that in mind, many are now turning big data loose on more important problems.

From Technology Review:

As chief scientist for President Obama’s reëlection effort, Rayid Ghani helped revolutionize the use of data in politics. During the final 18 months of the campaign, he joined a sprawling team of data and software experts who sifted, collated, and combined dozens of pieces of information on each registered U.S. voter to discover patterns that let them target fund-raising appeals and ads.

Now, with Obama again ensconced in the Oval Office, some veterans of the campaign’s data squad are applying lessons from the campaign to tackle social issues such as education and environmental stewardship. Edgeflip, a startup Ghani founded in January with two other campaign members, plans to turn the ad hoc data analysis tools developed for Obama for America into software that can make nonprofits more effective at raising money and recruiting volunteers.

Ghani isn’t the only one thinking along these lines. In Chicago, Ghani’s hometown and the site of Obama for America headquarters, some campaign members are helping the city make available records of utility usage and crime statistics so developers can build apps that attempt to improve life there. It’s all part of a bigger idea to engineer social systems by scanning the numerical exhaust from mundane activities for patterns that might bear on everything from traffic snarls to human trafficking. Among those pursuing such humanitarian goals are startups like DataKind as well as large companies like IBM, which is redrawing bus routes in Ivory Coast (see “African Bus Routes Redrawn Using Cell-Phone Data”), and Google, with its flu-tracking software (see “Sick Searchers Help Track Flu”).

Ghani, who is 35, has had a longstanding interest in social causes, like tutoring disadvantaged kids. But he developed his data-mining savvy during 10 years as director of analytics at Accenture, helping retail chains forecast sales, creating models of consumer behavior, and writing papers with titles like “Data Mining for Business Applications.”

Before joining the Obama campaign in July 2011, Ghani wasn’t even sure his expertise in machine learning and predicting online prices could have an impact on a social cause. But the campaign’s success in applying such methods on the fly to sway voters is now recognized as having been potentially decisive in the election’s outcome (see “A More Perfect Union”).

“I realized two things,” says Ghani. “It’s doable at the massive scale of the campaign, and that means it’s doable in the context of other problems.”

At Obama for America, Ghani helped build statistical models that assessed each voter along five axes: support for the president; susceptibility to being persuaded to support the president; willingness to donate money; willingness to volunteer; and likelihood of casting a vote. These models allowed the campaign to target door knocks, phone calls, TV spots, and online ads to where they were most likely to benefit Obama.
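[ed: As a rough illustration of what drives decisions from such a model, here is a toy sketch in Python; the field names, thresholds and contact rules are our invention, not the campaign’s actual schema.]

```python
from dataclasses import dataclass

# Toy sketch of a five-axis voter record and a targeting rule. Field names,
# thresholds and contact choices are invented, not the campaign's schema.

@dataclass
class VoterScores:
    support: float         # probability of supporting the candidate
    persuadability: float  # probability a contact changes their mind
    donate: float          # probability of donating if asked
    volunteer: float       # probability of volunteering if asked
    turnout: float         # probability of actually casting a vote

def next_contact(v: VoterScores) -> str:
    """Pick the contact most likely to add votes or resources (hypothetical rules)."""
    if v.support > 0.8 and v.turnout < 0.4:
        return "door knock: get-out-the-vote"
    if v.persuadability > 0.6 and v.turnout > 0.5:
        return "phone call: persuasion script"
    if v.support > 0.7 and v.donate > 0.5:
        return "email: donation ask"
    return "no contact"

print(next_contact(VoterScores(0.9, 0.2, 0.1, 0.3, 0.25)))  # door knock
```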

One of the most important ideas he developed, dubbed “targeted sharing,” now forms the basis of Edgeflip’s first product. It’s a Facebook app that prompts people to share information from a nonprofit, but only with those friends predicted to respond favorably. That’s a big change from the usual scattershot approach of posting pleas for money or help and hoping they’ll reach the right people.

Edgeflip’s app, like the one Ghani conceived for Obama, will ask people who share a post to provide access to their list of friends. This will pull in not only friends’ names but also personal details, like their age, that can feed models of who is most likely to help.

Say a hurricane strikes the southeastern United States and the Red Cross needs clean-up workers. The app would ask Facebook users to share the Red Cross message, but only with friends who live in the storm zone, are young and likely to do manual labor, and have previously shown interest in content shared by that user. But if the same person shared an appeal for donations instead, he or she would be prompted to pass it along to friends who are older, live farther away, and have donated money in the past.
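[ed: That filtering step reads almost like a predicate over a friend list. A toy sketch, with invented friend attributes and rule-of-thumb tests standing in for Edgeflip’s real predictive models:]

```python
# Hedged sketch of "targeted sharing": show an appeal only to the friends
# predicted to respond to it. The attributes and rules below are invented.

friends = [
    {"name": "Ana",  "age": 24, "state": "FL", "past_donor": False, "engages_with_me": True},
    {"name": "Bo",   "age": 61, "state": "NY", "past_donor": True,  "engages_with_me": False},
    {"name": "Cris", "age": 29, "state": "GA", "past_donor": False, "engages_with_me": True},
]

STORM_ZONE = {"FL", "GA", "SC"}

def likely_volunteer(f):
    return f["state"] in STORM_ZONE and f["age"] < 40 and f["engages_with_me"]

def likely_donor(f):
    return f["past_donor"] and f["state"] not in STORM_ZONE

def targets(friends, appeal):
    rule = likely_volunteer if appeal == "volunteer" else likely_donor
    return [f["name"] for f in friends if rule(f)]

print(targets(friends, "volunteer"))  # ['Ana', 'Cris']
print(targets(friends, "donate"))     # ['Bo']
```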

Michael Slaby, a senior technology official for Obama who hired Ghani for the 2012 election season, sees great promise in the targeted sharing technique. “It’s one of the most compelling innovations to come out of the campaign,” says Slaby. “It has the potential to make online activism much more efficient and effective.”

For instance, Ghani has been working with Fidel Vargas, CEO of the Hispanic Scholarship Fund, to increase that organization’s analytical savvy. Vargas thinks social data could predict which scholarship recipients are most likely to contribute to the fund after they graduate. “Then you’d be able to give away scholarships to qualified students who would have a higher probability of giving back,” he says. “Everyone would be much better off.”

Ghani sees a far bigger role for technology in the social sphere. He imagines online petitions that act like open-source software, getting passed around and improved. Social programs, too, could get constantly tested and improved. “I can imagine policies being designed a lot more collaboratively,” he says. “I don’t know if the politicians are ready to deal with it.” He also thinks there’s a huge amount of untapped information out there about childhood obesity, gang membership, and infant mortality, all ready for big data’s touch.

Read the entire article here.

Infographic courtesy of visua.ly. See the original here.

You Can Check Out Anytime You Like…

“… But You Can Never Leave.” So goes one of the most memorable lyrical phrases from the Eagles’ “Hotel California”.

Of late, it seems that this state of affairs also applies to a vast collection of people on Facebook; many wish to leave but lack the social capital or wisdom or backbone to do so.

From the Washington Post:

Bad news, everyone. We’re trapped. We may well be stuck here for the rest of our lives. I hope you brought canned goods.

A dreary line of tagged pictures and status updates stretches before us from here to the tomb.

Like life, Facebook seems to get less exciting the longer we spend there. And now everyone hates Facebook, officially.

Last week, Pew reported that 94 percent of teenagers are on Facebook, but that they are miserable about it. Then again, when are teenagers anything else? Pew’s focus groups of teens complained about the drama, said Twitter felt more natural, said that it seemed like a lot of effort to keep up with everyone you’d ever met, found the cliques and competition for friends off-putting –

All right, teenagers. You have a point. And it doesn’t get better.

The trouble with Facebook is that 94 percent of people are there. Anything with 94 Percent of People involved ceases to have a personality and becomes a kind of public utility. There’s no broad generalization you can make about people who use flush toilets. Sure, toilets are a little odd, and they become quickly ridiculous when you stare at them long enough, the way a word used too often falls apart into meaningless letters under scrutiny, but we don’t think of them as peculiar. Everyone’s got one. The only thing weirder than having one of those funny porcelain thrones in your home would be not having one.

Facebook is like that, and not just because we deposit the same sort of thing in both. It used to define a particular crowd. But it’s no longer the bastion of college students and high schoolers avoiding parental scrutiny. Mom’s there. Heck, Velveeta Cheesy Skillets are there.

It’s just another space in which all the daily drama of actual life plays out. All the interactions that used only to be annoying to the people in the room with you at the time are now played out indelibly in text and pictures that can be seen from great distances by anyone who wants to take an afternoon and stalk you. Oscar Wilde complained about married couples who flirted with each other, saying that it was like washing clean linen in public. Well, just look at the wall exchanges of You Know The Couple I Mean. “Nothing is more irritating than not being invited to a party you wouldn’t be seen dead at,” Bill Vaughan said. On Facebook, that’s magnified to parties in entirely different states.

Facebook has been doing its best to approximate our actual social experience — that creepy foray into chairs aside. But what it forgot was that our actual social experience leaves much to be desired. After spending time with Other People smiling politely at news of what their sonograms are doing, we often want to rush from the room screaming wordlessly and bang our heads into something.

Hell is other people, updating their statuses with news that Yay The Strange Growth Checked Out Just Fine.

This is the point where someone says, “Well, if it’s that annoying, why don’t you unsubscribe?”

But you can’t.

Read the entire article here.

Image: Facebook logo courtesy of Mirror / Facebook.

Friendships of Utility

The average Facebook user is said to have 142 “friends”, and many active members have over 500. This certainly seems to be a textbook case of quantity over quality in the increasingly competitive status wars and popularity stakes of online neo- or pseudo-celebrity. That said, and regardless of your relationship with online social media, one good thing to come from the likes — a small pun intended — of Facebook is that social scientists can now dissect and analyze your online behaviors and relationships as never before.

So, while Facebook and its peers may not represent a qualitative leap in human relationships, the data and experiences that come from them may help future generations figure out what is truly important.

From the Wall Street Journal:

Facebook has made an indelible mark on my generation’s concept of friendship. The average Facebook user has 142 friends (many people I know have upward of 500). Without Facebook many of us “Millennials” wouldn’t know what our friends are up to or what their babies or boyfriends look like. We wouldn’t even remember their birthdays. Is this progress?

Aristotle wrote that friendship involves a degree of love. If we were to ask ourselves whether all of our Facebook friends were those we loved, we’d certainly answer that they’re not. These days, we devote equal if not more time to tracking the people we have had very limited human interaction with than to those whom we truly love. Aristotle would call the former “friendships of utility,” which, he wrote, are “for the commercially minded.”

I’d venture to guess that at least 90% of Facebook friendships are those of utility. Knowing this instinctively, we increasingly use Facebook as a vehicle for self-promotion rather than as a means to stay connected to those whom we love. Instead of sharing our lives, we compare and contrast them, based on carefully calculated posts, always striving to put our best face forward.

Friendship also, as Aristotle described it, can be based on pleasure. All of the comments, well-wishes and “likes” we can get from our numerous Facebook friends may give us pleasure. But something feels false about this. Aristotle wrote: “Those who love for the sake of pleasure do so for the sake of what is pleasant to themselves, and not insofar as the other is the person loved.” Few of us expect the dozens of Facebook friends who wish us a happy birthday ever to share a birthday celebration with us, let alone care for us when we’re sick or in need.

One thing’s for sure, my generation’s friendships are less personal than my parents’ or grandparents’ generation. Since we can rely on Facebook to manage our friendships, it’s easy to neglect more human forms of communication. Why visit a person, write a letter, deliver a card, or even pick up the phone when we can simply click a “like” button?

The ultimate form of friendship is described by Aristotle as “virtuous”—meaning the kind that involves a concern for our friend’s sake and not for our own. “Perfect friendship is the friendship of men who are good, and alike in virtue . . . . But it is natural that such friendships should be infrequent; for such men are rare.”

Those who came before the Millennial generation still say as much. My father and grandfather always told me that the number of such “true” friends can be counted on one hand over the course of a lifetime. Has Facebook increased our capacity for true friendship? I suspect Aristotle would say no.

Ms. Kelly joined Facebook in 2004 and quit in 2013.

Read the entire article here.

Pain Ray

We humans are capable of the most sublime creations, from soaring literary inventions to intensely moving music and gorgeous works of visual art. This stands in stark and paradoxical contrast to our range of inventions that enable efficient mass destruction, torture and death. The latest in this sad catalog of human tools of terror is the “pain ray”, otherwise known by its military euphemism as an Active Denial weapon. The good news is that it only delivers intense pain, rather than death. How inventive we humans really are — we should be so proud.

[tube]J1w4g2vr7B4[/tube]

From the New Scientist:

THE pain, when it comes, is unbearable. At first it’s comparable to a hairdryer blast on the skin. But within a couple of seconds, most of the body surface feels roasted to an excruciating degree. Nobody has ever resisted it: the deep-rooted instinct to writhe and escape is too strong.

The source of this pain is an entirely new type of weapon, originally developed in secret by the US military – and now ready for use. It is a genuine pain ray, designed to subdue people in war zones, prisons and riots. Its name is Active Denial. In the last decade, no other non-lethal weapon has had as much research and testing, and some $120 million has already been spent on development in the US.

Many want to shelve this pain ray before it is fired for real but the argument is far from cut and dried. Active Denial’s supporters claim that its introduction will save lives: the chances of serious injury are tiny, they claim, and it causes less harm than tasers, rubber bullets or batons. It is a persuasive argument. Until, that is, you bring the dark side of human nature into the equation.

The idea for Active Denial can be traced back to research on the effects of radar on biological tissue. Since the 1940s, researchers have known that the microwave radiation produced by radar devices at certain frequencies could heat the skin of bystanders. But attempts to use such microwave energy as a non-lethal weapon only began in the late 1980s, in secret, at the Air Force Research Laboratory (AFRL) at Kirtland Air Force Base in Albuquerque, New Mexico.

The first question facing the AFRL researchers was whether microwaves could trigger pain without causing skin damage. Radiation equivalent to that used in oven microwaves, for example, was out of the question since it penetrates deep into objects, and causes cells to break down within seconds.

The AFRL team found that the key was to use millimetre waves, very-short-wavelength microwaves, with a frequency of about 95 gigahertz. By conducting tests on human volunteers, they discovered that these waves would penetrate only the outer 0.4 millimetres of skin, because they are absorbed by water in surface tissue. So long as the beam power was capped – keeping the energy per square centimetre of skin below a certain level – the tissue temperature would not exceed 55 °C, which is just below the threshold for damaging cells (Bioelectromagnetics, vol 18, p 403).
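[ed: A quick back-of-the-envelope check, ours not the article’s: \( \lambda = c/f = (3\times10^{8}\,\mathrm{m/s})/(95\times10^{9}\,\mathrm{Hz}) \approx 3.2\,\mathrm{mm} \), which is why these are “millimetre waves”; the quoted 0.4 mm penetration depth is then only about an eighth of a wavelength, so the energy is absorbed almost entirely in the outermost layer of skin.]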

The sensation, however, was extremely painful, because the outer skin holds a type of pain receptor called thermal nociceptors. These respond rapidly to threats and trigger reflexive “repel” reactions when stimulated (see diagram).

To build a weapon, the next step was to produce a high-power beam capable of reaching hundreds of metres. At the time, it was possible to beam longer-wavelength microwaves over great distances – as with radar systems – but it was not feasible to use the same underlying technology to produce millimetre waves.

Working with the AFRL, the military contractor Raytheon Company, based in Waltham, Massachusetts, built a prototype with a key bit of hardware: a gyrotron, a device for amplifying millimetre microwaves. Gyrotrons generate a rotating ring of electrons, held in a magnetic field by powerful cryogenically cooled superconducting magnets. The frequency at which these electrons rotate matches the frequency of millimetre microwaves, causing a resonating effect. The souped-up millimetre waves then pass to an antenna, which fires the beam.
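[ed: The resonance condition explains the powerful magnets. Ignoring relativistic corrections and assuming operation at the fundamental electron-cyclotron frequency, \( B = 2\pi m_{e} f / e = 2\pi \times 9.11\times10^{-31}\,\mathrm{kg} \times 95\times10^{9}\,\mathrm{Hz} / 1.60\times10^{-19}\,\mathrm{C} \approx 3.4\,\mathrm{T} \) — tens of thousands of times the strength of the Earth’s magnetic field, hence the cryogenically cooled superconducting coils.]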

The first working prototype of the Active Denial weapon, dubbed “System 0”, was completed in 2000. At 7.5 tonnes, it was too big to be easily transported. A few years later, it was followed by mobile versions that could be carried on heavy vehicles.

Today’s Active Denial device, designed for military use, looks similar to a large, flat satellite dish mounted on a truck. The microwave beam it produces has a diameter of about 2 metres and can reach targets several hundred metres away. It fires in bursts of about 3 to 5 seconds.

Those who have been at the wrong end of the beam report that the pain is impossible to resist. “You might think you can withstand getting blasted. Your body disagrees quite strongly,” says Spencer Ackerman, a reporter for Wired magazine’s blog, Danger Room. He stood in the beam at an event arranged for the media last year. “One second my shoulder and upper chest were at a crisp, early-spring outdoor temperature on a Virginia field. Literally the next second, they felt like they were roasted, with what can be likened to a super-hot tingling feeling. The sensation causes your nerves to take control of your feeble consciousness, so it wasn’t like I thought getting out of the way of the beam was a good idea – I did what my body told me to do.” There’s also little chance of shielding yourself; the waves penetrate clothing.

Read the entire article here.

Related video courtesy of CBS 60 Minutes.

Please Press 1 to Avoid Phone Menu Hell

Good customer service once meant that a store or service employee would know you by name. This person would know your previous purchasing habits and your preferences; this person would know the names of your kids and your dog. Great customer service once meant that an employee could use this knowledge to anticipate your needs or personalize a specific deal. Well, this type of service still exists — in some places — but many businesses have outsourced it to offshore call center personnel or to machines, or both. Service may seem personal, but it isn’t — it is customized to suit your profile, not personal in the sense that once held true.

And, to rub more salt into the customer service wound, businesses now use their automated phone systems seemingly to shield themselves from you, rather than to provide you with the service you want. After all, when was the last time you managed to speak to a real customer service employee after making it through “please press 1 for English”, the poor choice of muzak or sponsored ads, and the never-ending phone menus?

Now thanks to an enterprising and extremely patient soul there is an answer to phone menu hell.

Welcome to Please Press 1. Founded by Nigel Clarke (an alumnus of the 400-year-old Dame Alice Owen’s School in London), Please Press 1 provides shortcuts through customer service phone menus for many of the top businesses in Britain [ed: we desperately need this service in the United States].


From the MailOnline:

A frustrated IT manager who has spent seven years making 12,000 calls to automated phone centres has launched a new website listing ‘short cut’ codes which can shave up to eight minutes off calls.

Nigel Clarke, 53, has painstakingly catalogued the intricate phone menus of hundreds of leading multi-national companies – some of which have up to 80 options.

He has now formulated his results into the website pleasepress1.com, which lists which number options to press to reach the desired department.

The father-of-three, from Fawkham, Kent, reckons the free service can save consumers more than eight minutes by cutting out up to seven menu options.

For example, a Lloyds TSB home insurance customer who wishes to report a water leak would normally have to wade through 78 menu options over seven levels to get through to the correct department.

But the new service informs callers that the combination 1-3-2-1-1-5-4 will get them straight through – saving over four minutes of waiting.
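[ed: Under the hood a phone menu is just a tree, and a shortcut is a pre-recorded path through it. A toy sketch — the menu labels are invented; the real directory presumably just stores the digit strings against each company and task:]

```python
# Toy sketch: a phone menu as a nested dict (a tree), and a "shortcut" as a
# pre-recorded path of keypresses through it. Labels are invented for
# illustration; a real directory presumably just stores the digit strings.

def build_branch(labels):
    """Build a single chain of menu levels from a list of (digit, label) pairs."""
    root = {}
    node = root
    for digit, label in labels:
        node[digit] = {"label": label}
        node = node[digit]
    return root

menu = build_branch([
    ("1", "Insurance"),
    ("3", "Home insurance"),
    ("2", "Claims"),
    ("1", "New claim"),
    ("1", "Buildings"),
    ("5", "Escape of water"),
    ("4", "Speak to an agent"),
])

def resolve(tree, shortcut):
    """Follow a dash-separated shortcut; a KeyError means the menu has changed."""
    node = tree
    for digit in shortcut.split("-"):
        node = node[digit]
    return node["label"]

print(resolve(menu, "1-3-2-1-1-5-4"))   # -> Speak to an agent
```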

Mr Clarke reckons the service could save consumers up to one billion minutes a year.

He said: ‘Everyone knows that calling your insurance or gas company is a pain but for most, it’s not an everyday problem.

‘However, the cumulative effect of these calls is really quite devastating when you’re moving house or having an issue.

‘I’ve been working in IT for over 30 years and nothing gets me riled up like having my time wasted through inefficient design.

‘This is why I’ve devoted the best part of seven years to solving this issue.’

Mr Clarke describes call centre menu options as the ‘modern equivalent of Dante’s circles of hell’.

He cites HMRC as one of the worst offenders, where callers can take up to six minutes to reach the correct department.

As one of the UK’s busiest call centres, the Revenue receives 79 million calls per year, or a potential 4.3 million working hours just navigating menus.

Mr Clarke believes that with better menu design, at least three million caller hours could be saved here alone.
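[ed: The arithmetic behind those figures, assuming the averages the article implies: \( 79\times10^{6} \) calls \(\times\) roughly 3.3 minutes of menus per call \( \approx 260\times10^{6} \) minutes \( \approx 4.3\times10^{6} \) hours a year; saving 3 million of those hours means trimming about \( 180\times10^{6}/79\times10^{6} \approx 2.3 \) minutes from the average call — plausible if a seven-level menu collapses to one known shortcut.]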

He began his quest seven years ago as a self-confessed ‘call centre menu enthusiast’.

‘The idea began with the frustration of being met with a seemingly endless list of menu options,’ he said.

‘Whether calling my phone, insurance or energy company, they each had a different and often worse way of trying to “help” me.

‘I could sit there for minutes that seemed like hours, trying to get through their phone menus only to end up at the wrong place and having to redial and start again.’

He began noting down the menu options and soon realised he could shave several minutes off the waiting time.

Mr Clarke said: ‘When I called numbers regularly, I started keeping notes of the options to press. The numbers didn’t change very often and then it hit me.

Read the entire article here and visit Please Press 1, here.

Images courtesy of Time and Please Press 1.

The Internet of Things and Your (Lack of) Privacy

Ubiquitous connectivity for, and between, individuals and businesses is widely held to be beneficial for all concerned. We can connect rapidly and reliably with family, friends and colleagues from almost anywhere to anywhere via a wide array of internet-enabled devices. Yet, as these devices become more powerful and interconnected, and enabled with location-based awareness, such as GPS (Global Positioning System) services, we are likely to face an increasingly acute dilemma — connectedness or privacy?

From the Guardian:

The internet has turned into a massive surveillance tool. We’re constantly monitored on the internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated – sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behaviour, it’s still only behaviour that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is internet-enabled. Soon there will be internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be internet-connected tags on our clothing. In its extreme, everything can be connected to the internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies – and anyone they decide to sell the data to – to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s internet-enabled glasses – Google Glass – are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behaviour and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals – not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyse this massive data stream will improve.

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days – and nights – with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Read the entire article here.

Image: Big Brother, 1984. Poster. Courtesy of Telegraph.

Media Multi-Tasking, School Work and Poor Memory

It’s official — teens can’t stay off social media for more than 15 minutes. It’s no secret that many kids aged between 8 and 18 spend most of their time texting, tweeting and checking their real-time social status. The profound psychological and sociological consequences of this behavior will only start to become apparent ten to fifteen years from now. In the meantime, researchers are finding a general degradation in kids’ memory skills from using social media and multi-tasking while studying.

From Slate:

Living rooms, dens, kitchens, even bedrooms: Investigators followed students into the spaces where homework gets done. Pens poised over their “study observation forms,” the observers watched intently as the students—in middle school, high school, and college, 263 in all—opened their books and turned on their computers.

For a quarter of an hour, the investigators from the lab of Larry Rosen, a psychology professor at California State University–Dominguez Hills, marked down once a minute what the students were doing as they studied. A checklist on the form included: reading a book, writing on paper, typing on the computer—and also using email, looking at Facebook, engaging in instant messaging, texting, talking on the phone, watching television, listening to music, surfing the Web. Sitting unobtrusively at the back of the room, the observers counted the number of windows open on the students’ screens and noted whether the students were wearing earbuds.

Although the students had been told at the outset that they should “study something important, including homework, an upcoming examination or project, or reading a book for a course,” it wasn’t long before their attention drifted: Students’ “on-task behavior” started declining around the two-minute mark as they began responding to arriving texts or checking their Facebook feeds. By the time the 15 minutes were up, they had spent only about 65 percent of the observation period actually doing their schoolwork.

“We were amazed at how frequently they multitasked, even though they knew someone was watching,” Rosen says. “It really seems that they could not go for 15 minutes without engaging their devices,” adding, “It was kind of scary, actually.”

Concern about young people’s use of technology is nothing new, of course. But Rosen’s study, published in the May issue of Computers in Human Behavior, is part of a growing body of research focused on a very particular use of technology: media multitasking while learning. Attending to multiple streams of information and entertainment while studying, doing homework, or even sitting in class has become common behavior among young people—so common that many of them rarely write a paper or complete a problem set any other way.

But evidence from psychology, cognitive science, and neuroscience suggests that when students multitask while doing schoolwork, their learning is far spottier and shallower than if the work had their full attention. They understand and remember less, and they have greater difficulty transferring their learning to new contexts. So detrimental is this practice that some researchers are proposing that a new prerequisite for academic and even professional success—the new marshmallow test of self-discipline—is the ability to resist a blinking inbox or a buzzing phone.

The media multitasking habit starts early. In “Generation M2: Media in the Lives of 8- to 18-Year-Olds,” a survey conducted by the Kaiser Family Foundation and published in 2010, almost a third of those surveyed said that when they were doing homework, “most of the time” they were also watching TV, texting, listening to music, or using some other medium. The lead author of the study was Victoria Rideout, then a vice president at Kaiser and now an independent research and policy consultant. Although the study looked at all aspects of kids’ media use, Rideout told me she was particularly troubled by its findings regarding media multitasking while doing schoolwork.

“This is a concern we should have distinct from worrying about how much kids are online or how much kids are media multitasking overall. It’s multitasking while learning that has the biggest potential downside,” she says. “I don’t care if a kid wants to tweet while she’s watching American Idol, or have music on while he plays a video game. But when students are doing serious work with their minds, they have to have focus.”

For older students, the media multitasking habit extends into the classroom. While most middle and high school students don’t have the opportunity to text, email, and surf the Internet during class, studies show the practice is nearly universal among students in college and professional school. One large survey found that 80 percent of college students admit to texting during class; 15 percent say they send 11 or more texts in a single class period.

During the first meeting of his courses, Rosen makes a practice of calling on a student who is busy with his phone. “I ask him, ‘What was on the slide I just showed to the class?’ The student always pulls a blank,” Rosen reports. “Young people have a wildly inflated idea of how many things they can attend to at once, and this demonstration helps drive the point home: If you’re paying attention to your phone, you’re not paying attention to what’s going on in class.” Other professors have taken a more surreptitious approach, installing electronic spyware or planting human observers to record whether students are taking notes on their laptops or using them for other, unauthorized purposes.

Read the entire article here.

Image courtesy of Examiner.

Google’s AI

The collective IQ of Google, the company, inched up a few notches in January 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and his many colleagues pays off, the company’s intelligence may surge significantly. This time, though, it will be thanks to their work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
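[ed: The mechanics are easy to demonstrate at toy scale. The sketch below trains one simulated “neuron” — random initial weights, a sigmoid output between 0 and 1, and a weight-nudging rule — to separate two invented four-pixel patterns; everything about the data is for illustration only.]

```python
import numpy as np

# One simulated "neuron": random initial weights, a sigmoid output between
# 0 and 1, and an update rule that nudges the weights whenever the output
# misses the target. The four-pixel "images" are invented for illustration.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[1, 1, 0, 0],    # "edge on the left" patterns  -> target 1
              [1, 0, 0, 0],
              [0, 0, 1, 1],    # "edge on the right" patterns -> target 0
              [0, 0, 0, 1]], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)

w = rng.normal(scale=0.1, size=4)          # random starting weights
b = 0.0

for _ in range(2000):
    out = sigmoid(X @ w + b)               # outputs between 0 and 1
    grad = (y - out) * out * (1 - out)     # how much (and which way) to nudge
    w += 0.5 * X.T @ grad
    b += 0.5 * grad.sum()

print(np.round(sigmoid(X @ w + b), 2))     # approaches [1. 1. 0. 0.]
```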

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade ­Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
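[ed: A heavily simplified sketch of that layer-by-layer idea, using tiny autoencoders as a stand-in for the restricted Boltzmann machines Hinton actually used; the data is random and the dimensions toy-sized, so it only illustrates the mechanics.]

```python
import numpy as np

# Greedy layer-by-layer training, sketched with tiny autoencoders standing in
# for the restricted Boltzmann machines used in the 2006 work. Each layer is
# trained to reconstruct its own input; its outputs then become the next
# layer's input. Data and sizes are toy values for illustration only.

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=3000):
    """Train one layer to reconstruct X; return its encoder weights."""
    n_in = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    for _ in range(epochs):
        H = sigmoid(X @ W_enc)              # this layer's "features"
        R = sigmoid(H @ W_dec)              # attempted reconstruction of X
        dR = (R - X) * R * (1 - R)          # gradient of squared error
        dH = (dR @ W_dec.T) * H * (1 - H)
        W_dec -= lr * H.T @ dR / len(X)
        W_enc -= lr * X.T @ dH / len(X)
    return W_enc

X = rng.integers(0, 2, size=(64, 8)).astype(float)   # 64 toy binary "images"

W1 = train_autoencoder(X, 4)           # layer 1 learns primitive features
H1 = sigmoid(X @ W1)
W2 = train_autoencoder(H1, 2)          # layer 2 learns combinations of them
H2 = sigmoid(H1 @ W2)
print(H2.shape)                        # (64, 2): a two-number code per image
```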

Read the entire fascinating article following the jump.

Image courtesy of Wired.

Off World Living

Will humanity ever transcend gravity to become a space-faring race? A simple napkin-based calculation will give you the answer.

From Scientific American:

Optimistic visions of a human future in space seem to have given way to a confusing mix of possibilities, maybes, ifs, and buts. It’s not just the fault of governments and space agencies; basic physics is in part the culprit. Hoisting mass away from Earth is tremendously difficult, and thus far in fifty years we’ve barely managed a total equivalent to a large oil tanker. But there’s hope.

Back in the 1970s the physicist Gerard O’Neill and his students investigated concepts of vast orbital structures capable of sustaining entire human populations. It was the tail end of the Apollo era, and despite the looming specter of budget restrictions and terrestrial pessimism there was still a sense of what might be, what could be, and what was truly within reach.

The result was a series of blueprints for habitats that solved all manner of problems for space life, from artificial gravity (spin up giant cylinders), to atmospheres, and radiation (let the atmosphere shield you). They’re pretty amazing, and they’ve remained perhaps one of the most optimistic visions of a future where we expand beyond the Earth.

But there’s a lurking problem, and it comes down to basic physics. It is awfully hard to move stuff from the surface of our planet into orbit or beyond. O’Neill knew this, as does anyone else who’s thought of grand space schemes. The solution is to ‘live off the land’, extracting raw materials either from the Moon, with its shallower gravity well, or from asteroids. To get to that point, though, we’d still have to loft an awful lot of stuff into space – the basic tools and infrastructure have to start somewhere.

And there’s the rub. To put it into perspective I took a look at the amount of ‘stuff’ we’ve managed to get off Earth in the past 50-60 years. It’s actually pretty hard to evaluate; lots of the mass we send up comes back down in short order – either as spent rocket stages or as short-lived low-altitude satellites. But we can still get a feel for it.

To start with, a lower limit on the mass hoisted to space is the present-day artificial satellite population. Altogether there are in excess of 3,000 satellites up there, plus vast amounts of small debris. Current estimates suggest this amounts to a total of around 6,000 metric tons. The biggest single structure is the International Space Station, currently coming in at about 450 metric tons (about 992,000 lb for reference).

These numbers don’t reflect launch mass – the total of a rocket + payload + fuel. To put that into context, a fully loaded Saturn V was about 2,000 metric tons, but most of that was fuel.

When the Space Shuttle flew it amounted to about 115 metric tons (Shuttle + payload) making it into low-Earth orbit. Since there were 135 launches of the Shuttle that amounts to a total hoisted mass of about 15,000 metric tons over a 30 year period.
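[ed: A quick check using the article’s own round numbers: \( 135 \) flights \( \times 115\,\mathrm{t} \approx 15{,}500\,\mathrm{t} \) to low-Earth orbit over 30 years, or roughly 500 tonnes a year — consistent with only about 6,000 t of satellites still aloft, since most of what goes up eventually comes back down.]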

Read the entire article after the jump.

Image: A pair of O’Neill cylinders. NASA ID number AC75-1085. Courtesy of NASA / Wikipedia.

Ray Kurzweil and Living a Googol Years

By all accounts serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the flatbed scanner, optical character recognition and pioneering music synthesizers. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’ “

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.

Read the entire article after the jump.

Cheap Hydrogen

Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scale. Typically, hydrogen is produced by reacting high-temperature steam with methane from natural gas. A small fraction of production, less than five percent annually, is made through electrolysis — passing an electric current through water.
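For reference, the conventional route is steam-methane reforming followed by the water-gas shift — which is also where the carbon emissions come from:

\[
\mathrm{CH_4 + H_2O \;\rightarrow\; CO + 3H_2} \qquad\qquad \mathrm{CO + H_2O \;\rightarrow\; CO_2 + H_2}
\]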

This new method of production appears to be less costly, less dangerous and also more environmentally sound.

From the Independent:

Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.

The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.

Unlike fossil fuels, hydrogen can be burned to produce energy without producing emissions. It is also the most abundant element on the planet.

Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.

In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.

Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.

Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”

Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.

But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.

Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.
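
[A schematic of the decoupling, in the blog’s shorthand rather than the paper’s exact chemistry: write M for the “electron-coupled proton buffer”, which soaks up the electrons and protons that would otherwise leave as hydrogen gas. Oxygen comes off first; the loaded buffer gives up its hydrogen later, on demand.]

\[
\mathrm{2\,H_2O + 2\,M \rightarrow O_2 + 2\,MH_2} \qquad \text{(step 1: only oxygen is released)}
\]
\[
\mathrm{2\,MH_2 \rightarrow 2\,M + 2\,H_2} \qquad \text{(step 2: hydrogen released when convenient)}
\]
\[
\text{overall:}\quad \mathrm{2\,H_2O \rightarrow 2\,H_2 + O_2}
\]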

Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”

Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”

Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”

Read the entire article following the jump.

The Digital Afterlife and i-Death

Leave it to Google to help you auto-euthanize and die digitally. The presence of our online selves after death was of limited concern until recently. However, with the explosion of online media and social networks our digital tracks remain preserved and scattered across drives and backups in distributed, anonymous data centers. Physical death does not change this.

[A case in point: your friendly editor at theDiagonal was recently asked to befriend a colleague via LinkedIn. All well and good, except that the colleague had passed away two years earlier.]

So, armed with Google’s new Inactive Account Manager, death — at least online — may be just a couple of clicks away. As a corollary, it is a small leap indeed to imagine an enterprising company charging the dearly departed an annual fee to maintain their digital afterlife ad infinitum.

From the Independent:

The search engine giant Google has announced a new feature designed to allow users to decide what happens to their data after they die.

The feature, which applies to the Google-run email system Gmail as well as Google Plus, YouTube, Picasa and other tools, represents an attempt by the company to be the first to deal with the sensitive issue of data after death.

In a post on the company’s Public Policy Blog Andreas Tuerk, Product Manager, writes: “We hope that this new feature will enable you to plan your digital afterlife – in a way that protects your privacy and security – and make life easier for your loved ones after you’re gone.”

Google says that the new account management tool will allow users to opt to have their data deleted after three, six, nine or 12 months of inactivity. Alternatively users can arrange for certain contacts to be sent data from some or all of their services.

The California-based company did however stress that individuals listed to receive data in the event of ‘inactivity’ would be warned by text or email before the information was sent.
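
[A purely illustrative sketch of this kind of inactivity policy, with invented names and thresholds; it does not use any real Google API:]

from datetime import datetime, timedelta

# Illustrative sketch of an inactive-account policy like the one described above.
# Not Google's implementation or API; names, thresholds and behavior are invented.
INACTIVITY_MONTHS = 6                      # the user picks 3, 6, 9 or 12
TRUSTED_CONTACTS = ["alice@example.com"]   # contacts who may receive the data
ACTION = "share"                           # or "delete"

def check_account(last_activity, now=None):
    now = now or datetime.utcnow()
    idle_months = (now - last_activity).days / 30.44   # rough average month length
    if idle_months < INACTIVITY_MONTHS:
        return "account is active"
    # Per the article, listed contacts are warned by text or email before data is sent.
    for contact in TRUSTED_CONTACTS:
        print(f"warning {contact}: account idle for about {idle_months:.0f} months")
    return "delete all data" if ACTION == "delete" else "share data with listed contacts"

print(check_account(datetime.utcnow() - timedelta(days=400)))   # share data with listed contacts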

Social networking site Facebook already has a function that allows friends and family to “memorialize” an account once its owner has died.

Read the entire article following the jump.

Tracking and Monetizing Your Every Move

Your movements are valuable — but not in the way you may think. Mobile technology companies are moving rapidly to exploit the vast amounts of data collected from billions of mobile devices. This data is extremely valuable to an array of organizations, including urban planners, retailers, and travel and transportation marketers. And, of course, this raises significant privacy concerns. Many believe that aggregating the data preserves user anonymity. However, correlated with other data sources it could be used to uncover a range of unintended and previously private information, about both individuals and groups.

From MIT Technology Review:

Wireless operators have access to an unprecedented volume of information about users’ real-world activities, but for years these massive data troves were put to little use other than for internal planning and marketing.

This data is under lock and key no more. Under pressure to seek new revenue streams (see “AT&T Looks to Outside Developers for Innovation”), a growing number of mobile carriers are now carefully mining, packaging, and repurposing their subscriber data to create powerful statistics about how people are moving about in the real world.

More comprehensive than the data collected by any app, this is the kind of information that, experts believe, could help cities plan smarter road networks, businesses reach more potential customers, and health officials track diseases. But even if shared with the utmost of care to protect anonymity, it could also present new privacy risks for customers.

Verizon Wireless, the largest U.S. carrier with more than 98 million retail customers, shows how such a program could come together. In late 2011, the company changed its privacy policy so that it could share anonymous and aggregated subscriber data with outside parties. That made possible the launch of its Precision Market Insights division last October.

The program, still in its early days, is creating a natural extension of what already happens online, with websites tracking clicks and getting a detailed breakdown of where visitors come from and what they are interested in.

Similarly, Verizon is working to sell demographics about the people who, for example, attend an event, how they got there or the kinds of apps they use once they arrive. In a recent case study, says program spokeswoman Debra Lewis, Verizon showed that fans from Baltimore outnumbered fans from San Francisco by three to one inside the Super Bowl stadium. That information might have been expensive or difficult to obtain in other ways, such as through surveys, because not all the people in the stadium purchased their own tickets and had credit card information on file, nor had they all downloaded the Super Bowl’s app.

Other telecommunications companies are exploring similar ideas. In Europe, for example, Telefonica launched a similar program last October, and the head of this new business unit gave the keynote address at a new industry conference on “big data monetization in telecoms” in January.

“It doesn’t look to me like it’s a big part of their [telcos’] business yet, though at the same time it could be,” says Vincent Blondel, an applied mathematician who is now working on a research challenge from the operator Orange to analyze two billion anonymous records of communications between five million customers in Africa.

The concerns about making such data available, Blondel says, are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups (see “How Access to Location Data Could Trample Your Privacy”).

Already, some startups are building businesses by aggregating this kind of data in useful ways, beyond what individual companies may offer. For example, AirSage, an Atlanta, Georgia, company founded in 2000, has spent much of the last decade negotiating what it says are exclusive rights to put its hardware inside the firewalls of two of the top three U.S. wireless carriers and collect, anonymize, encrypt, and analyze cellular tower signaling data in real time. Since AirSage solidified the second of these major partnerships about a year ago (it won’t specify which carriers it works with), it has been processing 15 billion locations a day and can account for movement of about a third of the U.S. population in some places to within less than 100 meters, says marketing vice president Andrea Moe.

As users’ mobile devices ping cellular towers in different locations, AirSage’s algorithms look for patterns in that location data—mostly to help transportation planners and traffic reports, so far. For example, the software might infer that the owners of devices that spend time in a business park from nine to five are likely at work, so a highway engineer might be able to estimate how much traffic on the local freeway exit is due to commuters.
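
[The inference itself is simple enough to sketch. The toy code below is not AirSage’s algorithm, only the shape of the heuristic: see where each device spends its nine-to-five hours, then tally devices by likely work zone.]

from collections import Counter, defaultdict

# Toy version of the work-location heuristic described above; not AirSage's algorithm.
# Each ping is (device_id, hour_of_day, zone) derived from anonymized cell-tower data.
pings = [
    ("dev1", 10, "business_park"), ("dev1", 14, "business_park"), ("dev1", 21, "suburb_a"),
    ("dev2",  9, "business_park"), ("dev2", 16, "business_park"), ("dev2", 23, "suburb_b"),
]

def likely_work_zone(device_pings):
    """Most common zone seen during working hours (9 to 17) for a single device."""
    daytime = [zone for hour, zone in device_pings if 9 <= hour <= 17]
    return Counter(daytime).most_common(1)[0][0] if daytime else None

by_device = defaultdict(list)
for device, hour, zone in pings:
    by_device[device].append((hour, zone))

workers_per_zone = Counter(likely_work_zone(p) for p in by_device.values())
print(workers_per_zone)   # Counter({'business_park': 2}): raw input for a commuter-traffic estimate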

Other companies are starting to add layers of information beyond cellular network data. One customer of AirSage is a relatively small San Francisco startup, Streetlight Data, which recently raised $3 million in financing backed partly by the venture capital arm of Deutsche Telekom.

Streetlight buys both cellular network and GPS navigation data that can be mined for useful market research. (The cellular data covers a larger number of people, but the GPS data, collected by mapping software providers, can improve accuracy.) Today, many companies already build massive demographic and behavioral databases on top of U.S. Census information about households to help retailers choose where to build new stores and plan marketing budgets. But Streetlight’s software, with interactive, color-coded maps of neighborhoods and roads, offers more practical information. It can be tied to the demographics of people who work nearby, commute through on a particular highway, or are just there for a visit, rather than just supplying information about who lives in the area.

Read the entire article following the jump.

Image: mobile devices. Courtesy of W3.org

Technology and the Exploitation of Children

Many herald the forward motion of technological innovation as progress. In many cases the momentum does genuinely seem to carry us towards a better place; it broadly alleviates pain and suffering; it delivers more and better nutrition to our bodies and our minds. Yet for all the positive steps, this progress is often accompanied by retrograde, and often paradoxical, leaps. Particularly disturbing is the relative ease with which technology allows us, the responsible adults, to sexualise and exploit children. This is certainly not a new phenomenon, but our technical prowess makes the problem far more pervasive. A case in point: the Instagram beauty pageant. Move over, Honey Boo-Boo.

From the Washington Post:

The photo-sharing site Instagram has become wildly popular as a way to trade pictures of pets and friends. But a new trend on the site is making parents cringe: beauty pageants, in which thousands of young girls — many appearing no older than 12 or 13 — submit photographs of themselves for others to judge.

In one case, the mug shots of four girls, middle-school-age or younger, have been pitted against each other. One is all dimples, wearing a hair bow and a big, toothy grin. Another is trying out a pensive, sultry look.

Any of Instagram’s 30 million users can vote on the appearance of the girls in a comments section of the post. Once a girl’s photo receives a certain number of negative remarks, the pageant host, who can remain anonymous, can update it with a big red X or the word “OUT” scratched across her face.

“U.G.L.Y,” wrote one user about a girl, who submitted her photo to one of the pageants identified on Instagram by the keyword “#beautycontest.”

The phenomenon has sparked concern among parents and child safety advocates who fear that young girls are making themselves vulnerable to adult strangers and participating in often cruel social interactions at a sensitive period of development.

But the contests are the latest example of how technology is pervading the lives of children in ways that parents and teachers struggle to understand or monitor.

“What started out as just a photo-sharing site has become something really pernicious for young girls,” said Rachel Simmons, author of “Odd Girl Out” and a speaker on youth and girls. “What happened was, like most social media experiences, girls co-opted it and imposed their social life on it to compete for attention and in a very exaggerated way.”

It’s difficult to track when the pageants began and who initially set them up. A keyword search of #beautycontest turned up 8,757 posts, while #rateme had 27,593 photo posts. Experts say those two terms represent only a fraction of the activity. Contests are also appearing on other social media sites, including Tumblr and Snapchat — mobile apps that have grown in popularity among youth.

Facebook, which bought Instagram last year, declined to comment. The company has a policy of not allowing anyone under the age of 13 to create an account or share photos on Instagram. But Facebook has been criticized for allowing pre-teens to get around the rule — two years ago, Consumer Reports estimated their presence on Facebook was 7.5 million. (Washington Post Co. Chairman Donald Graham sits on Facebook’s board of directors.)

Read the entire article after the jump.

Image: Instagram. Courtesy of Wired.

 

Blame (Or Hug) Martin Cooper

Martin Cooper. You may not know that name, but you and a fair proportion of the world’s 7 billion inhabitants have surely held or dropped or prodded or cursed his offspring.

You see, forty years ago Martin Cooper used his baby to make the first public mobile phone call. Martin Cooper invented the cell phone.

From the Guardian:

It is 40 years this week since the first public mobile phone call. On 3 April, 1973, Martin Cooper, a pioneering inventor working for Motorola in New York, called a rival engineer from the pavement of Sixth Avenue to brag and was met with a stunned, defeated silence. The race to make the first portable phone had been won. The Pandora’s box containing txt-speak, pocket-dials and pig-hating suicidal birds was open.

Many people at Motorola, however, felt mobile phones would never be a mass-market consumer product. They wanted the firm to focus on business carphones. But Cooper and his team persisted. Ten years after that first boastful phonecall they brought the portable phone to market, at a retail price of around $4,000.

Thirty years on, the number of mobile phone subscribers worldwide is estimated at six and a half billion. And Angry Birds games have been downloaded 1.7bn times.

This is the story of the mobile phone in 40 facts:

1 That first portable phone was called a DynaTAC. The original model had 35 minutes of battery life and weighed one kilogram.

2 Several prototypes of the DynaTAC were created just 90 days after Cooper had first suggested the idea. He held a competition among Motorola engineers from various departments to design it and ended up choosing “the least glamorous”.

3 The DynaTAC’s weight was reduced to 794g before it came to market. It was still heavy enough to beat someone to death with, although this fact was never used as a selling point.

4 Nonetheless, people cottoned on. DynaTAC became the phone of choice for fictional psychopaths, including Wall Street’s Gordon Gekko, American Psycho’s Patrick Bateman and Saved by the Bell’s Zack Morris.

5 The UK’s first public mobile phone call was made by comedian Ernie Wise in 1985 from St Katharine dock to the Vodafone head offices over a curry house in Newbury.

6 Vodafone’s 1985 monopoly of the UK mobile market lasted just nine days before Cellnet (now O2) launched its rival service. A Vodafone spokesperson was probably all like: “Aw, shucks!”

7 Cellnet and Vodafone were the only UK mobile providers until 1993.

8 It took Vodafone just less than nine years to reach the one million customers mark. They reached two million just 18 months later.

9 The first smartphone was IBM’s Simon, which debuted at the Wireless World Conference in 1993. It had an early LCD touchscreen and also functioned as an email device, electronic pager, calendar, address book and calculator.

10 The first cameraphone was created by French entrepreneur Philippe Kahn. He took the first photograph with a mobile phone, of his newborn daughter Sophie, on 11 June, 1997.

Read the entire article after the jump.

Image: Dr. Martin Cooper, the inventor of the cell phone, holding a 1973 DynaTAC prototype, photographed in 2007. Courtesy of Wikipedia.

Next Up: Apple TV

Robert Hof argues that the time is ripe for Steve Jobs’ corporate legacy to reinvent the TV. Apple transformed the personal computer industry, the mobile phone market and the music business. Clearly the company has all the components in place to assemble another innovation.

From Technology Review:

Steve Jobs couldn’t hide his frustration. Asked at a technology conference in 2010 whether Apple might finally turn its attention to television, he launched into an exasperated critique of TV. Cable and satellite TV companies make cheap, primitive set-top boxes that “squash any opportunity for innovation,” he fumed. Viewers are stuck with “a table full of remotes, a cluster full of boxes, a bunch of different [interfaces].” It was the kind of technological mess that cried out for Apple to clean it up with an elegant product. But Jobs professed to have no idea how his company could transform the TV.

Scarcely a year later, however, he sounded far more confident. Before he died on October 5, 2011, he told his biographer, ­Walter Isaacson, that Apple wanted to create an “integrated television set that is completely easy to use.” It would sync with other devices and Apple’s iCloud online storage service and provide “the simplest user interface you could imagine.” He added, tantalizingly, “I finally cracked it.”

Precisely what he cracked remains hidden behind Apple’s shroud of secrecy. Apple has had only one television-related product—the black, hockey-puck-size Apple TV device, which streams shows and movies to a TV. For years, Jobs and Tim Cook, his successor as CEO, called that device a “hobby.” But under the guise of this hobby, Apple has been steadily building hardware, software, and services that make it easier for people to watch shows and movies in whatever way they wish. Already, the company has more of the pieces for a compelling next-generation TV experience than people might realize.

And as Apple showed with the iPad and iPhone, it doesn’t have to invent every aspect of a product in order for it to be disruptive. Instead, it has become the leader in consumer electronics by combining existing technologies with some of its own and packaging them into products that are simple to use. TV seems to be at that moment now. People crave something better than the fusty, rigidly controlled cable TV experience, and indeed, the technologies exist for something better to come along. Speedier broadband connections, mobile TV apps, and the availability of some shows and movies on demand from Netflix and Hulu have made it easier to watch TV anytime, anywhere. The number of U.S. cable and satellite subscribers has been flat since 2010.

Apple would not comment. But it’s clear from two dozen interviews with people close to Apple suppliers and partners, and with people Apple has spoken to in the TV industry, that television—the medium and the device—is indeed its next target.

The biggest question is not whether Apple will take on TV, but when. The company must eventually come up with another breakthrough product; with annual revenue already topping $156 billion, it needs something very big to keep growth humming after the next year or two of the iPad boom. Walter Price, managing director of Allianz Global Investors, which holds nearly $1 billion in Apple shares, met with Apple executives in September and came away convinced that it would be years before Apple could get a significant share of the $345 billion worldwide market for televisions. But at $1,000, the bare minimum most analysts expect an Apple television to cost, such a product would eventually be a significant revenue generator. “You sell 10 million of those, it can move the needle,” he says.
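
[For scale, the arithmetic behind “move the needle”:]

\[
10{,}000{,}000 \ \text{sets} \times \$1{,}000 \ \text{per set} = \$10\ \text{billion} \approx 6.4\%\ \text{of Apple's}\ \$156\ \text{billion in annual revenue.}
\]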

Cook, who replaced Jobs as CEO in August 2011, could use a boost, too. He has presided over missteps such as a flawed iPhone mapping app that led to a rare apology and a major management departure. Seen as a peerless operations whiz, Cook still needs a revolutionary product of his own to cement his place next to Saint Steve. Corey Ferengul, a principal at the digital media investment firm Apace Equities and a former executive at Rovi, which provided TV programming guide services to Apple and other companies, says an Apple TV will be that product: “This will be Tim Cook’s first ‘holy shit’ innovation.”

What Apple Already Has

Rapt attention would be paid to whatever round-edged piece of brushed-aluminum hardware Apple produced, but a television set itself would probably be the least important piece of its television strategy. In fact, many well-connected people in technology and television, from TV and online video maven Mark Cuban to venture capitalist and former Apple executive Jean-Louis Gassée, can’t figure out why Apple would even bother with the machines.

For one thing, selling televisions is a low-margin business. No one subsidizes the purchase of a TV the way your wireless carrier does with the iPhone (an iPhone might cost you $200, but Apple’s revenue from it is much higher than that). TVs are also huge and difficult to stock in stores, let alone ship to homes. Most of all, the upgrade cycle that powers Apple’s iPhone and iPad profit engine doesn’t apply to television sets—no one replaces them every year or two.

But even though TVs don’t line up neatly with the way Apple makes money on other hardware, they are likely to remain central to people’s ever-increasing consumption of video, games, and other forms of media. Apple at least initially could sell the screens as a kind of Trojan horse—a way of entering or expanding its role in lines of business that are more profitable, such as selling movies, shows, games, and other Apple hardware.

Read the entire article following the jump.

Image courtesy of Apple, Inc.