Tag Archives: technology

Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom holds that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators, on the other hand, follow a different mantra: they invent in hallways and cubes — everywhere.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open-plan, novelty-strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).

Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifescience, and Elon Musk’s companies, Tesla Motors and Space X, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.

The Persistent Panopticon

Based on the ever-encroaching surveillance systems used by local and national governments and private organizations, one has to wonder whether we — the presumed innocent — are living inside or outside a prison facility. Advances in security and surveillance systems now make it possible to track swathes of the population across an entire city over extended periods of time.

From the Washington Post:

Shooter and victim were just a pair of pixels, dark specks on a gray streetscape. Hair color, bullet wounds, even the weapon were not visible in the series of pictures taken from an airplane flying two miles above.

But what the images revealed — to a degree impossible just a few years ago — was location, mapped over time. Second by second, they showed a gang assembling, blocking off access points, sending the shooter to meet his target and taking flight after the body hit the pavement. When the report reached police, it included a picture of the blue stucco building into which the killer ultimately retreated, at last beyond the view of the powerful camera overhead.

“I’ve witnessed 34 of these,” said Ross McNutt, the genial president of Persistent Surveillance Systems, which collected the images of the killing in Ciudad Juarez, Mexico, from a specially outfitted Cessna. “It’s like opening up a murder mystery in the middle, and you need to figure out what happened before and after.”

As Americans have grown increasingly comfortable with traditional surveillance cameras, a new, far more powerful generation is being quietly deployed that can track every vehicle and person across an area the size of a small city, for several hours at a time. Though these cameras can’t read license plates or see faces, they provide such a wealth of data that police, businesses, even private individuals can use them to help identify people and track their movements.

Already, the cameras have been flown above major public events, such as the Ohio political rally where Sen. John McCain (R-Ariz.) named Sarah Palin as his running mate in 2008, McNutt said. They’ve been flown above Baltimore; Philadelphia; Compton, Calif.; and Dayton in demonstrations for police. They’ve also been used for traffic impact studies, for security at NASCAR races — and at the request of a Mexican politician, who commissioned the flights over Ciudad Juarez.

Defense contractors are developing similar technology for the military, but its potential for civilian use is raising novel civil-liberty concerns. In Dayton, where Persistent Surveillance Systems is based, city officials balked last year when police considered paying for 200 hours of flights, in part because of privacy complaints.

“There are an infinite number of surveillance technologies that would help solve crimes … but there are reasons that we don’t do those things, or shouldn’t be doing those things,” said Joel Pruce, a University of Dayton post-doctoral fellow in human rights who opposed the plan. “You know where there’s a lot less crime? There’s a lot less crime in China.”

McNutt, a retired Air Force officer who once helped design a similar system for the skies above Fallujah, a key battleground city in Iraq, hopes to win over officials in Dayton and elsewhere by convincing them that cameras mounted on fixed-wing aircraft can provide far more useful intelligence than police helicopters do, for less money. The Supreme Court generally has given wide latitude to police using aerial surveillance so long as the photography captures images visible to the naked eye.

A single camera mounted atop the Washington Monument, McNutt boasts, could deter crime all around the National Mall. He thinks regular flights over the most dangerous parts of Washington — combined with publicity about how much police could now see — would make a significant dent in the number of burglaries, robberies and murders. His 192-megapixel cameras would spot as many as 50 crimes per six-hour flight, he estimates, providing police with a continuous stream of images covering more than a third of the city.

“We watch 25 square miles, so you see lots of crimes,” he said. “And by the way, after people commit crimes, they drive like idiots.”

What McNutt is trying to sell is not merely the latest techno-wizardry for police. He envisions such steep drops in crime that they will bring substantial side effects, including rising property values, better schools, increased development and, eventually, lower incarceration rates as the reality of long-term overhead surveillance deters those tempted to commit crimes.

Dayton Police Chief Richard Biehl, a supporter of McNutt’s efforts, has even proposed inviting the public to visit the operations center, to get a glimpse of the technology in action.

“I want them to be worried that we’re watching,” Biehl said. “I want them to be worried that they never know when we’re overhead.”

Technology in action

McNutt, a suburban father of four with a doctorate from the Massachusetts Institute of Technology, is not deaf to concerns about his company’s ambitions. Unlike many of the giant defense contractors that are eagerly repurposing wartime surveillance technology for domestic use, he sought advice from the American Civil Liberties Union in writing a privacy policy.

It has rules on how long data can be kept, when images can be accessed and by whom. Police are supposed to begin looking at the pictures only after a crime has been reported. Pure fishing expeditions are prohibited.

The technology has inherent limitations as well. From the airborne cameras, each person appears as a single pixel indistinguishable from any other person. What they are doing — even whether they are clothed or not — is impossible to see. As camera technology improves, McNutt said he intends to increase their range, not the precision of the imagery, so that larger areas can be monitored.
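
A quick back-of-the-envelope check — ours, not the Post’s — makes the single-pixel claim plausible, assuming the 192 megapixels quoted above are spread evenly over the 25 square miles McNutt cites:

```python
# Rough resolution estimate for a 192-megapixel camera covering 25 square miles.
# Assumes uniform coverage; real imagery will vary across the frame.
pixels = 192e6                 # 192 megapixels
area_m2 = 25 * 2.59e6          # 25 square miles in square metres
px_per_m2 = pixels / area_m2
metres_per_px = (1 / px_per_m2) ** 0.5
print(f"{px_per_m2:.1f} pixels per square metre, ~{metres_per_px:.2f} m per pixel")
# Output: ~3.0 pixels per square metre, ~0.58 m per pixel
```

At roughly 0.6 metres of ground per pixel, a person seen from directly overhead really does occupy about one pixel.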

The notion that McNutt and his roughly 40 employees are peeping Toms clearly rankles. They made a PowerPoint presentation for the ACLU that includes pictures taken to aid the response to Hurricane Sandy and the severe Iowa floods last summer. The section is titled: “Good People Doing Good Things.”

“We get a little frustrated when people get so worried about us seeing them in their back yard,” McNutt said in his operation center, where the walls are adorned with 120-inch monitors, each showing a different grainy urban scene collected from above. “We can’t even see what they are doing in their backyard. And, by the way, we don’t care.”

Yet in a world of increasingly pervasive surveillance, location and identity are becoming all but inextricable — one quickly leads to the other for those with the right tools.

During one of the company’s demonstration flights over Dayton in 2012, police got reports of an attempted robbery at a bookstore and shots fired at a Subway sandwich shop. The cameras revealed a single car moving between the two locations.

By reviewing the images, frame by frame, analysts were able to help police piece together a larger story: The man had left a residential neighborhood midday, attempted to rob the bookstore but fled when somebody hit an alarm. Then he drove to Subway, where the owner pulled a gun and chased him off. His next stop was a Family Dollar Store, where the man paused for several minutes. He soon returned home, after a short stop at a gas station where a video camera captured an image of his face.

A few hours later, after the surveillance flight ended, the Family Dollar Store was robbed. Police used the detailed map of the man’s movements, along with other evidence from the crime scenes, to arrest him for all three crimes.

On another occasion, Dayton police got a report of a burglary in progress. The aerial cameras spotted a white truck driving away from the scene. Police stopped the driver before he got home from the heist, with the stolen goods sitting in the back of the truck. A witness identified him soon after.

Read the entire story here.

Image: Surveillance cameras. Courtesy of Mashable / Microsoft.

Your iPhone is Worth $3,000

There is a slight catch.

Your iPhone is worth around $3,000 based on the combined value of a sack full of gadgets from over 20 years ago. We all know that no iPhone existed in the early nineties — not even inside Steve Jobs’ head. So intrepid tech-sleuth Steve Cichon calculated the iPhone’s value by combining the functions of fifteen or so consumer electronics devices from 1991, found at Radio Shack, which, taken together, offer features comparable to one of today’s iPhones.

From the Washington Post:

Buffalo writer Steve Cichon dug up an old Radio Shack ad, offering a variety of what were then cutting-edge gadgets. There are 15 items listed on the page, and Cichon points out that all but two of them — the exceptions are a radar detector and a set of speakers — do jobs that can now be performed with a modern iPhone.

The other 13 items, including a desktop computer, a camcorder, a CD player  and a mobile phone, have a combined price of $3,071.21. The unsubsidized price of an iPhone is $549. And, of course, your iPhone is superior to these devices in many respects. The VHS camcorder, for example, captured video at a quality vastly inferior to the crystal-clear 1080p video an iPhone can record. That $1,599 Tandy computer would have struggled to browse the Web of the 1990s, to say nothing of the sophisticated Web sites iPhones access today. The CD player only lets you carry a few albums worth of music at a time; an iPhone can hold thousands of songs. And of course, the iPhone fits in your pocket.

This example is important to remember in the debate over whether the government’s official inflation figures understate or overstate inflation. In computing the inflation rate, economists assemble a representative “basket of goods” and see how its price changes over time. This isn’t difficult when the items in the basket are milk or gallons of gasoline. But it becomes extremely tricky when thinking about high-tech products. This year’s products are dramatically better than last year’s, so economists include a “quality adjustment” factor to reflect the change. But making apples-to-apples comparisons is difficult.

There’s no basket of 1991 gadgets that exactly duplicates the functionality of a modern iPhone, so deciding what to put into that basket is an inherently subjective enterprise. It’s not obvious that the average customer really gets as much value from his or her iPhone as a gadget lover in 1991 would have gotten from $3,000 worth of Radio Shack gadgets. On the other hand, iPhones do a lot of other things, too, like check Facebook, show movies on the go and provide turn-by-turn directions, that would have been hard to do on any gadget in 1991. So if anything, I suspect the way we measure inflation understates how quickly our standard of living has been improving.
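
To make the quoted “quality adjustment” idea concrete, here is a toy calculation — our illustration, not official CPI methodology; the quality factor is invented purely for the example:

```python
# Toy price comparison using the two figures from the article above.
# The quality factor is an invented illustration, not an official hedonic adjustment.
basket_1991 = 3071.21   # combined price of the 13 Radio Shack items
iphone_2014 = 549.00    # unsubsidized iPhone price

raw_ratio = iphone_2014 / basket_1991
print(f"Unadjusted price ratio: {raw_ratio:.2f}")        # ~0.18, an ~82% nominal drop

# Suppose a statistician judges the iPhone delivers five times the capability
# of the 1991 basket; the quality-adjusted price falls further still.
quality_factor = 5.0
print(f"Quality-adjusted ratio: {raw_ratio / quality_factor:.3f}")   # ~0.036
```

The point is only that the answer swings enormously with the (subjective) quality factor, which is exactly the difficulty the article describes.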

Read the entire story here.

Image: Apple iPhone 5c. Courtesy of ABC News / Apple.

Techno-Blocking Technology

Many technologists, philosophers and social scientists who consider the ethics of technology have described it as a double-edged sword. Indeed, observation does seem to uphold this idea: for every benefit gained from a new invention comes a mirroring disadvantage or peril. Not that technology per se is a threat — but its human masters seem to be rather adept at deploying it for both good and evil ends.

By corollary it is also evident that many a new technology spawns others, and sometimes entire industries, to counteract the first. The radar begets the radar-evading material; the radio begets the radio-jamming transmitter; cryptography begets hacking. You get the idea.

So, not a moment too soon, comes PlaceAvoider, a technology to suppress the capture and sharing of images seen through Google Glass. Watch out, Brin, Page and company: the watchers are watching you.

From Technology Review:

With last year’s launch of the Narrative Clip and Autographer, and Google Glass poised for release this year, technologies that can continuously capture our daily lives with photos and videos are inching closer to the mainstream. These gadgets can generate detailed visual diaries, drive self-improvement, and help those with memory problems. But do you really want to record in the bathroom or a sensitive work meeting?

Assuming that many people don’t, computer scientists at Indiana University have developed software that uses computer vision techniques to automatically identify potentially confidential or embarrassing pictures taken with these devices and prevent them from being shared. A prototype of the software, called PlaceAvoider, will be presented at the Network and Distributed System Security Symposium in San Diego in February.

“There simply isn’t the time to manually curate the thousands of images these devices can generate per day, and in a socially networked world that might lead to the inadvertent sharing of photos you don’t want to share,” says Apu Kapadia, who co-leads the team that developed the system. “Or those who are worried about that might just not share their life-log streams, so we’re trying to help people exploit these applications to the full by providing them with a way to share safely.”

Kapadia’s group began by acknowledging that devising algorithms that can identify sensitive pictures solely on the basis of visual content is probably impossible, since the things that people do and don’t want to share can vary widely and may be difficult to recognize. They set about designing software that users train by taking pictures of the rooms they want to blacklist. PlaceAvoider then flags new pictures taken in those rooms so the user will review them.

The system uses an existing computer-vision algorithm called scale-invariant feature transform (SIFT) to pinpoint regions of high contrast around corners and edges within the training images that are likely to stay visually constant even in varying light conditions and from different perspectives. For each of these, it produces a “numerical fingerprint” consisting of 128 separate numbers relating to properties such as color and texture, as well as its position relative to other regions of the image. Since images are sometimes blurry, PlaceAvoider also looks at more general properties such as colors and textures of walls and carpets, and takes into account the sequence in which shots are taken.

In tests, the system accurately determined whether images from streams captured in the homes and workplaces of the researchers were from blacklisted rooms an average of 89.8 percent of the time.

PlaceAvoider is currently a research prototype; its various components have been written but haven’t been combined as a completed product, and researchers used a smartphone worn around the neck to take photos rather than an existing device meant for life-logging. If developed to work on a life-logging device, an interface could be designed so that PlaceAvoider can flag potentially sensitive images at the time they are taken or place them in quarantine to be dealt with later.
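
For readers curious what the SIFT matching described above looks like in practice, here is a minimal sketch using OpenCV — our illustration of the general technique, not PlaceAvoider’s actual code; the file paths and the match threshold are assumptions:

```python
# Minimal sketch of SIFT-based room matching (illustrative only, not PlaceAvoider's code).
# Requires opencv-python >= 4.4; image paths and the match threshold are assumptions.
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)   # 128-number fingerprint per keypoint
    return desc

# "Training": a few photos of a room the user wants to blacklist.
blacklisted_rooms = [descriptors(p) for p in ["bathroom_1.jpg", "bathroom_2.jpg"]]

def flag_for_review(photo_path, min_good_matches=40):
    """Return True if a new life-log photo resembles a blacklisted room."""
    new_desc = descriptors(photo_path)
    if new_desc is None:
        return False
    for room_desc in blacklisted_rooms:
        pairs = matcher.knnMatch(new_desc, room_desc, k=2)
        # Lowe's ratio test: keep matches whose best distance clearly beats the runner-up.
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < 0.75 * n.distance]
        if len(good) >= min_good_matches:
            return True    # flag the photo for review instead of sharing it
    return False

print(flag_for_review("lifelog_0001.jpg"))
```

A production system would also fall back on coarser colour and texture cues for blurry frames, and consider the sequence in which shots were taken, as the researchers describe.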

Read the entire article here.

Image: Google Glass. Courtesy of Google.

3D Printing Grows Up

So, you’d like to print a 3D engine part for your jet fighter aircraft, or print a baby — actually a realistic model of one — or shoe insoles or a fake flower. Or perhaps you’d like to print a realistic windpipe or a new arm, or a guitar or a bikini or a model of a sports stadium or even a 3D selfie (please, say no). All of these and more can now be printed in three-dimensions courtesy of this rapidly developing area of technology.

From the Guardian:

As a technology journalist – even one who hasn’t written much about 3D printing – I’ve noticed a big growth in questions from friends about the area in recent months. Often, those questions are the same ones, too.

How does 3D printing even work? What’s all this about 3D-printed guns? Can you 3D-print a 3D printer? Why are they so expensive? What can you actually make with them? Apart from guns…

The ethical and legal questions around 3D printing and firearms are important and complex, but they also tend to hoover up a lot of the mainstream media attention for this area of technology. But it’s the “what can you actually make with them” question that’s been pulling me in recently.

There’s a growing community – from individual makers to nascent businesses – exploring the potential of 3D printing. This feature is just a snapshot of some of the products and projects that caught my attention, rather than a definitive roundup.

A taste of what’s happening, but one that’s ripe for your comments pointing out better examples in these categories, and other areas that have been left out. All contributions are welcome, but here are 30 things to start the discussion off.

1. RAF Tornado fighter jet parts

Early this year, BAE Systems said that British fighter jets had flown for the first time with components made using 3D printing technology. Its engineers are making parts for four squadrons of Tornado GR4 aircraft, with the aim of saving £1.2m of maintenance and service costs over the next four years. “You are suddenly not fixed in terms of where you have to manufacture these things,” said BAE’s Mike Murray. “You can manufacture the products at whatever base you want, providing you can get a machine there.”

2. Arms for children

Time’s article from earlier this month on the work of Not Impossible Labs makes for powerful reading: a project using 3D printers to make low-cost prosthetic limbs for amputees, including Sudanese bomb-blast victim Daniel Omar. But this is just one of the stories emerging: see also 3Ders’ piece on a four-year old called Hannah, with a condition called arthrogryposis that limits her ability to lift her arms unaided, but who now has a Wilmington Robotic Exoskeleton (WREX for short) to help, made using 3D printing.

3. Old Trafford and the Etihad Stadium

Manchester-based company Hobs’ business is based around working with architects, engineers and other creatives to use 3D printing as part of their work, but to show off its capabilities, the company 3D printed models of the city’s two football stadia – Old Trafford and the Etihad Stadium – giving them away in a competition for Manchester Evening News readers. The models were estimated to be worth £1,000 each.

4. Unborn babies

Not actually as creepy as it sounds. This is more an extension of the 4D ultrasound images of babies in the womb that have become more popular in recent years. The theory: why not print them out? One company doing it, 3D Babies, didn’t have much luck with a crowdfunding campaign last year, raising $1,225 of its $15,000 goal. Even so, its website is up and running, offering eight-inch “custom lifesize baby” models for $800 a pop.

5. Super Bowl shoe cleats

Expect to see a number of big brands launching 3D printing projects this year – part R&D and part PR campaigns. Nike is one example: it’s showing off a training shoe called the Vapor Carbon Elite Cleat for this year’s Super Bowl, with a 3D-printed nylon base and cleats – the latter based on the existing Vapor Laser Talon, which was unveiled a year ago.

6. Honda concept cars

Admittedly, not an actual concept car that you can drive. Not yet. But Honda has made five 3D-printable models available from its website for fans to download and make, including 1994’s FSR Concept and 2003’s Kiwami. So it’s more about shining a light on the company’s archives and being seen to be innovative – although the potential of 3D printing for internal prototyping at all kinds of manufacturers (cars included) is one of the most interesting areas for 3D printing.

Read the entire article here.

Image: Cubify’s 3DMe figures. Courtesy of Cubify.

Your Toaster on the Internet

Billions of people have access to the Internet. Now, whether a significant proportion of these do anything productive with this tremendous resource is open to debate — many prefer only to post pictures of their breakfasts or of themselves, or to watch the latest viral video hit.

Despite all these humans clogging up the Tubes of the Internets, most traffic along the information superhighway is in fact not even human. Over 60 percent of all activity comes from computer systems, such as web crawlers, botnets, and increasingly, industrial control systems, ranging from security and monitoring devices to in-home devices such as your thermostat, refrigerator, smart TV, smart toilet and toaster. So, soon Google will know what you eat and when, and your fridge will tell you what you should eat (or not) based on what it knows of your body mass index (BMI) from your bathroom scales.

Jokes aside, the Internet of Things (IoT) promises to herald an even more significant information revolution over the coming decades as all our devices and machines, from home to farm to factory, are connected and inter-connected.

From Ars Technica:

If you believe what the likes of LG and Samsung have been promoting this week at CES, everything will soon be smart. We’ll be able to send messages to our washing machines, run apps on our fridges, and have TVs as powerful as computers. It may be too late to resist this movement, with smart TVs already firmly entrenched in the mid-to-high end market, but resist it we should. That’s because the “Internet of things” stands a really good chance of turning into the “Internet of unmaintained, insecure, and dangerously hackable things.”

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of “smart” functionality—fridges that know what we buy and when, TVs that know what shows we watch—all connected to the Internet 24/7, all completely insecure.

While the value of smart watches or washing machines isn’t entirely clear, at least some smart devices—I think most notably phones and TVs—make sense. The utility of the smartphone, an Internet-connected computer that fits in your pocket, is obvious. The growth of streaming media services means that your antenna or cable box are no longer the sole source of televisual programming, so TVs that can directly use these streaming services similarly have some appeal.

But these smart features make the devices substantially more complex. Your smart TV is not really a TV so much as an all-in-one computer that runs Android, WebOS, or some custom operating system of the manufacturer’s invention. And where once it was purely a device for receiving data over a coax cable, it’s now equipped with bidirectional networking interfaces, exposing the Internet to the TV and the TV to the Internet.

The result is a whole lot of exposure to security problems. Even if we assume that these devices ship with no known flaws—a questionable assumption in and of itself if SOHO routers are anything to judge by—a few months or years down the line, that will no longer be the case. Flaws and insecurities will be uncovered, and the software components of these smart devices will need to be updated to address those problems. They’ll need these updates for the lifetime of the device, too. Old software is routinely vulnerable to newly discovered flaws, so there’s no point in any reasonable timeframe at which it’s OK to stop updating the software.

In addition to security, there’s also a question of utility. Netflix and Hulu may be hot today, but that may not be the case in five years’ time. New services will arrive; old ones will die out. Even if the service lineup remains the same, its underlying technology is unlikely to be static. In the future, Netflix, for example, might want to deprecate old APIs and replace them with new ones; Netflix apps will need to be updated to accommodate the changes. I can envision changes such as replacing the H.264 codec with H.265 (for reduced bandwidth and/or improved picture quality), which would similarly require updated software.

To remain useful, app platforms need up-to-date apps. As such, for your smart device to remain safe, secure, and valuable, it needs a lifetime of software fixes and updates.

A history of non-existent updates

Herein lies the problem, because if there’s one thing that companies like Samsung have demonstrated in the past, it’s a total unwillingness to provide a lifetime of software fixes and updates. Even smartphones, which are generally assumed to have a two-year lifecycle (with replacements driven by cheap or “free” contract-subsidized pricing), rarely receive updates for the full two years (Apple’s iPhone being the one notable exception).

A typical smartphone bought today will remain useful and usable for at least three years, but its system software support will tend to dry up after just 18 months.

This isn’t surprising, of course. Samsung doesn’t make any money from making your two-year-old phone better. Samsung makes its money when you buy a new Samsung phone. Improving the old phones with software updates would cost money, and that tends to limit sales of new phones. For Samsung, it’s lose-lose.

Our fridges, cars, and TVs are not even on a two-year replacement cycle. Even if you do replace your TV after it’s a couple years old, you probably won’t throw the old one away. It will just migrate from the living room to the master bedroom, and then from the master bedroom to the kids’ room. Likewise, it’s rare that a three-year-old car is simply consigned to the scrap heap. It’s given away or sold off for a second, third, or fourth “life” as someone else’s primary vehicle. Your fridge and washing machine will probably be kept until they blow up or you move houses.

These are all durable goods, kept for the long term without any equivalent to the smartphone carrier subsidy to promote premature replacement. If they’re going to be smart, software-powered devices, they’re going to need software lifecycles that are appropriate to their longevity.

That costs money, it requires a commitment to providing support, and it does little or nothing to promote sales of the latest and greatest devices. In the software world, there are companies that provide this level of support—the Microsofts and IBMs of the world—but it tends to be restricted to companies that have at least one eye on the enterprise market. In the consumer space, you’re doing well if you’re getting updates and support five years down the line. Consumer software fixes a decade later are rare, especially if there’s no system of subscriptions or other recurring payments to monetize the updates.

Of course, the companies building all these products have the perfect solution. Just replace all our stuff every 18-24 months. Fridge no longer getting updated? Not a problem. Just chuck out the still perfectly good fridge you have and buy a new one. This is, after all, the model that they already depend on for smartphones. Of course, it’s not really appropriate even to smartphones (a mid/high-end phone bought today will be just fine in three years), much less to stuff that will work well for 10 years.

These devices will be abandoned by their manufacturers, and it’s inevitable that they are abandoned long before they cease to be useful.

Superficially, this might seem to be no big deal. Sure, your TV might be insecure, but your NAT router will probably provide adequate protection, and while it wouldn’t be tremendously surprising to find that it has some passwords for online services or other personal information on it, TVs are sufficiently diverse that people are unlikely to expend too much effort targeting specific models.

Read the entire story here.

Image: A classically styled chrome two-slot automatic electric toaster. Courtesy of Wikipedia.

Wearable Gadget Idea Generator

Need a new idea that rides the new techno-wave where the Internet of Things meets smartphones and wearables? Find the sweet spot at the confluence of these big emerging trends and you could be the next internet zillionaire.

So, junk the late-night caffeine-induced brainstorming parties with your engineer friends and visit the following:

http://whatthefuckismywearablestrategy.com/

Courtesy of this wonderfully creative site we are now well on our way to inventing the following essential gizmos (a toy generator along these lines is sketched after the list):

heart rate monitor that turns the central heating on when your sleep patterns change
pair of contact lenses that posts to facebook when it’s windy
t-shirt that tweets when you drink too much coffee
pair of trousers that turns the central heating on when you burn 100 calories
pair of shoes that instagrams a selfie when the cat needs feeding
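
For the curious, a generator like this boils down to a random draw from three word lists. Here is a toy sketch — the lists below are our guesses at the flavour of the site’s vocabulary, not its actual source:

```python
# Toy wearable-strategy generator in the spirit of the site above.
# The word lists are invented for illustration; the real site's vocabulary will differ.
import random

garments = ["heart rate monitor", "pair of contact lenses", "t-shirt",
            "pair of trousers", "pair of shoes"]
actions  = ["turns the central heating on", "posts to facebook", "tweets",
            "instagrams a selfie", "orders a pizza"]
triggers = ["when your sleep patterns change", "when it's windy",
            "when you drink too much coffee", "when you burn 100 calories",
            "when the cat needs feeding"]

def wearable_strategy():
    return f"{random.choice(garments)} that {random.choice(actions)} {random.choice(triggers)}"

for _ in range(3):
    print(wearable_strategy())
```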


Printing the Perfect Pasta

Step 1: imagine a new pasta shape and design it in three dimensions on your iPad. Step 2: fill a printer cartridge with pasta dough. Step 3: put the cartridge in a 3D printer and download your print design. Step 4: print your custom-designed pasta. Step 5: cook, eat and enjoy!

In essence that’s what Barilla — the Italian food giant — is up to in its food research labs in conjunction with Dutch tech company TNO.

3D printers aimed at the home market are also on display at this week’s CES (Consumer Electronics Show), including several that print candy and desserts. Yum, but Mamma would certainly not approve.

From the Guardian:

Once, not so very long ago, the pasta of Italian dreams was kneaded, rolled and shaped by hand in the kitchen. Now, though, the world’s leading pasta producer is perfecting a very different kind of technique – using 3D printers.

The Parma-based food giant Barilla, a fourth-generation Italian family business, said on Thursday it was working with TNO, a Dutch organisation specialising in applied scientific research, on a project using the same cutting-edge technology that has already brought startling developments in manufacturing and biotech and may now be poised to make similar waves in the food sector.

Kjeld van Bommel, project leader at TNO, said one of the potential applications of the technology could be to enable customers to present restaurants with their pasta shape desires stored on a USB stick.

“Suppose it’s your 25th wedding anniversary,” Van Bommel was quoted as telling the Dutch newspaper Trouw. “You go out for dinner and surprise your wife with pasta in the shape of a rose.”

He said speed was a big focus of the Barilla project: they want to be able to print 15-20 pieces of pasta in under two minutes. Progress had already been made, he said, and it was already possible to print 10 times as quickly as when the technology first arrived.

According to reports, Barilla aims to offer customers cartridges of dough that they can insert into a 3D printer to create their own pasta designs.

But the company declined to give further details, dismissing the claims as “speculation”. It said that although the project had been going on for around two years, it was still “in a preliminary phase”.

When contacted by the Guardian, TNO said media interest in the project had spiked in recent days, and it declined to make any further comment on the nature of the project.

The technology of 3D printing is advancing in myriad sectors around the world. Last year a California-based company made the world’s first metal 3D-printed handgun, capable of accurately firing 50 rounds without breaking, and scientists at Cornell University produced a prosthetic human ear.

At the Consumer Electronics Show in Las Vegas this week, the US company 3D Systems unveiled a new range of food-creating printers specialising in sugar-based confectionary and chocolate edibles. Last year Natural Machines, a Spanish startup, revealed its own prototype, the Foodini, which it said combined “technology, food, art and design” and was capable of making edibles ranging from chocolate to pasta.

Read the entire article here.

Video courtesy of TNO.

An Ode to the Sinclair ZX81

What do the PDP-11, Commodore PET, Apple II and Sinclair’s ZX81 have in common? And, more importantly, for anyone under the age of 35, what on earth are they? Well, these are, respectively, the first time-share mainframe, the first personal computer, the first Apple computer, and the first home-based computer programmed by theDiagonal’s friendly editor back in the pioneering days of computation.

The article below on technological nostalgia pushed the recall button, bringing back vivid memories of dot-matrix printers, FORTRAN, large floppy diskettes (5 1/4 inch), reel-to-reel tape storage, and the 1 KB of programmable memory on the ZX81. In fact, despite the tremendous and now laughable limitations of the ZX81 — one had to save and load programs via a tape cassette — programming the device at home was a true revelation.

Some would go so far as to say that the first computer is very much like the first kiss or the first date. Well, not so. But fun nonetheless, and responsible for much in the way of future career paths.

From Ars Technica:

Being a bunch of technology journalists who make our living on the Web, we at Ars all have a fairly intimate relationship with computers dating back to our childhood—even if for some of us, that childhood is a bit more distant than others. And our technological careers and interests are at least partially shaped by the devices we started with.

So when Cyborgology’s David Banks recently offered up an autobiography of himself based on the computing devices he grew up with, it started a conversation among us about our first computing experiences. And being the most (chronologically) senior of Ars’ senior editors, the lot fell to me to pull these recollections together—since, in theory, I have the longest view of the bunch.

Considering the first computer I used was a Digital Equipment Corp. PDP-10, that theory is probably correct.

The DEC PDP-10 and DECWriter II Terminal

In 1979, I was a high school sophomore at Longwood High School in Middle Island, New York, just a short distance from the Department of Energy’s Brookhaven National Labs. And it was at Longwood that I got the first opportunity to learn how to code, thanks to a time-share connection we had to a DEC PDP-10 at the State University of New York at Stony Brook.

The computer lab at Longwood, which was run by the math department and overseen by my teacher Mr. Dennis Schultz, connected over a leased line to SUNY. It had, if I recall correctly, six LA36 DECWriter II terminals connected back to the mainframe—essentially dot-matrix printers with keyboards on them. Turn one on while the mainframe was down, and it would print over and over:

PDP-10 NOT AVAILABLE

Time at the terminals was a precious resource, so we were encouraged to write out all of our code by hand first on graph paper and then take a stack of cards over to the keypunch. This process did wonders for my handwriting. I spent an inordinate amount of time just writing BASIC and FORTRAN code in block letters on graph-paper notebooks.

One of my first fully original programs was an aerial combat program that used three-dimensional arrays to track the movement of the player’s and the programmed opponent’s airplanes as each maneuvered to get the other in its sights. Since the program output to pin-fed paper, that could be a tedious process.

At a certain point, Mr. Schultz, who had been more than tolerant of my enthusiasm, had to crack down—my code was using up more than half the school’s allotted system storage. I can’t imagine how much worse it would have been if we had video terminals.

Actually, I can imagine, because in my senior year I was introduced to the Apple II, video, and sound. The vastness of 360 kilobytes of storage and the ability to code at the keyboard were such a huge luxury after the spartan world of punch cards that I couldn’t contain myself. I soon coded a student parking pass database for my school—while also coding a Dungeons & Dragons character tracking system, complete with combat resolution and hit point tracking.

—Sean Gallagher

A printer terminal and an acoustic coupler

I never saw the computer that gave me my first computing experience, and I have little idea what it actually was. In fact, if I ever knew where it was located, I’ve since forgotten. But I do distinctly recall the gateway to it: a locked door to the left of the teacher’s desk in my high school biology lab. Fortunately, the guardian—commonly known as Mr. Dobrow—was excited about introducing some of his students to computers, and he let a number of us spend our lunch hours experimenting with the system.

And what a system it was. Behind the physical door was another gateway, this one electronic. Since the computer was located in another town, you had to dial in by modem. The modems of the day were something different entirely from what you may recall from AOL’s dialup heyday. Rather than plugging straight in to your phone line, you dialed in manually—on a rotary phone, no less—then dropped the speaker and mic carefully into two rubber receptacles spaced to accept the standard-issue hardware of the day. (And it was standard issue; AT&T was still a monopoly at the time.)

That modem was hooked into a sort of combination of line printer and keyboard. When you were entering text, the setup acted just like a typewriter. But as soon as you hit the return key, it transmitted, and the mysterious machine at the other end responded, sending characters back that were dutifully printed out by the same machine. This meant that an infinite loop would unleash a spray of paper, and it had to be terminated by hanging up the phone.

It took us a while to get to infinite loops, though. Mr. Dobrow started us off on small simulations of things like stock markets and malaria control. Eventually, we found a way to list all the programs available and discovered a Star Trek game. Photon torpedoes were deadly, but the phasers never seemed to work, so before too long one guy had the bright idea of trying to hack the game (although that wasn’t the term that we used). We were off.

—John Timmer

Read the entire article here.

Image: Sinclair ZX81. Courtesy of Wikipedia.

Asimov Fifty Years On

In 1964, Isaac Asimov wrote an essay for the New York Times entitled Visit to the World’s Fair of 2014. The essay was a free-wheeling opinion of things to come, viewed through the lens of New York’s World’s Fair of 1964. It shows that even a grand master of science fiction cannot predict the future — he got some things quite right and other things rather wrong. Some examples, and a link to his full essay, are below.

That said, what has captured recent attention is Asimov’s thinking on the complex and evolving relationship between humans and technology, and the challenges of environmental stewardship in an increasingly over-populated and resource-starved world.

So, while Asimov was certainly not a teller of fortunes, he had many insights that many, even today, still lack.

Read the entire Isaac Asimov essay here.

What Asimov got right:

“Communications will become sight-sound and you will see as well as hear the person you telephone.”

“As for television, wall screens will have replaced the ordinary set…”

“Large solar-power stations will also be in operation in a number of desert and semi-desert areas…”

“Windows… will be polarized to block out the harsh sunlight. The degree of opacity of the glass may even be made to alter automatically in accordance with the intensity of the light falling upon it.”

What Asimov got wrong:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long- lived batteries running on radioisotopes.”

“…cars will be capable of crossing water on their jets…”

“For short-range travel, moving sidewalks (with benches on either side, standing room in the center) will be making their appearance in downtown sections.”

From the Atlantic:

In August of 1964, just more than 50 years ago, author Isaac Asimov wrote a piece in The New York Times, pegged to that summer’s World Fair.

In the essay, Asimov imagines what the World Fair would be like in 2014—his future, our present.

His notions were strange and wonderful (and conservative, as Matt Novak writes in a great run-down), in the way that dreams of the future from the point of view of the American mid-century tend to be. There will be electroluminescent walls for our windowless homes, levitating cars for our transportation, 3D cube televisions that will permit viewers to watch dance performances from all angles, and “Algae Bars” that taste like turkey and steak (“but,” he adds, “there will be considerable psychological resistance to such an innovation”).

He got some things wrong and some things right, as is common for those who engage in the sport of prediction-making. Keeping score is of little interest to me. What is of interest: what Asimov understood about the entangled relationships among humans, technological development, and the planet—and the implications of those ideas for us today, knowing what we know now.

Asimov begins by suggesting that in the coming decades, the gulf between humans and “nature” will expand, driven by technological development. “One thought that occurs to me,” he writes, “is that men will continue to withdraw from nature in order to create an environment that will suit them better.”

It is in this context that Asimov sees the future shining bright: underground, suburban houses, “free from the vicissitudes of weather, with air cleaned and light controlled, should be fairly common.” Windows, he says, “need be no more than an archaic touch,” with programmed, alterable, “scenery.” We will build our own world, an improvement on the natural one we found ourselves in for so long. Separation from nature, Asimov implies, will keep humans safe—safe from the irregularities of the natural world, and the bombs of the human one, a concern he just barely hints at, but that was deeply felt at the time.

But Asimov knows too that humans cannot survive on technology alone. Eight years before astronauts’ Blue Marble image of Earth would reshape how humans thought about the planet, Asimov sees that humans need a healthy Earth, and he worries that an exploding human population (6.5 billion, he accurately extrapolated) will wear down our resources, creating massive inequality.

Although technology will still keep up with population through 2014, it will be only through a supreme effort and with but partial success. Not all the world’s population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.

This troubled him, but the real problems lay yet further in the future, as “unchecked” population growth pushed urban sprawl to every corner of the planet, creating a “World-Manhattan” by 2450. But, he exclaimed, “society will collapse long before that!” Humans would have to stop reproducing so quickly to avert this catastrophe, he believed, and he predicted that by 2014 we would have decided that lowering the birth rate was a policy priority.

Asimov rightly saw the central role of the planet’s environmental health to a society: No matter how technologically developed humanity becomes, there is no escaping our fundamental reliance on Earth (at least not until we seriously leave Earth, that is). But in 1964 the environmental specters that haunt us today—climate change and impending mass extinctions—were only just beginning to gain notice. Asimov could not have imagined the particulars of this special blend of planetary destruction we are now brewing—and he was overly optimistic about our propensity to take action to protect an imperiled planet.

Read the entire article here.

Image: Driverless cars as imaged in 1957. Courtesy of America’s Independent Electric Light and Power Companies/Paleofuture.


2014: The Year of Big Stuff

Over the closing days of each year, or the first few days of the coming one, prognosticators the world over tell us about the future. Yet, while no one to date has been proven to have prescient skills — despite what your psychic tells you — we all like to dabble in the art of prediction. Google’s Eric Schmidt has one big prediction for 2014: big. Everything will be big — big data, big genomics, smartphones will be even bigger, and of course, so will mistakes.

So, with that, a big Happy New Year to all our faithful readers and seers across our fragile and beautiful blue planet.

From the Guardian:

What does 2014 hold? According to Eric Schmidt, Google’s executive chairman, it means smartphones everywhere – and also the possibility of genetics data being used to develop new cures for cancer.

In an appearance on Bloomberg TV, Schmidt laid out his thoughts about general technological change, Google’s biggest mistake, and how Google sees the economy going in 2014.

“The biggest change for consumers is going to be that everyone’s going to have a smartphone,” Schmidt says. “And the fact that so many people are connected to what is essentially a supercomputer means a whole new generation of applications around entertainment, education, social life, those kinds of things. The trend has been that mobile is winning; it’s now won. There are more tablets and phones being sold than personal computers – people are moving to this new architecture very fast.”

It’s certainly true that tablets and smartphones are outselling PCs – in fact smartphones alone have been doing that since the end of 2010. This year, it’s forecast that tablets will have passed “traditional” PCs (desktops, fixed-keyboard laptops) too.

Disrupting business

Next, Schmidt says there’s a big change – a disruption – coming for business through the arrival of “big data”: “The biggest disruptor that we’re sure about is the arrival of big data and machine intelligence everywhere – so the ability [for businesses] to find people, to talk specifically to them, to judge them, to rank what they’re doing, to decide what to do with your products, changes every business globally.”

But he also sees potential in the field of genomics – the parsing of all the data being collected from DNA and gene sequencing. That might not be surprising, given that Google is an investor in 23andme, a gene sequencing company which aims to collect the genomes of a million people so that it can do data-matching analysis on their DNA. (Unfortunately, that plan has hit a snag: 23andme has been told to cease operating by the US Food and Drug Administration because it has failed to respond to inquiries about its testing methods and publication of results.)

Here’s what Schmidt has to say on genomics: “The biggest disruption that we don’t really know what’s going to happen is probably in the genetics area. The ability to have personal genetics records and the ability to start gathering all of the gene sequencing into places will yield discoveries in cancer treatment and diagnostics over the next year that that are unfathomably important.”

It may be worth mentioning that “we’ll find cures through genomics” has been the promise held up by scientists every year since the human genome was first sequenced. So far, it hasn’t happened – as much as anything because human gene variation is remarkably big, and there’s still a lot that isn’t known about the interaction of what appears to be non-functional parts of our DNA (which doesn’t seem to code to produce proteins) and the parts that do code for proteins.

Biggest mistake

As for Google’s biggest past mistake, Schmidt says it’s missing the rise of Facebook and Twitter: “At Google the biggest mistake that I made was not anticipating the rise of the social networking phenomenon – not a mistake we’re going to make again. I guess in our defence we’re working on many other things, but we should have been in that area, and I take responsibility for that.” The results of that effort to catch up can be seen in the way that Google+ is popping up everywhere – though it’s wrong to think of Google+ as a social network, since it’s more of a way that Google creates a substrate on the web to track individuals.

And what is Google doing in 2014? “Google is very much investing, we’re hiring globally, we see strong growth all around the world with the arrival of the internet everywhere. It’s all green in that sense from the standpoint of the year. Google benefits from transitions from traditional industries, and shockingly even when things are tough in a country, because we’re “return-on-investment”-based advertising – it’s smarter to move your advertising from others to Google, so we win no matter whether the industries are in good shape or not, because people need our services, we’re very proud of that.”

For Google, the sky’s the limit: “the key limiter on our growth is our rate of innovation, how smart are we, how clever are we, how quickly can we get these new systems deployed – we want to do that as fast as we can.”

It’s worth noting that Schmidt has a shaky track record on predictions. At Le Web in 2011 he famously forecast that developers would be shunning iOS to start developing on Android first, and that Google TV would be installed on 50% of all TVs on sale by summer 2012.

It didn’t turn out that way: even now, many apps start on iOS, and Google TV fizzled out as companies such as Logitech found that it didn’t work as well as Android to tempt buyers.

Since then, Schmidt has been a lot more cautious about predicting trends and changes – although he hasn’t been above the occasional comment which seems calculated to get a rise from his audience, such as telling executives at a Gartner conference that Android was more secure than the iPhone – which they apparently found humorous.

Read the entire article here.

Image: Happy New Year, 2014 Google doodle. Courtesy of Google.

Global Domination — One Pixel at a Time

Google’s story began with text-based search and was quickly followed by digital maps. These simple innovations ushered in the company’s mission to organize the world’s information. But as Google ventures further from its roots into mobile operating systems (Android), video (YouTube), social media (Google+), smartphone hardware (through its purchase of Motorola’s mobile business), augmented reality (Google Glass), Web browsers (Chrome) and notebook hardware (Chromebook), what of its core mapping service? And is global domination all that it’s cracked up to be?

From the NYT:

Fifty-five miles and three days down the Colorado River from the put-in at Lee’s Ferry, near the Utah-Arizona border, the two rafts in our little flotilla suddenly encountered a storm. It sneaked up from behind, preceded by only a cool breeze. With the canyon walls squeezing the sky to a ribbon of blue, we didn’t see the thunderhead until it was nearly on top of us.

I was seated in the front of the lead raft. Pole position meant taking a dunk through the rapids, but it also put me next to Luc Vincent, the expedition’s leader. Vincent is the man responsible for all the imagery in Google’s online maps. He’s in charge of everything from choosing satellite pictures to deploying Google’s planes around the world to sending its camera-equipped cars down every road to even this, a float through the Grand Canyon. The raft trip was a mapping expedition that was also serving as a celebration: Google Maps had just introduced a major redesign, and the outing was a way of rewarding some of the team’s members.

Vincent wore a black T-shirt with the eagle-globe-and-anchor insignia of the United States Marine Corps on his chest and the slogan “Pain is weakness leaving the body” across his back. Though short in stature, he has the upper-body strength of an avid rock climber. He chose to get his Ph.D. in computer vision, he told me, because the lab happened to be close to Fontainebleau — the famous climbing spot in France. While completing his postdoc at the Harvard Robotics Lab, he led a successful expedition up Denali, the highest peak in North America.

A Frenchman who has lived half his 49 years in the United States, Vincent was never in the Marines. But he is a leader in a new great game: the Internet land grab, which can be reduced to three key battles over three key conceptual territories. The “what” came first, conquered by Google’s superior search algorithms. The “who” was next, and Facebook was the victor. But the “where”, arguably the biggest prize of all, has yet to be completely won.

Where-type questions — the kind that result in a little map popping up on the search-results page — account for some 20 percent of all Google queries done from the desktop. But ultimately more important by far is location-awareness, the sort of geographical information that our phones and other mobile devices already require in order to function. In the future, such location-awareness will be built into more than just phones. All of our stuff will know where it is — and that awareness will imbue the real world with some of the power of the virtual. Your house keys will tell you that they’re still on your desk at work. Your tools will remind you that they were lent to a friend. And your car will be able to drive itself on an errand to retrieve both your keys and your tools.

While no one can say exactly how we will get from the current moment to that Jetsonian future, one thing for sure can be said about location-awareness: maps are required. Tomorrow’s map, integrally connected to everything that moves (the keys, the tools, the car), will be so fundamental to their operation that the map will, in effect, be their operating system. A map is to location-awareness as Windows is to a P.C. And as the history of Microsoft makes clear, a company that controls the operating system controls just about everything. So the competition to make the best maps, the thinking goes, is more than a struggle over who dominates the trillion-dollar smartphone market; it’s a contest over the future itself.

Google was relatively late to this territory. Its map was only a few months old when it was featured at Tim O’Reilly’s inaugural Where 2.0 conference in 2005. O’Reilly is a publisher and a well-known visionary in Silicon Valley who is convinced that the Internet is evolving into a single vast, shared computer, one of whose most important individual functions, or subroutines, is location-awareness.

Google’s original map was rudimentary, essentially a digitized road atlas. Like the maps from Microsoft and Yahoo, it used licensed data, and areas outside the United States and Europe were represented as blue emptiness. Google’s innovation was the web interface: its map was draggable, zoomable, pannable.

These new capabilities were among the first implementations of a technology that turned what had been a static medium — a web of pages — into a dynamic one. MapQuest and similar sites showed you maps; Google let you interact with them. Developers soon realized that they could take advantage of that dynamism to hack Google’s map, add their own data and create their very own location-based services.

A computer scientist named Paul Rademacher did just that when he invented a technique to facilitate apartment-hunting in San Francisco. Frustrated by the limited, bare-bones nature of Craigslist’s classified ads and inspired by Google’s interactive quality, Rademacher spent six weeks overlaying Google’s map with apartment listings from Craigslist. The result, HousingMaps.com, was one of the web’s first mash-ups.

Read the entire article here.

Image: Luc Vincent, head of Google Maps imagery. Courtesy of NYT Magazine.

How to Burst the Filter Bubble

[tube]B8ofWFx525s[/tube]

As the customer service systems of online retailers and media companies become ever more attuned to their shoppers’ and members’ preferences, the power of the filter bubble grows ever greater.

The filter bubble ensures that digital consumers see more content that matches their preferences and, by extension, continues to reinforce their opinions and beliefs. Conversely, consumers see less and less content that diverges from historical behavior and calculated preferences, often called “signals”.

And, that’s not a good thing.

What of diverse opinions and views? Without a plurality of views and a rich spectrum of positions, creativity loses its battle with banality and conformity. So how can digital consumers break free of the systems that deliver custom recommendations and filtered content, and that reduce serendipitous discovery?

From Technology Review:

The term “filter bubble” entered the public domain back in 2011, when the internet activist Eli Pariser coined it to refer to the way recommendation engines shield people from certain aspects of the real world.

Pariser used the example of two people who googled the term “BP”. One received links to investment news about BP while the other received links to the Deepwater Horizon oil spill, presumably as a result of some recommendation algorithm.

This is an insidious problem. Much social research shows that people prefer to receive information that they agree with instead of information that challenges their beliefs. This problem is compounded when social networks recommend content based on what users already like and on what people similar to them also like.

This is the filter bubble: being surrounded only by people you like and content that you agree with.

And the danger is that it can polarise populations, creating potentially harmful divisions in society.

Today, Eduardo Graells-Garrido at the Universitat Pompeu Fabra in Barcelona as well as Mounia Lalmas and Daniel Quercia, both at Yahoo Labs, say they’ve hit on a way to burst the filter bubble. Their idea is that although people may have opposing views on sensitive topics, they may also share interests in other areas. And they’ve built a recommendation engine that points these kinds of people towards each other based on their own preferences.

The result is that individuals are exposed to a much wider range of opinions, ideas and people than they would otherwise experience. And because this is done using their own interests, they end up being equally satisfied with the results (although not without a period of acclimatisation). “We nudge users to read content from people who may have opposite views, or high view gaps, in those issues, while still being relevant according to their preferences,” say Graells-Garrido and co.

These guys have tested this approach by focusing on the topic of abortion as discussed by people in Chile in August and September this year. Chile has some of the most restrictive anti-abortion laws on the planet – abortion was legalised there in 1931 and then made illegal again in 1989. With presidential elections in November, a highly polarised debate was raging in the country at that time.

They found over 40,000 Twitter users who had expressed an opinion using hashtags such as #pro-life and #pro-choice. They trimmed this group by choosing only those who gave their location as Chile and by excluding those who tweeted rarely. That left over 3,000 Twitter users.

The team then computed the difference in the views of these users on this and other topics using the regularity with which they used certain other keywords. This allowed them to create a word cloud for each user that acted as a kind of data portrait.

They then recommended tweets to each person based on similarities between their word clouds, particularly favouring tweets from authors who differed in their views on abortion.
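The mechanics are easy to sketch. Purely as an illustration (this is not the researchers’ code, and the names, stance field, scoring rule and data below are invented for the example), a few lines of Python show the general shape of the idea: treat each user’s data portrait as a keyword-frequency vector, measure interest overlap with cosine similarity, and boost candidates whose stance on the sensitive topic differs most from the reader’s.

from collections import Counter
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two keyword-frequency vectors (the "data portraits").
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(reader, candidates, top_n=5):
    # Score candidates by shared interests weighted by the "view gap" on the
    # sensitive topic; stance is an assumed number in [-1, 1].
    scored = []
    for c in candidates:
        interest = cosine_similarity(reader["words"], c["words"])
        view_gap = abs(reader["stance"] - c["stance"]) / 2.0
        scored.append((interest * view_gap, c["name"]))
    return sorted(scored, reverse=True)[:top_n]

reader = {"name": "u0", "stance": -0.8,
          "words": Counter({"football": 5, "cycling": 3, "music": 2})}
candidates = [{"name": "u1", "stance": 0.9, "words": Counter({"football": 4, "music": 3})},
              {"name": "u2", "stance": -0.7, "words": Counter({"politics": 6})}]
print(recommend(reader, candidates))

In this toy run the top recommendation is the user who shares the reader’s interests but holds the opposite view on the sensitive topic, which is exactly the kind of pairing the researchers describe.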

The results show that people can be more open than expected to ideas that oppose their own. It turns out that users who openly speak about sensitive issues are more open to receiving recommendations authored by people with opposing views, say Graells-Garrido and co.

They also say that challenging people with new ideas makes them generally more receptive to change. That has important implications for social media sites. There is good evidence that users can sometimes become so resistant to change that any form of redesign dramatically reduces the popularity of the service. Giving them a greater range of content could change that.

“We conclude that an indirect approach to connecting people with opposing views has great potential,” say Graells-Garrido and co.

It’s certainly a start. But whether it can prevent the herding behaviour in which users sometimes desert social media sites overnight is debatable. Still, the overall approach is admirable. Connecting people is important when they share similar interests, but arguably even more so when their views clash.

Read the entire article here.

Video: Eli Pariser, beware online “filter bubbles”. Courtesy of Eli Pariser, thefilterbubble.

4-D Printing and Self-Assembly

[tube]NV1blyzcdjE[/tube]

With the 3-D printing revolution firmly upon us comes word of the next logical extension — 3-D printing in time, or 4-D printing. This allows for “printing” of components that can self-assemble over time at a macro-scale. We are still a long way from Iain M. Banks’ self-assembling starships, but this heralds a small step in a very important direction.

From Slate:

Read the entire article here.

Video courtesy of MIT Self-Assembly Lab.

Predicting the Future is Highly Overrated

Contrary to what political pundits, stock market talking heads and your local strip mall psychic will have you believe, no one, yet, can predict the future. And it is no more possible for the current generation of tech wunderkinds, Silicon Valley venture fund investors or their armies of analysts.

From WSJ:

I believe the children aren’t our future. Teach them well, but when it comes to determining the next big thing in tech, let’s not fall victim to the ridiculous idea that they lead the way.

Yes, I’m talking about Snapchat.

Last week my colleagues reported that Facebook recently offered $3 billion to acquire the company behind the hyper-popular messaging app. Stunningly, Evan Spiegel, Snapchat’s 23-year-old co-founder and CEO, rebuffed the offer.

If you’ve never used Snapchat—and I implore you to try it, because Snapchat can be pretty fun if you’re into that sort of thing, which I’m not, because I’m grumpy and old and I have two small kids and no time for fun, which I think will be evident from the rest of this column, and also would you please get off my lawn?—there are a few things you should know about the app.

First, Snapchat’s main selling point is ephemerality. When I send you a photo and caption using the app, I can select how long I want you to be able to view the picture. After you look at it for the specified time—1 to 10 seconds—the photo and all trace of our having chatted disappear from your phone. (Or, at least, they are supposed to. Snapchat’s security measures have frequently been defeated.)

Second, and relatedly, Snapchat is used primarily by teens and people in college. This explains much of Silicon Valley’s obsession with the company.

The app doesn’t make any money—its executives have barely even mentioned any desire to make money—but in the ad-supported tech industry, youth is the next best thing to revenue. For tech execs, youngsters are the canaries in the gold mine.

That logic follows a widely shared cultural belief: We all tend to assume that young people are on the technological vanguard, that they somehow have got an inside scoop on what’s next. If today’s kids are Snapchatting instead of Facebooking, the thinking goes, tomorrow we’ll all be Snapchatting, too, because tech habits, like hairstyles, flow only one way: young to old.

There is only one problem with elevating young people’s tastes this way: Kids are often wrong. There is little evidence to support the idea that the youth have any closer insight on the future than the rest of us do. Sometimes they are first to flock to technologies that turn out to be huge; other times, the young pick products and services that go nowhere. They can even be late adopters, embracing innovations that older people understood first. To butcher another song: The kids could be all wrong.

Here’s a thought exercise. How many of the products and services that you use every day were created or first used primarily by people under 25?

A few will spring to mind, Facebook the biggest of all. Yet the vast majority of your most-used things weren’t initially popular among teens. The iPhone, the iPad, the iPod, the Google search engine, YouTube, Twitter, Gmail, Google Maps, Pinterest, LinkedIn, the Kindle, blogs, the personal computer: none of these were initially targeted to, or primarily used by, high-school or college-age kids. Indeed, many of the most popular tech products and services were burdened by factors that were actively off-putting to kids, such as high prices, an emphasis on productivity and a distinct lack of fun. Yet they succeeded anyway.

Even the exceptions suggest we should be wary of catering to youth. It is true that in 2004, Mark Zuckerberg designed Facebook for his Harvard classmates, and the social network was first made available only to college students. At the time, though, Facebook looked vastly more “grown up” than its competitors. The site prevented you from uglifying your page with your own design elements, something you could do with Myspace, which, incidentally, was the reigning social network among the pubescent set.

Mr. Zuckerberg deliberately avoided catering to this group. He often told his co-founders that he wanted Facebook to be useful, not cool. That is what makes the persistent worry about Facebook’s supposedly declining cachet among teens so bizarre; Facebook has never really been cool, but neither are a lot of other billion-dollar companies. Just ask Myspace how far being cool can get you.

Incidentally, though 20-something tech founders like Mr. Zuckerberg, Steve Jobs and Bill Gates get a lot of ink, they are unusual. A recent study by the VC firm Cowboy Ventures found that among tech startups that have earned a valuation of at least $1 billion since 2003, the average founder’s age was 34. “The twentysomething inexperienced founder is an outlier, not the norm,” wrote Cowboy’s founder Aileen Lee.

If you think about it for a second, the fact that young people aren’t especially reliable predictors of tech trends shouldn’t come as a surprise. Sure, youth is associated with cultural flexibility, a willingness to try new things that isn’t necessarily present in older folk. But there are other, less salutary hallmarks of youth, including capriciousness, immaturity, and a deference to peer pressure even at the cost of common sense. This is why high school is such fertile ground for fads. And it’s why, in other cultural areas, we don’t put much stock in teens’ choices. No one who’s older than 18, for instance, believes One Direction is the future of music.

That brings us back to Snapchat. Is the app just a youthful fad, just another boy band, or is it something more permanent; is it the Beatles?

To figure this out, we would need to know why kids are using it. Are they reaching for Snapchat for reasons that would resonate with older people—because, like the rest of us, they’ve grown wary of the public-sharing culture promoted by Facebook and Twitter? Or are they using it for less universal reasons, because they want to evade parental snooping, send risqué photos, or avoid feeling left out of a fad everyone else has adopted?

Read the entire article here.

Image: Snapchat logo. Courtesy of Snapchat / Wikipedia.

Retailing: An Engineering Problem

Traditional retailers look at retailing primarily as a problem of marketing, customer acquisition and relationship management. For Amazon, it’s more of an engineering and IT problem, with solutions to be found in innovation and optimization.

From Technology Review:

Why do some stores succeed while others fail? Retailers constantly struggle with this question, battling one another in ways that change with each generation. In the late 1800s, architects ruled. Successful merchants like Marshall Field created palaces of commerce that were so gorgeous shoppers rushed to come inside. In the early 1900s, mail order became the “killer app,” with Sears Roebuck leading the way. Toward the end of the 20th century, ultra-efficient suburban discounters like Target and Walmart conquered all.

Now the tussles are fiercest in online retailing, where it’s hard to tell if anyone is winning. Retailers as big as Walmart and as small as Tweezerman.com all maintain their own websites, catering to an explosion of customer demand. Retail e-commerce sales expanded 15 percent in the U.S. in 2012—seven times as fast as traditional retail. But price competition is relentless, and profit margins are thin to nonexistent. It’s easy to regard this $186 billion market as a poisoned prize: too big to ignore, too treacherous to pursue.

Even the most successful online retailer, Amazon.com, has a business model that leaves many people scratching their heads. Amazon is on track to ring up $75 billion in worldwide sales this year. Yet it often operates in the red; last quarter, Amazon posted a $41 million loss. Amazon’s founder and chief executive officer, Jeff Bezos, is indifferent to short-term earnings, having once quipped that when the company achieved profitability for a brief stretch in 1995, “it was probably a mistake.”

Look more closely at Bezos’s company, though, and its strategy becomes clear. Amazon is constantly plowing cash back into its business. Its secretive advanced-research division, Lab 126, works on next-generation Kindles and other mobile devices. More broadly, Amazon spends heavily to create the most advanced warehouses, the smoothest customer-service channels, and other features that help it grab an ever-larger share of the market. As former Amazon manager Eugene Wei wrote in a recent blog post, “Amazon’s core business model does generate a profit with most every transaction … The reason it isn’t showing a profit is because it’s undertaken a massive investment to support an even larger sales base.”

Much of that investment goes straight into technology. To Amazon, retailing looks like a giant engineering problem. Algorithms define everything from the best way to arrange a digital storefront to the optimal way of shipping a package. Other big retailers spend heavily on advertising and hire a few hundred engineers to keep systems running. Amazon prefers a puny ad budget and a payroll packed with thousands of engineering graduates from the likes of MIT, Carnegie Mellon, and Caltech.

Other big merchants are getting the message. Walmart, the world’s largest retailer, two years ago opened an R&D center in Silicon Valley where it develops its own search engines and looks for startups to buy. But competing on Amazon’s terms doesn’t stop with putting up a digital storefront or creating a mobile app. Walmart has gone as far as admitting that it may have to rethink what its stores are for. To equal Amazon’s flawless delivery, this year it even floated the idea of recruiting shoppers out of its aisles to play deliveryman, whisking goods to customers who’ve ordered online.

Amazon is a tech innovator by necessity, too. The company lacks three of conventional retailing’s most basic elements: a showroom where customers can touch the wares; on-the-spot salespeople who can woo shoppers; and the means for customers to take possession of their goods the instant a sale is complete. In one sense, everything that Amazon’s engineers create is meant to make these fundamental deficits vanish from sight.

Amazon’s cunning can be seen in the company’s growing patent portfolio. Since 1994, Amazon.com and a subsidiary, Amazon Technologies, have won 1,263 patents. (By contrast, Walmart has just 53.) Each Amazon invention is meant to make shopping on the site a little easier, a little more seductive, or to trim away costs. Consider U.S. Patent No. 8,261,983, on “generating customized packaging,” which was granted in late 2012.

“We constantly try to drive down the percentage of air that goes into a shipment,” explains Dave Clark, the Amazon vice president who oversees the company’s nearly 100 warehouses, known as fulfillment centers. The idea of shipping goods in a needlessly bulky box (and paying a few extra cents to United Parcel Service or other carriers) makes him shudder. Ship nearly a billion packages a year, and those pennies add up. Amazon over the years has created more than 40 sizes of boxes – but even that isn’t enough. That’s the glory of Amazon’s packaging patent: when a customer’s odd pairing of items creates a one-of-a-kind shipment, Amazon now has systems that will compute the best way to pack that order and create a perfect box for it within 30 minutes.
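To get a feel for the problem (and only as a toy sketch; Amazon’s patented system is far more sophisticated), the core idea is a packing heuristic: given a catalogue of box sizes, pick the smallest box the order’s items will fit into, so that as little air as possible ships. The box catalogue, item dimensions and stacking rule below are invented for the example.

from itertools import permutations

# Candidate box sizes in centimetres (width, depth, height) - illustrative values only.
BOX_SIZES = [(20, 15, 5), (30, 22, 10), (40, 30, 20), (60, 40, 40)]

def fits(block, box):
    # A block fits if some rotation of it is within the box dimensions.
    return any(all(i <= b for i, b in zip(rot, box)) for rot in permutations(block))

def choose_box(items):
    # Naive heuristic: stack the items on top of one another, then pick the
    # smallest-volume box that holds the resulting bounding block.
    block = (max(i[0] for i in items),
             max(i[1] for i in items),
             sum(i[2] for i in items))
    candidates = [b for b in BOX_SIZES if fits(block, b)]
    return min(candidates, key=lambda b: b[0] * b[1] * b[2], default=None)

print(choose_box([(18, 12, 3), (25, 20, 6)]))  # -> (30, 22, 10)

A production system would also have to weigh item fragility, box cost and packing time across nearly a billion shipments a year, which is part of why the problem is worth a patent.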

For thousands of online merchants, it’s easier to live within Amazon’s ecosystem than to compete. So small retailers such as EasyLunchboxes.com have moved their inventory into Amazon’s warehouses, where they pay a commission on each sale for shipping and other services. That is becoming a highly lucrative business for Amazon, says Goldman Sachs analyst Heath Terry. He predicts Amazon will reap $3.5 billion in cash flow from third-party shipping in 2014, creating a very profitable side business that he values at $38 billion—about 20 percent of the company’s overall stock market value.

Jousting directly with Amazon is tougher. Researchers at Internet Retailer calculate that Amazon’s revenue exceeds that of its next 12 competitors combined. In a regulatory filing earlier this year, Target—the third-largest retailer in the U.S.—conceded that its “digital sales represented an immaterial amount of total sales.” For other online entrants, the most prudent strategies generally involve focusing on areas that the big guy hasn’t conquered yet, such as selling services, online “flash sales” that snare impulse buyers who can’t pass up a deal, or particularly challenging categories such as groceries. Yet many, if not most, of these upstarts are losing money.

Read the entire article here.

Image: Amazon fulfillment center, Scotland. Courtesy of Amazon / Wired.

Masters of the Universe: Silicon Valley Edition

As we all (should) know, the “real” masters of the universe (MOTU) center on He-Man and his supporting cast of characters from toy maker Mattel. In the 80s, we also found masters of the universe on Wall Street — bright young MBAs leading the charge towards the untold wealth (and eventual destruction) mined by investment banks. Ironically, many of the east coast MOTU have since disappeared from public view following the financial meltdown that many of them helped engineer. Now, we seem to be at risk from another group of arrogant MOTU: this time, a select group of high-tech entrepreneurs from Silicon Valley.

From the WSJ:

At a startup conference in the San Francisco Bay area last month, a brash and brilliant young entrepreneur named Balaji Srinivasan took the stage to lay out a case for Silicon Valley’s independence.

According to Mr. Srinivasan, who co-founded a successful genetics startup and is now a popular lecturer at Stanford University, the tech industry is under siege from Wall Street, Washington and Hollywood, which he believes are harboring resentment toward Silicon Valley’s efforts to usurp their cultural and economic power.

On its surface, Mr. Srinivasan’s talk, called “Silicon Valley’s Ultimate Exit,” sounded like a battle cry of the libertarian, anti-regulatory sensibility long espoused by some of the tech industry’s leading thinkers. After arguing that the rest of the country wants to put a stop to the Valley’s rise, Mr. Srinivasan floated a plan for techies to build an “opt-in society, outside the U.S., run by technology.”

His idea seemed a more expansive version of Google Chief Executive Larry Page’s call for setting aside “a piece of the world” to try out controversial new technologies, and investor Peter Thiel’s “Seastead” movement, which aims to launch tech-utopian island nations.

But there was something more significant about Mr. Srinivasan’s talk than simply a rehash of Silicon Valley’s grievances. It was one of several recent episodes in which tech stars have sought to declare the Valley the nation’s leading center of power and to dismiss non-techies as unimportant to the nation’s future.

For instance, on “This Week in Start-Ups,” a popular tech podcast, the venture capitalist Chamath Palihapitiya recently argued that “it’s becoming excruciatingly, obviously clear to everyone else that where value is created is no longer in New York; it’s no longer in Washington; it’s no longer in L.A.; it’s in San Francisco and the Bay Area.”

This is Silicon Valley’s superiority complex, and it sure is an ugly thing to behold. As the tech industry has shaken off the memories of the last dot-com bust, its luminaries have become increasingly confident about their capacity to shape the future. And now they seem to have lost all humility about their place in the world.

Sure, they’re correct that whether you measure success financially or culturally, Silicon Valley now seems to be doing better than just about anywhere else. But there is a suggestion bubbling beneath the surface of every San Francisco networking salon that the industry is unstoppable, and that its very success renders it immune to legitimate criticism.

This is a dangerous idea. For Silicon Valley’s own sake, the triumphalist tone needs to be kept in check. Everyone knows that Silicon Valley aims to take over the world. But if they want to succeed, the Valley’s inhabitants would be wise to at least pretend to be more humble in their approach.

I tried to suggest this to Mr. Srinivasan when I met him at a Palo Alto, Calif., cafe a week after his incendiary talk. We spoke for two hours, and I found him to be disarming and charming.

He has a quick, capacious mind, the sort that flits effortlessly from discussions of genetics to economics to politics to history. (He is the kind of person who will refer to the Treaty of Westphalia in conversation.)

Contrary to press reports, Mr. Srinivasan says he wasn’t advocating Silicon Valley’s “secession.” And, in fact, he hadn’t used that word. Instead he was advocating a “peaceful exit,” something similar to what his father did when he emigrated from India to the U.S. in the past century. But when I asked him what harms techies faced that might prompt such a drastic response, he couldn’t offer much evidence.

He pointed to a few headlines in the national press warning that robots might be taking over people’s jobs. These, he said, were evidence of the rising resentment that technology will foster as it alters conditions across the country and why Silicon Valley needs to keep an escape hatch open.

But I found Mr. Srinivasan’s thesis to be naive. According to the industry’s own hype, technologies like robotics, artificial intelligence, data mining and ubiquitous networking are poised to usher in profound changes in how we all work and live. I believe, as Mr. Srinivasan argues, that many of these changes will eventually improve human welfare.

But in the short run, these technologies could cause enormous economic and social hardships for lots of people. And it is bizarre to expect, as Mr. Srinivasan and other techies seem to, that those who are affected wouldn’t criticize or move to stop the industry pushing them.

Tech leaders have a choice in how to deal with the dislocations their innovations cause. They can empathize and even work with stalwarts of the old economy to reduce the shock of new invention in sectors such as Hollywood, the news and publishing industries, the government, and finance—areas that Mr. Srinivasan collectively labels “the paper belt.”

They can continue to disrupt many of these institutions in the marketplace without making preening claims about the superiority of tech culture. (Apple’s executives rarely shill for the Valley, but still sometimes manage to change the world).

Or, tech leaders can adopt an oppositional tone: If you don’t recognize our superiority and the rightness of our ways, we’ll take our ball and go home.

Read the entire article here.

Image courtesy of Silicon Valley.

Zombie Technologies

Next time Halloween festivities roll around, consider dressing up as a fax machine — one of several technologies that seem unwilling to die.

From Wired:

One of the things we love about technology is how fast it moves. New products and new services are solving our problems all the time, improving our connectivity and user experience on a nigh-daily basis.

But underneath sit the technologies that just keep hanging on. Every flesh wound, every injury, every rupture of their carcass levied by a new device or new method of doing things doesn’t merit even so much as a flinch from them. They keep moving, slowly but surely, eating away at our livelihoods. They are the undead of the technology world, and they’re coming for your brains.

Below, you’ll find some of technology’s more persistent walkers—every time we seem to kill them off, more hordes still clinging to their past relevancy lumber up to distract you. It’s about time we lodged an axe in their skulls.

Oddly specific yet totally unhelpful error codes

It’s common when you’re troubleshooting hardware and software—something, somewhere throws an error code that pairs an incredibly specific alphanumerical code (“0x000000F4”) with a completely generic and unhelpful message like “an unknown error occurred” or “a problem has been detected.”

Back in computing’s early days, the desire to use these codes instead of providing detailed troubleshooting guides made sense—storage space was at a premium, Internet connectivity could not be assumed, and it was a safe bet that the software in question came with some tome-like manual to assist people in the event of problems. Now, with connectivity virtually omnipresent and storage space a non-issue, it’s not clear why codes like these don’t link to more helpful information in some way.

All too often, you’re left to take the law into your own hands. Armed with your error code, you head over to your search engine of choice and punch it in. At this point, one of two things can happen, and I’m not sure which is more infuriating: you either find an expanded, totally helpful explanation of the code and how to fix it on the official support website (could you really not have built that into the software itself?), or, alternatively, you find a bunch of desperate, inconclusive forum posts that offer no additional insight into the problem (though they do offer insight into the absurdity of the human condition). There has to be a better way.
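There is at least a sketch of that better way, and it is not complicated. Purely as an illustration (the lookup table, messages and URLs below are invented placeholders, not any vendor’s real documentation), an application could ship a small table mapping each opaque code to a plain-English explanation and a link to the relevant support page, and fall back to a support-site search when the code is unknown.

# Illustrative only: map opaque error codes to something actionable.
ERROR_HELP = {
    "0x000000F4": ("A critical system process terminated unexpectedly.",
                   "https://support.example.com/errors/0x000000F4"),
}

def explain(code):
    # Look up the code; fall back to a support-site search for unknown codes.
    summary, url = ERROR_HELP.get(
        code, ("An unknown error occurred.",
               "https://support.example.com/search?q=" + code))
    return "{}: {} See {}".format(code, summary, url)

print(explain("0x000000F4"))

Even this crude lookup would save users the trip to a search engine that the error dialog could have made for them.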

Copper landlines

I’ve been through the Northeast blackout, the 9-11 attacks, and Hurricane Sandy, all of which took out cell service at the same time family and friends were most anxious to get in touch. So I’m a prime candidate for maintaining a landline, which carries enough power to run phones, often provided by a facility with a backup generator. And, in fact, I’ve tried to retain one. But corporate indifference has turned copper wiring into the technology of the living dead.

Verizon really wants you to have two things: cellular service and FiOS. Except it doesn’t actually want to give you FiOS—the company has stopped expanding its fiber footprint, and it’s moving with the speed of a glacier to hook up neighborhoods that are FiOS accessible. That has left Verizon in a position where the company will offer you cell service, but, if you don’t want that, it will stick you with a technology it no longer wants to support: service over copper wires.

This was made explicit in the wake of Sandy when a shore community that had seen its wires washed out was offered cellular service as a replacement. When the community demanded wires, Verizon backed down and gave it FiOS. But the issue shows up in countless other ways. One of our editors recently decided to have DSL service over copper wire activated in his apartment; Verizon took two weeks to actually get the job done.

I stuck with Verizon DSL in the hope that I would be able to transfer directly to FiOS when it finally got activated. But Verizon’s indifference to wired service led to a six-month nightmare. I’d experience erratic DSL, call for Verizon for help, and have it fixed through a process that cut off the phone service. Getting the phone service restored would degrade the DSL. On it went until I gave up and switched to cable—which was a good thing, because it took Verizon about two years to finally put fiber in place.

At the moment, AT&T still considers copper wiring central to its services, but it’s not clear how long that position will remain tenable. If AT&T’s position changes, then it’s likely that the company will also treat the copper just as Verizon has: like a technology that’s dead even as it continues to shamble around causing trouble.

The scary text mode insanity lying in wait beneath it all

PRESS DEL TO ENTER SETUP. Oh, BIOS, how I hate thee. Often the very first thing you have to deal with when dragging a new computer out of the box is the text mode BIOS setup screen, where you have to figure out how to turn on support for legacy USB devices, or change the boot order, or disable PXE booting, or force onboard video to work, or any number of other crazy things. It’s like being sucked into a time warp back into 1992.

Though slowly being replaced across the board by UEFI, BIOS setup screens are definitely still a thing even on new hardware—the small dual-Ethernet server I purchased just a month ago to serve as my new firewall required me to spend minutes figuring out which of its onboard USB ports were legacy-enabled and then which key summoned the setup screen (F2? Delete? F10? F1? IT’S NEVER THE SAME ONE!). Once in, I had to figure out how to enable USB device booting so that I could get Smoothwall installed, but the computer inexplicably wouldn’t boot from my carefully prepared USB stick, even though the stick worked great on the other servers in the closet. I ended up having to install from a USB CD-ROM drive instead.

Many motherboard OEMs now provide a way to adjust BIOS options from inside of Windows, which is great, but that won’t necessarily help you on a fresh Windows install (or on a computer you’ve parted together yourself and on which you haven’t installed the OEM’s universally hideous BIOS tweaking application). UEFI as a replacement has been steadily gaining ground for almost three years now, but we’ve likely got many more years of occasionally having to reboot and hold DEL to adjust some esoteric settings. Ugh.

Fax machines, and the general concept of faxing

Faxing has a longer and more venerable history than I would have guessed, based on how abhorrent it is in the modern day. The first commercial telefaxing service was established in France in 1865 via wire transmission, and we started sending faxes over phone lines circa 1964. For a long time, faxing was actually the best and fastest way to get a photographic clone of one piece of paper to an entirely different geographical location.

Then came e-mail. And digital cameras. And electronic signatures. And smartphones with digital cameras. And Passbook. And cloud storage. Yet people continue to ask me to fax them things.

When it comes to signing contracts or verifying or simply passing along information, digital copies, properly backed up with redundant files everywhere, are easier to deal with at literally every step in the process. On the very rare occasion that a physical piece of paper is absolutely necessary, here: e-mail it; I will sign it electronically and e-mail it back to you, and you print it out. You already sent me that piece of paper? I will sign it, take a picture with my phone, e-mail that picture to you, and you print it out. Everyone comes out ahead, no one has to deal with a fax machine.

That a business, let alone several businesses, has actually cropped up around the concept of allowing people to e-mail documents to a fax number is ludicrous. Get an e-mail address. They are free. Get a printer. It is cheaper than a fax machine. Don’t get a printer that is also a fax machine, because then you are just encouraging this technological concept to live on when, in fact, it needs to die.

Read the entire article here.

Image courtesy of Mobiledia.

100-Year Starship Project

As Voyager 1 embarks on its interstellar voyage, having recently left the confines of our solar system, NASA and the Pentagon are collaborating on the 100-Year Starship Project. This effort aims to make human interstellar travel a reality within the next 100 years. While this is an admirable goal, let’s not forget that the current record holder for the fastest human-made object to leave the solar system — Voyager 1 — would still take on the order of 75,000 years to reach the nearest star to Earth. So NASA had better get its creative juices flowing.
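That figure is only a back-of-the-envelope estimate, assuming Voyager 1’s roughly 17 km/s cruising speed and Proxima Centauri at about 4.24 light years; the arithmetic is short enough to check:

# Rough travel-time estimate for Voyager 1 to the nearest star (round figures).
SPEED_KM_S = 17.0            # Voyager 1's speed relative to the Sun, approx.
LIGHT_YEAR_KM = 9.461e12     # kilometres in one light year
DISTANCE_LY = 4.24           # Proxima Centauri, approx.
SECONDS_PER_YEAR = 3.156e7

seconds = DISTANCE_LY * LIGHT_YEAR_KM / SPEED_KM_S
print(round(seconds / SECONDS_PER_YEAR))  # roughly 75,000 years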

From the Guardian:

It would be hard enough these days to find a human capable of playing a 12-inch LP, let alone an alien. So perhaps it is time for Nasa to update its welcome pack for extraterrestrials.

The agency announced earlier this month that its Voyager 1 probe has left the solar system, becoming the first object to enter interstellar space. On board is a gold-plated record from 1977.

It contains greetings in dozens of languages, sounds such as Morse code, a tractor, a kiss, music – from Bach to Chuck Berry – and pictures of life on Earth, including a sperm fertilising an egg, athletes, and the Sydney Opera House.

Now, Jon Lomberg, the original Golden Record design director, has launched a project aiming to persuade Nasa to upload a current snapshot of Earth to one of its future interstellar craft as a sort of space-age message in a bottle.

The New Horizons spacecraft will reach Pluto in 2015, then is expected to leave the solar system in about three decades. The New Horizons Message Initiative wants to create a crowd-sourced “human fingerprint” for extra-terrestrial consumption that can be digitally uploaded to the probe as its journey continues. The message could be modified to reflect changes on Earth as years go by.

With the backing of numerous space experts, Lomberg is orchestrating a petition and fundraising campaign. The first stage will firm up what can be sent in a format that would be easy for aliens to decode; the second will be the online crowd-sourcing of material.

Especially given the remote possibility that the message will ever be read, Lomberg emphasises the benefits to earthlings of starting a debate about how we should introduce ourselves to interplanetary strangers.

“The Voyager record was our best foot forward. We just talked about what we were like on a good day … no wars or famine. It was a sanitised portrait. Should we go warts and all? That is a legitimate discussion that needs to be had,” he said.

“The previous messages were decided by elite groups … Everybody is equally entitled and qualified to do it. If you’re a human on Earth you have a right to decide how you’re presented.”

“Astronauts have said that you step off the Earth and look back and you see things differently. Looking at yourself with a different perspective is always useful. The Golden Record has had a tremendous effect in terms of making people think about the culture in ways they wouldn’t normally do.”

Buoyed by the Voyager news, scientists gathered in Houston last weekend for the annual symposium of the Nasa- and Pentagon-backed 100-Year Starship project, which aims to make human interstellar travel a reality within a century.

“I think it’s an incredible boost. I think it makes it much more plausible,” said Dr Mae Jemison, the group’s principal and the first African-American woman in space. “What it says is that we know we can get to interstellar space. We got to interstellar space with technologies that were developed 40 years ago. There is every reason to suspect that we can create and build vehicles that can go that far, faster.”

Jeff Nosanov, of Nasa’s Jet Propulsion Laboratory, near Los Angeles, hopes to persuade the agency to launch about ten interstellar probes to gather data from a variety of directions. They would be powered by giant sails that harness the sun’s energy, much like a boat on the ocean is propelled by wind. Solar sails are gaining credibility as a realistic way of producing faster spacecraft, given the limitations of existing rocket technology. Nasa is planning to launch a spacecraft with a 13,000 square-foot sail in November next year.

“We have a starship and it’s 36 years old, so that’s really good. This is not as impossible as it sounds. Where the challenge becomes ludicrous and really astounding is the distances from one star to another,” Nosanov said.

Read the entire article here.

Image: USS Enterprise (NCC-1701). Courtesy of Star Trek franchise.

Daddy, What’s a Stamp?

Sadly for philatelists and aficionados of these sticky, miniature works of art, stamps may be doomed to the same fate as cassette tapes, vinyl disks, 35mm film, floppy drives, and wrist watches.

From the New York Times:

It could easily be a glorious Pharaonic tomb, stocked with all the sustenance a philatelist might require for the afterlife. The William H. Gross Stamp Gallery, which opened on Sunday here at the Smithsonian National Postal Museum, includes an $18 million array of display spaces, artifacts, trays and touch screens. Its 20,000 items have been culled from more than 6 million at the museum, one of the world’s great collections. And the gallery’s 12,000 square feet are devoted to a single object that seems on the brink of extinction: the postage stamp.

But why should those of us who have never been consumed by the desire to hunt down a rare Inverted Jenny or a Brazilian Bull’s-eye give stamps (or their extinction) much attention? For most of us, they are utilitarian: we lick or peel, stick and use. And if, like me, you find a serious drop-off in the aesthetic values of American stamps in recent decades, what inspiration is there for collecting or contemplation? Why care that neither rain nor snow nor gloom of night are any longer the main hurdles for snail mail’s continuing rounds?

The Post Office has been struggling to reinvent itself, reeling from deficits and awaiting rescue. Maybe that’s why many recent stamps seem so hyped-up and saturated with color: they are straining to cheer up mail carriers, as well as customers.

But this new exhibit space provides more vigorous cheer. It is the world’s largest stamp gallery, we are told, made possible by a $10 million gift from William H. Gross, founder of the investment company Pimco. And surprisingly, until now, the National Postal Museum has had no major philatelic display showing off its collection; its main galleries focus on mail technology and delivery, not its sticky symbols. You might not begin collecting stamps immediately after a visit to their new home (though every visitor will get a head start with a selection of six free stamps), but you will start to think differently about them. Their current inconsequence is a far cry from the trust and influence they once possessed.

The Postmaster General, for example, was once so powerful that the position was included in the president’s cabinet. The mails were once the nation’s premier courier; in 1958, even the priceless Hope diamond was entrusted to the Postal Service for a cost of $145.29, including $1 million worth of insurance. (The heavily stamped and metered envelope is here; the diamond is nearby, at the Natural History Museum.)

And stamp images once had so much authority that in 1901, an engineer opposed to the digging of a Nicaraguan canal scared United States senators by presenting each with a Nicaraguan stamp showing the eruption of the nearby Mount Momotombo; the canal was dug in Panama instead.

We see, too, how mail delivery affected all modes of transportation. Nineteenth-century ocean liners could carry passengers because they were heavily subsidized by government mails. The RMS Titanic was so called because it was a Royal Mail Ship: we see a rusty set of mail keys, found on the drowned body of the Titanic’s sea post clerk.

The museum’s philatelic curator, Cheryl R. Ganz, explores the full range of stamps’ importance. Some are collector’s “gems,” including the famous Inverted Jenny stamp of 1918, in which a flying biplane was printed upside down on a single sheet of 100 stamps. There are discussions of delivery methods (in 1929, seaplanes carrying mail were catapulted off ocean liners); disasters (the Church Street post box damaged on Sept. 11 is here); and counterfeits (Jean de Sperati, once the “world’s most famous stamp forger,” was arrested in France in 1943 “for exporting rare stamps without a license,” but he beat the charge by proving that the stamps were his own forgeries).

There are also primers on stamp preservation, manufacture and design. On a touch-table, you can survey American stamps released since World War II, then e-mail a selection. One wall is lined with photographs of famous stamp collectors, offering riddlers untapped possibilities: How was Charlie Chaplin like Ayn Rand? What did Franklin Delano Roosevelt share with John Lennon?

Collectors, we see too, are entranced by the stamp’s physical trace, the path it takes through the world. And many have arrived here. Even the most experienced philatelist will find wonders in hundreds of sliding vertical frames that can be pulled out from the walls. One room offers an in-depth survey of American stamps; another space provides a broad international sampling.

Read the entire article here.

Image courtesy of Google search.

A Post-PC, Post-Laptop World

Not too long ago the founders and shapers of much of our IT world were dreaming up new information technologies, tools and processes that we didn’t know we needed. These tinkerers became the establishment luminaries that we still love or hate — Microsoft, Dell, HP, Apple, Motorola and IBM. And, of course, they are still around.

But the world that they constructed is imploding and nobody really knows where it is heading. Will the leaders of the next IT revolution come from the likes of Google or Facebook? Or, as is more likely, is this just a prelude to a more radical shift, with seeds being sown in anonymous garages and labs across the U.S. and other tech hubs? Regardless, we are in for some unpredictable and exciting times.

From ars technica:

Change happens in IT whether you want it to or not. But even with all the talk of the “post-PC” era and the rise of the horrifically named “bring your own device” hype, change has happened in a patchwork. Despite the disruptive technologies documented on Ars and elsewhere, the fundamentals of enterprise IT have evolved slowly over the past decade.

But this, naturally, is about to change. The model that we’ve built IT on for the past 10 years is in the midst of collapsing on itself, and the companies that sold us the twigs and straw it was built with—Microsoft, Dell, and Hewlett-Packard to name a few—are facing the same sort of inflection points in their corporate life cycles that have ripped past IT giants to shreds. These corporate giants are faced with moments of truth despite making big bets on acquisitions to try to position themselves for what they saw as the future.

Predicting the future is hard, especially when you have an installed base to consider. But it’s not hard to identify the economic, technological, and cultural forces that are converging right now to shape the future of enterprise IT in the short term. We’re not entering a “post-PC” era in IT—we’re entering an era where the device we use to access applications and information is almost irrelevant. Nearly everything we do as employees or customers will be instrumented, analyzed, and aggregated.

“We’re not on a 10-year reinvention path anymore for enterprise IT,” said David Nichols, Americas IT Transformation Leader at Ernst & Young. “It’s more like [a] five-year or four-year path. And it’s getting faster. It’s going to happen at a pace we haven’t seen before.”

While the impact may be revolutionary, the cause is more evolutionary. A host of technologies that have been the “next big thing” for much of the last decade—smart mobile devices, the “Internet of Things,” deep analytics, social networking, and cloud computing—have finally reached a tipping point. The demand for mobile applications has turned what were once called “Web services” into a new class of managed application programming interfaces. These are changing not just how users interact with data, but the way enterprises collect and share data, write applications, and secure them.

Add the technologies pushed forward by government and defense in the last decade (such as facial recognition) and an abundance of cheap sensors, and you have the perfect “big data” storm. This sea of structured and unstructured data could change the nature of the enterprise or drown IT departments in the process. It will create social challenges as employees and customers start to understand the level to which they are being tracked by enterprises. And it will give companies more ammunition to continue to squeeze more productivity out of a shrinking workforce, as jobs once done by people are turned over to software robots.

There has been a lot of talk about how smartphones and tablets have supplanted the PC. In many ways, that talk is true. In fact, we’re still largely using smartphones and tablets as if they were PCs.

But aside from mobile Web browsing and the use of tablets as a replacement for notebook PCs in presentations, most enterprises still use mobile devices the same way they used the BlackBerry in 1999—for e-mail. Mobile apps are the new webpage: everybody knows they need one to engage customers, but few are really sure what to do with them beyond what customers use their websites for. And while companies are trying to engage customers using social media on mobile, they’re largely not using the communications tools available on smart mobile devices to engage their own employees.

“I think right now, mobile adoption has been greatly overstated in terms of what people say they do with mobile versus mobile’s potential,” said Nichols. “Every CIO out there says, ‘Oh, we have mobile-enabled our workforce using tablets and smartphones.’ They’ve done mobile enablement but not mobile integration. Mobility at this point has not fundamentally changed the way the majority of the workforce works, at least not in the last five to six years.”

Smartphones make very poor PCs. But they have something no desktop PC has—a set of sensors that can provide a constant flow of data about where their user is. There’s visual information pulled in through a camera, motion and acceleration data, and even proximity. When combined with backend analytics, they can create opportunities to change how people work, collaborate, and interact with their environment.

Machine-to-machine (M2M) communications is a big part of that shift, according to Nichols. “Allowing devices with sensors to interact in a meaningful way is the next step,” he said. That step spans from the shop floor to the data center to the boardroom, as the devices we carry track our movements and our activities and interact with the systems around us.

Retailers are beginning to catch on to that, using mobile devices’ sensors to help close sales. “Everybody gets the concept that a mobile app is a necessity for a business-to-consumer retailer,” said Brian Kirschner, the director of Apigee Institute, a research organization created by the application infrastructure vendor Apigee in collaboration with executives of large enterprises and academic researchers. “But they don’t always get the transformative force on business that apps can have. Some can be small. For example, Home Depot has an app to help you search the store you’re in for what you’re looking for. We know that failure to find something in the store is a cause of lost sales and that Web search is useful and signs over aisles are ineffective. So the mobile app has a real impact on sales.”

But if you’ve already got stock information, location data for a customer, and e-commerce capabilities, why stop at making the app useful only during business hours? “If you think of the full potential of a mobile app, why can’t you buy something at the store when it’s closed if you’re near the store?” Kirschner said. “Instead of dropping you to a traditional Web process and offering you free shipping, they could have you pick it up at the store where you are tomorrow.”

That’s a change that’s being forced on many retailers, as noted in an article from the most recent MIT Sloan Management Review by a trio of experts: Erik Brynjolfsson, a professor at MIT’s Sloan School of Management and the director of the MIT Center for Digital Business; Yu Jeffrey Hu of the Georgia Institute of Technology; and Mohammed Rahman of the University of Calgary. If retailers don’t offer a way to meet mobile-equipped customers, they’ll buy it online elsewhere—often while standing in their store. Offering customers a way to extend their experience beyond the store’s walls is the kind of mobile use that’s going to create competitive advantage from information technology. And it’s the sort of competitive advantage that has long been milked out of the old IT model.

Nichols sees the same sort of technology transforming not just relationships with customers but the workplace itself. Say, for example, you’re in New York, and you want to discuss something with two colleagues. You request an appointment using your mobile device, and based on your location data, the location data of your colleagues, and the timing of the meeting, backend systems automatically book you a conference room and set up a video link to a co-worker out of town.

Based on analytics and the title of the meeting, relevant documents are dropped into a collaboration space. Your device records the meeting to an archive and notes who has attended in person. And this conversation is automatically transcribed, tagged, and forwarded to team members for review.

“Having location data to reserve conference rooms and calls and having all other logistics be handled in background changes the size of the organization I need to support that,” Nichols said.

The same applies to manufacturing, logistics, and other areas where applications can be tied into sensors and computing power. “If I have a factory where a machine has a belt that needs to be reordered every five years and it auto re-orders and it gets shipped without the need for human interaction, that changes the whole dynamics of how you operate,” Nichols said. “If you can take that and plug it into a proper workflow, you’re going to see an entirely new sort of workforce. That’s not that far away.”

Wearable devices like Google’s Glass will also feed into the new workplace. Wearable tech has been in use in some industries for decades, and in some cases it’s just an evolution from communication systems already used in many retail and manufacturing environments. But the ability to add augmented reality—a data overlay on top of a real world location—and to collect information without reaching for a device will quickly get traction in many enterprises.

Read the entire article here.

Image: Commodore PET (Personal Electronic Transactor) 2001 Series, circa 1977. Courtesy of Wikipedia.

Read Something Longer Than 140 Characters

Unplugging from the conveniences and obsessions of our age can be difficult, but not impossible. For those of you who have a demanding boss or needful relationships, or who lack the will to do away with the email, texts, tweets, voicemail, posts, SMS, likes and status messages, there may still be (some) hope without having to go completely cold turkey.

While we would recommend you retreat to a quiet cabin by a still pond in the dark woods, the tips below may help you unwind if you’re frazzled but shun the idea of a remote hideaway. While you’re at it, why not immerse yourself in a copy of Walden?

From the Wall Street Journal:

You may never have read “Walden,” but you’re probably familiar with the premise: a guy with an ax builds a cabin in the woods and lives there for two years to tune out the inessential and discover himself. When Henry David Thoreau began his grand experiment, in 1845, he was about to turn 28—the age of a typical Instagram user today. Thoreau lived with his parents right before his move. During his sojourn, he returned home to do laundry.

Thoreau’s circumstances, in other words, weren’t so different from those of today’s 20-somethings—which is why seeking tech advice from a 19th-century transcendentalist isn’t as far-fetched as it may sound. “We do not ride on the railroad; it rides upon us,” he wrote in “Walden.” That statement still rings true for those of us who have lived with the latest high-tech wonders long enough to realize how much concentration they end up zapping. “We do not use the Facebook; it uses us,” we might say.

But even the average social-media curmudgeon’s views on gadgetry aren’t as extreme as those of Thoreau. Whereas he saw inventions “as improved means to an unimproved end,” most of us genuinely love our iPhones, Instagram feeds and on-demand video. We just don’t want them to take over our lives, lest we forget the joy of reading without the tempting interruption of email notifications, or the pleasure of watching just one good episode of a television show per sitting.

Thankfully, we don’t have to go off the grid to achieve more balance. We can arrive at a saner modern existence simply by tweaking a few settings on our gadgets and the services we rely on. Why renounce civilization when technology makes it so easy to duck out for short stretches?

Inspired by the writings of Thoreau, we looked for simple tools—the equivalent of Thoreau’s knife, ax, spade and wheelbarrow—to create the modern-day equivalent of a secluded cabin in the woods. Don’t worry: There’s still Wi-Fi.

1. Manage your Facebook ‘Friendships’

As your Facebook connections grow to include all 437 of the people you sort of knew in high school, it’s easy to get to the point where the site’s News Feed becomes a hub of oversharing—much of it accidental. (Your co-worker probably had no idea the site would post his results of the “Which Glee Character Are You?” quiz.) Adjusting a few settings will bring your feed back to a more Thoreauvian state.

Facebook tries to figure out which posts will be most interesting to you, but nothing beats getting in there yourself and decluttering by hand. The process is like playing Whac-A-Mole, with your hammer aimed at the irrelevant posts that pop up in your News Feed.

Start by removing serial offenders: On the website, hover your cursor over the person’s name as it appears above a post, hit the “Friends” button that pops up and then uncheck “Show in News Feed” to block future posts. If that feels too drastic, click “Acquaintances” from the pop-up screen instead. This relegates the person to a special “friends list” whose updates will appear lower in the News Feed. (Fear not, the person won’t be notified about either of the above demotions.)

You can go a step further and scale back the types of updates you receive from those you’ve added to Acquaintances (as well as any other friends lists you create). Hover your cursor over the News Feed’s “Friends” heading then click “More” and select the list name. Then click the “Manage Lists” button and, finally, “Choose Update Types.”

Unless you’re in the middle of a fierce match of Bejeweled Blitz, you can safely deselect “Games” and most likely “Music and Videos,” too. Go out on a limb and untick “Comments and Likes” to put the kibosh on musings and shout-outs about other people’s posts. You’ll probably want to leave the mysteriously named “Other Activity” checked, though; while it includes some yawn-inducing updates, the category also encompasses announcements of major life events, like engagements and births.

3. Read Something Longer Than 140 Characters

Computers, smartphones and tablets are perfect for skimming TMZ, but for hunkering down with the sort of thoughtful text Thoreau would endorse, a dedicated ereader is the tech equivalent of a wood-paneled reading room. Although there are fancier models out there, the classic Kindle and Kindle Paperwhite are still tough to beat. Because their screens aren’t backlit, they don’t cause eye strain the way a tablet or color ereader can. While Amazon sells discounted models that display advertisements (each costs $20 less), don’t fall for the trap: The ads undermine the tranquility of the device. (If you already own an ad-supported Kindle, remove the ads for $20 using the settings page.) Also be sure to install the Send to Kindle plug-in for the Chrome and Firefox Web browsers. It lets you beam long articles that you stumble upon online to the device, magically stripping away banner ads and other Web detritus in the process.

Read the entire article here.

Image: Henry David Thoreau, 1856. Courtesy of Wikipedia.

Listening versus Snooping

Many of your mobile devices already know where you are and what you’re doing. Increasingly, the devices you use will record your every step and every word (and those of any callers), and even know your mood and health status. Analysts and eavesdroppers at the U.S. National Security Agency (NSA) must be licking their collective lips.

From Technology Review:

The Moto X, the new smartphone from Google’s Motorola Mobility, might be remembered best someday for helping to usher in the era of ubiquitous listening.

Unlike earlier phones, the Moto X includes two low-power chips whose only function is to process data from a microphone and other sensors—without tapping the main processor and draining the battery. This is a big endorsement of the idea that phones could serve you better if they did more to figure out what is going on (see “Motorola Reveals First Google-Era Phone”). For instance, you might say “OK Google Now” to activate Google’s intelligent assistant software, rather than having to first tap the screen or press buttons to get an audio-processing function up and running.

This brings us closer to having phones that continually monitor their auditory environment to detect the phone owner’s voice, discern what room or other setting the phone is in, or pick up other clues from background noise. Such capacities make it possible for software to detect your moods, know when you are talking and not to disturb you, and perhaps someday keep a running record of everything you hear.

“Devices of the future will be increasingly aware of the user’s current context, goals, and needs, will become proactive—taking initiative to present relevant information,” says Pattie Maes, a professor at MIT’s Media Lab. “Their use will become more integrated in our daily behaviors, becoming almost an extension of ourselves. The Moto X is definitely a step in that direction.”

Even before the Moto X, there were apps, such as the Shazam music-identification service, that could continually listen for a signal. When users enable a new feature called “auto-tagging” on a recent update to Shazam’s iPad app, Shazam listens to everything in the background, all the time. It’s seeking matches for songs and TV content that the company has stored on its servers, so you can go back and find information about something that you might have heard a few minutes ago. But the key change is that Shazam can now listen all the time, not just when you tap a button to ask it to identify something. The update is planned for other platforms, too.

But other potential uses abound. Tanzeem Choudhury, a researcher at Cornell University, has demonstrated software that can detect whether you are talking faster than normal, or other changes in pitch or frequency that suggest stress. The StressSense app she is developing aims to do things like pinpoint the sources of your stress—is it the 9:30 a.m. meeting, or a call from Uncle Hank?

Similarly, audio analysis could allow the phone to understand where it is—and make fewer mistakes, says Vlad Sejnoha, the chief technology officer of Nuance Communications, which develops voice-recognition technologies. “I’m sure you’ve been in a situation where someone has a smartphone in their pocket and suddenly a little voice emerges from the pocket, asking how they can be helped,” he says. That’s caused when an assistance app like Apple’s Siri is accidentally triggered. If the phone’s always-on ears could accurately detect the muffled acoustical properties of a pocket or purse, it could eliminate this false start and stop phones from accidentally dialing numbers as well. “That’s a work in progress,” Sejnoha says. “And while it’s amusing, I think the general principle is serious: these devices have to try to understand the users’ world as much as possible.”

A phone might use ambient noise levels to decide how loud a ringtone should be: louder if you are out on the street, quiet if inside, says Chris Schmandt, director of the speech and mobility group at MIT’s Media Lab. Taking that concept a step further, a phone could detect an ambient conversation and recognize that one of the speakers was its owner. Then it might mute a potentially disruptive ringtone unless the call was from an important person, such as a spouse, Schmandt added.
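
Schmandt’s adaptive-ringtone idea reduces to a rule evaluated over a couple of signals the phone already has. The sketch below is illustrative only; the decibel thresholds and the notion of an important_contacts list are assumptions, not any phone maker’s actual implementation.

```python
def choose_ring_behavior(ambient_db: float, owner_is_talking: bool,
                         caller: str, important_contacts: set) -> str:
    """Pick a ring behavior from ambient noise level and conversation state.

    ambient_db         -- estimated background noise level from the microphone
    owner_is_talking   -- True if the phone believes its owner is mid-conversation
    caller             -- caller ID string
    important_contacts -- callers allowed to interrupt a conversation
    """
    if owner_is_talking and caller not in important_contacts:
        return "silent"        # mute a potentially disruptive ringtone
    if ambient_db > 70:        # e.g. out on a noisy street
        return "ring_loud"
    if ambient_db > 50:        # e.g. indoors with background chatter
        return "ring_normal"
    return "ring_quiet"        # quiet room

print(choose_ring_behavior(75.0, False, "unknown", {"spouse"}))  # ring_loud
print(choose_ring_behavior(45.0, True, "spouse", {"spouse"}))    # ring_quiet
```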

Read the entire article here.

Warp Factor

To date, the fastest speed ever traveled by humans is just under 25,000 miles per hour. This milestone was reached by the reentry capsule from the Apollo 10 moon mission — reaching 24,961 mph as it hurtled through Earth’s upper atmosphere. Yet this pales in comparison to the speed of light, which clocks in at 186,282 miles per second in a vacuum. A quick visit to the calculator puts Apollo 10 at 6.93 miles per second, or about 0.0037 percent of the speed of light!
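
The back-of-the-envelope arithmetic behind that figure checks out; a quick sketch of the conversion:

```python
apollo_10_mph = 24_961          # Apollo 10 re-entry speed, miles per hour
speed_of_light_mps = 186_282    # speed of light, miles per second (vacuum)

apollo_10_mps = apollo_10_mph / 3600            # about 6.93 miles per second
fraction_of_c = apollo_10_mps / speed_of_light_mps

print(f"{apollo_10_mps:.2f} mi/s, or {fraction_of_c:.4%} of the speed of light")
# -> 6.93 mi/s, or 0.0037% of the speed of light
```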

Despite our very pedestrian speeds, many dream of a future in which humans might reach the stars, powered by some kind of “warp drive” (yes, Star Trek comes to mind). A handful of researchers at NASA are actively pondering this today. Still, our modest technology and our incomplete understanding of the workings of the universe suggest that an Alcubierre-like approach remains centuries from our grasp.

From the New York Times:

Beyond the security gate at the Johnson Space Center’s 1960s-era campus here, inside a two-story glass and concrete building with winding corridors, there is a floating laboratory.

Harold G. White, a physicist and advanced propulsion engineer at NASA, beckoned toward a table full of equipment there on a recent afternoon: a laser, a camera, some small mirrors, a ring made of ceramic capacitors and a few other objects.

He and other NASA engineers have been designing and redesigning these instruments, with the goal of using them to slightly warp the trajectory of a photon, changing the distance it travels in a certain area, and then observing the change with a device called an interferometer. So sensitive is their measuring equipment that it was picking up myriad earthly vibrations, including people walking nearby. So they recently moved into this lab, which floats atop a system of underground pneumatic piers, freeing it from seismic disturbances.

The team is trying to determine whether faster-than-light travel — warp drive — might someday be possible.

Warp drive. Like on “Star Trek.”

“Space has been expanding since the Big Bang 13.7 billion years ago,” said Dr. White, 43, who runs the research project. “And we know that when you look at some of the cosmology models, there were early periods of the universe where there was explosive inflation, where two points would’ve went receding away from each other at very rapid speeds.”

“Nature can do it,” he said. “So the question is, can we do it?”

Einstein famously postulated that, as Dr. White put it, “thou shalt not exceed the speed of light,” essentially setting a galactic speed limit. But in 1994, a Mexican physicist, Miguel Alcubierre, theorized that faster-than-light speeds were possible in a way that did not contradict Einstein, though Dr. Alcubierre did not suggest anyone could actually construct the engine that could accomplish that.

His theory involved harnessing the expansion and contraction of space itself. Under Dr. Alcubierre’s hypothesis, a ship still couldn’t exceed light speed in a local region of space. But a theoretical propulsion system he sketched out manipulated space-time by generating a so-called “warp bubble” that would expand space on one side of a spacecraft and contract it on another.

“In this way, the spaceship will be pushed away from the Earth and pulled towards a distant star by space-time itself,” Dr. Alcubierre wrote. Dr. White has likened it to stepping onto a moving walkway at an airport.

But Dr. Alcubierre’s paper was purely theoretical, and suggested insurmountable hurdles. Among other things, it depended on large amounts of a little understood or observed type of “exotic matter” that violates typical physical laws.

Dr. White believes that advances he and others have made render warp speed less implausible. Among other things, he has redesigned the theoretical warp-traveling spacecraft — and in particular a ring around it that is key to its propulsion system — in a way that he believes will greatly reduce the energy requirements.

Read the entire article here.

Sounds of Extinction

Camera aficionados will find themselves lamenting the demise of the film advance. Now that the world has moved on from film to digital, you will no longer hear that distinctive mechanical sound as you wind on the film, hoping that the teeth on the spool engage the plastic of the film.

Hardcore computer buffs will no doubt miss the beep-beep-hiss sound of the 56K modem — that now seemingly ancient box that once connected us to… well, who knows what it actually connected us to at that speed.

Our favorite arcane sounds, soon to become relegated to the audio graveyard: the telephone handset slam, the click and carriage return of the typewriter, the whir of reel-to-reel tape, the crackle of the diamond stylus as it first hits an empty groove on a 33.

More sounds you may (or may not) miss below.

From Wired:

The forward march of technology has a drum beat. These days, it’s custom text-message alerts, or your friend saying “OK, Glass” every five minutes like a tech-drunk parrot. And meanwhile, some of the most beloved sounds are falling out of the marching band.

The boops and beeps of bygone technology can be used to chart its evolution. From the zzzzzzap of the Tesla coil to the tap-tap-tap of Morse code being sent via telegraph, what were once the most important nerd sounds in the world are now just historical signposts. But progress marches forward, and for every irritatingly smug Angry Pigs grunt we have to listen to, we move further away from the sound of the Defender ship exploding.

Let’s celebrate the dying cries of technology’s past. The following sounds are either gone forever, or definitely on their way out. Bow your heads in silence and bid them a fond farewell.

The Telephone Slam

Ending a heated telephone conversation by slamming the receiver down in anger was so incredibly satisfying. There was no better way to punctuate your frustration with the person on the other end of the line. And when that receiver hit the phone, the clack of plastic against plastic was accompanied by a slight ringing of the phone’s internal bell. That’s how you knew you were really pissed — when you slammed the phone so hard, it rang.

There are other sounds we’ll miss from the phone. The busy signal died with the rise of voicemail (although my dad refuses to get voicemail or call waiting, so he’s still OG), and the rapid click-click-click of the dial on a rotary phone is gone. But none of those compare with hanging up the phone with a forceful slam.

Tapping a touchscreen just does not cut it. So the closest thing we have now is throwing the pitifully fragile smartphone against the wall.

The CRT Television

The only TVs left that still use cathode-ray tubes are stashed in the most depressing places — the waiting rooms of hospitals, used car dealerships, and the dusty guest bedroom at your grandparents’ house. But before we all fell prey to the magical resolution of zeros and ones, boxy CRT televisions warmed (literally) the living rooms of every home in America. The sounds they made when you turned them on warmed our hearts, too — the gentle whoosh of the degaussing coil as the set was brought to life with the heavy tug of a pull-switch, or the satisfying mechanical clunk of a power button. As the tube warmed up, you’d see the visuals slowly brighten on the screen, giving you ample time to settle into the couch to enjoy the latest episode of Seinfeld.

Read the entire article here.

Image courtesy of Wired.

Technology and Kids

There is no doubting that technology’s grasp finds us at increasingly younger ages. No longer is it just our teens constantly mesmerized by status updates on their mobiles, and not just our “in-betweeners” addicted to “facetiming” with their BFFs. Now our technologies are fast becoming the tools of choice for our kindergarteners and pre-K kids. Some parents lament.

From New York Times:

A few months ago, I attended my daughter Josie’s kindergarten open house, the highlight of which was a video slide show featuring our moppets using iPads to practice their penmanship. Parental cooing ensued.

I happened to be sitting next to the teacher, and I asked her about the rumor I’d heard: that next year, every elementary-school kid in town would be provided his or her own iPad. She said this pilot program was being introduced only at the newly constructed school three blocks from our house, which Josie will attend next year. “You’re lucky,” she observed wistfully.

This seemed to be the consensus around the school-bus stop. The iPads are coming! Not only were our kids going to love learning, they were also going to do so on the cutting edge of innovation. Why, in the face of this giddy chatter, was I filled with dread?

It’s not because I’m a cranky Luddite. I swear. I recognize that iPads, if introduced with a clear plan, and properly supervised, can improve learning and allow students to work at their own pace. Those are big ifs in an era of overcrowded classrooms. But my hunch is that our school will do a fine job. We live in a town filled with talented educators and concerned parents.

Frankly, I find it more disturbing that a brand-name product is being elevated to the status of mandatory school supply. I also worry that iPads might transform the classroom from a social environment into an educational subway car, each student fixated on his or her personalized educational gadget.

But beneath this fretting is a more fundamental beef: the school system, without meaning to, is subverting my parenting, in particular my fitful efforts to regulate my children’s exposure to screens. These efforts arise directly from my own tortured history as a digital pioneer, and the war still raging within me between harnessing the dazzling gifts of technology versus fighting to preserve the slower, less convenient pleasures of the analog world.

What I’m experiencing is, in essence, a generational reckoning, that queasy moment when those of us whose impatient desires drove the tech revolution must face the inheritors of this enthusiasm: our children.

It will probably come as no surprise that I’m one of those annoying people fond of boasting that I don’t own a TV. It makes me feel noble to mention this — I am feeling noble right now! — as if I’m taking a brave stand against the vulgar superficiality of the age. What I mention less frequently is the reason I don’t own a TV: because I would watch it constantly.

My brothers and I were so devoted to television as kids that we created an entire lexicon around it. The brother who turned on the TV, and thus controlled the channel being watched, was said to “emanate.” I didn’t even know what “emanate” meant. It just sounded like the right verb.

This was back in the ’70s. We were latchkey kids living on the brink of a brave new world. In a few short years, we’d hurtled from the miraculous calculator (turn it over to spell out “boobs”!) to arcades filled with strobing amusements. I was one of those guys who spent every spare quarter mastering Asteroids and Defender, who found in video games a reliable short-term cure for the loneliness and competitive anxiety that plagued me. By the time I graduated from college, the era of personal computers had dawned. I used mine to become a closet Freecell Solitaire addict.

Midway through my 20s I underwent a reformation. I began reading, then writing, literary fiction. It quickly became apparent that the quality of my work rose in direct proportion to my ability to filter out distractions. I’ve spent the past two decades struggling to resist the endless pixelated enticements intended to capture and monetize every spare second of human attention.

Has this campaign succeeded? Not really. I’ve just been a bit slower on the uptake than my contemporaries. But even without a TV or smartphones, our household can feel dominated by computers, especially because I and my wife (also a writer) work at home. We stare into our screens for hours at a stretch, working and just as often distracting ourselves from work.

Read the entire article here.

Image courtesy of Wired.

Technology and Employment

Technology is altering the lives of us all. Often it is a positive influence, offering its users tremendous benefits from time-saving to life-extension. However, the relationship of technology to our employment is more complex and usually detrimental.

Many traditional forms of employment have already disappeared thanks to our technological tools; many others have changed beyond recognition, requiring new skills and knowledge. And this may be just the beginning.

From Technology Review:

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. ­Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

It’s a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. ­Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. “It’s the great paradox of our era,” he says. “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren’t keeping up.”

Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.

Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.

A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.

It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.

Read the entire article here.

Image: Industrial robots. Courtesy of Techjournal.

Beware! RoboBee May Be Watching You

History will probably show that humans are the cause of the mass disappearance and death of honey bees around the world.

So, while ecologists try to understand why and how to reverse bee death and colony collapse, engineers are busy building alternatives to our once nectar-loving friends. Meet RoboBee, also known as the Micro Air Vehicles Project.

From Scientific American:

We take for granted the effortless flight of insects, thinking nothing of swatting a pesky fly and crushing its wings. But this insect is a model of complexity. After 12 years of work, researchers at the Harvard School of Engineering and Applied Sciences have succeeded in creating a fly-like robot. And in early May, they announced that their tiny RoboBee (yes, it’s called a RoboBee even though it’s based on the mechanics of a fly) took flight. In the future, that could mean big things for everything from disaster relief to colony collapse disorder.

The RoboBee isn’t the only miniature flying robot in existence, but the 80-milligram, quarter-sized robot is certainly one of the smallest. “The motivations are really thinking about this as a platform to drive a host of really challenging open questions and drive new technology and engineering,” says Harvard professor Robert Wood, the engineering team lead for the project.

When Wood and his colleagues first set out to create a robotic fly, there were no off the shelf parts for them to use. “There were no motors small enough, no sensors that could fit on board. The microcontrollers, the microprocessors–everything had to be developed fresh,” says Wood. As a result, the RoboBee project has led to numerous innovations, including vision sensors for the bot, high power density piezoelectric actuators (ceramic strips that expand and contract when exposed to an electrical field), and a new kind of rapid manufacturing that involves layering laser-cut materials that fold like a pop-up book. The actuators assist with the bot’s wing-flapping, while the vision sensors monitor the world in relation to the RoboBee.

“Manufacturing took us quite a while. Then it was control, how do you design the thing so we can fly it around, and the next one is going to be power, how we develop and integrate power sources,” says Wood. In a paper recently published by Science, the researchers describe the RoboBee’s power quandary: it can fly for just 20 seconds–and that’s while it’s tethered to a power source. “Batteries don’t exist at the size that we would want,” explains Wood. The researchers explain further in the report: “If we implement on-board power with current technologies, we estimate no more than a few minutes of untethered, powered flight. Long duration power autonomy awaits advances in small, high-energy-density power sources.”

The RoboBees don’t last a particularly long time–Wood says the flight time is “on the order of tens of minutes”–but they can keep flapping their wings long enough for the Harvard researchers to learn everything they need to know from each successive generation of bots. For commercial applications, however, the RoboBees would need to be more durable.

Read the entire article here.

Image courtesy of Micro Air Vehicles Project, Harvard.

Please Press 1 to Avoid Phone Menu Hell

Good customer service once meant that a store or service employee would know you by name. This person would know your previous purchasing habits and your preferences; this person would know the names of your kids and your dog. Great customer service once meant that an employee could use this knowledge to anticipate your needs or personalize a specific deal. Well, this type of service still exists — in some places — but many businesses have outsourced it to offshore call center personnel or to machines, or both. Service may seem personal, but it’s not — service is customized to suit your profile, but it’s not personal in the same sense that once held true.

And, to rub more salt into the customer service wound, businesses now use their automated phone systems seemingly to shield themselves from you, rather than to provide you with the service you want. After all, when was the last time you managed to speak to a real customer service employee after making it through “please press 1 for English”, the poor choice of muzak or sponsored ads, and the never-ending phone menus?

Now thanks to an enterprising and extremely patient soul there is an answer to phone menu hell.

Welcome to Please Press 1. Founded by Nigel Clarke (an alumnus of the 400-year-old Dame Alice Owen’s School in London), Please Press 1 provides shortcuts through the customer service phone menus of many of the top businesses in Britain [ed: we desperately need this service in the United States].

From the MailOnline:

A frustrated IT manager who has spent seven years making 12,000 calls to automated phone centres has launched a new website listing ‘short cut’ codes which can shave up to eight minutes off calls.

Nigel Clarke, 53, has painstakingly catalogued the intricate phone menus of hundreds of leading multi-national companies – some of which have up to 80 options.

He has now formulated his results into the website pleasepress1.com, which lists which number options to press to reach the desired department.

The father-of-three, from Fawkham, Kent, reckons the free service can save consumers more than eight minutes by cutting out up to seven menu options.

For example, a Lloyds TSB home insurance customer who wishes to report a water leak would normally have to wade through 78 menu options over seven levels to get through to the correct department.

But the new service informs callers that the combination 1-3-2-1-1-5-4 will get them straight through – saving over four minutes of waiting.
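
In effect, a service like pleasepress1.com is a lookup table that maps a company and task to the keypad sequence to press once the call connects. A minimal sketch of that idea, using the Lloyds TSB example above (the second table entry is a made-up placeholder, not a real menu code):

```python
# Hypothetical shortcut table: (company, task) -> keys to press after the call connects.
SHORTCUTS = {
    ("Lloyds TSB", "home insurance - report water leak"): "1-3-2-1-1-5-4",  # from the article
    ("ExampleCo", "billing"): "2-1-3",                                      # made-up placeholder
}

def shortcut_for(company: str, task: str):
    """Return the menu key sequence for a task, or None if no shortcut is known."""
    return SHORTCUTS.get((company, task))

keys = shortcut_for("Lloyds TSB", "home insurance - report water leak")
if keys:
    print("After the call connects, press: " + keys.replace("-", ", "))
else:
    print("No shortcut known; you will have to listen to the menus.")
```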

Mr Clarke reckons the service could save consumers up to one billion minutes a year.

He said: ‘Everyone knows that calling your insurance or gas company is a pain but for most, it’s not an everyday problem.

‘However, the cumulative effect of these calls is really quite devastating when you’re moving house or having an issue.

‘I’ve been working in IT for over 30 years and nothing gets me riled up like having my time wasted through inefficient design.

‘This is why I’ve devoted the best part of seven years to solving this issue.’

Mr Clarke describes call centre menu options as the ‘modern equivalent of Dante’s circles of hell’.

He cites the HMRC as one of the worst offenders, where callers can take up to six minutes to reach the correct department.

As one of the UK’s busiest call centres, the Revenue receives 79 million calls per year, or a potential 4.3 million working hours just navigating menus.

Mr Clarke believes that with better menu design, at least three million caller hours could be saved here alone.

He began his quest seven years ago as a self-confessed ‘call centre menu enthusiast’.

‘The idea began with the frustration of being met with a seemingly endless list of menu options,’ he said.

‘Whether calling my phone, insurance or energy company, they each had a different and often worse way of trying to “help” me.

‘I could sit there for minutes that seemed like hours, trying to get through their phone menus only to end up at the wrong place and having to redial and start again.’

He began noting down the menu options and soon realised he could shave several minutes off the waiting time.

Mr Clarke said: ‘When I called numbers regularly, I started keeping notes of the options to press. The numbers didn’t change very often and then it hit me.

Read the entire article here and visit Please Press 1, here.

Images courtesy of Time and Please Press 1.

Tracking and Monetizing Your Every Move

Your movements are valuable — but not in the way you may think. Mobile technology companies are moving rapidly to exploit the vast amount of data collected from the billions of mobile devices. This data is extremely valuable to an array of organizations, including urban planners, retailers, and travel and transportation marketers. And, of course, this raises significant privacy concerns. Many believe that when the data is used collectively it preserves user anonymity. However, if correlated with other data sources it could be used to discover a range of unintended and previously private information, relating both to individuals and to groups.

From MIT Technology Review:

Wireless operators have access to an unprecedented volume of information about users’ real-world activities, but for years these massive data troves were put to little use other than for internal planning and marketing.

This data is under lock and key no more. Under pressure to seek new revenue streams (see “AT&T Looks to Outside Developers for Innovation”), a growing number of mobile carriers are now carefully mining, packaging, and repurposing their subscriber data to create powerful statistics about how people are moving about in the real world.

More comprehensive than the data collected by any app, this is the kind of information that, experts believe, could help cities plan smarter road networks, businesses reach more potential customers, and health officials track diseases. But even if shared with the utmost of care to protect anonymity, it could also present new privacy risks for customers.

Verizon Wireless, the largest U.S. carrier with more than 98 million retail customers, shows how such a program could come together. In late 2011, the company changed its privacy policy so that it could share anonymous and aggregated subscriber data with outside parties. That made possible the launch of its Precision Market Insights division last October.

The program, still in its early days, is creating a natural extension of what already happens online, with websites tracking clicks and getting a detailed breakdown of where visitors come from and what they are interested in.

Similarly, Verizon is working to sell demographics about the people who, for example, attend an event, how they got there or the kinds of apps they use once they arrive. In a recent case study, says program spokeswoman Debra Lewis, Verizon showed that fans from Baltimore outnumbered fans from San Francisco by three to one inside the Super Bowl stadium. That information might have been expensive or difficult to obtain in other ways, such as through surveys, because not all the people in the stadium purchased their own tickets and had credit card information on file, nor had they all downloaded the Super Bowl’s app.

Other telecommunications companies are exploring similar ideas. In Europe, for example, Telefonica launched a similar program last October, and the head of this new business unit gave the keynote address at a new industry conference on “big data monetization in telecoms” in January.

“It doesn’t look to me like it’s a big part of their [telcos’] business yet, though at the same time it could be,” says Vincent Blondel, an applied mathematician who is now working on a research challenge from the operator Orange to analyze two billion anonymous records of communications between five million customers in Africa.

The concerns about making such data available, Blondel says, are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups (see “How Access to Location Data Could Trample Your Privacy”).

Already, some startups are building businesses by aggregating this kind of data in useful ways, beyond what individual companies may offer. For example, AirSage, an Atlanta, Georgia, company founded in 2000, has spent much of the last decade negotiating what it says are exclusive rights to put its hardware inside the firewalls of two of the top three U.S. wireless carriers and collect, anonymize, encrypt, and analyze cellular tower signaling data in real time. Since AirSage solidified the second of these major partnerships about a year ago (it won’t specify which specific carriers it works with), it has been processing 15 billion locations a day and can account for movement of about a third of the U.S. population in some places to within less than 100 meters, says marketing vice president Andrea Moe.

As users’ mobile devices ping cellular towers in different locations, AirSage’s algorithms look for patterns in that location data—mostly to help transportation planners and traffic reports, so far. For example, the software might infer that the owners of devices that spend time in a business park from nine to five are likely at work, so a highway engineer might be able to estimate how much traffic on the local freeway exit is due to commuters.
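
That kind of inference needs surprisingly little machinery: bucket a device’s tower pings by weekday working hours and take the most common location as its probable workplace. The sketch below is a simplified illustration under an assumed ping format, not AirSage’s actual algorithm.

```python
from collections import Counter
from datetime import datetime

# Assumed ping format: (device_id, timestamp, tower_or_grid_id)
pings = [
    ("dev-1", datetime(2013, 9, 3, 10, 15), "business-park-7"),
    ("dev-1", datetime(2013, 9, 3, 14, 40), "business-park-7"),
    ("dev-1", datetime(2013, 9, 3, 20, 5),  "suburb-12"),
    ("dev-1", datetime(2013, 9, 4, 11, 30), "business-park-7"),
]

def probable_work_location(device_id, pings):
    """Most frequent location pinged on weekdays between 09:00 and 17:00."""
    daytime = Counter(
        loc for dev, ts, loc in pings
        if dev == device_id and ts.weekday() < 5 and 9 <= ts.hour < 17
    )
    return daytime.most_common(1)[0][0] if daytime else None

print(probable_work_location("dev-1", pings))  # business-park-7
```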

Other companies are starting to add additional layers of information beyond cellular network data. One customer of AirSage is a relatively small San Francisco startup, Streetlight Data, which recently raised $3 million in financing backed partly by the venture capital arm of Deutsche Telekom.

Streetlight buys both cellular network and GPS navigation data that can be mined for useful market research. (The cellular data covers a larger number of people, but the GPS data, collected by mapping software providers, can improve accuracy.) Today, many companies already build massive demographic and behavioral databases on top of U.S. Census information about households to help retailers choose where to build new stores and plan marketing budgets. But Streetlight’s software, with interactive, color-coded maps of neighborhoods and roads, offers more practical information. It can be tied to the demographics of people who work nearby, commute through on a particular highway, or are just there for a visit, rather than just supplying information about who lives in the area.

Read the entire article following the jump.

Image: mobile devices. Courtesy of W3.org