Category Archives: Technica

Big Data Knows What You Do and When

Data scientists are getting to know more about you and your fellow urban dwellers as you move around your neighborhood and your city. As smartphones and cell towers become more ubiquitous and data collection and analysis gather pace, researchers (and advertisers) will come to know your daily habits and schedule rather intimately. So, questions from a significant other along the lines of, “and, where were you at 11:15 last night?” may soon be consigned to history.

From Technology Review:

Mobile phones have generated enormous insight into the human condition thanks largely to the study of the data they produce. Mobile phone companies record the time of each call, the caller and receiver IDs, as well as the locations of the cell towers involved, among other things.

The combined data from millions of people produces some fascinating new insights into the nature of our society.

Anthropologists have crunched it to reveal human reproductive strategies, a universal law of commuting and even the distribution of wealth in Africa.

Today, computer scientists have gone one step further by using mobile phone data to map the structure of cities and how people use them throughout the day. “These results point towards the possibility of a new, quantitative classification of cities using high resolution spatio-temporal data,” say Thomas Louail at the Institut de Physique Théorique in Paris and a few pals.

They say their work is part of a new science of cities that aims to objectively measure and understand the nature of large population centers.

These guys begin with a database of mobile phone calls made by people in the 31 Spanish cities that have populations larger than 200,000. The data consists of the number of unique individuals using a given cell tower (whether making a call or not) for each hour of the day over almost two months.

Given the area that each tower covers, Louail and co work out the density of individuals in each location and how it varies throughout the day. And using this pattern, they search for “hotspots” in the cities where the density of individuals passes some specially chosen threshold at certain times of the day.
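A minimal sketch of that hotspot computation, assuming hourly unique-user counts and per-tower coverage areas are available. The array shapes, the quantile cutoff and all numbers below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def find_hotspots(user_counts, tower_areas_km2, quantile=0.95):
    """Flag towers whose user density crosses a chosen threshold, hour by hour.

    user_counts:     (n_towers, 24) unique users seen at each tower per hour
    tower_areas_km2: (n_towers,)    coverage area of each tower
    Returns a boolean (n_towers, 24) array marking hotspot hours.
    """
    density = user_counts / tower_areas_km2[:, None]   # users per km^2
    threshold = np.quantile(density, quantile)          # illustrative global cutoff
    return density > threshold

# Toy data: 3 towers observed over one day
counts = np.random.poisson(lam=200, size=(3, 24))
areas = np.array([1.5, 0.8, 2.0])
print(find_hotspots(counts, areas).sum(axis=1))         # hotspot hours per tower
```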

The results reveal some fascinating patterns in city structure. For a start, every city undergoes a kind of respiration in which people converge into the center and then withdraw on a daily basis, almost like breathing. And this happens in all cities. This “suggests the existence of a single ‘urban rhythm’ common to all cities,” say Louail and co.

During the week, the number of phone users peaks at about midday and then again at about 6 p.m. During the weekend the numbers peak a little later: at 1 p.m. and 8 p.m. Interestingly, the second peak starts about an hour later in western cities, such as Sevilla and Cordoba.

The data also reveals that small cities tend to have a single center that becomes busy during the day, such as the cities of Salamanca and Vitoria.

But it also shows that the number of hotspots increases with city size; so-called polycentric cities include Spain’s largest, such as Madrid, Barcelona, and Bilbao.

That could turn out to be useful for automatically classifying cities.

Read the entire article here.

Mining Minecraft

minecraft-example

If you have a child under the age of 13, it’s likely that you’ve heard of, seen or even used Minecraft. More than just a typical online game, Minecraft is a playground for aspiring architects — despite the Creepers. Minecraft began in 2011 with a simple premise — place and remove blocks to fend off unwanted marauders. Now it has become a blank canvas for young minds to design and collaborate on building fantastical structures. My own twin 11-year-olds have designed their dream homes, complete with basement stables, glass stairways and a roof-top pool.

From the Guardian:

I couldn’t pinpoint exactly when I became aware of my eight-year-old son’s fixation with Minecraft. I only know that the odd reference to zombies and pickaxes burgeoned until it was an omnipresent force in our household, the dominant topic of conversation and, most bafflingly, a game he found so gripping that he didn’t just want to play it, he wanted to watch YouTube videos of others playing it too.

This was clearly more than any old computer game – for Otis and, judging by discussion at the school gates, his friends too. I felt as if he’d joined a cult, albeit a reasonably benign one, though as someone who last played a computer game when Jet Set Willy was the height of technological wizardry, I hardly felt in a position to judge.

Minecraft, I realised, was something I knew nothing about. It was time to become acquainted. I announced my intention to give myself a crash course in the game to Otis one evening, interrupting his search for Obsidian to build a portal to the Nether dimension. As you do. “Why would you want to play Minecraft?” he asked, as if I’d confided that I was taking up a career in trapeze-artistry.

For anyone as mystified about it as I was, Minecraft is now one of the world’s biggest computer games, a global phenomenon that’s totted up 14,403,011 purchases as I write; 19,270 in the past 24 hours – live statistics they update on their website, as if it were Children in Need night.

Trying to define the objective of the game isn’t easy. When I ask Otis, he shrugs. “I’m not sure there is one. But that’s what’s brilliant. You can do anything you like.”

This doesn’t seem like much of an insight, though to be fair, the developers themselves, Mojang, define it succinctly as, “a game about breaking and placing blocks”. This sounds delightfully simple, an impression echoed by its graphics. In sharp contrast to the rich, more cinematic style of other games, this is unapologetically old school, the sort of computer game of the future that Marty McFly would have played.

In this case, looks are deceptive. “The pixelated style might appear simple but it masks a huge amount of depth and complexity,” explains Alex Wiltshire, former editor of Edge magazine and author of forthcoming Minecraft guide, Block-o-pedia. “Its complex nature doesn’t lie in detailed art assets, but in how each element of the game interrelates.”

It’s this that gives players the potential to produce elaborate constructions on a lavish scale; fans have made everything from 1:1 scale re-creations of the Lord of the Rings’ Mines of Moria, to models of entire cities.

I’m a long way from that. “Don’t worry, Mum – when I first went on it when I was six, I had no idea what I was doing,” Otis reassures, shaking his head at the thought of those naive days, way back when.

Otis’s device of choice is his iPod, ideal for on-the-move sessions, though this once caused him serious grief after being caught on it under his duvet after lights out. I take one look at the lightning speed with which his fingers move and decide to download it on to my MacBook instead. The introduction of an additional version of the game into our household is greeted very much like Walter Raleigh’s return from the New World.

We open up the game and he tells me that I am “Steve”, the default player, and that we get a choice of modes in which to play: creative or survival. He suggests I start with the former on the basis that this is the best place for those who aren’t very good at it.

In creative mode, you are dropped into a newly generated world (an island in our case) and gifted a raft of resources – everything from coal and lapis lazuli to cake and beds.

At the risk of sounding like a dunce, it isn’t at all obvious what I’m supposed to do. So instead of springing into action, I’m left standing, looking around lamely as if I’m on the edge of a dance floor waiting for someone to come and put me out of my misery. Despite knowing that the major skill required in this game is building, before Otis intervenes, the most I can accomplish is to dig a few holes.

“When it first came out everyone was confused as the developer gave little or no guidance,” says Wiltshire. “It didn’t specifically say you had to cut down a tree to get some wood, whereas games that are produced by big companies give instructions – the last thing they want is for people not to understand how to play. With Minecraft, which had an indie developer, the player had to work things out for themselves. It was quite a tonic.”

He believes that this is why a game not specifically designed for children has become so popular with them. “Because you learn so much when you’re young, kids are used to the idea of a world they don’t fully understand, so they’re comfortable with having to find things out for themselves.”

For the moment, I’m happy to take instruction from my son, who begins his demonstration by creating a rollercoaster – an obvious priority when you’ve just landed on a desert island. He quickly installs its tracks, weaving them through trees and into the sea, before sending Steve for a ride. He asks me if I feel ready to have a go. I feel as if I’m on a nursing home word processing course.

Familiarising yourself takes a little time but once you get going – and have worked out the controls – being able to run, fly, swim and build is undeniably absorbing. I also finally manage to construct something, a slightly disappointing shipping container-type affair that explodes Wiltshire’s assertion that it’s “virtually impossible to build something that looks terrible in Minecraft”. Still, I’m enjoying it, I can’t deny it. Aged eight, I’d have loved it every bit as much as my son does.

The more I play it, the more I also start to understand why this game is being championed for its educational possibilities, with some schools in the US using it as a tool to teach maths and science.

Dr Helen O’Connor, who runs UK-based Childchology – which provides children and their families with support for common psychological problems via the internet – said: “Minecraft offers some strong positives for children. It works on a cognitive level in that it involves problem solving, imagination, memory, creativity and logical sequencing. There is a good educational element to the game, and it also requires some number crunching.

“Unlike lots of other games, there is little violence, with the exception of fighting off a few zombies and creepers. This is perhaps one of the reasons why it is fairly gender neutral and girls enjoy playing it as well as boys.”

The next part of Otis’s demonstration involves switching to survival mode. He explains: “You’ve got to find the resources yourself here. You’re not just given them. Oh and there are villains too. Zombie pigmen and that kind of thing.”

It’s clear that life in survival mode is a significantly hairier prospect than in creative, particularly when Otis changes the difficulty setting to its highest notch. He says he doesn’t do this often because, after spending three weeks creating a house from wood and cobblestones, zombies nearly trashed the place. I make a mental note to remind him of this conversation next time he has a sleepover.

One of the things that’s so appealing about Minecraft is that there is no obvious start and end; it’s a game of infinite possibilities, which is presumably why it’s often compared to Lego. Yet, the addictive nature of the game is clearly vexing many parents: internet talkboards are awash with people seeking advice on how to prize their children away from it.

Read the entire story here.

Image courtesy of Minecraft.

The Magnificent Seven

Magnificent-seven

Actually, these seven will not save your village from bandits. Nor will they ride triumphant into the sunset on horseback. These seven are more mundane, but they are nonetheless shrouded in a degree of mystery, albeit rather technical. These are the seven holders of the seven keys that control the Internet’s core directory — the Domain Name System. Without it the Internet’s billions of users would not be able to browse or search or shop or email or text.

From the Guardian:

In a nondescript industrial estate in El Segundo, a boxy suburb in south-west Los Angeles just a mile or two from LAX international airport, 20 people wait in a windowless canteen for a ceremony to begin. Outside, the sun is shining on an unseasonably warm February day; inside, the only light comes from the glare of halogen bulbs.

There is a strange mix of accents – predominantly American, but smatterings of Swedish, Russian, Spanish and Portuguese can be heard around the room, as men and women (but mostly men) chat over pepperoni pizza and 75-cent vending machine soda. In the corner, an Asteroids arcade machine blares out tinny music and flashing lights.

It might be a fairly typical office scene, were it not for the extraordinary security procedures that everyone in this room has had to complete just to get here, the sort of measures normally reserved for nuclear launch codes or presidential visits. The reason we are all here sounds like the stuff of science fiction, or the plot of a new Tom Cruise franchise: the ceremony we are about to witness sees the coming together of a group of people, from all over the world, who each hold a key to the internet. Together, their keys create a master key, which in turn controls one of the central security measures at the core of the web. Rumours about the power of these keyholders abound: could their key switch off the internet? Or, if someone somehow managed to bring the whole system down, could they turn it on again?

The keyholders have been meeting four times a year, twice on the east coast of the US and twice here on the west, since 2010. Gaining access to their inner sanctum isn’t easy, but last month I was invited along to watch the ceremony and meet some of the keyholders – a select group of security experts from around the world. All have long backgrounds in internet security and work for various international institutions. They were chosen for their geographical spread as well as their experience – no one country is allowed to have too many keyholders. They travel to the ceremony at their own, or their employer’s, expense.

What these men and women control is the system at the heart of the web: the domain name system, or DNS. This is the internet’s version of a telephone directory – a series of registers linking web addresses to a series of numbers, called IP addresses. Without these addresses, you would need to know a long sequence of numbers for every site you wanted to visit. To get to the Guardian, for instance, you’d have to enter “77.91.251.10” instead of theguardian.com.
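The lookup the article describes can be reproduced in a couple of lines of Python; note that the address returned for theguardian.com today will almost certainly differ from the figure quoted above:

```python
import socket

# Forward lookup: domain name -> IPv4 address, via the system's DNS resolver
print(socket.gethostbyname("theguardian.com"))

# Reverse lookup works too, where a PTR record exists
print(socket.gethostbyaddr("8.8.8.8")[0])
```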

The master key is part of a new global effort to make the whole domain name system secure and the internet safer: every time the keyholders meet, they are verifying that each entry in these online “phone books” is authentic. This prevents a proliferation of fake web addresses which could lead people to malicious sites, used to hack computers or steal credit card details.

The east and west coast ceremonies each have seven keyholders, with a further seven people around the world who could access a last-resort measure to reconstruct the system if something calamitous were to happen. Each of the 14 primary keyholders owns a traditional metal key to a safety deposit box, which in turn contains a smartcard, which in turn activates a machine that creates a new master key. The backup keyholders have something a bit different: smartcards that contain a fragment of code needed to build a replacement key-generating machine. Once a year, these shadow holders send the organisation that runs the system – the Internet Corporation for Assigned Names and Numbers (Icann) – a photograph of themselves with that day’s newspaper and their key, to verify that all is well.
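The backup arrangement is, loosely, a secret split into pieces that are useless on their own. ICANN’s actual ceremony relies on hardware security modules and smartcards rather than anything this simple, but a toy XOR-based split conveys the idea that no single fragment reveals the key:

```python
import secrets

def split_secret(secret: bytes, n: int):
    """Split a secret into n fragments; all n are needed to rebuild it.
    (A toy scheme for illustration only -- not ICANN's mechanism.)"""
    fragments = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for frag in fragments:
        last = bytes(a ^ b for a, b in zip(last, frag))
    return fragments + [last]

def combine(fragments):
    out = bytes(len(fragments[0]))                      # all-zero bytes
    for frag in fragments:
        out = bytes(a ^ b for a, b in zip(out, frag))
    return out

key = secrets.token_bytes(32)
pieces = split_secret(key, 7)
assert combine(pieces) == key        # all seven fragments recover the key
assert combine(pieces[:6]) != key    # six fragments tell you nothing useful
```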

The fact that the US-based, not-for-profit organisation Icann – rather than a government or an international body – has one of the biggest jobs in maintaining global internet security has inevitably come in for criticism. Today’s occasionally over-the-top ceremony (streamed live on Icann’s website) is intended to prove how seriously they are taking this responsibility. It’s one part The Matrix (the tech and security stuff) to two parts The Office (pretty much everything else).

For starters: to get to the canteen, you have to walk through a door that requires a pin code, a smartcard and a biometric hand scan. This takes you into a “mantrap”, a small room in which only one door at a time can ever be open. Another sequence of smartcards, handprints and codes opens the exit. Now you’re in the break room.

Already, not everything has gone entirely to plan. Leaning next to the Atari arcade machine, ex-state department official Rick Lamb, smartly suited and wearing black-rimmed glasses (he admits he’s dressed up for the occasion), is telling someone that one of the on-site guards had asked him out loud, “And your security pin is 9925, yes?” “Well, it was…” he says, with an eye-roll. Looking in our direction, he says it’s already been changed.

Lamb is now a senior programme manager for Icann, helping to roll out the new, secure system for verifying the web. This is happening fast, but it is not yet fully in play. If the master key were lost or stolen today, the consequences might not be calamitous: some users would receive security warnings, some networks would have problems, but not much more. But once everyone has moved to the new, more secure system (this is expected in the next three to five years), the effects of losing or damaging the key would be far graver. While every server would still be there, nothing would connect: it would all register as untrustworthy. The whole system, the backbone of the internet, would need to be rebuilt over weeks or months. What would happen if an intelligence agency or hacker – the NSA or Syrian Electronic Army, say – got hold of a copy of the master key? It’s possible they could redirect specific targets to fake websites designed to exploit their computers – although Icann and the keyholders say this is unlikely.

Standing in the break room next to Lamb is Dmitry Burkov, one of the keyholders, a brusque and heavy-set Russian security expert on the boards of several internet NGOs, who has flown in from Moscow for the ceremony. “The key issue with internet governance is always trust,” he says. “No matter what the forum, it always comes down to trust.” Given the tensions between Russia and the US, and Russia’s calls for new organisations to be put in charge of the internet, does he have faith in this current system? He gestures to the room at large: “They’re the best part of Icann.” I take it he means he likes these people, and not the wider organisation, but he won’t be drawn further.

It’s time to move to the ceremony room itself, which has been cleared for the most sensitive classified information. No electrical signals can come in or out. Building security guards are barred, as are cleaners. To make sure the room looks decent for visitors, an east coast keyholder, Anne-Marie Eklund Löwinder of Sweden, has been in the day before to vacuum with a $20 dustbuster.

We’re about to begin a detailed, tightly scripted series of more than 100 actions, all recorded to the minute using the GMT time zone for consistency. These steps are a strange mix of high-security measures lifted straight from a thriller (keycards, safe combinations, secure cages), coupled with more mundane technical details – a bit of trouble setting up a printer – and occasional bouts of farce. In short, much like the internet itself.

Read the entire article here.

Image: The Magnificent Seven, movie poster. Courtesy of Wikia.

The Joy of New Technology

prosthetic-hand

We are makers. We humans love to create and invent. Some of our inventions are hideous, laughable or just plain evil — Twinkies, collateralized debt obligations and subprime mortgages, Agent Orange, hair extensions, spray-on tans, cluster bombs, diet water.

However, for every misguided invention comes something truly great. This time, a prosthetic hand that provides a sense of real feeling, courtesy of the makers at the Veterans Affairs Medical Center in Cleveland, Ohio.

From Technology Review:

Igor Spetic’s hand was in a fist when it was severed by a forging hammer three years ago as he made an aluminum jet part at his job. For months afterward, he felt a phantom limb still clenched and throbbing with pain. “Some days it felt just like it did when it got injured,” he recalls.

He soon got a prosthesis. But for amputees like Spetic, these are more tools than limbs. Because the prosthetics can’t convey sensations, people wearing them can’t feel when they have dropped or crushed something.

Now Spetic, 48, is getting some of his sensation back through electrodes that have been wired to residual nerves in his arm. Spetic is one of two people in an early trial that takes him from his home in Madison, Ohio, to the Cleveland Veterans Affairs Medical Center. In a basement lab, his prosthetic hand is rigged with force sensors that are plugged into 20 wires protruding from his upper right arm. These lead to three surgically implanted interfaces, seven millimeters long, with as many as eight electrodes apiece encased in a polymer, that surround three major nerves in Spetic’s forearm.

On a table, a nondescript white box of custom electronics does a crucial job: translating information from the sensors on Spetic’s prosthesis into a series of electrical pulses that the interfaces can translate into sensations. This technology “is 20 years in the making,” says the trial’s leader, Dustin Tyler, a professor of biomedical engineering at Case Western Reserve University and an expert in neural interfaces.

As of February, the implants had been in place and performing well in tests for more than a year and a half. Tyler’s group, drawing on years of neuroscience research on the signaling mechanisms that underlie sensation, has developed a library of patterns of electrical pulses to send to the arm nerves, varied in strength and timing. Spetic says that these different stimulus patterns produce distinct and realistic feelings in 20 spots on his prosthetic hand and fingers. The sensations include pressing on a ball bearing, pressing on the tip of a pen, brushing against a cotton ball, and touching sandpaper, he says. A surprising side effect: on the first day of tests, Spetic says, his phantom fist felt open, and after several months the phantom pain was “95 percent gone.”
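To make the idea of a stimulus pattern “varied in strength and timing” concrete, here is a purely illustrative mapping from a fingertip force reading to pulse-train parameters. The numbers and parameter names are placeholders of my own; the actual calibration used in the Case Western trial is not given in the article:

```python
def force_to_pulse_train(force_newtons, max_force=10.0):
    """Map a normalized force reading onto stimulation strength and timing.
    Illustrative only -- not the trial's real calibration."""
    level = max(0.0, min(force_newtons / max_force, 1.0))
    return {
        "amplitude_mA": 0.2 + 1.8 * level,     # firmer touch -> stronger pulses
        "frequency_Hz": 10 + int(90 * level),  # ... delivered more rapidly
        "pulse_width_us": 100,                  # held fixed in this sketch
    }

for f in (0.5, 3.0, 9.0):
    print(f, force_to_pulse_train(f))
```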

On this day, Spetic faces a simple challenge: seeing whether he can feel a foam block. He dons a blindfold and noise-canceling headphones (to make sure he’s relying only on his sense of touch), and then a postdoc holds the block inside his wide-open prosthetic hand and taps him on the shoulder. Spetic closes his prosthesis—a task made possible by existing commercial interfaces to residual arm muscles—and reports the moment he touches the block: success.

Read the entire article here.

Image: Prosthetic hand. Courtesy of MIT Technology Review / Veterans Affairs Medical Center.

A Quest For Skeuomorphic Noise

Toyota_Prius_III

Your Toyota Prius, or other electric vehicle, is a good environmental citizen. It helps reduce pollution and carbon emissions and does so rather efficiently. You and other eco-conscious owners should be proud.

But wait, not so fast. Your electric car may have a low carbon footprint, but it is a silent killer in waiting. It may be efficient, but it is far too quiet, and is thus something of a hazard for pedestrians, cyclists and other motorists — they don’t hear it approaching.

Cars like the Prius are quiet — in fact, too quiet for our own safety. So, enterprising engineers are working to add artificial noise to the next generations of almost silent cars. The irony is not lost: after years of trying to make cars quieter, engineers are now looking to make them noisier.

Perhaps the added noise could be offered as a configurable option for customers — a base model would sound like a Citroën 2CV, while a high-end model could sound like, well, a Ferrari or a classic Bugatti. Much better.

From Technology Review:

It was a pleasant June day in Munich, Germany. I was picked up at my hotel and driven to the country, farmland on either side of the narrow, two-lane road. Occasional walkers strode by, and every so often a bicyclist passed. We parked the car on the shoulder and joined a group of people looking up and down the road. “Okay, get ready,” I was told. “Close your eyes and listen.” I did so and about a minute later I heard a high-pitched whine, accompanied by a low humming sound: an automobile was approaching. As it came closer, I could hear tire noise. After the car had passed, I was asked my judgment of the sound. We repeated the exercise numerous times, and each time the sound was different. What was going on? We were evaluating sound designs for BMW’s new electric vehicles.

Electric cars are extremely quiet. The only sounds they make come from the tires, the air, and occasionally from the high-pitched whine of the electronics. Car lovers really like the silence. Pedestrians have mixed feelings, but blind people are greatly concerned. After all, they cross streets in traffic by relying upon the sounds of vehicles. That’s how they know when it is safe to cross. And what is true for the blind might also be true for anyone stepping onto the street while distracted. If the vehicles don’t make any sounds, they can kill. The United States National Highway Traffic Safety Administration determined that pedestrians are considerably more likely to be hit by hybrid or electric vehicles than by those with an internal-combustion engine. The greatest danger is when the hybrid or electric vehicles are moving slowly: they are almost completely silent.

Adding sound to a vehicle to warn pedestrians is not a new idea. For many years, commercial trucks and construction equipment have had to make beeping sounds when backing up. Horns are required by law, presumably so that drivers can use them to alert pedestrians and other drivers when the need arises, although they are often used as a way of venting anger and rage instead. But adding a continuous sound to a normal vehicle because it would otherwise be too quiet is a challenge.

What sound would you want? One group of blind people suggested putting some rocks into the hubcaps. I thought this was brilliant. The rocks would provide a natural set of cues, rich in meaning and easy to interpret. The car would be quiet until the wheels started to turn. Then the rocks would make natural, continuous scraping sounds at low speeds, change to the pitter-patter of falling stones at higher speeds. The frequency of the drops would increase with the speed of the car until the rocks ended up frozen against the circumference of the rim, silent. Which is fine: the sounds are not needed for fast-moving vehicles, because then the tire noise is audible. The lack of sound when the vehicle is not moving would be a problem, however.

The marketing divisions of automobile manufacturers thought the addition of artificial sounds would be a wonderful branding opportunity, so each car brand or model should have its own unique sound that captured just the car personality the brand wished to convey. Porsche added loudspeakers to its electric car prototype to give it the same throaty growl as its gasoline-powered cars. Nissan wondered whether a hybrid automobile should sound like tweeting birds. Some manufacturers thought all cars should sound the same, with standardized noises and sound levels, making it easier for everyone to learn how to interpret them. Some blind people thought they should sound like cars—you know, gasoline engines.

Skeuomorphic is the technical term for incorporating old, familiar ideas into new technologies, even though they no longer play a functional role. Skeuomorphic designs are often comfortable for traditionalists, and indeed the history of technology shows that new technologies and materials often slavishly imitate the old for no apparent reason except that it’s what people know how to do. Early automobiles looked like horse-driven carriages without the horses (which is also why they were called horseless carriages); early plastics were designed to look like wood; folders in computer file systems often look like paper folders, complete with tabs. One way of overcoming the fear of the new is to make it look like the old. This practice is decried by design purists, but in fact, it has its benefits in easing the transition from the old to the new. It gives comfort and makes learning easier. Existing conceptual models need only be modified rather than replaced. Eventually, new forms emerge that have no relationship to the old, but the skeuomorphic designs probably helped the transition.

When it came to deciding what sounds the new silent automobiles should generate, those who wanted differentiation ruled the day, yet everyone also agreed that there had to be some standards. It should be possible to determine that the sound is coming from an automobile, to identify its location, direction, and speed. No sound would be necessary once the car was going fast enough, in part because tire noise would be sufficient. Some standardization would be required, although with a lot of leeway. International standards committees started their procedures. Various countries, unhappy with the normally glacial speed of standards agreements and under pressure from their communities, started drafting legislation. Companies scurried to develop appropriate sounds, hiring psychologists, Hollywood sound designers, and experts in psychoacoustics.

The United States National Highway Traffic Safety Administration issued a set of principles along with a detailed list of requirements, including sound levels, spectra, and other criteria. The full document is 248 pages. The document states:

This standard will ensure that blind, visually-impaired, and other pedestrians are able to detect and recognize nearby hybrid and electric vehicles by requiring that hybrid and electric vehicles emit sound that pedestrians will be able to hear in a range of ambient environments and contain acoustic signal content that pedestrians will recognize as being emitted from a vehicle. The proposed standard establishes minimum sound requirements for hybrid and electric vehicles when operating under 30 kilometers per hour (km/h) (18 mph), when the vehicle’s starting system is activated but the vehicle is stationary, and when the vehicle is operating in reverse. The agency chose a crossover speed of 30 km/h because this was the speed at which the sound levels of the hybrid and electric vehicles measured by the agency approximated the sound levels produced by similar internal combustion engine vehicles. (Department of Transportation, 2013.)
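Reduced to logic, the proposed rule quoted above amounts to a simple predicate on vehicle state. A sketch follows; the function and argument names are mine, and this obviously omits the 248 pages of sound-level and spectral requirements:

```python
def avas_sound_required(speed_kmh: float, in_reverse: bool, system_on: bool) -> bool:
    """Emit warning sound when the vehicle is on and stationary, reversing,
    or moving below the 30 km/h (18 mph) crossover speed."""
    if not system_on:
        return False
    return in_reverse or speed_kmh < 30.0

assert avas_sound_required(0, False, True)        # stationary, system activated
assert avas_sound_required(18, False, True)       # below the crossover speed
assert not avas_sound_required(50, False, True)   # tire noise suffices at speed
```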

As I write this, sound designers are still experimenting. The automobile companies, lawmakers, and standards committees are still at work. Standards are not expected until 2014 or later, and then it will take considerable time for the millions of vehicles across the world to meet them. What principles should be used for the sounds of electric vehicles (including hybrids)? The sounds have to meet several criteria:

Alerting. The sound will indicate the presence of an electric vehicle.

Orientation. The sound will make it possible to determine where the vehicle is located, roughly how fast it is going, and whether it is moving toward or away from the listener.

Lack of annoyance. Because these sounds will be heard frequently even in light traffic and continually in heavy traffic, they must not be annoying. Note the contrast with sirens, horns, and backup signals, all of which are intended to be aggressive warnings. Such sounds are deliberately unpleasant, but because they are infrequent and relatively short in duration, they are acceptable. The challenge for electric vehicles is to make sounds that alert and orient, not annoy.

Standardization versus individualization. Standardization is necessary to ensure that all electric-vehicle sounds can readily be interpreted. If they vary too much, novel sounds might confuse the listener. Individualization has two functions: safety and marketing. From a safety point of view, if there were many vehicles on the street, individualization would allow them to be tracked. This is especially important at crowded intersections. From a marketing point of view, individualization can ensure that each brand of electric vehicle has its own unique characteristic, perhaps matching the quality of the sound to the brand image.

Read the entire article here.

Image: Toyota Prius III. Courtesy of Toyota / Wikipedia.

Business Decision-Making Welcomes Science

data-visualization-ayasdi

It is likely that business will never eliminate gut instinct from the decision-making process. However, as data, now big data, increasingly pervades every crevice of every organization, the use of data-driven decisions will become the norm. As this happens, more and more businesses find themselves employing data scientists to help filter, categorize, mine and analyze these mountains of data in meaningful ways.

The caveat, of course, is that data, big data and an even bigger reliance on that data requires subject matter expertise and analysts with critical thinking skills and sound judgement — data cannot be used blindly.

From Technology Review:

Throughout history, innovations in instrumentation—the microscope, the telescope, and the cyclotron—have repeatedly revolutionized science by improving scientists’ ability to measure the natural world. Now, with human behavior increasingly reliant on digital platforms like the Web and mobile apps, technology is effectively “instrumenting” the social world as well. The resulting deluge of data has revolutionary implications not only for social science but also for business decision making.

As enthusiasm for “big data” grows, skeptics warn that overreliance on data has pitfalls. Data may be biased and is almost always incomplete. It can lead decision makers to ignore information that is harder to obtain, or make them feel more certain than they should. The risk is that in managing what we have measured, we miss what really matters—as Vietnam-era Secretary of Defense Robert McNamara did in relying too much on his infamous body count, and as bankers did prior to the 2007–2009 financial crisis in relying too much on flawed quantitative models.

The skeptics are right that uncritical reliance on data alone can be problematic. But so is overreliance on intuition or ideology. For every Robert McNamara, there is a Ron Johnson, the CEO whose disastrous tenure as the head of JC Penney was characterized by his dismissing data and evidence in favor of instincts. For every flawed statistical model, there is a flawed ideology whose inflexibility leads to disastrous results.

So if data is unreliable and so is intuition, what is a responsible decision maker supposed to do? While there is no correct answer to this question—the world is too complicated for any one recipe to apply—I believe that leaders across a wide range of contexts could benefit from a scientific mind-set toward decision making.

A scientific mind-set takes as its inspiration the scientific method, which at its core is a recipe for learning about the world in a systematic, replicable way: start with some general question based on your experience; form a hypothesis that would resolve the puzzle and that also generates a testable prediction; gather data to test your prediction; and finally, evaluate your hypothesis relative to competing hypotheses.

The scientific method is largely responsible for the astonishing increase in our understanding of the natural world over the past few centuries. Yet it has been slow to enter the worlds of politics, business, policy, and marketing, where our prodigious intuition for human behavior can always generate explanations for why people do what they do or how to make them do something different. Because these explanations are so plausible, our natural tendency is to want to act on them without further ado. But if we have learned one thing from science, it is that the most plausible explanation is not necessarily correct. Adopting a scientific approach to decision making requires us to test our hypotheses with data.

While data is essential for scientific decision making, theory, intuition, and imagination remain important as well—to generate hypotheses in the first place, to devise creative tests of the hypotheses that we have, and to interpret the data that we collect. Data and theory, in other words, are the yin and yang of the scientific method—theory frames the right questions, while data answers the questions that have been asked. Emphasizing either at the expense of the other can lead to serious mistakes.

Also important is experimentation, which doesn’t mean “trying new things” or “being creative” but quite specifically the use of controlled experiments to tease out causal effects. In business, most of what we observe is correlation—we do X and Y happens—but often what we want to know is whether or not X caused Y. How many additional units of your new product did your advertising campaign cause consumers to buy? Will expanded health insurance coverage cause medical costs to increase or decline? Simply observing the outcome of a particular choice does not answer causal questions like these: we need to observe the difference between choices.

Replicating the conditions of a controlled experiment is often difficult or impossible in business or policy settings, but increasingly it is being done in “field experiments,” where treatments are randomly assigned to different individuals or communities. For example, MIT’s Poverty Action Lab has conducted over 400 field experiments to better understand aid delivery, while economists have used such experiments to measure the impact of online advertising.
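A toy illustration of the difference between observing a correlation and running the randomized comparison described above, using simulated data rather than any real campaign:

```python
import random

def difference_in_means(treated, control):
    """Estimated average causal effect from a randomized experiment."""
    return sum(treated) / len(treated) - sum(control) / len(control)

random.seed(1)
treated, control = [], []
for _ in range(10_000):
    saw_ad = random.random() < 0.5        # the random assignment is the key step
    spend = random.gauss(10, 2)           # spending without the ad
    if saw_ad:
        spend += 1.5                      # true effect, unknown to the analyst
    (treated if saw_ad else control).append(spend)

print(round(difference_in_means(treated, control), 2))   # ~1.5
```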

Although field experiments are not an invention of the Internet era—randomized trials have been the gold standard of medical research for decades—digital technology has made them far easier to implement. Thus, as companies like Facebook, Google, Microsoft, and Amazon increasingly reap performance benefits from data science and experimentation, scientific decision making will become more pervasive.

Nevertheless, there are limits to how scientific decision makers can be. Unlike scientists, who have the luxury of withholding judgment until sufficient evidence has accumulated, policy makers or business leaders generally have to act in a state of partial ignorance. Strategic calls have to be made, policies implemented, reward or blame assigned. No matter how rigorously one tries to base one’s decisions on evidence, some guesswork will be required.

Exacerbating this problem is that many of the most consequential decisions offer only one opportunity to succeed. One cannot go to war with half of Iraq and not the other just to see which policy works out better. Likewise, one cannot reorganize the company in several different ways and then choose the best. The result is that we may never know which good plans failed and which bad plans worked.

Read the entire article here.

Image: Screenshot of Iris, Ayasdi’s data-visualization tool. Courtesy of Ayasdi / Wired.

The Persistent Self

eterni-screenshot

Many of us strive for persistence beyond the realm of our natural life-spans. Some seek to be remembered through monuments, buildings and other physical objects. Others seek permanence through literary and artistic works. Still others aim for remembrance through less lasting but noble deeds: social programs, health initiatives, charitable foundations and so on. And yet others wish to be preserved in frozen stasis for later thawing and re-awakening. It is safe to say that many of us would seek to live forever.

So, it comes as no surprise to see internet startups exploring the market to preserve us or facsimiles of us — digitally — after death. Introducing Eterni.me — your avatar to a virtual eternity.

From Wired (UK):

“We don’t try to replace humans or give false hopes to people grieving.” Romanian design consultant Marius Ursache, cofounder of Eterni.me, needs to clear this up quickly. Because when you’re building a fledgling artificial intelligence company that promises to bring back the dead — or at least, their memories and character, as preserved in their digital footprint — for virtual chats with loved ones, expect a lot of flack.

“It is going to really suck — think Cleverbot with weird out-of-place references to things from that person’s life, masquerading as that person,” wrote one Redditor on the thread “Become Virtually Immortal (In the creepiest way possible)”, which immediately appeared after Eterni.me’s launch was announced last week. Retorts ranged from the bemused — “Now that is some scary f’d up s**t right there. WTF!?” — to the amusing: “Imagine a world where drunk you has to reason with sober AI you before you’re allowed to drunk dial every single person you ever dated or saw naked. So many awkward moments avoided.” But the resounding consensus seems to be that everyone wants to know more.

The site launched with the look of any other Silicon Valley internet startup, but a definitively new take on an old message. While social media companies want you to share and create the story of you while you’re alive, and lifelogging company Memoto promises to capture “meaningful [and shareable] moments”, Eterni.me wants to wrap that all up for those you leave behind into a cohesive AI they can chat with.

Three thousand people registered to the service within the first four days of the site going live, despite there being zero product to make use of (a beta version is slated for 2015). So with a year to ponder your own mortality, why the excitement for a technology that is, at this moment, merely a proof of concept?

“We got very mixed reactions, from ecstatic congratulations to hate mail. And it’s normal — it’s a very polarising topic. But one thing was constant: almost everybody we’ve interacted with truly believes this will be a reality someday. The only question is when it will be a reality and who will make it a reality,” Ursache tells us.

Popular culture and the somewhat innate human need to believe we are impervious have well prepared us for the concept. Ray Kurzweil wants us to upload our brains to computers and develop synthetic neocortexes, and AI has featured prominently on film and TV for decades, including in this month’s Valentine’s Day release of a human-virtual assistant love story. In series two of British future-focused drama Black Mirror, Hayley Atwell reconnects with her deceased lover using a system comparable to what Eterni.me is trying to achieve — though Ursache calls it a “creepier” version, and tells us “we’re trying to stay away from that idea”, the concept that it’s a way for grieving loved ones to stall moving on.

Sigmund Freud called our relationship with the concept of immortality the “real secret of heroism” — that we carry out heroic feats is only down to a perpetual and inherent belief that our consciousness is permanent. He writes in Reflections on War and Death: “We cannot, indeed, imagine our own death; whenever we try to do so we find that we survive ourselves as spectators. The school of psychoanalysis could thus assert that at bottom no one believes in his own death, which amounts to saying: in the unconscious every one of us is convinced of his immortality… Our unconscious therefore does not believe in its own death; it acts as though it were immortal.”

This is why Eterni.me is not just about loved ones signing up after the event, but individuals signing up to have their own character preserved, under their watchful eye while still alive.

The company’s motto is “it’s like a Skype chat from the past,” but it’s still very much about crafting how the world sees you — or remembers you, in this case — just as you might pause and ponder on hitting Facebook’s post button, wondering till the last if your spaghetti dinner photo/comment really gets the right message across. On its more troubling side, the site plays on the fear that you can no longer control your identity after you’re gone; that you are in fact a mere mortal. “The moments and emotions in our lifetime define how we are seen by our family and friends. All these slowly fade away after we die — until one day… we are all forgotten,” it says in its opening lines — scroll down and it provides the answer to all your problems: “Simply Become Immortal”. Part of the reason we might identify as being immortal — at least unconsciously, as Freud describes it — is because we craft a life we believe will be memorable, or have children we believe our legacy will live on in. Eterni.me’s comment shatters that illusion and could be seen as opportunistic on the founders’ part. The site also goes on to promise a “virtual YOU” that can “offer information and advice to your family and friends after you pass away”, a comfort to anyone worried about leaving behind a spouse or children.

In contrast to this rather dramatic claim, Ursache says: “We’re trying to make it clear that it’s not replacing a person, but trying to preserve as much of the information one generates, and offering asynchronous access to it.”

Read the entire article here.

Image: Eterni.me screenshot. Courtesy of Eterni.

The Persistent Panopticon

microsoft-surveillance-system

Based on the ever-encroaching surveillance systems used by local and national governments and private organizations, one has to wonder whether we — the presumed innocent — are living inside or outside a prison facility. Advances in security and surveillance systems now make it possible to track swathes of the population over long periods of time and across an entire city.

From the Washington Post:

Shooter and victim were just a pair of pixels, dark specks on a gray streetscape. Hair color, bullet wounds, even the weapon were not visible in the series of pictures taken from an airplane flying two miles above.

But what the images revealed — to a degree impossible just a few years ago — was location, mapped over time. Second by second, they showed a gang assembling, blocking off access points, sending the shooter to meet his target and taking flight after the body hit the pavement. When the report reached police, it included a picture of the blue stucco building into which the killer ultimately retreated, at last beyond the view of the powerful camera overhead.

“I’ve witnessed 34 of these,” said Ross McNutt, the genial president of Persistent Surveillance Systems, which collected the images of the killing in Ciudad Juarez, Mexico, from a specially outfitted Cessna. “It’s like opening up a murder mystery in the middle, and you need to figure out what happened before and after.”

As Americans have grown increasingly comfortable with traditional surveillance cameras, a new, far more powerful generation is being quietly deployed that can track every vehicle and person across an area the size of a small city, for several hours at a time. Though these cameras can’t read license plates or see faces, they provide such a wealth of data that police, businesses, even private individuals can use them to help identify people and track their movements.

Already, the cameras have been flown above major public events, such as the Ohio political rally where Sen. John McCain (R-Ariz.) named Sarah Palin as his running mate in 2008, McNutt said. They’ve been flown above Baltimore; Philadelphia; Compton, Calif.; and Dayton in demonstrations for police. They’ve also been used for traffic impact studies, for security at NASCAR races — and at the request of a Mexican politician, who commissioned the flights over Ciudad Juarez.


Defense contractors are developing similar technology for the military, but its potential for civilian use is raising novel civil-liberty concerns. In Dayton, where Persistent Surveillance Systems is based, city officials balked last year when police considered paying for 200 hours of flights, in part because of privacy complaints.

“There are an infinite number of surveillance technologies that would help solve crimes … but there are reasons that we don’t do those things, or shouldn’t be doing those things,” said Joel Pruce, a University of Dayton post-doctoral fellow in human rights who opposed the plan. “You know where there’s a lot less crime? There’s a lot less crime in China.”

McNutt, a retired Air Force officer who once helped design a similar system for the skies above Fallujah, a key battleground city in Iraq, hopes to win over officials in Dayton and elsewhere by convincing them that cameras mounted on fixed-wing aircraft can provide far more useful intelligence than police helicopters do, for less money. The Supreme Court generally has given wide latitude to police using aerial surveillance so long as the photography captures images visible to the naked eye.

A single camera mounted atop the Washington Monument, McNutt boasts, could deter crime all around the National Mall. He thinks regular flights over the most dangerous parts of Washington — combined with publicity about how much police could now see — would make a significant dent in the number of burglaries, robberies and murders. His 192-megapixel cameras would spot as many as 50 crimes per six-hour flight, he estimates, providing police with a continuous stream of images covering more than a third of the city.

“We watch 25 square miles, so you see lots of crimes,” he said. “And by the way, after people commit crimes, they drive like idiots.”

What McNutt is trying to sell is not merely the latest techno-wizardry for police. He envisions such steep drops in crime that they will bring substantial side effects, including rising property values, better schools, increased development and, eventually, lower incarceration rates as the reality of long-term overhead surveillance deters those tempted to commit crimes.

Dayton Police Chief Richard Biehl, a supporter of McNutt’s efforts, has even proposed inviting the public to visit the operations center, to get a glimpse of the technology in action.

“I want them to be worried that we’re watching,” Biehl said. “I want them to be worried that they never know when we’re overhead.”

Technology in action

McNutt, a suburban father of four with a doctorate from the Massachusetts Institute of Technology, is not deaf to concerns about his company’s ambitions. Unlike many of the giant defense contractors that are eagerly repurposing wartime surveillance technology for domestic use, he sought advice from the American Civil Liberties Union in writing a privacy policy.

It has rules on how long data can be kept, when images can be accessed and by whom. Police are supposed to begin looking at the pictures only after a crime has been reported. Pure fishing expeditions are prohibited.

The technology has inherent limitations as well. From the airborne cameras, each person appears as a single pixel indistinguishable from any other person. What they are doing — even whether they are clothed or not — is impossible to see. As camera technology improves, McNutt said he intends to increase their range, not the precision of the imagery, so that larger areas can be monitored.
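The “one person per pixel” claim squares with the figures quoted earlier. A back-of-the-envelope check, assuming the 192-megapixel image is spread evenly over the 25-square-mile footprint (a simplification of how the optics actually work):

```python
PIXELS = 192e6                       # camera resolution quoted above
AREA_SQ_MILES = 25                   # coverage area quoted above
SQ_M_PER_SQ_MILE = 2_589_988

m2_per_pixel = AREA_SQ_MILES * SQ_M_PER_SQ_MILE / PIXELS
print(round(m2_per_pixel, 2))        # ~0.34 square meters per pixel
print(round(m2_per_pixel ** 0.5, 2)) # ~0.58 m per pixel side -- about person-sized
```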

The notion that McNutt and his roughly 40 employees are peeping Toms clearly rankles. They made a PowerPoint presentation for the ACLU that includes pictures taken to aid the response to Hurricane Sandy and the severe Iowa floods last summer. The section is titled: “Good People Doing Good Things.”

“We get a little frustrated when people get so worried about us seeing them in their back yard,” McNutt said in his operation center, where the walls are adorned with 120-inch monitors, each showing a different grainy urban scene collected from above. “We can’t even see what they are doing in their backyard. And, by the way, we don’t care.”

Yet in a world of increasingly pervasive surveillance, location and identity are becoming all but inextricable — one quickly leads to the other for those with the right tools.

During one of the company’s demonstration flights over Dayton in 2012, police got reports of an attempted robbery at a bookstore and shots fired at a Subway sandwich shop. The cameras revealed a single car moving between the two locations.

By reviewing the images, frame by frame, analysts were able to help police piece together a larger story: The man had left a residential neighborhood midday, attempted to rob the bookstore but fled when somebody hit an alarm. Then he drove to Subway, where the owner pulled a gun and chased him off. His next stop was a Family Dollar Store, where the man paused for several minutes. He soon returned home, after a short stop at a gas station where a video camera captured an image of his face.

A few hours later, after the surveillance flight ended, the Family Dollar Store was robbed. Police used the detailed map of the man’s movements, along with other evidence from the crime scenes, to arrest him for all three crimes.

On another occasion, Dayton police got a report of a burglary in progress. The aerial cameras spotted a white truck driving away from the scene. Police stopped the driver before he got home from the heist, with the stolen goods sitting in the back of the truck. A witness identified him soon after.

Read the entire story here.

Image: Surveillance cameras. Courtesy of Mashable / Microsoft.

Your iPhone is Worth $3,000

iphone_5C-colors

There is a slight catch.

Your iPhone is worth around $3,000 based on the combined value of a sack full of gadgets from over 20 years ago. We all know that no iPhone existed in the early nineties — not even inside Steve Jobs’ head. So intrepid tech-sleuth Steve Cichon calculated the iPhone’s value by combining the functions of fifteen or so consumer electronics devices from 1991, found at Radio Shack, which, when combined, offer features comparable to one of today’s iPhones.

From the Washington Post:

Buffalo writer Steve Cichon dug up an old Radio Shack ad, offering a variety of what were then cutting-edge gadgets. There are 15 items listed on the page, and Cichon points out that all but two of them — the exceptions are a radar detector and a set of speakers — do jobs that can now be performed with a modern iPhone.

The other 13 items, including a desktop computer, a camcorder, a CD player  and a mobile phone, have a combined price of $3,071.21. The unsubsidized price of an iPhone is $549. And, of course, your iPhone is superior to these devices in many respects. The VHS camcorder, for example, captured video at a quality vastly inferior to the crystal-clear 1080p video an iPhone can record. That $1,599 Tandy computer would have struggled to browse the Web of the 1990s, to say nothing of the sophisticated Web sites iPhones access today. The CD player only lets you carry a few albums worth of music at a time; an iPhone can hold thousands of songs. And of course, the iPhone fits in your pocket.

This example is important to remember in the debate over whether the government’s official inflation figures understate or overstate inflation. In computing the inflation rate, economists assemble a representative “basket of goods” and see how its price changes over time. This isn’t difficult when the items in the basket are milk or gallons of gasoline. But it becomes extremely tricky when thinking about high-tech products. This year’s products are dramatically better than last year’s, so economists include a “quality adjustment” factor to reflect the change. But making apples-to-apples comparisons is difficult.
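Using the article’s own numbers, the unadjusted comparison is straightforward; the hard, subjective part is the quality adjustment, for which the factor below is a deliberately arbitrary placeholder:

```python
basket_1991 = 3071.21    # combined Radio Shack prices quoted above
iphone_2014 = 549.00     # unsubsidized iPhone price quoted above

print(round(basket_1991 / iphone_2014, 2))   # ~5.59x cheaper in nominal terms

# If the iPhone were judged k times better than the 1991 basket, its
# quality-adjusted price would be iphone_2014 / k, widening the gap further.
k = 2.0                                       # arbitrary illustrative factor
print(round(basket_1991 / (iphone_2014 / k), 2))
```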

There’s no basket of 1991 gadgets that exactly duplicates the functionality of a modern iPhone, so deciding what to put into that basket is an inherently subjective enterprise. It’s not obvious that the average customer really gets as much value from his or her iPhone as a gadget lover in 1991 would have gotten from $3,000 worth of Radio Shack gadgets. On the other hand, iPhones do a lot of other things, too, like check Facebook, show movies on the go and provide turn-by-turn directions, that would have been hard to do on any gadget in 1991. So if anything, I suspect the way we measure inflation understates how quickly our standard of living has been improving.

Read the entire story here.

Image: Apple iPhone 5c. Courtesy of ABC News / Apple.

Techno-Blocking Technology

google-glass2

Many technologists, philosophers and social scientists who consider the ethics of technology have described it as a double-edged sword. Indeed, observation does seem to uphold this idea; for every benefit gained from a new invention comes a mirroring disadvantage or peril. Not that technology per se is a threat — but its human masters seem to be rather adept at deploying it for both good and evil ends.

As a corollary, it is also evident that many a new technology spawns others, and sometimes entire industries, to counteract the first. Radar begets radar-evading materials; radio begets the radio-jamming transmitter; cryptography begets hacking. You get the idea.

So, not a moment too soon, comes PlaceAvoider, a technology to suppress the capture and sharing of images seen through Google Glass. Watch out, Brin, Page and company: the watchers are watching you.

From Technology Review:

With last year’s launch of the Narrative Clip and Autographer, and Google Glass poised for release this year, technologies that can continuously capture our daily lives with photos and videos are inching closer to the mainstream. These gadgets can generate detailed visual diaries, drive self-improvement, and help those with memory problems. But do you really want to record in the bathroom or a sensitive work meeting?

Assuming that many people don’t, computer scientists at Indiana University have developed software that uses computer vision techniques to automatically identify potentially confidential or embarrassing pictures taken with these devices and prevent them from being shared. A prototype of the software, called PlaceAvoider, will be presented at the Network and Distributed System Security Symposium in San Diego in February.

“There simply isn’t the time to manually curate the thousands of images these devices can generate per day, and in a socially networked world that might lead to the inadvertent sharing of photos you don’t want to share,” says Apu Kapadia, who co-leads the team that developed the system. “Or those who are worried about that might just not share their life-log streams, so we’re trying to help people exploit these applications to the full by providing them with a way to share safely.”

Kapadia’s group began by acknowledging that devising algorithms that can identify sensitive pictures solely on the basis of visual content is probably impossible, since the things that people do and don’t want to share can vary widely and may be difficult to recognize. They set about designing software that users train by taking pictures of the rooms they want to blacklist. PlaceAvoider then flags new pictures taken in those rooms so that the user can review them.

The system uses an existing computer-vision algorithm called scale-invariant feature transform (SIFT) to pinpoint regions of high contrast around corners and edges within the training images that are likely to stay visually constant even in varying light conditions and from different perspectives. For each of these, it produces a “numerical fingerprint” consisting of 128 separate numbers relating to properties such as color and texture, as well as its position relative to other regions of the image. Since images are sometimes blurry, PlaceAvoider also looks at more general properties such as colors and textures of walls and carpets, and takes into account the sequence in which shots are taken.
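
PlaceAvoider itself isn't publicly available, but the SIFT step described above is easy to approximate with off-the-shelf tools. A minimal sketch using OpenCV follows; the file names and the match threshold are invented for illustration, and this is OpenCV's SIFT, not the researchers' code.

```python
# Approximation of the SIFT step described above, using OpenCV: extract
# 128-dimensional descriptors from a training photo of a blacklisted room,
# then see how strongly a new life-log frame matches them.
import cv2

sift = cv2.SIFT_create()

def descriptors(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)   # desc is an N x 128 array
    return desc

blacklisted = descriptors("bathroom_training_shot.jpg")   # hypothetical file
candidate = descriptors("new_lifelog_frame.jpg")          # hypothetical file

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matches = cv2.BFMatcher().knnMatch(candidate, blacklisted, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Crude flagging rule; the threshold here is invented, not PlaceAvoider's.
if len(good) > 50:
    print("Flag this frame for review before sharing")
```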

In tests, the system accurately determined whether images from streams captured in the homes and workplaces of the researchers were from blacklisted rooms an average of 89.8 percent of the time.

PlaceAvoider is currently a research prototype; its various components have been written but haven’t been combined as a completed product, and researchers used a smartphone worn around the neck to take photos rather than an existing device meant for life-logging. If developed to work on a life-logging device, an interface could be designed so that PlaceAvoider can flag potentially sensitive images at the time they are taken or place them in quarantine to be dealt with later.

Read the entire article here.

Image: Google Glass. Courtesy of Google.

3D Printing Grows Up

cubify-3dme

So, you’d like to print a 3D engine part for your jet fighter aircraft, or print a baby — actually a realistic model of one — or shoe insoles or a fake flower. Or perhaps you’d like to print a realistic windpipe or a new arm, or a guitar or a bikini or a model of a sports stadium or even a 3D selfie (please, say no). All of these and more can now be printed in three dimensions courtesy of this rapidly developing area of technology.

From the Guardian:

As a technology journalist – even one who hasn’t written much about 3D printing – I’ve noticed a big growth in questions from friends about the area in recent months. Often, those questions are the same ones, too.

How does 3D printing even work? What’s all this about 3D-printed guns? Can you 3D-print a 3D printer? Why are they so expensive? What can you actually make with them? Apart from guns…

The ethical and legal questions around 3D printing and firearms are important and complex, but they also tend to hoover up a lot of the mainstream media attention for this area of technology. But it’s the “what can you actually make with them” question that’s been pulling me in recently.

There’s a growing community – from individual makers to nascent businesses – exploring the potential of 3D printing. This feature is just a snapshot of some of the products and projects that caught my attention, rather than a definitive roundup.

A taste of what’s happening, but one that’s ripe for your comments pointing out better examples in these categories, and other areas that have been left out. All contributions are welcome, but here are 30 things to start the discussion off.

1. RAF Tornado fighter jet parts

Early this year, BAE Systems said that British fighter jets had flown for the first time with components made using 3D printing technology. Its engineers are making parts for four squadrons of Tornado GR4 aircraft, with the aim of saving £1.2m in maintenance and service costs over the next four years. “You are suddenly not fixed in terms of where you have to manufacture these things,” said BAE’s Mike Murray. “You can manufacture the products at whatever base you want, providing you can get a machine there.”

2. Arms for children

Time’s article from earlier this month on the work of Not Impossible Labs makes for powerful reading: a project using 3D printers to make low-cost prosthetic limbs for amputees, including Sudanese bomb-blast victim Daniel Omar. But this is just one of the stories emerging: see also 3Ders’ piece on a four-year-old called Hannah, with a condition called arthrogryposis that limits her ability to lift her arms unaided, but who now has a Wilmington Robotic Exoskeleton (WREX for short) to help, made using 3D printing.

3. Old Trafford and the Etihad Stadium

Manchester-based company Hobs’ business is based around working with architects, engineers and other creatives to use 3D printing as part of their work, but to show off its capabilities, the company 3D printed models of the city’s two football stadia – Old Trafford and the Etihad Stadium – giving them away in a competition for Manchester Evening News readers. The models were estimated to be worth £1,000 each.

4. Unborn babies

Not actually as creepy as it sounds. This is more an extension of the 4D ultrasound images of babies in the womb that have become more popular in recent years. The theory: why not print them out? One company doing it, 3D Babies, didn’t have much luck with a crowdfunding campaign last year, raising $1,225 of its $15,000 goal. Even so, its website is up and running, offering eight-inch “custom lifesize baby” models for $800 a pop.

5. Super Bowl shoe cleats

Expect to see a number of big brands launching 3D printing projects this year – part R&D and part PR campaigns. Nike is one example: it’s showing off a training shoe called the Vapor Carbon Elite Cleat for this year’s Super Bowl, with a 3D-printed nylon base and cleats – the latter based on the existing Vapor Laser Talon, which was unveiled a year ago.

6. Honda concept cars

Admittedly, not an actual concept car that you can drive. Not yet. But Honda has made five 3D-printable models available from its website for fans to download and make, including 1994’s FSR Concept and 2003’s Kiwami. So it’s more about shining a light on the company’s archives and being seen to be innovative – although the potential of 3D printing for internal prototyping at all kinds of manufacturers (cars included) is one of the most interesting areas for 3D printing.

Read the entire article here.

Image: Cubify’s 3DMe figures. Courtesy of Cubify.

Post-Siri Relationships

siri

What are we to make of a world in which software-driven intelligent agents, artificial intelligence and language-processing capabilities combine to deliver a human experience? After all, what does it really mean to be human, and can a machine be sentient? We should all be pondering such weighty issues, since this emerging reality may well arrive within our lifetimes.

From Technology Review:

In the movie Her, which was nominated for the Oscar for Best Picture this year, a middle-aged writer named Theodore Twombly installs and rapidly falls in love with an artificially intelligent operating system who christens herself Samantha.

Samantha lies far beyond the faux “artificial intelligence” of Google Now or Siri: she is as fully and unambiguously conscious as any human. The film’s director and writer, Spike Jonze, employs this premise for limited and prosaic ends, so the film limps along in an uncanny valley, neither believable as near-future reality nor philosophically daring enough to merit suspension of disbelief. Nonetheless, Her raises questions about how humans might relate to computers. Twombly is suffering a painful separation from his wife; can Samantha make him feel better?

Samantha’s self-awareness does not echo real-world trends for automated assistants, which are heading in a very different direction. Making personal assistants chatty, let alone flirtatious, would be a huge waste of resources, and most people would find them as irritating as the infamous Microsoft Clippy.

But it doesn’t necessarily follow that these qualities would be unwelcome in a different context. When dementia sufferers in nursing homes are invited to bond with robot seal pups, and a growing list of psychiatric conditions are being addressed with automated dialogues and therapy sessions, it can only be a matter of time before someone tries to create an app that helps people overcome ordinary loneliness. Suppose we do reach the point where it’s possible to feel genuinely engaged by repartee with a piece of software. What would that mean for the human participants?

Perhaps this prospect sounds absurd or repugnant. But some people already take comfort from immersion in the lives of fictional characters. And much as I wince when I hear someone say that “my best friend growing up was Elizabeth Bennet,” no one would treat it as evidence of psychotic delusion. Over the last two centuries, the mainstream perceptions of novel reading have traversed a full spectrum: once seen as a threat to public morality, it has become a badge of empathy and emotional sophistication. It’s rare now to hear claims that fiction is sapping its readers of time, energy, and emotional resources that they ought to be devoting to actual human relationships.

Of course, characters in Jane Austen novels cannot banter with the reader—and it’s another question whether it would be a travesty if they could—but what I’m envisaging are not characters from fiction “brought to life,” or even characters in a game world who can conduct more realistic dialogue with human players. A software interlocutor—an “SI”—would require some kind of invented back story and an ongoing “life” of its own, but these elements need not have been chosen as part of any great dramatic arc. Gripping as it is to watch an egotistical drug baron in a death spiral, or Raskolnikov dragged unwillingly toward his creator’s idea of redemption, the ideal SI would be more like a pen pal, living an ordinary life untouched by grand authorial schemes but ready to discuss anything, from the mundane to the metaphysical.

There are some obvious pitfalls to be avoided. It would be disastrous if the user really fell for the illusion of personhood, but then, most of us manage to keep the distinction clear in other forms of fiction. An SI that could be used to rehearse pathological fantasies of abusive relationships would be a poisonous thing—but conversely, one that stood its ground against attempts to manipulate or cower it might even do some good.

The art of conversation, of listening attentively and weighing each response, is not a universal gift, any more than any other skill. If it becomes possible to hone one’s conversational skills with a computer—discovering your strengths and weaknesses while enjoying a chat with a character that is no less interesting for failing to exist—that might well lead to better conversations with fellow humans.

Read the entire story here.

Image: Siri icon. Courtesy of Cult of Mac / Apple.

Your Toaster on the Internet

Toaster

Billions of people have access to the Internet. Now, whether a significant proportion of them do anything productive with this tremendous resource is open to debate — many prefer only to post pictures of their breakfasts or of themselves, or to watch the latest viral video hit.

Despite all these humans clogging up the Tubes of the Internets, most traffic along the information superhighway is in fact not even human. Over 60 percent of all activity comes from computer systems such as web crawlers, botnets and, increasingly, industrial control systems, ranging from security and monitoring devices to in-home devices such as your thermostat, refrigerator, smart TV, smart toilet and toaster. So, soon Google will know what you eat and when, and your fridge will tell you what you should eat (or not) based on what it knows of your body mass index (BMI) from your bathroom scales.

Jokes aside, the Internet of Things (IoT) promises to herald an even more significant information revolution over the coming decades as all our devices and machines, from home to farm to factory, are connected and inter-connected.

From ars technica:

If you believe what the likes of LG and Samsung have been promoting this week at CES, everything will soon be smart. We’ll be able to send messages to our washing machines, run apps on our fridges, and have TVs as powerful as computers. It may be too late to resist this movement, with smart TVs already firmly entrenched in the mid-to-high end market, but resist it we should. That’s because the “Internet of things” stands a really good chance of turning into the “Internet of unmaintained, insecure, and dangerously hackable things.”

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of “smart” functionality—fridges that know what we buy and when, TVs that know what shows we watch—all connected to the Internet 24/7, all completely insecure.

While the value of smart watches or washing machines isn’t entirely clear, at least some smart devices—I think most notably phones and TVs—make sense. The utility of the smartphone, an Internet-connected computer that fits in your pocket, is obvious. The growth of streaming media services means that your antenna or cable box are no longer the sole source of televisual programming, so TVs that can directly use these streaming services similarly have some appeal.

But these smart features make the devices substantially more complex. Your smart TV is not really a TV so much as an all-in-one computer that runs Android, WebOS, or some custom operating system of the manufacturer’s invention. And where once it was purely a device for receiving data over a coax cable, it’s now equipped with bidirectional networking interfaces, exposing the Internet to the TV and the TV to the Internet.

The result is a whole lot of exposure to security problems. Even if we assume that these devices ship with no known flaws—a questionable assumption in and of itself if SOHO routers are anything to judge by—a few months or years down the line, that will no longer be the case. Flaws and insecurities will be uncovered, and the software components of these smart devices will need to be updated to address those problems. They’ll need these updates for the lifetime of the device, too. Old software is routinely vulnerable to newly discovered flaws, so there’s no point in any reasonable timeframe at which it’s OK to stop updating the software.

In addition to security, there’s also a question of utility. Netflix and Hulu may be hot today, but that may not be the case in five years’ time. New services will arrive; old ones will die out. Even if the service lineup remains the same, its underlying technology is unlikely to be static. In the future, Netflix, for example, might want to deprecate old APIs and replace them with new ones; Netflix apps will need to be updated to accommodate the changes. I can envision changes such as replacing the H.264 codec with H.265 (for reduced bandwidth and/or improved picture quality), which would similarly require updated software.

To remain useful, app platforms need up-to-date apps. As such, for your smart device to remain safe, secure, and valuable, it needs a lifetime of software fixes and updates.

A history of non-existent updates

Herein lies the problem, because if there’s one thing that companies like Samsung have demonstrated in the past, it’s a total unwillingness to provide a lifetime of software fixes and updates. Even smartphones, which are generally assumed to have a two-year lifecycle (with replacements driven by cheap or “free” contract-subsidized pricing), rarely receive updates for the full two years (Apple’s iPhone being the one notable exception).

A typical smartphone bought today will remain useful and usable for at least three years, but its system software support will tend to dry up after just 18 months.

This isn’t surprising, of course. Samsung doesn’t make any money from making your two-year-old phone better. Samsung makes its money when you buy a new Samsung phone. Improving the old phones with software updates would cost money, and that tends to limit sales of new phones. For Samsung, it’s lose-lose.

Our fridges, cars, and TVs are not even on a two-year replacement cycle. Even if you do replace your TV after it’s a couple years old, you probably won’t throw the old one away. It will just migrate from the living room to the master bedroom, and then from the master bedroom to the kids’ room. Likewise, it’s rare that a three-year-old car is simply consigned to the scrap heap. It’s given away or sold off for a second, third, or fourth “life” as someone else’s primary vehicle. Your fridge and washing machine will probably be kept until they blow up or you move houses.

These are all durable goods, kept for the long term without any equivalent to the smartphone carrier subsidy to promote premature replacement. If they’re going to be smart, software-powered devices, they’re going to need software lifecycles that are appropriate to their longevity.

That costs money, it requires a commitment to providing support, and it does little or nothing to promote sales of the latest and greatest devices. In the software world, there are companies that provide this level of support—the Microsofts and IBMs of the world—but it tends to be restricted to companies that have at least one eye on the enterprise market. In the consumer space, you’re doing well if you’re getting updates and support five years down the line. Consumer software fixes a decade later are rare, especially if there’s no system of subscriptions or other recurring payments to monetize the updates.

Of course, the companies building all these products have the perfect solution. Just replace all our stuff every 18-24 months. Fridge no longer getting updated? Not a problem. Just chuck out the still perfectly good fridge you have and buy a new one. This is, after all, the model that they already depend on for smartphones. Of course, it’s not really appropriate even to smartphones (a mid/high-end phone bought today will be just fine in three years), much less to stuff that will work well for 10 years.

These devices will be abandoned by their manufacturers, and it’s inevitable that they are abandoned long before they cease to be useful.

Superficially, this might seem to be no big deal. Sure, your TV might be insecure, but your NAT router will probably provide adequate protection, and while it wouldn’t be tremendously surprising to find that it has some passwords for online services or other personal information on it, TVs are sufficiently diverse that people are unlikely to expend too much effort targeting specific models.

Read the entire story here.

Image: A classically styled chrome two-slot automatic electric toaster. Courtesy of Wikipedia.

Online Social Networks as Infectious Diseases

Yersinia_pestis

A new research study applies the concepts of infectious diseases to online social networks. By applying epidemiological modelling to examine the dynamics of networks, such as MySpace and Facebook, researchers are able to analyze the explosive growth — the term “viral” is not coincidental — and ultimate demise of such networks. So, is Facebook destined to suffer a fate similar to Myspace, Bebo, polio and the bubonic plague? These researchers from Princeton think so, estimating Facebook will lose 80 percent of its 1.2 billion users by 2017.

From the Guardian:

Facebook has spread like an infectious disease but we are slowly becoming immune to its attractions, and the platform will be largely abandoned by 2017, say researchers at Princeton University (pdf).

The forecast of Facebook’s impending doom was made by comparing the growth curve of epidemics to those of online social networks. Scientists argue that, like bubonic plague, Facebook will eventually die out.

The social network, which celebrates its 10th birthday on 4 February, has survived longer than rivals such as Myspace and Bebo, but the Princeton forecast says it will lose 80% of its peak user base within the next three years.

John Cannarella and Joshua Spechler, from the US university’s mechanical and aerospace engineering department, have based their prediction on the number of times Facebook is typed into Google as a search term. The charts produced by the Google Trends service show Facebook searches peaked in December 2012 and have since begun to trail off.
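
The raw ingredient here is just the Google Trends curve for the search term "facebook". Anyone can pull something similar with pytrends, an unofficial Python client for Google Trends; this is not what the authors used, and it is subject to Google's rate limits.

```python
# Pull the weekly Google search-interest curve for "facebook" with pytrends,
# an unofficial Google Trends client. The paper's authors simply downloaded
# the same curve directly from Google Trends.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
pytrends.build_payload(["facebook"], timeframe="2004-01-01 2014-01-01")
interest = pytrends.interest_over_time()   # pandas DataFrame, one row per week

print(interest["facebook"].idxmax())       # should land near the late-2012 peak
```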

“Ideas, like diseases, have been shown to spread infectiously between people before eventually dying out, and have been successfully described with epidemiological models,” the authors claim in a paper entitled Epidemiological modelling of online social network dynamics.

“Ideas are spread through communicative contact between different people who share ideas with each other. Idea manifesters ultimately lose interest with the idea and no longer manifest the idea, which can be thought of as the gain of ‘immunity’ to the idea.”

Facebook reported nearly 1.2 billion monthly active users in October, and is due to update investors on its traffic numbers at the end of the month. While desktop traffic to its websites has indeed been falling, this is at least in part due to the fact that many people now only access the network via their mobile phones.

For their study, Cannarella and Spechler used what is known as the SIR (susceptible, infected, recovered) model of disease, which creates equations to map the spread and recovery of epidemics.
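
For the curious, a textbook SIR model is just three coupled differential equations, and the sketch below integrates them with SciPy. Note that the Princeton paper actually uses a modified variant in which recovery spreads through contact with already-recovered users, so treat this as the generic starting point; the parameters are pure illustration, not fitted to any network.

```python
# Textbook SIR model integrated with SciPy. The Princeton paper uses a modified
# variant (recovery through contact with recovered users); this is the generic
# version, and the parameters are illustrative.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    ds = -beta * s * i              # susceptibles "catch" the network from users
    di = beta * s * i - gamma * i   # active users grow, then lose interest
    dr = gamma * i                  # the "immune": people who have moved on
    return [ds, di, dr]

t = np.linspace(0, 15, 300)                        # arbitrary time units
y0 = [0.999, 0.001, 0.0]                           # fractions of the population
s, i, r = odeint(sir, y0, t, args=(3.0, 1.0)).T    # beta=3.0, gamma=1.0

print(f"Peak adoption: {i.max():.2f} of the population at t = {t[i.argmax()]:.1f}")
```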

They tested various equations against the lifespan of Myspace, before applying them to Facebook. Myspace was founded in 2003 and reached its peak in 2007 with 300 million registered users, before falling out of use by 2011. Purchased by Rupert Murdoch’s News Corp for $580m, Myspace signed a $900m deal with Google in 2006 to sell its advertising space and was at one point valued at $12bn. It was eventually sold by News Corp for just $35m.

The 870 million people using Facebook via their smartphones each month could explain the drop in Google searches – those looking to log on are no longer doing so by typing the word Facebook into Google.

But Facebook’s chief financial officer David Ebersman admitted on an earnings call with analysts that during the previous three months: “We did see a decrease in daily users, specifically among younger teens.”

Investors do not appear to be heading for the exit just yet. Facebook’s share price reached record highs this month, valuing founder Mark Zuckerberg’s company at $142bn.

Read the entire article here.

Image: Scanning electron microscope image of Yersinia pestis, the bacterium responsible for bubonic plague. Courtesy of Wikipedia.


Waterproof Clothes

Another technology barrier falls by the wayside as textile and materials science researchers perfect an ultra-hydrophobic spray. No more getting your clothes wet in a downpour.

From the Guardian:

I hate being rained on. I especially hate it when it’s cold. You’d have thought that with all our 21st-century Google-Glass exploring-Mars engineering marvellousness, we would have made more progress on the problem of rain. But no. The umbrella is a few thousand years old and is nowhere near an optimal solution, especially in blustery windy weather. Wet-weather clothing works if you wear it, but most people don’t because it looks so awful.

From a materials-science perspective, the best solution for the British weather would be an invisible waterproof coating that you can spray on the clothes you actually do want to wear. Excitingly such materials have now been invented; they borrow tricks from nature, and they may yet get us singing in the rain.

Traditional waterproofing involves materials that are hydrophobic – in other words molecules that repel water. Waxes and other oily materials fall into this category because of the way they share their electrons at an atomic scale. Water molecules are polar, which means they have plus and minus charged ends. Waxes and oils prefer their electrons more equally distributed and so find it hard to conform to the polarity of water, and in the stand-off they repel each other. Hence oil and water don’t mix. This hydrophobic behaviour is bad for vinaigrettes but good for waterproofing.

Nature uses this trick too but is much better at it. Go into a garden during a rain shower and have a look at how many leaves repel water so effectively that water droplets sit like jewels glistening on their surface. Lotus leaves have long been known to have this superhydrophobic property, but no one knew why until electron microscopes revealed something very odd about the surface of the lotus leaf. There is a waxy material there, yes, but it is arranged on the surface in the form of billions of tiny microscopic bumps. When a drop of water sits on a hydrophobic surface it tries to minimise its area of contact, because it wants to minimise its interaction with the non-polar waxy material.

The bumps on the lotus leaf drastically increase this area of waxiness, forcing the droplet to sit up precariously on the tips of the bumps. In this, the Cassie-Baxter state, the droplet becomes very mobile and quickly slides off the leaf. So by manipulating just the bumpiness of their surfaces, lotus leaves are far better at repelling water.
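
The Cassie-Baxter state also has a tidy mathematical summary. A commonly quoted form of the relation, for a bumpy surface with air trapped between the bumps, is:

\[
\cos\theta_{CB} = f\left(1 + \cos\theta_{Y}\right) - 1
\]

Here \(\theta_{Y}\) is the contact angle water would make on a flat film of the same waxy material, and \(f\) is the fraction of the droplet's footprint that actually touches solid, i.e. the bump tips. As \(f\) shrinks, \(\cos\theta_{CB}\) heads toward -1 and the apparent contact angle toward 180 degrees: the droplet is essentially perched on air, which is why it rolls off so readily.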

The mobility of the droplets has another effect. By zooming around the surface of the leaf rather than sticking, the droplets of water collect small particles of dust, hoovering them up. This cleaning mechanism of these superhydrophobic surfaces is called the lotus effect.

Superhydrophobic surfaces have been synthesised and studied in labs for decades, but it is only recently that commercial versions have been produced. Now there are quite a few coming on to the market (eg neverwet.com), and they are impressive – when water is poured on to these surfaces it behaves like mercury and bounces off.

The trick, as with the lotus leaf, is to create a microscale patterned non-polar surface. The fact that these sophisticated surfaces can be sprayed out of a can is a triumph of nanotechnology. As with the lotus leaf these coatings not only keep things dry, they also keep them clean, since a lot of what constitutes dirt arrives on your clothes as splashes of liquid that subsequently dry leaving a residue. If the droplets of bolognese sauce, curry or mud don’t stick but bounce off, then they won’t leave a stain.

There are many other applications for these coatings, such as reducing the window cleaning bills on skyscrapers; keeping paint clean on cars; making sofas immune to red wine; and in its key role as waterproofer extraordinaire, keeping your mobile phone safe when it is dropped down the loo.

Read the entire article here.

Wearable Gadget Idea Generator

Need a new idea that rides the techno-wave where the Internet of Things meets smartphones and wearables? Find the sweet spot at the confluence of these big emerging trends and you could be the next internet zillionaire.

So, junk the late-night caffeine-induced brainstorming parties with your engineer friends and visit the following:

http://whatthefuckismywearablestrategy.com/

Courtesy of this wonderfully creative site we are now well on our way to inventing the following essential gizmos:

heart rate monitor that turns the central heating on when your sleep patterns change
pair of contact lenses that posts to facebook when it’s windy
t-shirt that tweets when you drink too much coffee
pair of trousers that turns the central heating on when you burn 100 calories
pair of shoes that instagrams a selfie when the cat needs feeding
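
The generator behind the site presumably just splices random parts together. A toy version, using the examples above as its word lists (the real site has its own, much larger and saltier, vocabulary), takes a dozen lines:

```python
# Toy wearable-strategy generator: splice a random device, action and trigger
# together. Word lists are lifted from the examples above.
import random

devices = ["heart rate monitor", "pair of contact lenses", "t-shirt",
           "pair of trousers", "pair of shoes"]
actions = ["turns the central heating on", "posts to facebook", "tweets",
           "instagrams a selfie"]
triggers = ["when your sleep patterns change", "when it's windy",
            "when you drink too much coffee", "when you burn 100 calories",
            "when the cat needs feeding"]

def wearable_strategy():
    return f"{random.choice(devices)} that {random.choice(actions)} {random.choice(triggers)}"

for _ in range(3):
    print(wearable_strategy())
```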


Younger Narcissists Use Twitter…

…older narcissists use Facebook.

google-search-selfie

Online social media and social networks provide a wonderful petri dish in which to study humanity. For those who are online and connected — and that is a significant proportion of the world’s population — their every move, click, purchase, post and like can be collected, aggregated, dissected and analyzed (and sold). These trails through the digital landscape provide fertile ground for psychologists and social scientists of all types to examine our behaviors and motivations, in real time. By their very nature, online social networks offer researchers a vast goldmine of data from which to extract rich nuggets of behavioral and cultural trends — a digital trail is easy to find and impossible to erase. A perennial favorite for researchers is the area of narcissism (and we suspect it is a favorite of narcissists as well).

From the Atlantic:

It’s not hard to see why the Internet would be a good cave for a narcissist to burrow into. Generally speaking, they prefer shallow relationships (preferably one-way, with the arrow pointing toward themselves), and need outside sources to maintain their inflated but delicate egos. So, a shallow cave that you can get into, but not out of. The Internet offers both a vast potential audience, and the possibility for anonymity, and if not anonymity, then a carefully curated veneer of self that you can attach your name to.

In 1987, the psychologists Hazel Markus and Paula Nurius claimed that a person has two selves: the “now self” and the “possible self.” The Internet allows a person to become her “possible self,” or at least present a version of herself that is closer to it.

When it comes to studies of online narcissism, and there have been many, social media dominates the discussion. One 2010 study notes that the emergence of the possible self “is most pronounced in anonymous online worlds, where accountability is lacking and the ‘true’ self can come out of hiding.” But non-anonymous social networks like Facebook, which this study was analyzing, “provide an ideal environment for the expression of the ‘hoped-for possible self,’ a subgroup of the possible-self. This state emphasizes realistic socially desirable identities an individual would like to establish given the right circumstances.”

The study, which found that people higher in narcissism were more active on Facebook, points out that you tend to encounter “identity statements” on social networks more than you would in real life. When you’re introduced to someone in person, it’s unlikely that they’ll bust out with a pithy sound bite that attempts to sum up all that they are and all they hope to be, but people do that in their Twitter bio or Facebook “About Me” section all the time.

Science has linked narcissism with high levels of activity on Facebook, Twitter, and Myspace (back in the day). But it’s important to narrow in farther and distinguish what kinds of activity the narcissists are engaging in, since hours of scrolling through your news feed, though time-wasting, isn’t exactly self-centered. And people post online for different reasons. For example, Twitter has been shown to sometimes fulfill a need to connect with others. The trouble with determining what’s normal and what’s narcissism is that both sets of people generally engage in the same online behaviors, they just have different motives for doing so.

A recent study published in Computers in Human Behavior dug into the how and why of narcissists’ social media use, looking at both college students and an older adult population. The researchers measured how often people tweeted or updated their Facebook status, but also why, asking them how much they agreed with statements like “It is important that my followers admire me,” and “It is important that my profile makes others want to be my friend.”

Overall, Twitter use was more correlated with narcissism, but lead researcher Shaun W. Davenport, chair of management and entrepreneurship at High Point University, points out that there was a key difference between generations. Older narcissists were more likely to take to Facebook, whereas younger narcissists were more active on Twitter.

“Facebook has really been around the whole time Generation Y was growing up and they see it more as a tool for communication,” Davenport says. “They use it like other generations use the telephone… For older adults who didn’t grow up using Facebook, it takes more intentional motives [to use it], like narcissism.”

Whereas on Facebook, the friend relationship is reciprocal, you don’t have to follow someone on Twitter who follows you (though it is often polite to do so, if you are the sort of person who thinks of Twitter more as an elegant tea room than, I don’t know, someplace without rules or scruples, like the Wild West or a suburban Chuck E. Cheese). Rather than friend-requesting people to get them to pay attention to you, the primary method to attract Twitter followers is just… tweeting, which partially explains the correlation between number of tweets and narcissism.

Of course, there’s something to be said for quality over quantity—just look at @OneTweetTony and his 2,000+ followers. And you’d think that, even if you gather a lot of followers to you through sheer volume of content spewed, eventually some would tire of your face’s constant presence in their feed and leave you. W. Keith Campbell, head of the University of Georgia’s psychology department and author of The Narcissism Epidemic: Living in the Age of Entitlement, says that people don’t actually make the effort to unfriend or unfollow someone that often, though.

“What you find in real life with narcissists is that they’re very good at gaining friends and becoming leaders, but eventually people see through them and stop liking them,” he says. “Online, people are very good at gaining relationships, but they don’t fall off naturally. If you’re incredibly annoying, they just ignore you, and even then it might be worth it for entertainment value. There’s a reason why, on reality TV, you find high levels of narcissism. It’s entertaining.”

Also like reality TV stars, narcissists like their own images. They show a preference for posting photos on Facebook, but Campbell clarifies that it’s the type of photos that matter—narcissists tend to choose more attractive, attention-seeking photos. In another 2011 study, narcissistic adolescents rated their own profile pictures as “more physically attractive, more fashionable, more glamorous, and more cool than their less narcissistic peers did.”

Though social media is an obvious and much-discussed bastion of narcissism, online role-playing games, the most famous being World of Warcraft, have been shown to hold some attraction as well. A study of 1,471 Korean online gamers showed narcissists to be more likely to be addicted to the games than non-narcissists. The concrete goals and rewards the games offer allow the players to gather prestige: “As you play, your character advances by gaining experience points, ‘leveling-up’ from one level to the next while collecting valuables and weapons and becoming wealthier and stronger,” the study reads. “In this social setting, excellent players receive the recognition and attention of others, and gain power and status.”

And if that power comes through violence, so much the better. Narcissism has been linked to aggression, another reason for the games’ appeal. Offline, narcissists are often bullies, though attempts to link narcissism to cyberbullying have resulted in a resounding “maybe.”

 “Narcissists typically have very high self esteem but it’s very fragile self esteem, so when someone attacks them, that self-esteem takes a dramatic nosedive,” Davenport says. “They need more wins to combat those losses…so the wins they have in that [virtual] world can boost their self-esteem.”

People can tell when you are attempting to boost your self-esteem through your online presence. A 2008 study had participants rate Facebook pages (which had already been painstakingly coded by researchers) for 37 different personality traits. The Facebook page’s owners had previously taken the Narcissistic Personality Inventory, and when it was there, the raters picked up on it.

Campbell, one of the researchers on that study, tempers this now: “You can detect it, but it’s not perfect,” he says. “It’s sort of like shaving in your car window, you can do it, but it’s not perfect.”

Part of the reason why may be that, as we see more self-promoting behavior online, whether it’s coming from narcissists or not, it becomes more accepted, and thus, widespread.

Though, according to Davenport, the accusation that Generation Y, or—my least favorite term—Millennials, is the most narcissistic generation yet has been backed up by data, he wonders if it’s less a generational problem than just a general shift in our society.

“Some of it is that you see the behavior more on Facebook and Twitter, and some of it is that our society is becoming more accepting of narcissistic behavior,” Davenport says. “I do wonder if at some point the pendulum will swing back a little bit. Because you’re starting to see more published about ‘Is Gen Y more narcissistic?’, ‘What does this mean for the workplace?’, etc. All those questions are starting to become common conversation.”

When asked if our society is moving in a more narcissistic direction, Campbell replied: “President Obama took a selfie at Nelson Mandela’s funeral. Selfie was the word of the year in 2013. So yeah, this stuff becomes far more accepted.”

Read the entire article here.

Images courtesy of Google Search and respective “selfie” owners.

Playing Music, Playing Ads – Same Difference

pandora

The internet music radio service Pandora knows a lot about you and another 200 million or so registered members. If you use the service regularly it comes to recognize your musical likes and dislikes. In this way Pandora learns to deliver more music programming that it thinks you will like, and it works rather well.

But the story does not end there, since Pandora is not just fun, it’s a business. In its quest to monetize you even more effectively, Pandora is seeking to pair personalized ads with your specific musical tastes. So, beware forthcoming ads tailored to your music preferences — metalheads, you have been warned!

From the NYT:

Pandora, the Internet radio service, is plying a new tune.

After years of customizing playlists to individual listeners by analyzing components of the songs they like, then playing them tracks with similar traits, the company has started data-mining users’ musical tastes for clues about the kinds of ads most likely to engage them.

“It’s becoming quite apparent to us that the world of playing the perfect music to people and the world of playing perfect advertising to them are strikingly similar,” says Eric Bieschke, Pandora’s chief scientist.

Consider someone who’s in an adventurous musical mood on a weekend afternoon, he says. One hypothesis is that this listener may be more likely to click on an ad for, say, adventure travel in Costa Rica than a person in an office on a Monday morning listening to familiar tunes. And that person at the office, Mr. Bieschke says, may be more inclined to respond to a more conservative travel ad for a restaurant-and-museum tour of Paris. Pandora is now testing hypotheses like these by, among other methods, measuring the frequency of ad clicks. “There are a lot of interesting things we can do on the music side that bridge the way to advertising,” says Mr. Bieschke, who led the development of Pandora’s music recommendation engine.

A few services, like Pandora, Amazon and Netflix, were early in developing algorithms to recommend products based on an individual customer’s preferences or those of people with similar profiles. Now, some companies are trying to differentiate themselves by using their proprietary data sets to make deeper inferences about individuals and try to influence their behavior.

This online ad customization technique is known as behavioral targeting, but Pandora adds a music layer. Pandora has collected song preference and other details about more than 200 million registered users, and those people have expressed their song likes and dislikes by pressing the site’s thumbs-up and thumbs-down buttons more than 35 billion times. Because Pandora needs to understand the type of device a listener is using in order to deliver songs in a playable format, its system also knows whether people are tuning in from their cars, from iPhones or Android phones or from desktops.

So it seems only logical for the company to start seeking correlations between users’ listening habits and the kinds of ads they might be most receptive to.
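
Pandora has not said how its ad models actually work, but the experiment Bieschke describes, measuring which ads get clicked in which listening contexts and keeping the pairings that win, is simple to sketch. Everything below (the data layout, the field names, the numbers) is invented for illustration.

```python
# Sketch of the hypothesis test described above: compare ad click-through rates
# across listening contexts. Data layout and numbers are invented; Pandora's
# actual pipeline is not public.
from collections import defaultdict

# (listening_context, ad_campaign, clicked) tuples from a hypothetical impression log.
impressions = [
    ("adventurous_weekend", "costa_rica_adventure", True),
    ("adventurous_weekend", "costa_rica_adventure", False),
    ("familiar_office_monday", "costa_rica_adventure", False),
    ("familiar_office_monday", "paris_museum_tour", True),
    # ... millions more in practice
]

stats = defaultdict(lambda: [0, 0])              # (clicks, impressions) per pair
for context, ad, clicked in impressions:
    stats[(context, ad)][0] += int(clicked)
    stats[(context, ad)][1] += 1

for (context, ad), (clicks, total) in sorted(stats.items()):
    print(f"{context:>24} x {ad:<22} CTR = {clicks / total:.2f}")
```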

“The advantage of using our own in-house data is that we have it down to the individual level, to the specific person who is using Pandora,” Mr. Bieschke says. “We take all of these signals and look at correlations that lead us to come up with magical insights about somebody.”

People’s music, movie or book choices may reveal much more than commercial likes and dislikes. Certain product or cultural preferences can give glimpses into consumers’ political beliefs, religious faith, sexual orientation or other intimate issues. That means many organizations now are not merely collecting details about where we go and what we buy, but are also making inferences about who we are.

“I would guess, looking at music choices, you could probably predict with high accuracy a person’s worldview,” says Vitaly Shmatikov, an associate professor of computer science at the University of Texas at Austin, where he studies computer security and privacy. “You might be able to predict people’s stance on issues like gun control or the environment because there are bands and music tracks that do express strong positions.”

Pandora, for one, has a political ad-targeting system that has been used in presidential and congressional campaigns, and even a few for governor. It can deconstruct users’ song preferences to predict their political party of choice. (The company does not analyze listeners’ attitudes to individual political issues like abortion or fracking.)

During the next federal election cycle, for instance, Pandora users tuning into country music acts, stand-up comedians or Christian bands might hear or see ads for Republican candidates for Congress. Others listening to hip-hop tunes, or to classical acts like the Berlin Philharmonic, might hear ads for Democrats.

Because Pandora users provide their ZIP codes when they register, Mr. Bieschke says, “we can play ads only for the specific districts political campaigns want to target,” and “we can use their music to predict users’ political affiliations.” But he cautioned that the predictions about users’ political parties are machine-generated forecasts for groups of listeners with certain similar characteristics and may not be correct for any particular listener.

Shazam, the song recognition app with 80 million unique monthly users, also plays ads based on users’ preferred music genres. “Hypothetically, a Ford F-150 pickup truck might over-index to country music listeners,” says Kevin McGurn, Shazam’s chief revenue officer. For those who prefer U2 and Coldplay, a demographic that skews to middle-age people with relatively high incomes, he says, the app might play ads for luxury cars like Jaguars.

Read the entire article here.

Image courtesy of Pandora.

What About Telecleaning?

suitable-technologies

Telepresence devices and systems made some ripples in the vast oceans of new technology at the recent CES (Consumer Electronics Show) in Las Vegas. Telepresence allows anyone armed with an internet-connected camera to beam themselves elsewhere with the aid of a remote controlled screen on wheels. Some clinics and workplaces have experimented with the technology, allowing medical staff and workers to be virtually present in one location while being physically remote. Now, a handful of innovators are experimenting with telepresence for the home market.

So, sick of being around the kids, or need to see grandma but can’t get away from the office? Or, even better, buy one for your office so you can replace yourself with a robot, work from home and never visit the workplace again. Well, a telepresence robot for a mere $1,000 may be a very sound investment.

Sounds great, but where is the robot that will tidy, clean, dust, cook, repair, mow, launder…

From Technology Review:

When Scott Hassan went to Las Vegas for the International Consumer Electronics Show last week, he was still able to get the kids up in the morning and help them make breakfast at his California home. Hassan used a remote-controlled screen on wheels to spend time with his family, and today his company, Suitable Technologies, started taking orders for Beam+, a version of the same telepresence technology aimed at home users. This summer, it will also be available via Amazon and other retailers.

Hassan thinks the Beam+, essentially a 10-inch screen and camera mounted on wheels, will be popular with other businesspeople who want to spend more time with their kids, or those with aging parents they’d like to check up on more often.

Hassan says a person “visiting” aging parents this way could check up on them less obtrusively than via phone, for example by walking around to look for signs they’d taken their medication rather than bluntly asking, or watching to check that they take their pills with their meal. “For people with dementia or Alzheimer’s, I think that being able to see and hear and walk around with a familiar face is a lot better than just a phone call,” he says. “You could also just Beam in and watch Jeopardy! with your grandmother on TV.”

The Beam+ is designed so that once installed in a home, anyone with the login credentials can bring it to life and start moving around. The operator’s interface shows the view from a camera over the screen, as well as a smaller view looking down toward the unit’s base to aid maneuvering. A user drives it by moving a mouse over their view and clicking where they want to go.

The first 1,000 units of the Beam+ can be preordered for $995, with later units expected to cost $1,995. Both prices include the charging dock to which the device must return every two hours. The exterior design of the Beam+ was created by Fred Bould, who designed the Nest thermostat, among other gadgets.

The Beam+ is a cheaper, smaller, and restyled version of the company’s first product, known as the Beam, which is aimed at corporate users (see “Beam Yourself to Work in a Remote-Controlled Body”).

Intel, IBM, and Square all use Beam’s original product to give employees an option somewhere between a conventional video chat and an in-person visit when working with colleagues in distant offices. Hassan says interest has come from more than just technology companies, though. In Vegas he sold two Beam devices to a restaurant owner planning to use them as street barkers; meanwhile, a real-estate agency in California’s Lake Tahoe has started using them to show people around luxury condos.

Several startups and large companies, such as iRobot, which created the Roomba robotic vacuum cleaner, have launched mobile telepresence devices in recent years. However, despite it being clear that many people wish they could travel more easily in their professional and personal lives, the devices have sometimes been clunky (see “The New, More Awkward You”) and remain relatively expensive.

Read the entire article here.

Image: Beam+. Courtesy of Suitable Technologies, inc.

Printing the Perfect Pasta

[tube]x6WzyUgbT5A#t[/tube]

Step 1: imagine a new pasta shape and design it in three dimensions on your iPad. Step 2: fill a printer cartridge with pasta dough. Step 3: put the cartridge in a 3D printer and download your print design. Step 4: print your custom-designed pasta. Step 5: cook, eat and enjoy!

In essence that’s what Barilla — the Italian food giant — is up to in its food research labs in conjunction with Dutch tech company TNO.

3D printers aimed at the home market are also on display at this week’s CES (Consumer Electronics Show), including several that print candy and desserts. Yum, but Mamma would certainly not approve.

From the Guardian:

Once, not so very long ago, the pasta of Italian dreams was kneaded, rolled and shaped by hand in the kitchen. Now, though, the world’s leading pasta producer is perfecting a very different kind of technique – using 3D printers.

The Parma-based food giant Barilla, a fourth-generation Italian family business, said on Thursday it was working with TNO, a Dutch organisation specialising in applied scientific research, on a project using the same cutting-edge technology that has already brought startling developments in manufacturing and biotech and may now be poised to make similar waves in the food sector.

Kjeld van Bommel, project leader at TNO, said one of the potential applications of the technology could be to enable customers to present restaurants with their pasta shape desires stored on a USB stick.

“Suppose it’s your 25th wedding anniversary,” Van Bommel was quoted as telling the Dutch newspaper Trouw. “You go out for dinner and surprise your wife with pasta in the shape of a rose.”

He said speed was a big focus of the Barilla project: they want to be able to print 15-20 pieces of pasta in under two minutes. Progress had already been made, he said, and it was already possible to print 10 times as quickly as when the technology first arrived.

According to reports, Barilla aims to offer customers cartridges of dough that they can insert into a 3D printer to create their own pasta designs.

But the company declined to give further details, dismissing the claims as “speculation”. It said that although the project had been going on for around two years, it was still “in a preliminary phase”.

When contacted by the Guardian, TNO said media interest in the project had spiked in recent days, and it declined to make any further comment on the nature of the project.

The technology of 3D printing is advancing in myriad sectors around the world. Last year a California-based company made the world’s first metal 3D-printed handgun, capable of accurately firing 50 rounds without breaking, and scientists at Cornell University produced a prosthetic human ear.

At the Consumer Electronics Show in Las Vegas this week, the US company 3D Systems unveiled a new range of food-creating printers specialising in sugar-based confectionary and chocolate edibles. Last year Natural Machines, a Spanish startup, revealed its own prototype, the Foodini, which it said combined “technology, food, art and design” and was capable of making edibles ranging from chocolate to pasta.

Read the entire article here.

Video courtesy of TNO.

Zynga: Out to Pasture or Buying the Farm?

FarmVille_logo

By one measure, Zynga’s FarmVille on Facebook (and MSN) is extremely successful. The measure being dedicated and addicted players numbering in the millions each day. By another measure, Zynga isn’t faring very well at all, and that’s making money. Despite a valuation of over $3 billion, the company is struggling to find a way to convert virtual game currency into real dollar spend.

How the internet ecosystem manages to reward the lack of real and sustainable value creation is astonishing to those on the outside — but good for those on the inside. Would that all companies could bask in the glory of venture capital and IPO bubbles on such flimsy financial foundations. Quack!

Zynga has been on company deathwatch for a while. Read on to see some of its peers that seem to be on life support.

From ars technica:

HTC

To say that 2013 was a bad year for Taiwanese handset maker HTC is probably something of an understatement. The year was capped off by the indictment of six HTC employees on a variety of charges such as taking kickbacks, falsifying expenses, and leaking company trade secrets—including elements of HTC’s new interface for Android phones. Thomas Chien, the former vice president of design for HTC, was reportedly taking the information to a group in Beijing that was planning to form a new company, according to The Wall Street Journal.

On top of that, despite positive reviews for its flagship HTC One line, the company has been struggling to sell the phone. Blame it on bad marketing, bad execution, or just bad management, but HTC has been beaten down badly by Samsung.

The investigation of Chien started in August, but it was hardly the worst news HTC had last year as the company’s executive ranks thinned and losses mounted. There was reshuffling of deck chairs at the top of the company as CEO Peter Chou handed off chunks of his operational duties to co-founder and chairwoman Cher Wang—giving her control over marketing, sales, and the company’s supply chain in the wake of a parts shortage that hampered the launch of the HTC One. The Wall Street Journal reported that HTC couldn’t get camera parts for the One because suppliers believed “it is no longer a tier one customer,” according to an unnamed executive.

That’s a pretty dramatic fall from HTC’s peak, when the company vaulted from contract manufacturer to major mobile player. Way back in the heady days of 2011, HTC was second only to Apple in US cell phone market share, and it held 9.3 percent of the global market. Now it’s in fourth place in the US, with just 6.7 percent market share based on comScore numbers—behind Google’s Motorola and just ahead of LG Electronics by a hair. Its sales in the last quarter of 2013 were down by 40 percent from last year, and revenues for 2013 were down by 28.6 percent from 2012. With a patent infringement suit from Nokia over chips in the HTC One and One Mini still hanging over its head in the United Kingdom, the company could face a ban on selling some of its phones there.

Executives insist that HTC won’t be sold, especially to a Chinese buyer—the politics of such a deal being toxic to a Taiwanese company. But ironically, the Chinese market is perhaps HTC’s best hope in the long term—the company does more than a third of its business there. The company’s best bet may be going back to manufacturing phones with someone else’s name on the faceplate and leaving the marketing to someone else.

AMD

Advanced Micro Devices is still on deathwatch. Yes, AMD reported a quarterly profit of $48 million in September thanks to a gift from the game console gods (and IBM Power’s fall from grace). But that was hardly enough to jolt the chip company out of what has been a really bad year—and AMD is trying to manage expectations for the results for the final quarter of 2013.

AMD is caught between a rock and a hard place—or more specifically, between Intel and ARM. On the bright side, it probably has nothing to fear from ARM in the low-cost Windows device market considering how horrifically Windows RT fared in 2013. AMD actually gained in market share in the x86 space thanks to the Xbox One and PS4—both of which replace non-x86 consoles. And AMD still holds a substantial chunk of the graphics processor market—and all those potential sales in Bitcoin miners to go with it.

But in the PC space, AMD’s market share declined to a mere 15.8 percent (of what is a much smaller pie than it used to be). And in a future driven increasingly by mobile and low-power devices, AMD hasn’t been able to make any gains with the two low-power chips it introduced in 2013—Kabini and Temash. Those chips were supposed to finally give AMD a competitive footing with Intel on low-cost PCs and tablets, but they ended up being middling in comparison.

All that adds up to 2014 being a very important year for AMD—one that could end with AMD essentially being a graphics and specialty processor chip designer. The company has already divorced itself from its own fabrication capability and slashed its workforce, so there isn’t much more to cut but bone if the markets demand better margins.

Read the entire article here.

Image: FarmVille logo. Courtesy of Wikipedia.

An Ode to the Sinclair ZX81

Sinclair-ZX81

What do the PDP-11, Commodore PET, Apple II and Sinclair’s ZX81 have in common? And, more importantly, for anyone under the age of 35, what on earth are they? Well, these are, respectively, the first time-sharing minicomputer, the first personal computer, the first Apple computer, and the first home computer programmed by theDiagonal’s friendly editor back in the pioneering days of computation.

The article below on technological nostalgia pushed the recall button, bringing back vivid memories of dot matrix printers, FORTRAN, large floppy diskettes (5 1/4 inch), reel-to-reel tape storage, and the 1 KB of programmable memory on the ZX81. In fact, despite the tremendous and now laughable limitations of the ZX81 — one had to save and load programs via a tape cassette — programming the device at home was a true revelation.

Some would go so far as to say that the first computer is very much like the first kiss or the first date. Well, not so. But fun nonetheless, and responsible for much in the way of future career paths.

From ars technica:

Being a bunch of technology journalists who make our living on the Web, we at Ars all have a fairly intimate relationship with computers dating back to our childhood—even if for some of us, that childhood is a bit more distant than others. And our technological careers and interests are at least partially shaped by the devices we started with.

So when Cyborgology’s David Banks recently offered up an autobiography of himself based on the computing devices he grew up with, it started a conversation among us about our first computing experiences. And being the most (chronologically) senior of Ars’ senior editors, the lot fell to me to pull these recollections together—since, in theory, I have the longest view of the bunch.

Considering the first computer I used was a Digital Equipment Corp. PDP-10, that theory is probably correct.

The DEC PDP-10 and DECWriter II Terminal

In 1979, I was a high school sophomore at Longwood High School in Middle Island, New York, just a short distance from the Department of Energy’s Brookhaven National Labs. And it was at Longwood that I got the first opportunity to learn how to code, thanks to a time-share connection we had to a DEC PDP-10 at the State University of New York at Stony Brook.

The computer lab at Longwood, which was run by the math department and overseen by my teacher Mr. Dennis Schultz, connected over a leased line to SUNY. It had, if I recall correctly, six LA36 DECWriter II terminals connected back to the mainframe—essentially dot-matrix printers with keyboards on them. Turn one on while the mainframe was down, and it would print over and over:

PDP-10 NOT AVAILABLE

Time at the terminals was a precious resource, so we were encouraged to write out all of our code by hand first on graph paper and then take a stack of cards over to the keypunch. This process did wonders for my handwriting. I spent an inordinate amount of time just writing BASIC and FORTRAN code in block letters on graph-paper notebooks.

One of my first fully original programs was an aerial combat program that used three-dimensional arrays to track the movement of the player’s and the programmed opponent’s airplanes as each maneuvered to get the other in its sights. Since the program output to pin-fed paper, that could be a tedious process.

At a certain point, Mr. Schultz, who had been more than tolerant of my enthusiasm, had to crack down—my code was using up more than half the school’s allotted system storage. I can’t imagine how much worse it would have been if we had video terminals.

Actually, I can imagine, because in my senior year I was introduced to the Apple II, video, and sound. The vastness of 360 kilobytes of storage and the ability to code at the keyboard were such a huge luxury after the spartan world of punch cards that I couldn’t contain myself. I soon coded a student parking pass database for my school—while also coding a Dungeons & Dragons character tracking system, complete with combat resolution and hit point tracking.

—Sean Gallagher

A printer terminal and an acoustic coupler

I never saw the computer that gave me my first computing experience, and I have little idea what it actually was. In fact, if I ever knew where it was located, I’ve since forgotten. But I do distinctly recall the gateway to it: a locked door to the left of the teacher’s desk in my high school biology lab. Fortunately, the guardian—commonly known as Mr. Dobrow—was excited about introducing some of his students to computers, and he let a number of us spend our lunch hours experimenting with the system.

And what a system it was. Behind the physical door was another gateway, this one electronic. Since the computer was located in another town, you had to dial in by modem. The modems of the day were something different entirely from what you may recall from AOL’s dialup heyday. Rather than plugging straight in to your phone line, you dialed in manually—on a rotary phone, no less—then dropped the speaker and mic carefully into two rubber receptacles spaced to accept the standard-issue hardware of the day. (And it was standard issue; AT&T was still a monopoly at the time.)

That modem was hooked into a sort of combination of line printer and keyboard. When you were entering text, the setup acted just like a typewriter. But as soon as you hit the return key, it transmitted, and the mysterious machine at the other end responded, sending characters back that were dutifully printed out by the same machine. This meant that an infinite loop would unleash a spray of paper, and it had to be terminated by hanging up the phone.

It took us a while to get to infinite loops, though. Mr. Dobrow started us off on small simulations of things like stock markets and malaria control. Eventually, we found a way to list all the programs available and discovered a Star Trek game. Photon torpedoes were deadly, but the phasers never seemed to work, so before too long one guy had the bright idea of trying to hack the game (although that wasn’t the term that we used). We were off.

John Timmer

Read the entire article here.

Image: Sinclair ZX81. Courtesy of Wikipedia.

A Window that Vacuums Sound

We are all familiar with double-glazed windows that reduce transmission of sound by way of a partial vacuum between the two or more panes of glass. However, open a double-glazed window to let in some fresh air and the benefit of the sound reduction is gone. So, what if you could invent a window that lets in air but cuts out the noise pollution? Sounds impossible. But not to materials scientists Sang-Hoon Kim and Seong-Hyun Lee from South Korea.

From Technology Review:

Noise pollution is one of the bugbears of modern life. The sound of machinery, engines, neighbours and the like can seriously affect our quality of life and that of the other creatures that share this planet.

But insulating against sound is a difficult and expensive business. Soundproofing generally works on the principle of transferring sound from the air into another medium which absorbs and attenuates it.

So the notion of creating a barrier that absorbs sound while allowing the free passage of air seems, at first thought, entirely impossible. But that’s exactly what Sang-Hoon Kim at the Mokpo National Maritime University in South Korea and Seong-Hyun Lee at the Korea Institute of Machinery and Materials have achieved.

These guys have come up with a way to separate sound from the air in which it travels and then to attenuate it. This has allowed them to build a window that allows air to flow but not sound.

The design is relatively simple and relies on two exotic acoustic phenomena. The first is to create a material with a negative bulk modulus.

A material’s bulk modulus is essentially its resistance to compression, and this is an important factor in determining the speed at which sound moves through it. A material with a negative bulk modulus exponentially attenuates any sound passing through it.
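
[A quick aside on the physics, not drawn from the paper itself: treating the chamber as an effective acoustic medium with bulk modulus $K$ and density $\rho$, the speed of sound is

$$c = \sqrt{K/\rho},$$

so a plane wave $p(x,t) = p_0\, e^{i(kx - \omega t)}$ carries wavenumber $k = \omega / c$. If $K < 0$ then $c$, and hence $k$, is imaginary; writing $k = i\kappa$ with $\kappa = \omega \sqrt{\rho / |K|}$ gives

$$p(x,t) = p_0\, e^{-\kappa x}\, e^{-i\omega t},$$

an evanescent wave that dies off exponentially with distance instead of propagating.]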

However, it’s hard to imagine a solid material having a negative bulk modulus, which is where a bit of clever design comes in handy.

Kim and Lee’s idea is to design a sound resonance chamber in which the resonant forces oppose any compression. With careful design, this leads to a negative bulk modulus for a certain range of frequencies.

Their resonance chamber is actually very simple—it consists of two parallel plates of transparent acrylic plastic about 150 millimetres square and separated by 40 millimetres, rather like a section of double-glazing about the size of a paperback book.

This chamber is designed to ensure that any sound resonating inside it acts against the way the same sound compresses the chamber. When this happens the bulk modulus of the entire chamber is negative.

An important factor in this is how efficiently the sound can get into the chamber, and here Kim and Lee have another trick. To maximise this efficiency, they drill a 50 millimetre hole through each piece of acrylic. This acts as a diffraction element, causing any sound that hits the chamber to diffract strongly into it.

The result is a double-glazed window with a negative bulk modulus that strongly attenuates the sound hitting it.

Kim and Lee use their double-glazing unit as a building block to create larger windows. In tests with a 3x4x3 “wall” of building blocks, they say their window reduces sound levels by 20-35 decibels over a sound range of 700 Hz to 2,200 Hz. That’s a significant reduction.
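
[For a sense of what a 20-35 decibel cut means in linear terms, a small aside in Python; decibels are logarithmic, and the conversion below is standard acoustics rather than anything taken from the paper.]

for db in (20, 35):
    pressure_ratio = 10 ** (db / 20)   # sound-pressure amplitude falls by this factor
    intensity_ratio = 10 ** (db / 10)  # acoustic intensity (power) falls by this factor
    print(f"{db} dB: pressure down ~{pressure_ratio:.0f}x, intensity down ~{intensity_ratio:.0f}x")

A 35 dB reduction works out to roughly a 56-fold drop in sound pressure and more than a 3,000-fold drop in intensity, which is why the authors describe it as significant.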

And by using extra building blocks with smaller holes, they can extend this range to cover lower frequencies.

What’s handy about these windows is that holes through them also allow the free flow of air, giving ample ventilation as well.

Read the entire article here.

Under the Covers at Uber

uber-image

A mere four years ago, Uber was being used mostly by Silicon Valley engineers to reserve local limo rides. Now, the Uber app is in the hands of millions of people and being used to book car transportation across sixty cities on six continents. Google recently invested $258 million in the company, which gives Uber a value of around $3.5 billion. Those who have used the service — drivers and passengers alike — swear by it; the service is convenient and the app is simple and engaging. But that doesn’t seem to justify the enormous valuation. So, what’s going on?

From Wired:

When Uber cofounder and CEO Travis Kalanick was in sixth grade, he learned to code on a Commodore 64. His favorite things to program were videogames. But in the mid-’80s, getting the machine to do what he wanted still felt a lot like manual labor. “Back then you would have to do the graphics pixel by pixel,” Kalanick says. “But it was cool because you were like, oh my God, it’s moving across the screen! My monster is moving across the screen!” These days, Kalanick, 37, has lost none of his fascination with watching pixels on the move.

In Uber’s San Francisco headquarters, a software tool called God View shows all the vehicles on the Uber system moving at once. On a laptop web browser, tiny cars on a map show every Uber driver currently on the city’s streets. Tiny eyeballs on the same map show the location of every customer currently looking at the Uber app on their smartphone. In a way, the company anointed by Silicon Valley’s elite as the best hope for transforming global transportation couldn’t have a simpler task: It just has to bring those cars and those eyeballs together — the faster and cheaper, the better.

“Uber should feel magical to the customer,” Kalanick says one morning in November. “They just push the button and the car comes. But there’s a lot going on under the hood to make that happen.”
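
[A toy sketch, in Python, of the matching problem hiding under that hood. This is written for illustration only, not Uber's dispatch code; the driver IDs and coordinates are invented.]

import math

def haversine_km(a, b):
    # Great-circle distance between two (lat, lon) points, in kilometres.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest_driver(rider, drivers):
    # The heart of the join between "eyeballs" and cars: the closest available driver wins.
    return min(drivers, key=lambda d: haversine_km(rider, d["pos"]))

drivers = [
    {"id": "car_17", "pos": (37.7749, -122.4194)},  # invented positions around San Francisco
    {"id": "car_42", "pos": (37.7849, -122.4094)},
]
print(nearest_driver((37.7800, -122.4100), drivers)["id"])  # car_42, the nearer of the two

Real dispatch has to weigh traffic, estimated pickup times, driver acceptance and pricing, but bringing cars and eyeballs together starts with a proximity search along these lines.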

A little less than four years ago, when Uber was barely more than a private luxury car service for Silicon Valley’s elite techies, Kalanick sat watching the cars crisscrossing San Francisco on God View and had a Matrix-y moment when he “started seeing the math.” He was going to make the monster move — not just across the screen but across cities around the globe. Since then, Uber has expanded to some 60 cities on six continents and grown to at least 400 employees. Millions of people have used Uber to get a ride, and revenue has increased at a rate of nearly 20 percent every month over the past year.

The company’s speedy ascent has taken place in parallel with a surge of interest in the so-called sharing economy — using technology to connect consumers with goods and services that would otherwise go unused. Kalanick had the vision to see potential profit in the empty seats of limos and taxis sitting idle as drivers wait for customers to call.

But Kalanick doesn’t put on the airs of a visionary. In business he’s a brawler. Reaching Uber’s goals has meant digging in against the established bureaucracy in many cities, where giving rides for money is heavily regulated. Uber has won enough of those fights to threaten the market share of the entrenched players. It not only offers a more efficient way to hail a ride but gives drivers a whole new way to see where demand is bubbling up. In the process, Uber seems capable of opening up sections of cities that taxis and car services never bothered with before.

In an Uber-fied future, fewer people own cars, but everybody has access to them.

In San Francisco, Uber has become its own noun — you “get an Uber.” But to make it a verb — to get to the point where everyone Ubers the same way they Google — the company must outperform on transportation the same way Google does on search.

No less than Google itself believes Uber has this potential. In a massive funding round in August led by the search giant’s venture capital arm, Uber received $258 million. The investment reportedly valued Uber at around $3.5 billion and pushed the company to the forefront of speculation about the next big tech IPO — and Kalanick as the next great tech leader.

The deal set Silicon Valley buzzing about what else Uber could become. A delivery service powered by Google’s self-driving cars? The new on-the-ground army for ferrying all things Amazon? Jeff Bezos also is an Uber investor, and Kalanick cites him as an entrepreneurial inspiration. “Amazon was just books and then some CDs,” Kalanick says. “And then they’re like, you know what, let’s do frickin’ ladders!” Then came the Kindle and Amazon Web Services — examples, Kalanick says, of how an entrepreneur’s “creative pragmatism” can defy expectations. He clearly enjoys daring the world to think of Uber as merely another way to get a ride.

“We feel like we’re still realizing what the potential is,” he says. “We don’t know yet where that stops.”

From the back of an Uber-summoned Mercedes GL450 SUV, Kalanick banters with the driver about which make and model will replace the discontinued Lincoln Town Car as the default limo of choice.

Mercedes S-Class? Too expensive, Kalanick says. Cadillac XTS? Too small.

So what is it?

“OK, I’m glad you asked,” Kalanick says. “This is going to blow you away, dude. Are you ready? Have you seen the 2013 Ford Explorer?” Spacious, like a Lexus crossover, but way cheaper.

As Uber becomes a dominant presence in urban transportation, it’s easy to imagine the company playing a role in making this prophecy self-fulfilling. It’s just one more sign of how far Uber has come since Kalanick helped create the company in 2009. In the beginning, it was just a way for him and his cofounder, StumbleUpon creator Garrett Camp, and their friends to get around in style.

They could certainly afford it. At age 21, Kalanick, born and raised in Los Angeles, had started a Napster-like peer-to-peer file-sharing search engine called Scour that got him sued for a quarter-trillion dollars by major media companies. Scour filed for bankruptcy, but Kalanick cofounded Red Swoosh to serve digital media over the Internet for the same companies that had sued him. Akamai bought the company in 2007 in a stock deal worth $19 million.

By the time he reached his thirties, Kalanick was a seasoned veteran in the startup trenches. But part of him wondered if he still had the drive to build another company. His breakthrough came when he was watching, of all things, a Woody Allen movie. The film was Vicky Cristina Barcelona, which Allen made in 2008, when he was in his seventies. “I’m like, that dude is old! And he is still bringing it! He’s still making really beautiful art. And I’m like, all right, I’ve got a chance, man. I can do it too.”

Kalanick charged into Uber and quickly collided with the muscular resistance of the taxi and limo industry. It wasn’t long before San Francisco’s transportation agency sent the company a cease-and-desist letter, calling Uber an unlicensed taxi service. Kalanick and Uber did neither, arguing vehemently that it merely made the software that connected drivers and riders. The company kept offering rides and building its stature among tech types—a constituency city politicians have been loath to alienate—as the cool way to get around.

Uber has since faced the wrath of government and industry in other cities, notably New York, Chicago, Boston, and Washington, DC.

One councilmember opposed to Uber in the nation’s capital was self-described friend of the taxi industry Marion Barry (yes, that Marion Barry). Kalanick, in DC to lobby on Uber’s behalf, told The Washington Post he had an offer for the former mayor: “I will personally chauffeur him myself in his silver Jaguar to work every day of the week, if he can just make this happen.” Though that ride never happened, the council ultimately passed a legal framework that Uber called “an innovative model for city transportation legislation across the country.”

Though Kalanick clearly relishes a fight, he lights up more when talking about Uber as an engineering problem. To fulfill its promise—a ride within five minutes of the tap of a smartphone button—Uber must constantly optimize the algorithms that govern, among other things, how many of its cars are on the road, where they go, and how much a ride costs. While Uber offers standard local rates for its various options, times of peak demand send prices up, which Uber calls surge pricing. Some critics call it price-gouging, but Kalanick says the economics are far less insidious. To meet increased demand, drivers need extra incentive to get out on the road. Since they aren’t employees, the marketplace has to motivate them. “Most things are dynamically priced,” Kalanick points out, from airline tickets to happy hour cocktails.
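
[A deliberately simplified illustration of the dynamic-pricing idea, not Uber's actual formula: a multiplier that climbs as open requests outstrip available drivers, floored at 1x and capped so it cannot run away. The numbers are invented.]

def surge_multiplier(open_requests, available_drivers, cap=3.0):
    # Toy rule: price scales with the demand/supply ratio, never below 1.0, never above the cap.
    if available_drivers == 0:
        return cap
    ratio = open_requests / available_drivers
    return min(cap, max(1.0, ratio))

print(surge_multiplier(30, 20))  # 1.5 -- demand is 1.5x supply, so riders pay 1.5x
print(surge_multiplier(10, 40))  # 1.0 -- supply is ample, standard rates apply

The point of a multiplier like this, on Kalanick's telling, is less about revenue than about coaxing more drivers onto the road when the ratio spikes.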

Kalanick employs a data-science team of PhDs from fields like nuclear physics, astrophysics, and computational biology to grapple with the number of variables involved in keeping Uber reliable. They stay busy perfecting algorithms that are dependable and flexible enough to be ported to hundreds of cities worldwide. When we met, Uber had just gone live in Bogotá, Colombia, as well as Shanghai, Dubai, and Bangalore.

And it’s no longer just black cars and yellow cabs. A newer option, UberX, offers lower-priced rides from drivers piloting their personal vehicles. According to Uber, only certain late-model cars are allowed, and drivers undergo the same background screening as others in the service. In an Uber-fied version of the future, far fewer people may own cars but everybody would have access to them. “You know, I hadn’t driven for a year, and then I drove over the weekend,” Kalanick says. “I had to jump-start my car to get going. It was a little awkward. So I think that’s a sign.”

Back at Uber headquarters, burly drivers crowd the lobby while nearby, coders sit elbow to elbow. Like other San Francisco startups on the cusp of something bigger, Uber is preparing to move to a larger space. Its new digs will be in the same building as Square, the mobile payments company led by Twitter mastermind Jack Dorsey. Twitter’s offices are across the street. The symbolism is hard to miss: Uber is joining the coterie of companies that define San Francisco’s latest tech boom.

Still, part of that image depends on Uber’s outsize potential to expand what it does. The logistical numbers it crunches to make it easier for people to get around would seem a natural fit for a transition into a delivery service. Uber coyly fuels that perception with publicity stunts like ferrying ice cream and barbecue to customers through its app. It’s easy to imagine such promotions quietly doubling as proofs of concept. News of Google’s massive investment prompted visions of a push-button delivery service powered by Google’s self-driving cars.

If Uber expands into delivery, its competition will suddenly include behemoths like Amazon, eBay, and Walmart.

Kalanick acknowledges that the most recent round of investment is intended to fund Uber’s growth, but that’s as far as he’ll go. “In a lot of ways, it’s not the money that allows you to do new things. It’s the growth and the ability to find things that people want and to use your creativity to target those,” he says. “There are a whole hell of a lot of other things that we can do and intend on doing.”

But the calculus of delivery may not even be the hardest part. If Uber were to expand into delivery, its competition—for now other ride-sharing startups such as Lyft, Sidecar, and Hailo—would include Amazon, eBay, and Walmart too.

One way to skirt rivalry with such giants is to offer itself as the back-end technology that can power same-day online retail. In early fall, Google launched its Shopping Express service in San Francisco. The program lets customers shop online at local stores through a Google-powered app; Google sends a courier with their deliveries the same day.

David Krane, the Google Ventures partner who led the investment deal, says there’s nothing happening between Uber and Shopping Express. He also says self-driving delivery vehicles are nowhere near ready to be looked at seriously as part of Uber. “Those meetings will happen when the technology is ready for such discussion,” he says. “That is many moons away.”

Read the entire article here.

Image courtesy of Uber.

Asimov Fifty Years On

1957-driverless-car

In 1964, Isaac Asimov wrote an essay for the New York Times entitled Visit to the World’s Fair of 2014. The essay was a free-wheeling survey of things to come, viewed through the lens of New York’s World’s Fair of 1964. It shows that even a grand master of science fiction cannot predict the future — he got some things quite right and other things rather wrong. Some examples are below, along with a link to his full essay.

That said, what has captured recent attention is Asimov’s thinking on the complex and evolving relationship between humans and technology, and the challenges of environmental stewardship in an increasingly over-populated and resource-starved world.

So, while Asimov was certainly not a teller of fortunes, he had insights that many, even today, still lack.

Read the entire Isaac Asimov essay here.

What Asimov got right:

“Communications will become sight-sound and you will see as well as hear the person you telephone.”

“As for television, wall screens will have replaced the ordinary set…”

“Large solar-power stations will also be in operation in a number of desert and semi-desert areas…”

“Windows… will be polarized to block out the harsh sunlight. The degree of opacity of the glass may even be made to alter automatically in accordance with the intensity of the light falling upon it.”

What Asimov got wrong:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long-lived batteries running on radioisotopes.”

“…cars will be capable of crossing water on their jets…”

“For short-range travel, moving sidewalks (with benches on either side, standing room in the center) will be making their appearance in downtown sections.”

From the Atlantic:

In August of 1964, just more than 50 years ago, author Isaac Asimov wrote a piece in The New York Times, pegged to that summer’s World’s Fair.

In the essay, Asimov imagines what the World Fair would be like in 2014—his future, our present.

His notions were strange and wonderful (and conservative, as Matt Novak writes in a great run-down), in the way that dreams of the future from the point of view of the American mid-century tend to be. There will be electroluminescent walls for our windowless homes, levitating cars for our transportation, 3D cube televisions that will permit viewers to watch dance performances from all angles, and “Algae Bars” that taste like turkey and steak (“but,” he adds, “there will be considerable psychological resistance to such an innovation”).

He got some things wrong and some things right, as is common for those who engage in the sport of prediction-making. Keeping score is of little interest to me. What is of interest: what Asimov understood about the entangled relationships among humans, technological development, and the planet—and the implications of those ideas for us today, knowing what we know now.

Asimov begins by suggesting that in the coming decades, the gulf between humans and “nature” will expand, driven by technological development. “One thought that occurs to me,” he writes, “is that men will continue to withdraw from nature in order to create an environment that will suit them better.”

It is in this context that Asimov sees the future shining bright: underground, suburban houses, “free from the vicissitudes of weather, with air cleaned and light controlled, should be fairly common.” Windows, he says, “need be no more than an archaic touch,” with programmed, alterable, “scenery.” We will build our own world, an improvement on the natural one we found ourselves in for so long. Separation from nature, Asimov implies, will keep humans safe—safe from the irregularities of the natural world, and the bombs of the human one, a concern he just barely hints at, but that was deeply felt at the time.

But Asimov knows too that humans cannot survive on technology alone. Eight years before astronauts’ Blue Marble image of Earth would reshape how humans thought about the planet, Asimov sees that humans need a healthy Earth, and he worries that an exploding human population (6.5 billion, he accurately extrapolated) will wear down our resources, creating massive inequality.

Although technology will still keep up with population through 2014, it will be only through a supreme effort and with but partial success. Not all the world’s population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.

This troubled him, but the real problems lay yet further in the future, as “unchecked” population growth pushed urban sprawl to every corner of the planet, creating a “World-Manhattan” by 2450. But, he exclaimed, “society will collapse long before that!” Humans would have to stop reproducing so quickly to avert this catastrophe, he believed, and he predicted that by 2014 we would have decided that lowering the birth rate was a policy priority.

Asimov rightly saw the central role of the planet’s environmental health to a society: No matter how technologically developed humanity becomes, there is no escaping our fundamental reliance on Earth (at least not until we seriously leave Earth, that is). But in 1964 the environmental specters that haunt us today—climate change and impending mass extinctions—were only just beginning to gain notice. Asimov could not have imagined the particulars of this special blend of planetary destruction we are now brewing—and he was overly optimistic about our propensity to take action to protect an imperiled planet.

Read the entire article here.

Image: Driverless cars as imaged in 1957. Courtesy of America’s Independent Electric Light and Power Companies/Paleofuture.

The Future Tubes of the Internets

CerfKahnMedalOfFreedom

Back in 1973, when computer scientists Vint Cerf and Robert Kahn sketched out plans to connect a handful of government networks, little did they realize the scale of their invention — TCP/IP (a standard protocol for the interconnection of computer networks). Now, the two patriarchs of the Internet revolution — with no Al Gore in sight — prognosticate on the next 40 years of the internet.

From the NYT:

Will 2014 be the year that the Internet is reined in?

When Edward J. Snowden, the disaffected National Security Agency contract employee, purloined tens of thousands of classified documents from computers around the world, his actions — and their still-reverberating consequences — heightened international pressure to control the network that has increasingly become the world’s stage. At issue is the technical principle that is the basis for the Internet, its “any-to-any” connectivity. That capability has defined the technology ever since Vinton Cerf and Robert Kahn sequestered themselves in the conference room of a Palo Alto, Calif., hotel in 1973, with the task of interconnecting computer networks for an elite group of scientists, engineers and military personnel.

The two men wound up developing a simple and universal set of rules for exchanging digital information — the conventions of the modern Internet. Despite many technological changes, their work prevails.
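
[Their “simple and universal set of rules” is still what every networked program speaks. The few lines of Python below are an illustrative aside, not anything from the article: they stand up a throwaway echo server on localhost and push bytes through a TCP connection, using the same socket calls that work between any two machines on the Internet.]

import socket
import threading

def echo_once(server_sock):
    # Accept one connection and send back whatever arrives.
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # ephemeral port on localhost
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Any-to-any connectivity in miniature: connect, send bytes, get bytes back.
with socket.create_connection(server.getsockname()) as client:
    client.sendall(b"hello from 1973")
    print(client.recv(1024))  # b'hello from 1973'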

But while the Internet’s global capability to connect anyone with anything has affected every nook and cranny of modern life — with politics, education, espionage, war, civil liberties, entertainment, sex, science, finance and manufacturing all transformed — its growth increasingly presents paradoxes.

It was, for example, the Internet’s global reach that made classified documents available to Mr. Snowden — and made it so easy for him to distribute them to news organizations.

Yet the Internet also made possible widespread surveillance, a practice that alarmed Mr. Snowden and triggered his plan to steal and publicly release the information.

With the Snowden affair starkly highlighting the issues, the new year is likely to see renewed calls to change the way the Internet is governed. In particular, governments that do not favor the free flow of information, especially if it’s through a system designed by Americans, would like to see the Internet regulated in a way that would “Balkanize” it by preventing access to certain websites.

The debate right now involves two international organizations, usually known by their acronyms, with different views: Icann, the Internet Corporation for Assigned Names and Numbers, and the I.T.U., or International Telecommunication Union.

Icann, a nonprofit that oversees the Internet’s basic functions, like the assignment of names to websites, was established in 1998 by the United States government to create an international forum for “governing” the Internet. The United States continues to favor this group.

The I.T.U., created in 1865 as the International Telegraph Union, is the United Nations telecommunications regulatory agency. Nations like Brazil, China and Russia have been pressing the United States to switch governance of the Internet to this organization.

Dr. Cerf, 70, and Dr. Kahn, 75, have taken slightly different positions on the matter. Dr. Cerf, who was chairman of Icann from 2000-7, has become known as an informal “Internet ambassador” and a strong proponent of an Internet that remains independent of state control. He has been one of the major supporters of the idea of “network neutrality” — the principle that Internet service providers should enable access to all content and applications, regardless of the source.

Dr. Kahn has made a determined effort to stay out of the network neutrality debate. Nevertheless, he has been more willing to work with the I.T.U., particularly in attempting to build support for a system, known as Digital Object Architecture, for tracking and authenticating all content distributed through the Internet.

Both men agreed to sit down, in separate interviews, to talk about their views on the Internet’s future. The interviews were edited and condensed.

The Internet Ambassador

After serving as a program manager at the Pentagon’s Defense Advanced Research Projects Agency, Vinton Cerf joined MCI Communications Corp., an early commercial Internet company that was purchased by Verizon in 2006, to lead the development of electronic mail systems for the Internet. In 2005, he became a vice president and “Internet evangelist” for Google. Last year he became the president of the Association for Computing Machinery, a leading international educational and scientific computing society.

Q. Edward Snowden’s actions have raised a new storm of controversy about the role of the Internet. Is it a significant new challenge to an open and global Internet?

A. The answer is no, I don’t think so. There are some similar analogues in history. The French historically copied every telex or every telegram that you sent, and they shared it with businesses in order to remain competitive. And when that finally became apparent, it didn’t shut down the telegraph system.

The Snowden revelations will increase interest in end-to-end cryptography for encrypting information both in transit and at rest. For many of us, including me, who believe that is an important capacity to have, this little crisis may be the trigger that induces people to spend time and energy learning how to use it.

You’ve drawn the analogy to a road or highway system. That brings to mind the idea of requiring a driver’s license to use the Internet, which raises questions about responsibility and anonymity.

I still believe that anonymity is an important capacity, that people should have the ability to speak anonymously. It’s argued that people will be encouraged to say untrue things, harmful things, especially if they believe they are anonymous.

There is a tension there, because in some environments the only way you will be able to behave safely is to have some anonymity.

Read the entire article here.

Image: Vinton Cerf and Robert Kahn receiving the Presidential Medal of Freedom from President George W. Bush in 2005. Courtesy of Wikipedia.

Content Versus Innovation

VHS-cassette

The entertainment and media industry is not known for its innovation. Left to its own devices we would all be consuming news from broadsheets and a town crier, and digesting shows at the theater. Not too long ago the industry, led by Hollywood heavyweights, was doing its utmost to kill emerging forms of media consumption, such as the video tape cassette and the VCR.

Following numerous regulatory, legal and political skirmishes, innovation finally triumphed over entrenched interests, allowing VHS tape, followed by the DVD, to flourish, albeit for a while. This of course paved the way for new forms of distribution — the rise of Blockbuster and a myriad of neighborhood video rental stores.

In a great ironic twist, the likes of Blockbuster failed to recognize signals from the market that, without significant and continual innovation, their business models would crumble. Now Netflix and other streaming services have managed to end our weekend visits to the movie rental store.

A fascinating article excerpted below takes a look back at the lengthy, and continuing, fight between the conservative media empires and the market’s constant pull from technological innovation.

[For a fresh perspective on the future of media distribution, see our recent posting here.]

From TechCrunch:

The once iconic video rental giant Blockbuster is shutting down its remaining stores across the country. Netflix, meanwhile, is emerging as the leader in video rental, now primarily through online streaming. But Blockbuster, Netflix and home media consumption (VCR/DVD/Blu-ray) may never have existed at all in their current form if the content industry had been successful in banning or regulating them. In 1983, nearly 30 years before thousands of websites blacked out in protest of SOPA/PIPA, video stores across the country closed in protest against legislation that would bar their market model.

A Look Back

In 1977, the first video-rental store opened. It was 600 square feet and located on Wilshire Boulevard in Los Angeles. George Atkinson, the entrepreneur who decided to launch this idea, charged $50 for an “annual membership” and $100 for a “lifetime membership” but the memberships only allowed people to rent videos for $10 a day. Despite an unusual business model, Atkinson’s store was an enormous success, growing to 42 affiliated stores in fewer than 20 months and resulting in numerous competitors.

In retrospect, Atkinson’s success represented the emergence of an entirely new market: home consumption of paid content. It would become an $18 billion domestic market, and, rather than cannibalize the existing movie theater market, it would eclipse it and thereby become a massive revenue source for the industry.

Atkinson’s success in 1977 is particularly remarkable as the Sony Betamax (the first VCR) had only gone on sale domestically in 1975 at a cost of $1,400 (which in 2013 U.S. dollars is $6,093). As a comparison, the first DVD player in 1997 cost $1,458 in 2013 dollars and the first Blu-ray player in 2006 cost $1,161 in 2013 dollars. And unlike the DVD and Blu-ray player, it would take eight years, until 1983, for the VCR to reach 10 percent of U.S. television households. Atkinson’s success, and that of his early competitors, was in catering to a market of well under 10 percent of U.S. households.

While many content companies realized this as a massive new revenue stream — e.g. 20th Century Fox buying one video rental company for $7.5 million in 1979 — the content industry lawyers and lobbyists tried to stop the home content market through litigation and regulation.

The content industry sued to ban the sale of the Betamax, the first VCR. This legal strategy was coupled by leveraging the overwhelming firepower of the content industry in Washington. If they lost in court to ban the technology and rental business model, then they would ban the technology and rental business model in Congress.

Litigation Attack

In 1976, the content industry filed suit against Sony, seeking an injunction to prevent the company from “manufacturing, distributing, selling or offering for sale Betamax or Betamax tapes.” Essentially granting this remedy would have banned the VCR for all Americans. The content industry’s motivation behind this suit was largely to deal with individuals recording live television, but the emergence of the rental industry was likely a contributing factor.

While Sony won at the district court level in 1979, in 1981 it lost at the Court of Appeals for the Ninth Circuit where the court found that Sony was liable for copyright infringement by their users — recording broadcast television. The Appellate court ordered the lower court to impose an appropriate remedy, advising in favor of an injunction to block the sale of the Betamax.

And in 1981, under normal circumstances, the VCR would have been banned then and there. Sony faced liability well beyond its net worth, so it may well have been the end of Sony, or at least its U.S. subsidiary, and the end of the VCR. Millions of private citizens could have been liable for damages for copyright infringement for recording television shows for personal use. But Sony appealed this ruling to the Supreme Court.

The Supreme Court is able to take very few cases. For example in 2009, 1.1 percent of petitions for certiorari were granted, and of these approximately 70 percent are cases where there is a conflict among different courts (here there was no conflict). But in 1982, the Supreme Court granted certiorari and agreed to hear the case.

After an oral hearing, the justices took a vote internally, and originally only one of them was persuaded to keep the VCR as legal (but after discussion, the number of justices in favor of the VCR would eventually increase to four).

With five votes in favor of affirming the previous ruling the Betamax (VCR) was to be illegal in the United States (see Justice Blackmun’s papers).

But then, something even more unusual happened – which is why we have the VCR and subsequent technologies: The Supreme Court decided for both sides to re-argue a portion of the case. Under the Burger Court (when he was Chief Justice), this only happened in 2.6 percent of the cases that received oral argument. In the re-argument of the case, a crucial vote switched sides, which resulted in a 5-4 decision in favor of Sony. The VCR was legal. There would be no injunction barring its sale.

The majority opinion characterized the lawsuit as an “unprecedented attempt to impose copyright liability upon the distributors of copying equipment” and rejected “[s]uch an expansion of the copyright privilege” as “beyond the limits” given by Congress. The Court even cited Mr. Rogers, who testified during the trial:

I have always felt that with the advent of all of this new technology that allows people to tape the ‘Neighborhood’ off-the-air . . . Very frankly, I am opposed to people being programmed by others.

On the absolute narrowest of legal grounds, through a highly unusual legal process (and significant luck), the VCR was saved by one vote at the Supreme Court in 1984.

Regulation Attack

In 1982 legislation was introduced in Congress to give copyright holders the exclusive right to authorize the rental of prerecorded videos. Legislation was reintroduced in 1983, the Consumer Video Sales Rental Act of 1983. This legislation would have allowed the content industry to shut down the rental market, or charge exorbitant fees, by making it a crime to rent out movies purchased commercially. In effect, this legislation would have ended the existing market model of rental stores. With 34 co-sponsors, major lobbyists and significant campaign contributions to support it, this legislation had substantial support at the time.

Video stores saw the Consumer Video Sales Rental Act as an existential threat, and on October 21, 1983, about 30 years before the SOPA/PIPA protests, video stores across the country closed down for several hours in protest. While the 1983 legislation died in committee, the legislation would be reintroduced in 1984. In 1984, similar legislation was enacted, The Record Rental Amendment of 1984, which banned the renting and leasing of music. In 1990, Congress banned the renting of computer software.

But in the face of public backlash from video retailers and customers, Congress did not pass the Consumer Video Sales Rental Act.

At the same time, the movie studios tried to ban the Betamax VCR through legislation. Eventually the content industry decided to support legislation that would require compulsory licensing rather than an outright ban. But such a compulsory licensing scheme would have drastically driven up the costs of video tape players and may have effectively banned the technology (similar regulations did ban other technologies).

For the content industry, banning the technology was a feature, not a bug.

Read the entire article here.

Image: Video Home System (VHS) cassette tape. Courtesy of Wikipedia.

2014: The Year of Big Stuff

new-years-eve-2013

Over the closing days of each year, or the first few days of the coming one, prognosticators the world over tell us about the future. Yet, while no one has yet been proven to have prescient skills — despite what your psychic tells you — we all like to dabble in the art of prediction. Google’s Eric Schmidt has one big prediction for 2014: big. Everything will be big — big data, big genomics, smartphones will be even bigger, and of course, so will mistakes.

So, with that, a big Happy New Year to all our faithful readers and seers across our fragile and beautiful blue planet.

From the Guardian:

What does 2014 hold? According to Eric Schmidt, Google’s executive chairman, it means smartphones everywhere – and also the possibility of genetics data being used to develop new cures for cancer.

In an appearance on Bloomberg TV, Schmidt laid out his thoughts about general technological change, Google’s biggest mistake, and how Google sees the economy going in 2014.

“The biggest change for consumers is going to be that everyone’s going to have a smartphone,” Schmidt says. “And the fact that so many people are connected to what is essentially a supercomputer means a whole new generation of applications around entertainment, education, social life, those kinds of things. The trend has been that mobile is winning; it’s now won. There are more tablets and phones being sold than personal computers – people are moving to this new architecture very fast.”

It’s certainly true that tablets and smartphones are outselling PCs – in fact smartphones alone have been doing that since the end of 2010. This year, it’s forecast that tablets will have passed “traditional” PCs (desktops, fixed-keyboard laptops) too.

Disrupting business

Next, Schmidt says there’s a big change – a disruption – coming for business through the arrival of “big data”: “The biggest disruptor that we’re sure about is the arrival of big data and machine intelligence everywhere – so the ability [for businesses] to find people, to talk specifically to them, to judge them, to rank what they’re doing, to decide what to do with your products, changes every business globally.”

But he also sees potential in the field of genomics – the parsing of all the data being collected from DNA and gene sequencing. That might not be surprising, given that Google is an investor in 23andme, a gene sequencing company which aims to collect the genomes of a million people so that it can do data-matching analysis on their DNA. (Unfortunately, that plan has hit a snag: 23andme has been told to cease operating by the US Food and Drug Administration because it has failed to respond to inquiries about its testing methods and publication of results.)

Here’s what Schmidt has to say on genomics: “The biggest disruption that we don’t really know what’s going to happen is probably in the genetics area. The ability to have personal genetics records and the ability to start gathering all of the gene sequencing into places will yield discoveries in cancer treatment and diagnostics over the next year that are unfathomably important.”

It may be worth mentioning that “we’ll find cures through genomics” has been the promise held up by scientists every year since the human genome was first sequenced. So far, it hasn’t happened – as much as anything because human gene variation is remarkably big, and there’s still a lot that isn’t known about the interaction of what appears to be non-functional parts of our DNA (which doesn’t seem to code to produce proteins) and the parts that do code for proteins.

Biggest mistake

As for Google’s biggest past mistake, Schmidt says it’s missing the rise of Facebook and Twitter: “At Google the biggest mistake that I made was not anticipating the rise of the social networking phenomenon – not a mistake we’re going to make again. I guess in our defence we’re working on many other things, but we should have been in that area, and I take responsibility for that.” The results of that effort to catch up can be seen in the way that Google+ is popping up everywhere – though it’s wrong to think of Google+ as a social network, since it’s more of a way that Google creates a substrate on the web to track individuals.

And what is Google doing in 2014? “Google is very much investing, we’re hiring globally, we see strong growth all around the world with the arrival of the internet everywhere. It’s all green in that sense from the standpoint of the year. Google benefits from transitions from traditional industries, and shockingly even when things are tough in a country, because we’re “return-on-investment”-based advertising – it’s smarter to move your advertising from others to Google, so we win no matter whether the industries are in good shape or not, because people need our services, we’re very proud of that.”

For Google, the sky’s the limit: “the key limiter on our growth is our rate of innovation, how smart are we, how clever are we, how quickly can we get these new systems deployed – we want to do that as fast as we can.”

It’s worth noting that Schmidt has a shaky track record on predictions. At Le Web in 2011 he famously forecast that developers would be shunning iOS to start developing on Android first, and that Google TV would be installed on 50% of all TVs on sale by summer 2012.

It didn’t turn out that way: even now, many apps start on iOS, and Google TV fizzled out as companies such as Logitech found that it didn’t work as well as Android to tempt buyers.

Since then, Schmidt has been a lot more cautious about predicting trends and changes – although he hasn’t been above the occasional comment which seems calculated to get a rise from his audience, such as telling executives at a Gartner conference that Android was more secure than the iPhone – which they apparently found humorous.

Read the entire article here.

Image: Happy New Year, 2014 Google doodle. Courtesy of Google.

Global Domination — One Pixel at a Time

google-maps-article

Google’s story began with text-based search and was quickly followed by digital maps. These simple innovations ushered in the company’s mission to organize the world’s information. But as Google ventures further from its roots into mobile operating systems (Android), video (YouTube), social media (Google+), smartphone hardware (through its purchase of Motorola’s mobile business), augmented reality (Google Glass), Web browsers (Chrome) and notebook hardware (Chromebook), what of its core mapping service? And is global domination all that it’s cracked up to be?

From the NYT:

Fifty-five miles and three days down the Colorado River from the put-in at Lee’s Ferry, near the Utah-Arizona border, the two rafts in our little flotilla suddenly encountered a storm. It sneaked up from behind, preceded by only a cool breeze. With the canyon walls squeezing the sky to a ribbon of blue, we didn’t see the thunderhead until it was nearly on top of us.

I was seated in the front of the lead raft. Pole position meant taking a dunk through the rapids, but it also put me next to Luc Vincent, the expedition’s leader. Vincent is the man responsible for all the imagery in Google’s online maps. He’s in charge of everything from choosing satellite pictures to deploying Google’s planes around the world to sending its camera-equipped cars down every road to even this, a float through the Grand Canyon. The raft trip was a mapping expedition that was also serving as a celebration: Google Maps had just introduced a major redesign, and the outing was a way of rewarding some of the team’s members.

Vincent wore a black T-shirt with the eagle-globe-and-anchor insignia of the United States Marine Corps on his chest and the slogan “Pain is weakness leaving the body” across his back. Though short in stature, he has the upper-body strength of an avid rock climber. He chose to get his Ph.D. in computer vision, he told me, because the lab happened to be close to Fontainebleau — the famous climbing spot in France. While completing his postdoc at the Harvard Robotics Lab, he led a successful expedition up Denali, the highest peak in North America.

A Frenchman who has lived half his 49 years in the United States, Vincent was never in the Marines. But he is a leader in a new great game: the Internet land grab, which can be reduced to three key battles over three key conceptual territories. What came first, conquered by Google’s superior search algorithms. Who was next, and Facebook was the victor. But where, arguably the biggest prize of all, has yet to be completely won.

Where-type questions — the kind that result in a little map popping up on the search-results page — account for some 20 percent of all Google queries done from the desktop. But ultimately more important by far is location-awareness, the sort of geographical information that our phones and other mobile devices already require in order to function. In the future, such location-awareness will be built into more than just phones. All of our stuff will know where it is — and that awareness will imbue the real world with some of the power of the virtual. Your house keys will tell you that they’re still on your desk at work. Your tools will remind you that they were lent to a friend. And your car will be able to drive itself on an errand to retrieve both your keys and your tools.

While no one can say exactly how we will get from the current moment to that Jetsonian future, one thing for sure can be said about location-awareness: maps are required. Tomorrow’s map, integrally connected to everything that moves (the keys, the tools, the car), will be so fundamental to their operation that the map will, in effect, be their operating system. A map is to location-awareness as Windows is to a P.C. And as the history of Microsoft makes clear, a company that controls the operating system controls just about everything. So the competition to make the best maps, the thinking goes, is more than a struggle over who dominates the trillion-dollar smartphone market; it’s a contest over the future itself.

Google was relatively late to this territory. Its map was only a few months old when it was featured at Tim O’Reilly’s inaugural Where 2.0 conference in 2005. O’Reilly is a publisher and a well-known visionary in Silicon Valley who is convinced that the Internet is evolving into a single vast, shared computer, one of whose most important individual functions, or subroutines, is location-awareness.

Google’s original map was rudimentary, essentially a digitized road atlas. Like the maps from Microsoft and Yahoo, it used licensed data, and areas outside the United States and Europe were represented as blue emptiness. Google’s innovation was the web interface: its map was draggable, zoomable, pannable.

These new capabilities were among the first implementations of a technology that turned what had been a static medium — a web of pages — into a dynamic one. MapQuest and similar sites showed you maps; Google let you interact with them. Developers soon realized that they could take advantage of that dynamism to hack Google’s map, add their own data and create their very own location-based services.

A computer scientist named Paul Rademacher did just that when he invented a technique to facilitate apartment-hunting in San Francisco. Frustrated by the limited, bare-bones nature of Craigslist’s classified ads and inspired by Google’s interactive quality, Rademacher spent six weeks overlaying Google’s map with apartment listings from Craigslist. The result, HousingMaps.com, was one of the web’s first mash-ups.
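
[The mechanics of a mash-up like HousingMaps are easy to sketch: join listings from one source with coordinates from another and emit markers a web map can plot. The Python below is only an illustration; the listings and the stand-in geocoder are invented, and Rademacher's real site scraped Craigslist and drew the results on Google's map.]

listings = [
    {"title": "1BR near Dolores Park", "neighborhood": "Mission", "price": 1800},
    {"title": "Studio by the Bay", "neighborhood": "Marina", "price": 1650},
]

geocoder = {  # stand-in for a real geocoding service
    "Mission": (37.7599, -122.4148),
    "Marina": (37.8037, -122.4368),
}

def to_markers(listings, geocoder):
    # Join listing records with coordinates to produce map-marker records.
    markers = []
    for item in listings:
        lat, lon = geocoder[item["neighborhood"]]
        markers.append({"lat": lat, "lon": lon,
                        "label": f'{item["title"]} (${item["price"]}/mo)'})
    return markers

for marker in to_markers(listings, geocoder):
    print(marker)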

Read the entire article here.

Image: Luc Vincent, head of Google Maps imagery. Courtesy of NYT Magazine.

5 Billion Infractions per Day

New reports suggest that the NSA (National Security Agency) is collecting and analyzing over 5 billion records per day from mobile phones worldwide. That’s a vast amount of data covering lots of people — presumably over 99.9999 percent innocent people.

Yet, the nation yawns and continues to soak in the latest shenanigans on Duck Dynasty. One wonders if Uncle Si and his cohorts are being tracked as well. Probably.

From the Washington Post:

The National Security Agency is gathering nearly 5 billion records a day on the whereabouts of cellphones around the world, according to top-secret documents and interviews with U.S. intelligence officials, enabling the agency to track the movements of individuals — and map their relationships — in ways that would have been previously unimaginable.

The records feed a vast database that stores information about the locations of at least hundreds of millions of devices, according to the officials and the documents, which were provided by former NSA contractor Edward Snowden. New projects created to analyze that data have provided the intelligence community with what amounts to a mass surveillance tool.

The NSA does not target Americans’ location data by design, but the agency acquires a substantial amount of information on the whereabouts of domestic cellphones “incidentally,” a legal term that connotes a foreseeable but not deliberate result.

One senior collection manager, speaking on the condition of anonymity but with permission from the NSA, said “we are getting vast volumes” of location data from around the world by tapping into the cables that connect mobile networks globally and that serve U.S. cellphones as well as foreign ones. Additionally, data are often collected from the tens of millions of Americans who travel abroad with their cellphones every year.

In scale, scope and potential impact on privacy, the efforts to collect and analyze location data may be unsurpassed among the NSA surveillance programs that have been disclosed since June. Analysts can find cellphones anywhere in the world, retrace their movements and expose hidden relationships among the people using them.

U.S. officials said the programs that collect and analyze location data are lawful and intended strictly to develop intelligence about foreign targets.

Robert Litt, general counsel for the Office of the Director of National Intelligence, which oversees the NSA, said “there is no element of the intelligence community that under any authority is intentionally collecting bulk cellphone location information about cellphones in the United States.”

The NSA has no reason to suspect that the movements of the overwhelming majority of cellphone users would be relevant to national security. Rather, it collects locations in bulk because its most powerful analytic tools — known collectively as CO-TRAVELER — allow it to look for unknown associates of known intelligence targets by tracking people whose movements intersect.

Still, location data, especially when aggregated over time, are widely regarded among privacy advocates as uniquely sensitive. Sophisticated mathematical techniques enable NSA analysts to map cellphone owners’ relationships by correlating their patterns of movement over time with thousands or millions of other phone users who cross their paths. Cellphones broadcast their locations even when they are not being used to place a call or send a text message.
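
[Public reporting describes CO-TRAVELER only in outline, so the sketch below is a guess at the general shape of the idea rather than the agency's method: flag phones that keep turning up at the same cell tower, in the same hour, as a target. The identifiers and sightings are invented.]

from collections import defaultdict

# Each sighting: (phone_id, cell_tower_id, hour_of_day). All data is made up.
sightings = [
    ("target", "tower_A", 9), ("target", "tower_B", 10), ("target", "tower_C", 18),
    ("phone1", "tower_A", 9), ("phone1", "tower_B", 10), ("phone1", "tower_C", 18),
    ("phone2", "tower_A", 9), ("phone2", "tower_Z", 14),
]

def co_travelers(sightings, target, min_hits=2):
    # Count how often each other phone shares a tower and hour with the target.
    target_slots = {(tower, hour) for pid, tower, hour in sightings if pid == target}
    hits = defaultdict(int)
    for pid, tower, hour in sightings:
        if pid != target and (tower, hour) in target_slots:
            hits[pid] += 1
    return {pid: n for pid, n in hits.items() if n >= min_hits}

print(co_travelers(sightings, "target"))  # {'phone1': 3}

Even a toy like this hints at why aggregated location data is so sensitive: the interesting signal is not where any one phone is, but whose paths keep intersecting.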

Read the entire article here.

Image: Duck Dynasty show promotional still. Courtesy of Wikipedia / A&E.