Tag Archives: Google

MondayMap: European Stereotypes

map-europe-google-stereotype

Hot on the heels of the map of US state stereotypes, I am delighted to present a second one. This time it’s a map of Google searches in the UK for various nations across Europe. It was compiled by taking the most frequent result from Google’s autocomplete function. For instance, type in “Why is Italy…”, and Google automatically fills in the most popular result: “Why is Italy shaped like a boot”.
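The compilation step the post describes is easy to sketch. Autocomplete suggestions come back ranked by popularity, so picking the "stereotype" amounts to taking the first suggestion that matches the prefix and stripping the prefix off. A minimal sketch in Python, using a made-up suggestion list rather than a live call to Google:

```python
def stereotype(prefix, suggestions):
    """Return the top autocomplete completion with the prefix stripped.

    Suggestions are assumed to arrive ranked by popularity, so the first
    matching entry stands in for the "most frequent result".
    """
    for s in suggestions:
        if s.lower().startswith(prefix.lower()):
            return s[len(prefix):].strip()
    return None

# Hypothetical suggestion list for the prefix used in the post:
top = stereotype("why is italy", ["why is italy shaped like a boot",
                                  "why is italy called italy"])
# top is now "shaped like a boot"
```

Repeat for each country prefix ("Why is France…", "Why is Austria…") and you have the map's raw data.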

Highlighting just a few: Switzerland is viewed as rich; Austria is racist; Ireland is green.

Map: European nation stereotypes by British Google users. Courtesy: Independent Media.

Send to Kindle

MondayMap: State Stereotypes

map-uk-google-stereotype

An interesting map, courtesy of Google searches in the UK, shows some fascinating stereotypes for each of the fifty US states. It was compiled by taking the most frequent result from Google’s autocomplete function. For instance, type in “Why is Colorado so…”, and Google automatically fills in the most popular result: “Why is Colorado so fit”.

It’s not entirely scientific but interesting nonetheless. Highlighting just a few: Wisconsin is viewed as drunk; Louisiana as racist; Colorado as fit; and, Nevada as dangerous.

Map: US state stereotypes by British Google users. Courtesy: Independent Media.

Send to Kindle

MondayMap: Internet Racism

map-internet-racism

On this map, the darkest blue indicates areas much less racist than the national average, lighter blue indicates areas somewhat less racist, and the darkest red indicates the most racist zones.

No surprise: the areas with the highest concentrations of racists are in the South and the rural Northeastern United States. Head west of Texas and you’ll find fewer and fewer pockets of racists. Further, and perhaps not surprisingly, the greater the degree of n-word usage, the higher the rate of black mortality.

Sadly, this map is not of 18th- or 19th-century America; it comes from a recent study, published in April 2015 in Public Library of Science (PLOS) ONE.

Now keep in mind that the map highlights racism by tracking pejorative search terms such as the n-word; it doesn’t count actual people, and it’s a geographic generalization. Nonetheless, it’s a stark reminder that we seem to be two nations divided by the mighty Mississippi River, and that we still have a very long way to go before we are all “westerners”.

From Washington Post:

Where do America’s most racist people live? “The rural Northeast and South,” suggests a new study just published in PLOS ONE.

The paper introduces a novel but makes-tons-of-sense-when-you-think-about-it method for measuring the incidence of racist attitudes: Google search data. The methodology comes from data scientist Seth Stephens-Davidowitz. He’s used it before to measure the effect of racist attitudes on Barack Obama’s electoral prospects.

“Google data, evidence suggests, are unlikely to suffer from major social censoring,” Stephens-Davidowitz wrote in a previous paper. “Google searchers are online and likely alone, both of which make it easier to express socially taboo thoughts. Individuals, indeed, note that they are unusually forthcoming with Google.” He also notes that the Google measure correlates strongly with other standard measures social science researchers have used to study racist attitudes.

This is important, because racism is a notoriously tricky thing to measure. Traditional survey methods don’t really work — if you flat-out ask someone if they’re racist, they will simply tell you no. That’s partly because most racism in society today operates at the subconscious level, or gets vented anonymously online.

For the PLOS ONE paper, researchers looked at searches containing the N-word. People search frequently for it, roughly as often as searches for “migraine(s),” “economist,” “sweater,” “Daily Show,” and “Lakers.” (The authors attempted to control for variants of the N-word not necessarily intended as pejoratives, excluding the “a” version of the word that analysis revealed was often used “in different contexts compared to searches of the term ending in ‘-er’.”)

Read the entire article here.

Image: Association between an Internet-Based Measure of Area Racism and Black Mortality. Courtesy of Washington Post / PLOS (Public Library of Science) ONE.

Send to Kindle

Software That Learns to Eat Itself

Google became a monstrously successful technology company by inventing a solution to index and search content scattered across the Web, and then monetizing the search results through contextual ads. Since its inception the company has relied on increasingly sophisticated algorithms for indexing mountains of information and then serving up increasingly relevant results. These algorithms are based on a secret sauce that ranks the relevance of a webpage by evaluating its content, structure and relationships with other pages. They are defined and continuously improved by technologists and encoded into software by teams of engineers.

But as is the case in many areas of human endeavor, the underlying search engine technology and its teams of human designers and caregivers are being replaced by newer, better technology. In this case the better technology is based on artificial intelligence (AI), and it doesn’t rely on humans. It is based on machine or deep learning and neural networks — a combination of hardware and software that increasingly mimics the human brain in its ability to aggregate and filter information, decipher patterns and infer meaning.
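To make the “mimics the human brain” claim a little more concrete, here is the smallest building block of such a system: a single artificial neuron that weights its inputs, sums them, and squashes the result into a value between 0 and 1. This is purely an illustrative sketch, not Google’s technology:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum squashed through a sigmoid."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Two inputs with illustrative, made-up weights:
activation = neuron([0.5, 0.8], [1.2, -0.7], 0.1)
```

Stack many such neurons into layers, and learn the weights from mountains of data, and you have the “deep learning” the post refers to.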

[I’m sure it will not be long before yours truly is replaced by a bot.]

From Wired:

Yesterday, the 46-year-old Google veteran who oversees its search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea. On one level, these are just two guys doing something new with their lives. But you can also view the pair as the ideal metaphor for a momentous shift in the way things work inside Google—and across the tech world as a whole.

Giannandrea, you see, oversees Google’s work in artificial intelligence. This includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.

This approach, called deep learning, is rapidly reinventing so many of the Internet’s most popular services, from Facebook to Twitter to Skype. Over the past year, it has also reinvented Google Search, where the company generates most of its revenue. Early in 2015, as Bloomberg recently reported, Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries. As of October, RankBrain played a role in “a very large fraction” of the millions of queries that go through the search engine with each passing second.

Read the entire story here.

Send to Kindle

Google AI Versus the Human Race

Korean_Go_Game_ca_1910-1920

It does indeed appear that a computer armed with Google’s experimental AI (artificial intelligence) software just beat a grandmaster of the strategy board game Go. The game was devised in ancient China and has been around for several millennia. Go is commonly held to be substantially more difficult to master than chess, to which I can personally attest.

So, does this mean that the human race is next in line for a defeat at the hands of an uber-intelligent AI? Well, not really, not yet anyway.

But, I’m with prominent scientists and entrepreneurs — including Stephen Hawking, Bill Gates and Elon Musk — who warn of the long-term existential peril to humanity from unfettered AI. In the meantime check out how AlphaGo from Google’s DeepMind unit set about thrashing a human.

From Wired:

An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that’s exponentially more complex than the game of chess. And Nick Bostrom isn’t exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It’s not that he discounts the power of Google’s Go-playing machine. He just argues that it isn’t necessarily a huge leap forward. The technologies behind Google’s system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.

“There has been, and there is, a lot of progress in state-of-the-art artificial intelligence,” Bostrom says. “[Google’s] underlying technology is very much continuous with what has been under development for the last several years.”

But if you look at this another way, it’s exactly why Google’s triumph is so exciting—and perhaps a little frightening. Even Bostrom says it’s a good excuse to stop and take a look at how far this technology has come and where it’s going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it’s headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

Building a Brain

Google’s AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own.

Using what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—deep learning is what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft’s Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game.

Reinforcement learning takes things a step further. Once you’ve built a neural net that’s pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level.
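The self-play loop described above can be illustrated with a toy game standing in for Go: a “race to 10” in which players alternately add 1 or 2 to a running total, and whoever reaches 10 wins. The sketch below is my own illustration, not DeepMind’s code; after each game it nudges up the preference weights of every move the winner made, which is the essence of reinforcing the “highest reward” moves:

```python
import random
from collections import defaultdict

random.seed(0)

# (total, move) -> preference weight; every move starts out equally likely
weights = defaultdict(lambda: 1.0)

def choose(total):
    moves = [1] if total == 9 else [1, 2]  # at 9, only +1 keeps us at or below 10
    prefs = [weights[(total, m)] for m in moves]
    return random.choices(moves, weights=prefs)[0]

def play_one_game():
    total, player = 0, 0
    history = {0: [], 1: []}
    while True:
        move = choose(total)
        history[player].append((total, move))
        total += move
        if total == 10:
            return player, history
        player = 1 - player

# Self-play: two copies of the same policy play; reinforce the winner's moves.
for _ in range(5000):
    winner, history = play_one_game()
    for state_move in history[winner]:
        weights[state_move] += 0.1
```

After training, the policy at total 8 strongly prefers playing 2 (an immediate win) over playing 1 (which hands the opponent the game), without ever being told the rules of good play.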

AlphaGo uses all this. And then some. Hassabis [Demis Hassabis, DeepMind co-founder] and his team added a second level of “deep reinforcement learning” that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
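The Monte Carlo idea, “play out a huge number of scenarios to their eventual conclusions”, can also be shown on a toy game. The sketch below scores each legal move in a simple “race to 10” game (players alternately add 1 or 2; reaching 10 wins) by the fraction of random playouts that move goes on to win. Real Monte Carlo tree search builds a search tree on top of this, but the rollout principle is the same; this is an illustration, not AlphaGo’s implementation:

```python
import random

random.seed(1)

def legal_moves(total):
    # In this toy game you add 1 or 2; at 9 only +1 keeps the total at 10 or below.
    return [1] if total == 9 else [1, 2]

def random_playout(total, player_to_move):
    """Play uniformly random moves from `total`; return the winning player."""
    while True:
        total += random.choice(legal_moves(total))
        if total == 10:
            return player_to_move
        player_to_move = 1 - player_to_move

def best_move(total, playouts=500):
    """Score each legal move for player 0 by random playouts; pick the best."""
    scores = {}
    for move in legal_moves(total):
        wins = 0
        for _ in range(playouts):
            if total + move == 10:
                wins += 1  # the move ends the game in our favor
            elif random_playout(total + move, 1) == 0:
                wins += 1  # player 0 won the random continuation
        scores[move] = wins / playouts
    return max(scores, key=scores.get)
```

From a total of 8, the rollouts unambiguously favor playing 2, the immediately winning move, which is exactly the kind of judgment the playouts are meant to surface.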

Read the entire story here.

Image: Korean couple, in traditional dress, play Go; photograph dated between 1910 and 1920. Courtesy: Frank and Frances Carpenter Collection. Public Domain.

Send to Kindle

MondayMap: Search by State

This treasure of a map shows the most popular Google search terms by state in 2015.

Google-search-by-state-2015

The vastly different searches show how the United States really is a collection of very diverse and loosely federated communities. The US may be a great melting pot, but down at the state level its residents seem to care about very different things.

For instance, while Floridians’ favorite search was “concealed weapons permit”, residents of Mississippi went rather dubiously for “Ashley Madison”, and Oklahoma’s top search was “Caitlyn Jenner”. Kudos to my home state, whose residents put aside politics, reality TV, guns and other inanities by searching most for “water on mars”. Similarly, citizens of New Mexico looked far beyond their borders by searching most for “Pluto”.

And, I have to scratch my head over why New York State cares more about “Charlie Sheen HIV” and Kentucky prefers “Dusty Rhodes” over Washington State’s search for “Leonard Nimoy”.

The map was put together by the kind people at Estately. You can read more fascinating state-by-state search rankings here.

Send to Kindle

Hate Crimes and the Google Correlation

Google-search-hate-speech

It had never occurred to me, but it makes perfect sense: there’s a direct correlation between Muslim hate crimes and Muslim hate searches on Google. For that matter, there is probably a correlation between other types of hate speech and hate crimes — women, gays, lesbians, bosses, blacks, whites, bad drivers, religion X. But it is certainly the case that Muslims and the Islamic religion are currently taking the brunt, both online and in the real world.

Clearly, we have a long way to go in learning that entire populations are not to blame for the criminal acts of a few. However, back to the correlations.

Mining of Google search data shows indisputable relationships. As the researchers point out, “When Islamophobic searches are at their highest levels, such as during the controversy over the ‘ground zero mosque’ in 2010 or around the anniversary of 9/11, hate crimes tend to be at their highest levels, too.” Interestingly enough, there are currently just over 50 daily searches for “I hate my boss” in the US. In November there were 120 searches per day for “I hate Muslims”.
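The researchers’ claim is, at bottom, a statement about correlation between two weekly time series: search volume and hate crimes. A quick sketch of how such a correlation coefficient is computed, using made-up weekly numbers rather than the study’s data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Made-up weekly numbers purely for illustration (NOT the study's data):
searches = [120, 95, 300, 410, 150, 500]  # hateful-search volume per week
crimes = [3, 2, 7, 9, 4, 11]              # reported hate crimes per week
r = pearson(searches, crimes)             # close to +1: strongly correlated
```

A coefficient near +1 means the two series rise and fall together, which is the pattern the study reports for Islamophobic searches and anti-Muslim hate crimes.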

So, here’s an idea. Let’s get Google to replace the “I’m Feeling Lucky” button on the search page (who uses that anyway?) with “I’m Feeling Hateful”. This would make the search more productive for those needing to vent their hatred.

More from NYT:

HOURS after the massacre in San Bernardino, Calif., on Dec. 2, and minutes after the media first reported that at least one of the shooters had a Muslim-sounding name, a disturbing number of Californians had decided what they wanted to do with Muslims: kill them.

The top Google search in California with the word “Muslims” in it was “kill Muslims.” And the rest of America searched for the phrase “kill Muslims” with about the same frequency that they searched for “martini recipe,” “migraine symptoms” and “Cowboys roster.”

People often have vicious thoughts. Sometimes they share them on Google. Do these thoughts matter?

Yes. Using weekly data from 2004 to 2013, we found a direct correlation between anti-Muslim searches and anti-Muslim hate crimes.

We measured Islamophobic sentiment by using common Google searches that imply hateful attitudes toward Muslims. A search for “are all Muslims terrorists?” for example leaves little to the imagination about what the searcher really thinks. Searches for “I hate Muslims” are even clearer.

When Islamophobic searches are at their highest levels, such as during the controversy over the “ground zero mosque” in 2010 or around the anniversary of 9/11, hate crimes tend to be at their highest levels, too.

In 2014, according to the F.B.I., anti-Muslim hate crimes represented 16.3 percent of the total of 1,092 reported offenses. Anti-Semitism still led the way as a motive for hate crimes, at 58.2 percent.

Hate crimes may seem chaotic and unpredictable, a consequence of random neurons that happen to fire in the brains of a few angry young men. But we can explain some of the rise and fall of anti-Muslim hate crimes just based on what people are Googling about Muslims.

The frightening thing is this: If our model is right, Islamophobia and thus anti-Muslim hate crimes are currently higher than at any time since the immediate aftermath of the Sept. 11 attacks. Although it will take a while for the F.B.I. to collect and analyze the data before we know whether anti-Muslim hate crimes are in fact rising spectacularly now, Islamophobic searches in the United States were 10 times higher the week after the Paris attacks than the week before. They have been elevated since then and rose again after the San Bernardino attack.

According to our model, when all the data is analyzed by the F.B.I., there will have been more than 200 anti-Muslim attacks in 2015, making it the worst year since 2001.

How can these Google searches track Islamophobia so well? Who searches for “I hate Muslims” anyway?

We often think of Google as a source from which we seek information directly, on topics like the weather, who won last night’s game or how to make apple pie. But sometimes we type our uncensored thoughts into Google, without much hope that Google will be able to help us. The search window can serve as a kind of confessional.

There are thousands of searches every year, for example, for “I hate my boss,” “people are annoying” and “I am drunk.” Google searches expressing moods, rather than looking for information, represent a tiny sample of everyone who is actually thinking those thoughts.

There are about 1,600 searches for “I hate my boss” every month in the United States. In a survey of American workers, half of the respondents said that they had left a job because they hated their boss; there are about 150 million workers in America.

In November, there were about 3,600 searches in the United States for “I hate Muslims” and about 2,400 for “kill Muslims.” We suspect these Islamophobic searches represent a similarly tiny fraction of those who had the same thoughts but didn’t drop them into Google.

“If someone is willing to say ‘I hate them’ or ‘they disgust me,’ we know that those emotions are as good a predictor of behavior as actual intent,” said Susan Fiske, a social psychologist at Princeton, pointing to 50 years of psychology research on anti-black bias. “If people are making expressive searches about Muslims, it’s likely to be tied to anti-Muslim hate crime.”

Google searches seem to suffer from selection bias: Instead of asking a random sample of Americans how they feel, you just get information from those who are motivated to search. But this restriction may actually help search data predict hate crimes.

Read more here.

Image courtesy of Google Search.

 

Send to Kindle

Google: The Standard Oil of Our Age

Google’s aim to organize the world’s information sounds benign enough. But delve a little deeper into its research and development efforts or witness its boundless encroachment into advertising, software, phones, glasses, cars, home automation, travel, internet services, artificial intelligence, robotics, online shopping (and so on), and you may get a more uneasy and prickly sensation. Is Google out to organize information or you? Perhaps it’s time to begin thinking about Google as a corporate hegemony, not quite a monopoly yet, but so powerful that counter-measures become warranted.

An open letter, excerpted below, from Mathias Döpfner, CEO of Axel Springer AG, does us all a service by raising the alarm bells.

From the Guardian:

Dear Eric Schmidt,

As you know, I am a great admirer of Google’s entrepreneurial success. Google’s employees are always extremely friendly to us and to other publishing houses, but we are not communicating with each other on equal terms. How could we? Google doesn’t need us. But we need Google. We are afraid of Google. I must state this very clearly and frankly, because few of my colleagues dare do so publicly. And as the biggest among the small, perhaps it is also up to us to be the first to speak out in this debate. You yourself speak of the new power of the creators, owners, and users.

In the long term I’m not so sure about the users. Power is soon followed by powerlessness. And this is precisely the reason why we now need to have this discussion in the interests of the long-term integrity of the digital economy’s ecosystem. This applies to competition – not only economic, but also political. As the situation stands, your company will play a leading role in the various areas of our professional and private lives – in the house, in the car, in healthcare, in robotronics. This is a huge opportunity and a no less serious threat. I am afraid that it is simply not enough to state, as you do, that you want to make the world a “better place”.

Google lists its own products, from e-commerce to pages from its own Google+ network, higher than those of its competitors, even if these are sometimes of less value for consumers and should not be displayed in accordance with the Google algorithm. It is not even clearly pointed out to the user that these search results are the result of self-advertising. Even when a Google service has fewer visitors than that of a competitor, it appears higher up the page until it eventually also receives more visitors.

You know very well that this would result in long-term discrimination against, and weakening of, any competition, meaning that Google would be able to develop its superior market position still further. And that this would further weaken the European digital economy in particular.

This also applies to the large and even more problematic set of issues concerning data security and data utilisation. Ever since Edward Snowden triggered the NSA affair, and ever since the close relations between major American online companies and the American secret services became public, the social climate – at least in Europe – has fundamentally changed. People have become more sensitive about what happens to their user data. Nobody knows as much about its customers as Google. Even private or business emails are read by Gmail and, if necessary, can be evaluated. You yourself said in 2010: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” This is a remarkably honest sentence. The question is: are users happy with the fact that this information is used not only for commercial purposes – which may have many advantages, yet a number of spooky negative aspects as well – but could end up in the hands of the intelligence services, and to a certain extent already has?

Google is sitting on the entire current data trove of humanity, like the giant Fafner in The Ring of the Nibelung: “Here I lie and here I hold.” I hope you are aware of your company’s special responsibility. If fossil fuels were the fuels of the 20th century, then those of the 21st century are surely data and user profiles. We need to ask ourselves whether competition can generally still function in the digital age, if data is so extensively concentrated in the hands of one party.

There is a quote from you in this context that concerns me. In 2009 you said: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” The essence of freedom is precisely the fact that I am not obliged to disclose everything that I am doing, that I have a right to confidentiality and, yes, even to secrets; that I am able to determine for myself what I wish to disclose about myself. The individual right to this is what makes a democracy. Only dictatorships want transparent citizens instead of a free press.

Against this background, it greatly concerns me that Google – which has just announced the acquisition of drone manufacturer Titan Aerospace – has been seen for some time as being behind a number of planned enormous ships and floating working environments that can cruise and operate in the open ocean. What is the reason for this development? You don’t have to be a conspiracy theorist to find this alarming.

Historically, monopolies have never survived in the long term. Either they have failed as a result of their complacency, which breeds its own success, or they have been weakened by competition – both unlikely scenarios in Google’s case. Or they have been restricted by political initiatives.

Another way would be voluntary self-restraint on the part of the winner. Is it really smart to wait until the first serious politician demands the breakup of Google? Or even worse – until the people refuse to follow?

Sincerely yours,

Mathias Döpfner

Read the entire article here.

 

Send to Kindle

Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom suggests that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators on the other hand follow a different mantra; they invent in hallways and cubes — everywhere.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open plan, novelty strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).

Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifesciences, and Elon Musk’s companies, Tesla Motors and SpaceX, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.

Send to Kindle

Techno-Blocking Technology

google-glass2

Many technologists, philosophers and social scientists who consider the ethics of technology have described it as a double-edged sword. Indeed, observation does seem to uphold this idea: for every benefit gained from a new invention comes a mirroring disadvantage or peril. Not that technology per se is a threat — but its human masters seem to be rather adept at deploying it for both good and evil ends.

As a corollary, it is also evident that many a new technology spawns others, and sometimes entire industries, to counteract the first. Radar begets radar-evading materials; the radio begets the radio-jamming transmitter; cryptography begets hacking. You get the idea.

So, not a moment too soon, comes PlaceAvoider, a technology to suppress the capture and sharing of images taken through devices like Google Glass. Watch out, Brin, Page and company: the watchers are watching you.

From Technology Review:

With last year’s launch of the Narrative Clip and Autographer, and Google Glass poised for release this year, technologies that can continuously capture our daily lives with photos and videos are inching closer to the mainstream. These gadgets can generate detailed visual diaries, drive self-improvement, and help those with memory problems. But do you really want to record in the bathroom or a sensitive work meeting?

Assuming that many people don’t, computer scientists at Indiana University have developed software that uses computer vision techniques to automatically identify potentially confidential or embarrassing pictures taken with these devices and prevent them from being shared. A prototype of the software, called PlaceAvoider, will be presented at the Network and Distributed System Security Symposium in San Diego in February.

“There simply isn’t the time to manually curate the thousands of images these devices can generate per day, and in a socially networked world that might lead to the inadvertent sharing of photos you don’t want to share,” says Apu Kapadia, who co-leads the team that developed the system. “Or those who are worried about that might just not share their life-log streams, so we’re trying to help people exploit these applications to the full by providing them with a way to share safely.”

Kapadia’s group began by acknowledging that devising algorithms that can identify sensitive pictures solely on the basis of visual content is probably impossible, since the things that people do and don’t want to share can vary widely and may be difficult to recognize. They set about designing software that users train by taking pictures of the rooms they want to blacklist. PlaceAvoider then flags new pictures taken in those rooms so the user will review them.

The system uses an existing computer-vision algorithm called scale-invariant feature transform (SIFT) to pinpoint regions of high contrast around corners and edges within the training images that are likely to stay visually constant even in varying light conditions and from different perspectives. For each of these, it produces a “numerical fingerprint” consisting of 128 separate numbers relating to properties such as color and texture, as well as its position relative to other regions of the image. Since images are sometimes blurry, PlaceAvoider also looks at more general properties such as colors and textures of walls and carpets, and takes into account the sequence in which shots are taken.
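PlaceAvoider's exact pipeline isn't reproduced in the article, but the core idea of matching an image's descriptor "fingerprints" against a blacklisted room's training fingerprints can be sketched. A toy version in Python with NumPy follows; the random 128-number descriptors, the nearest-neighbour ratio test and the match threshold are illustrative stand-ins, not the system's actual parameters:

```python
import numpy as np

def match_count(img_desc, room_desc, ratio=0.75):
    """Count image descriptors whose nearest neighbour among a room's
    descriptors is clearly closer than the second nearest (the ratio
    test commonly paired with SIFT's 128-number fingerprints)."""
    matches = 0
    for d in img_desc:
        dists = np.sort(np.linalg.norm(room_desc - d, axis=1))
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches

def flag_room(img_desc, blacklist, threshold=10):
    """Return the name of the first blacklisted room the image appears
    to have been taken in, or None if it looks safe to share."""
    for room, room_desc in blacklist.items():
        if match_count(img_desc, room_desc) >= threshold:
            return room
    return None

# Train on one blacklisted room, then test a new shot of the same room.
rng = np.random.default_rng(0)
bathroom = rng.normal(size=(50, 128))   # stand-in training descriptors
new_shot = bathroom[:20] + rng.normal(scale=0.01, size=(20, 128))
print(flag_room(new_shot, {"bathroom": bathroom}))  # -> bathroom
```

A real implementation would extract the descriptors with SIFT rather than generate them, but the review-or-quarantine decision reduces to a thresholded match count much like this.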

In tests, the system accurately determined whether images from streams captured in the homes and workplaces of the researchers were from blacklisted rooms an average of 89.8 percent of the time.

PlaceAvoider is currently a research prototype; its various components have been written but haven’t been combined as a completed product, and researchers used a smartphone worn around the neck to take photos rather than an existing device meant for life-logging. If developed to work on a life-logging device, an interface could be designed so that PlaceAvoider can flag potentially sensitive images at the time they are taken or place them in quarantine to be dealt with later.

Read the entire article here.

Image: Google Glass. Courtesy of Google.


2014: The Year of Big Stuff


Over the closing days of each year, or the first few days of the coming one, prognosticators the world over tell us about the future. Yet, while no one has ever been proven to have prescient skills — despite what your psychic tells you — we all like to dabble in the art of prediction. Google’s Eric Schmidt has one big prediction for 2014: big. Everything will be big — big data, big genomics, even bigger smartphones and, of course, even bigger mistakes.

So, with that, a big Happy New Year to all our faithful readers and seers across our fragile and beautiful blue planet.

From the Guardian:

What does 2014 hold? According to Eric Schmidt, Google’s executive chairman, it means smartphones everywhere – and also the possibility of genetics data being used to develop new cures for cancer.

In an appearance on Bloomberg TV, Schmidt laid out his thoughts about general technological change, Google’s biggest mistake, and how Google sees the economy going in 2014.

“The biggest change for consumers is going to be that everyone’s going to have a smartphone,” Schmidt says. “And the fact that so many people are connected to what is essentially a supercomputer means a whole new generation of applications around entertainment, education, social life, those kinds of things. The trend has been that mobile is winning; it’s now won. There are more tablets and phones being sold than personal computers – people are moving to this new architecture very fast.”

It’s certainly true that tablets and smartphones are outselling PCs – in fact smartphones alone have been doing that since the end of 2010. This year, it’s forecast that tablets will have passed “traditional” PCs (desktops, fixed-keyboard laptops) too.

Disrupting business

Next, Schmidt says there’s a big change – a disruption – coming for business through the arrival of “big data”: “The biggest disruptor that we’re sure about is the arrival of big data and machine intelligence everywhere – so the ability [for businesses] to find people, to talk specifically to them, to judge them, to rank what they’re doing, to decide what to do with your products, changes every business globally.”

But he also sees potential in the field of genomics – the parsing of all the data being collected from DNA and gene sequencing. That might not be surprising, given that Google is an investor in 23andme, a gene sequencing company which aims to collect the genomes of a million people so that it can do data-matching analysis on their DNA. (Unfortunately, that plan has hit a snag: 23andme has been told to cease operating by the US Food and Drug Administration because it has failed to respond to inquiries about its testing methods and publication of results.)

Here’s what Schmidt has to say on genomics: “The biggest disruption that we don’t really know what’s going to happen is probably in the genetics area. The ability to have personal genetics records and the ability to start gathering all of the gene sequencing into places will yield discoveries in cancer treatment and diagnostics over the next year that are unfathomably important.”

It may be worth mentioning that “we’ll find cures through genomics” has been the promise held up by scientists every year since the human genome was first sequenced. So far, it hasn’t happened – as much as anything because human gene variation is remarkably big, and there’s still a lot that isn’t known about the interaction of what appears to be non-functional parts of our DNA (which doesn’t seem to code to produce proteins) and the parts that do code for proteins.

Biggest mistake

As for Google’s biggest past mistake, Schmidt says it’s missing the rise of Facebook and Twitter: “At Google the biggest mistake that I made was not anticipating the rise of the social networking phenomenon – not a mistake we’re going to make again. I guess in our defence we were working on many other things, but we should have been in that area, and I take responsibility for that.” The results of that effort to catch up can be seen in the way that Google+ is popping up everywhere – though it’s wrong to think of Google+ as a social network, since it’s more of a way that Google creates a substrate on the web to track individuals.

And what is Google doing in 2014? “Google is very much investing, we’re hiring globally, we see strong growth all around the world with the arrival of the internet everywhere. It’s all green in that sense from the standpoint of the year. Google benefits from transitions from traditional industries, and shockingly even when things are tough in a country, because we’re “return-on-investment”-based advertising – it’s smarter to move your advertising from others to Google, so we win no matter whether the industries are in good shape or not, because people need our services, we’re very proud of that.”

For Google, the sky’s the limit: “the key limiter on our growth is our rate of innovation, how smart are we, how clever are we, how quickly can we get these new systems deployed – we want to do that as fast as we can.”

It’s worth noting that Schmidt has a shaky track record on predictions. At Le Web in 2011 he famously forecast that developers would be shunning iOS to start developing on Android first, and that Google TV would be installed on 50% of all TVs on sale by summer 2012.

It didn’t turn out that way: even now, many apps start on iOS, and Google TV fizzled out as companies such as Logitech found that it didn’t work as well as Android to tempt buyers.

Since then, Schmidt has been a lot more cautious about predicting trends and changes – although he hasn’t been above the occasional comment which seems calculated to get a rise from his audience, such as telling executives at a Gartner conference that Android was more secure than the iPhone – which they apparently found humorous.

Read the entire article here.

Image: Happy New Year, 2014 Google doodle. Courtesy of Google.


Global Domination — One Pixel at a Time


Google’s story began with text-based search and was quickly followed by digital maps. These simple innovations ushered in the company’s mission to organize the world’s information. But as Google ventures further from its roots into mobile operating systems (Android), video (YouTube), social media (Google+), smartphone hardware (through its purchase of Motorola’s mobile business), augmented reality (Google Glass), Web browsers (Chrome) and notebook hardware (Chromebook), what of its core mapping service? And is global domination all that it’s cracked up to be?

From the NYT:

Fifty-five miles and three days down the Colorado River from the put-in at Lee’s Ferry, near the Utah-Arizona border, the two rafts in our little flotilla suddenly encountered a storm. It sneaked up from behind, preceded by only a cool breeze. With the canyon walls squeezing the sky to a ribbon of blue, we didn’t see the thunderhead until it was nearly on top of us.

I was seated in the front of the lead raft. Pole position meant taking a dunk through the rapids, but it also put me next to Luc Vincent, the expedition’s leader. Vincent is the man responsible for all the imagery in Google’s online maps. He’s in charge of everything from choosing satellite pictures to deploying Google’s planes around the world to sending its camera-equipped cars down every road to even this, a float through the Grand Canyon. The raft trip was a mapping expedition that was also serving as a celebration: Google Maps had just introduced a major redesign, and the outing was a way of rewarding some of the team’s members.

Vincent wore a black T-shirt with the eagle-globe-and-anchor insignia of the United States Marine Corps on his chest and the slogan “Pain is weakness leaving the body” across his back. Though short in stature, he has the upper-body strength of an avid rock climber. He chose to get his Ph.D. in computer vision, he told me, because the lab happened to be close to Fontainebleau — the famous climbing spot in France. While completing his postdoc at the Harvard Robotics Lab, he led a successful expedition up Denali, the highest peak in North America.

A Frenchman who has lived half his 49 years in the United States, Vincent was never in the Marines. But he is a leader in a new great game: the Internet land grab, which can be reduced to three key battles over three key conceptual territories. What came first, conquered by Google’s superior search algorithms. Who was next, and Facebook was the victor. But where, arguably the biggest prize of all, has yet to be completely won.

Where-type questions — the kind that result in a little map popping up on the search-results page — account for some 20 percent of all Google queries done from the desktop. But ultimately more important by far is location-awareness, the sort of geographical information that our phones and other mobile devices already require in order to function. In the future, such location-awareness will be built into more than just phones. All of our stuff will know where it is — and that awareness will imbue the real world with some of the power of the virtual. Your house keys will tell you that they’re still on your desk at work. Your tools will remind you that they were lent to a friend. And your car will be able to drive itself on an errand to retrieve both your keys and your tools.

While no one can say exactly how we will get from the current moment to that Jetsonian future, one thing for sure can be said about location-awareness: maps are required. Tomorrow’s map, integrally connected to everything that moves (the keys, the tools, the car), will be so fundamental to their operation that the map will, in effect, be their operating system. A map is to location-awareness as Windows is to a P.C. And as the history of Microsoft makes clear, a company that controls the operating system controls just about everything. So the competition to make the best maps, the thinking goes, is more than a struggle over who dominates the trillion-dollar smartphone market; it’s a contest over the future itself.

Google was relatively late to this territory. Its map was only a few months old when it was featured at Tim O’Reilly’s inaugural Where 2.0 conference in 2005. O’Reilly is a publisher and a well-known visionary in Silicon Valley who is convinced that the Internet is evolving into a single vast, shared computer, one of whose most important individual functions, or subroutines, is location-awareness.

Google’s original map was rudimentary, essentially a digitized road atlas. Like the maps from Microsoft and Yahoo, it used licensed data, and areas outside the United States and Europe were represented as blue emptiness. Google’s innovation was the web interface: its map was draggable, zoomable, pannable.

These new capabilities were among the first implementations of a technology that turned what had been a static medium — a web of pages — into a dynamic one. MapQuest and similar sites showed you maps; Google let you interact with them. Developers soon realized that they could take advantage of that dynamism to hack Google’s map, add their own data and create their very own location-based services.

A computer scientist named Paul Rademacher did just that when he invented a technique to facilitate apartment-hunting in San Francisco. Frustrated by the limited, bare-bones nature of Craigslist’s classified ads and inspired by Google’s interactive quality, Rademacher spent six weeks overlaying Google’s map with apartment listings from Craigslist. The result, HousingMaps.com, was one of the web’s first mash-ups.

Read the entire article here.

Image: Luc Vincent, head of Google Maps imagery. Courtesy of NYT Magazine.


Google Hacks

Some cool shortcuts to make the most of Google search.

From the Telegraph:

1. Calculator

Google’s calculator function is far more powerful than most people realise. As well as doing basic maths (5+6 or 3*2) it can do logarithmic calculations, and it knows constants (like e and pi), as well as functions like Cos and Sin. Google can also translate numbers into binary code – try typing ’12*3 in binary’.
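For anyone who wants to sanity-check those search-box results offline, the same calculations in Python:

```python
import math

print(5 + 6)             # basic maths -> 11
print(3 * 2)             # -> 6
print(math.log(math.e))  # logarithms and the constant e -> 1.0
print(math.cos(math.pi)) # trig functions with pi -> -1.0
print(bin(12 * 3))       # '12*3 in binary' -> '0b100100'
```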

2. Site search

By using the ‘site:’ keyword, you can make Google only return results from one site. So for example, you could search for “site:telegraph.co.uk manchester united” and only get stories on Manchester United from the Telegraph website.

3. Conversions

Currency conversions and unit conversions can be found by using the syntax ‘<amount> <unit> in <unit>’. So for example, you could type ‘1 GBP in USD’, ’20 C in F’ or ’15 inches in cm’ and get an instant answer.

4. Time zones

Search for ‘time in <place>’ and you will get the local time for that place, as well as the time zone it is in.

5. Translations

A quick way to translate foreign words is to type ‘translate <word> to <language>’. So for example, ‘translate pomme to english’ returns the result ‘apple’, and ‘translate pomme to spanish’ returns the result ‘manzana’.

6. Search for a specific file type

If you know you are looking for a PDF or a Word file, you can search for specific file types by typing ‘<search term> filetype:pdf’ or ‘<search term> filetype:doc’.

7. Check flight status

If you type in a flight number, the top result is the details of the flight and its status. So, for example, typing in BA 335 reveals that British Airways flight 335 departs Paris at 15.45 today and arrives at Heathrow Terminal 5 at 15.48 local time.

8. Search for local film showings

Search for film showings in your area by typing ‘films’ or ‘movies’ followed by your postcode. In the UK, this only narrows it down to your town or city. In the US this is more accurate, as results are displayed according to zip-code.

9. Weather forecasts

Type the name of a city followed by ‘forecast’, and Google will tell you the weather today, including levels of precipitation, humidity and wind, as well as the forecast for the next week, based on data from The Weather Channel.

10. Exclude search terms

When you enter a search term that has a second meaning, or a close association with something else, it can be difficult to find the results you want. Exclude irrelevant results using the ‘-‘ sign. So for searches for ‘apple’ where the word ‘iPhone’ is not used, enter ‘apple -iPhone’.

Read the entire article here.

Image courtesy of Google.


Linguistic Vectors

Our friends at Google have transformed the challenge of language translation from one of linguistics to mathematics.

By mapping the linguistic structure of one language as vectors in a mathematical space, then aligning those vectors with the structure of a few similar words in another language, they have reduced the effort to a set of equations. Their early results on English-to-Spanish translation seem very promising. (Now, if they could only address human conflict, aging and death.)

Visit arXiv for a pre-print of their research.

From Technology Review:

Computer science is changing the nature of the translation of words and sentences from one language to another. Anybody who has tried BabelFish or Google Translate will know that they provide useful translation services but ones that are far from perfect.

The basic idea is to compare a corpus of words in one language with the same corpus of words translated into another. Words and phrases that share similar statistical properties are considered equivalent.

The problem, of course, is that the initial translations rely on dictionaries that have to be compiled by human experts and this takes significant time and effort.

Now Tomas Mikolov and a couple of pals at Google in Mountain View have developed a technique that automatically generates dictionaries and phrase tables that convert one language into another.

The new technique does not rely on versions of the same document in different languages. Instead, it uses data mining techniques to model the structure of a single language and then compares this to the structure of another language.

“This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs,” they say.

The new approach is relatively straightforward. It relies on the notion that every language must describe a similar set of ideas, so the words that do this must also be similar. For example, most languages will have words for common animals such as cat, dog, cow and so on. And these words are probably used in the same way in sentences such as “a cat is an animal that is smaller than a dog.”

The same is true of numbers. The image above shows the vector representations of the numbers one to five in English and Spanish and demonstrates how similar they are.

This is an important clue. The new trick is to represent an entire language using the relationship between its words. The set of all the relationships, the so-called “language space”, can be thought of as a set of vectors that each point from one word to another. And in recent years, linguists have discovered that it is possible to handle these vectors mathematically. For example, the operation ‘king’ – ‘man’ + ‘woman’ results in a vector that is similar to ‘queen’.
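The vector arithmetic is easy to reproduce on a toy scale. The two-dimensional vectors below are hand-picked so the example works; real embeddings are learned from text and run to hundreds of dimensions:

```python
import numpy as np

# Hand-picked 2-D stand-ins for learned word vectors.
vectors = {
    "king":  np.array([0.9, 0.8]),
    "man":   np.array([0.5, 0.2]),
    "woman": np.array([0.5, 0.7]),
    "queen": np.array([0.9, 1.3]),
    "apple": np.array([0.1, 0.1]),
}

def nearest(target, vocab, exclude=()):
    """Word whose vector lies closest to target (Euclidean distance)."""
    return min((w for w in vocab if w not in exclude),
               key=lambda w: np.linalg.norm(vocab[w] - target))

# 'king' - 'man' + 'woman' lands on (or near) 'queen'.
target = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(target, vectors, exclude={"king", "man", "woman"}))  # -> queen
```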

It turns out that different languages share many similarities in this vector space. That means the process of converting one language into another is equivalent to finding the transformation that converts one vector space into the other.

This turns the problem of translation from one of linguistics into one of mathematics. So the problem for the Google team is to find a way of accurately mapping one vector space onto the other. For this they use a small bilingual dictionary compiled by human experts; comparing the same corpus of words in the two languages gives them a ready-made linear transformation that does the trick.
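That mapping step can be sketched in a few lines of NumPy. The "language pair" below is synthetic: the target space is just a noisy rotation of the source space, standing in for the structural similarity the method relies on, and the dimensions and noise level are invented for illustration. The translation matrix is fit by ordinary least squares, as in the linear-mapping setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# 200 'seed dictionary' word vectors in a 10-D source language space.
X = rng.normal(size=(200, 10))

# Pretend the target language's space is a fixed rotation of the
# source space, plus a little noise.
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))   # random rotation
Z = X @ Q + rng.normal(scale=0.01, size=(200, 10))

# Fit the translation matrix W minimising ||X @ W - Z||^2.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

# 'Translate' a held-out source vector: it lands near its true
# counterpart in the target space.
x_new = rng.normal(size=(1, 10))
err = np.linalg.norm(x_new @ W - x_new @ Q)
print(err)  # small residual, on the order of the injected noise
```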

Having identified this mapping, it is then a simple matter to apply it to the bigger language spaces. Mikolov and co say it works remarkably well. “Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish,” they say.
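Precision@5 counts a word as correctly translated when the right answer appears among the system's top five candidates. A tiny illustration of the metric; the candidate lists and gold pairs below are made up:

```python
def precision_at_k(candidates, gold, k=5):
    """Fraction of source words whose correct translation appears
    among the top-k candidates proposed for them."""
    hits = sum(1 for word, answer in gold.items()
               if answer in candidates.get(word, [])[:k])
    return hits / len(gold)

# Hypothetical top-5 English->Spanish candidate lists.
candidates = {
    "cat":   ["gato", "perro", "casa", "luna", "sol"],
    "dog":   ["casa", "perro", "gato", "rey", "uno"],
    "house": ["uno", "dos", "tres", "cuatro", "cinco"],
}
gold = {"cat": "gato", "dog": "perro", "house": "casa"}
print(precision_at_k(candidates, gold))  # 2 of 3 correct -> 0.666...
```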

The method can be used to extend and refine existing dictionaries, and even to spot mistakes in them. Indeed, the Google team do exactly that with an English-Czech dictionary, finding numerous mistakes.

Finally, the team point out that since the technique makes few assumptions about the languages themselves, it can be used on argots that are entirely unrelated. So while Spanish and English have a common Indo-European history, Mikolov and co show that the new technique also works just as well for pairs of languages that are less closely related, such as English and Vietnamese.

Read the entire article here.


UnGoogleable: The Height of Cool

So, it is no longer a surprise: our digital lives are tracked, correlated, stored and examined. The NSA (National Security Agency) does it to determine whether you are an unsavory type; Google does it to serve you better information and ads; and a whole host of other companies do it to sell you more things that you probably don’t need, at prices you can’t afford. This of course raises deep and troubling questions about privacy. With this in mind, some are taking ownership of the issue and seeking to erase themselves from the vast digital Orwellian eye. To others, however, being untraceable online is a fashion statement rather than a victory for privacy.

From the Guardian:

“The chicest thing,” said fashion designer Phoebe Philo recently, “is when you don’t exist on Google. God, I would love to be that person!”

Philo, creative director of Céline, is not that person. As the London Evening Standard put it: “Unfortunately for the famously publicity-shy London designer – Paris born, Harrow-on-the-Hill raised – who has reinvented the way modern women dress, privacy may well continue to be a luxury.” Nobody who is oxymoronically described as “famously publicity-shy” will ever be unGoogleable. And if you’re not unGoogleable then, if Philo is right, you can never be truly chic, even if you were born in Paris. And if you’re not truly chic, then you might as well die – at least if you’re in fashion.

If she truly wanted to disappear herself from Google, Philo could start by changing her superb name to something less diverting. Prize-winning novelist AM Homes is an outlier in this respect. Google “am homes” and you’re in a world of blah US real estate rather than cutting-edge literature. But then Homes has thought a lot about privacy, having written a play about the most famously private person in recent history, JD Salinger, and had him threaten to sue her as a result.

And Homes isn’t the only one to make herself difficult to detect online. UnGoogleable bands are 10 a penny. The New York-based band !!! (known verbally as “chick chick chick” or “bang bang bang” – apparently “Exclamation point, exclamation point, exclamation point” proved too verbose for their meagre fanbase) must drive their business manager nuts. As must the band Merchandise, whose name – one might think – is a nominalist satire of commodification by the music industry. Nice work, Brad, Con, John and Rick.


If Philo renamed herself online as Google Maps or @, she might make herself more chic.

Welcome to anonymity chic – the antidote to an online world of exhibitionism. But let’s not go crazy: anonymity may be chic, but it is no business model. For years XXX Porn Site, my confusingly named alt-folk combo, has remained undiscovered. There are several bands called Girls (at least one of them including, confusingly, dudes) and each one has worried – after a period of chic iconoclasm – that such a putatively cool name means no one can find them online.

But still, maybe we should all embrace anonymity, given this week’s revelations that technology giants cooperated in Prism, a top-secret system at the US National Security Agency that collects emails, documents, photos and other material for secret service agents to review. It has also been a week in which Lindsay Mills, girlfriend of NSA whistleblower Edward Snowden, has posted on her blog (entitled: “Adventures of a world-traveling, pole-dancing super hero” with many photos showing her performing with the Waikiki Acrobatic Troupe) her misery that her fugitive boyfriend has fled to Hong Kong. Only a cynic would suggest that this blog post might help the Waikiki Acrobatic Troupe veteran’s career at this – serious face – difficult time. Better the dignity of silent anonymity than using the internet for that.

Furthermore, as social media diminishes us with not just information overload but the 24/7 servitude of liking, friending and status updating, this going under the radar reminds us that we might benefit from withdrawing the labour on which the founders of Facebook, Twitter and Instagram have built their billions. “Today our intense cultivation of a singular self is tied up in the drive to constantly produce and update,” argues Geert Lovink, research professor of interactive media at the Hogeschool van Amsterdam and author of Networks Without a Cause: A Critique of Social Media. “You have to tweet, be on Facebook, answer emails,” says Lovink. “So the time pressure on people to remain present and keep up their presence is a very heavy load that leads to what some call the psychopathology of online.”

Internet evangelists such as Clay Shirky and Charles Leadbeater hoped for something very different from this pathologised reality. In Shirky’s Here Comes Everybody and Leadbeater’s We-Think, both published in 2008, the nascent social media were to echo the anti-authoritarian, democratising tendencies of the 60s counterculture. Both men revelled in the fact that new web-based social tools helped single mothers looking online for social networks and pro-democracy campaigners in Belarus. Neither sufficiently realised that these tools could just as readily be co-opted by The Man. Or, if you prefer, Mark Zuckerberg.

Not that Zuckerberg is the devil in this story. Social media have changed the way we interact with other people in line with what the sociologist Zygmunt Bauman wrote in Liquid Love. For us “liquid moderns”, who have lost faith in the future, cannot commit to relationships and have few kinship ties, Zuckerberg created a new way of belonging, one in which we use our wits to create provisional bonds loose enough to stop suffocation, but tight enough to give a needed sense of security now that the traditional sources of solace (family, career, loving relationships) are less reliable than ever.

Read the entire article here.


Amazon All the Time and Google Toilet Paper

Soon courtesy of Amazon, Google and other retail giants, and of course lubricated by the likes of the ubiquitous UPS and Fedex trucks, you may be able to dispense with the weekly or even daily trip to the grocery store. Amazon is expanding a trial of its same-day grocery delivery service, and others are following suit in select local and regional tests.

You may recall the spectacular implosion of the online grocery delivery service Webvan — a dot-com darling — that came and went in the blink of an internet eye, finally going bankrupt in 2001. Well, times have changed and now avaricious Amazon and its peers have their eyes trained on your groceries.

So now all you need to do is find a service to deliver your kids to and from school, an employer who will let you work from home, convince your spouse that “staycations” are cool, use Google Street View to become a virtual tourist, and you will never, ever, ever, EVER need to leave your house again!

From Slate:

The other day I ran out of toilet paper. You know how that goes. The last roll in the house sets off a ticking clock; depending on how many people you live with and their TP profligacy, you’re going to need to run to the store within a few hours, a day at the max, or you’re SOL. (Unless you’re a man who lives alone, in which case you can wait till the next equinox.) But it gets worse. My last roll of toilet paper happened to coincide with a shortage of paper towels, a severe run on diapers (you know, for kids!), and the last load of dishwashing soap. It was a perfect storm of household need. And, as usual, I was busy and in no mood to go to the store.

This quotidian catastrophe has a happy ending. In April, I got into the “pilot test” for Google Shopping Express, the search company’s effort to create an e-commerce service that delivers goods within a few hours of your order. The service, which is currently being offered in the San Francisco Bay Area, allows you to shop online at Target, Walgreens, Toys R Us, Office Depot, and several smaller, local stores, like Blue Bottle Coffee. Shopping Express combines most of those stores’ goods into a single interface, which means you can include all sorts of disparate items in the same purchase. Shopping Express also offers the same prices you’d find at the store. After you choose your items, you select a delivery window—something like “Anytime Today” or “Between 2 p.m. and 6 p.m.”—and you’re done. On the fateful day that I’d run out of toilet paper, I placed my order at around noon. Shortly after 4, a green-shirted Google delivery guy strode up to my door with my goods. I was back in business, and I never left the house.

Google is reportedly thinking about charging $60 to $70 a year for the service, making it a competitor to Amazon’s Prime subscription plan. But at this point the company hasn’t finalized pricing, and during the trial period, the whole thing is free. I’ve found it easy to use, cheap, and reliable. Similar to my experience when I first got Amazon Prime, it has transformed how I think about shopping. In fact, in the short time I’ve been using it, Shopping Express has replaced Amazon as my go-to source for many household items. I used to buy toilet paper, paper towels, and diapers through Amazon’s Subscribe & Save plan, which offers deep discounts on bulk goods if you choose a regular delivery schedule. I like that plan when it works, but subscribing to items whose use is unpredictable—like diapers for a newborn—is tricky. I often either run out of my Subscribe & Save items before my next delivery, or I get a new delivery while I still have a big load of the old stuff. Shopping Express is far simpler. You get access to low-priced big-box-store goods without all the hassle of big-box stores—driving, parking, waiting in line. And you get all the items you want immediately.

After using it for a few weeks, it’s hard to escape the notion that a service like Shopping Express represents the future of shopping. (Also the past of shopping—the return of profitless late-1990s services like Kozmo and Webvan, though presumably with some way of making money this time.) It’s not just Google: Yesterday, Reuters reported that Amazon is expanding AmazonFresh, its grocery delivery service, to big cities beyond Seattle, where it has been running for several years. Amazon’s move confirms the theory I floated a year ago, that the e-commerce giant’s long-term goal is to make same-day shipping the norm for most of its customers.

Amazon’s main competitive disadvantage, today, is shipping delays. While shopping online makes sense for many purchases, the vast majority of the world’s retail commerce involves stuff like toilet paper and dishwashing soap—items that people need (or think they need) immediately. That explains why Wal-Mart sells half a trillion dollars’ worth of goods every year, and Amazon sells only $61 billion. Wal-Mart’s customers return several times a week to buy what they need for dinner, and while they’re there, they sometimes pick up higher-margin stuff, too. By offering same-day delivery on groceries and household items, Amazon and Google are trying to edge in on that market.

As I learned while using Shopping Express, the plan could be a hit. If done well, same-day shipping erases the distinctions between the kinds of goods we buy online and those we buy offline. Today, when you think of something you need, you have to go through a mental checklist: Do I need it now? Can it wait two days? Is it worth driving for? With same-day shipping, you don’t have to do that. All shopping becomes online shopping.

Read the entire article here.

Image: Webvan truck. Courtesy of Wikipedia.

Google’s AI

The collective IQ of Google, the company, inched up a few notches in January 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and his many colleagues pays off, the company’s intelligence may surge significantly. This time, though, it will be thanks to their work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
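The train-and-adjust loop described above can be sketched in a few lines of Python. This is a toy, single-neuron illustration of the idea, not the deep networks the article discusses; the learning rate, epoch count and all names are invented for the example. The neuron squashes its weighted sum into an output between 0 and 1 with a sigmoid, and an algorithm nudges the weights whenever the output misses the target:

```python
import math
import random

def sigmoid(x):
    # Squash any input into the (0, 1) range — the "mathematical
    # output between 0 and 1" of each simulated neuron.
    return 1.0 / (1.0 + math.exp(-x))

def train_neuron(samples, epochs=5000, rate=0.5, seed=0):
    # samples: list of (features, target) pairs, target 0 or 1.
    rng = random.Random(seed)
    n = len(samples[0][0])
    # Start with random "weights" on each connection.
    weights = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(epochs):
        for features, target in samples:
            out = sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
            # If the output misses the target, adjust each weight in
            # proportion to its input and the size of the error.
            error = target - out
            grad = error * out * (1.0 - out)
            weights = [w + rate * grad * f for w, f in zip(weights, features)]
            bias += rate * grad
    return weights, bias

# "Blitz" the neuron with four labelled examples of the OR pattern.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_neuron(data)
for features, target in data:
    out = sigmoid(sum(w * f for w, f in zip(weights, features)) + bias)
    print(features, round(out))  # rounded output matches each target
```

After enough passes the weights settle so that the neuron consistently reproduces the pattern, just as the trained network comes to recognize the phoneme “d” or the image of a dog.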

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
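The layer-by-layer idea is easier to see in miniature. The sketch below is only an illustration of the statistical test in the paragraph above — finding feature combinations that co-occur more often than chance — and not Hinton’s actual algorithm (which used restricted Boltzmann machines). All feature names and the data set are invented for the example:

```python
from collections import Counter
from itertools import combinations
import random

def learn_layer(samples, threshold=2.0):
    # Keep the feature pairs that fire together more often than
    # they should by chance across the whole data set.
    total = len(samples)
    singles = Counter()
    pairs = Counter()
    for sample in samples:
        singles.update(sample)
        pairs.update(combinations(sorted(sample), 2))
    learned = []
    for (a, b), seen in pairs.items():
        # If a and b were independent, the pair would appear about
        # count(a) * count(b) / total times.
        expected = singles[a] * singles[b] / total
        if seen > threshold * expected:
            learned.append((a, b))
    return learned

def encode(sample, learned):
    # The next layer's input: which learned combinations are present.
    return {pair for pair in learned if pair[0] in sample and pair[1] in sample}

# Toy data: 100 "images" of random noise features; 30 of them also
# contain two strokes that together suggest the letter "d".
rng = random.Random(1)
noise_pool = [f"n{i}" for i in range(12)]
samples = []
for i in range(100):
    sample = set(rng.sample(noise_pool, 3))
    if i < 30:
        sample |= {"left_loop", "vertical_bar"}
    samples.append(sample)

learned = learn_layer(samples)
print(("left_loop", "vertical_bar") in learned)  # prints True
next_layer_input = [encode(s, learned) for s in samples]
```

The two strokes co-occur 30 times against an expected 9 if they were independent, so the layer keeps that combination; the random noise pairs occur at roughly chance rates and are discarded. Repeating `learn_layer` on `next_layer_input` would build the successively more complex features the article describes.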

Read the entire fascinating article following the jump.

Image courtesy of Wired.

Ray Kurzweil and Living a Googol Years

By all accounts serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the first CCD flatbed scanner, omni-font optical character recognition and pioneering music synthesizers. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’”

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.

Read the entire article after the jump.

The Digital Afterlife and i-Death

Leave it to Google to help you auto-euthanize and die digitally. The presence of our online selves after death was of limited concern until recently. However, with the explosion of online media and social networks our digital tracks remain preserved and scattered across drives and backups in distributed, anonymous data centers. Physical death does not change this.

[A case in point: your friendly editor at theDiagonal was recently asked to befriend a colleague via LinkedIn. All well and good, except that the colleague had passed away two years earlier.]

So, armed with Google’s new Inactive Account Manager, death — at least online — may be just a couple of clicks away. It would be a small leap, then, for an enterprising company to charge an annual fee to maintain a dearly departed member’s digital afterlife ad infinitum.

From the Independent:

The search engine giant Google has announced a new feature designed to allow users to decide what happens to their data after they die.

The feature, which applies to the Google-run email system Gmail as well as Google Plus, YouTube, Picasa and other tools, represents an attempt by the company to be the first to deal with the sensitive issue of data after death.

In a post on the company’s Public Policy Blog Andreas Tuerk, Product Manager, writes: “We hope that this new feature will enable you to plan your digital afterlife – in a way that protects your privacy and security – and make life easier for your loved ones after you’re gone.”

Google says that the new account management tool will allow users to opt to have their data deleted after three, six, nine or 12 months of inactivity. Alternatively users can arrange for certain contacts to be sent data from some or all of their services.

The California-based company did, however, stress that individuals listed to receive data in the event of ‘inactivity’ would be warned by text or email before the information was sent.

Social networking site Facebook already has a function that allows friends and family to “memorialize” an account once its owner has died.

Read the entire article following the jump.

You Are a Google Datapoint

At first glance Google’s aim to make all known information accessible and searchable seems to be a fundamentally worthy goal, and in keeping with its “Don’t Be Evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.

However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.

Google continues to extend its technology boundaries and its vast indexed database of information. Now with the introduction of Google Glass the company extends its domain to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would allow a user a fair degree of control over how and what data is collected and shared with Google, it does not address those being observed.

So, beware the next time you are sitting in a Starbucks or shopping in a mall or riding the subway, you may be being recorded and your digital essence distributed over the internet. Perhaps, someone somewhere will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization is no less troubling. Remember, to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).

From the Telegraph:

In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.

So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?

There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.

Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.

The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.

Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.

Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.

More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.

It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.

They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.

The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.

Read the entire article after the jump.

Image: Google’s Sergin Brin wearing Google Glass. Courtesy of CBS News.

Beware North Korea, Google is Watching You

This week Google refreshed its maps of North Korea. What was previously a blank canvas showing only the country’s capital, Pyongyang, now boasts roads, hotels, monuments and even some of North Korea’s internment camps. While this is not the first detailed map of the secretive state, it is an important milestone in Google’s quest to map us all.

From the Washington Post:

Until Tuesday, North Korea appeared on Google Maps as a near-total white space — no roads, no train lines, no parks and no restaurants. The only thing labeled was the capital city, Pyongyang.

This all changed when Google, on Tuesday, rolled out a detailed map of one of the world’s most secretive states. The new map labels everything from Pyongyang’s subway stops to the country’s several city-sized gulags, as well as its monuments, hotels, hospitals and department stores.

According to a Google blog post, the maps were created by a group of volunteer “citizen cartographers,” through an interface known as Google Map Maker. That program — much like Wikipedia — allows users to submit their own data, which is then fact-checked by other users, and sometimes altered many times over. Similar processes were used in other once-unmapped countries like Afghanistan and Burma.

In the case of North Korea, those volunteers worked from outside of the country, beginning in 2009. They used information that was already public, compiling details from existing analog maps, satellite images, or other Web-based materials. Much of the information was already available on the Internet, said Hwang Min-woo, 28, a volunteer mapmaker from Seoul who worked for two years on the project.

North Korea was the last country virtually unmapped by Google, but other — even more detailed — maps of the North existed before this. Most notable is a map created by Curtis Melvin, who runs the North Korea Economy Watch blog and spent years identifying thousands of landmarks in the North: tombs, textile factories, film studios, even rumored spy training locations. Melvin’s map is available as a downloadable Google Earth file.

Google’s map is important, though, because it is so readily accessible. The map is unlikely to have an immediate influence in the North, where Internet use is restricted to all but a handful of elites. But it could prove beneficial for outsider analysts and scholars, providing an easy-to-access record about North Korea’s provinces, roads, landmarks, as well as hints about its many unseen horrors.

Read the entire article and check out more maps after the jump.

Big Brother is Mapping You

One hopes that Google’s intention to “organize the world’s information” will remain benign for the foreseeable future. Yet, as more and more of our surroundings and moves are mapped and tracked online, and increasingly offline, it would be wise to remain ever vigilant. Many put up with the encroachment of advertisers and promoters into almost every facet of their daily lives as a necessary, modern evil. But where is the dividing line that separates an ignorable irritation from an intrusion of privacy and a grab for control? For the paranoid amongst us, it may only be a matter of time before our digital footprints come under the increasing scrutiny, and control, of organizations with grander designs.

From the Guardian:

Eight years ago, Google bought a cool little graphics business called Keyhole, which had been working on 3D maps. Along with the acquisition came Brian McClendon, aka “Bam”, a tall and serious Kansan who in a previous incarnation had supplied high-end graphics software that Hollywood used in films including Jurassic Park and Terminator 2. It turned out to be a very smart move.

Today McClendon is Google’s Mr Maps – presiding over one of the fastest-growing areas in the search giant’s business, one that has recently left arch-rival Apple red-faced and threatens to make Google the most powerful company in mapping the world has ever seen.

Google is throwing its considerable resources into building arguably the most comprehensive map ever made. It’s all part of the company’s self-avowed mission to organize all the world’s information, says McClendon.

“You need to have the basic structure of the world so you can place the relevant information on top of it. If you don’t have an accurate map, everything else is inaccurate,” he says.

It’s a message that will make Apple cringe. Apple triggered howls of outrage when it pulled Google Maps off the latest iteration of its iPhone software for its own bug-riddled and often wildly inaccurate map system. “We screwed up,” Apple boss Tim Cook said earlier this week.

McClendon won’t comment on when and if Apple will put Google’s application back on the iPhone. Talks are ongoing and he’s at pains to point out what a “great” product the iPhone is. But when – or if – Apple caves, it will be a huge climbdown. In the meantime, what McClendon really cares about is building a better map.

This is not the first time Google has made a landgrab in the real world, as the publishing industry will attest. Unhappy that online search was missing all the good stuff inside old books, Google – controversially – set about scanning the treasures of Oxford’s Bodleian library and some of the world’s other most respected collections.

Its ambitions in maps may be bigger, more far reaching and perhaps more controversial still. For a company developing driverless cars and glasses that are wearable computers, maps are a serious business. There’s no doubting the scale of McClendon’s vision. His license plate reads: ITLLHPN.

Until the 1980s, maps were still largely a pen and ink affair. Then mainframe computers allowed the development of geographic information system software (GIS), which was able to display and organise geographic information in new ways. By 2005, when Google launched Google Maps, computing power allowed GIS to go mainstream. Maps were about to change the way we find a bar, a parcel or even a story. Washington DC’s homicidewatch.org, for example, uses Google Maps to track and follow deaths across the city. Now the rise of mobile devices has pushed mapping into everyone’s hands and to the front line in the battle of the tech giants.

It’s easy to see why Google is so keen on maps. Some 20% of Google’s queries are now “location specific”. The company doesn’t split the number out but on mobile the percentage is “even higher”, says McClendon, who believes maps are set to unfold themselves ever further into our lives.

Google’s approach to making better maps is about layers. Starting with an aerial view, in 2007 Google added Street View, an on-the-ground photographic map snapped from its own fleet of specially designed cars that now covers 5 million of the 27.9 million miles of roads on Google Maps.

Google isn’t stopping there. The company has put cameras on bikes to cover harder-to-reach trails, and you can tour the Great Barrier Reef thanks to diving mappers. Luc Vincent, the Google engineer known as “Mr Street View”, carried a 40lb pack of snapping cameras down to the bottom of the Grand Canyon and then back up along another trail as fellow hikers excitedly shouted “Google, Google” at the man with the space-age backpack. McClendon has also played his part. He took his camera to Antarctica, taking 500 or more photos of a penguin-filled island to add to Google Maps. “The penguins were pretty oblivious. They just don’t care about people,” he says.

Now the company has projects called Ground Truth, which corrects errors online, and Map Maker, a service that lets people make their own maps. In the western world the product has been used to add a missing road or correct a one-way street that is pointing the wrong way, and to generally improve what’s already there. In Africa, Asia and other less well covered areas of the world, Google is – literally – helping people put themselves on the map.

In 2008, it could take six to 18 months for Google to update a map. The company would have to go back to the firm that provided its map information and get them to check the error, correct it and send it back. “At that point we decided we wanted to bring that information in house,” says McClendon. Google now updates its maps hundreds of times a day. Anyone can correct errors with roads signs or add missing roads and other details; Google double checks and relies on other users to spot mistakes.

Thousands of people use Google’s Map Maker daily to recreate their world online, says Michael Weiss-Malik, engineering director at Google Maps. “We have some Pakistanis living in the UK who have basically built the whole map,” he says. Using aerial shots and local information, people have created the most detailed, and certainly most up-to-date, maps of cities like Karachi that have probably ever existed. Regions of Africa and Asia have been added by map-mad volunteers.

Read the entire article following the jump.

Send to Kindle

The Tubes of the Internets

Google lets the world peek at the many tubes that form a critical part of its search engine infrastructure — functional and pretty too.

From the Independent:

They are the cathedrals of the information age – with the colour scheme of an adventure playground.

For the first time, Google has allowed cameras into its high security data centres – the beating hearts of its global network that allow the web giant to process 3 billion internet searches every day.

Only a small band of Google employees have ever been inside the doors of the data centres, which are hidden away in remote parts of North America, Belgium and Finland.

Their workplaces glow with the blinking lights of LEDs on internet servers reassuring technicians that all is well with the web, and hum to the sound of hundreds of giant fans and thousands of gallons of water that stop the whole thing from overheating.

“Very few people have stepped inside Google’s data centers [sic], and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard,” the company said yesterday. Row upon row of glowing servers send and receive information from 20 billion web pages every day, while towering libraries store all the data that Google has ever processed – in case of a system failure.

With data speeds 200,000 times faster than an ordinary home internet connection, Google’s centres in America can share huge amounts of information with European counterparts like the remote, snow-packed Hamina centre in Finland, in the blink of an eye.

Read the entire article after the jump, or take a look at more images from the bowels of Google after the leap.

Send to Kindle

Google: Please Don’t Be Evil

Google has been variously praised and derided for its corporate mantra, “Don’t Be Evil”. For those who like to believe that Google has good intentions, recent events strain these assumptions. The company was found to have been snooping on and collecting data from personal Wi-Fi routers. Was this the work of a lone wolf or a corporate strategy?

From Slate:

Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would assuage your fears about the company that manages so much of our personal data?

These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.

While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use the system to speed up their location-monitoring systems. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
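The usefulness of such a database is easy to see: given the known positions of a few nearby routers and the strength of their signals, a phone without GPS can approximate its own location. A minimal sketch of the idea is a signal-weighted centroid over known access points (a simplified illustration only; real positioning systems use propagation models and vastly larger databases, and every BSSID and coordinate below is hypothetical):

```python
# Sketch of Wi-Fi positioning: estimate a device's location from the known
# positions of nearby access points, weighting each by received signal
# strength. All BSSIDs and coordinates are invented for illustration.

def estimate_position(scan, ap_database):
    """scan: list of (bssid, rssi_dbm); ap_database: bssid -> (lat, lon)."""
    total_weight = 0.0
    lat_acc = lon_acc = 0.0
    for bssid, rssi in scan:
        if bssid not in ap_database:
            continue  # router not in the database: ignore it
        # Stronger signal (less negative dBm) implies the router is
        # closer, so give it a larger weight.
        weight = 1.0 / max(1.0, -rssi - 30)
        lat, lon = ap_database[bssid]
        lat_acc += weight * lat
        lon_acc += weight * lon
        total_weight += weight
    if total_weight == 0:
        return None  # no known access points in range
    return (lat_acc / total_weight, lon_acc / total_weight)


# Hypothetical database of two access points and one scan result.
aps = {
    "aa:bb:cc:00:00:01": (51.5010, -0.1420),
    "aa:bb:cc:00:00:02": (51.5014, -0.1410),
}
scan = [("aa:bb:cc:00:00:01", -45),  # strong signal: probably nearby
        ("aa:bb:cc:00:00:02", -70)]  # weak signal: farther away

print(estimate_position(scan, aps))
```

The estimate falls between the two routers, pulled toward the one with the stronger signal, which is all a GPS-less phone needs to seed a map view.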

But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was within the vicinity, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.

Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.

Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?

Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.

Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.

Read the entire article after the jump.

Image courtesy of Fastcompany.

Send to Kindle

A Hidden World Revealed Through Nine Eyes

Since mid-2007 the restless nine-eyed cameras of Google Street View have been snapping millions, if not billions, of images of the world’s streets.

The mobile cameras with 360 degree views, perched atop Google’s fleet of specially adapted vehicles, have already covered most of North America, Brazil, South Africa, Australia and large swathes of Europe. In roaming many of the world’s roadways, Google’s cameras have also snapped numerous accidental images: people caught unawares, car accidents, odd views into nearby buildings, eerie landscapes.

Regardless of the privacy issues here, the photographs make for some fascinating in-the-moment art. A number of enterprising artists and photographers have incorporated some of these esoteric Google Street View “out-takes” into their work. A selection from Jon Rafman below. See more of his and Google’s work here.


Send to Kindle

What Did You Have for Breakfast Yesterday? Ask Google

Memory is, well, so 1990s. Who needs it when we have Google, Siri and any number of services to recall everything we’ve ever perceived and wished to remember, and to answer anything we’ve ever wanted to know? Will our personal memories become another shared service served up from the “cloud”?

From the Wilson Quarterly:

In an age when most information is just a few keystrokes away, it’s natural to wonder: Is Google weakening our powers of memory? According to psychologists Betsy Sparrow of Columbia University, Jenny Liu of the University of Wisconsin, Madison, and Daniel M. Wegner of Harvard, the Internet has not so much diminished intelligent recall as tweaked it.

The trio’s research shows what most computer users can tell you anecdotally: When you know you have the Internet at hand, your memory relaxes. In one of their experiments, 46 Harvard undergraduates were asked to answer 32 trivia questions on computers. After each one, they took a quick Stroop test, in which they were shown words printed in different colors and then asked to name the color of each word. They took more time to name the colors of Internet-related words, such as modem and browser. According to Stroop test conventions, this is because the words were related to something else that they were already thinking about—yes, they wanted to fire up Google to answer those tricky trivia questions.

In another experiment, the authors uncovered evidence suggesting that access to computers plays a fundamental role in what people choose to commit to their God-given hard drive. Subjects were instructed to type 40 trivia-like statements into a dialog box. Half were told that the computer would erase the information and half that it would be saved. Afterward, when asked to recall the statements, the students who were told their typing would be erased remembered much more. Lacking a computer backup, they apparently committed more to memory.

Read the entire article here.

Send to Kindle

Googlization of the Globe: For Good (or Evil)

Google’s oft-quoted corporate mantra — don’t be evil — reminds us to remain vigilant, even if the company believes it does good and can do no wrong.

Google serves up countless search results to ease our never-ending thirst for knowledge, deals, news, quotes, jokes, user manuals, contacts, products and so on. This is clearly of tremendous benefit to us, to Google and to Google’s advertisers. Of course, in fulfilling our searches Google collects equally staggering amounts of information — about us. Increasingly the company will know where we are, what we like and dislike, what we prefer, what we do, where we travel, with whom and why, who our friends are, what we read, what we buy.

As Jaron Lanier remarked in a recent post, there is a fine line between being a global index to the world’s free and open library of information and being the paid gatekeeper to our collective knowledge and hoarder of our collective online (and increasingly offline) behaviors, tracks and memories. We have already seen how Google, and others, can personalize search results based on our previous tracks, thus filtering and biasing what we see and read and limiting our exposure to alternate views and opinions.

It’s quite easy to imagine a rather more dystopian view of a society gone awry manipulated by a not-so-benevolent Google when, eventually, founders Brin and Page retire to their vacation bases on the moon.

With this in mind Daniel Soar over at London Review of Books reviews several recent books about Google and offers some interesting insights.

London Review of Books:

This spring, the billionaire Eric Schmidt announced that there were only four really significant technology companies: Apple, Amazon, Facebook and Google, the company he had until recently been running. People believed him. What distinguished his new ‘gang of four’ from the generation it had superseded – companies like Intel, Microsoft, Dell and Cisco, which mostly exist to sell gizmos and gadgets and innumerable hours of expensive support services to corporate clients – was that the newcomers sold their products and services to ordinary people. Since there are more ordinary people in the world than there are businesses, and since there’s nothing that ordinary people don’t want or need, or can’t be persuaded they want or need when it flashes up alluringly on their screens, the money to be made from them is virtually limitless. Together, Schmidt’s four companies are worth more than half a trillion dollars. The technology sector isn’t as big as, say, oil, but it’s growing, as more and more traditional industries – advertising, travel, real estate, used cars, new cars, porn, television, film, music, publishing, news – are subsumed into the digital economy. Schmidt, who as the ex-CEO of a multibillion-dollar corporation had learned to take the long view, warned that not all four of his disruptive gang could survive. So – as they all converge from their various beginnings to compete in the same area, the place usually referred to as ‘the cloud’, a place where everything that matters is online – the question is: who will be the first to blink?

If the company that falters is Google, it won’t be because it didn’t see the future coming. Of Schmidt’s four technology juggernauts, Google has always been the most ambitious, and the most committed to getting everything possible onto the internet, its mission being ‘to organise the world’s information and make it universally accessible and useful’. Its ubiquitous search box has changed the way information can be got at to such an extent that ten years after most people first learned of its existence you wouldn’t think of trying to find out anything without typing it into Google first. Searching on Google is automatic, a reflex, just part of what we do. But an insufficiently thought-about fact is that in order to organise the world’s information Google first has to get hold of the stuff. And in the long run ‘the world’s information’ means much more than anyone would ever have imagined it could. It means, of course, the totality of the information contained on the World Wide Web, or the contents of more than a trillion webpages (it was a trillion at the last count, in 2008; now, such a number would be meaningless). But that much goes without saying, since indexing and ranking webpages is where Google began when it got going as a research project at Stanford in 1996, just five years after the web itself was invented. It means – or would mean, if lawyers let Google have its way – the complete contents of every one of the more than 33 million books in the Library of Congress or, if you include slightly varying editions and pamphlets and other ephemera, the contents of the approximately 129,864,880 books published in every recorded language since printing was invented. It means every video uploaded to the public internet, a quantity – if you take the Google-owned YouTube alone – that is increasing at the rate of nearly an hour of video every second.

Read more here.

Send to Kindle

Google’s Earth

From The New York Times:

“I ACTUALLY think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot.

We have yet to take Google’s measure. We’ve seen nothing like it before, and we already perceive much of our world through it. We would all very much like to be sagely and reliably advised by our own private genie; we would like the genie to make the world more transparent, more easily navigable. Google does that for us: it makes everything in the world accessible to everyone, and everyone accessible to the world. But we see everyone looking in, and blame Google.

Google is not ours. Which feels confusing, because we are its unpaid content-providers, in one way or another. We generate product for Google, our every search a minuscule contribution. Google is made of us, a sort of coral reef of human minds and their products. And still we balk at Mr. Schmidt’s claim that we want Google to tell us what to do next. Is he saying that when we search for dinner recommendations, Google might recommend a movie instead? If our genie recommended the movie, I imagine we’d go, intrigued. If Google did that, I imagine, we’d bridle, then begin our next search.

We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies. We also seldom imagined (in spite of ample evidence) that emergent technologies would leave legislation in the dust, yet they do. In a world characterized by technologically driven change, we necessarily legislate after the fact, perpetually scrambling to catch up, while the core architectures of the future, increasingly, are erected by entities like Google.

William Gibson is the author of the forthcoming novel “Zero History.”

More from theSource here.

Send to Kindle