Tag Archives: privacy

The Rise of Beards and the Fall of Social Media


Perhaps the rise of the hipster beard, handlebar mustache, oversized glasses, craft brew, fixie (fixed-gear bicycle), thrift-store sweaters, indie folk and pickling is a sign. Some see it as a signal of the imminent demise of social media, no less.

Can the length of facial hair or jacket elbow pads be correlated with the end of Facebook? I doubt it, but it’s worth pondering. Though, like John Biggs over at TechCrunch, I do believe that the technology pendulum will eventually swing back towards more guarded privacy — if only as the next generation strikes back at the unguarded, frivolous, over-the-top public sharing of its parents.

Then, we can only hope for the demise of the hipster trend.

From TechCrunch:

After the early, exciting expository years of the Internet – the Age of Jennicam where the web was supposed to act as confessional and stage – things changed swiftly. This new medium was a revelation, a gift of freedom that we all took for granted. Want to post rants against the government? Press publish on Blogspot. Want to yell at the world? Aggregate and comment upon some online news. Want to meet people with similar interests or kinks? There was a site for you although you probably had to hunt it down.

The way we shared deep feelings on the Internet grew out of its first written stage into other more interactive forms. It passed through chatrooms, Chatroulette, and photo sharing. It passed through YouTube and Indie gaming. It planted a long, clammy kiss on Tumblr where it will probably remain for a long time. But that was for the professional exhibitionists. Today the most confessional “static” writing you’ll find on a web page is the occasional Medium post about beating adversity through meditation and Apple Watch apps and we have hidden our human foibles behind dank memes and chatbots. Where could the average person, the civilian, go to share their deepest feelings of love, anger, and fear?

Social media.

But an important change is coming to social media. We are learning that all of our thoughts aren’t welcome, especially by social media company investors. We are also learning that social media companies are a business. This means conversation is encouraged as long as it runs the gamut from mundane to vicious but stops at the overtly sexual or violent. Early in its life-cycle Pinterest made a big stink about actively banning porn while Instagram essentially allowed all sorts of exposition as long as it was monetizable and censored. Facebook still actively polices its photographs for even the hint of sexuality, as an artist named Justyna Kiesielewicz recently discovered. She posted a staid nude and wanted to run it as a targeted advertisement. Facebook mistakenly ran the ad for a while, grabbing $50 before it banned the image. In short, the latest incarnation of the expository impulse is truncated, and sites like Facebook and Twitter welcome most hate groups but draw the line at underboobs.

Read the entire article here.

Image courtesy of Google Search and all hipsters.

The Biggest Threats to Democracy

History reminds us of those critical events that pose threats to us on various levels: to our well-being at a narrow level, and to the foundations of our democracies at a much broader level. And most of these existential threats seem to come from the outside: wars, terrorism, ethnic cleansing.

But it’s not quite that simple — the biggest threats come not from external sources of evil, but from within us. Perhaps the two most significant are our apathy and our paranoia. Taken together they erode our duty to protect our democracy, and hand over ever-increasing power to those who claim to protect us. Thus, before the Nazi machine enslaved huge portions of Europe, the citizens of Germany allowed it to gain power; before Al-Qaeda and Isis and their terrorist look-alikes gained notoriety, local conditions allowed these groups to flourish. We are all complicit in our inaction — driven by indifference or fear, or both.

Two timely events serve to remind us of the huge costs and consequences of our inaction from apathy and paranoia. One is from the not-too-distant past; the other portends our future. First, it is Victory in Europe (VE) Day, the anniversary of the Allied victory in WWII on May 8, 1945. Many millions perished through the brutal policies of the Nazi ideology and its instrument, the Wehrmacht, and millions more subsequently perished in the fight to restore moral order. Much of Europe first ignored the growing threat of the National Socialists. As the threat grew, Europe continued to contemplate appeasement. Only later, as the true scale of the atrocities became apparent, did leaders realize that the threat needed to be tackled head-on.

Second, a federal appeals court in the United States ruled on May 7, 2015 that the National Security Agency’s collection of millions of phone records is illegal. This serves to remind us of the threat that our own governments pose to our fundamental freedoms under the promise of continued comfort and security. For those who truly care about the fragility of democracy, this is a momentous and rightful ruling. It is all the more remarkable that since the calamitous events of September 11, 2001, few have challenged this governmental overreach into our private lives: our phone calls, our movements, our internet surfing habits, our credit card history. We have seen few public demonstrations and all too little ongoing debate. Indeed, only through the recent revelations by Edward Snowden did the debate even enter the media cycle. And the debate is only just beginning.

Both of these events show that only we, the people who are fortunate enough to live within a democracy, can choose a path that strengthens our governmental institutions and balances these against our fundamental rights. By corollary we can choose a path that weakens our institutions too. One path requires engagement and action against those who use fear to make us conform. The other path, often easier, requires that we do nothing, accept the status quo, curl up in the comfort of our cocoons and give in to fear.

So this is why the appeals court ruling is so important. While only three in number, the judges have established that our government has been acting illegally, yet supposedly on our behalf. While the judges did not terminate the unlawful program, they pointedly requested that the US Congress debate and then define laws that would be narrower and less at odds with citizens’ constitutional rights. So the courts have done us all a great favor. One can only hope that this opens the eyes, ears and mouths of the apathetic and fearful so that they continuously demand fair and considered action from their elected representatives. Only then can we begin to make inroads against the real and insidious threats to our democracy — our apathy and our fear. And perhaps, also, Mr. Snowden can take a small helping of solace.

From the Guardian:

The US court of appeals has ruled that the bulk collection of telephone metadata is unlawful, in a landmark decision that clears the way for a full legal challenge against the National Security Agency.

A panel of three federal judges for the second circuit overturned an earlier ruling that the controversial surveillance practice first revealed to the US public by NSA whistleblower Edward Snowden in 2013 could not be subject to judicial review.

But the judges also waded into the charged and ongoing debate over the reauthorization of a key Patriot Act provision currently before US legislators. That provision, which the appeals court ruled the NSA program surpassed, will expire on 1 June amid gridlock in Washington on what to do about it.

The judges opted not to end the domestic bulk collection while Congress decides its fate, calling judicial inaction “a lesser intrusion” on privacy than at the time the case was initially argued.

“In light of the asserted national security interests at stake, we deem it prudent to pause to allow an opportunity for debate in Congress that may (or may not) profoundly alter the legal landscape,” the judges ruled.

But they also sent a tacit warning to Senator Mitch McConnell, the Republican leader in the Senate who is pushing to re-authorize the provision, known as Section 215, without modification: “There will be time then to address appellants’ constitutional issues.”

“We hold that the text of section 215 cannot bear the weight the government asks us to assign to it, and that it does not authorize the telephone metadata program,” concluded their judgment.

“Such a monumental shift in our approach to combating terrorism requires a clearer signal from Congress than a recycling of oft-used language long held in similar contexts to mean something far narrower,” the judges added.

“We conclude that to allow the government to collect phone records only because they may become relevant to a possible authorized investigation in the future fails even the permissive ‘relevance’ test.

“We agree with appellants that the government’s argument is ‘irreconcilable with the statute’s plain text’.”

Read the entire story here.

Image: Edward Snowden. Courtesy of Wikipedia.

Privacy and Potato Chips


Privacy, and the lack thereof, is much in the news and on our minds. New revelations of data breaches, phone taps, corporate hackers and governmental overreach surface on a daily basis. So it is no surprise to learn that researchers have found a cheap way to eavesdrop on our conversations via a potato chip (crisp, to our British-English readers) packet. No news yet on which flavor of chip makes for the best spying!

From ars technica:

Watch enough spy thrillers, and you’ll undoubtedly see someone setting up a bit of equipment that points a laser at a distant window, letting the snoop listen to conversations on the other side of the glass. This isn’t something Hollywood made up; high-tech snooping devices of this sort do exist, and they take advantage of the extremely high-precision measurements made possible with lasers in order to measure the subtle vibrations caused by sound waves.

A team of researchers has now shown, however, that you can skip the lasers. All you really need is a consumer-level digital camera and a conveniently located bag of Doritos. A glass of water or a plant would also do.

Good vibrations

Despite the differences in the technology involved, both approaches rely on the same principle: sound travels on waves of higher and lower pressure in the air. When these waves reach a flexible object, they set off small vibrations in the object. If you can detect these vibrations, it’s possible to reconstruct the sound. Laser-based systems detect the vibrations by watching for changes in the reflections of the laser light, but researchers wondered whether you could simply observe the object directly, using the ambient light it reflects. (The team involved researchers at MIT, Adobe Research, and Microsoft Research.)

The research team started with a simple test system made from a loudspeaker playing a rising tone, a high-speed camera, and a variety of objects: water, cardboard, a candy wrapper, some metallic foil, and (as a control) a brick. Each of these (even the brick) showed some response at the lowest end of the tonal range, but the other objects, particularly the cardboard and foil, had a response into much higher tonal regions. To observe the changes in ambient light, the camera didn’t have to capture the object at high resolution—it was used at 700 x 700 pixels or less—but it did have to be high-speed, capturing as many as 20,000 frames a second.

Processing the images wasn’t simple, however. A computer had to perform a weighted average over all the pixels captured, and even a twin 3.5GHz machine with 32GB of RAM took more than two hours to process one capture. Nevertheless, the results were impressive, as the algorithm was able to detect motion on the order of a thousandth of a pixel. This enabled the system to recreate the audio waves emitted by the loudspeaker.
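The core trick reduces to remarkably little code. Below is a minimal sketch, not the researchers' algorithm: it assumes OpenCV, a grayscale conversion and a plain unweighted mean over pixels, whereas the actual system uses weighted averages over local motion estimates.

```python
# Minimal sketch: recover a 1-D vibration signal from high-speed video
# by averaging tiny frame-to-frame intensity changes over every pixel.
import numpy as np
import cv2  # pip install opencv-python

def extract_motion_signal(video_path):
    cap = cv2.VideoCapture(video_path)
    prev, signal = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            # Sub-pixel vibrations appear as minute, correlated intensity
            # shifts; averaging over all pixels lifts them above sensor noise.
            signal.append(float(np.mean(gray - prev)))
        prev = gray
    cap.release()
    # One sample per frame: the recoverable audio bandwidth is capped at
    # half the frame rate, hence the need for a 20,000 fps camera.
    return np.asarray(signal)
```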

Most of the rest of the paper describing the results involved making things harder on the system, as the researchers shifted to using human voices and moving the camera outside the room. They also showed that pre-testing the vibrating object’s response to a tone scale could help them improve their processing.

But perhaps the biggest surprise came when they showed that they didn’t actually need a specialized, high-speed camera. It turns out that most consumer-grade equipment doesn’t expose its entire sensor at once and instead scans an image across the sensor grid in a line-by-line fashion. Using a consumer video camera, the researchers were able to determine that there’s a 16 microsecond delay between each line, with a five millisecond delay between frames. Using this information, they treated each line as a separate exposure and were able to reproduce sound that way.
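That line-by-line exposure is easy to model. In the sketch below, each scan line becomes its own sample; the 16 microsecond and 5 millisecond delays come from the article, while summarizing each line as a mean intensity is my own simplification.

```python
import numpy as np

def rolling_shutter_samples(frames, line_delay=16e-6, frame_gap=5e-3):
    """Treat each row of each frame as a separate exposure.

    frames: iterable of 2-D grayscale arrays from a rolling-shutter camera.
    Returns (times, samples): a timestamp and a mean-row intensity per scan
    line, for an effective sample rate far above the nominal frame rate.
    """
    times, samples = [], []
    t = 0.0
    for frame in frames:
        for row in frame:
            times.append(t)
            samples.append(float(row.mean()))  # crude per-line summary
            t += line_delay
        t += frame_gap  # dead time between the last line and the next frame
    return np.asarray(times), np.asarray(samples)
```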

Read the entire article here.

Image courtesy of Google Search.


The Enigma of Privacy

Privacy is still a valued and valuable right; it should not be a mere benefit of living in a democratic society. But in our current age, privacy is becoming an increasingly threatened species. We are surrounded by social networks that share and mine our behaviors, and we are assaulted by snoopers and spooks from local and national governments.

From the Observer:

We have come to the end of privacy; our private lives, as our grandparents would have recognised them, have been winnowed away to the realm of the shameful and secret. To quote ex-tabloid hack Paul McMullan, “privacy is for paedos”. Insidiously, through small concessions that only mounted up over time, we have signed away rights and privileges that other generations fought for, undermining the very cornerstones of our personalities in the process. While outposts of civilisation fight pyrrhic battles, unplugging themselves from the web – “going dark” – the rest of us have come to accept that the majority of our social, financial and even sexual interactions take place over the internet and that someone, somewhere, whether state, press or corporation, is watching.

The past few years have brought an avalanche of news about the extent to which our communications are being monitored: WikiLeaks, the phone-hacking scandal, the Snowden files. Uproar greeted revelations about Facebook’s “emotional contagion” experiment (where it tweaked mathematical formulae driving the news feeds of 700,000 of its members in order to prompt different emotional responses). Cesar A Hidalgo of the Massachusetts Institute of Technology described the Facebook news feed as “like a sausage… Everyone eats it, even though nobody knows how it is made”.

Sitting behind the outrage was a particularly modern form of disquiet – the knowledge that we are being manipulated, surveyed, rendered and that the intelligence behind this is artificial as well as human. Everything we do on the web, from our social media interactions to our shopping on Amazon, to our Netflix selections, is driven by complex mathematical formulae that are invisible and arcane.

Most recently, campaigners’ anger has turned upon the so-called Drip (Data Retention and Investigatory Powers) bill in the UK, which will see internet and telephone companies forced to retain and store their customers’ communications (and provide access to this data to police, government and up to 600 public bodies). Every week, it seems, brings a new furore over corporations – Apple, Google, Facebook – sidling into the private sphere. Often, it’s unclear whether the companies act brazenly because our governments play so fast and loose with their citizens’ privacy (“If you have nothing to hide, you’ve nothing to fear,” William Hague famously intoned); or if governments see corporations feasting upon the private lives of their users and have taken this as a licence to snoop, pry, survey.

We, the public, have looked on, at first horrified, then cynical, then bored by the revelations, by the well-meaning but seemingly useless protests. But what is the personal and psychological impact of this loss of privacy? What legal protection is afforded to those wishing to defend themselves against intrusion? Is it too late to stem the tide now that scenes from science fiction have become part of the fabric of our everyday world?

Novels have long been the province of the great What If?, allowing us to see the ramifications from present events extending into the murky future. As long ago as 1921, Yevgeny Zamyatin imagined One State, the transparent society of his dystopian novel, We. For Orwell, Huxley, Bradbury, Atwood and many others, the loss of privacy was one of the establishing nightmares of the totalitarian future. Dave Eggers’s 2013 novel The Circle paints a portrait of an America without privacy, where a vast, internet-based, multimedia empire surveys and controls the lives of its people, relying on strict adherence to its motto: “Secrets are lies, sharing is caring, and privacy is theft.” We watch as the heroine, Mae, disintegrates under the pressure of scrutiny, finally becoming one of the faceless, obedient hordes. A contemporary (and because of this, even more chilling) account of life lived in the glare of the privacy-free internet is Nikesh Shukla’s Meatspace, which charts the existence of a lonely writer whose only escape is into the shallows of the web. “The first and last thing I do every day,” the book begins, “is see what strangers are saying about me.”

Our age has seen an almost complete conflation of the previously separate spheres of the private and the secret. A taint of shame has crept over from the secret into the private so that anything that is kept from the public gaze is perceived as suspect. This, I think, is why defecation is so often used as an example of the private sphere. Sex and shitting were the only actions that the authorities in Zamyatin’s One State permitted to take place in private, and these remain the battlegrounds of the privacy debate almost a century later. A rather prim leaked memo from a GCHQ operative monitoring Yahoo webcams notes that “a surprising number of people use webcam conversations to show intimate parts of their body to the other person”.

It is to the bathroom that Max Mosley turns when we speak about his own campaign for privacy. “The need for a private life is something that is completely subjective,” he tells me. “You either would mind somebody publishing a film of you doing your ablutions in the morning or you wouldn’t. Personally I would and I think most people would.” In 2008, Mosley’s “sick Nazi orgy”, as the News of the World glossed it, featured in photographs published first in the pages of the tabloid and then across the internet. Mosley’s defence argued, successfully, that the romp involved nothing more than a “standard S&M prison scenario” and the former president of the FIA won £60,000 damages under Article 8 of the European Convention on Human Rights. Now he has rounded on Google and the continued presence of both photographs and allegations on websites accessed via the company’s search engine. If you type “Max Mosley” into Google, the eager autocomplete presents you with “video,” “case”, “scandal” and “with prostitutes”. Half-way down the first page of the search we find a link to a professional-looking YouTube video montage of the NotW story, with no acknowledgment that the claims were later disproved. I watch it several times. I feel a bit grubby.

“The moment the Nazi element of the case fell apart,” Mosley tells me, “which it did immediately, because it was a lie, any claim for public interest also fell apart.”

Here we have a clear example of the blurred lines between secrecy and privacy. Mosley believed that what he chose to do in his private life, even if it included whips and nipple-clamps, should remain just that – private. The News of the World, on the other hand, thought it had uncovered a shameful secret that, given Mosley’s professional position, justified publication. There is a momentary tremor in Mosley’s otherwise fluid delivery as he speaks about the sense of invasion. “Your privacy or your private life belongs to you. Some of it you may choose to make available, some of it should be made available, because it’s in the public interest to make it known. The rest should be yours alone. And if anyone takes it from you, that’s theft and it’s the same as the theft of property.”

Mosley has scored some recent successes, notably in continental Europe, where he has found a culture more suspicious of Google’s sweeping powers than in Britain or, particularly, the US. Courts in France and then, interestingly, Germany, ordered Google to remove pictures of the orgy permanently, with far-reaching consequences for the company. Google is appealing against the rulings, seeing it as absurd that “providers are required to monitor even the smallest components of content they transmit or store for their users”. But Mosley last week extended his action to the UK, filing a claim in the high court in London.

Mosley’s willingness to continue fighting, even when he knows that it means keeping alive the image of his white, septuagenarian buttocks in the minds (if not on the computers) of the public, seems impressively principled. He has fallen victim to what is known as the Streisand Effect, where his very attempt to hide information about himself has led to its proliferation (in 2003 Barbra Streisand tried to stop people taking pictures of her Malibu home, ensuring photos were posted far and wide). Despite this, he continues to battle – both in court, in the media and by directly confronting the websites that continue to display the pictures. It is as if he is using that initial stab of shame, turning it against those who sought to humiliate him. It is noticeable that, having been accused of fetishising one dark period of German history, he uses another to attack Google. “I think, because of the Stasi,” he says, “the Germans can understand that there isn’t a huge difference between the state watching everything you do and Google watching everything you do. Except that, in most European countries, the state tends to be an elected body, whereas Google isn’t. There’s not a lot of difference between the actions of the government of East Germany and the actions of Google.”

All this brings us to some fundamental questions about the role of search engines. Is Google the de facto librarian of the internet, given that it is estimated to handle 40% of all traffic? Is it something more than a librarian, since its algorithms carefully (and with increasing use of your personal data) select the sites it wants you to view? To what extent can Google be held responsible for the content it puts before us?

Read the entire article here.

Google: The Standard Oil of Our Age

Google’s aim to organize the world’s information sounds benign enough. But delve a little deeper into its research and development efforts, or witness its boundless encroachment into advertising, software, phones, glasses, cars, home automation, travel, internet services, artificial intelligence, robotics, online shopping (and so on), and you may get a more uneasy and prickly sensation. Is Google out to organize information or you? Perhaps it’s time to begin thinking about Google as a corporate hegemon: not quite a monopoly yet, but so powerful that counter-measures become warranted.

An open letter, excerpted below, from Mathias Döpfner, CEO of Axel Springer AG, does us all a service by raising the alarm bells.

From the Guardian:

Dear Eric Schmidt,

As you know, I am a great admirer of Google’s entrepreneurial success. Google’s employees are always extremely friendly to us and to other publishing houses, but we are not communicating with each other on equal terms. How could we? Google doesn’t need us. But we need Google. We are afraid of Google. I must state this very clearly and frankly, because few of my colleagues dare do so publicly. And as the biggest among the small, perhaps it is also up to us to be the first to speak out in this debate. You yourself speak of the new power of the creators, owners, and users.

In the long term I’m not so sure about the users. Power is soon followed by powerlessness. And this is precisely the reason why we now need to have this discussion in the interests of the long-term integrity of the digital economy’s ecosystem. This applies to competition – not only economic, but also political. As the situation stands, your company will play a leading role in the various areas of our professional and private lives – in the house, in the car, in healthcare, in robotronics. This is a huge opportunity and a no less serious threat. I am afraid that it is simply not enough to state, as you do, that you want to make the world a “better place”.

Google lists its own products, from e-commerce to pages from its own Google+ network, higher than those of its competitors, even if these are sometimes of less value for consumers and should not be displayed in accordance with the Google algorithm. It is not even clearly pointed out to the user that these search results are the result of self-advertising. Even when a Google service has fewer visitors than that of a competitor, it appears higher up the page until it eventually also receives more visitors.

You know very well that this would result in long-term discrimination against, and weakening of, any competition, meaning that Google would be able to develop its superior market position still further. And that this would further weaken the European digital economy in particular.

This also applies to the large and even more problematic set of issues concerning data security and data utilisation. Ever since Edward Snowden triggered the NSA affair, and ever since the close relations between major American online companies and the American secret services became public, the social climate – at least in Europe – has fundamentally changed. People have become more sensitive about what happens to their user data. Nobody knows as much about its customers as Google. Even private or business emails are read by Gmail and, if necessary, can be evaluated. You yourself said in 2010: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” This is a remarkably honest sentence. The question is: are users happy with the fact that this information is used not only for commercial purposes – which may have many advantages, yet a number of spooky negative aspects as well – but could end up in the hands of the intelligence services, and to a certain extent already has?

Google is sitting on the entire current data trove of humanity, like the giant Fafner in The Ring of the Nibelung: “Here I lie and here I hold.” I hope you are aware of your company’s special responsibility. If fossil fuels were the fuels of the 20th century, then those of the 21st century are surely data and user profiles. We need to ask ourselves whether competition can generally still function in the digital age, if data is so extensively concentrated in the hands of one party.

There is a quote from you in this context that concerns me. In 2009 you said: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” The essence of freedom is precisely the fact that I am not obliged to disclose everything that I am doing, that I have a right to confidentiality and, yes, even to secrets; that I am able to determine for myself what I wish to disclose about myself. The individual right to this is what makes a democracy. Only dictatorships want transparent citizens instead of a free press.

Against this background, it greatly concerns me that Google – which has just announced the acquisition of drone manufacturer Titan Aerospace – has been seen for some time as being behind a number of planned enormous ships and floating working environments that can cruise and operate in the open ocean. What is the reason for this development? You don’t have to be a conspiracy theorist to find this alarming.

Historically, monopolies have never survived in the long term. Either they have failed as a result of their complacency, which breeds its own success, or they have been weakened by competition – both unlikely scenarios in Google’s case. Or they have been restricted by political initiatives.

Another way would be voluntary self-restraint on the part of the winner. Is it really smart to wait until the first serious politician demands the breakup of Google? Or even worse – until the people refuse to follow?

Sincerely yours,

Mathias Döpfner

Read the entire article here.


The Persistent Panopticon


Based on the ever-encroaching surveillance systems used by local and national governments and private organizations, one has to wonder whether we — the presumed innocent — are living inside or outside a prison facility. Advances in security and surveillance systems now make it possible to track swathes of the population over periods of time across an entire city.

From the Washington Post:

Shooter and victim were just a pair of pixels, dark specks on a gray streetscape. Hair color, bullet wounds, even the weapon were not visible in the series of pictures taken from an airplane flying two miles above.

But what the images revealed — to a degree impossible just a few years ago — was location, mapped over time. Second by second, they showed a gang assembling, blocking off access points, sending the shooter to meet his target and taking flight after the body hit the pavement. When the report reached police, it included a picture of the blue stucco building into which the killer ultimately retreated, at last beyond the view of the powerful camera overhead.

“I’ve witnessed 34 of these,” said Ross McNutt, the genial president of Persistent Surveillance Systems, which collected the images of the killing in Ciudad Juarez, Mexico, from a specially outfitted Cessna. “It’s like opening up a murder mystery in the middle, and you need to figure out what happened before and after.”

As Americans have grown increasingly comfortable with traditional surveillance cameras, a new, far more powerful generation is being quietly deployed that can track every vehicle and person across an area the size of a small city, for several hours at a time. Though these cameras can’t read license plates or see faces, they provide such a wealth of data that police, businesses, even private individuals can use them to help identify people and track their movements.

Already, the cameras have been flown above major public events, such as the Ohio political rally where Sen. John McCain (R-Ariz.) named Sarah Palin as his running mate in 2008, McNutt said. They’ve been flown above Baltimore; Philadelphia; Compton, Calif.; and Dayton in demonstrations for police. They’ve also been used for traffic impact studies, for security at NASCAR races — and at the request of a Mexican politician, who commissioned the flights over Ciudad Juarez.


Defense contractors are developing similar technology for the military, but its potential for civilian use is raising novel civil-liberty concerns. In Dayton, where Persistent Surveillance Systems is based, city officials balked last year when police considered paying for 200 hours of flights, in part because of privacy complaints.

“There are an infinite number of surveillance technologies that would help solve crimes … but there are reasons that we don’t do those things, or shouldn’t be doing those things,” said Joel Pruce, a University of Dayton post-doctoral fellow in human rights who opposed the plan. “You know where there’s a lot less crime? There’s a lot less crime in China.”

McNutt, a retired Air Force officer who once helped design a similar system for the skies above Fallujah, a key battleground city in Iraq, hopes to win over officials in Dayton and elsewhere by convincing them that cameras mounted on fixed-wing aircraft can provide far more useful intelligence than police helicopters do, for less money. The Supreme Court generally has given wide latitude to police using aerial surveillance so long as the photography captures images visible to the naked eye.

A single camera mounted atop the Washington Monument, McNutt boasts, could deter crime all around the National Mall. He thinks regular flights over the most dangerous parts of Washington — combined with publicity about how much police could now see — would make a significant dent in the number of burglaries, robberies and murders. His 192-megapixel cameras would spot as many as 50 crimes per six-hour flight, he estimates, providing police with a continuous stream of images covering more than a third of the city.

“We watch 25 square miles, so you see lots of crimes,” he said. “And by the way, after people commit crimes, they drive like idiots.”

What McNutt is trying to sell is not merely the latest techno-wizardry for police. He envisions such steep drops in crime that they will bring substantial side effects, including rising property values, better schools, increased development and, eventually, lower incarceration rates as the reality of long-term overhead surveillance deters those tempted to commit crimes.

Dayton Police Chief Richard Biehl, a supporter of McNutt’s efforts, has even proposed inviting the public to visit the operations center, to get a glimpse of the technology in action.

“I want them to be worried that we’re watching,” Biehl said. “I want them to be worried that they never know when we’re overhead.”

Technology in action

McNutt, a suburban father of four with a doctorate from the Massachusetts Institute of Technology, is not deaf to concerns about his company’s ambitions. Unlike many of the giant defense contractors that are eagerly repurposing wartime surveillance technology for domestic use, he sought advice from the American Civil Liberties Union in writing a privacy policy.

It has rules on how long data can be kept, when images can be accessed and by whom. Police are supposed to begin looking at the pictures only after a crime has been reported. Pure fishing expeditions are prohibited.

The technology has inherent limitations as well. From the airborne cameras, each person appears as a single pixel indistinguishable from any other person. What they are doing — even whether they are clothed or not — is impossible to see. As camera technology improves, McNutt said he intends to increase their range, not the precision of the imagery, so that larger areas can be monitored.
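A back-of-the-envelope check (my arithmetic, not the article's) bears out the single-pixel claim: 192 megapixels spread over 25 square miles works out to roughly a third of a square meter of ground per pixel, a patch about 0.6 meters on a side.

```python
import math

# Ground area covered by one pixel of a 192-megapixel camera watching
# 25 square miles (a sanity check, not a figure from the article).
SQ_MILE_M2 = 2.59e6            # square meters per square mile
area = 25 * SQ_MILE_M2         # ~6.5e7 m^2 under surveillance
m2_per_pixel = area / 192e6    # ~0.34 m^2 per pixel
side = math.sqrt(m2_per_pixel) # ~0.58 m: roughly person-sized
print(f"{m2_per_pixel:.2f} m^2/pixel, ~{side:.2f} m per side")
```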

The notion that McNutt and his roughly 40 employees are peeping Toms clearly rankles. They made a PowerPoint presentation for the ACLU that includes pictures taken to aid the response to Hurricane Sandy and the severe Iowa floods last summer. The section is titled: “Good People Doing Good Things.”

“We get a little frustrated when people get so worried about us seeing them in their back yard,” McNutt said in his operation center, where the walls are adorned with 120-inch monitors, each showing a different grainy urban scene collected from above. “We can’t even see what they are doing in their backyard. And, by the way, we don’t care.”

Yet in a world of increasingly pervasive surveillance, location and identity are becoming all but inextricable — one quickly leads to the other for those with the right tools.

During one of the company’s demonstration flights over Dayton in 2012, police got reports of an attempted robbery at a bookstore and shots fired at a Subway sandwich shop. The cameras revealed a single car moving between the two locations.

By reviewing the images, frame by frame, analysts were able to help police piece together a larger story: The man had left a residential neighborhood midday, attempted to rob the bookstore but fled when somebody hit an alarm. Then he drove to Subway, where the owner pulled a gun and chased him off. His next stop was a Family Dollar Store, where the man paused for several minutes. He soon returned home, after a short stop at a gas station where a video camera captured an image of his face.

A few hours later, after the surveillance flight ended, the Family Dollar Store was robbed. Police used the detailed map of the man’s movements, along with other evidence from the crime scenes, to arrest him for all three crimes.

On another occasion, Dayton police got a report of a burglary in progress. The aerial cameras spotted a white truck driving away from the scene. Police stopped the driver before he got home from the heist, with the stolen goods sitting in the back of the truck. A witness identified him soon after.

Read the entire story here.

Image: Surveillance cameras. Courtesy of Mashable / Microsoft.

Techno-Blocking Technology


Many technologists, philosophers and social scientists who consider the ethics of technology have described it as a double-edged sword. Indeed, observation does seem to uphold this idea: for every benefit gained from a new invention comes a mirroring disadvantage or peril. Not that technology per se is a threat — but its human masters seem to be rather adept at deploying it for both good and evil ends.

By corollary it is also evident that many a new technology spawns others, and sometimes entire industries, to counteract the first. The radar begets the radar-evading material; the radio begets the radio-jamming transmitter; cryptography begets hacking. You get the idea.

So, not a moment too soon, comes PlaceAvoider, a technology to suppress the capture and sharing of images seen through Google Glass. Watch out, Brin and Page and company: the watchers are watching you.

From Technology Review:

With last year’s launch of the Narrative Clip and Autographer, and Google Glass poised for release this year, technologies that can continuously capture our daily lives with photos and videos are inching closer to the mainstream. These gadgets can generate detailed visual diaries, drive self-improvement, and help those with memory problems. But do you really want to record in the bathroom or a sensitive work meeting?

Assuming that many people don’t, computer scientists at Indiana University have developed software that uses computer vision techniques to automatically identify potentially confidential or embarrassing pictures taken with these devices and prevent them from being shared. A prototype of the software, called PlaceAvoider, will be presented at the Network and Distributed System Security Symposium in San Diego in February.

“There simply isn’t the time to manually curate the thousands of images these devices can generate per day, and in a socially networked world that might lead to the inadvertent sharing of photos you don’t want to share,” says Apu Kapadia, who co-leads the team that developed the system. “Or those who are worried about that might just not share their life-log streams, so we’re trying to help people exploit these applications to the full by providing them with a way to share safely.”

Kapadia’s group began by acknowledging that devising algorithms that can identify sensitive pictures solely on the basis of visual content is probably impossible, since the things that people do and don’t want to share can vary widely and may be difficult to recognize. They set about designing software that users train by taking pictures of the rooms they want to blacklist. PlaceAvoider then flags new pictures taken in those rooms so the user will review them.

The system uses an existing computer-vision algorithm called scale-invariant feature transform (SIFT) to pinpoint regions of high contrast around corners and edges within the training images that are likely to stay visually constant even in varying light conditions and from different perspectives. For each of these, it produces a “numerical fingerprint” consisting of 128 separate numbers relating to properties such as color and texture, as well as its position relative to other regions of the image. Since images are sometimes blurry, PlaceAvoider also looks at more general properties such as colors and textures of walls and carpets, and takes into account the sequence in which shots are taken.
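For the curious, the blacklisting step might look something like the sketch below. The brute-force matcher, Lowe's ratio test and match threshold are illustrative assumptions; PlaceAvoider's real classifier also folds in the color/texture features and shot-sequence cues described above.

```python
# Hedged sketch of SIFT-based room blacklisting (not PlaceAvoider's
# actual pipeline): enroll rooms from training photos, then flag new
# photos that match an enrolled room so the user can review them.
import cv2  # pip install opencv-python (SIFT is in the main module since 4.4)

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def fingerprint(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(gray, None)
    return descriptors  # N x 128: one 128-number fingerprint per keypoint

def in_blacklisted_room(photo_path, room_fingerprints, min_good=25):
    desc = fingerprint(photo_path)
    for room in room_fingerprints:
        pairs = matcher.knnMatch(desc, room, k=2)
        # Lowe's ratio test keeps only distinctive keypoint matches.
        good = [p for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) >= min_good:
            return True  # quarantine the photo rather than auto-share it
    return False
```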

In tests, the system accurately determined whether images from streams captured in the homes and workplaces of the researchers were from blacklisted rooms an average of 89.8 percent of the time.

PlaceAvoider is currently a research prototype; its various components have been written but haven’t been combined as a completed product, and researchers used a smartphone worn around the neck to take photos rather than an existing device meant for life-logging. If developed to work on a life-logging device, an interface could be designed so that PlaceAvoider can flag potentially sensitive images at the time they are taken or place them in quarantine to be dealt with later.

Read the entire article here.

Image: Google Glass. Courtesy of Google.

The Case for Less NSA Spying

Cryptographer and security expert Bruce Schneier makes an eloquent case for less intrusion by the National Security Agency (NSA) into the private lives of US citizens.

From Technology Review:

Bruce Schneier, a cryptographer and author on security topics, last month took on a side gig: helping the Guardian newspaper pore through documents purloined from the U.S. National Security Agency by contractor Edward Snowden, lately of Moscow.

In recent months that newspaper and other media have issued a steady stream of revelations, including the vast scale at which the NSA accesses major cloud platforms, taps calls and text messages of wireless carriers, and tries to subvert encryption.

This year Schneier is also a fellow at Harvard’s Berkman Center for Internet and Society. In a conversation there with David Talbot, chief correspondent of MIT Technology Review, Schneier provided perspective on the revelations to date—and hinted that more were coming.

Taken together, what do all of the Snowden documents leaked thus far reveal that we didn’t know already?

Those of us in the security community who watch the NSA had made assumptions along the lines of what Snowden revealed. But there was scant evidence and no proof. What these leaks reveal is how robust NSA surveillance is, how pervasive it is, and to what degree the NSA has commandeered the entire Internet and turned it into a surveillance platform.

We are seeing the NSA collecting data from all of the cloud providers we use: Google and Facebook and Apple and Yahoo, etc. We see the NSA in partnerships with all the major telcos in the U.S., and many others around the world, to collect data on the backbone. We see the NSA deliberately subverting cryptography, through secret agreements with vendors, to make security systems less effective. The scope and scale are enormous.

The only analogy I can give is that it’s like death. We all know how the story ends. But seeing the actual details, and seeing the actual programs, is very different than knowing it theoretically.

The NSA mission is national security. How is the snooping really affecting the average person?

The NSA’s actions are making us all less safe. They’re not just spying on the bad guys, they’re deliberately weakening Internet security for everyone—including the good guys. It’s sheer folly to believe that only the NSA can exploit the vulnerabilities they create. Additionally, by eavesdropping on all Americans, they’re building the technical infrastructure for a police state.

We’re not there yet, but already we’ve learned that both the DEA and the IRS use NSA surveillance data in prosecutions and then lie about it in court. Power without accountability or oversight is dangerous to society at a very fundamental level.

Are you now looking at NSA documents that nobody has yet seen? Do they shed any light on whether ordinary people, and not just figures like al-Qaeda terrorists and North Korean generals, have been targeted?

I am reviewing some of the documents Snowden has provided to the Guardian. Because of the delicate nature of this, I cannot comment on what I have seen. What I can do is write news stories based on what I have learned, and I am doing that with Glenn Greenwald and the Guardian. My first story will be published soon.

Will the new stories contain new revelations at the scale we’ve seen to date?

They might.

There have been many allusions to NSA efforts to put back doors in consumer products and software. What’s the reality?

The reality is that we don’t know how pervasive this is; we just know that it happens. I have heard several stories from people and am working to get them published. The way it seems to go, it’s never an explicit request from the NSA. It’s more of a joking thing: “So, are you going to give us a back door?” If you act amenable, then the conversation progresses. If you don’t, it’s completely deniable. It’s like going out on a date. Sex might never be explicitly mentioned, but you know it’s on the table.

But what sorts of access, to what products, has been requested and given? What crypto is, and isn’t, back-doored or otherwise subverted? What has, and hasn’t, been fixed?

Near as I can tell, the answer on what has been requested is everything: deliberate weakenings of encryption algorithms, deliberate weakenings of random number generations, copies of master keys, encryption of the session key with an NSA-specific key … everything.

NSA surveillance is robust. I have no inside knowledge of which products are subverted and which are not. That’s probably the most frustrating thing. We have no choice but to mistrust everything. And we have no way of knowing if we’ve fixed anything.

Read the entire article (and let the NSA read it too) here.

Digital Romance is Alive (and Texting)

The last fifty years have seen a tremendous shift in our personal communications. We have moved from voice conversations via rotary phones molded in bakelite to anytime, anywhere texting via smartphones and public-private multimedia exposés conducted via social media. During all of this upheaval the process of romance may have changed too, but it remains alive and well, albeit rather different.

From Technology Review:

Boy meets girl; they grow up and fall in love. But technology interferes and threatens to destroy their blissful coupledom. The destructive potential of communication technologies is at the heart of Stephanie Jones’s self-published romance novel Dreams and Misunderstandings. Two childhood sweethearts, Rick and Jessie, use text messages, phone calls, and e-mail to manage the distance between them as Jessie attends college on the East Coast of the United States and Rick moves between Great Britain and the American West. Shortly before a summer reunion, their technological ties fail when Jessie is hospitalized after a traumatic attack. During her recovery, she loses access to her mobile phone, computer, and e-mail account. As a result, the lovers do not reunite and spend years apart, both thinking they have been deserted.

Jones blames digital innovations for the misunderstandings that prevent Rick and Jessie’s reunion. It’s no surprise this theme runs through a romance novel: it reflects a wider cultural fear that these technologies impede rather than strengthen human connection. One of the Internet’s earliest boosters, MIT professor Sherry Turkle, makes similar claims in her most recent book, Alone Together: Why We Expect More of Technology and Less from Each Other. She argues that despite their potential, communication technologies are threatening human relationships, especially intimate ones, because they offer “substitutes for connecting with each other face-to-face.”

If the technology is not fraying or undermining existing relationships, stories abound of how it is creating false or destructive ones among young people who send each other sexually explicit cell-phone photos or “catfish,” luring the credulous into online relationships with fabricated personalities. In her recent book about hookup culture, The End of Sex, Donna Freitas indicts mobile technologies for the ease with which they allow the hookup to happen.

It is true that communication technologies have been reshaping love, romance, and sex throughout the 2000s. The Internet, sociologists Michael Rosenfeld and Reuben Thomas have found, is now the third most common way to find a partner, after meeting through friends or in bars, restaurants, and other public places. Twenty-two percent of heterosexual couples now meet online. In many ways, the Internet has replaced families, churches, schools, neighborhoods, civic groups, and workplaces as a venue for finding romance. It has become especially important for those who have a “thin market” of potential romantic partners—middle-aged straight people, gays and lesbians of all ages, the elderly, and the geographically isolated. But even for those who are not isolated from current or potential partners, cell phones, social-network sites, and similar forms of communication now often play a central role in the formation, maintenance, and dissolution of intimate relationships.

While these developments are significant, fears about what they mean do not accurately reflect the complexity of how the technology is really used. This is not surprising: concerns about technology as a threat to the social order, particularly in matters of sexuality and intimacy, go back much further than Internet dating and cell phones. From the boxcar (critics worried that it could transport those of loose moral character from town to town) to the automobile (which gave young people a private space for sexual activity) to reproductive technologies like in vitro fertilization, technological innovations that affect intimate life have always prompted angst. Often, these fears have resulted in what sociologists call a “moral panic”—an episode of exaggerated public anxiety over a perceived threat to social order.

Moral panic is an appropriate description for the fears expressed by Jones, Turkle, and Freitas about the role of technology in romantic relationships. Rather than driving people apart, technology-mediated communication is likely to have a “hyperpersonal effect,” communications professor Joseph Walther has found. That is, it allows people to be more intimate with one another—sometimes more intimate than would be sustainable face to face. “John,” a college freshman in Chicago whom I interviewed for research that I published in a 2009 book, Hanging Out, Messing Around and Geeking Out: Kids Living and Learning with New Media, highlights this paradox. He asks, “What happens after you’ve had a great online flirtatious chat … and then the conversation sucks in person?”

In the initial getting-to-know-you phase of a relationship, the asynchronous nature of written communication—texts, e-mails, and messages or comments on dating or social-network sites, as opposed to phone calls or video chatting—allows people to interact more continuously and to save face in potentially vulnerable situations. As people flirt and get to know each other this way, they can plan, edit, and reflect upon flirtatious messages before sending them. As John says of this type of communication, “I can think about things more. You can deliberate and answer however you want.”

As couples move into committed relationships, they use these communication technologies to maintain a digital togetherness regardless of their physical distance. With technologies like mobile phones and social-network sites, couples need never be truly apart. Often, this strengthens intimate relationships: in a study on couples’ use of technology in romantic relationships, Borae Jin and Jorge Peña found that couples who are in greater cell-phone contact exhibit less uncertainty about their relationships and higher levels of commitment. This type of communication becomes a form of “relationship work” in which couples trade digital objects of affection such as text messages or comments on online photos. As “Champ,” a 19-year-old in New York, told one of my collaborators on Hanging Out, Messing Around and Geeking Out about his relationship with his girlfriend, “You send a little text message—‘Oh I’m thinking of you,’ or something like that—while she’s working … Three times out of the day, you probably send little comments.”

To be sure, some of today’s fears are based on the perfectly accurate observation that communication technologies don’t always lend themselves to constructive relationship work. The public nature of Facebook posts, for example, appears to promote jealousy and decrease intimacy. When the anthropologist Ilana Gershon interviewed college students about their romantic lives, several told her that Facebook threatens their relationships. As one of her interviewees, “Cole,” said: “There is so much drama. It’s adding another stress.”

Read the entire article here.

Image courtesy of Google search.

Surveillance, British Style

While the revelations about the National Security Agency (NSA) snooping on private communications of U.S. citizens are extremely troubling, the situation could be much worse. Cast a sympathetic thought to Her Majesty’s subjects in the United Kingdom of Great Britain and Northern Ireland, where almost everyone eavesdrops on everyone else. While the island nation of 60 million covers roughly the same area as Michigan, it is swathed in over 4 million CCTV (closed-circuit television) surveillance cameras.

From Slate:

We adore the English here in the States. They’re just so precious! They call traffic circles “roundabouts,” prostitutes “prozzies,” and they have a queen. They’re ever so polite and carry themselves with such admirable poise. We love their accents so much, we use them in historical films to give them a bit more gravitas. (Just watch The Last Temptation of Christ to see what happens when we don’t: Judas doesn’t sound very intimidating with a Brooklyn accent.)

What’s not so cute is the surveillance society they’ve built—but the U.S. government seems pretty enamored with it.

The United Kingdom is home to an intense surveillance system. Most of the legal framework for this comes from the Regulation of Investigatory Powers Act, which dates all the way back to the year 2000. RIPA is meant to support criminal investigation, preventing disorder, public safety, public health, and, of course, “national security.” If this extremely broad application of law seems familiar, it should: The United States’ own PATRIOT Act is remarkably similar in scope and application. Why should the United Kingdom have the best toys, after all?

This is one of the problems with being the United Kingdom’s younger sibling. We always want what Big Brother has. Unless it’s soccer. Wiretaps, though? We just can’t get enough!

The PATRIOT Act, broad as it is, doesn’t match RIPA’s incredible wiretap allowances. In 1994, the United States passed the Communications Assistance for Law Enforcement Act, which mandated that service providers give the government “technical assistance” in the use of wiretaps. RIPA goes a step further and insists that wiretap capability be implemented right into the system. If you’re a service provider and can’t set up plug-and-play wiretap capability within a short time, Johnny English comes knocking at your door to say, ” ‘Allo, guvna! I ‘ear tell you ‘aven’t put in me wiretaps yet. Blimey! We’ll jus’ ‘ave to give you a hefty fine! Ods bodkins!” Wouldn’t that be awful (the law, not the accent)? It would, and it’s just what the FBI is hoping for. CALEA is getting a rewrite that, if it passes, would give the FBI that very capability.

I understand. Older siblings always get the new toys, and it’s only natural that we want to have them as well. But why does it have to be legal toys for surveillance? Why can’t it be chocolate? The United Kingdom enjoys chocolate that’s almost twice as good as American chocolate. Literally, they get 20 percent solid cocoa in their chocolate bars, while we suffer with a measly 11 percent. Instead, we’re learning to shut off the Internet for entire families.

That’s right. In the United Kingdom, if you are just suspected of having downloaded illegally obtained material three times (it’s known as the “three strikes” law), your Internet is cut off. Not just for you, but for your entire household. Life without the Internet, let’s face it, sucks. You’re not just missing out on videos of cats falling into bathtubs. You’re missing out on communication, jobs, and being a 21st-century citizen. Maybe this is OK in the United Kingdom because you can move up north, become a farmer, and enjoy a few pints down at the pub every night. Or you can just get a new ISP, because the United Kingdom actually has a competitive market for ISPs. The United States, as an homage, has developed the so-called “copyright alert system.” It works much the same way as the U.K. law, but it provides for six “strikes” instead of three and has a limited appeals system, in which the burden of proof lies on the suspected customer. In the United States, though, the rights-holders monitor users for suspected copyright infringement on their own, without the aid of ISPs. So far, we haven’t adopted the U.K. system in which ISPs are expected to monitor traffic and dole out their three strikes at their discretion.

These are examples of more targeted surveillance of criminal activities, though. What about untargeted mass surveillance? On June 21, one of Edward Snowden’s leaks revealed that the Government Communications Headquarters, the United Kingdom’s NSA equivalent, has been engaging in a staggering amount of data collection from civilians. This development generated far less fanfare than the NSA news, perhaps because the legal framework for this data collection has existed for a very long time under RIPA, and we expect surveillance in the United Kingdom. (Or maybe Americans were just living down to the stereotype of not caring about other countries.) The NSA models follow the GCHQ’s very closely, though, right down to the oversight, or lack thereof.

Media have labeled the FISA court that regulates the NSA’s surveillance as a “rubber-stamp” court, but it’s no match for the omnipotence of the Investigatory Powers Tribunal, which manages oversight for MI5, MI6, and the GCHQ. The Investigatory Powers Tribunal is exempt from the United Kingdom’s Freedom of Information Act, so it doesn’t have to share a thing about its activities (FISA apparently does not have this luxury—yet). On top of that, members of the tribunal are appointed by the queen. The queen. The one with the crown who has jubilees and a castle and probably a court wizard. Out of 956 complaints to the Investigatory Powers Tribunal, five have been upheld. Now that’s a rubber-stamp court we can aspire to!

Or perhaps not. The future of U.S. surveillance looks very grim if we’re set on following the U.K.’s lead. Across the United Kingdom, an estimated 4.2 million CCTV cameras, some with facial-recognition capability, keep watch on nearly the entire nation. (This can lead to some Monty Python-esque high jinks.) Washington, D.C., took its first step toward strong camera surveillance in 2008, when several thousand were installed ahead of President Obama’s inauguration.

Read the entire article here.

Image: Royal coat of arms of Queen Elizabeth II of the United Kingdom, as used in England and Wales, and Scotland. Courtesy of Wikipedia.

UnGoogleable: The Height of Cool

So, it is no longer a surprise — our digital lives are tracked, correlated, stored and examined. The NSA (National Security Agency) does it to determine if you are an unsavory type; Google does it to serve you better information and ads; and a whole host of other companies do it to sell you more things that you probably don’t need, at a price that you can’t afford. This of course raises deep and troubling questions about privacy. With this in mind, some are taking ownership of the issue and seeking to erase themselves from the vast digital Orwellian eye. To others, however, being untraceable online is a fashion statement rather than a victory for privacy.

From the Guardian:

“The chicest thing,” said fashion designer Phoebe Philo recently, “is when you don’t exist on Google. God, I would love to be that person!”

Philo, creative director of Céline, is not that person. As the London Evening Standard put it: “Unfortunately for the famously publicity-shy London designer – Paris born, Harrow-on-the-Hill raised – who has reinvented the way modern women dress, privacy may well continue to be a luxury.” Nobody who is oxymoronically described as “famously publicity-shy” will ever be unGoogleable. And if you’re not unGoogleable then, if Philo is right, you can never be truly chic, even if you were born in Paris. And if you’re not truly chic, then you might as well die – at least if you’re in fashion.

If she truly wanted to disappear herself from Google, Philo could start by changing her superb name to something less diverting. Prize-winning novelist AM Homes is an outlier in this respect. Google “am homes” and you’re in a world of blah US real estate rather than cutting-edge literature. But then Homes has thought a lot about privacy, having written a play about the most famously private person in recent history, JD Salinger, and had him threaten to sue her as a result.

And Homes isn’t the only one to make herself difficult to detect online. UnGoogleable bands are 10 a penny. The New York-based band !!! (known verbally as “chick chick chick” or “bang bang bang” – apparently “Exclamation point, exclamation point, exclamation point” proved too verbose for their meagre fanbase) must drive their business manager nuts. As must the band Merchandise, whose name – one might think – is a nominalist satire of commodification by the music industry. Nice work, Brad, Con, John and Rick.

If Philo renamed herself online as Google Maps or @, she might make herself more chic.

Welcome to anonymity chic – the antidote to an online world of exhibitionism. But let’s not go crazy: anonymity may be chic, but it is no business model. For years XXX Porn Site, my confusingly named alt-folk combo, has remained undiscovered. There are several bands called Girls (at least one of them including, confusingly, dudes) and each one has worried – after a period of chic iconoclasm – that such a putatively cool name means no one can find them online.

But still, maybe we should all embrace anonymity, given this week’s revelations that technology giants cooperated in Prism, a top-secret system at the US National Security Agency that collects emails, documents, photos and other material for secret service agents to review. It has also been a week in which Lindsay Mills, girlfriend of NSA whistleblower Edward Snowden, has posted on her blog (entitled: “Adventures of a world-traveling, pole-dancing super hero” with many photos showing her performing with the Waikiki Acrobatic Troupe) her misery that her fugitive boyfriend has fled to Hong Kong. Only a cynic would suggest that this blog post might help the Waikiki Acrobatic Troupe veteran’s career at this – serious face – difficult time. Better the dignity of silent anonymity than using the internet for that.

Furthermore, as social media diminishes us with not just information overload but the 24/7 servitude of liking, friending and status updating, this going under the radar reminds us that we might benefit from withdrawing the labour on which the founders of Facebook, Twitter and Instagram have built their billions. “Today our intense cultivation of a singular self is tied up in the drive to constantly produce and update,” argues Geert Lovink, research professor of interactive media at the Hogeschool van Amsterdam and author of Networks Without a Cause: A Critique of Social Media. “You have to tweet, be on Facebook, answer emails,” says Lovink. “So the time pressure on people to remain present and keep up their presence is a very heavy load that leads to what some call the psychopathology of online.”

Internet evangelists such as Clay Shirky and Charles Leadbeater hoped for something very different from this pathologised reality. In Shirky’s Here Comes Everybody and Leadbeater’s We-Think, both published in 2008, the nascent social media were to echo the anti-authoritarian, democratising tendencies of the 60s counterculture. Both men revelled in the fact that new web-based social tools helped single mothers looking online for social networks and pro-democracy campaigners in Belarus. Neither sufficiently realised that these tools could just as readily be co-opted by The Man. Or, if you prefer, Mark Zuckerberg.

Not that Zuckerberg is the devil in this story. Social media have changed the way we interact with other people in line with what the sociologist Zygmunt Bauman wrote in Liquid Love. For us “liquid moderns”, who have lost faith in the future, cannot commit to relationships and have few kinship ties, Zuckerberg created a new way of belonging, one in which we use our wits to create provisional bonds loose enough to stop suffocation, but tight enough to give a needed sense of security now that the traditional sources of solace (family, career, loving relationships) are less reliable than ever.

Read the entire article here.

Innocent Until Proven Guilty, But Always Under Suspicion

It is strange to see the reaction to a remarkable disclosure such as that by the leaker / whistleblower Edward Snowden about the National Security Agency (NSA) peering into all our daily, digital lives. One strange reaction comes from the political left: the left desires a broad and activist government, ready to protect us all, but decries the NSA’s snooping. Another odd reaction comes from the political right: the right wants government out of people’s lives, yet embraces the idea that the NSA should be looking for virtual skeletons inside people’s digital closets.

But let’s humanize this for a second. Somewhere inside the bowels of the NSA there is (or was) a person, or a small group of people, who actively determines what to look for in your digital communications trail. This person sets some parameters in a computer program and the technology does the rest, sifting through vast mountains of data looking for matches and patterns. Perhaps today that filter is set to match certain permutations of data: zone of originating call, region of the recipient, keywords or code words embedded in the data traffic. However, tomorrow a rather zealous NSA employee may well set the filter to look for different items: keywords highlighting a particular political affiliation, preference for certain TV shows or bars, likes and dislikes of certain foods or celebrities.
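
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of filter described above. Every field name, zone and watch term is invented for illustration; the real systems are classified and certainly far more elaborate.

    from dataclasses import dataclass

    @dataclass
    class CallRecord:
        origin_zone: str     # where the call originated
        recipient_zone: str  # where it was received
        keywords: set        # terms flagged in the associated traffic

    # Parameters an analyst might set; both values are invented
    ORIGIN_ZONES = {"zone-A", "zone-B"}
    WATCH_TERMS = {"codeword-1", "codeword-2"}

    def matches(record: CallRecord) -> bool:
        """True if the record trips either the zone or the keyword filter."""
        return (record.origin_zone in ORIGIN_ZONES
                or bool(record.keywords & WATCH_TERMS))

    records = [
        CallRecord("zone-A", "zone-C", {"weather"}),
        CallRecord("zone-D", "zone-E", {"codeword-2"}),
    ]

    print([r for r in records if matches(r)])  # both records match here

The unsettling part is how little separates today’s filter from tomorrow’s: swap the contents of WATCH_TERMS and the same machinery flags an entirely different set of citizens.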

We have begun the slide down a very dangerous, slippery slope that imperils our core civil liberties. The First Amendment protects our speech and assembly, but now we know that someone or some group may be evaluating the quality of that speech and determining a course of action if they disagree or if they find us assembling with others with whom they disagree. The Fourth Amendment prohibits unreasonable search — well, it looks like this one is falling by the wayside in light of the NSA program. We presume the secret FISA court, overseeing the secret program, determines in secret what may or may not be deemed “reasonable”.

Regardless of Edward Snowden’s motivations (and his girlfriend’s reaction), this event raises extremely serious issues that citizens must contemplate and openly discuss. It raises questions about the exercise of power, about government overreach and about the appropriate balance between security and privacy. It also raises questions about due process and about the long-held right that presumes us to be innocent first and above all else. It raises a fundamental question about U.S. law and the Constitution and to whom it does and does not apply.

The day before the PRISM program exploded in the national consciousness only a handful of people — in secret — were determining answers to these constitutional and societal questions. Now, thanks to Mr. Snowden we can all participate in that debate, and rightly so — while being watched of course.

From Slate:

Every April, I try to wade through mounds of paperwork to file my taxes. Like most Americans, I’m trying to follow the law and pay all of the taxes that I owe without getting screwed in the process. I try to make sure that every donation I make is backed by proof, every deduction is backed by logic and documentation that I’ll be able to make sense of seven years later. Because, like many Americans, I completely and utterly dread the idea of being audited. Not because I’ve done anything wrong, but the exact opposite. I know that I’m filing my taxes to the best of my ability and yet, I also know that if I became a target of interest from the IRS, they’d inevitably find some checkbox I forgot to check or some subtle miscalculation that I didn’t see. And so an audit is intimidating and scary not because I have something to hide but because proving oneself to be innocent takes time, money, effort, and emotional grit.

Sadly, I’m getting to experience this right now as Massachusetts refuses to believe that I moved to New York mid-last-year. It’s mind-blowing how hard it is to summon up the paperwork that “proves” to them that I’m telling the truth. When it was discovered that Verizon (and presumably other carriers) was giving metadata to government officials, my first thought was: Wouldn’t it be nice if the government would use that metadata to actually confirm that I was in NYC, not Massachusetts? But that’s the funny thing about how data is used by our current government. It’s used to create suspicion, not to confirm innocence.

The frameworks of “innocent until proven guilty” and “guilty beyond a reasonable doubt” are really, really important to civil liberties, even if they mean that some criminals get away. These frameworks put the burden on the powerful entity to prove that someone has done something wrong. Because it’s actually pretty easy to generate suspicion, even when someone is wholly innocent. And still, even with this protection, innocent people are sentenced to jail and even given the death penalty. Because if someone has a vested interest in you being guilty, it’s not impossible to paint that portrait, especially if you have enough data.

It’s disturbing to me how often I watch as someone’s likeness is constructed in ways that contort the image of who they are. This doesn’t require a high-stakes political issue. This is playground stuff. In the world of bullying, I’m astonished at how often schools misinterpret situations and activities to construct narratives of perpetrators and victims. Teens get really frustrated when they’re positioned as perpetrators, especially when they feel as though they’ve done nothing wrong. Once the stakes get higher, all hell breaks loose. In Sticks and Stones, Slate senior editor Emily Bazelon details how media and legal involvement in bullying cases means that they often spin out of control, such as they did in South Hadley. I’m still bothered by the conviction of Dharun Ravi in the highly publicized death of Tyler Clementi. What happens when people are tarred and feathered as symbols for being imperfect?

Of course, it’s not just one’s own actions that can be used against one’s likeness. Guilt-through-association is a popular American pastime. Remember how the media used Billy Carter to embarrass Jimmy Carter? Of course, it doesn’t take the media or require an election cycle for these connections to be made. Throughout school, my little brother had to bear the brunt of teachers who despised me because I was a rather rebellious student. So when the Boston Marathon bombing occurred, it didn’t surprise me that the media went hogwild looking for any connection to the suspects. Over and over again, I watched as the media took friendships and song lyrics out of context to try to cast the suspects as devils. By all accounts, it looks as though the brothers are guilty of what they are accused of, but that doesn’t make their friends and other siblings evil or justify the media’s decision to portray the whole lot in such a negative light.

So where does this get us? People often feel immune from state surveillance because they’ve done nothing wrong. This rhetoric is perpetuated on American TV. And yet the same media who tells them they have nothing to fear will turn on them if they happen to be in close contact with someone who is of interest to—or if they themselves are the subject of—state interest. And it’s not just about now, but it’s about always.

And here’s where the implications are particularly devastating when we think about how inequality, racism, and religious intolerance play out. As a society, we generate suspicion of others who aren’t like us, particularly when we believe that we’re always under threat from some outside force. And so the more that we live in doubt of other people’s innocence, the more that we will self-segregate. And if we’re likely to believe that people who aren’t like us are inherently suspect, we won’t try to bridge those gaps. This creates societal ruptures and undermines any ability to create a meaningful republic. And it reinforces any desire to spy on the “other” in the hopes of finding something that justifies such an approach. But, like I said, it doesn’t take much to make someone appear suspect.

Read the entire article here.

Image: U.S. Constitution. Courtesy of Wikipedia.

The Internet of Things and Your (Lack of) Privacy

Ubiquitous connectivity for, and between, individuals and businesses is widely held to be beneficial for all concerned. We can connect rapidly and reliably with family, friends and colleagues from almost anywhere to anywhere via a wide array of internet-enabled devices. Yet, as these devices become more powerful and interconnected, and enabled with location-based awareness, such as GPS (Global Positioning System) services, we are likely to face an increasingly acute dilemma — connectedness or privacy?

From the Guardian:

The internet has turned into a massive surveillance tool. We’re constantly monitored on the internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated – sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behaviour, it’s still only behaviour that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is internet-enabled. Soon there will be internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be internet-connected tags on our clothing. In its extreme, everything can be connected to the internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies – and anyone they decide to sell the data to – to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s internet-enabled glasses – Google Glass – are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behaviour and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals – not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyse this massive data stream will improve.
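
To make that asymmetry concrete, here is a toy sketch assuming a bare table of purchase records; every name, item and city is invented.

    # A canned query over recorded facts: easy to pre-program.
    purchases = [
        {"person": "alice", "item": "pressure cooker", "city": "Boston"},
        {"person": "bob",   "item": "garden hose",     "city": "Denver"},
    ]

    hits = [p for p in purchases
            if p["item"] == "pressure cooker" and p["city"] == "Boston"]
    print(hits)  # what was bought, and where

    # The observer's questions -- why it was bought, what was done with
    # it -- cannot be queried, because they were never recorded.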

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days – and nights – with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will all be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Read the entire article here.

Image: Big Brother, 1984. Poster. Courtesy of Telegraph.

The Digital Afterlife and i-Death

Leave it to Google to help you auto-euthanize and die digitally. The presence of our online selves after death was of limited concern until recently. However, with the explosion of online media and social networks our digital tracks remain preserved and scattered across drives and backups in distributed, anonymous data centers. Physical death does not change this.

[A case in point: your friendly editor at theDiagonal was recently asked to befriend a colleague via LinkedIn. All well and good, except that the colleague had passed away two years earlier.]

So, armed with Google’s new Inactive Account Manager, death — at least online — may be just a couple of clicks away. By corollary, it would be a small leap indeed to imagine an enterprising company charging an annual fee to maintain a dearly departed member’s digital afterlife ad infinitum.

From the Independent:

The search engine giant Google has announced a new feature designed to allow users to decide what happens to their data after they die.

The feature, which applies to the Google-run email system Gmail as well as Google Plus, YouTube, Picasa and other tools, represents an attempt by the company to be the first to deal with the sensitive issue of data after death.

In a post on the company’s Public Policy Blog Andreas Tuerk, Product Manager, writes: “We hope that this new feature will enable you to plan your digital afterlife – in a way that protects your privacy and security – and make life easier for your loved ones after you’re gone.”

Google says that the new account management tool will allow users to opt to have their data deleted after three, six, nine or 12 months of inactivity. Alternatively users can arrange for certain contacts to be sent data from some or all of their services.

The California-based company did however stress that individuals listed to receive data in the event of ‘inactivity’ would be warned by text or email before the information was sent.

Social Networking site Facebook already has a function that allows friends and family to “memorialize” an account once its owner has died.

Read the entire article following the jump.

Tracking and Monetizing Your Every Move

Your movements are valuable — but not in the way you may think. Mobile technology companies are moving rapidly to exploit the vast amount of data collected from the billions of mobile devices. This data is extremely valuable to an array of organizations, including urban planners, retailers, and travel and transportation marketers. And, of course, this raises significant privacy concerns. Many believe that when the data is used collectively it preserves user anonymity. However, if correlated with other data sources it could be used to discover a range of unintended and previously private information, relating both to individuals and to groups.

From MIT Technology Review:

Wireless operators have access to an unprecedented volume of information about users’ real-world activities, but for years these massive data troves were put to little use other than for internal planning and marketing.

This data is under lock and key no more. Under pressure to seek new revenue streams (see “AT&T Looks to Outside Developers for Innovation”), a growing number of mobile carriers are now carefully mining, packaging, and repurposing their subscriber data to create powerful statistics about how people are moving about in the real world.

More comprehensive than the data collected by any app, this is the kind of information that, experts believe, could help cities plan smarter road networks, businesses reach more potential customers, and health officials track diseases. But even if shared with the utmost of care to protect anonymity, it could also present new privacy risks for customers.

Verizon Wireless, the largest U.S. carrier with more than 98 million retail customers, shows how such a program could come together. In late 2011, the company changed its privacy policy so that it could share anonymous and aggregated subscriber data with outside parties. That made possible the launch of its Precision Market Insights division last October.

The program, still in its early days, is creating a natural extension of what already happens online, with websites tracking clicks and getting a detailed breakdown of where visitors come from and what they are interested in.

Similarly, Verizon is working to sell demographics about the people who, for example, attend an event, how they got there or the kinds of apps they use once they arrive. In a recent case study, says program spokeswoman Debra Lewis, Verizon showed that fans from Baltimore outnumbered fans from San Francisco by three to one inside the Super Bowl stadium. That information might have been expensive or difficult to obtain in other ways, such as through surveys, because not all the people in the stadium purchased their own tickets and had credit card information on file, nor had they all downloaded the Super Bowl’s app.

Other telecommunications companies are exploring similar ideas. In Europe, for example, Telefonica launched a similar program last October, and the head of this new business unit gave the keynote address at a new industry conference on “big data monetization in telecoms” in January.

“It doesn’t look to me like it’s a big part of their [telcos’] business yet, though at the same time it could be,” says Vincent Blondel, an applied mathematician who is now working on a research challenge from the operator Orange to analyze two billion anonymous records of communications between five million customers in Africa.

The concerns about making such data available, Blondel says, are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups (see “How Access to Location Data Could Trample Your Privacy”).
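
A minimal sketch of that linkage risk, with invented traces: an adversary matches a couple of externally known sightings against a set of “anonymous” pseudonymous location records.

    # "Anonymized" traces: pseudonym -> set of (time, cell tower) points
    anonymous_traces = {
        "user-1017": {("09:00", "tower-12"), ("13:00", "tower-44"),
                      ("19:00", "tower-07")},
        "user-2203": {("09:00", "tower-31"), ("13:00", "tower-44"),
                      ("19:00", "tower-52")},
    }

    # Two sightings learned elsewhere (a geotagged tweet, a credit card slip)
    known_sightings = {("09:00", "tower-12"), ("19:00", "tower-07")}

    # Any pseudonym whose trace contains every known sighting is a match
    matches = [uid for uid, trace in anonymous_traces.items()
               if known_sightings <= trace]
    print(matches)  # ['user-1017'] -- two outside facts were enough

Published research on real mobility datasets points the same way: a handful of spatio-temporal points is typically enough to single out an individual among millions.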

Already, some startups are building businesses by aggregating this kind of data in useful ways, beyond what individual companies may offer. For example, AirSage, an Atlanta, Georgia, company founded in 2000, has spent much of the last decade negotiating what it says are exclusive rights to put its hardware inside the firewalls of two of the top three U.S. wireless carriers and collect, anonymize, encrypt, and analyze cellular tower signaling data in real time. Since AirSage solidified the second of these major partnerships about a year ago (it won’t specify which specific carriers it works with), it has been processing 15 billion locations a day and can account for movement of about a third of the U.S. population in some places to within less than 100 meters, says marketing vice president Andrea Moe.

As users’ mobile devices ping cellular towers in different locations, AirSage’s algorithms look for patterns in that location data—mostly to help transportation planners and traffic reports, so far. For example, the software might infer that the owners of devices that spend time in a business park from nine to five are likely at work, so a highway engineer might be able to estimate how much traffic on the local freeway exit is due to commuters.
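
Here is a deliberately simplified sketch of that inference, assuming nothing more than (hour, tower) pings for a single device; the thresholds and tower names are invented.

    from collections import Counter

    # (hour of day, tower id) pings for one device, invented for the example
    pings = [(9, "biz-park"), (11, "biz-park"), (15, "biz-park"),
             (22, "suburb-3"), (2, "suburb-3"), (7, "suburb-3")]

    # The tower seen most during business hours is probably "work" ...
    work = Counter(t for h, t in pings if 9 <= h <= 17).most_common(1)[0][0]
    # ... and the tower seen most overnight is probably "home"
    home = Counter(t for h, t in pings if h >= 21 or h <= 6).most_common(1)[0][0]

    print("likely work:", work)  # biz-park
    print("likely home:", home)  # suburb-3

From there, the highway engineer’s commuter estimate is a simple join: count the devices whose inferred home and work locations straddle the exit in question.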

Other companies are starting to add additional layers of information beyond cellular network data. One customer of AirSage is a relatively small San Francisco startup, Streetlight Data, which recently raised $3 million in financing backed partly by the venture capital arm of Deutsche Telekom.

Streetlight buys both cellular network and GPS navigation data that can be mined for useful market research. (The cellular data covers a larger number of people, but the GPS data, collected by mapping software providers, can improve accuracy.) Today, many companies already build massive demographic and behavioral databases on top of U.S. Census information about households to help retailers choose where to build new stores and plan marketing budgets. But Streetlight’s software, with interactive, color-coded maps of neighborhoods and roads, offers more practical information. It can be tied to the demographics of people who work nearby, commute through on a particular highway, or are just there for a visit, rather than just supplying information about who lives in the area.

Read the entire article following the jump.

Image: mobile devices. Courtesy of W3.org

Technology and the Exploitation of Children

Many herald the forward motion of technological innovation as progress. In many cases the momentum does genuinely seem to carry us towards a better place; it broadly alleviates pain and suffering; it generally delivers more and better nutrition to our bodies and our minds. Yet for all the positive steps, this progress is often accompanied by retrograde leaps — often paradoxical ones. Particularly disturbing is the relative ease with which technology allows us — the responsible adults — to sexualise and exploit children. Now, this is certainly not a new phenomenon, but our technical prowess certainly makes this problem more pervasive. A case in point: the Instagram beauty pageant. Move over, Honey Boo-Boo.

From the Washington Post:

The photo-sharing site Instagram has become wildly popular as a way to trade pictures of pets and friends. But a new trend on the site is making parents cringe: beauty pageants, in which thousands of young girls — many appearing no older than 12 or 13 — submit photographs of themselves for others to judge.

In one case, the mug shots of four girls, middle-school-age or younger, have been pitted against each other. One is all dimples, wearing a hair bow and a big, toothy grin. Another is trying out a pensive, sultry look.

Any of Instagram’s 30 million users can vote on the appearance of the girls in a comments section of the post. Once a girl’s photo receives a certain number of negative remarks, the pageant host, who can remain anonymous, can update it with a big red X or the word “OUT” scratched across her face.

“U.G.L.Y,” wrote one user about a girl, who submitted her photo to one of the pageants identified on Instagram by the keyword “#beautycontest.”

The phenomenon has sparked concern among parents and child safety advocates who fear that young girls are making themselves vulnerable to adult strangers and participating in often cruel social interactions at a sensitive period of development.

But the contests are the latest example of how technology is pervading the lives of children in ways that parents and teachers struggle to understand or monitor.

“What started out as just a photo-sharing site has become something really pernicious for young girls,” said Rachel Simmons, author of “Odd Girl Out” and a speaker on youth and girls. “What happened was, like most social media experiences, girls co-opted it and imposed their social life on it to compete for attention and in a very exaggerated way.”

It’s difficult to track when the pageants began and who initially set them up. A keyword search of #beautycontest turned up 8,757 posts, while #rateme had 27,593 photo posts. Experts say those two terms represent only a fraction of the activity. Contests are also appearing on other social media sites, including Tumblr and Snapchat — mobile apps that have grown in popularity among youth.

Facebook, which bought Instagram last year, declined to comment. The company has a policy of not allowing anyone under the age of 13 to create an account or share photos on Instagram. But Facebook has been criticized for allowing pre-teens to get around the rule — two years ago, Consumer Reports estimated their presence on Facebook was 7.5 million. (Washington Post Co. Chairman Donald Graham sits on Facebook’s board of directors.)

Read the entire article after the jump.

Image: Instagram. Courtesy of Wired.

You Are a Google Datapoint

At first glance Google’s aim to make all known information accessible and searchable seems to be a fundamentally worthy goal, and in keeping with its “Don’t Be Evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.

However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.

Google continues to extend its technology boundaries and its vast indexed database of information. Now with the introduction of Google Glass the company extends its domain to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would allow a user a fair degree of control over how and what data is collected and shared with Google, they do not address those being observed.

So, beware the next time you are sitting in a Starbucks or shopping in a mall or riding the subway, you may be being recorded and your digital essence distributed over the internet. Perhaps, someone somewhere will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization is no less troubling. Remember, to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).

From the Telegraph:

In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.

So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?

There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.

Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.

The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.

Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.

Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.

More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.

It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.

They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.

The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.

Read the entire article after the jump.

Image: Google’s Sergey Brin wearing Google Glass. Courtesy of CBS News.

Big Brother is Mapping You

One hopes that Google’s intention to “organize the world’s information” will remain benign for the foreseeable future. Yet, as more and more of our surroundings and moves are mapped and tracked online, and increasingly offline, it would be wise to remain ever vigilant. Many put up with the encroachment of advertisers and promoters into almost every facet of their daily lives as a necessary, modern evil. But where is the dividing line that separates an ignorable irritation from an intrusion of privacy and a grab for control? For the paranoid amongst us, it may only be a matter of time before our digital footprints come under the increasing scrutiny, and control, of organizations with grander designs.

From the Guardian:

Eight years ago, Google bought a cool little graphics business called Keyhole, which had been working on 3D maps. Along with the acquisition came Brian McClendon, aka “Bam”, a tall and serious Kansan who in a previous incarnation had supplied high-end graphics software that Hollywood used in films including Jurassic Park and Terminator 2. It turned out to be a very smart move.

Today McClendon is Google’s Mr Maps – presiding over one of the fastest-growing areas in the search giant’s business, one that has recently left arch-rival Apple red-faced and threatens to make Google the most powerful company in mapping the world has ever seen.

Google is throwing its considerable resources into building arguably the most comprehensive map ever made. It’s all part of the company’s self-avowed mission to organize all the world’s information, says McClendon.

“You need to have the basic structure of the world so you can place the relevant information on top of it. If you don’t have an accurate map, everything else is inaccurate,” he says.

It’s a message that will make Apple cringe. Apple triggered howls of outrage when it pulled Google Maps off the latest iteration of its iPhone software for its own bug-riddled and often wildly inaccurate map system. “We screwed up,” Apple boss Tim Cook said earlier this week.

McClendon won’t comment on when and if Apple will put Google’s application back on the iPhone. Talks are ongoing and he’s at pains to point out what a “great” product the iPhone is. But when – or if – Apple caves, it will be a huge climbdown. In the meantime, what McClendon really cares about is building a better map.

This not the first time Google has made a landgrab in the real world, as the publishing industry will attest. Unhappy that online search was missing all the good stuff inside old books, Google – controversially – set about scanning the treasures of Oxford’s Bodleian library and some of the world’s other most respected collections.

Its ambitions in maps may be bigger, more far reaching and perhaps more controversial still. For a company developing driverless cars and glasses that are wearable computers, maps are a serious business. There’s no doubting the scale of McClendon’s vision. His license plate reads: ITLLHPN.

Until the 1980s, maps were still largely a pen and ink affair. Then mainframe computers allowed the development of geographic information system software (GIS), which was able to display and organise geographic information in new ways. By 2005, when Google launched Google Maps, computing power allowed GIS to go mainstream. Maps were about to change the way we find a bar, a parcel or even a story. Washington DC’s homicidewatch.org, for example, uses Google Maps to track and follow deaths across the city. Now the rise of mobile devices has pushed mapping into everyone’s hands and to the front line in the battle of the tech giants.

It’s easy to see why Google is so keen on maps. Some 20% of Google’s queries are now “location specific”. The company doesn’t split the number out but on mobile the percentage is “even higher”, says McClendon, who believes maps are set to unfold themselves ever further into our lives.

Google’s approach to making better maps is about layers. Starting with an aerial view, in 2007 Google added Street View, an on-the-ground photographic map snapped from its own fleet of specially designed cars that now covers 5 million of the 27.9 million miles of roads on Google Maps.

Google isn’t stopping there. The company has put cameras on bikes to cover harder-to-reach trails, and you can tour the Great Barrier Reef thanks to diving mappers. Luc Vincent, the Google engineer known as “Mr Street View”, carried a 40lb pack of snapping cameras down to the bottom of the Grand Canyon and then back up along another trail as fellow hikers excitedly shouted “Google, Google” at the man with the space-age backpack. McClendon has also played his part. He took his camera to Antarctica, taking 500 or more photos of a penguin-filled island to add to Google Maps. “The penguins were pretty oblivious. They just don’t care about people,” he says.

Now the company has projects called Ground Truth, which corrects errors online, and Map Maker, a service that lets people make their own maps. In the western world the product has been used to add a missing road or correct a one-way street that is pointing the wrong way, and to generally improve what’s already there. In Africa, Asia and other less well covered areas of the world, Google is – literally – helping people put themselves on the map.

In 2008, it could take six to 18 months for Google to update a map. The company would have to go back to the firm that provided its map information and get them to check the error, correct it and send it back. “At that point we decided we wanted to bring that information in house,” says McClendon. Google now updates its maps hundreds of times a day. Anyone can correct errors with roads signs or add missing roads and other details; Google double checks and relies on other users to spot mistakes.

Thousands of people use Google’s Map Maker daily to recreate their world online, says Michael Weiss-Malik, engineering director at Google Maps. “We have some Pakistanis living in the UK who have basically built the whole map,” he says. Using aerial shots and local information, people have created the most detailed, and certainly most up-to-date, maps of cities like Karachi that have probably ever existed. Regions of Africa and Asia have been added by map-mad volunteers.

Read the entire article following the jump.

Beware, Big Telecom is Watching You

Facebook trawls your profile, status and friends to target ads more effectively. It also allows third parties, for a fee, to mine mountains of aggregated data for juicy analyses. Many online companies do the same. However, some companies are taking this to a whole new and very personal level.

Here’s an example from Germany. Politician Malte Spitz gathered six months of his personal geolocation data from his mobile phone company. Then, he combined this data with his activity online, such as Twitter updates, blog entries and website visits. The interactive results seen here, plotted over time and space, show the detailed extent to which an individual’s life is being tracked and recorded.
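
A minimal sketch of the merge behind such a visualization, assuming two already-parsed record sets; every timestamp and record below is invented.

    from datetime import datetime

    locations = [("2009-08-31 08:12", "Berlin"),
                 ("2009-08-31 14:40", "Erlangen")]
    online = [("2009-08-31 09:05", "tweet: boarding the train"),
              ("2009-08-31 15:02", "blog post published")]

    def parse(ts):
        return datetime.strptime(ts, "%Y-%m-%d %H:%M")

    # Tag each record with its source, then interleave chronologically
    timeline = sorted([(parse(t), "location", d) for t, d in locations] +
                      [(parse(t), "online", d) for t, d in online])

    for ts, source, detail in timeline:
        print(ts, source, detail)

Sorted together, the two streams read as a single narrated day, which is what makes the combination so much more revealing than either source alone.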

From Zeit Online:

By pushing the play button, you will set off on a trip through Malte Spitz’s life. The speed controller allows you to adjust how fast you travel, the pause button will let you stop at interesting points. In addition, a calendar at the bottom shows when he was in a particular location and can be used to jump to a specific time period. Each column corresponds to one day.

Not surprisingly, Spitz had to sue his phone company, Deutsche Telekom, to gain access to his own phone data.

From TED:

On August 31, 2009, politician Malte Spitz traveled from Berlin to Erlangen, sending 29 text messages as he traveled. On November 5, 2009, he rocked out to U2 at the Brandenburg Gate. On January 10, 2010, he made 10 outgoing phone calls while on a trip to Dusseldorf, and spent 22 hours, 53 minutes and 57 seconds of the day connected to the internet.

How do we know all this? By looking at a detailed, interactive timeline of Spitz’s life, created using information obtained from his cell phone company, Deutsche Telekom, between September 2009 and February 2010.

In an impassioned talk given at TEDGlobal 2012, Spitz, a member of Germany’s Green Party, recalls his multiple-year quest to receive this data from his phone company. And he explains why he decided to make this shockingly precise log into public information in the newspaper Die Zeit – to sound a warning bell of sorts.

“If you have access to this information, you can see what your society is doing,” says Spitz. “If you have access to this information, you can control your country.”

Read the entire article after the jump.

Keeping Secrets in the Age of Technology

From the Guardian:

With the benefit of hindsight, life as I knew it came to an end in late 1994, round Seal’s house. We used to live round the corner from each other and if he was in between supermodels I’d pop over to watch a bit of Formula 1 on his pop star-sized flat-screen telly. I was probably on the sofa reading Vogue (we had that in common, albeit for different reasons) while he was “mucking about” on his computer (then the actual technical term for anything non-work-related, vis-à-vis computers), when he said something like: “Kate, have a look at this thing called the World Wide Web. It’s going to be massive!”

I can’t remember what we looked at then, at the tail-end of what I now nostalgically refer to as “The Tipp-Ex Years” – maybe The Well, accessed by Web Crawler – but whatever it was, it didn’t do it for me: “Information dual carriageway!” I said (trust me, this passed for witty in the 1990s). “Fancy a pizza?”

So there we are: Seal introduced me to the interweb. And although I remain a bit of a petrol-head and (nothing if not brand-loyal) own an iPad, an iPhone and two Macs, I am still basically rubbish at “modern”. Pre-Leveson, when I was writing a novel involving a phone-hacking scandal, my only concern was whether or not I’d come up with a plot that was: a) vaguely plausible and/or interesting, and b) technically possible. (A very nice man from Apple assured me that it was.)

I would gladly have used semaphore, telegrams or parchment scrolls delivered by magic owls to get the point across. Which is that ever since people started chiselling cuneiform on to big stones they’ve been writing things that will at some point almost certainly be misread and/or misinterpreted by someone else. But the speed of modern technology has made the problem rather more immediate. Confusing your public tweets with your Direct Messages and begging your young lover to take-me-now-cos-im-gagging-4-u? They didn’t have to worry about that when they were issuing decrees at Memphis on a nice bit of granodiorite.

These days the mis-sent (or indeed misread) text is still a relatively intimate intimation of an affair, while the notorious “reply all” email is the stuff of tired stand-up comedy. The boundary-less tweet is relatively new – and therefore still entertaining – territory, as evidenced most recently by American model Melissa Stetten, who, sitting on a plane next to a (married) soap actor called Brian Presley, tweeted as he appeared to hit on her.

Whenever and wherever words are written, somebody, somewhere will want to read them. And if those words are not meant to be read they very often will be – usually by the “wrong” people. A 2010 poll announced that six in 10 women would admit to regularly snooping on their partner’s phone, Twitter, or Facebook, although history doesn’t record whether the other four in 10 were then subjected to lie-detector tests.

Our compelling, self-sabotaging desire to snoop is usually informed by… well, if not paranoia, exactly, then insecurity, which in turn is more revealing about us than the words we find. If we seek out bad stuff – in a partner’s text, an ex’s Facebook status or best friend’s Twitter timeline – we will surely find it. And of course we don’t even have to make much effort to find the stuff we probably oughtn’t. Employers now routinely snoop on staff, and while this says more about the paranoid dynamic between boss classes and foot soldiers than we’d like, I have little sympathy for the employee who tweets their hangover status with one hand while phoning in “sick” with the other.

Take Google Maps: the more information we are given, the more we feel we’ve been gifted a licence to snoop. It’s the kind of thing we might be protesting about on the streets of Westminster were we not too busy invading our own privacy, as per the recent tweet-spat between Mr and Mrs Ben Goldsmith.

Technology feeds an increasing yet non-specific social unease – and that uneasiness inevitably trickles down to our more intimate relationships. For example, not long ago, I was blown out via text for a lunch date with a friend (“arrrgh, urgent deadline! SO SOZ!”), whose “urgent deadline” (their Twitter timeline helpfully revealed) turned out to involve lunch with someone else.

Did I like my friend any less when I found this out? Well yes, a tiny bit – until I acknowledged that I’ve done something similar 100 times but was “cleverer” at covering my tracks. Would it have been easier for my friend to tell me the truth? Arguably. Should I ever have looked at their Twitter timeline? Well, I had sought to confirm my suspicion that they weren’t telling the truth, so given that my paranoia gremlin was in charge it was no wonder I didn’t like what it found.

It is, of course, the paranoia gremlin that is in charge when we snoop – or are snooped upon – by partners, while “trust” is far more easily undermined than it has ever been. The randomly stumbled-across text (except they never are, are they?) is our generation’s lipstick-on-the-collar. And while Foursquare may say that your partner is in the pub, is that enough to stop you checking their Twitter/Facebook/emails/texts?

Read the entire article after the jump.

You as a Data Strip Mine: What Facebook Knows

China, India, Facebook. With its 900 million member-citizens Facebook is the third largest country on the planet, ranked by population. This country has some benefits: no taxes, freedom to join and/or leave, and of course there’s freedom to assemble and a fair degree of free speech.

However, Facebook is no democracy. In fact, its data privacy policies and personal data mining might well put it in the same league as the Stalinist Soviet Union or cold war East Germany.

A fascinating article by Tom Simonite excerpted below sheds light on the data collection and data mining initiatives underway or planned at Facebook.

From Technology Review:

If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and would rather talk about advancing social science than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)

This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of thejournal.ie / abracapocus_pocuscadabra (Flickr).[end-div]

Google: Please Don’t Be Evil

Google has been variously praised and derided for its corporate mantra, “Don’t Be Evil”. For those who like to believe that Google has good intentions, recent events strain that belief. The company was found to have been snooping on and collecting data from personal Wi-Fi routers. Was this the work of a lone wolf or a matter of corporate strategy?

[div class=attrib]From Slate:[end-div]

Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would assuage our fears about the company that manages so much of our personal data?

These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.

While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use the system to speed up their location-monitoring systems. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
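As an aside, the wardriving idea described above is simple enough to sketch. The toy example below is mine, not Google’s; every BSSID, coordinate and signal strength is invented for illustration. Still, it shows how a phone without a GPS chip might approximate its position from the routers it can hear, given a pre-built survey database.

```python
# A minimal sketch of Wi-Fi positioning against a survey database.
# All data here is hypothetical; production systems are far more
# sophisticated (signal propagation models, outlier rejection, etc.).

# Hypothetical survey database: router MAC address (BSSID) -> (lat, lon)
WIFI_DB = {
    "00:11:22:33:44:55": (40.7411, -73.9897),
    "66:77:88:99:aa:bb": (40.7413, -73.9901),
    "cc:dd:ee:ff:00:11": (40.7409, -73.9894),
}

def estimate_position(scan):
    """Estimate (lat, lon) from a Wi-Fi scan.

    `scan` maps visible BSSIDs to received signal strength (RSSI, in dBm).
    Stronger signals suggest closer routers, so they get more weight.
    """
    total = lat_sum = lon_sum = 0.0
    for bssid, rssi in scan.items():
        if bssid not in WIFI_DB:
            continue  # router not in the survey database
        weight = 10 ** (rssi / 10.0)  # dBm back to linear power
        lat, lon = WIFI_DB[bssid]
        lat_sum += weight * lat
        lon_sum += weight * lon
        total += weight
    if total == 0:
        return None  # no known routers in range
    return (lat_sum / total, lon_sum / total)

# A phone that sees three known routers at varying strengths:
print(estimate_position({
    "00:11:22:33:44:55": -45,  # strong, probably close by
    "66:77:88:99:aa:bb": -70,
    "cc:dd:ee:ff:00:11": -80,
}))
```

Note that none of this requires reading anyone’s traffic: the location database only needs each router’s identifier, which is why recording payload data, described next, was such a different matter.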

But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was within the vicinity, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.

Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.

Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?

Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.

Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Fastcompany.[end-div]

Your Tween Online

Many parents with children in the pre-teenage years probably have a containment policy that keeps them off adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.

[div class=attrib]From the WSJ:[end-div]

Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.

Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.

It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.

Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.

Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.

As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.

Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.

Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.

“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.

Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.

[div class=attrib]Read the entire article after the jump.[end-div]

You Are What You Share

The old maxim went something like “you are what you eat”. Well, in the early 21st century it has been usurped by “you are what you share online (knowingly or not)”.

[div class=attrib]From the Wall Street Journal:[end-div]

Not so long ago, there was a familiar product called software. It was sold in stores, in shrink-wrapped boxes. When you bought it, all that you gave away was your credit card number or a stack of bills.

Now there are “apps”—stylish, discrete chunks of software that live online or in your smartphone. To “buy” an app, all you have to do is click a button. Sometimes they cost a few dollars, but many apps are free, at least in monetary terms. You often pay in another way. Apps are gateways, and when you buy an app, there is a strong chance that you are supplying its developers with one of the most coveted commodities in today’s economy: personal data.

Some of the most widely used apps on Facebook—the games, quizzes and sharing services that define the social-networking site and give it such appeal—are gathering volumes of personal information.

A Wall Street Journal examination of 100 of the most popular Facebook apps found that some seek the email addresses, current location and sexual preference, among other details, not only of app users but also of their Facebook friends. One Yahoo service powered by Facebook requests access to a person’s religious and political leanings as a condition for using it. The popular Skype service for making online phone calls seeks the Facebook photos and birthdays of its users and their friends.

Yahoo and Skype say that they seek the information to customize their services for users and that they are committed to protecting privacy. “Data that is shared with Yahoo is managed carefully,” a Yahoo spokeswoman said.

The Journal also tested its own app, “WSJ Social,” which seeks data about users’ basic profile information and email and requests the ability to post an update when a user reads an article. A Journal spokeswoman says that the company asks only for information required to make the app work.

This appetite for personal data reflects a fundamental truth about Facebook and, by extension, the Internet economy as a whole: Facebook provides a free service that users pay for, in effect, by providing details about their lives, friendships, interests and activities. Facebook, in turn, uses that trove of information to attract advertisers, app makers and other business opportunities.

Up until a few years ago, such vast and easily accessible repositories of personal information were all but nonexistent. Their advent is driving a profound debate over the definition of privacy in an era when most people now carry information-transmitting devices with them all the time.

Capitalizing on personal data is a lucrative enterprise. Facebook is in the midst of planning for an initial public offering of its stock in May that could value the young company at more than $100 billion on the Nasdaq Stock Market.

Facebook requires apps to ask permission before accessing a user’s personal details. However, a user’s friends aren’t notified if information about them is used by a friend’s app. An examination of the apps’ activities also suggests that Facebook occasionally isn’t enforcing its own rules on data privacy.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Facebook is watching and selling you. Courtesy of Daily Mail.[end-div]

Your Guide to Online Morality

By most estimates Facebook has around 800 million registered users, which means that its policies governing what is or is not appropriate user content deserve detailed scrutiny. A look at Facebook’s recently publicized guidelines for sexual and violent content shows a somewhat peculiar view of morality. It’s a view that some characterize as typical American prudishness coupled with a blind eye towards violence.

[div class=attrib]From the Guardian:[end-div]

Facebook bans images of breastfeeding if nipples are exposed – but allows “graphic images” of animals if shown “in the context of food processing or hunting as it occurs in nature”. Equally, pictures of bodily fluids – except semen – are allowed as long as no human is included in the picture; but “deep flesh wounds” and “crushed heads, limbs” are OK (“as long as no insides are showing”), as are images of people using marijuana but not those of “drunk or unconscious” people.

The strange world of Facebook’s image and post approval system has been laid bare by a document leaked from the outsourcing company oDesk to the Gawker website, which indicates that the sometimes arbitrary-seeming picture and post approval actually follows a meticulous – if faintly gore-friendly and nipple-unfriendly – approach.

For the giant social network, which has 800 million users worldwide and recently set out plans for a stock market flotation which could value it at up to $100bn (£63bn), it is a glimpse of its inner workings – and odd prejudices about sex – that emphasise its American origins.

Facebook has previously faced an outcry from breastfeeding mothers over its treatment of images showing them with their babies. The issue has rumbled on, and now seems to have been embedded in its “Abuse Standards Violations”, which states that banned items include “breastfeeding photos showing other nudity, or nipple clearly exposed”. It also bans “naked private parts” including “female nipple bulges and naked butt cracks” – though “male nipples are OK”.

The guidelines, which have been set out in full, depict a world where sex is banned but gore is acceptable. Obvious sexual activity, even if “naked parts” are hidden, people “using the bathroom”, and “sexual fetishes in any form” are all also banned. The company also bans slurs or racial comments “of any kind” and “support for organisations and people primarily known for violence”. Also banned is anyone who shows “approval, delight, involvement etc in animal or human torture”.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Guardian / Photograph: Dominic Lipinski/PA.[end-div]

The Technology of Personalization and the Bubble Syndrome

A decade ago, in another place and era, during my days as director of technology research for a Fortune X company, I tinkered with a cool array of then-new personalization tools. The aim was simple: use some of these emerging technologies to deliver a more customized and personalized experience for our customers and suppliers. What could be wrong with that? Surely, custom tools and more personalized data could do nothing but improve knowledge and enhance business relationships for all concerned. Our customers would benefit from seeing only the information they asked for, our suppliers would benefit from better analysis and filtered feedback, and we, the corporation in the middle, would benefit from making everyone in our supply chain more efficient and happy. Advertisers would be happier still, since more focused data would let them deliver messages that were increasingly precise and relevant to each person’s context.

Fast forward to the present. Customization, or filtering, technologies have indeed helped optimize the supply chain; personalization tools and services have made customer experiences more focused and efficient. In today’s online world it’s so much easier to find, navigate and transact when the supplier at the other end of our browser knows who we are, where we live, what we earn, what we like and dislike, and so on. After all, if a supplier knows my needs, requirements, options, status and even personality, I’m much more likely to only receive information, services or products that fall within the bounds that define “me” in the supplier’s database.

And therein lies the crux of the issue, the one that led me to realize that personalization offers a false promise despite the seemingly obvious benefits to all concerned. The benefits are outweighed by two key issues: the erosion of privacy and the bubble syndrome.

Privacy as Commodity

I’ll not dwell too long on the issue of privacy, since in this article I’m much more concerned with the personalization bubble. However, as we have increasingly seen in recent times, privacy in all its forms is becoming a scarce, tradable commodity. Much of our data is now in the hands of a plethora of suppliers, intermediaries and their partners, ready for continued monetization. Our locations are constantly pinged and polled; our internet browsers note our web surfing habits and preferences; our purchases generate genius suggestions and recommendations to further whet our consumerist desires. Now in digital form, this data is open to legitimate sharing and highly vulnerable to discovery by hackers, phishers, spammers and anyone else with the technical or financial resources to acquire it.

Bubble Syndrome

Personalization technologies filter content at various levels, minutely and broadly, both overtly and covertly. For instance, I may explicitly signal my preferences for certain types of clothing deals at my favorite online retailer by answering a quick retail survey or checking a handful of specific preference buttons on a website.

However, my previous online purchases, browsing behaviors, time spent on various online pages, visits to other online retailers and a range of other flags deliver a range of implicit or “covert” information to the same retailer (and others). This helps the retailer filter, customize and personalize what I get to see even before I have made a conscious decision to limit my searches and exposure to information. Clearly, this is not too concerning when my retailer knows I’m male and usually purchase 32-inch jeans; after all, why would I need to see deals or product information for women’s shoes?

But, this type of covert filtering becomes more worrisome when the data being filtered and personalized is information, news, opinion and comment in all its glorious diversity. Sophisticated media organizations, information portals, aggregators and news services can deliver personalized and filtered information based on your overt and covert personal preferences as well. So, if you subscribe only to a certain type of information based on topic, interest, political persuasion or other dimension your personalized news services will continue to deliver mostly or only this type of information. And, as I have already described, your online behaviors will deliver additional filtering parameters to these news and information providers so that they may further personalize and narrow your consumption of information.
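To make the bubble mechanics concrete, here is a deliberately crude sketch of my own. Every topic, weight and threshold below is invented for illustration, and real recommender systems are vastly more elaborate, but the narrowing effect is the same: anything matching neither my overt nor my covert profile never reaches me at all.

```python
# A toy content filter combining overt (declared) and covert (inferred)
# signals. All names and numbers are illustrative, not any real system.

# Overt signal: topics I explicitly subscribed to.
explicit_prefs = {"technology", "cycling"}

# Covert signals: affinities inferred from clicks, dwell time, purchases.
inferred_affinity = {"technology": 0.9, "cycling": 0.6, "politics-left": 0.7}

articles = [
    {"title": "New fixed-gear frame review", "topic": "cycling"},
    {"title": "Chip startup raises funding", "topic": "technology"},
    {"title": "Op-ed from across the aisle", "topic": "politics-right"},
    {"title": "Foreign policy analysis", "topic": "world-news"},
]

def score(article):
    s = 0.0
    if article["topic"] in explicit_prefs:
        s += 1.0  # I asked for this topic
    s += inferred_affinity.get(article["topic"], 0.0)  # behavioral signal
    return s

# Only sufficiently "relevant" items survive; the tangential and the
# disagreeable silently disappear. That silence is the bubble.
feed = [a for a in articles if score(a) > 0.5]
for a in sorted(feed, key=score, reverse=True):
    print(a["title"])
```

Run it and only the cycling and technology stories print; the op-ed and the foreign-policy piece are filtered out without my ever knowing they existed.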

Increasingly, we will not be aware of what we don’t know. Whether we set them explicitly or not, our personalization technologies will build a filter, a bubble, around us that admits only the information we wish to see, or that our online suppliers wish us to see. We’ll not even be exposed to peripheral and tangential information — the information that lies outside the bubble. This filtering of the rich oceans of diverse information down to a mono-dimensional stream will have profound implications for our social and cultural fabric.

I assume that our increasingly crowded planet will require ever more creativity, insight, tolerance and empathy as we tackle humanity’s many social and political challenges in the future. And, these very seeds of creativity, insight, tolerance and empathy are those that are most at risk from the personalization filter. How are we to be more tolerant of others’ opinions if we are never exposed to them in the first place? How are we to gain insight when disparate knowledge is no longer available for serendipitous discovery? How are we to become more creative if we are less exposed to ideas outside of our normal sphere, our bubble?

For some ideas on how to punch a few holes in your online filter bubble read Eli Pariser’s practical guide, here.

Filter Bubble image courtesy of TechCrunch.

Your Digital Privacy? It May Already Be an Illusion

[div class=attrib]From Discover:[end-div]

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
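The assignment scheme described above is concrete enough to sketch. The ranges below are illustrative stand-ins rather than Acquisti’s actual calibration, which was derived from the Death Master File itself, but they show why pinning down the area and group numbers collapses a nine-digit search space into something an automated script can work through.

```python
# A sketch of the attack described above, for the pre-2011 SSN
# assignment scheme. The lookup tables are illustrative stand-ins.

# Hypothetical: area numbers historically allotted to a birth state.
AREA_NUMBERS_BY_STATE = {"PA": range(159, 212), "OR": range(540, 545)}

def likely_group_numbers(state, birth_year):
    # Hypothetical: group numbers plausibly in use around a birth date,
    # which the real study inferred from deceased people born nearby
    # at around the same time.
    return range(1, 9)

def candidate_ssns(state, birth_year):
    """Enumerate plausible SSNs for someone born in `state` that year."""
    for area in AREA_NUMBERS_BY_STATE[state]:
        for group in likely_group_numbers(state, birth_year):
            for serial in range(1, 10000):  # assigned consecutively
                yield f"{area:03d}-{group:02d}-{serial:04d}"

# Even this crude enumeration is a tiny slice of the ~1 billion possible
# nine-digit numbers. Acquisti's calibrated ranking put the right answer
# within the first 1,000 guesses 8.5 percent of the time.
print(sum(1 for _ in candidate_ssns("OR", 1990)))  # roughly 400,000
```

The point is not the exact numbers but the shape of the attack: each public fact (state, date of birth) prunes the search space by orders of magnitude, and an automated script does the rest.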

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

[div class=attrib]More from theSource here.[end-div]