Tag Archives: internet

The Internet of Things and Your (Lack of) Privacy

Ubiquitous connectivity for, and between, individuals and businesses is widely held to be beneficial for all concerned. We can connect rapidly and reliably with family, friends and colleagues from almost anywhere to anywhere via a wide array of internet-enabled devices. Yet, as these devices become more powerful and interconnected, and enabled with location-based awareness, such as GPS (Global Positioning System) services, we are likely to face an increasingly acute dilemma — connectedness or privacy?

From the Guardian:

The internet has turned into a massive surveillance tool. We’re constantly monitored on the internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated – sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behaviour, it’s still only behaviour that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is internet-enabled. Soon there will be internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be internet-connected tags on our clothing. In its extreme, everything can be connected to the internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies – and anyone they decide to sell the data to – to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s internet-enabled glasses – Google Glass – are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behaviour and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals – not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyse this massive data stream will improve.
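
To make the contrast concrete, here is a minimal sketch, in Python with invented records and field names, of the kind of pre-programmed query such systems run. What you bought and where you were is one line of code; intent never makes it into the schema.

    purchases = [
        {"person": "alice", "item": "prepaid phone", "city": "Denver"},
        {"person": "bob",   "item": "garden hose",   "city": "Boulder"},
        {"person": "alice", "item": "duct tape",     "city": "Denver"},
    ]

    # Easy to ask: what did a person buy, and where?
    def bought(person):
        return [(p["item"], p["city"]) for p in purchases if p["person"] == person]

    print(bought("alice"))  # [('prepaid phone', 'Denver'), ('duct tape', 'Denver')]

    # Hard to ask: *why* did she buy these things? No field holds intent,
    # so no query over this schema can answer that question.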

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days – and nights – with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Read the entire article here.

Image: Big Brother, 1984. Poster. Courtesy of Telegraph.

First Came Phishing, Now We Have Catfishing

The internet has revolutionized retailing, the music business, and the media landscape. It has anointed countless entrepreneurial millionaires and billionaires and helped launch arrays of new businesses in all spheres of life.

Of course, due to the peculiarities of human nature the internet has also become an enabler and/or a new home to less upstanding ventures such as online pornography, spamming, identity theft and phishing.

Now comes “catfishing”: posting false information online with the intent of reeling someone in (usually found on online dating sites). While this behavior is nothing new in the vast catalog of human deviousness, the internet has enabled an explosion in “catfishers”. The fascinating infographic below gives a neat summary.

Infographic courtesy of Checkmate.

Totalitarianism in the Age of the Internet

Google chair Eric Schmidt is in a very elite group. Not only does he run a major and very profitable U.S. corporation, and by extrapolation is thus a “googillionaire”, he’s also been to North Korea.

We excerpt below Schmidt’s recent essay, with co-author Jared Cohen, about freedom in both the real and digital worlds.

From the Wall Street Journal:

How do you explain to people that they are a YouTube sensation, when they have never heard of YouTube or the Internet? That’s a question we faced during our January visit to North Korea, when we attempted to engage with the Pyongyang traffic police. You may have seen videos on the Web of the capital city’s “traffic cops,” whose ballerina-like street rituals, featured in government propaganda videos, have made them famous online. The men and women themselves, however—like most North Koreans—have never seen a Web page, used a desktop computer, or held a tablet or smartphone. They have never even heard of Google (or Bing, for that matter).

Even the idea of the Internet has not yet permeated the public’s consciousness in North Korea. When foreigners visit, the government stages Internet browsing sessions by having “students” look at pre-downloaded and preapproved content, spending hours (as they did when we were there) scrolling up and down their screens in totalitarian unison. We ended up trying to describe the Internet to North Koreans we met in terms of its values: free expression, freedom of assembly, critical thinking, meritocracy. These are uncomfortable ideas in a society where the “Respected Leader” is supposedly the source of all information and where the penalty for defying him is the persecution of you and your family for three generations.

North Korea is at the beginning of a cat-and-mouse game that’s playing out all around the world between repressive regimes and their people. In most of the world, the spread of connectivity has transformed people’s expectations of their governments. North Korea is one of the last holdouts. Until only a few years ago, the price for being caught there with an unauthorized cellphone was the death penalty. Cellphones are now more common in North Korea since the government decided to allow one million citizens to have them; and in parts of the country near the border, the Internet is sometimes within reach as citizens can sometimes catch a signal from China. None of this will transform the country overnight, but one thing is certain: Though it is possible to curb and monitor technology, once it is available, even the most repressive regimes are unable to put it back in the box.

What does this mean for governments and would-be revolutionaries? While technology has great potential to bring about change, there is a dark side to the digital revolution that is too often ignored. There is a turbulent transition ahead for autocratic regimes as more of their citizens come online, but technology doesn’t just help the good guys pushing for democratic reform—it can also provide powerful new tools for dictators to suppress dissent.

Fifty-seven percent of the world’s population still lives under some sort of autocratic regime. In the span of a decade, the world’s autocracies will go from having a minority of their citizens online to a majority. From Tehran to Beijing, autocrats are building the technology and training the personnel to suppress democratic dissent, often with the help of Western companies.

Of course, this is no easy task—and it isn’t cheap. The world’s autocrats will have to spend a great deal of money to build systems capable of monitoring and containing dissident energy. They will need cell towers and servers, large data centers, specialized software, legions of trained personnel and reliable supplies of basic resources like electricity and Internet connectivity. Once such an infrastructure is in place, repressive regimes then will need supercomputers to manage the glut of information.

Despite the expense, everything a regime would need to build an incredibly intimidating digital police state—including software that facilitates data mining and real-time monitoring of citizens—is commercially available right now. What’s more, once one regime builds its surveillance state, it will share what it has learned with others. We know that autocratic governments share information, governance strategies and military hardware, and it’s only logical that the configuration that one state designs (if it works) will proliferate among its allies and assorted others. Companies that sell data-mining software, surveillance cameras and other products will flaunt their work with one government to attract new business. It’s the digital analog to arms sales, and like arms sales, it will not be cheap. Autocracies rich in natural resources—oil, gas, minerals—will be able to afford it. Poorer dictatorships might be unable to sustain the state of the art and find themselves reliant on ideologically sympathetic patrons.

And don’t think that the data being collected by autocracies is limited to Facebook posts or Twitter comments. The most important data they will collect in the future is biometric information, which can be used to identify individuals through their unique physical and biological attributes. Fingerprints, photographs and DNA testing are all familiar biometric data types today. Indeed, future visitors to repressive countries might be surprised to find that airport security requires not just a customs form and passport check, but also a voice scan. In the future, software for voice and facial recognition will surpass all the current biometric tests in terms of accuracy and ease of use.

Today’s facial-recognition systems use a camera to zoom in on an individual’s eyes, mouth and nose, and extract a “feature vector,” a set of numbers that describes key aspects of the image, such as the precise distance between the eyes. (Remember, in the end, digital images are just numbers.) Those numbers can be fed back into a large database of faces in search of a match. The accuracy of this software is limited today (by, among other things, pictures shot in profile), but the progress in this field is remarkable. A team at Carnegie Mellon demonstrated in a 2011 study that the combination of “off-the-shelf” facial recognition software and publicly available online data (such as social-network profiles) can match a large number of faces very quickly. With cloud computing, it takes just seconds to compare millions of faces. The accuracy improves with people who have many pictures of themselves available online—which, in the age of Facebook, is practically everyone.
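
As a toy illustration of that matching step, assuming made-up four-number feature vectors (real systems use far higher-dimensional features), recognition reduces to a nearest-neighbour search over a database of vectors:

    import math

    # Hypothetical database mapping names to face feature vectors.
    database = {
        "alice": [0.42, 1.10, 0.33, 0.97],
        "bob":   [0.51, 0.88, 0.79, 1.02],
        "carol": [0.40, 1.15, 0.30, 0.95],
    }

    def distance(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def best_match(probe, threshold=0.2):
        # Return the closest name, or None if nothing is close enough.
        name, d = min(((n, distance(probe, v)) for n, v in database.items()),
                      key=lambda pair: pair[1])
        return name if d < threshold else None

    print(best_match([0.40, 1.14, 0.30, 0.95]))  # 'carol'

Cloud computing simply runs this search in parallel over millions of rows, which is all that comparing millions of faces in seconds amounts to.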

Dictators, of course, are not the only beneficiaries from advances in technology. In recent years, we have seen how large numbers of young people in countries such as Egypt and Tunisia, armed with little more than mobile phones, can fuel revolutions. Their connectivity has helped them to challenge decades of authority and control, hastening a process that, historically, has often taken decades. Still, given the range of possible outcomes in these situations—brutal crackdown, regime change, civil war, transition to democracy—it is also clear that technology is not the whole story.

Observers and participants alike have described the recent Arab Spring as “leaderless”—but this obviously has a downside to match its upside. In the day-to-day process of demonstrating, it was possible to retain a decentralized command structure (safer too, since the regimes could not kill the movement simply by capturing the leaders). But, over time, some sort of centralized authority must emerge if a democratic movement is to have any direction. Popular uprisings can overthrow dictators, but they’re only successful afterward if opposition forces have a plan and can execute it. Building a Facebook page does not constitute a plan.

History suggests that opposition movements need time to develop. Consider the African National Congress in South Africa. During its decades of exile from the apartheid state, the organization went through multiple iterations, and the men who would go on to become South African presidents (Nelson Mandela, Thabo Mbeki and Jacob Zuma) all had time to build their reputations, credentials and networks while honing their operational skills. Likewise with Lech Walesa and his Solidarity trade union in Eastern Europe. A decade passed before Solidarity leaders could contest seats in the Polish parliament, and their victory paved the way for the fall of communism.

Read the entire essay after the jump.

Image: North Korean students work in a computer lab. Courtesy of AP Photo/David Guttenfelder / Washington Post.

Geeks As Guardians of (Some of) Our Civil Liberties

It’s interesting to ponder what would have been if the internet and social media had been around during those more fractious times in Seneca Falls, Selma and Stonewall. Perhaps these tools would have helped accelerate progress.

From Technology Review:

A decade-plus of anthropological fieldwork among hackers and like-minded geeks has led me to the firm conviction that these people are building one of the most vibrant civil liberties movements we’ve ever seen. It is a culture committed to freeing information, insisting on privacy, and fighting censorship, which in turn propels wide-ranging political activity. In the last year alone, hackers have been behind some of the most powerful political currents out there.

Before I elaborate, a brief word on the term “hacker” is probably in order. Even among hackers, it provokes debate. For instance, on the technical front, a hacker might program, administer a network, or tinker with hardware. Ethically and politically, the variability is just as prominent. Some hackers are part of a transgressive, law-breaking tradition, their activities opaque and below the radar. Other hackers write open-source software and pride themselves on access and transparency. While many steer clear of political activity, an increasingly important subset rise up to defend their productive autonomy, or engage in broader social justice and human rights campaigns.

Despite their differences, there are certain websites and conferences that bring the various hacker clans together. Like any political movement, it is internally diverse but, under the right conditions, individuals with distinct abilities will work in unison toward a cause.

Take, for instance, the reaction to the Stop Online Piracy Act (SOPA), a far-reaching copyright bill meant to curtail piracy online. SOPA was unraveled before being codified into law due to a massive and elaborate outpouring of dissent driven by the hacker movement.

The linchpin was a “Blackout Day”—a Web-based protest of unprecedented scale. To voice their opposition to the bill, on January 17, 2012, nonprofits, some big Web companies, public interest groups, and thousands of individuals momentarily removed their websites from the Internet and thousands of other citizens called or e-mailed their representatives. Journalists eventually wrote a torrent of articles. Less than a week later, in response to these stunning events, SOPA and PIPA, its counterpart in the Senate, were tabled (see “SOPA Battle Won, but War Continues”).

The victory hinged on its broad base of support cultivated by hackers and geeks. The participation of corporate giants like Google, respected Internet personalities like Jimmy Wales, and the civil liberties organization EFF was crucial to its success. But the geek and hacker contingent was palpably present, and included, of course, Anonymous. Since 2008, activists have rallied under this banner to initiate targeted demonstrations, publicize various wrongdoings, leak sensitive data, engage in digital direct action, and provide technology assistance for revolutionary movements.

As part of the SOPA protests, Anonymous churned out videos and propaganda posters and provided constant updates on several prominent Twitter accounts, such as Your Anonymous News, which are brimming with followers. When the blackout ended, corporate players naturally receded from the limelight and went back to work. Anonymous and others, however, continue to fight for Internet freedoms.

In fact, just the next day, on January 18, 2012, federal authorities orchestrated the takedown of the popular file-sharing site MegaUpload. The company’s gregarious and controversial founder Kim Dotcom was also arrested in a dramatic early morning raid in New Zealand. The removal of this popular website was received ominously by Anonymous activists: it seemed to confirm that if bills like SOPA become law, censorship would become a far more common fixture on the Internet. Even though no court had yet found Kim Dotcom guilty of piracy, his property was still confiscated and his website knocked off the Internet.

As soon as the news broke, Anonymous coordinated its largest distributed denial of service campaign to date. It took down a slew of websites, including the homepage of Universal Music, the FBI, the U.S. Copyright Office, the Recording Industry Association of America, and the Motion Picture Association of America.

Read the entire article after the jump.

Connectedness: A Force For Good

The internet has the potential to make our current political process obsolete. A review of “The End of Politics” by British politician Douglas Carswell shows how connectedness provides a significant opportunity to reshape the political process, and in some cases completely undermine government, for the good.

Charles Moore for the Telegraph:

I think I can help you tackle this thought-provoking book. First of all, the title misleads. Enchanting though the idea will sound to many people, this is not about the end of politics. It is, after all, written by a Member of Parliament, Douglas Carswell (Con., Clacton) and he is fascinated by the subject. There’ll always be politics, he is saying, but not as we know it.

Second, you don’t really need to read the first half. It is essentially a passionately expressed set of arguments about why our current political arrangements do not work. It is good stuff, but there is plenty of it in the more independent-minded newspapers most days. The important bit is Part Two, beginning on page 145 and running for a modest 119 pages. It is called “The Birth of iDemocracy”.

Mr Carswell resembles those old barometers in which, in bad weather (Part One), a man with a mackintosh, an umbrella and a scowl comes out of the house. In good weather (Part Two), he pops out wearing a white suit, a straw hat and a broad smile. What makes him happy is the feeling that the digital revolution can restore to the people the power which, in the early days of the universal franchise, they possessed – and much, much more. He believes that the digital revolution has at last harnessed technology to express the “collective brain” of humanity. We develop our collective intelligence by exchanging the properties of our individual ones.

Throughout history, we have been impeded in doing this by physical barriers, such as distance, and by artificial ones, such as priesthoods of bureaucrats and experts. Today, i-this and e-that are cutting out these middlemen. He quotes the internet sage, Clay Shirky: “Here comes everybody”. Mr Carswell directs magnificent scorn at the aides to David Cameron who briefed the media that the Prime Minister now has an iPad app which will allow him, at a stroke of his finger, “to judge the success or failure of ministers with reference to performance-related data”.

The effect of the digital revolution is exactly the opposite of what the aides imagine. Far from now being able to survey everything, always, like God, the Prime Minister – any prime minister – is now in an unprecedentedly weak position in relation to the average citizen: “Digital technology is starting to allow us to choose for ourselves things that until recently Digital Dave and Co decided for us.”

A non-physical business, for instance, can often decide pretty freely where, for the purposes of taxation, it wants to live. Naturally, it will choose benign jurisdictions. Governments can try to ban it from doing so, but they will either fail, or find that they are cutting off their nose to spite their face. The very idea of a “tax base”, on which treasuries depend, wobbles when so much value lies in intellectual property and intellectual property is mobile. So taxes need to be flatter to keep their revenues up. If they are flatter, they will be paid by more people.

Therefore it becomes much harder for government to grow, since most people do not want to pay more.

Read the entire article after the jump.

The Tubes of the Internets

Google lets the world peek at the many tubes that form a critical part of its search engine infrastructure — functional and pretty too.

From the Independent:

They are the cathedrals of the information age – with the colour scheme of an adventure playground.

For the first time, Google has allowed cameras into its high security data centres – the beating hearts of its global network that allow the web giant to process 3 billion internet searches every day.

Only a small band of Google employees have ever been inside the doors of the data centres, which are hidden away in remote parts of North America, Belgium and Finland.

Their workplaces glow with the blinking lights of LEDs on internet servers reassuring technicians that all is well with the web, and hum to the sound of hundreds of giant fans and thousands of gallons of water that stop the whole thing from overheating.

“Very few people have stepped inside Google’s data centers [sic], and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard,” the company said yesterday. Row upon row of glowing servers send and receive information from 20 billion web pages every day, while towering libraries store all the data that Google has ever processed – in case of a system failure.

With data speeds 200,000 times faster than an ordinary home internet connection, Google’s centres in America can share huge amounts of information with European counterparts like the remote, snow-packed Hamina centre in Finland, in the blink of an eye.

Read the entire article after the jump, or take a look at more images from the bowels of Google after the leap.

GigaBytes and TeraWatts

Online social networks have expanded to include hundreds of millions of twitterati and their followers. An ever-increasing volume of data, images, videos and documents continues to move into the expanding virtual “cloud”, hosted in many nameless data centers. Virtual processing and computation on demand is growing by leaps and bounds.

Yet while business models for the providers of these internet services remain ethereal, one segment of this business ecosystem — electricity companies and utilities — is salivating at the staggering demand for electrical power.

From the New York Times:

Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.

The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.

Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.

That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.

They are a mere fraction of the tens of thousands of data centers that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.

A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.

Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.

To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.

Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.

“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”

Read the entire article after the jump.

Image courtesy of the AP / Thanassis Stavrakis.

The Pros and Cons of Online Reviews

There is no doubt that online reviews for products and services, from books to new cars to vacation spots, have revolutionized shopping behavior. Internet and mobile technology has made gathering, reviewing and publishing open and honest crowdsourced opinion simple, efficient and ubiquitous.

However, the same tools that allow frank online discussion empower those wishing to cheat and manipulate the system. Cyberspace is rife with fake reviews, fake reviewers, inflated ratings, edited opinion, and paid insertions.

So, just as in any purchase transaction since the time when buyers and sellers first met, caveat emptor still applies.

From Slate:

The Internet has fundamentally changed the way that buyers and sellers meet and interact in the marketplace. Online retailers make it cheap and easy to browse, comparison shop, and make purchases with the click of a mouse. The Web can also, in theory, make for better-informed purchases—both online and off—thanks to sites that offer crowdsourced reviews of everything from dog walkers to dentists.

In a Web-enabled world, it should be harder for careless or unscrupulous businesses to exploit consumers. Yet recent studies suggest that online reviewing is hardly a perfect consumer defense system. Researchers at Yale, Dartmouth, and USC have found evidence that hotel owners post fake reviews to boost their ratings on TripAdvisor—and might even be posting negative reviews of nearby competitors.

The preponderance of online reviews speaks to their basic weakness: Because it’s essentially free to post a review, it’s all too easy to dash off thoughtless praise or criticism, or, worse, to construct deliberately misleading reviews without facing any consequences. It’s what economists (and others) refer to as the cheap-talk problem. The obvious solution is to make it more costly to post a review, but that eliminates one of the main virtues of crowdsourcing: There is much more wisdom in a crowd of millions than in select opinions of a few dozen.

Of course, that wisdom depends on reviewers giving honest feedback. A few well-publicized incidents suggest that’s not always the case. For example, when Amazon’s Canadian site accidentally revealed the identities of anonymous book reviewers in 2004, it became apparent that many reviews came from publishers and from the authors themselves.

Technological idealists, perhaps not surprisingly, see a solution to this problem in cutting-edge computer science. One widely reported study last year showed that a text-analysis algorithm proved remarkably adept at detecting made-up reviews. The researchers instructed freelance writers to put themselves in the role of a hotel marketer who has been tasked by his boss with writing a fake customer review that is flattering to the hotel. They also compiled a set of comparison TripAdvisor reviews that the study’s authors felt were likely to be genuine. Human judges could not distinguish between the real ones and the fakes. But the algorithm correctly identified the reviews as real or phony with 90 percent accuracy by picking up on subtle differences, like whether the review described specific aspects of the hotel room layout (the real ones do) or mentioned matters that were unrelated to the hotel itself, like whether the reviewer was there on vacation or business (a marker of fakes). Great, but in the cat-and-mouse game of fraud vs. fraud detection, phony reviewers can now design feedback that won’t set off any alarm bells.
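
The paper’s exact model isn’t reproduced here; the sketch below, in Python with scikit-learn and six invented one-line “reviews”, only shows the general shape of the technique: turn each text into word counts, then train a classifier to separate the two classes.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    reviews = [
        # genuine-style: concrete, spatial detail about the room itself
        "The room layout was odd but the view over the river made up for it.",
        "Check-in was slow and the bed in room 412 sagged near the window.",
        "Parking was tight and the shower ran cold by the second morning.",
        # fake-style: superlatives and talk about the trip, not the hotel
        "My husband and I were there on vacation and it was simply amazing.",
        "A truly luxurious experience, perfect for business or pleasure.",
        "We came for our anniversary and everything was absolutely wonderful.",
    ]
    labels = [0, 0, 0, 1, 1, 1]  # 0 = truthful, 1 = fabricated

    vectorizer = CountVectorizer(ngram_range=(1, 2))  # words and word pairs
    model = LogisticRegression().fit(vectorizer.fit_transform(reviews), labels)

    probe = ["Our third-floor room faced the car park and the lift was broken."]
    print(model.predict(vectorizer.transform(probe)))  # likely [0]: reads as truthful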

Just how prevalent are fake reviews? A trio of business school professors, Yale’s Judith Chevalier, Yaniv Dover of Dartmouth, and USC’s Dina Mayzlin, have taken a clever approach to inferring an answer by comparing the reviews on two travel sites, TripAdvisor and Expedia. In order to post an Expedia review, a traveler needs to have made her hotel booking through the site. Hence, a hotel looking to inflate its rating or malign a competitor would have to incur the cost of paying itself through the site, accumulating transaction fees and tax liabilities in the process. On TripAdvisor, all you need to post fake reviews are a few phony login names and email addresses.

Differences in the overall ratings on TripAdvisor versus Expedia could simply be the result of a more sympathetic community of reviewers. (In practice, TripAdvisor’s ratings are actually lower on average.) So Mayzlin and her co-authors focus on the places where the gaps between TripAdvisor and Expedia reviews are widest. In their analysis, they looked at hotels that probably appear identical to the average traveler but have different underlying ownership or management. There are, for example, companies that own scores of franchises from hotel chains like Marriott and Hilton. Other hotels operate under these same nameplates but are independently owned. Similarly, many hotels are run on behalf of their owners by large management companies, while others are owner-managed. The average traveler is unlikely to know the difference between a Fairfield Inn owned by, say, the Pillar Hotel Group and one owned and operated by Ray Fisman. The study’s authors argue that the small owners and independents have less to lose by trying to goose their online ratings (or torpedo the ratings of their neighbors), reasoning that larger companies would be more vulnerable to punishment, censure, and loss of business if their shenanigans were uncovered. (The authors give the example of a recent case in which a manager at Ireland’s Clare Inn was caught posting fake reviews. The hotel is part of the Lynch Hotel Group, and in the wake of the fake postings, TripAdvisor removed suspicious reviews from other Lynch hotels, and unflattering media accounts of the episode generated negative PR that was shared across all Lynch properties.)

The researchers find that, even comparing hotels under the same brand, small owners are around 10 percent more likely to get five-star reviews on TripAdvisor than they are on Expedia (relative to hotels owned by large corporations). The study also examines whether these small owners might be targeting the competition with bad reviews. The authors look at negative reviews for hotels that have competitors within half a kilometer. Hotels where the nearby competition comes from small owners have 16 percent more one- and two-star ratings than those with neighboring hotels that are owned by big companies like Pillar.

This isn’t to say that consumers are making a mistake by using TripAdvisor to guide them in their hotel reservations. Despite the fraudulent posts, there is still a high degree of concordance between the ratings assigned by TripAdvisor and Expedia. And across the Web, there are scores of posters who seem passionate about their reviews.

Consumers, in turn, do seem to take online reviews seriously. By comparing restaurants that fall just above and just below the threshold for an extra half-star on Yelp, Harvard Business School’s Michael Luca estimates that an extra star is worth an extra 5 to 9 percent in revenue. Luca’s intent isn’t to examine whether restaurants are gaming Yelp’s system, but his findings certainly indicate that they’d profit from trying. (Ironically, Luca also finds that independent restaurants—the establishments that Mayzlin et al. would predict are most likely to put up fake postings—benefit the most from an extra star. You don’t need to check out Yelp to know what to expect when you walk into McDonald’s or Pizza Hut.)
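
The comparison Luca’s design exploits is easy to see in code. Yelp displays an average rating rounded to the nearest half star, so two restaurants with nearly identical true averages can end up half a star apart on screen; the numbers below are purely illustrative.

    def displayed_stars(avg_rating):
        # Round a raw average to the nearest half star, as a listing shows it.
        return round(avg_rating * 2) / 2

    for avg in (3.24, 3.26):
        print(avg, "->", displayed_stars(avg))  # 3.24 -> 3.0, but 3.26 -> 3.5

    # Since the two restaurants are otherwise near-identical, any revenue gap
    # across the cutoff can be attributed to the displayed rating itself.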

Read the entire article following the jump.

Image courtesy of Mashable.

Men are From LinkedIn, Women are From Pinterest

No surprise. Women and men use online social networks differently. A new study of online behavior by researchers in Vienna, Austria, shows that the sexes organize their networks very differently and for different reasons.

From Technology Review:

One of the interesting insights that social networks offer is the difference between male and female behaviour.

In the past, behavioural differences have been hard to measure. Experiments could only be done on limited numbers of individuals and even then, the process of measurement often distorted people’s behaviour.

That’s all changed with the advent of massive online participation in gaming, professional and friendship networks. For the first time, it has become possible to quantify exactly how the genders differ in their approach to things like risk and communication.

Gender-specific studies are surprisingly rare, however. Nevertheless, a growing body of evidence is emerging that social networks reflect many of the social and evolutionary differences that we’ve long suspected.

Earlier this year, for example, we looked at a remarkable study of a mobile phone network that demonstrated the different reproductive strategies that men and women employ throughout their lives, as revealed by how often they call friends, family and potential mates.

Today, Michael Szell and Stefan Thurner at the Medical University of Vienna in Austria say they’ve found significant differences in the way men and women manage their social networks in an online game called Pardus with over 300,000 players.

In this game, players explore various solar systems in a virtual universe. On the way, they can mark other players as friends or enemies, exchange messages, gain wealth by trading or doing battle, but can also be killed.

The interesting thing about online games is that almost every action of every player is recorded, mostly without the players being consciously aware of this. That means measurement bias is minimal.

The networks of friends and enemies that are set up also differ in an important way from those on social networking sites such as Facebook. That’s because players can neither see nor influence other players’ networks. This prevents the kind of clustering and herding behaviour that sometimes dominates other social networks.

Szell and Thurner say the data reveals clear and significant differences between men and women in Pardus.

For example, men and women interact with the opposite sex differently. “Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females,” say Szell and Thurner.

Women are also significantly more risk averse than men as measured by the amount of fighting they engage in and their likelihood of dying.

They are also more likely to be friends with each other than men.

These results are more or less as expected. More surprising is the finding that women tend to be more wealthy than men, probably because they engage more in economic than destructive behaviour.

Read the entire article after the jump.

Image courtesy of InformationWeek.

Facebook: What Next?

Yawn…

The Facebook IPO (insider profit opportunity rather than Initial Public Offering) finally came and went. Much like its 900 million members, Facebook executives managed to garner enough fleeting “likes” from its Wall Street road show to ensure temporary short-term hype and big returns for key insiders. But, beneath the hyperbole lies a basic question that goes to the heart of its stratospheric valuation: Does Facebook have a long-term strategy beyond the rapidly deflating ad revenue model?

From Technology Review:

Facebook is not only on course to go bust, but will take the rest of the ad-supported Web with it.

Given its vast cash reserves and the glacial pace of business reckonings, that will sound hyperbolic. But that doesn’t mean it isn’t true.

At the heart of the Internet business is one of the great business fallacies of our time: that the Web, with all its targeting abilities, can be a more efficient, and hence more profitable, advertising medium than traditional media. Facebook, with its 900 million users, valuation of around $100 billion, and the bulk of its business in traditional display advertising, is now at the heart of the heart of the fallacy.

The daily and stubborn reality for everybody building businesses on the strength of Web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the Web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.

At the same time, network technology allows advertisers to more precisely locate and assemble audiences outside of branded channels. Instead of having to go to CNN for your audience, a generic CNN-like audience can be assembled outside CNN’s walls and without the CNN-brand markup. This has resulted in the now famous and cruelly accurate formulation that $10 of offline advertising becomes $1 online.

I don’t know anyone in the ad-Web business who isn’t engaged in a relentless, demoralizing, no-exit operation to realign costs with falling per-user revenues, or who isn’t manically inflating traffic to compensate for ever-lower per-user value.

Facebook, however, has convinced large numbers of otherwise intelligent people that the magic of the medium will reinvent advertising in a heretofore unimaginably profitable way, or that the company will create something new that isn’t advertising, which will produce even more wonderful profits. But at a forward price-to-earnings ratio of 56 (as of the close of trading on May 21), these innovations will have to be something like alchemy to make the company worth its sticker price. For comparison, Google trades at a forward P/E ratio of 12. (To gauge how much faith investors have that Google, Facebook, and other Web companies will extract value from their users, see our recent chart.)

Facebook currently derives 82 percent of its revenue from advertising. Most of that is the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles. Some is the kind of sponsorship that promises users further social relationships with companies: a kind of marketing that General Motors just announced it would no longer buy.

Facebook’s answer to its critics is: pay no attention to the carping. Sure, grunt-like advertising produces the overwhelming portion of our $4 billion in revenues; and, yes, on a per-user basis, these revenues are in pretty constant decline, but this stuff is really not what we have in mind. Just wait.

It’s quite a juxtaposition of realities. On the one hand, Facebook is mired in the same relentless downward pressure of falling per-user revenues as the rest of Web-based media. The company makes a pitiful and shrinking $5 per customer per year, which puts it somewhat ahead of the Huffington Post and somewhat behind the New York Times’ digital business. (Here’s the heartbreaking truth about the difference between new media and old: even in the New York Times’ declining traditional business, a subscriber is still worth more than $1,000 a year.) Facebook’s business only grows on the unsustainable basis that it can add new customers at a faster rate than the value of individual customers declines. It is peddling as fast as it can. And the present scenario gets much worse as its users increasingly interact with the social service on mobile devices, because it is vastly harder, on a small screen, to sell ads and profitably monetize users.
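
The per-user figure checks out against the article’s own rounded numbers (roughly $4 billion in revenue across 900 million users):

    revenue = 4e9   # ~$4 billion in annual revenue
    users = 900e6   # ~900 million users
    print(revenue / users)  # ~4.44 dollars per user per year, i.e. about $5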

On the other hand, Facebook is, everyone has come to agree, profoundly different from the Web. First of all, it exerts a new level of hegemonic control over users’ experiences. And it has its vast scale: 900 million, soon a billion, eventually two billion (one of the problems with the logic of constant growth at this scale and speed, of course, is that eventually it runs out of humans with computers or smart phones). And then it is social. Facebook has, in some yet-to-be-defined way, redefined something. Relationships? Media? Communications? Communities? Something big, anyway.

The subtext—an overt subtext—of the popular account of Facebook is that the network has a proprietary claim and special insight into social behavior. For enterprises and advertising agencies, it is therefore the bridge to new modes of human connection.

Expressed so baldly, this account is hardly different from what was claimed for the most aggressively boosted companies during the dot-com boom. But there is, in fact, one company that created and harnessed a transformation in behavior and business: Google. Facebook could be, or in many people’s eyes should be, something similar. Lost in such analysis is the failure to describe the application that will drive revenues.

Read the entire article after the jump.

Corporatespeak: Lingua Franca of the Internet

Author Lewis Lapham reminds us of the phrase made (in)famous by Emperor Charles V:

“I speak Spanish to God, Italian to women, French to men, and German to my horse.”

So, what of the language of the internet? Again, Lapham offers a fitting and damning summary, this time courtesy of a lesser mortal, critic George Steiner:

“The true catastrophe of Babel is not the scattering of tongues. It is the reduction of human speech to a handful of planetary, ‘multinational’ tongues…Anglo-American standardized vocabularies” and grammar shaped by “military technocratic megalomania” and “the imperatives of commercial greed.”

More from the keyboard of Lewis Lapham on how the communicative promise of the internet is being usurped by commerce and the “lowest common denominator”.

From TomDispatch:

But in which language does one speak to a machine, and what can be expected by way of response? The questions arise from the accelerating datastreams out of which we’ve learned to draw the breath of life, posed in consultation with the equipment that scans the flesh and tracks the spirit, cues the ATM, the GPS, and the EKG, arranges the assignations on Match.com and the high-frequency trades at Goldman Sachs, catalogs the pornography and drives the car, tells us how and when and where to connect the dots and thus recognize ourselves as human beings.

Why then does it come to pass that the more data we collect—from Google, YouTube, and Facebook—the less likely we are to know what it means?

The conundrum is in line with the late Marshall McLuhan’s noticing 50 years ago the presence of “an acoustic world,” one with “no continuity, no homogeneity, no connections, no stasis,” a new “information environment of which humanity has no experience whatever.” He published Understanding Media in 1964, proceeding from the premise that “we become what we behold,” that “we shape our tools, and thereafter our tools shape us.”

Media were to be understood as “make-happen agents” rather than as “make-aware agents,” not as art or philosophy but as systems comparable to roads and waterfalls and sewers. Content follows form; new means of communication give rise to new structures of feeling and thought.

To account for the transference of the idioms of print to those of the electronic media, McLuhan examined two technological revolutions that overturned the epistemological status quo. First, in the mid-15th century, Johannes Gutenberg’s invention of moveable type, which deconstructed the illuminated wisdom preserved on manuscript in monasteries, encouraged people to organize their perceptions of the world along the straight lines of the printed page. Second, in the 19th and 20th centuries, the applications of electricity (telegraph, telephone, radio, movie camera, television screen, eventually the computer), favored a sensibility that runs in circles, compressing or eliminating the dimensions of space and time, narrative dissolving into montage, the word replaced with the icon and the rebus.

Within a year of its publication, Understanding Media acquired the standing of Holy Scripture and made of its author the foremost oracle of the age. The New York Herald Tribune proclaimed him “the most important thinker since Newton, Darwin, Freud, Einstein, and Pavlov.” Although never at a loss for Delphic aphorism—”The electric light is pure information”; “In the electric age, we wear all mankind as our skin”—McLuhan assumed that he had done nothing more than look into the window of the future at what was both obvious and certain.

Read the entire article following the jump.

The Internet of Things

The term “Internet of Things” was first coined in 1999 by Kevin Ashton. It refers to the notion whereby physical objects of all kinds are equipped with small identifying devices and connected to a network. In essence: everything connected, by anyone, anytime, anywhere. One of the potential benefits is that objects could be tracked and inventoried, and their status continuously monitored.
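
As a minimal sketch of the idea, with all names invented: give every object an identifier and have it periodically report its status to some network service, which is what makes continuous tracking and inventory possible.

    import json, time, uuid

    class TaggedObject:
        """A physical object carrying a small identifying device."""

        def __init__(self, kind):
            self.id = str(uuid.uuid4())  # stands in for an RFID tag or similar
            self.kind = kind

        def status_report(self):
            # The message a tracking service would ingest.
            return json.dumps({
                "id": self.id,
                "kind": self.kind,
                "reported_at": time.time(),
                "status": "ok",
            })

    pallet = TaggedObject("shipping pallet")
    print(pallet.status_report())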

From the New York Times:

THE Internet likes you, really likes you. It offers you so much, just a mouse click or finger tap away. Go Christmas shopping, find restaurants, locate partying friends, tell the world what you’re up to. Some of the finest minds in computer science, working at start-ups and big companies, are obsessed with tracking your online habits to offer targeted ads and coupons, just for you.

But now — nothing personal, mind you — the Internet is growing up and lifting its gaze to the wider world. To be sure, the economy of Internet self-gratification is thriving. Web start-ups for the consumer market still sprout at a torrid pace. And young corporate stars seeking to cash in for billions by selling shares to the public are consumer services — the online game company Zynga last week, and the social network giant Facebook, whose stock offering is scheduled for next year.

As this is happening, though, the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.

The concept has been around for years, sometimes called the Internet of Things or the Industrial Internet. Yet it takes time for the economics and engineering to catch up with the predictions. And that moment is upon us.

“We’re going to put the digital ‘smarts’ into everything,” said Edward D. Lazowska, a computer scientist at the University of Washington. These abundant smart devices, Dr. Lazowska added, will “interact intelligently with people and with the physical world.”

The role of sensors — once costly and clunky, now inexpensive and tiny — was described this month in an essay in The New York Times by Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology; he said the ultimate goal was “the sensor-aware planetary computer.”

That may sound like blue-sky futurism, but evidence shows that the vision is beginning to be realized on the ground, in recent investments, products and services, coming from large industrial and technology corporations and some ambitious start-ups.

Read the entire article here.

Image: Internet of Things. Courtesy of Cisco.

The World Wide Web of Terrorism

From Eurozine:

There are clear signs that Internet radicalization was behind the terrorism of Anders Behring Breivik. Though most research on the phenomenon focuses on jihadism, that work can teach us a lot about how Internet radicalization of all kinds can be fought.

On 21 September 2010, Interpol released a press statement on their homepage warning against extremist websites. They pointed out that this is a global threat and that ever more terrorist groups use the Internet to radicalize young people.

“Terrorist recruiters exploit the web to their full advantage as they target young, middle class vulnerable individuals who are usually not on the radar of law enforcement”, said Secretary General Ronald K. Noble. He continued: “The threat is global; it is virtual; and it is on our doorsteps. It is a global threat that only international police networks can fully address.”

Noble pointed out that the Internet has made the radicalization process easier and the war on terror more difficult. Part of the reason, he claimed, is that much of what takes place is not really criminal.

Much research has been done on Internet radicalization over the last few years but the emphasis has been on Islamist terror. The phenomenon can be summarized thus: young boys and men of Muslim background have, via the Internet, been exposed to propaganda, films from war zones, horrifying images of war in Afghanistan, Iraq and Chechnya, and also extreme interpretations of Islam. They are, so to speak, caught in the web, and some have resorted to terrorism, or at least planned it. The BBC documentary Generation Jihad gives an interesting and frightening insight into the phenomenon.

Researchers Tim Stevens and Peter Neumann write in a report focused on Islamist Internet radicalization that Islamist groups are hardly unique in putting the Internet in the service of political extremism:

Although Al Qaeda-inspired Islamist militants represented the most significant terrorist threat to the United Kingdom at the time of writing, Islamist militants are not the only – or even the predominant – group of political extremists engaged in radicalization and recruitment on the internet. Visitor numbers are notoriously difficult to verify, but some of the most popular Islamist militant web forums (for example, Al Ekhlaas, Al Hesbah, or Al Boraq) are easily rivalled in popularity by white supremacist websites such as Stormfront.

Strikingly, Stormfront – an international Internet forum advocating “white nationalism” and dominated by neo-Nazis – is one of the websites visited by the terrorist Anders Behring Breivik, and a forum where he also left comments. In one place he writes of his hope that the various fractured rightwing movements in Europe and the US “can try and reach a consensus” regarding the “Islamification of Europe/US”. He continues: “After all, we all want the best for our people, and we owe it to them to try to create the most potent alliance which will have the strength to overthrow the governments which support multiculturalism.”

Read more of this article here.

Image courtesy of Eurozine.

Global Interconnectedness: Submarine Cables

Apparently only 1 percent of global internet traffic is transmitted via satellite or terrestrial radio links. The remaining 99 percent is still carried via cable – fiber optic and copper. Much of this cable is strewn for many thousands of miles across the seabeds of our deepest oceans.

For a fascinating view of these intricate systems, and to learn why and how Brazil is connected to Angola, or Auckland, New Zealand to Redondo Beach, California via the 12,750 km long Pacific Fiber, check the interactive Submarine Cable Map from TeleGeography.
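Those distances have physical consequences. As a back-of-the-envelope illustration, here is a short Python sketch of the one-way propagation delay along a Pacific-length cable; it assumes light in silica fiber travels at roughly c/1.47 and ignores repeaters, routing and switching delays, so real-world latencies will be somewhat higher.

```python
# Rough estimate of propagation delay along the 12,750 km Pacific Fiber
# route (Auckland to Redondo Beach). Assumes light in optical fiber
# travels at roughly c / 1.47; repeater and switching delays are ignored.

C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47        # typical for silica fiber (assumed)
CABLE_LENGTH_KM = 12_750       # Pacific Fiber length, per TeleGeography

speed_in_fiber = C_VACUUM_KM_S / REFRACTIVE_INDEX      # ~204,000 km/s
one_way_ms = CABLE_LENGTH_KM / speed_in_fiber * 1000   # ~63 ms
print(f"One-way: {one_way_ms:.0f} ms, round trip: {2 * one_way_ms:.0f} ms")
```

Even at the speed of light, a trans-Pacific round trip costs on the order of an eighth of a second, which is why cable routes, and not just raw bandwidth, matter to latency-sensitive applications.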

The Lanier Effect

Twenty or so years ago the economic prognosticators and technology pundits would all have had us believe that the internet would transform society: it would level the playing field; it would help the little guy compete against the corporate behemoth; it would make us all “socially” rich if not financially. Yet the promise of those early, heady days seems remarkably narrow nowadays. What happened? Or rather, what didn’t happen?

We excerpt a lengthy interview with Jaron Lanier over at the Edge. Lanier, a pioneer in the sphere of virtual reality, offers some well-reasoned arguments for and against the concentration of market power enabled by information systems and the internet. He reserves his most powerful criticism for Google, whose (in)famous corporate mantra, “don’t be evil”, starts to look remarkably disingenuous in this light.

[div class=attrib]From the Edge:[end-div]

I’ve focused quite a lot on how this stealthy component of computation can affect our sense of ourselves, what it is to be a person. But lately I’ve been thinking a lot about what it means to economics.

In particular, I’m interested in a pretty simple problem, but one that is devastating. In recent years, many of us have worked very hard to make the Internet grow, to become available to people, and that’s happened. It’s one of the great topics of mankind of this era.  Everyone’s into Internet things, and yet we have this huge global economic trouble. If you had talked to anyone involved in it twenty years ago, everyone would have said that the ability for people to inexpensively have access to a tremendous global computation and networking facility ought to create wealth. This ought to create wellbeing; this ought to create this incredible expansion in just people living decently, and in personal liberty. And indeed, some of that’s happened. Yet if you look at the big picture, it obviously isn’t happening enough, if it’s happening at all.

The situation reminds me a little bit of something that is deeply connected, which is the way that computer networks transformed finance. You have more and more complex financial instruments, derivatives and so forth, and high frequency trading, all these extraordinary constructions that would be inconceivable without computation and networking technology.

At the start, the idea was, “Well, this is all in the service of the greater good because we’ll manage risk so much better, and we’ll increase the intelligence with which we collectively make decisions.” Yet if you look at what happened, risk was increased instead of decreased.

… We were doing a great job through the turn of the century. In the ’80s and ’90s, one of the things I liked about being in the Silicon Valley community was that we were growing the middle class. The personal computer revolution could have easily been mostly about enterprises. It could have been about just fighting IBM and getting computers on desks in big corporations or something, instead of this notion of the consumer, ordinary person having access to a computer, of a little mom and pop shop having a computer, and owning their own information. When you own information, you have power. Information is power. The personal computer gave people their own information, and it enabled a lot of lives.

… But at any rate, the Apple idea is that instead of the personal computer model where people own their own information, and everybody can be a creator as well as a consumer, we’re moving towards this iPad, iPhone model where it’s not as adequate for media creation as the real media creation tools, and even though you can become a seller over the network, you have to pass through Apple’s gate to accept what you do, and your chances of doing well are very small, and it’s not a person to person thing, it’s a business through a hub, through Apple to others, and it doesn’t create a middle class, it creates a new kind of upper class.

Google has done something that might even be more destructive of the middle class, which is they’ve said, “Well, since Moore’s law makes computation really cheap, let’s just give away the computation, but keep the data.” And that’s a disaster.

What’s happened now is that we’ve created this new regimen where the bigger your computer servers are, the more smart mathematicians you have working for you, and the more connected you are, the more powerful and rich you are. (Unless you own an oil field, which is the old way.) I benefit from it because I’m close to the big servers, but basically wealth is measured by how close you are to one of the big servers, and the servers have started to act like private spying agencies, essentially.

With Google, or with Facebook, if they can ever figure out how to steal some of Google’s business, there’s this notion that you get all of this stuff for free, except somebody else owns the data, and they use the data to sell access to you, and the ability to manipulate you, to third parties that you don’t necessarily get to know about. The third parties tend to be kind of tawdry.

[div class=attrib]Read the entire article.[end-div]

[div class=attrib]Image courtesy of Jaron Lanier.[end-div]

Tim Berners-Lee’s “Baby” Hits 20 – Happy Birthday World Wide Web

In early 1990 at CERN headquarters in Geneva, Switzerland, Tim Berners-Lee and Robert Cailliau published a formal proposal to build a “Hypertext project” called “WorldWideWeb” as a “web” of “hypertext documents” to be viewed by “browsers”.

Following development work, the pair introduced the proposal to a wider audience in December 1990, and on August 6, 1991, 20 years ago, the World Wide Web officially opened for business on the internet. On that day Berners-Lee posted the first web page — a short summary of the World Wide Web project — on the alt.hypertext newsgroup.

The page authored by Tim Berners-Lee was http://info.cern.ch/hypertext/WWW/TheProject.html. A later version of the page can be found here. The page contained Berners-Lee’s summary of a project for organizing information on a computer network using a web of links. In fact, the effort was originally dubbed “Mesh”, and only later became the “World Wide Web”.

The first photograph on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes. Twenty years on, one website alone, Flickr, hosts around 5.75 billion images.

[div class=attrib]Photograph of Les Horribles Cernettes, the very first photo to be published on the world wide web in 1992. Image courtesy of Cernettes / Silvano de Gennaro. Granted under fair use.[end-div]

Is Anyone There?

[div class=attrib]From the New York Times:[end-div]

“WHEN people don’t answer my e-mails, I always think maybe something tragic happened,” said John Leguizamo, the writer and performer, whose first marriage ended when his wife asked him by e-mail for a divorce. “Like maybe they got hit by a meteorite.”

Betsy Rapoport, an editor and life coach, said: “I don’t believe I have ever received an answer from any e-mail I’ve ever sent my children, now 21 and 18. Unless you count ‘idk’ as a response.”

The British linguist David Crystal said that his wife recently got a reply to an e-mail she sent in 2006. “It was like getting a postcard from the Second World War,” he said.

The roaring silence. The pause that does not refresh. The world is full of examples of how the anonymity and remove of the Internet cause us to write and post things that we later regret. But what of the way that anonymity and remove sometimes leave us dangling like a cartoon character that has run off a cliff?

For every fiery screed or gushy, tear-streaked confession in the ethersphere, it seems there’s a big patch of grainy, unresolved black. Though it would comfort us to think that these long silences are the product of technical failure or mishap, the more likely culprits are lack of courtesy and passive aggression.

“The Internet is something very informal that happened to a society that was already very informal,” said P. M. Forni, an etiquette expert and the author of “Choosing Civility.” “We can get away with murder, so to speak. The endless amount of people we can contact means we are not as cautious or kind as we might be. Consciously or unconsciously we think of our interlocutors as disposable or replaceable.”

Judith Kallos, who runs a site on Internet etiquette called netmanners.com, said the No. 1 complaint is that “people feel they’re being ignored.”

[div class=attrib]More from theSource here.[end-div]

NASA Retires Shuttle; France Telecom Guillotines Minitel

The lives of two technological marvels came to a close this week. First, NASA officially concluded the space shuttle program with the final flight of Atlantis.

Then, France Telecom announced the imminent demise of Minitel. Sacre Bleu! What next? Will the United Kingdom phase out afternoon tea and the Royal Family?

If you’re under 35 years of age, especially if you have never visited France, you may never have heard of Minitel. About ten years before the mainstream arrival of the World Wide Web and Mosaic, the first widely adopted web browser, there was Minitel. The Minitel network offered France Telecom subscribers a host of internet-like services such as email, white pages, news and information services, message boards, train reservations, airline schedules, stock quotes and online purchases. Users received small, custom terminals free of charge that connected via telephone line. Think prehistoric internet services: no hyperlinks, no fancy search engines, no rich graphics and no multimedia — that was Minitel.

Though rudimentary, Minitel was clearly ahead of its time and garnered a wide and loyal following in France. France Telecom delivered millions of terminals for free to household and business telephone subscribers. France Telecom estimated that in 2000 almost 9 million terminals, covering 25 million people or over 41 percent of the French population, still had access to the Minitel network. Deploying the Minitel service allowed France Telecom to replace the printed white-pages directories given to all its customers with a free, online Minitel version.

The Minitel equipment included a basic dumb terminal with a text-based screen, keyboard and modem. The modem transmission speed was a rather slow 75 bits per second (upstream) and 1,200 bits per second (downstream). This compares with today’s basic broadband speeds of 1 Mbit per second (upstream) and 4 Mbits per second (downstream).
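To put those line rates in perspective, here is a quick Python sketch comparing raw transfer times; the 100 KB page size is an illustrative assumption, and protocol overhead is ignored.

```python
# Compare download times at Minitel and modern broadband line rates.
# The page size is an illustrative assumption; overhead is ignored.

PAGE_BYTES = 100 * 1024  # a modest 100 KB web page (assumed)

def transfer_seconds(size_bytes: int, bits_per_second: float) -> float:
    """Time to move size_bytes at a given raw line rate."""
    return size_bytes * 8 / bits_per_second

minitel_s = transfer_seconds(PAGE_BYTES, 1_200)        # ~683 s
broadband_s = transfer_seconds(PAGE_BYTES, 4_000_000)  # ~0.2 s

print(f"Minitel: {minitel_s / 60:.1f} minutes, broadband: {broadband_s:.2f} seconds")
```

A page that loads in a fifth of a second on a basic broadband line would have taken Minitel over eleven minutes to draw, which is why the service stuck to terse, text-only screens.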

In a bow to Minitel’s more attractive siblings, the internet and the World Wide Web, France Telecom finally plans to retire the service on June 30, 2012.

[div class=attrib]Image courtesy of Wikipedia/Creative Commons.[end-div]

Book Review: Linchpin. Seth Godin

Phew! Another heartfelt call to action from business blogger Seth Godin to become indispensable.

Author, public speaker, orthogonal thinker and internet marketing maven, Seth Godin makes a compelling case to the artist within us all to get off our backsides, ignore the risk-averse “lizard brain”, as he puts it, get creative, and give the gift of art. After all, there is no way to win the “race to the bottom” wrought by the commoditization of both product and labor.

Bear in mind, Godin uses “art” in its broadest sense, not merely a canvas or a sculpture. Here, art is anything that its maker creates as such; it may be a service just as well as an object. Importantly, too, to be art it has to be given with the correct intent — as a gift (a transcendent, unexpected act that surpasses expectation).

Critics maintain that his latest bestseller is short on specifics, but indeed it should be. After all, if the process of creating art could be decomposed into an instruction manual it wouldn’t deliver art, it would deliver a Big Mac. So while we do not get a “7-point plan” that leads to creative nirvana, Godin does a good job, through his tireless combination of anecdote, repetition, historical analysis and social science, of convincing the “anonymous cogs in the machine” to think and act more like the insightful innovators that we can all become.

Godin rightly believes that the new world of work is rife with opportunity to add value through creativity, human connection and generosity, and this is the area where the indispensable artist gets to create his or her art, and to become a linchpin in the process. Godin’s linchpin is a rule-breaker, not a follower; a map-maker, not an order taker; a doer not a whiner.

In reading Linchpin we are reminded of the other side of the economy, in which we all unfortunately participate as well: the domain of commoditization, homogeneity and anonymity. This is the domain that artists do their utmost to avoid, and better still, subvert. Of course, this economy provides a benefit too – lower prices. However, a “Volkswagen-sized jar of pickles for $3” can only go so far. Commoditization undermines our very social fabric: it undermines our desire for uniqueness and special connection in a service or product that we purchase; it removes our dignity and respect when we allow ourselves to become a disposable part, a human cog, in the job machine. So, jettison the bland, the average, and the subservient; learn to take risks, face fear and become an indispensable, passionate, discerning artist – one who creates and one who gives.

Hello Internet; Goodbye Memory

Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.

Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one cited below from Columbia University, explore this in detail.

[div class=attrib]From Technology Review:[end-div]

A study says that we rely on external tools, including the Internet, to augment our memory.

The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.

Researchers and writers have debated over how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.

Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.

One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.

[div class=attrib]More from theSource here.[end-div]

The Homogenous Culture of “Like”

[div class=attrib]Echo and Narcissus, John William Waterhouse [Public domain], via Wikimedia Commons[end-div]

About 12 months ago I committed suicide — internet suicide that is. I closed my personal Facebook account after recognizing several important issues. First, it was a colossal waste of time; time that I could and should be using more productively. Second, it became apparent that following, belonging and agreeing with others through trivial “wall” status-in-a-can postings and the now pervasive “like” button was nothing other than a declaration of mindless group-think and a curious way to maintain social standing. So, my choice was clear: become part of a group that had similar interests, like-minded activities, same politics, parallel beliefs, common likes and dislikes; or revert to my own weirdly independent path. I chose the latter, rejecting the road towards a homogeneity of ideas and a points-based system of instant self-esteem.

This facet of the Facebook ecosystem has an effect similar to the filter bubble that I described in a previous post, The Technology of Personalization and the Bubble Syndrome. In both cases my explicit choices on Facebook, such as which friends I follow or which content I “like”, and my implicit browsing behaviors increasingly filter what I see and don’t see, narrowing the world of ideas to which I am exposed. This cannot be good.

So, although I may incur the wrath of author Neil Strauss for including an excerpt of his recent column below, I cannot help but “like” what he has to say. More importantly, he does a much more eloquent job of describing this phenomenon, which commoditizes social relationships and, dare I say it, lowers the barrier to entry for narcissists to grow and fine-tune their skills.

[div class=attrib]By Neil Strauss for the Wall Street Journal:[end-div]

If you happen to be reading this article online, you’ll notice that right above it, there is a button labeled “like.” Please stop reading and click on “like” right now.

Thank you. I feel much better. It’s good to be liked.

Don’t forget to comment on, tweet, blog about and StumbleUpon this article. And be sure to “+1” it if you’re on the newly launched Google+ social network. In fact, if you don’t want to read the rest of this article, at least stay on the page for a few minutes before clicking elsewhere. That way, it will appear to the site analytics as if you’ve read the whole thing.

Once, there was something called a point of view. And, after much strife and conflict, it eventually became a commonly held idea in some parts of the world that people were entitled to their own points of view.

Unfortunately, this idea is becoming an anachronism. When the Internet first came into public use, it was hailed as a liberation from conformity, a floating world ruled by passion, creativity, innovation and freedom of information. When it was hijacked first by advertising and then by commerce, it seemed like it had been fully co-opted and brought into line with human greed and ambition.

But there was one other element of human nature that the Internet still needed to conquer: the need to belong. The “like” button began on the website FriendFeed in 2007, appeared on Facebook in 2009, began spreading everywhere from YouTube to Amazon to most major news sites last year, and has now been officially embraced by Google as the agreeable, supportive and more status-conscious “+1.” As a result, we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.

Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.

Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.

“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.

[div class=attrib]More from theSource here.[end-div]

The Technology of Personalization and the Bubble Syndrome

A decade ago, in another place and era, during my days as director of technology research for a Fortune X company, I tinkered with a cool array of then-new personalization tools. The aim was simple: use some of these emerging technologies to deliver a more customized and personalized user experience for our customers and suppliers. What could be wrong with that? Surely, custom tools and more personalized data could do nothing but improve knowledge and enhance business relationships for all concerned. Our customers would benefit from seeing only the information they asked for, our suppliers would benefit from better analysis and filtered feedback, and we, the corporation in the middle, would benefit from making everyone in our supply chain more efficient and happy. Advertisers would be even happier since, with more focused data, they would be able to deliver messages that were increasingly precise and relevant based on personal context.

Fast forward to the present. Customization, or filtering, technologies have indeed helped optimize the supply chain; personalization tools and services have made customer experiences more focused and efficient. In today’s online world it’s so much easier to find, navigate and transact when the supplier at the other end of our browser knows who we are, where we live, what we earn, what we like and dislike, and so on. After all, if a supplier knows my needs, requirements, options, status and even personality, I’m much more likely to only receive information, services or products that fall within the bounds that define “me” in the supplier’s database.

And, therein lies the crux of the issue that has helped me to realize that personalization offers a false promise despite the seemingly obvious benefits to all concerned. The benefits are outweighed by two key issues: erosion of privacy and the bubble syndrome.

Privacy as Commodity

I’ll not dwell too long on the issue of privacy since in this article I’m much more concerned with the personalization bubble. However, as we have increasingly seen in recent times, privacy in all its forms is becoming a scarce and tradable commodity. Much of our data is now in the hands of a plethora of suppliers, intermediaries and their partners, ready for continued monetization. Our locations are constantly pinged and polled; our internet browsers note our web surfing habits and preferences; our purchases generate genius suggestions and recommendations to further whet our consumerist desires. Now in digital form, this data is open to legitimate sharing and highly vulnerable to discovery by hackers, phishers, spammers and anyone else with technical or financial resources.

Bubble Syndrome

Personalization technologies filter content at various levels, minutely and broadly, both overtly and covertly. For instance, I may explicitly signal my preferences for certain types of clothing deals at my favorite online retailer by answering a quick retail survey or checking a handful of specific preference buttons on a website.

However, my previous online purchases, browsing behaviors, time spent on various online pages, visits to other online retailers and a range of other flags deliver implicit or “covert” information to the same retailer (and others). This helps the retailer filter, customize and personalize what I get to see even before I have made a conscious decision to limit my searches and exposure to information. Clearly, this is not too concerning when my retailer knows I’m male and usually purchase size 32 inch jeans; after all, why would I need to see deals or product information for women’s shoes?

But, this type of covert filtering becomes more worrisome when the data being filtered and personalized is information, news, opinion and comment in all its glorious diversity. Sophisticated media organizations, information portals, aggregators and news services can deliver personalized and filtered information based on your overt and covert personal preferences as well. So, if you subscribe only to a certain type of information based on topic, interest, political persuasion or other dimension your personalized news services will continue to deliver mostly or only this type of information. And, as I have already described, your online behaviors will deliver additional filtering parameters to these news and information providers so that they may further personalize and narrow your consumption of information.
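To see how quickly implicit feedback can narrow a feed, consider this minimal, hypothetical Python sketch; the topics, starting weights and boost factor are all invented for illustration and are not drawn from any real recommender system.

```python
import random
from collections import Counter

# A deliberately naive personalization loop: every story shown boosts
# its own topic's weight, so the feed narrows without anyone choosing
# to narrow it. All topics and parameters here are illustrative.

random.seed(7)
TOPICS = ["politics-left", "politics-right", "science", "sport", "arts"]
weights = {t: 1.0 for t in TOPICS}  # start with no personalization

def pick_story(weights):
    """Sample a topic in proportion to its learned weight."""
    topics, w = zip(*weights.items())
    return random.choices(topics, weights=w, k=1)[0]

shown = Counter()
for _ in range(500):
    topic = pick_story(weights)
    shown[topic] += 1
    weights[topic] *= 1.02  # implicit signal: a view boosts the topic

print(shown.most_common())  # one or two topics come to dominate
```

Run it a few times: one or two topics almost always crowd out the rest. No malicious intent is required; a simple rich-get-richer update rule is enough to build the bubble.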

Increasingly, we will not be aware of what we don’t know. Whether explicitly or not, our use of personalization technologies will have the ability to build a filter, a bubble, around us, which will permit only information that we wish to see or that which our online suppliers wish us to see. We’ll not even get exposed to peripheral and tangential information — that information which lies outside the bubble. This filtering of the rich oceans of diverse information to a mono-dimensional stream will have profound implications for our social and cultural fabric.

I assume that our increasingly crowded planet will require ever more creativity, insight, tolerance and empathy as we tackle humanity’s many social and political challenges in the future. And, these very seeds of creativity, insight, tolerance and empathy are those that are most at risk from the personalization filter. How are we to be more tolerant of others’ opinions if we are never exposed to them in the first place? How are we to gain insight when disparate knowledge is no longer available for serendipitous discovery? How are we to become more creative if we are less exposed to ideas outside of our normal sphere, our bubble?

For some ideas on how to punch a few holes in your online filter bubble read Eli Pariser’s practical guide, here.

Filter Bubble image courtesy of TechCrunch.

The internet: Everything you ever need to know

[div class=attrib]From The Observer:[end-div]

In spite of all the answers the internet has given us, its full potential to transform our lives remains the great unknown. Here are the nine key steps to understanding the most powerful tool of our age – and where it’s taking us.

A funny thing happened to us on the way to the future. The internet went from being something exotic to being a boring utility, like mains electricity or running water – and we never really noticed. So we wound up being totally dependent on a system about which we are terminally incurious. You think I exaggerate about the dependence? Well, just ask Estonia, one of the most internet-dependent countries on the planet, which in 2007 was more or less shut down for two weeks by a sustained attack on its network infrastructure. Or imagine what it would be like if, one day, you suddenly found yourself unable to book flights, transfer funds from your bank account, check bus timetables, send email, search Google, call your family using Skype, buy music from Apple or books from Amazon, buy or sell stuff on eBay, watch clips on YouTube or BBC programmes on the iPlayer – or do the 1,001 other things that have become as natural as breathing.

The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That’s not because we’re short of information about the network; on the contrary, we’re awash with the stuff. It’s just that we don’t know what it all means. We’re in the state once described by that great scholar of cyberspace, Manuel Castells, as “informed bewilderment”.

Mainstream media don’t exactly help here, because much – if not most – media coverage of the net is negative. It may be essential for our kids’ education, they concede, but it’s riddled with online predators, seeking children to “groom” for abuse. Google is supposedly “making us stupid” and shattering our concentration into the bargain. It’s also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive “flash mobs” which ambush innocent columnists such as Jan Moir. And so on.

All of which might lead a detached observer to ask: if the internet is such a disaster, how come 27% of the world’s population (or about 1.8 billion people) use it happily every day, while billions more are desperate to get access to it?

So how might we go about getting a more balanced view of the net? What would you really need to know to understand the internet phenomenon? Having thought about it for a while, my conclusion is that all you need is a smallish number of big ideas, which, taken together, sharply reduce the bewilderment of which Castells writes so eloquently.

But how many ideas? In 1956, the psychologist George Miller published a famous paper in the journal Psychological Review. Its title was “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information” and in it Miller set out to summarise some earlier experiments which attempted to measure the limits of people’s short-term memory. In each case he reported that the effective “channel capacity” lay between five and nine choices. Miller did not draw any firm conclusions from this, however, and contented himself by merely conjecturing that “the recurring sevens might represent something deep and profound or be just coincidence”. And that, he probably thought, was that.

But Miller had underestimated the appetite of popular culture for anything with the word “magical” in the title. Instead of being known as a mere aggregator of research results, Miller found himself identified as a kind of sage — a discoverer of a profound truth about human nature. “My problem,” he wrote, “is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals… Either there really is something unusual about the number or else I am suffering from delusions of persecution.”

[div class=attrib]More from theSource here.[end-div]

The Madness of Crowds and an Internet Delusion

[div class=attrib]From The New York Times:[end-div]

[div class=attrib]RETHINKING THE WEB: Jaron Lanier, pictured here in 1999, was an early proponent of the Internet’s open culture. His new book examines the downsides.[end-div]

In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.

Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.

He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.

“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”

I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.

So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.

He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.

It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.

Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.

In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.

But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).

Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.

“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.

“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”

Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.

“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”

To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)

Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?

[div class=attrib]More from theSource here.[end-div]

Your Digital Privacy? It May Already Be an Illusion

[div class=attrib]From Discover:[end-div]

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
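A rough, hypothetical back-of-the-envelope calculation in Python shows why that structure matters; the candidate counts below are invented for illustration and are not taken from the study.

```python
# Why a structured SSN is guessable: birthplace narrows the area number,
# birth date narrows the group number, leaving mostly the 4-digit serial.
# All candidate counts below are hypothetical, for illustration only.

candidate_area_numbers = 2    # area numbers plausibly tied to one birthplace
candidate_group_numbers = 3   # group numbers in use around the birth date
serial_numbers = 9_999        # 0001-9999, assigned consecutively

structured_space = candidate_area_numbers * candidate_group_numbers * serial_numbers
random_space = 10 ** 9        # a truly random 9-digit identifier

print(f"Structured: ~{structured_space:,} candidates")  # ~60,000
print(f"Random:      {random_space:,} candidates")      # 1,000,000,000
```

Because serials were also assigned consecutively, nearby birth records can anchor the serial range in use and shrink the effective space further, which is consistent with the study’s finding of successful guesses in as few as 1,000 tries.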

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

[div class=attrib]More from theSource here.[end-div]