
The Existential Dangers of the Online Echo Chamber


The online filter bubble is a natural extension of our preexisting biases, particularly evident in our media consumption. Those of us of a certain age — above 30 years — once purchased (and maybe still do) our favorite paper-based newspapers and glued ourselves to our favorite TV news channels. These sources mirrored, for the most part, our cultural and political preferences. The internet took this a step further by building a tightly wound, self-reinforcing feedback loop. We consume our favorite online media, which prompts algorithms to deliver more of the same. I’ve written about the filter bubble for years (here, here and here).

The online filter bubble in which each of us lives — those of us online — may seem no more dangerous than its offline predecessor. After all, the online version of the NYT delivers left-of-center news, just like its printed cousin. So what’s the big deal? Well, the pervasiveness of our technology has now enabled these filters to creep insidiously into many aspects of our lives, from news consumption and entertainment programming to shopping and even dating. And, since we now spend growing swathes of our time online, our serendipitous exposure to varied content that typically lies outside this bubble in the real, offline world is diminishing. Consequently, the online filter bubble is taking on a much more critical role and having a greater effect in maintaining our tunnel vision.

However, that’s not all. Over the last few years we have become exposed to yet another dangerous phenomenon to have made the jump from the offline world to online — the echo chamber. The online echo chamber is enabled by our like-minded online communities and catalyzed by the tools of social media. And, it turns our filter bubble into a self-reinforcing, exclusionary community that is harmful to varied, reasoned opinion and healthy skepticism.

Those of us who reside on Facebook are likely to be part of a very homogeneous social circle, which trusts, shares and reinforces information accepted by the group and discards information that does not match the group’s social norms. This makes the spread of misinformation — fake stories, conspiracy theories, hoaxes, rumors — so very effective. Importantly, this is increasingly to the exclusion of all else, including real news and accepted scientific fact.

Why embrace objective journalism, trusted science and thoughtful political dialogue when you can get a juicy, emotive meme from a friend of a friend on Facebook? Why trust a story from Reuters or science from Scientific American when you get your “news” via a friend’s link from Alex Jones and the Breitbart News Network?

And, there’s no simple solution, which puts many of our once trusted institutions in severe jeopardy. Those of us who care have a duty to ensure these issues are in the minds of our public officials and the guardians of our technology and media networks.

From Scientific American:

If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.

In fact, my research team’s analysis of data from Columbia University’s Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.

Many are asking whether this onslaught of digital misinformation affected the outcome of the 2016 U.S. election. The truth is we do not know, although there are reasons to believe it is entirely possible, based on past analysis and accounts from other countries. Each piece of misinformation contributes to the shaping of our opinions. Overall, the harm can be very real: If people can be conned into jeopardizing our children’s lives, as they do when they opt out of immunizations, why not our democracy?

As a researcher on the spread of misinformation through social media, I know that limiting news fakers’ ability to sell ads, as recently announced by Google and Facebook, is a step in the right direction. But it will not curb abuses driven by political motives.

Read the entire article here.

Image courtesy of Google Search.

Search and the Invisible Hand of Bias


I’ve written about the online filter bubble for a while now. It’s an insidious and disturbing consequence of our online world. It refers to the phenomenon whereby our profile, personal preferences, history and connections pre-select and filter the type of content that reaches us, eliminating things we don’t need to see. The filter bubble reduces our exposure to the wider world of information and serendipitous discovery.

If this were not bad enough, the online world enables a much more dangerous threat — one of hidden bias through explicit manipulation. We’re all familiar with the pull and push exerted by the constant bombardment from overt advertising. We’re also familiar with more subtle techniques of ambient and subliminal control, which aim to sway our minds without our conscious awareness — think mood music in your grocery store (it really does work).

So, now comes another more subtle form of manipulation, but with more powerful results, and it’s tied to search engines and the central role these tools play in our daily lives.

Online search engines, such as Google, know you. They know your eye movements and your click habits; they know your proclivity to select a search result near the top of the first search engine results page (SERP). Advertisers part with a fortune each day with the goal of appearing in this sweet spot on a SERP. This is a tried and tested process — higher ranking on a SERP leads to more clicks and shifts more product.

Google and many other search engines list a handful of sponsored results at the top of a SERP, followed by the organic results, ranked in the order that best fits your search query. Your expectation is that these results are tailored to your query but otherwise unbiased. That’s the key.

New research shows that you believe these SERP results to be unbiased, even when they have been manipulated behind the scenes. Moreover, these manipulated results can greatly sway your opinion. The phenomenon now comes with a name: the search engine manipulation effect, or SEME (pronounced “seem”).

In the wrong hands — government overlords or technology oligarchs — this heralds a disturbing possible (and probable) future, already underway in countries with tightly controlled media and flows of information.
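To see why rank is such a powerful lever, consider a minimal Python sketch. The click-through rates and the ten hypothetical results below are illustrative assumptions — not figures from Epstein’s research — but they show how, when attention decays sharply with rank, quietly promoting one side’s results shifts aggregate exposure without removing a single link.

import random

# Illustrative position-bias model: probability that a user clicks the result at
# each rank of a SERP. These numbers are assumptions, not measured click rates.
CLICK_PROB_BY_RANK = [0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01]

# Ten hypothetical results: five favor candidate A, five favor candidate B.
RESULTS = ["A1", "A2", "A3", "A4", "A5", "B1", "B2", "B3", "B4", "B5"]

def simulate(order_fn, trials=100_000):
    """Count clicks favoring each candidate under a given ranking policy."""
    clicks = {"A": 0, "B": 0}
    for _ in range(trials):
        ranking = order_fn(RESULTS[:])
        for rank, result in enumerate(ranking):
            if random.random() < CLICK_PROB_BY_RANK[rank]:
                clicks[result[0]] += 1
                break  # at most one click per simulated search
    return clicks

def neutral(results):          # unbiased engine: ordering unrelated to candidate
    random.shuffle(results)
    return results

def biased(results):           # manipulated engine: A's results always on top
    return sorted(results)     # "A..." sorts before "B..."

print("neutral ranking:", simulate(neutral))
print("biased ranking :", simulate(biased))

Under these assumed numbers a neutral shuffle splits clicks roughly evenly between the two candidates, while the reordered list — the same ten results, merely resequenced — sends the overwhelming majority of clicks to one side.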

Check out a detailed essay on SEME by Robert Epstein here. Epstein is an author and research psychologist at the American Institute for Behavioral Research and Technology in California.

Finally, if you’re interested in using an alternative search engine that’s less interested in taking over the world, check out DuckDuckGo.

Image courtesy of DuckDuckGo.

Facebook’s Growing Filter Bubble

I’ve been writing about the filter bubble for quite some time. The filter bubble refers to the tendency for online search tools, and now social media, to screen and deliver results that fit our online history and profile, thereby returning only results that are deemed relevant. Eli Pariser coined the term in his book The Filter Bubble, published in 2011.

The filter bubble presents us with a clear Faustian bargain: give up knowledge and serendipitous discovery of the wider world for narrow, personalized news and information that matches our immediate needs and agrees with our profile.

The great irony is that our technologies promise a limitless, interconnected web of data and information, but these same technologies ensure that we will see only the small sliver of information that passes through our personal, and social, filters. This consigns us to live inside our very own personal echo chambers, separated from disagreeable information that fails to match the criteria in our profiles or the measures gleaned from our social networks.

So, we should all be concerned as Facebook turns its attention to delivering and filtering news, and curating it in a quest for a more profitable return. Without question we are in the early stages of the reinvention of journalism as a whole and digital news in particular. The logical conclusion of this evolution has yet to be written, but it is certainly clear that handing so much power over the dissemination of news and information to one company cannot be in our long-term interests. If Mr. Zuckerberg and team deem certain political news to be personally distasteful or contrary to their corporate mission, should we sit back and allow them to filter it for us? I think not.

From Wired:

When Facebook News Feed guru Will Cathcart took the stage at F8 to talk about news, the audience was packed. Some followed along on Twitter. Others streamed the session online. Journalists, developers, and media types all clamored to catch a glimpse of “Creating Value for News Publishers and Readers on Facebook”—value that has become the most coveted asset in the news business as Facebook becomes a primary way the public finds and shares news.

As Cathcart kicked off the session, he took the captive audience to a Syrian refugee camp via Facebook’s new, innovative, and immersive 360 video experience. He didn’t say much about where the camp was (“I believe in Greece?”), nor anything about the camp situation. He didn’t offer the audio of the journalist describing the scene. No matter!

The refugee camp is a placeholder. A placeholder, in fact, that has become so overused that it was actually the second time yesterday that Facebook execs waved their hands about the importance of media before playing a video clip of refugees. It could have been a tour of the White House, the Boston bombing, Coachella. It could have been anything to Facebook. It’s “content.” It’s a commodity. What matters to Facebook is the product it’s selling—and who’s buying is you and the news industry.

What Facebook is selling you is pretty simple. It’s selling an experience, part of which includes news. That experience is dependent on content creators—you know, journalists and newsrooms—who come up with ideas, use their own resources to realize them, and then put them out into the world. All of which takes time, money, and skill. For its “media partners” (the CNNs, BuzzFeeds, and WIREDs of the world), Facebook is selling a promise that their future will be bright if they use Facebook’s latest news products to distribute those new, innovative, and immersive stories to Facebook’s giant audience.

The only problem is that Facebook’s promise isn’t a real one. It’s false hope; or at its worst, a threat.

Read the entire article here.

A Case For Less News


I find myself agreeing with columnist Oliver Burkeman over at the Guardian that we need to carefully manage our access to the 24/7 news cycle. Our news media has learned to thrive on hyperbole and sensationalism, which — let’s face it — tends to be mostly negative. This unending and unnerving stream of gloom and doom tends to make us believe that we are surrounded by more badness than there actually is. I have to believe that most of the 7 billion+ personal stories each day that we could be hearing about — however mundane — are unlikely to be bad or evil. So, while it may not be wise to switch off cable or satellite news completely, we should consider a more measured, and balanced, approach to the media monster.

From the Guardian:

A few days before Christmas, feeling rather furtive about it, I went on a media diet: I quietly unsubscribed from, unfollowed or otherwise disconnected from several people and news sources whose output, I’d noticed, did nothing but bring me down. This felt like defeat. I’ve railed against the popular self-help advice that you should “give up reading the news” on the grounds that it’s depressing and distracting: if bad stuff’s happening out there, my reasoning goes, I don’t want to live in an artificial bubble of privilege and positivity; I want to face reality. But at some point during 2015’s relentless awfulness, it became unignorable: the days when I read about another mass shooting, another tale of desperate refugees or anything involving the words “Donald Trump” were the days I’d end up gloomier, tetchier, more attention-scattered. Needless to say, I channelled none of this malaise into making the planet better. I just got grumbly about the world, like a walking embodiment of that bumper-sticker: “Where are we going, and why are we in this handbasket?”

One problem is that merely knowing that the news focuses disproportionately on negative and scary stories doesn’t mean you’ll adjust your emotions accordingly. People like me scorn Trump and the Daily Mail for sowing unwarranted fears. We know that the risk of dying in traffic is vastly greater than from terrorism. We may even know that US gun crime is in dramatic decline, that global economic inequality is decreasing, or that there’s not much evidence that police brutality is on the rise. (We just see more of it, thanks to smartphones.) But, apparently, the part of our minds that knows these facts isn’t the same part that decides whether to feel upbeat or despairing. It’s entirely possible to know things are pretty good, yet feel as if they’re terrible.

This phenomenon has curious parallels with the “busyness epidemic”. Data on leisure time suggests we’re not much busier than we were, yet we feel busier, partly because – for “knowledge workers”, anyway – there’s no limit to the number of emails we can get, the demands that can be made of us, or the hours of the day we can be in touch with the office. Work feels infinite, but our capacities are finite, therefore overwhelm is inevitable. Similarly, technology connects us to more and more of the world’s suffering, of which there’s an essentially infinite amount, until feeling steamrollered by it becomes structurally inevitable – not a sign that life’s getting worse. And the consequences go beyond glumness. They include “compassion fade”, the well-studied effect whereby our urge to help the unfortunate declines as their numbers increase.

Read the whole column here.

Image courtesy of Google Search.

Playing Music, Playing Ads – Same Difference

The internet music radio service Pandora knows a lot about you and another 200 million or so registered members. If you use the service regularly it comes to recognize your musical likes and dislikes. In this way Pandora learns to deliver more music programming that it thinks you will like, and it works rather well.

But, the story does not end there since Pandora is not just fun, it’s a business. For in its quest to monetize you even more effectively, Pandora is seeking to pair personalized ads to your specific musical tastes. So, beware forthcoming ads tailored to your music preferences — metalheads, you have been warned!
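Pandora has not published the models behind this pairing, but the general mechanics of behavioral targeting are easy to sketch. In the hypothetical Python snippet below, thumbs-up and thumbs-down votes are rolled up into per-genre affinities, which are then weighted against an assumed genre-to-ad mapping. Every genre, ad category and number in it is invented for illustration and is not Pandora’s actual data or model.

from collections import Counter

# Hypothetical listening history: (genre, thumbs_up) pairs.
history = [
    ("country", True), ("country", True), ("christian", True),
    ("hip-hop", False), ("classical", True), ("country", False),
]

# Assumed mapping from genres to ad categories an advertiser might target.
AD_AFFINITY = {
    "country":   {"pickup trucks": 0.8, "luxury cars": 0.1},
    "classical": {"pickup trucks": 0.1, "luxury cars": 0.7},
    "christian": {"pickup trucks": 0.5, "luxury cars": 0.2},
    "hip-hop":   {"pickup trucks": 0.2, "luxury cars": 0.4},
}

def genre_scores(history):
    """Net thumbs per genre: +1 for a thumbs-up, -1 for a thumbs-down."""
    scores = Counter()
    for genre, liked in history:
        scores[genre] += 1 if liked else -1
    return scores

def rank_ads(history):
    """Weight each ad category by the listener's net affinity for each genre."""
    ad_scores = Counter()
    for genre, weight in genre_scores(history).items():
        for ad, affinity in AD_AFFINITY.get(genre, {}).items():
            ad_scores[ad] += weight * affinity
    return ad_scores.most_common()

print(rank_ads(history))   # pickup-truck ads outrank luxury-car ads for this listener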

From the NYT:

Pandora, the Internet radio service, is plying a new tune.

After years of customizing playlists to individual listeners by analyzing components of the songs they like, then playing them tracks with similar traits, the company has started data-mining users’ musical tastes for clues about the kinds of ads most likely to engage them.

“It’s becoming quite apparent to us that the world of playing the perfect music to people and the world of playing perfect advertising to them are strikingly similar,” says Eric Bieschke, Pandora’s chief scientist.

Consider someone who’s in an adventurous musical mood on a weekend afternoon, he says. One hypothesis is that this listener may be more likely to click on an ad for, say, adventure travel in Costa Rica than a person in an office on a Monday morning listening to familiar tunes. And that person at the office, Mr. Bieschke says, may be more inclined to respond to a more conservative travel ad for a restaurant-and-museum tour of Paris. Pandora is now testing hypotheses like these by, among other methods, measuring the frequency of ad clicks. “There are a lot of interesting things we can do on the music side that bridge the way to advertising,” says Mr. Bieschke, who led the development of Pandora’s music recommendation engine.

A few services, like Pandora, Amazon and Netflix, were early in developing algorithms to recommend products based on an individual customer’s preferences or those of people with similar profiles. Now, some companies are trying to differentiate themselves by using their proprietary data sets to make deeper inferences about individuals and try to influence their behavior.

This online ad customization technique is known as behavioral targeting, but Pandora adds a music layer. Pandora has collected song preference and other details about more than 200 million registered users, and those people have expressed their song likes and dislikes by pressing the site’s thumbs-up and thumbs-down buttons more than 35 billion times. Because Pandora needs to understand the type of device a listener is using in order to deliver songs in a playable format, its system also knows whether people are tuning in from their cars, from iPhones or Android phones or from desktops.

So it seems only logical for the company to start seeking correlations between users’ listening habits and the kinds of ads they might be most receptive to.

“The advantage of using our own in-house data is that we have it down to the individual level, to the specific person who is using Pandora,” Mr. Bieschke says. “We take all of these signals and look at correlations that lead us to come up with magical insights about somebody.”

People’s music, movie or book choices may reveal much more than commercial likes and dislikes. Certain product or cultural preferences can give glimpses into consumers’ political beliefs, religious faith, sexual orientation or other intimate issues. That means many organizations now are not merely collecting details about where we go and what we buy, but are also making inferences about who we are.

“I would guess, looking at music choices, you could probably predict with high accuracy a person’s worldview,” says Vitaly Shmatikov, an associate professor of computer science at the University of Texas at Austin, where he studies computer security and privacy. “You might be able to predict people’s stance on issues like gun control or the environment because there are bands and music tracks that do express strong positions.”

Pandora, for one, has a political ad-targeting system that has been used in presidential and congressional campaigns, and even a few for governor. It can deconstruct users’ song preferences to predict their political party of choice. (The company does not analyze listeners’ attitudes to individual political issues like abortion or fracking.)

During the next federal election cycle, for instance, Pandora users tuning into country music acts, stand-up comedians or Christian bands might hear or see ads for Republican candidates for Congress. Others listening to hip-hop tunes, or to classical acts like the Berlin Philharmonic, might hear ads for Democrats.

Because Pandora users provide their ZIP codes when they register, Mr. Bieschke says, “we can play ads only for the specific districts political campaigns want to target,” and “we can use their music to predict users’ political affiliations.” But he cautioned that the predictions about users’ political parties are machine-generated forecasts for groups of listeners with certain similar characteristics and may not be correct for any particular listener.

Shazam, the song recognition app with 80 million unique monthly users, also plays ads based on users’ preferred music genres. “Hypothetically, a Ford F-150 pickup truck might over-index to country music listeners,” says Kevin McGurn, Shazam’s chief revenue officer. For those who prefer U2 and Coldplay, a demographic that skews to middle-age people with relatively high incomes, he says, the app might play ads for luxury cars like Jaguars.

Read the entire article here.

Image courtesy of Pandora.

How to Burst the Filter Bubble


As the customer service systems of online retailers and media companies become ever more attuned to their shoppers’ and members’ preferences, the power of the filter bubble grows ever greater. And, that’s not a good thing.

The filter bubble ensures that digital consumers see more content that matches their preferences and, by extension, continues to reinforce their opinions and beliefs. Conversely, consumers see less and less content that diverges from historical behavior and calculated preferences, often called “signals”.

And, that’s not a good thing.

What of diverse opinion and diverse views? Without a plurality of views and a rich spectrum of positions, creativity loses its battle with banality and conformity. So how can digital consumers break free of the systems that deliver custom recommendations and filtered content, and that reduce serendipitous discovery?
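Before the article, a minimal Python sketch of the dynamic in question. A recommender that always serves the items closest to your current profile keeps you exactly where you are; reserving even a small quota of slots for distant items widens what you see. The catalogue, the drift rule and the 20% quota below are invented for illustration and are far cruder than the approach described in the article.

import random

# A toy catalogue of 500 items, each tagged with a viewpoint between -1.0 and +1.0.
random.seed(1)
CATALOGUE = [(f"item{i}", random.uniform(-1.0, 1.0)) for i in range(500)]

def recommend(profile, k=10, diversity_quota=0.0):
    """Pick k items: most are the closest to the user's profile ('signals'),
    but an optional quota is reserved for items far outside it."""
    by_closeness = sorted(CATALOGUE, key=lambda item: abs(item[1] - profile))
    n_diverse = int(k * diversity_quota)
    picks = by_closeness[: k - n_diverse]                                      # reinforce the profile
    picks += random.sample(by_closeness[len(by_closeness) // 2 :], n_diverse)  # stretch it
    return picks

def exposure_spread(diversity_quota, rounds=20):
    """Run the loop: the profile drifts toward what is shown, and we track how
    wide a range of viewpoints the user was actually exposed to."""
    profile, seen = 0.3, []
    for _ in range(rounds):
        shown = recommend(profile, diversity_quota=diversity_quota)
        seen.extend(v for _, v in shown)
        profile = 0.8 * profile + 0.2 * sum(v for _, v in shown) / len(shown)
    return max(seen) - min(seen)

print("viewpoint spread, no quota :", round(exposure_spread(0.0), 2))
print("viewpoint spread, 20% quota:", round(exposure_spread(0.2), 2))

Run as-is, the quota-free loop confines exposure to a narrow band around the starting profile, while even a 20% quota pulls in viewpoints from across the spectrum.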

From Technology Review:

The term “filter bubble” entered the public domain back in 2011 when the internet activist Eli Pariser coined it to refer to the way recommendation engines shield people from certain aspects of the real world.

Pariser used the example of two people who googled the term “BP”. One received links to investment news about BP while the other received links to the Deepwater Horizon oil spill, presumably as a result of some recommendation algorithm.

This is an insidious problem. Much social research shows that people prefer to receive information that they agree with instead of information that challenges their beliefs. This problem is compounded when social networks recommend content based on what users already like and on what people similar to them also like.

This is the filter bubble—being surrounded only by people you like and content that you agree with.

And the danger is that it can polarise populations creating potentially harmful divisions in society.

Today, Eduardo Graells-Garrido at the Universitat Pompeu Fabra in Barcelona as well as Mounia Lalmas and Daniel Quercia, both at Yahoo Labs, say they’ve hit on a way to burst the filter bubble. Their idea is that although people may have opposing views on sensitive topics, they may also share interests in other areas. And they’ve built a recommendation engine that points these kinds of people towards each other based on their own preferences.

The result is that individuals are exposed to a much wider range of opinions, ideas and people than they would otherwise experience. And because this is done using their own interests, they end up being equally satisfied with the results (although not without a period of acclimatisation). “We nudge users to read content from people who may have opposite views, or high view gaps, in those issues, while still being relevant according to their preferences,” say Graells-Garrido and co.

These guys have tested this approach by focusing on the topic of abortion as discussed by people in Chile in August and September this year. Chile has some of the most restrictive anti-abortion laws on the planet–it was legalised here in 1931 and then made illegal again in 1989. With presidential elections in November, a highly polarised debate was raging in the country at that time.

They found over 40,000 Twitter users who had expressed an opinion using hashtags such as #pro-life and #pro-choice. They trimmed this group by choosing only those who gave their location as Chile and by excluding those who tweeted rarely. That left over 3000 Twitter users.

The team then computed the difference in the views of these users on this and other topics using the regularity with which they used certain other keywords. This allowed them to create a kind of wordcloud for each user that acted like a kind of data portrait.

They then recommended tweets to each person based on similarities between their word clouds and especially when they differed in their views on the topic of abortion.

The results show that people can be more open than expected to ideas that oppose their own. It turns out that users who openly speak about sensitive issues are more open to receive recommendations authored by people with opposing views, say Graells-Garrido and co.

They also say that challenging people with new ideas makes them generally more receptive to change. That has important implications for social media sites. There is good evidence that users can sometimes become so resistant to change that any form of redesign dramatically reduces the popularity of the service. Giving them a greater range of content could change that.

“We conclude that an indirect approach to connecting people with opposing views has great potential,” say Graells-Garrido and co.

It’s certainly a start. But whether it can prevent the herding behaviour in which users sometimes desert social media sites overnight is debatable. Still, the overall approach is admirable. Connecting people is important when they share similar interests but arguably even more so when their views clash.

Read the entire article here.

Video: Eli Pariser, beware online “filter bubbles”. Courtesy of Eli Pariser, thefilterbubble.

Big Bad Data; Growing Discrimination

You may be an anonymous data point online, but that does not mean you are safe from personal discrimination. As the technology to gather and track your every move online steadily improves, so do the opportunities to misuse that information. Many of us are already unwitting participants in the growing internet filter bubble — a phenomenon that amplifies our personal tastes, opinions and shopping habits by pre-screening and delivering only more of the same based on our online footprints. Many argue that this is benign and even beneficial — after all, isn’t it wonderful when Google’s ad network pops up product recommendations for you on “random” websites based on your previous searches, and isn’t it that much more effective when news organizations deliver only stories that fit your browsing history, interests, affiliations or demographic?

Not so. We are in ever-increasing danger of allowing others to control what we see and hear online. So kiss discovery and serendipity goodbye. More troubling still, beyond the ability to deliver personalized experiences online, as corporations gather more and more data from and about you, they can decide if you are of value. While your data may be aggregated and anonymized, the results can still help a business target you, or not, whether you are explicitly identified by name or not.

So, perhaps your previous online shopping history divulged a proclivity for certain medications; well, kiss goodbye to that pre-existing health condition waiver. Or, perhaps the online groups that you belong to are rather left-of-center or way out in left-field; well, say hello to a smaller annual bonus from your conservative employer. Perhaps, the news or social groups that you subscribe to don’t align very well with the values of your landlord or prospective employer. Or, perhaps, Amazon will not allow you to shop online any more because the company knows your annual take-home pay and that you are a potential credit risk. You get the idea.

Without adequate safeguards and controls, those who gather the data about you will be in the driver’s seat. Whereas, put simply, it should be the other way around — you should own the data that describes who you are and what you do, and you should determine who gets to see it and how it’s used. Welcome to the age of Big (Bad) Data and the new age of data-driven discrimination.

From Technology Review:

Data analytics are being used to implement a subtle form of discrimination, while anonymous data sets can be mined to reveal health data and other private information, a Microsoft researcher warned this morning at MIT Technology Review’s EmTech conference.

Kate Crawford, principal researcher at Microsoft Research, argued that these problems could be addressed with new legal approaches to the use of personal data.

In a new paper, she and a colleague propose a system of “due process” that would give people more legal rights to understand how data analytics are used in determinations made against them, such as denial of health insurance or a job. “It’s the very start of a conversation about how to do this better,” Crawford, who is also a visiting professor at the MIT Center for Civic Media, said in an interview before the event. “People think ‘big data’ avoids the problem of discrimination, because you are dealing with big data sets, but in fact big data is being used for more and more precise forms of discrimination—a form of data redlining.”

During her talk this morning, Crawford added that with big data, “you will never know what those discriminations are, and I think that’s where the concern begins.”

Health data is particularly vulnerable, the researcher says. Search terms for disease symptoms, online purchases of medical supplies, and even the RFID tags on drug packaging can provide websites and retailers with information about a person’s health.

As Crawford and Jason Schultz, a professor at New York University Law School, wrote in their paper: “When these data sets are cross-referenced with traditional health information, as big data is designed to do, it is possible to generate a detailed picture about a person’s health, including information a person may never have disclosed to a health provider.”

And a recent Cambridge University study, which Crawford alluded to during her talk, found that “highly sensitive personal attributes”— including sexual orientation, personality traits, use of addictive substances, and even parental separation—are highly predictable by analyzing what people click on to indicate they “like” on Facebook. The study analyzed the “likes” of 58,000 Facebook users.

Similarly, purchasing histories, tweets, and demographic, location, and other information gathered about individual Web users, when combined with data from other sources, can result in new kinds of profiles that an employer or landlord might use to deny someone a job or an apartment.

In response to such risks, the paper’s authors propose a legal framework they call “big data due process.” Under this concept, a person who has been subject to some determination—whether denial of health insurance, rejection of a job or housing application, or an arrest—would have the right to learn how big data analytics were used.

This would entail the sorts of disclosure and cross-examination rights that are already enshrined in the legal systems of the United States and many other nations. “Before there can be greater social acceptance of big data’s role in decision-making, especially within government, it must also appear fair, and have an acceptable degree of predictability, transparency, and rationality,” the authors write.

Data analytics can also get things deeply wrong, Crawford notes. Even the formerly successful use of Google search terms to identify flu outbreaks failed last year, when actual cases fell far short of predictions. Increased flu-related media coverage and chatter about the flu in social media were mistaken for signs of people complaining they were sick, leading to the overestimates.  “This is where social media data can get complicated,” Crawford said.

Read the entire article here.

Filter Bubble on the Move

Personalization technology that allows marketers and media organizations to customize their products and content specifically to you seems to be a win-win for all: businesses win by addressing the needs — perceived or real — of specific customers; you win by seeing or receiving only items in which you’re interested.

But, this is a rather simplistic calculation, for it fails to address the consequences of narrow targeting and a cycle of blinkered self-reinforcement, resulting in tunnel vision. More recently this has become known as the filter bubble. The filter bubble eliminates serendipitous discovery and reduces creative connections by limiting our exposure to contrarian viewpoints and the unexpected. Or to put it more bluntly, it helps maintain a closed mind. This is true while you sit on the couch surfing the internet and, increasingly, while you travel.

From the New York Times:

I’m half a world from home, in a city I’ve never explored, with fresh sights and sounds around every corner. And what am I doing?

I’m watching exactly the kind of television program I might watch in my Manhattan apartment.

Before I left New York, I downloaded a season of “The Wire,” in case I wanted to binge, in case I needed the comfort. It’s on my iPad with a slew of books I’m sure to find gripping, a bunch of the music I like best, issues of favorite magazines: a portable trove of the tried and true, guaranteed to insulate me from the strange and new.

I force myself to quit “The Wire” after about 20 minutes and I venture into the streets, because Baltimore’s drug dealers will wait and Shanghai’s soup dumplings won’t. But I’m haunted by how tempting it was to stay put, by how easily a person these days can travel the globe, and travel through life, in a thoroughly customized cocoon.

I’m not talking about the chain hotels or chain restaurants that we’ve long had and that somehow manage to be identical from time zone to time zone, language to language: carbon-copy refuges for unadventurous souls and stomachs.

I’m talking about our hard drives, our wired ways, “the cloud” and all of that. I’m talking about our unprecedented ability to tote around and dwell in a snugly tailored reality of our own creation, a monochromatic gallery of our own curation.

This coddling involves more than earphones, touch pads, palm-sized screens and gigabytes of memory. It’s a function of how so many of us use this technology and how we let it use us. We tune out by tucking ourselves into virtual enclaves in which our ingrained tastes are mirrored and our established opinions reflected back at us.

In theory the Internet, along with its kindred advances, should expand our horizons, speeding us to aesthetic and intellectual territories we haven’t charted before. Often it does.

But at our instigation and with our assent, it also herds us into tribes of common thought and shared temperament, amplifying the timeless human tropism toward cliques. Cyberspace, like suburbia, has gated communities.

Our Web bookmarks and our chosen social-media feeds help us retreat deeper into our partisan camps. (Cable-television news lends its own mighty hand.) “It’s the great irony of the Internet era: people have more access than ever to an array of viewpoints, but also the technological ability to screen out anything that doesn’t reinforce their views,” Jonathan Martin wrote in Politico last year, explaining how so many strategists and analysts on the right convinced themselves, in defiance of polls, that Mitt Romney was about to win the presidency.

But this sort of echo chamber also exists on cultural fronts, where we’re exhorted toward sameness and sorted into categories. The helpful video-store clerk or bookstore owner has been replaced, refined, automated: we now have Netflix suggestions for what we should watch next, based on what we’ve watched before, and we’re given Amazon prods for purchasing novels that have been shown to please readers just like us. We’re profiled, then clustered accordingly.

By joining particular threads on Facebook and Twitter, we can linger interminably on the one or two television shows that obsess us. Through music-streaming services and their formulas for our sweet spots, we meet new bands that might as well be reconfigurations of the old ones. Algorithms lead us to anagrams.

Read the entire article here.

The Filter Bubble Eats the Book World

Last week Amazon purchased Goodreads, the online book review site. Since 2007 Goodreads has grown to become home to over 16 million members who share a passion for discovering and sharing great literature. Now, with Amazon’s acquisition, many are concerned that this represents another step towards a monolithic and monopolistic enterprise that controls vast swathes of the market. While Amazon’s innovation has upended the bricks-and-mortar worlds of publishing and retailing, its increasingly dominant market power raises serious concerns over access, distribution and choice. This is another worrying example of the so-called filter bubble — where increasingly edited selections and personalized recommendations act to limit and dumb down content.

From the Guardian:

“Truly devastating” for some authors but “like finding out my mom is marrying that cool dude next door that I’ve been palling around with” for another, Amazon’s announcement late last week that it was buying the hugely popular reader review site Goodreads has sent shockwaves through the book industry.

The acquisition, terms of which Amazon.com did not reveal, will close in the second quarter of this year. Goodreads, founded in 2007, has more than 16m members, who have added more than four books per second to their “want to read” shelves over the past 90 days, according to Amazon. The internet retailer’s vice president of Kindle content, Russ Grandinetti, said the two sites “share a passion for reinventing reading”.

“Goodreads has helped change how we discover and discuss books and, with Kindle, Amazon has helped expand reading around the world. In addition, both Amazon and Goodreads have helped thousands of authors reach a wider audience and make a better living at their craft. Together we intend to build many new ways to delight readers and authors alike,” said Grandinetti, announcing the buy. Goodreads co-founder Otis Chandler said the deal with Amazon meant “we’re now going to be able to move faster in bringing the Goodreads experience to millions of readers around the world”, adding on his blog that “we have no plans to change the Goodreads experience and Goodreads will continue to be the wonderful community we all cherish”.

But despite Chandler’s reassurances, many readers and authors reacted negatively to the news. American writers’ organisation the Authors’ Guild called the acquisition a “truly devastating act of vertical integration” which meant that “Amazon’s control of online bookselling approaches the insurmountable”. Bestselling legal thriller author Scott Turow, president of the Guild, said it was “a textbook example of how modern internet monopolies can be built”.

“The key is to eliminate or absorb competitors before they pose a serious threat,” said Turow. “With its 16 million subscribers, Goodreads could easily have become a competing online bookseller, or played a role in directing buyers to a site other than Amazon. Instead, Amazon has scuttled that potential and also squelched what was fast becoming the go-to venue for online reviews, attracting far more attention than Amazon for those seeking independent assessment and discussion of books. As those in advertising have long known, the key to driving sales is controlling information.”

Turow was joined in his concerns by members of Goodreads, many of whom expressed their fears about what the deal would mean on Chandler’s blog. “I have to admit I’m not entirely thrilled by this development,” wrote one of the more level-headed commenters. “As a general rule I like Amazon, but unless they take an entirely 100% hands-off attitude toward Goodreads I find it hard to believe this will be in the best interest for the readers. There are simply too many ways they can interfere with the neutral Goodreads experience and/or try to profit from the strictly volunteer efforts of Goodreads users.”

But not all authors were against the move. Hugh Howey, author of the smash hit dystopian thriller Wool – which took off after he self-published it via Amazon – said it was “like finding out my mom is marrying that cool dude next door that I’ve been palling around with”. While Howey predicted “a lot of hand-wringing over the acquisition”, he said there were “so many ways this can be good for all involved. I’m still trying to think of a way it could suck.”

Read the entire article following the jump.

Image: Amazon.com screen. Courtesy of New York Times.

How to Make Social Networking Even More Annoying

What do you get when you take a social network, add sprinkles of mobile telephony, and throw in a liberal dose of proximity sensing? You get the first “social accessory” that creates a proximity network around you as you move about your daily life. Welcome to the world of yet another social networking technology startup, this one called magnetU. The company’s tagline is:

It was only a matter of time before your social desires became wearable!

magnetU markets a wearable device, about the size of a memory stick, that lets people wear and broadcast their social desires, allowing immediate social gratification anywhere and anytime. When a magnetU user comes into proximity with others having similar social profiles the system notifies the user of a match. A social match is signaled as either “attractive”, “hot” or “red hot”. So, if you want to find a group of anonymous but like minds (or bodies) for some seriously homogeneous partying magnetU is for you.
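magnetU never published how it scores a match, but the mechanics it describes amount to a similarity measure bucketed into tiers. The Python sketch below assumes profiles are simple sets of declared interests compared with Jaccard overlap; the thresholds are pure guesses, not the company’s actual scoring.

def jaccard(a, b):
    """Overlap between two interest sets: 0.0 = nothing shared, 1.0 = identical."""
    return len(a & b) / len(a | b) if a | b else 0.0

def match_tier(mine, theirs):
    """Bucket a similarity score into magnetU-style tiers (assumed thresholds)."""
    score = jaccard(mine, theirs)
    if score >= 0.75:
        return "red hot"
    if score >= 0.50:
        return "hot"
    if score >= 0.25:
        return "attractive"
    return None  # below threshold: no notification at all

me     = {"indie rock", "craft beer", "startups", "running"}
nearby = {"indie rock", "craft beer", "startups", "climbing"}
print(match_tier(me, nearby))  # "hot" under these assumed thresholds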

Time will tell whether this will become successful and pervasive, or whether it will be consigned to the tech start-up waste bin of history. If magnetU becomes as ubiquitous as Facebook then humanity will be entering a disastrous new phase characterized by the following: all social connections become a marketing opportunity; computer algorithms determine when and whom to like (or not) instantly; the content filter bubble extends to every interaction online and in the real world; people become ratings and nodes on a network; advertisers insert themselves into your daily conversations; Big Brother is watching you!

From Technology Review:

MagnetU is a $24 device that broadcasts your social media profile to everyone around you. If anyone else with a MagnetU has a profile that matches yours sufficiently, the device will alert both of you via text and/or an app. Or, as founder Yaron Moradi told Mashable in a video interview, “MagnetU brings Facebook, Linkedin, Twitter and other online social networks to the street.”

Moradi calls this process “wearing your social desires,” and anyone who’s ever attempted online dating can tell you that machines are poor substitutes for your own judgement when it comes to determining with whom you’ll actually want to connect.

You don’t have to be a pundit to come up with a long list of Mr. McCrankypants reasons this is a terrible idea, from the overwhelming volume of distraction we already face to the fact that unless this is a smash hit, the only people MagnetU will connect you to are other desperately lonely geeks.

My primary objection, however, is not that this device or something like it won’t work, but that if it does, it will have the Facebook-like effect of pushing even those who loathe it on principle into participating, just because everyone else is using it and those who don’t will be left out in real life.

“MagnetU lets you wear your social desires… Anything from your social and dating preferences to business matches in conferences,” says Moradi. By which he means this will be very popular with Robert Scoble and anyone who already has Grindr loaded onto his or her phone.

Read the entire article here.

Image: Facebook founder Mark Zuckerberg. Courtesy of Rocketboom.

The Technology of Personalization and the Bubble Syndrome

A decade ago, in another place and era, during my days as director of technology research for a Fortune X company, I tinkered with a cool array of then-new personalization tools. The aim was simple: use some of these emerging technologies to deliver a more customized and personalized user experience for our customers and suppliers. What could be wrong with that? Surely, custom tools and more personalized data could do nothing but improve knowledge and enhance business relationships for all concerned. Our customers would benefit from seeing only the information they asked for, our suppliers would benefit from better analysis and filtered feedback, and we, the corporation in the middle, would benefit from making everyone in our supply chain more efficient and happy. Advertisers would be even happier since, with more focused data, they would be able to deliver messages that were ever more precise and relevant based on personal context.

Fast forward to the present. Customization, or filtering, technologies have indeed helped optimize the supply chain; personalization tools and services have made customer experiences more focused and efficient. In today’s online world it’s so much easier to find, navigate and transact when the supplier at the other end of our browser knows who we are, where we live, what we earn, what we like and dislike, and so on. After all, if a supplier knows my needs, requirements, options, status and even personality, I’m much more likely to only receive information, services or products that fall within the bounds that define “me” in the supplier’s database.

And, therein lies the crux of the issue that has helped me to realize that personalization offers a false promise despite the seemingly obvious benefits to all concerned. The benefits are outweighed by two key issues: erosion of privacy and the bubble syndrome.

Privacy as Commodity

I’ll not dwell too long on the issue of privacy since in this article I’m much more concerned with the personalization bubble. However, as we have increasingly seen in recent times, privacy in all its forms is becoming a scarce, and tradable, commodity. Much of our data is now in the hands of a plethora of suppliers, intermediaries and their partners, ready for continued monetization. Our locations are constantly pinged and polled; our internet browsers note our web surfing habits and preferences; our purchases generate genius suggestions and recommendations to further whet our consumerist desires. Now in digital form, this data is open to legitimate sharing and highly vulnerable to discovery by hackers, phishers, spammers and anyone with technical or financial resources.

Bubble Syndrome

Personalization technologies filter content at various levels, minutely and broadly, both overtly and covertly. For instance, I may explicitly signal my preferences for certain types of clothing deals at my favorite online retailer by answering a quick retail survey or checking a handful of specific preference buttons on a website.

However, my previous online purchases, browsing behaviors, time spent on various online pages, visits to other online retailers and a number of other flags deliver a range of implicit or “covert” information to the same retailer (and others). This helps the retailer filter, customize and personalize what I get to see even before I have made a conscious decision to limit my searches and exposure to information. Clearly, this is not too concerning when my retailer knows I’m male and usually purchase size 32 inch jeans; after all, why would I need to see deals or product information for women’s shoes.

But, this type of covert filtering becomes more worrisome when the data being filtered and personalized is information, news, opinion and comment in all its glorious diversity. Sophisticated media organizations, information portals, aggregators and news services can deliver personalized and filtered information based on your overt and covert personal preferences as well. So, if you subscribe only to a certain type of information based on topic, interest, political persuasion or other dimension, your personalized news services will continue to deliver mostly or only this type of information. And, as I have already described, your online behaviors will deliver additional filtering parameters to these news and information providers so that they may further personalize and narrow your consumption of information.

Increasingly, we will not be aware of what we don’t know. Whether explicitly or not, our use of personalization technologies will have the ability to build a filter, a bubble, around us, which will permit only information that we wish to see or that which our online suppliers wish us to see. We’ll not even get exposed to peripheral and tangential information — that information which lies outside the bubble. This filtering of the rich oceans of diverse information to a mono-dimensional stream will have profound implications for our social and cultural fabric.
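As a rough illustration of how overt and covert signals combine into a single gate on what we see, consider the following Python sketch. The topics, weights and cut-off are invented; real systems are vastly more elaborate, but the structural effect is the same — anything scoring below the line simply never reaches you.

# Minimal sketch of overt + covert personalization. The topics, weights and the
# 0.4 cut-off are illustrative assumptions, not any real provider's model.
explicit_prefs = {"technology": 1.0, "business": 0.6}        # boxes the user ticked
implicit_prefs = {"technology": 0.9, "politics-left": 0.9,   # inferred from clicks,
                  "sport": 0.3}                              # dwell time, purchases

def relevance(item_topics, w_explicit=0.5, w_implicit=0.5):
    """Blend declared and inferred interest into a single score for an item."""
    score = 0.0
    for topic in item_topics:
        score += w_explicit * explicit_prefs.get(topic, 0.0)
        score += w_implicit * implicit_prefs.get(topic, 0.0)
    return score / len(item_topics)

stream = [
    ("New chip architecture announced",   ["technology"]),
    ("Left-leaning op-ed on healthcare",  ["politics-left"]),
    ("Right-leaning op-ed on healthcare", ["politics-right"]),
    ("Local theatre review",              ["culture"]),
]

THRESHOLD = 0.4
for headline, topics in stream:
    status = "shown" if relevance(topics) >= THRESHOLD else "hidden"
    print(f"{status:7s}: {headline}")   # 'hidden' items never reach the reader

Under these assumptions the reader sees the chip story and the op-ed that matches their inferred politics; the opposing op-ed and the tangential theatre review silently vanish — exactly the loss of serendipity and opposing views described above.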

I assume that our increasingly crowded planet will require ever more creativity, insight, tolerance and empathy as we tackle humanity’s many social and political challenges in the future. And, these very seeds of creativity, insight, tolerance and empathy are those that are most at risk from the personalization filter. How are we to be more tolerant of others’ opinions if we are never exposed to them in the first place? How are we to gain insight when disparate knowledge is no longer available for serendipitous discovery? How are we to become more creative if we are less exposed to ideas outside of our normal sphere, our bubble?

For some ideas on how to punch a few holes in your online filter bubble read Eli Pariser’s practical guide, here.

Filter Bubble image courtesy of TechCrunch.