Category Archives: Technica

The Emperor Has Transparent Clothes

Hot from the Technosensual exhibition in Vienna, Austria, come clothes that can be made transparent or opaque, and clothes that can detect a wearer telling a lie. While the value of the former may seem dubious outside the home, the latter invention should be a mandatory garment for all politicians and bankers. Or, for the less adventurous millinery fashionistas, how about a hat that reacts to ambient radio waves?

All these innovations seem to have stepped straight from the pages of a Philip K. Dick science fiction novel, courtesy of the confluence of new technologies and innovative textile design.

[div class=attrib]From New Scientist:[end-div]

WHAT if the world could see your innermost emotions? For the wearer of the Bubelle dress created by Philips Design, it’s not simply a thought experiment.

Aptly nicknamed “the blushing dress”, the futuristic garment has an inner layer fitted with sensors that measure heart rate, respiration and galvanic skin response. The measurements are fed to 18 miniature projectors that shine corresponding colours, shapes, and intensities onto an outer layer of fabric – turning the dress into something like a giant, high-tech mood ring. As a natural blusher, I feel like I already know what it would be like to wear this dress – like going emotionally, instead of physically, naked.
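
The pipeline described here (biometric sensors driving colour projection) is easy to caricature in code. Below is a purely illustrative Python sketch, not the Philips design: the sensor ranges and the mapping from readings to a colour are all assumptions.

```python
# Illustrative only: map three biometric readings to an RGB colour,
# roughly in the spirit of the "blushing dress" described above.
# The value ranges and the mapping are assumptions, not Philips Design's.

def normalize(value, low, high):
    """Clamp a reading into [0, 1] given an assumed typical range."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def mood_colour(heart_rate_bpm, respiration_rate_bpm, skin_conductance_us):
    """Return an (R, G, B) tuple in 0-255 driven by the three readings."""
    arousal = normalize(heart_rate_bpm, 50, 120)          # faster pulse -> redder
    calm = 1.0 - normalize(respiration_rate_bpm, 8, 30)   # slower breathing -> greener
    excitement = normalize(skin_conductance_us, 1, 20)    # higher GSR -> bluer
    return (int(255 * arousal), int(255 * calm), int(255 * excitement))

if __name__ == "__main__":
    print(mood_colour(62, 12, 3))    # a resting wearer: mostly green
    print(mood_colour(110, 26, 18))  # a flustered wearer: red and blue, blushing territory
```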

The Bubelle dress is just one of the technologically enhanced items of clothing on show at the Technosensual exhibition in Vienna, Austria, which celebrates the overlapping worlds of technology, fashion and design.

Other garments are even more revealing. Holy Dress, created by Melissa Coleman and Leonie Smelt, is a wearable lie detector – that also metes out punishment. Using voice-stress analysis, the garment is designed to catch the wearer out in a lie, whereupon it twinkles conspicuously and gives her a small shock. Though the garment is beautiful, a slim white dress under a geometric structure of copper tubes, I’d rather try it on a politician than myself. “You can become a martyr for truth,” says Coleman. To make it, she hacked a 1990s lie detector and added a novelty shocking pen.

Laying the wearer bare in a less metaphorical way, a dress that alternates between opaque and transparent is also on show. Designed by the exhibition’s curator, Anouk Wipprecht with interactive design laboratory Studio Roosegaarde, Intimacy 2.0 was made using conductive liquid crystal foil. When a very low electrical current is applied to the foil, the liquid crystals stand to attention in parallel, making the material transparent. Wipprecht expects the next iteration could be available commercially. It’s time to take the dresses “out of the museum and get them on the streets”, she says.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Taiknam Hat, a hat sensitive to ambient radio waves. Courtesy of Ricardo O’Nascimento, Ebru Kurbak, Fabiana Shizue / New Scientist.[end-div]

Beware, Big Telecom is Watching You

Facebook trawls your profile, status updates and friends to target ads more effectively. It also allows third parties, for a fee, to mine mountains of aggregated data for juicy analyses. Many online companies do the same. However, some are taking this to a whole new and very personal level.

Here’s an example from Germany. Politician Malte Spitz gathered six months of his personal geolocation data from his mobile phone company. He then combined this data with his online activity, such as Twitter updates, blog entries and website visits. The interactive results seen here, plotted over time and space, show the detailed extent to which an individual’s life is being tracked and recorded.
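
As a rough illustration of the kind of fusion behind the visualization, the sketch below interleaves time-stamped location records with time-stamped online activity into one chronological log. The field names and sample data are hypothetical; the real dataset covered six months of cell-tower records.

```python
# Hypothetical sketch: interleave geolocation records and online activity
# into a single timeline, the raw material for a "play button" visualisation.
from datetime import datetime

locations = [  # (timestamp, latitude, longitude) from the phone company
    (datetime(2009, 8, 31, 8, 15), 52.5200, 13.4050),   # Berlin
    (datetime(2009, 8, 31, 13, 40), 49.5897, 11.0078),  # Erlangen
]
activity = [   # (timestamp, description) gathered from public sources
    (datetime(2009, 8, 31, 9, 2), "tweet: boarding the train south"),
    (datetime(2009, 8, 31, 14, 5), "blog post published"),
]

# Merge both sources and sort chronologically.
timeline = sorted(
    [(ts, "location", f"{lat:.4f}, {lon:.4f}") for ts, lat, lon in locations] +
    [(ts, "activity", text) for ts, text in activity]
)

for ts, kind, detail in timeline:
    print(f"{ts:%Y-%m-%d %H:%M}  {kind:8}  {detail}")
```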

[div class=attrib]From Zeit Online:[end-div]

By pushing the play button, you will set off on a trip through Malte Spitz’s life. The speed controller allows you to adjust how fast you travel, the pause button will let you stop at interesting points. In addition, a calendar at the bottom shows when he was in a particular location and can be used to jump to a specific time period. Each column corresponds to one day.

Not surprisingly, Spitz had to sue his phone company, Deutsche Telekom, to gain access to his own phone data.

[div class=attrib]From TED:[end-div]

On August 31, 2009, politician Malte Spitz traveled from Berlin to Erlangen, sending 29 text messages as he traveled. On November 5, 2009, he rocked out to U2 at the Brandenburg Gate. On January 10, 2010, he made 10 outgoing phone calls while on a trip to Dusseldorf, and spent 22 hours, 53 minutes and 57 seconds of the day connected to the internet.

How do we know all this? By looking at a detailed, interactive timeline of Spitz’s life, created using information obtained from his cell phone company, Deutsche Telekom, between September 2009 and February 2010.

In an impassioned talk given at TEDGlobal 2012, Spitz, a member of Germany’s Green Party, recalls his multiple-year quest to receive this data from his phone company. And he explains why he decided to make this shockingly precise log into public information in the newspaper Die Zeit – to sound a warning bell of sorts.

“If you have access to this information, you can see what your society is doing,” says Spitz. “If you have access to this information, you can control your country.”

[div class=attrib]Read the entire article after the jump.[end-div]

How Do Startup Companies Succeed?

A view from Esther Dyson, one of the world’s leading digital technology entrepreneurs. She has served as an early investor in numerous startups, including Flickr, del.icio.us, ZEDO, and Medspace, and is currently focused on startups in medical technology and aviation.

[div class=attrib]From Project Syndicate:[end-div]

The most popular stories often seem to end at the beginning. “…and so Juan and Alice got married.” Did they actually live happily ever after? “He was elected President.” But how did the country do under his rule? “The entrepreneur got her startup funding.” But did the company succeed?

Let’s consider that last one. Specifically, what happens to entrepreneurs once they get their money? Everywhere I go – and I have been in Moscow, Libreville (Gabon), and Dublin in the last few weeks – smart people ask how to get companies through the next phase of growth. How can we scale entrepreneurship to the point that it has a measurable and meaningful impact on the economy?

The real impact of both Microsoft and Google is not on their shareholders, or even on the people that they employ directly, but on the millions of people whom they have made more productive. That argues for companies that solve real problems, rather than for yet another photo-sharing app for rich, appealing (to advertisers) people with time on their hands.

It turns out that money is rarely enough – not just that there is not enough of it, but that entrepreneurs need something else. They need advice, contacts, customers, and employees immersed in a culture of effectiveness to succeed. But they also have to create something of real value to have meaningful economic impact in the long term.

The easy, increasingly popular answer is accelerators, incubators, camps, weekends – a host of locations and events to foster the development of startups. But these are just buildings and conferences unless they include people who can help with the software – contacts, customers, and culture. The people in charge, from NGOs to government officials, have great ideas about structures – tax policy, official financing, etc. – while the entrepreneurs themselves are too busy running their companies to find out about these things.

But this week in Dublin, I found what we need: not policies or theories, but actual living examples. Not far from the fancy hotel at which I was staying, and across from Google’s modish Irish offices, sits a squat old warehouse with a new sign: Startupbootcamp. You enter through a side door, into a cavern full of sawdust and cheap furniture (plus a pool table and a bar, of course).

What makes this place interesting is its sponsor: venerable old IBM. The mission of Startupbootcamp Europe is not to celebrate entrepreneurs, or even to educate them, but to help them scale up to meaningful businesses. Their new products can use IBM’s and other mentors’ contacts with the much broader world, whether for strategic marketing alliances, the power of an IBM endorsement, or, ultimately, an acquisition.

I was invited by Martin Kelly, who represents IBM’s venture arm in Ireland. He introduced me to the manager of the place, Eoghan Jennings, and a bunch of seasoned executives.

There was a three-time entrepreneur, Conor Hanley, co-founder of BiancaMed (recently sold to Resmed), who now has a sleep-monitoring tool and an exciting distribution deal with a large company he can’t yet mention; Jim Joyce, a former sales executive for Schering Plough who is now running Point of Care, which helps clinicians to help patients to manage their own care after they leave hospital; and Johnny Walker, a radiologist whose company operates scanners in the field and interprets them through a network of radiologists worldwide. Currently, Walker’s company, Global Diagnostics, is focused on pre-natal care, but give him time.

These guys are not the “startups”; they are the mentors, carefully solicited by Kelly from within the tightly knit Irish business community. He knew exactly what he was looking for: “In Ireland, we have people from lots of large companies. Joyce, for example, can put a startup in touch with senior management from virtually any pharma company around the world. Hanley knows manufacturing and tech partners. Walker understands how to operate in rural conditions.”

According to Jennings, a former chief financial officer of Xing, Europe’s leading social network, “We spent years trying to persuade people that they had a problem we could solve; now I am working with companies solving problems that people know they have.”  And that usually involves more than an Internet solution; it requires distribution channels, production facilities, market education, and the like. Startupbootcamp’s next batch of startups, not coincidentally, will be in the health-care sector.

Each of the mentors can help a startup to go global. Precisely because the Irish market is so small, it’s a good place to find people who know how to expand globally. In Ireland right now, as in so many countries, many large companies are laying off people with experience. Not all of them have the makings of an entrepreneur. But most of them have skills worth sharing, whether it’s how to run a sales meeting, oversee a development project, or manage a database of customers.

[div class=attrib]Read the entire article after the jump.[end-div]

Extending Moore’s Law Through Evolution

[div class=attrib]From Smithsonian:[end-div]

In 1965, Intel co-founder Gordon Moore made a prediction about computing that has held true to this day. Moore’s law, as it came to be known, forecasted that the number of transistors we’d be able to cram onto a circuit—and thereby, the effective processing speed of our computers—would double roughly every two years. Remarkably enough, this rule has been accurate for nearly 50 years, but most experts now predict that this growth will slow by the end of the decade.
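
Moore’s observation is just repeated doubling, so its consequences are easy to tabulate. Here is a minimal sketch; the starting count (2,300 transistors in 1971) is the oft-quoted figure for an early microprocessor, and the projection assumes a strict two-year doubling rather than actual product history.

```python
# Moore's law as arithmetic: transistor count doubles roughly every two years.

def transistors(start_count, start_year, year, doubling_period_years=2.0):
    """Projected transistor count under a strict doubling rule."""
    return start_count * 2 ** ((year - start_year) / doubling_period_years)

if __name__ == "__main__":
    # Ten doublings over twenty years is a factor of 1024.
    for year in (1971, 1981, 1991, 2001, 2011):
        print(year, f"{transistors(2300, 1971, year):,.0f}")
```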

Someday, though, a radical new approach to creating silicon semiconductors might enable this rate to continue—and could even accelerate it. As detailed in a study published in this month’s Proceedings of the National Academy of Sciences, a team of researchers from the University of California at Santa Barbara and elsewhere have harnessed the process of evolution to produce enzymes that create novel semiconductor structures.

“It’s like natural selection, but here, it’s artificial selection,” Daniel Morse, professor emeritus at UCSB and a co-author of the study, said in an interview. After taking an enzyme found in marine sponges and mutating it into many various forms, “we’ve selected the one in a million mutant DNAs capable of making a semiconductor.”

In an earlier study, Morse and other members of the research team had discovered silicatein—a natural enzyme used by marine sponges to construct their silica skeletons. The mineral, as it happens, also serves as the building block of semiconductor computer chips. “We then asked the question—could we genetically engineer the structure of the enzyme to make it possible to produce other minerals and semiconductors not normally produced by living organisms?” Morse said.

To make this possible, the researchers isolated and made many copies of the part of the sponge’s DNA that codes for silicatein, then intentionally introduced millions of different mutations in the DNA. By chance, some of these would likely lead to mutant forms of silicatein that would produce different semiconductors, rather than silica—a process that mirrors natural selection, albeit on a much shorter time scale, and directed by human choice rather than survival of the fittest.
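
The mutate-and-select cycle the researchers ran in the lab can be caricatured in silico. The sketch below is a generic directed-evolution loop over character strings with a placeholder scoring function; it illustrates the selection logic only, not the actual enzyme chemistry or the team’s screening method.

```python
# Toy directed evolution: copy, mutate, score, keep the best, repeat.
# The "fitness" here is a stand-in; in the real work, fitness was whether
# a mutant silicatein produced the desired mineral.
import random

ALPHABET = "ACGT"

def mutate(seq, rate=0.02):
    """Return a copy of seq with random point mutations."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else base
                   for base in seq)

def fitness(seq, target):
    """Placeholder score: similarity to an arbitrary target sequence."""
    return sum(a == b for a, b in zip(seq, target))

def evolve(start, target, population=1000, generations=50):
    """Keep the best variant from each generation of random mutants."""
    best = start
    for _ in range(generations):
        variants = [mutate(best) for _ in range(population)]
        best = max(variants + [best], key=lambda s: fitness(s, target))
    return best

if __name__ == "__main__":
    random.seed(0)
    start = "".join(random.choice(ALPHABET) for _ in range(60))
    target = "".join(random.choice(ALPHABET) for _ in range(60))
    print("final score:", fitness(evolve(start, target), target), "of 60")
```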

[div class=attrib]Read the entire article after the jump.[end-div]

La Macchina: The Machine as Art, for Caffeine Addicts

You may not know their names, but Desiderio Pavoni and Luigi Bezzera are to coffee what Steve Jobs and Steve Wozniak are to computers. Modern-day espresso machines owe it all to the innovative design and business savvy of this early 20th-century Italian duo.

[div class=attrib]From Smithsonian:[end-div]

For many coffee drinkers, espresso is coffee. It is the purest distillation of the coffee bean, the literal essence of a bean. In another sense, it is also the first instant coffee. Before espresso, it could take up to five minutes –five minutes!– for a cup of coffee to brew. But what exactly is espresso and how did it come to dominate our morning routines? Although many people are familiar with espresso these days thanks to the Starbucksification of the world, there is often still some confusion over what it actually is – largely due to “espresso roasts” available on supermarket shelves everywhere. First, and most importantly, espresso is not a roasting method. It is neither a bean nor a blend. It is a method of preparation. More specifically, it is a preparation method in which highly-pressurized hot water is forced over coffee grounds to produce a very concentrated coffee drink with a deep, robust flavor. While there is no standardized process for pulling a shot of espresso, Italian coffeemaker Illy’s definition of the authentic espresso seems as good a measure as any:

A jet of hot water at 88°-93°C (190°-200°F) passes under a pressure of nine or more atmospheres through a seven-gram (.25 oz) cake-like layer of ground and tamped coffee. Done right, the result is a concentrate of not more than 30 ml (one oz) of pure sensorial pleasure.

For those of you who, like me, are more than a few years out of science class, nine atmospheres of pressure is the equivalent to nine times the amount of pressure normally exerted by the earth’s atmosphere. As you might be able to tell from the precision of Illy’s description, good espresso is good chemistry. It’s all about precision and consistency and finding the perfect balance between grind, temperature, and pressure. Espresso happens at the molecular level. This is why technology has been such an important part of the historical development of espresso and a key to the ongoing search for the perfect shot. While espresso was never designed per se, the machines –or Macchina– that make our cappuccinos and lattes have a history that stretches back more than a century.
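
Illy’s definition quoted above is effectively a parameter window, which makes it easy to express as a small check. The thresholds below come straight from that quote; the function itself (and the small tolerance on the dose) is only an illustration, not an industry standard.

```python
# Check a shot's parameters against the Illy definition quoted above:
# 88-93 C water, nine or more atmospheres, a ~7 g dose, and at most 30 ml of yield.

def is_authentic_espresso(water_temp_c, pressure_atm, dose_g, yield_ml):
    """Return True if the shot falls inside Illy's stated window."""
    return (88 <= water_temp_c <= 93
            and pressure_atm >= 9
            and abs(dose_g - 7) <= 0.5   # "seven-gram" dose; tolerance is an assumption
            and yield_ml <= 30)

if __name__ == "__main__":
    print(is_authentic_espresso(91, 9, 7.0, 28))   # True
    print(is_authentic_espresso(96, 9, 7.0, 28))   # False: water too hot
```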

In the 19th century, coffee was a huge business in Europe with cafes flourishing across the continent. But coffee brewing was a slow process and, as is still the case today, customers often had to wait for their brew. Seeing an opportunity, inventors across Europe began to explore ways of using steam machines to reduce brewing time – this was, after all, the age of steam. Though there were surely innumerable patents and prototypes, the invention of the machine and the method that would lead to espresso is usually attributed to Angelo Moriondo of Turin, Italy, who was granted a patent in 1884 for “new steam machinery for the economic and instantaneous confection of coffee beverage.” The machine consisted of a large boiler, heated to 1.5 bars of pressure, that pushed water through a large bed of coffee grounds on demand, with a second boiler producing steam that would flash the bed of coffee and complete the brew. Though Moriondo’s invention was the first coffee machine to use both water and steam, it was purely a bulk brewer created for the Turin General Exposition. Not much more is known about Moriondo, due in large part to what we might think of today as a branding failure. There were never any “Moriondo” machines, there are no verifiable machines still in existence, and there aren’t even photographs of his work. With the exception of his patent, Moriondo has been largely lost to history. The two men who would improve on Moriondo’s design to produce a single-serving espresso would not make that same mistake.

Luigi Bezzera and Desiderio Pavoni were the Steve Wozniak and Steve Jobs of espresso. Milanese manufacturer and “maker of liquors” Luigi Bezzera had the know-how. He invented single-shot espresso in the early years of the 20th century while looking for a method of quickly brewing coffee directly into the cup. He made several improvements to Moriondo’s machine, introducing the portafilter, multiple brewheads, and many other innovations still associated with espresso machines today. In Bezzera’s original patent, a large boiler with built-in burner chambers filled with water was heated until it pushed water and steam through a tamped puck of ground coffee. The mechanism through which the heated water passed also functioned as a heat radiator, lowering the temperature of the water from 250°F in the boiler to the ideal brewing temperature of approximately 195°F (90°C). Et voila, espresso. For the first time, a cup of coffee was brewed to order in a matter of seconds. But Bezzera’s machine was heated over an open flame, which made it difficult to control pressure and temperature, and nearly impossible to produce a consistent shot. And consistency is key in the world of espresso. Bezzera designed and built a few prototypes of his machine but his beverage remained largely unappreciated because he didn’t have any money to expand his business or any idea how to market the machine. But he knew someone who did. Enter Desiderio Pavoni.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A 1910 Ideale espresso machine. Courtesy of Smithsonian.[end-div]

Keeping Secrets in the Age of Technology

[div class=attrib]From the Guardian:[end-div]

With the benefit of hindsight, life as I knew it came to an end in late 1994, round Seal’s house. We used to live round the corner from each other and if he was in between supermodels I’d pop over to watch a bit of Formula 1 on his pop star-sized flat-screen telly. I was probably on the sofa reading Vogue (we had that in common, albeit for different reasons) while he was “mucking about” on his computer (then the actual technical term for anything non-work-related, vis-à-vis computers), when he said something like: “Kate, have a look at this thing called the World Wide Web. It’s going to be massive!”

I can’t remember what we looked at then, at the tail-end of what I now nostalgically refer to as “The Tipp-Ex Years” – maybe The Well, accessed by Web Crawler – but whatever it was, it didn’t do it for me: “Information dual carriageway!” I said (trust me, this passed for witty in the 1990s). “Fancy a pizza?”

So there we are: Seal introduced me to the interweb. And although I remain a bit of a petrol-head and (nothing if not brand-loyal) own an iPad, an iPhone and two Macs, I am still basically rubbish at “modern”. Pre-Leveson, when I was writing a novel involving a phone-hacking scandal, my only concern was whether or not I’d come up with a plot that was: a) vaguely plausible and/or interesting, and b) technically possible. (A very nice man from Apple assured me that it was.)

I would gladly have used semaphore, telegrams or parchment scrolls delivered by magic owls to get the point across. Which is that ever since people started chiselling cuneiform on to big stones they’ve been writing things that will at some point almost certainly be misread and/or misinterpreted by someone else. But the speed of modern technology has made the problem rather more immediate. Confusing your public tweets with your Direct Messages and begging your young lover to take-me-now-cos-im-gagging-4-u? They didn’t have to worry about that when they were issuing decrees at Memphis on a nice bit of granodiorite.

These days the mis-sent (or indeed misread) text is still a relatively intimate intimation of an affair, while the notorious “reply all” email is the stuff of tired stand-up comedy. The boundary-less tweet is relatively new – and therefore still entertaining – territory, as evidenced most recently by American model Melissa Stetten, who, sitting on a plane next to a (married) soap actor called Brian Presley, tweeted as he appeared to hit on her.

Whenever and wherever words are written, somebody, somewhere will want to read them. And if those words are not meant to be read they very often will be – usually by the “wrong” people. A 2010 poll announced that six in 10 women would admit to regularly snooping on their partner’s phone, Twitter, or Facebook, although history doesn’t record whether the other four in 10 were then subjected to lie-detector tests.

Our compelling, self-sabotaging desire to snoop is usually informed by… well, if not paranoia, exactly, then insecurity, which in turn is more revealing about us than the words we find. If we seek out bad stuff – in a partner’s text, an ex’s Facebook status or best friend’s Twitter timeline – we will surely find it. And of course we don’t even have to make much effort to find the stuff we probably oughtn’t. Employers now routinely snoop on staff, and while this says more about the paranoid dynamic between boss classes and foot soldiers than we’d like, I have little sympathy for the employee who tweets their hangover status with one hand while phoning in “sick” with the other.

Take Google Maps: the more information we are given, the more we feel we’ve been gifted a licence to snoop. It’s the kind of thing we might be protesting about on the streets of Westminster were we not too busy invading our own privacy, as per the recent tweet-spat between Mr and Mrs Ben Goldsmith.

Technology feeds an increasing yet non-specific social unease – and that uneasiness inevitably trickles down to our more intimate relationships. For example, not long ago, I was blown out via text for a lunch date with a friend (“arrrgh, urgent deadline! SO SOZ!”), whose “urgent deadline” (their Twitter timeline helpfully revealed) turned out to involve lunch with someone else.

Did I like my friend any less when I found this out? Well yes, a tiny bit – until I acknowledged that I’ve done something similar 100 times but was “cleverer” at covering my tracks. Would it have been easier for my friend to tell me the truth? Arguably. Should I ever have looked at their Twitter timeline? Well, I had sought to confirm my suspicion that they weren’t telling the truth, so given that my paranoia gremlin was in charge it was no wonder I didn’t like what it found.

It is, of course, the paranoia gremlin that is in charge when we snoop – or are snooped upon – by partners, while “trust” is far more easily undermined than it has ever been. The randomly stumbled-across text (except they never are, are they?) is our generation’s lipstick-on-the-collar. And while Foursquare may say that your partner is in the pub, is that enough to stop you checking their Twitter/Facebook/emails/texts?

[div class=attrib]Read the entire article after the jump.[end-div]

You as a Data Strip Mine: What Facebook Knows

China, India, Facebook. With its 900 million member-citizens, Facebook is the third largest country on the planet, ranked by population. This country has some benefits: no taxes, freedom to join and/or leave, and, of course, freedom to assemble and a fair degree of free speech.

However, Facebook is no democracy. In fact, its data privacy policies and personal data mining might well put it in the same league as the Stalinist Soviet Union or Cold War East Germany.

A fascinating article by Tom Simonite excerpted below sheds light on the data collection and data mining initiatives underway or planned at Facebook.

[div class=attrib]From Technology Review:[end-div]

If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and prefers to talk about advancing social science rather than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)
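
The article does not describe the actual anonymization protocol, so the sketch below is only a generic illustration of the underlying idea: both datasets replace identifiers with salted hashes before being joined, so that turnout among the nudged group can be counted without either side handling raw names. Everything here (the field names, the salt handling, the join itself) is an assumption, not Facebook’s method.

```python
# Generic salted-hash join, purely illustrative; not the researchers' protocol.
import hashlib

SALT = b"shared-secret-salt"  # hypothetical: agreed privately, never published

def pseudonym(name):
    """Replace an identifier with a salted hash so raw names never meet."""
    return hashlib.sha256(SALT + name.lower().encode()).hexdigest()

nudged_users = {pseudonym(n) for n in ["Alice Adams", "Bob Brown", "Cara Cole"]}
voter_log = {pseudonym(n) for n in ["Bob Brown", "Cara Cole", "Dan Diaz"]}

# Intersect the pseudonymized sets to estimate turnout among nudged users.
turnout_among_nudged = len(nudged_users & voter_log) / len(nudged_users)
print(f"turnout among nudged users: {turnout_among_nudged:.0%}")
```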

This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of thejournal.ie / abracapocus_pocuscadabra (Flickr).[end-div]

The SpeechJammer and Other Innovations to Come

The mind boggles at the possible situations when a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children.

[div class=attrib]From the New York Times:[end-div]

When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.”
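
Delayed auditory feedback itself is just a delay line: whatever the microphone hears is replayed a few hundred milliseconds later. Here is a minimal offline sketch over an array of samples (real-time capture and playback are omitted, and the tone stands in for recorded speech).

```python
# Delayed auditory feedback as a simple delay line.
# Offline sketch: shift a buffer of audio samples by ~200 ms.
import numpy as np

def delayed_feedback(samples, sample_rate_hz, delay_s=0.2):
    """Return the same audio delayed by delay_s, padded with leading silence."""
    delay_samples = int(delay_s * sample_rate_hz)
    return np.concatenate([np.zeros(delay_samples), samples])[:len(samples)]

if __name__ == "__main__":
    rate = 16_000
    t = np.arange(rate) / rate               # one second of "speech"
    voice = np.sin(2 * np.pi * 220 * t)      # placeholder tone, not a real recording
    echoed = delayed_feedback(voice, rate)
    # In a live device this delayed signal would be played back at the speaker
    # while they are still talking, which is what disrupts speech.
    print(echoed[:5], echoed[3200:3205])     # silence first, then the delayed signal
```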

[div class=attrib]Read the entire list of 32 weird and wonderful innovations after the jump.[end-div]

[div class=attrib]Graphic courtesy of Chris Nosenzo / New York Times.[end-div]

Ray Bradbury’s Real World Dystopia

Ray Bradbury’s death on June 5 reminds us of his uncanny gift for inventing a future that is much like our modern day reality.

Bradbury’s body of work beginning in the early 1940s introduced us to ATMs, wall mounted flat screen TVs, ear-piece radios, online social networks, self-driving cars, and electronic surveillance. Bravely and presciently he also warned us of technologically induced cultural amnesia, social isolation, indifference to violence, and dumbed-down 24/7 mass media.

An especially thoughtful opinion from author Tim Kreider on Bradbury’s life as a “misanthropic humanist”.

[div class=attrib]From the New York Times:[end-div]

IF you’d wanted to know which way the world was headed in the mid-20th century, you wouldn’t have found much indication in any of the day’s literary prizewinners. You’d have been better advised to consult a book from a marginal genre with a cover illustration of a stricken figure made of newsprint catching fire.

Prescience is not the measure of a science-fiction author’s success — we don’t value the work of H. G. Wells because he foresaw the atomic bomb or Arthur C. Clarke for inventing the communications satellite — but it is worth pausing, on the occasion of Ray Bradbury’s death, to notice how uncannily accurate was his vision of the numb, cruel future we now inhabit.

Mr. Bradbury’s most famous novel, “Fahrenheit 451,” features wall-size television screens that are the centerpieces of “parlors” where people spend their evenings watching interactive soaps and vicious slapstick, live police chases and true-crime dramatizations that invite viewers to help catch the criminals. People wear “seashell” transistor radios that fit into their ears. Note the perversion of quaint terms like “parlor” and “seashell,” harking back to bygone days and vanished places, where people might visit with their neighbors or listen for the sound of the sea in a chambered nautilus.

Mr. Bradbury didn’t just extrapolate the evolution of gadgetry; he foresaw how it would stunt and deform our psyches. “It’s easy to say the wrong thing on telephones; the telephone changes your meaning on you,” says the protagonist of the prophetic short story “The Murderer.” “First thing you know, you’ve made an enemy.”

Anyone who’s had his intended tone flattened out or irony deleted by e-mail and had to explain himself knows what he means. The character complains that he’s relentlessly pestered with calls from friends and employers, salesmen and pollsters, people calling simply because they can. Mr. Bradbury’s vision of “tired commuters with their wrist radios, talking to their wives, saying, ‘Now I’m at Forty-third, now I’m at Forty-fourth, here I am at Forty-ninth, now turning at Sixty-first” has gone from science-fiction satire to dreary realism.

“It was all so enchanting at first,” muses our protagonist. “They were almost toys, to be played with, but the people got too involved, went too far, and got wrapped up in a pattern of social behavior and couldn’t get out, couldn’t admit they were in, even.”

Most of all, Mr. Bradbury knew how the future would feel: louder, faster, stupider, meaner, increasingly inane and violent. Collective cultural amnesia, anhedonia, isolation. The hysterical censoriousness of political correctness. Teenagers killing one another for kicks. Grown-ups reading comic books. A postliterate populace. “I remember the newspapers dying like huge moths,” says the fire captain in “Fahrenheit,” written in 1953. “No one wanted them back. No one missed them.” Civilization drowned out and obliterated by electronic chatter. The book’s protagonist, Guy Montag, secretly trying to memorize the Book of Ecclesiastes on a train, finally leaps up screaming, maddened by an incessant jingle for “Denham’s Dentifrice.” A man is arrested for walking on a residential street. Everyone locked indoors at night, immersed in the social lives of imaginary friends and families on TV, while the government bombs someone on the other side of the planet. Does any of this sound familiar?

The hero of “The Murderer” finally goes on a rampage and smashes all the yammering, blatting devices around him, expressing remorse only over the Insinkerator — “a practical device indeed,” he mourns, “which never said a word.” It’s often been remarked that for a science-fiction writer, Mr. Bradbury was something of a Luddite — anti-technology, anti-modern, even anti-intellectual. (“Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers,” he challenged a Wired magazine interviewer, and swore he would “outcreate” every one.)

But it was more complicated than that; his objections were not so much reactionary or political as they were aesthetic. He hated ugliness, noise and vulgarity. He opposed the kind of technology that deadened imagination, the modernity that would trash the past, the kind of intellectualism that tried to centrifuge out awe and beauty. He famously did not care to drive or fly, but he was a passionate proponent of space travel, not because of its practical benefits but because he saw it as the great spiritual endeavor of the age, our generation’s cathedral building, a bid for immortality among the stars.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Technorati.[end-div]

Killer Ideas

It’s possible that most households on the planet have one. It’s equally possible that most humans have used one — excepting members of PETA (People for the Ethical Treatment of Animals) and other tolerant souls.

United States Patent 640,790 covers a simple and effective technology, invented by Robert Montgomery. The patent for a “Fly Killer”, or fly swatter as it is now more commonly known, was issued in 1900.

Sometimes the simplest design is the most pervasive and effective.

[div class=attrib]From the New York Times:[end-div]

The first modern fly-destruction device was invented in 1900 by Robert R. Montgomery, an entrepreneur based in Decatur, Ill. Montgomery was issued Patent No. 640,790 for the Fly-Killer, a “cheap device of unusual elasticity and durability” made of wire netting, “preferably oblong,” attached to a handle. The material of the handle remained unspecified, but the netting was crucial: it reduced wind drag, giving the swatter a “whiplike swing.” By 1901, Montgomery’s invention was advertised in Ladies’ Home Journal as a tool that “kills without crushing” and “soils nothing,” unlike, say, a rolled-up newspaper might.

Montgomery sold the patent rights in 1903 to an industrialist named John L. Bennett, who later invented the beer can. Bennett improved the design — stitching around the edge of the netting to keep it from fraying — but left the name.

The various fly-killing implements on the market at the time got the name “swatter” from Samuel Crumbine, secretary of the Kansas Board of Health. In 1905, he titled one of his fly bulletins, which warned of flyborne diseases, “Swat the Fly,” after a chant he heard at a ballgame. Crumbine took an invention known as the Fly Bat — a screen attached to a yardstick — and renamed it the Fly Swatter, which became the generic term we use today.

Fly-killing technology has advanced to include fly zappers (electrified tennis rackets that roast flies on contact) and fly guns (spinning discs that mulch insects). But there will always be less techy solutions: flypaper (sticky tape that traps the bugs), Fly Bottles (glass containers lined with an attractive liquid substance) and the Venus’ flytrap (a plant that eats insects).

During a 2009 CNBC interview, President Obama killed a fly with his bare hands, triumphantly exclaiming, “I got the sucker!” PETA was less gleeful, calling it a public “execution” and sending the White House a device that traps flies so that they may be set free.

But for the rest of us, as the product blogger Sean Byrne notes, “it’s hard to beat the good old-fashioned fly swatter.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Goodgrips.[end-div]

Men are From LinkedIn, Women are From Pinterest

No surprise. Women and men use online social networks differently. A new study of online behavior by researchers in Vienna, Austria, shows that the sexes organize their networks very differently and for different reasons.

[div class=attrib]From Technology Review:[end-div]

One of the interesting insights that social networks offer is the difference between male and female behaviour.

In the past, behavioural differences have been hard to measure. Experiments could only be done on limited numbers of individuals and even then, the process of measurement often distorted people’s behaviour.

That’s all changed with the advent of massive online participation in gaming, professional and friendship networks. For the first time, it has become possible to quantify exactly how the genders differ in their approach to things like risk and communication.

Gender-specific studies are surprisingly rare, however. Nevertheless, a growing body of evidence is emerging that social networks reflect many of the social and evolutionary differences that we’ve long suspected.

Earlier this year, for example, we looked at a remarkable study of a mobile phone network that demonstrated the different reproductive strategies that men and women employ throughout their lives, as revealed by how often they call friends, family and potential mates.

Today, Michael Szell and Stefan Thurner at the Medical University of Vienna in Austria say they’ve found significant differences in the way men and women manage their social networks in an online game called Pardus, which has over 300,000 players.

In this game, players explore various solar systems in a virtual universe. On the way, they can mark other players as friends or enemies, exchange messages, and gain wealth by trading or doing battle, but they can also be killed.

The interesting thing about online games is that almost every action of every player is recorded, mostly without the players being consciously aware of this. That means measurement bias is minimal.

The networks of friends and enemies that are set up also differ in an important way from those on social networking sites such as Facebook. That’s because players can neither see nor influence other players’ networks. This prevents the kind of clustering and herding behaviour that sometimes dominates other social networks.

Szell and Thurner say the data reveals clear and significant differences between men and women in Pardus.

For example, men and women interact with the opposite sex differently. “Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females,” say Szell and Thurner.

Women are also significantly more risk averse than men as measured by the amount of fighting they engage in and their likelihood of dying.

They are also more likely to be friends with each other than men.

These results are more or less as expected. More surprising is the finding that women tend to be more wealthy than men, probably because they engage more in economic than destructive behaviour.
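
A finding like “males reciprocate friendship requests from females faster than vice versa” reduces to grouping request-to-acceptance delays by the genders involved. The sketch below shows that computation over a hypothetical event log; the schema and numbers are invented for illustration, not Pardus data.

```python
# Hypothetical sketch: median reciprocation delay by (sender gender, receiver gender).
from collections import defaultdict
from statistics import median

# Each record: (sender_gender, receiver_gender, hours until the receiver reciprocated)
requests = [
    ("f", "m", 2.0), ("f", "m", 5.5), ("f", "m", 1.0),
    ("m", "f", 12.0), ("m", "f", 30.0), ("m", "f", 8.0),
    ("m", "m", 20.0), ("f", "f", 6.0),
]

delays = defaultdict(list)
for sender, receiver, hours in requests:
    delays[(sender, receiver)].append(hours)

for pair, hours in sorted(delays.items()):
    print(pair, "median hours to reciprocate:", median(hours))
```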

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of InformationWeek.[end-div]

Facebook: What Next?

Yawn…

The Facebook IPO (insider profit opportunity rather than Initial Public Offering) finally came and went. Much like its 900 million members, Facebook executives managed to garner enough fleeting “likes” from its Wall Street road show to ensure temporary short-term hype and big returns for key insiders. But, beneath the hyperbole lies a basic question that goes to the heart of its stratospheric valuation: Does Facebook have a long-term strategy beyond the rapidly deflating ad revenue model?

[div class=attrib]From Technology Review:[end-div]

Facebook is not only on course to go bust, but will take the rest of the ad-supported Web with it.

Given its vast cash reserves and the glacial pace of business reckonings, that will sound hyperbolic. But that doesn’t mean it isn’t true.

At the heart of the Internet business is one of the great business fallacies of our time: that the Web, with all its targeting abilities, can be a more efficient, and hence more profitable, advertising medium than traditional media. Facebook, with its 900 million users, valuation of around $100 billion, and the bulk of its business in traditional display advertising, is now at the heart of the heart of the fallacy.

The daily and stubborn reality for everybody building businesses on the strength of Web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the Web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.

At the same time, network technology allows advertisers to more precisely locate and assemble audiences outside of branded channels. Instead of having to go to CNN for your audience, a generic CNN-like audience can be assembled outside CNN’s walls and without the CNN-brand markup. This has resulted in the now famous and cruelly accurate formulation that $10 of offline advertising becomes $1 online.

I don’t know anyone in the ad-Web business who isn’t engaged in a relentless, demoralizing, no-exit operation to realign costs with falling per-user revenues, or who isn’t manically inflating traffic to compensate for ever-lower per-user value.

Facebook, however, has convinced large numbers of otherwise intelligent people that the magic of the medium will reinvent advertising in a heretofore unimaginably profitable way, or that the company will create something new that isn’t advertising, which will produce even more wonderful profits. But at a forward price-to-earnings ratio of 56 (as of the close of trading on May 21), these innovations will have to be something like alchemy to make the company worth its sticker price. For comparison, Google trades at a forward P/E ratio of 12. (To gauge how much faith investors have that Google, Facebook, and other Web companies will extract value from their users, see our recent chart.)

Facebook currently derives 82 percent of its revenue from advertising. Most of that is the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles. Some is the kind of sponsorship that promises users further social relationships with companies: a kind of marketing that General Motors just announced it would no longer buy.

Facebook’s answer to its critics is: pay no attention to the carping. Sure, grunt-like advertising produces the overwhelming portion of our $4 billion in revenues; and, yes, on a per-user basis, these revenues are in pretty constant decline, but this stuff is really not what we have in mind. Just wait.

It’s quite a juxtaposition of realities. On the one hand, Facebook is mired in the same relentless downward pressure of falling per-user revenues as the rest of Web-based media. The company makes a pitiful and shrinking $5 per customer per year, which puts it somewhat ahead of the Huffington Post and somewhat behind the New York Times’ digital business. (Here’s the heartbreaking truth about the difference between new media and old: even in the New York Times’ declining traditional business, a subscriber is still worth more than $1,000 a year.) Facebook’s business only grows on the unsustainable basis that it can add new customers at a faster rate than the value of individual customers declines. It is peddling as fast as it can. And the present scenario gets much worse as its users increasingly interact with the social service on mobile devices, because it is vastly harder, on a small screen, to sell ads and profitably monetize users.
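
The “$5 per customer” figure is straightforward arithmetic on the numbers quoted above, and the same division shows why user growth has to outrun per-user decline. A quick back-of-the-envelope check (figures from the article, rounding mine):

```python
# Rough check of the per-user revenue figure quoted above.
annual_revenue_usd = 4_000_000_000   # "our $4 billion in revenues"
users = 900_000_000                  # "900 million users"
ad_share = 0.82                      # "82 percent of its revenue from advertising"

revenue_per_user = annual_revenue_usd / users
print(f"revenue per user per year: ${revenue_per_user:.2f}")          # ~ $4.44, i.e. roughly $5
print(f"ad revenue per user per year: ${ad_share * revenue_per_user:.2f}")
```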

On the other hand, Facebook is, everyone has come to agree, profoundly different from the Web. First of all, it exerts a new level of hegemonic control over users’ experiences. And it has its vast scale: 900 million, soon a billion, eventually two billion (one of the problems with the logic of constant growth at this scale and speed, of course, is that eventually it runs out of humans with computers or smart phones). And then it is social. Facebook has, in some yet-to-be-defined way, redefined something. Relationships? Media? Communications? Communities? Something big, anyway.

The subtext—an overt subtext—of the popular account of Facebook is that the network has a proprietary claim and special insight into social behavior. For enterprises and advertising agencies, it is therefore the bridge to new modes of human connection.

Expressed so baldly, this account is hardly different from what was claimed for the most aggressively boosted companies during the dot-com boom. But there is, in fact, one company that created and harnessed a transformation in behavior and business: Google. Facebook could be, or in many people’s eyes should be, something similar. Lost in such analysis is the failure to describe the application that will drive revenues.

[div class=attrib]Read the entire article after the jump.[end-div]

Quantum Computer Leap

The practical science behind quantum computers continues to make exciting progress. Quantum computers promise, in theory, immense gains in power and speed through the use of atomic scale parallel processing.

[div class=attrib]From the Observer:[end-div]

The reality of the universe in which we live is an outrage to common sense. Over the past 100 years, scientists have been forced to abandon a theory in which the stuff of the universe constitutes a single, concrete reality in exchange for one in which a single particle can be in two (or more) places at the same time. This is the universe as revealed by the laws of quantum physics and it is a model we are forced to accept – we have been battered into it by the weight of the scientific evidence. Without it, we would not have discovered and exploited the tiny switches present in their billions on every microchip, in every mobile phone and computer around the world. The modern world is built using quantum physics: through its technological applications in medicine, global communications and scientific computing it has shaped the world in which we live.

Although modern computing relies on the fidelity of quantum physics, the action of those tiny switches remains firmly in the domain of everyday logic. Each switch can be either “on” or “off”, and computer programs are implemented by controlling the flow of electricity through a network of wires and switches: the electricity flows through open switches and is blocked by closed switches. The result is a plethora of extremely useful devices that process information in a fantastic variety of ways.

Modern “classical” computers seem to have almost limitless potential – there is so much we can do with them. But there is an awful lot we cannot do with them too. There are problems in science that are of tremendous importance but which we have no hope of solving, not ever, using classical computers. The trouble is that some problems require so much information processing that there simply aren’t enough atoms in the universe to build a switch-based computer to solve them. This isn’t an esoteric matter of mere academic interest – classical computers can’t ever hope to model the behaviour of some systems that contain even just a few tens of atoms. This is a serious obstacle to those who are trying to understand the way molecules behave or how certain materials work – without the possibility to build computer models they are hampered in their efforts. One example is the field of high-temperature superconductivity. Certain materials are able to conduct electricity “for free” at surprisingly high temperatures (still pretty cold, though, well below -100 degrees Celsius). The trouble is, nobody really knows how they work and that seriously hinders any attempt to make a commercially viable technology. The difficulty in simulating physical systems of this type arises whenever quantum effects are playing an important role and that is the clue we need to identify a possible way to make progress.
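
The “not enough atoms in the universe” claim comes from the exponential size of a quantum state: a system of n two-level particles needs 2^n complex amplitudes to describe exactly. A short sketch of how quickly that overwhelms classical memory, assuming 16 bytes per complex amplitude (double-precision real and imaginary parts):

```python
# Memory needed to store the full state vector of n two-level quantum particles.

BYTES_PER_AMPLITUDE = 16  # assumption: one double-precision complex number

def state_vector_bytes(n_particles):
    """Exact state-vector storage grows as 2**n."""
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

if __name__ == "__main__":
    for n in (10, 20, 30, 40, 50, 60):
        gib = state_vector_bytes(n) / 2**30
        print(f"{n:2d} particles: {gib:,.1f} GiB")
    # By roughly 50 particles the state vector no longer fits in any existing machine,
    # which is why exactly simulating "even just a few tens of atoms" is hopeless classically.
```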

It was American physicist Richard Feynman who, in 1981, first recognised that nature evidently does not need to employ vast computing resources to manufacture complicated quantum systems. That means if we can mimic nature then we might be able to simulate these systems without the prohibitive computational cost. Simulating nature is already done every day in science labs around the world – simulations allow scientists to play around in ways that cannot be realised in an experiment, either because the experiment would be too difficult or expensive or even impossible. Feynman’s insight was that simulations that inherently include quantum physics from the outset have the potential to tackle those otherwise impossible problems.

Quantum simulations have, in the past year, really taken off. The ability to delicately manipulate and measure systems containing just a few atoms is a requirement of any attempt at quantum simulation and it is thanks to recent technical advances that this is now becoming possible. Most recently, in an article published in the journal Nature last week, physicists from the US, Australia and South Africa have teamed up to build a device capable of simulating a particular type of magnetism that is of interest to those who are studying high-temperature superconductivity. Their simulator is esoteric. It is a small pancake-like layer less than 1 millimetre across made from 300 beryllium atoms that is delicately disturbed using laser beams… and it paves the way for future studies into quantum magnetism that will be impossible using a classical computer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A crystal of beryllium ions confined by a large magnetic field at the US National Institute of Standards and Technology’s quantum simulator. The outermost electron of each ion is a quantum bit (qubit), and here they are fluorescing blue, which indicates they are all in the same state. Photograph courtesy of Britton/NIST, Observer.[end-div]

Nanotech: Bane and Boon

An insightful opinion on the benefits and perils of nanotechnology from essayist and naturalist, Diane Ackerman.

[div class=attrib]From the New York Times:[end-div]

“I SING the body electric,” Walt Whitman wrote in 1855, inspired by the novelty of useful electricity, which he would live to see power streetlights and telephones, locomotives and dynamos. In “Leaves of Grass,” his ecstatic epic poem of American life, he depicted himself as a live wire, a relay station for all the voices of the earth, natural or invented, human or mineral. “I have instant conductors all over me,” he wrote. “They seize every object and lead it harmlessly through me… My flesh and blood playing out lightning to strike what is hardly different from myself.”

Electricity equipped Whitman and other poets with a scintillation of metaphors. Like inspiration, it was a lightning flash. Like prophetic insight, it illuminated the darkness. Like sex, it tingled the flesh. Like life, it energized raw matter. Whitman didn’t know that our cells really do generate electricity, that the heart’s pacemaker relies on such signals and that billions of axons in the brain create their own electrical charge (equivalent to about a 60-watt bulb). A force of nature himself, he admired the range and raw power of electricity.

Deeply as he believed the vow “I sing the body electric” — a line sure to become a winning trademark — I suspect one of nanotechnology’s recent breakthroughs would have stunned him. A team at the University of Exeter in England has invented the lightest, supplest, most diaphanous material ever made for conducting electricity, a dream textile named GraphExeter, which could revolutionize electronics by making it fashionable to wear your computer, cellphone and MP3 player. Only one atom thick, it’s an ideal fabric for street clothes and couture lines alike. You could start your laptop by plugging it into your jeans, recharge your cellphone by plugging it into your T-shirt. Then, not only would your cells sizzle with electricity, but even your clothing would chime in.

I don’t know if a fully electric suit would upset flight electronics, pacemakers, airport security monitors or the brain’s cellular dispatches. If you wore an electric coat in a lightning storm, would the hairs on the back of your neck stand up? Would you be more likely to fall prey to a lightning strike? How long will it be before a jokester plays the sound of one-hand-clapping from a mitten? How long before late-night hosts riff about electric undies? Will people tethered to recharging poles haunt the airport waiting rooms? Will it become hip to wear flashing neon ads, quotes and designs — maybe a name in a luminous tattoo?

Another recent marvel of nanotechnology promises to alter daily life, too, but this one, despite its silver lining, strikes me as wickedly dangerous, though probably inevitable. As a result, it’s bound to inspire labyrinthine laws and a welter of patents and to ignite bioethical debates.

Nano-engineers have developed a way to coat both hard surfaces (like hospital bed rails, doorknobs and furniture) and also soft surfaces (sheets, gowns and curtains) with microscopic nanoparticles of silver, an element known to kill microbes. You’d think the new nano-coating would offer a silver bullet, be a godsend to patients stricken with hospital-acquired sepsis and pneumonia, and to doctors fighting what has become a nightmare of antibiotic-resistant micro-organisms that can kill tens of thousands of people a year.

It does, and it is. That’s the problem. It’s too effective. Most micro-organisms are harmless, many are beneficial, but some are absolutely essential for the environment and human life. Bacteria were the first life forms on the planet, and we owe them everything. Our biochemistry is interwoven with theirs. Swarms of bacteria blanket us on the outside, other swarms colonize our insides. Kill all the gut bacteria, essential for breaking down large molecules, and digestion slows.

Friendly bacteria aid the immune system. They release biotin, folic acid and vitamin K; help eliminate heavy metals from the body; calm inflammation; and prevent cancers. During childbirth, a baby picks up beneficial bacteria in the birth canal. Nitrogen-fixing bacteria ensure healthy plants and ecosystems. We use bacteria to decontaminate sewage and also to create protein-rich foods like kefir and yogurt.

How tempting for nanotechnology companies, capitalizing on our fears and fetishes, to engineer superbly effective nanosilver microbe-killers, deodorants and sanitizers of all sorts for home and industry.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Technorati.[end-div]

Google: Please Don’t Be Evil

Google has been variously praised and derided for its corporate mantra, “Don’t Be Evil”. For those who like to believe that Google has good intentions, recent events strain these assumptions. The company was found to have been snooping on and collecting data from personal Wi-Fi routers. Is this the case of a lone wolf or a corporate strategy?

[div class=attrib]From Slate:[end-div]

Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would assuage your fears about the company that manages so much of our personal data?

These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.

While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use the system to speed up their location-monitoring systems. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
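
As an aside, the positioning trick itself is straightforward. The sketch below is a toy illustration, not Google's method, and the router locations are made up: it estimates a device's position as the centroid of the routers it can hear, weighted by signal strength.

    # Hypothetical database: router MAC address -> (latitude, longitude).
    # Real systems hold hundreds of millions of entries; these three are invented.
    ROUTER_DB = {
        "00:11:22:33:44:55": (48.2082, 16.3738),
        "66:77:88:99:aa:bb": (48.2090, 16.3720),
        "cc:dd:ee:ff:00:11": (48.2075, 16.3750),
    }

    def estimate_position(scan):
        """scan: list of (mac, rssi_dbm) pairs observed by the device."""
        lat = lon = total = 0.0
        for mac, rssi in scan:
            if mac not in ROUTER_DB:
                continue
            weight = 10 ** (rssi / 10.0)  # dBm -> relative linear power, so nearer routers count more
            r_lat, r_lon = ROUTER_DB[mac]
            lat, lon, total = lat + weight * r_lat, lon + weight * r_lon, total + weight
        return (lat / total, lon / total) if total else None

    print(estimate_position([("00:11:22:33:44:55", -40), ("66:77:88:99:aa:bb", -70)]))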

But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was within the vicinity, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.

Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.

Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?

Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.

Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Fastcompany.[end-div]

Your Tween Online

Many parents with children in the pre-teenage years probably have a containment policy restricting them from participating in adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.

[div class=attrib]From the WSJ:[end-div]

Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.

Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.

It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.

Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.

Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.

As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.

Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.

Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.

“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.

Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.

[div class=attrib]Read the entire article after the jump.[end-div]

You Are What You Share

The old maxim used to go something like this: “you are what you eat”. Well, in the early 21st century it has been usurped by “you are what you share online (knowingly or not)”.

[div class=attrib]From the Wall Street Journal:[end-div]

Not so long ago, there was a familiar product called software. It was sold in stores, in shrink-wrapped boxes. When you bought it, all that you gave away was your credit card number or a stack of bills.

Now there are “apps”—stylish, discrete chunks of software that live online or in your smartphone. To “buy” an app, all you have to do is click a button. Sometimes they cost a few dollars, but many apps are free, at least in monetary terms. You often pay in another way. Apps are gateways, and when you buy an app, there is a strong chance that you are supplying its developers with one of the most coveted commodities in today’s economy: personal data.

Some of the most widely used apps on Facebook—the games, quizzes and sharing services that define the social-networking site and give it such appeal—are gathering volumes of personal information.

A Wall Street Journal examination of 100 of the most popular Facebook apps found that some seek the email addresses, current location and sexual preference, among other details, not only of app users but also of their Facebook friends. One Yahoo service powered by Facebook requests access to a person’s religious and political leanings as a condition for using it. The popular Skype service for making online phone calls seeks the Facebook photos and birthdays of its users and their friends.

Yahoo and Skype say that they seek the information to customize their services for users and that they are committed to protecting privacy. “Data that is shared with Yahoo is managed carefully,” a Yahoo spokeswoman said.

The Journal also tested its own app, “WSJ Social,” which seeks data about users’ basic profile information and email and requests the ability to post an update when a user reads an article. A Journal spokeswoman says that the company asks only for information required to make the app work.

This appetite for personal data reflects a fundamental truth about Facebook and, by extension, the Internet economy as a whole: Facebook provides a free service that users pay for, in effect, by providing details about their lives, friendships, interests and activities. Facebook, in turn, uses that trove of information to attract advertisers, app makers and other business opportunities.

Up until a few years ago, such vast and easily accessible repositories of personal information were all but nonexistent. Their advent is driving a profound debate over the definition of privacy in an era when most people now carry information-transmitting devices with them all the time.

Capitalizing on personal data is a lucrative enterprise. Facebook is in the midst of planning for an initial public offering of its stock in May that could value the young company at more than $100 billion on the Nasdaq Stock Market.

Facebook requires apps to ask permission before accessing a user’s personal details. However, a user’s friends aren’t notified if information about them is used by a friend’s app. An examination of the apps’ activities also suggests that Facebook occasionally isn’t enforcing its own rules on data privacy.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Facebook is watching and selling you. Courtesy of Daily Mail.[end-div]

First, There Was Bell Labs

The results of innovation surround us. Innovation nourishes our food supply and helps us heal when we are sick; innovation lubricates our businesses, underlies our products, and facilitates our interactions. Innovation stokes our forward momentum.

But, before many of our recent technological marvels could come into being, some fundamental innovations were necessary. These were the technical precursors and catalysts that paved the way for the iPad and the smartphone, GPS, search engines and microwave ovens. The building blocks that made much of this possible included the transistor, the laser, the Unix operating system and the communications satellite. And all of these came from one place, Bell Labs, during a remarkably productive period from the 1920s to the 1980s.

In his new book, “The Idea Factory”, Jon Gertner explores how and why so much innovation sprang from the visionary leaders, engineers and scientists of Bell Labs.

[div class=attrib]From the New York Times:[end-div]

In today’s world of Apple, Google and Facebook, the name may not ring any bells for most readers, but for decades — from the 1920s through the 1980s — Bell Labs, the research and development wing of AT&T, was the most innovative scientific organization in the world. As Jon Gertner argues in his riveting new book, “The Idea Factory,” it was where the future was invented.

Indeed, Bell Labs was behind many of the innovations that have come to define modern life, including the transistor (the building block of all digital products), the laser, the silicon solar cell and the computer operating system called Unix (which would serve as the basis for a host of other computer languages). Bell Labs developed the first communications satellites, the first cellular telephone systems and the first fiber-optic cable systems.

The Bell Labs scientist Claude Elwood Shannon effectively founded the field of information theory, which would revolutionize thinking about communications; other Bell Labs researchers helped push the boundaries of physics, chemistry and mathematics, while defining new industrial processes like quality control.

In “The Idea Factory,” Mr. Gertner — an editor at Fast Company magazine and a writer for The New York Times Magazine — not only gives us spirited portraits of the scientists behind Bell Labs’ phenomenal success, but he also looks at the reasons that research organization became such a fount of innovation, laying the groundwork for the networked world we now live in.

It’s clear from this volume that the visionary leadership of the researcher turned executive Mervin Kelly played a large role in Bell Labs’ sense of mission and its ability to institutionalize the process of innovation so effectively. Kelly believed that an “institute of creative technology” needed a critical mass of talented scientists — whom he housed in a single building, where physicists, chemists, mathematicians and engineers were encouraged to exchange ideas — and he gave his researchers the time to pursue their own investigations “sometimes without concrete goals, for years on end.”

That freedom, of course, was predicated on the steady stream of revenue provided (in the years before the AT&T monopoly was broken up in the early 1980s) by the monthly bills paid by telephone subscribers, which allowed Bell Labs to function “much like a national laboratory.” Unlike, say, many Silicon Valley companies today, which need to keep an eye on quarterly reports, Bell Labs in its heyday could patiently search out what Mr. Gertner calls “new and fundamental ideas,” while using its immense engineering staff to “develop and perfect those ideas” — creating new products, then making them cheaper, more efficient and more durable.

Given the evolution of the digital world we inhabit today, Kelly’s prescience is stunning in retrospect. “He had predicted grand vistas for the postwar electronics industry even before the transistor,” Mr. Gertner writes. “He had also insisted that basic scientific research could translate into astounding computer and military applications, as well as miracles within the communications systems — ‘a telephone system of the future,’ as he had said in 1951, ‘much more like the biological systems of man’s brain and nervous system.’ ”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Jack A. Morton (left) and J. R. Wilson at Bell Laboratories, circa 1948. Courtesy of Computer History Museum.[end-div]

Language Translation With a Cool Twist

The last couple of decades have shown a remarkable improvement in the ability of software to translate the written word from one language to another. Yahoo Babel Fish and Google Translate are good examples. Also, voice recognition systems, such as those you encounter every day when trying desperately to connect with a real customer service rep, have taken great leaps forward. Apple’s Siri now leads the pack.

But what do you get if you combine translation and voice recognition technology? Well, you get a new service that translates the spoken word from your native language into a second one. And here’s the neat twist: the system translates into the second language while keeping a voice like yours. The technology springs from Microsoft’s Research division in Redmond, WA.
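
A rough structural sketch of such a pipeline is below. None of it is Microsoft's actual code or API; the function names and the one-entry lookup-table "translator" are placeholders. But it shows the data flow: recognize the speech, translate the text, then synthesize it with a voice model trained on the original speaker.

    def recognize_speech(audio):
        """Placeholder recognizer: pretend the audio decodes to this sentence."""
        return "with the help of this system, now i can speak mandarin"

    def translate(text, target_lang):
        """Placeholder translator backed by a one-entry lookup table."""
        table = {("with the help of this system, now i can speak mandarin", "zh"): "借助这个系统，我现在会说普通话"}
        return table.get((text, target_lang), text)

    def synthesize(text, voice_model):
        """Placeholder TTS: a real system renders audio with the named speaker's timbre."""
        return f"[audio in {voice_model} voice] {text}"

    def speech_to_speech(audio, target_lang, voice_model):
        return synthesize(translate(recognize_speech(audio), target_lang), voice_model)

    print(speech_to_speech(audio=b"...", target_lang="zh", voice_model="craig_mundie"))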

[div class=attrib]From Technology Review:[end-div]

Researchers at Microsoft have made software that can learn the sound of your voice, and then use it to speak a language that you don’t. The system could be used to make language tutoring software more personal, or to make tools for travelers.

In a demonstration at Microsoft’s Redmond, Washington, campus on Tuesday, Microsoft research scientist Frank Soong showed how his software could read out text in Spanish using the voice of his boss, Rick Rashid, who leads Microsoft’s research efforts. In a second demonstration, Soong used his software to grant Craig Mundie, Microsoft’s chief research and strategy officer, the ability to speak Mandarin.

Hear Rick Rashid’s voice in his native language and then translated into several other languages (the original article includes audio samples in English, Italian and Mandarin).

In English, a synthetic version of Mundie’s voice welcomed the audience to an open day held by Microsoft Research, concluding, “With the help of this system, now I can speak Mandarin.” The phrase was repeated in Mandarin Chinese, in what was still recognizably Mundie’s voice.

“We will be able to do quite a few scenario applications,” said Soong, who created the system with colleagues at Microsoft Research Asia, the company’s second-largest research lab, in Beijing, China.

[div class=attrib]Read the entire article here.[end-div]

Turing Test 2.0 – Intelligent Behavior Free of Bigotry

One wonders what the world would look like today had Alan Turing been criminally prosecuted and jailed by the British government for his homosexuality before the Second World War, rather than in 1952. Would the British have been able to break German Naval ciphers encoded by their Enigma machine? Would the German Navy have prevailed, and would the Nazis have gone on to conquer the British Isles?

Actually, Turing was not imprisoned in 1952 — rather, he “accepted” chemical castration at the hands of the British government rather than face jail. He died two years later of self-inflicted cyanide poisoning, just short of his 42nd birthday.

Now, a hundred years on from his birth, historians are reflecting on his short life and his lasting legacy. Turing is widely regarded as having founded the discipline of artificial intelligence, and he made significant contributions to computing. Yet most of his achievements went unrecognized for many decades or were given short shrift, perhaps due to his confidential work for the government or, more likely, because of his persona non grata status.

In 2009 the British government offered Turing an apology. And, of course, we now have the Turing Test. (The Turing Test is a test of a machine’s ability to exhibit intelligent behavior.) So, one hundred years after Turing’s birth, to honor his life we should launch a new and improved Turing Test. Let’s call it the Turing Test 2.0.

This test would measure a human’s ability to exhibit intelligent behavior free of bigotry.

[div class=attrib]From Nature:[end-div]

Alan Turing is always in the news — for his place in science, but also for his 1952 conviction for having gay sex (illegal in Britain until 1967) and his suicide two years later. Former Prime Minister Gordon Brown issued an apology to Turing in 2009, and a campaign for a ‘pardon’ was rebuffed earlier this month.

Must you be a great figure to merit a ‘pardon’ for being gay? If so, how great? Is it enough to break the Enigma ciphers used by Nazi Germany in the Second World War? Or do you need to invent the computer as well, with artificial intelligence as a bonus? Is that great enough?

Turing’s reputation has gone from zero to hero, but defining what he achieved is not simple. Is it correct to credit Turing with the computer? To historians who focus on the engineering of early machines, Turing is an also-ran. Today’s scientists know the maxim ‘publish or perish’, and Turing just did not publish enough about computers. He quickly became perishable goods. His major published papers on computability (in 1936) and artificial intelligence (in 1950) are some of the most cited in the scientific literature, but they leave a yawning gap. His extensive computer plans of 1946, 1947 and 1948 were left as unpublished reports. He never put into scientific journals the simple claim that he had worked out how to turn his 1936 “universal machine” into the practical electronic computer of 1945. Turing missed those first opportunities to explain the theory and strategy of programming, and instead got trapped in the technicalities of primitive storage mechanisms.

He could have caught up after 1949, had he used his time at the University of Manchester, UK, to write a definitive account of the theory and practice of computing. Instead, he founded a new field in mathematical biology and left other people to record the landscape of computers. They painted him out of it. The first book on computers to be published in Britain, Faster than Thought (Pitman, 1953), offered this derisive definition of Turing’s theoretical contribution:

“Türing machine. In 1936 Dr. Turing wrote a paper on the design and limitations of computing machines. For this reason they are sometimes known by his name. The umlaut is an unearned and undesirable addition, due, presumably, to an impression that anything so incomprehensible must be Teutonic.”

That a book on computers should describe the theory of computing as incomprehensible neatly illustrates the climate Turing had to endure. He did make a brief contribution to the book, buried in chapter 26, in which he summarized computability and the universal machine. However, his low-key account never conveyed that these central concepts were his own, or that he had planned the computer revolution.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alan Mathison Turing at the time of his election to a Fellowship of the Royal Society. Photograph was taken at the Elliott & Fry studio on 29 March 1951.[end-div]

Your Guide to Online Morality

By most estimates Facebook has around 800 million registered users. This means that its policies governing what is or is not appropriate user content should bear detailed scrutiny. So, a look at Facebook’s recently publicized guidelines for sexual and violent content shows a somewhat peculiar view of morality. It’s a view that some characterize as typically American: prudish about sex, but with a blind eye towards violence.

[div class=attrib]From the Guardian:[end-div]

Facebook bans images of breastfeeding if nipples are exposed – but allows “graphic images” of animals if shown “in the context of food processing or hunting as it occurs in nature”. Equally, pictures of bodily fluids – except semen – are allowed as long as no human is included in the picture; but “deep flesh wounds” and “crushed heads, limbs” are OK (“as long as no insides are showing”), as are images of people using marijuana but not those of “drunk or unconscious” people.

The strange world of Facebook’s image and post approval system has been laid bare by a document leaked from the outsourcing company oDesk to the Gawker website, which indicates that the sometimes arbitrary nature of picture and post approval actually has a meticulous – if faintly gore-friendly and nipple-unfriendly – approach.

For the giant social network, which has 800 million users worldwide and recently set out plans for a stock market flotation which could value it at up to $100bn (£63bn), it is a glimpse of its inner workings – and odd prejudices about sex – that emphasise its American origins.

Facebook has previously faced an outcry from breastfeeding mothers over its treatment of images showing them with their babies. The issue has rumbled on, and now seems to have been embedded in its “Abuse Standards Violations”, which states that banned items include “breastfeeding photos showing other nudity, or nipple clearly exposed”. It also bans “naked private parts” including “female nipple bulges and naked butt cracks” – though “male nipples are OK”.

The guidelines, which have been set out in full, depict a world where sex is banned but gore is acceptable. Obvious sexual activity, even if “naked parts” are hidden, people “using the bathroom”, and “sexual fetishes in any form” are all also banned. The company also bans slurs or racial comments “of any kind” and “support for organisations and people primarily known for violence”. Also banned is anyone who shows “approval, delight, involvement etc in animal or human torture”.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Guardian / Photograph: Dominic Lipinski/PA.[end-div]

Travel Photo Clean-up

[tube]flNomXIIWr4[/tube]

We’ve all experienced this phenomenon on vacation: you’re at a beautiful location with a significant other, friends or kids; the backdrop is idyllic and the subjects are exquisitely posed; you need to preserve and share this perfect moment with a photograph, so you get ready to snap the shutter. Then, at that very moment, an oblivious tourist, unperturbed locals or a stray goat wanders into the frame. Too late, the picture is ruined, and it’s getting dark, so there’s no time to recreate that perfect scene! Oh well, you’ll still be able to talk about the scene’s unspoiled perfection when you get home.

But now, there’s an app for that.

[div class=attrib]From New Scientist:[end-div]


It’s the same scene played out at tourist sites the world over: You’re trying to take a picture of a partner or friend in front of some monument, statue or building and other tourists keep striding unwittingly – or so they say – into the frame.

Now a new smartphone app promises to let you edit out these unwelcome intruders, leaving just your loved one and a beautiful view intact.

Remove, developed by Swedish photography firm Scalado, takes a burst of shots of your scene. It then identifies the objects which are moving – based on their relative position in each frame. These objects are then highlighted and you can delete the ones you don’t want and keep the ones you do, leaving you with a nice, clean composite shot.
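
Scalado hasn't published how Remove works, but a common way to get a similar result is a per-pixel median over the burst: anything that stays put survives, anything that moves gets voted out. A minimal sketch with NumPy and OpenCV, assuming the frames are already aligned and sitting in a hypothetical burst/ folder:

    import glob

    import cv2
    import numpy as np

    # "burst/*.jpg" is a made-up folder of frames shot from the same spot.
    frames = [cv2.imread(path) for path in sorted(glob.glob("burst/*.jpg"))]
    stack = np.stack(frames, axis=0)                   # shape: (num_frames, H, W, 3)
    clean = np.median(stack, axis=0).astype(np.uint8)  # per-pixel median over the burst
    cv2.imwrite("clean.jpg", clean)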

Loud party of schoolchildren stepping in front of the Trevi Fountain? Select and delete. Unwanted, drunken stag party making the Charles Bridge in Prague look untidy? See you later.

Remove uses similar technology to the firm’s Rewind app, launched last year, which merges composite group shots to create the best single image.

The app is just a prototype at the moment – as is the video above – but Scalado will demonstrate a full version at the 2012 Mobile World Congress in Barcelona later this month.

Barcode as Art

The ubiquitous and utilitarian barcode turns 60 years old. Now its upstart and more fashionable sibling, the QR, or quick response, code, seems to be stealing the show by finding its way from the product on the grocery store shelf to the world of art and design.

[div class=attrib]From the New York Times:[end-div]

It’s usually cause for celebration when a product turns 60. How could it have survived for so long, unless it is genuinely wanted or needed, or maybe both?

One of the sexagenarians this year, the bar code, has more reasons than most to celebrate. Having been a familiar part of daily life for decades, those black vertical lines have taken on a new role of telling ethically aware consumers whether their prospective purchases are ecologically and socially responsible. Not bad for a 60-year-old.

But a new rival has surfaced. A younger version of the bar code, the QR, or “Quick Response” code, threatens to become as ubiquitous as the original, and is usurping some of its functions. Both symbols are black and white, geometric in style and rectangular in shape, but there the similarities end, because each one has a dramatically different impact on the visual landscape, aesthetically and symbolically.

First, the bar code. The idea of embedding information about a product, including its price, in a visual code that could be decrypted quickly and accurately at supermarket checkouts was hatched in the late 1940s by Bernard Silver and Norman Joseph Woodland, graduate students at the Drexel Institute of Technology in Philadelphia. Their idea was that retailers would benefit from speeding up the checkout process, enabling them to employ fewer staff, and from reducing the expense and inconvenience caused when employees keyed in the wrong prices.

At 8.01 a.m. on June 26, 1974, a packet of Wrigley’s Juicy Fruit chewing gum was sold for 67 cents at a Marsh Supermarket in Troy, Ohio — the first commercial transaction to use a bar code. More than five billion bar-coded products are now scanned at checkouts worldwide every day. Some of those codes will also have been vetted on the cellphones of shoppers who wanted to check the product’s impact on their health and the environment, and the ethical credentials of the manufacturer. They do so by photographing the bar code with their phones and using an application to access information about the product on ethical rating Web sites like GoodGuide.
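
As a small aside, the code on that packet of gum was a 12-digit UPC-A symbol, and its last digit is a check digit computed with simple arithmetic so that scanners can catch misreads. Here is the calculation; the example digits are made up, not Wrigley's actual code:

    def upc_check_digit(digits11):
        """digits11: string of the first 11 digits of a UPC-A code."""
        odd = sum(int(d) for d in digits11[0::2])   # 1st, 3rd, 5th, ... digits
        even = sum(int(d) for d in digits11[1::2])  # 2nd, 4th, 6th, ... digits
        return (10 - (3 * odd + even) % 10) % 10

    print(upc_check_digit("01234567890"))  # prints 5, so the full code is 012345678905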

As for the QR code, it was developed in the mid-1990s by the Japanese carmaker Toyota to track components during the manufacturing process. A mosaic of tiny black squares on a white background, the QR code has greater storage capacity than the original bar code. Soon, Japanese cellphone makers were adding QR readers to camera phones, and people were using them to download text, films and Web links from QR codes on magazines, newspapers, billboards and packaging. The mosaic codes then appeared in other countries and are now common all over the world. Anyone who has downloaded a QR reading application can decrypt them with a camera phone.
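
Generating one of these mosaics yourself is trivial nowadays. A two-line sketch, assuming the open-source Python qrcode package (my assumption, not something from the article):

    import qrcode  # pip install qrcode[pil]

    img = qrcode.make("https://example.com")  # any short text or URL
    img.save("qr.png")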


[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Morality and Machines

Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains – especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.

Colin Allen for the Opinionator column reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

[div class=attrib]From the New York Times:[end-div]

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Asimov Foundation / Wikipedia.[end-div]

The Internet of Things

The term “Internet of Things” was first coined in 1999 by Kevin Ashton. It refers to the notion whereby physical objects of all kinds are equipped with small identifying devices and connected to a network. In essence: everything connected to everything, anytime, anywhere, by anyone. One of the potential benefits is that objects could be continuously tracked, inventoried and monitored.
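
For a concrete flavor of what "connected to a network" means in practice, here is a toy sketch of a "thing" reporting its status over MQTT, a protocol commonly used for this sort of telemetry. It assumes the paho-mqtt package (1.x-style client API shown); the broker address, topic and device name are made up.

    import json
    import time

    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    client = mqtt.Client()
    client.connect("broker.example.com", 1883)  # made-up broker address

    while True:
        reading = {"device": "fridge-42", "temp_c": 4.1, "ts": time.time()}
        client.publish("home/kitchen/fridge", json.dumps(reading))
        time.sleep(60)  # report once a minute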

[div class=attrib]From the New York Times:[end-div]

THE Internet likes you, really likes you. It offers you so much, just a mouse click or finger tap away. Go Christmas shopping, find restaurants, locate partying friends, tell the world what you’re up to. Some of the finest minds in computer science, working at start-ups and big companies, are obsessed with tracking your online habits to offer targeted ads and coupons, just for you.

But now — nothing personal, mind you — the Internet is growing up and lifting its gaze to the wider world. To be sure, the economy of Internet self-gratification is thriving. Web start-ups for the consumer market still sprout at a torrid pace. And young corporate stars seeking to cash in for billions by selling shares to the public are consumer services — the online game company Zynga last week, and the social network giant Facebook, whose stock offering is scheduled for next year.

As this is happening, though, the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.

The concept has been around for years, sometimes called the Internet of Things or the Industrial Internet. Yet it takes time for the economics and engineering to catch up with the predictions. And that moment is upon us.

“We’re going to put the digital ‘smarts’ into everything,” said Edward D. Lazowska, a computer scientist at the University of Washington. These abundant smart devices, Dr. Lazowska added, will “interact intelligently with people and with the physical world.”

The role of sensors — once costly and clunky, now inexpensive and tiny — was described this month in an essay in The New York Times by Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology; he said the ultimate goal was “the sensor-aware planetary computer.”

That may sound like blue-sky futurism, but evidence shows that the vision is beginning to be realized on the ground, in recent investments, products and services, coming from large industrial and technology corporations and some ambitious start-ups.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Internet of Things. Courtesy of Cisco.[end-div]

What Did You Have for Breakfast Yesterday? Ask Google

Memory is, well, so 1990s. Who needs it when we have Google, Siri and any number of services to help answer and recall everything we’ve ever perceived and wished to remember or wanted to know? Will our personal memories become another shared service served up from the “cloud”?

[div class=attrib]From the Wilson Quarterly:[end-div]

In an age when most information is just a few keystrokes away, it’s natural to wonder: Is Google weakening our powers of memory? According to psychologists Betsy Sparrow of Columbia University, Jenny Liu of the University of Wisconsin, Madison, and Daniel M. Wegner of Harvard, the Internet has not so much diminished intelligent recall as tweaked it.

The trio’s research shows what most computer users can tell you anecdotally: When you know you have the Internet at hand, your memory relaxes. In one of their experiments, 46 Harvard undergraduates were asked to answer 32 trivia questions on computers. After each one, they took a quick Stroop test, in which they were shown words printed in different colors and then asked to name the color of each word. They took more time to name the colors of Internet-related words, such as modem and browser. According to Stroop test conventions, this is because the words were related to something else that they were already thinking about—yes, they wanted to fire up Google to answer those tricky trivia questions.

In another experiment, the authors uncovered evidence suggesting that access to computers plays a fundamental role in what people choose to commit to their God-given hard drive. Subjects were instructed to type 40 trivia-like statements into a dialog box. Half were told that the computer would erase the information and half that it would be saved. Afterward, when asked to recall the statements, the students who were told their typing would be erased remembered much more. Lacking a computer backup, they apparently committed more to memory.

[div class=attrib]Read the entire article here.[end-div]

Life Without Facebook

Perhaps it’s time to rethink your social network when, through it, you know all about the stranger with whom you are sharing the elevator.

[div class=attrib]From the New York Times:[end-div]

Tyson Balcomb quit Facebook after a chance encounter on an elevator. He found himself standing next to a woman he had never met — yet through Facebook he knew what her older brother looked like, that she was from a tiny island off the coast of Washington and that she had recently visited the Space Needle in Seattle.

“I knew all these things about her, but I’d never even talked to her,” said Mr. Balcomb, a pre-med student in Oregon who had some real-life friends in common with the woman. “At that point I thought, maybe this is a little unhealthy.”

As Facebook prepares for a much-anticipated public offering, the company is eager to show off its momentum by building on its huge membership: more than 800 million active users around the world, Facebook says, and roughly 200 million in the United States, or two-thirds of the population.

But the company is running into a roadblock in this country. Some people, even on the younger end of the age spectrum, just refuse to participate, including people who have given it a try.

One of Facebook’s main selling points is that it builds closer ties among friends and colleagues. But some who steer clear of the site say it can have the opposite effect of making them feel more, not less, alienated.

“I wasn’t calling my friends anymore,” said Ashleigh Elser, 24, who is in graduate school in Charlottesville, Va. “I was just seeing their pictures and updates and felt like that was really connecting to them.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Facebook user. Courtesy of the New York Times.[end-div]

How to Make Social Networking Even More Annoying

What do you get when you take a social network, add sprinkles of mobile telephony, and throw in a liberal dose of proximity sensing? You get the first “social accessory” that creates a proximity network around you as you move about your daily life. Welcome to the world of yet another social networking technology startup, this one called magnetU. The company’s tagline is:

It was only a matter of time before your social desires became wearable!

magnetU markets a wearable device, about the size of a memory stick, that lets people wear and broadcast their social desires, allowing immediate social gratification anywhere and anytime. When a magnetU user comes into proximity with others having similar social profiles, the system notifies the user of a match. A social match is signaled as either “attractive”, “hot” or “red hot”. So, if you want to find a group of anonymous but like-minded people (or bodies) for some seriously homogeneous partying, magnetU is for you.
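
magnetU hasn't published how it scores a match, so the sketch below is purely illustrative: compare two users' declared interests and map the overlap onto the device's three signal levels. The thresholds are invented.

    def jaccard(a, b):
        """Overlap between two interest sets, from 0.0 (nothing shared) to 1.0 (identical)."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def match_level(profile_a, profile_b):
        score = jaccard(profile_a, profile_b)
        if score >= 0.75:
            return "red hot"
        if score >= 0.5:
            return "hot"
        if score >= 0.25:
            return "attractive"
        return None  # below threshold: no notification

    print(match_level({"salsa", "startups", "espresso"}, {"espresso", "startups", "hiking"}))  # hot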

Time will tell whether this will become successful and pervasive, or whether it will be consigned to the tech start-up waste bin of history. If magnetU becomes as ubiquitous as Facebook, then humanity will be entering a disastrous new phase characterized by the following: all social connections become a marketing opportunity; computer algorithms determine when and whom to like (or not) instantly; the content filter bubble extends to every interaction online and in the real world; people become ratings and nodes on a network; advertisers insert themselves into your daily conversations; Big Brother is watching you!

[div class=attrib]From Technology Review:[end-div]

MagnetU is a $24 device that broadcasts your social media profile to everyone around you. If anyone else with a MagnetU has a profile that matches yours sufficiently, the device will alert both of you via text and/or an app. Or, as founder Yaron Moradi told Mashable in a video interview, “MagnetU brings Facebook, Linkedin, Twitter and other online social networks to the street.”

Moradi calls this process “wearing your social desires,” and anyone who’s ever attempted online dating can tell you that machines are poor substitutes for your own judgement when it comes to determining with whom you’ll actually want to connect.

You don’t have to be a pundit to come up with a long list of Mr. McCrankypants reasons this is a terrible idea, from the overwhelming volume of distraction we already face to the fact that unless this is a smash hit, the only people MagnetU will connect you to are other desperately lonely geeks.

My primary objection, however, is not that this device or something like it won’t work, but that if it does, it will have the Facebook-like effect of pushing even those who loathe it on principle into participating, just because everyone else is using it and those who don’t will be left out in real life.

“MagnetU lets you wear your social desires… Anything from your social and dating preferences to business matches in conferences,” says Moradi. By which he means this will be very popular with Robert Scoble and anyone who already has Grindr loaded onto his or her phone.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Facebook founder Mark Zuckerberg. Courtesy of Rocketboom.[end-div]