- Amazon All the Time and Google Toilet Paper>
Soon, courtesy of Amazon, Google and other retail giants, and of course lubricated by the ubiquitous UPS and FedEx trucks, you may be able to dispense with the weekly or even daily trip to the grocery store. Amazon is expanding a trial of its same-day grocery delivery service, and others are following suit in select local and regional tests.
You may recall the spectacular implosion of the online grocery delivery service Webvan — a dot-com darling — that came and went in the blink of an internet eye, finally going bankrupt in 2001. Well, times have changed, and now avaricious Amazon and its peers have their eyes trained on your groceries.
So now all you need to do is find a service to deliver your kids to and from school, an employer who will let you work from home, convince your spouse that “staycations” are cool, use Google Street View to become a virtual tourist, and you will never, ever, ever, EVER need to leave your house again!
The other day I ran out of toilet paper. You know how that goes. The last roll in the house sets off a ticking clock; depending on how many people you live with and their TP profligacy, you’re going to need to run to the store within a few hours, a day at the max, or you’re SOL. (Unless you’re a man who lives alone, in which case you can wait till the next equinox.) But it gets worse. My last roll of toilet paper happened to coincide with a shortage of paper towels, a severe run on diapers (you know, for kids!), and the last load of dishwashing soap. It was a perfect storm of household need. And, as usual, I was busy and in no mood to go to the store.
This quotidian catastrophe has a happy ending. In April, I got into the “pilot test” for Google Shopping Express, the search company’s effort to create an e-commerce service that delivers goods within a few hours of your order. The service, which is currently being offered in the San Francisco Bay Area, allows you to shop online at Target, Walgreens, Toys R Us, Office Depot, and several smaller, local stores, like Blue Bottle Coffee. Shopping Express combines most of those stores’ goods into a single interface, which means you can include all sorts of disparate items in the same purchase. Shopping Express also offers the same prices you’d find at the store. After you choose your items, you select a delivery window—something like “Anytime Today” or “Between 2 p.m. and 6 p.m.”—and you’re done. On the fateful day that I’d run out of toilet paper, I placed my order at around noon. Shortly after 4, a green-shirted Google delivery guy strode up to my door with my goods. I was back in business, and I never left the house.
Google is reportedly thinking about charging $60 to $70 a year for the service, making it a competitor to Amazon’s Prime subscription plan. But at this point the company hasn’t finalized pricing, and during the trial period, the whole thing is free. I’ve found it easy to use, cheap, and reliable. Similar to my experience when I first got Amazon Prime, it has transformed how I think about shopping. In fact, in the short time I’ve been using it, Shopping Express has replaced Amazon as my go-to source for many household items. I used to buy toilet paper, paper towels, and diapers through Amazon’s Subscribe & Save plan, which offers deep discounts on bulk goods if you choose a regular delivery schedule. I like that plan when it works, but subscribing to items whose use is unpredictable—like diapers for a newborn—is tricky. I often either run out of my Subscribe & Save items before my next delivery, or I get a new delivery while I still have a big load of the old stuff. Shopping Express is far simpler. You get access to low-priced big-box-store goods without all the hassle of big-box stores—driving, parking, waiting in line. And you get all the items you want immediately.
After using it for a few weeks, I find it hard to escape the notion that a service like Shopping Express represents the future of shopping. (Also the past of shopping—the return of profitless late-1990s services like Kozmo and Webvan, though presumably with some way of making money this time.) It’s not just Google: Yesterday, Reuters reported that Amazon is expanding AmazonFresh, its grocery delivery service, to big cities beyond Seattle, where it has been running for several years. Amazon’s move confirms the theory I floated a year ago, that the e-commerce giant’s long-term goal is to make same-day shipping the norm for most of its customers.
Amazon’s main competitive disadvantage, today, is shipping delays. While shopping online makes sense for many purchases, the vast majority of the world’s retail commerce involves stuff like toilet paper and dishwashing soap—items that people need (or think they need) immediately. That explains why Wal-Mart sells half a trillion dollars worth of goods every year, and Amazon sells only $61 billion. Wal-Mart’s customers return several times a week to buy what they need for dinner, and while they’re there, they sometimes pick up higher-margin stuff, too. By offering same-day delivery on groceries and household items, Amazon and Google are trying to edge in on that market.
As I learned while using Shopping Express, the plan could be a hit. If done well, same-day shipping erases the distinctions between the kinds of goods we buy online and those we buy offline. Today, when you think of something you need, you have to go through a mental checklist: Do I need it now? Can it wait two days? Is it worth driving for? With same-day shipping, you don’t have to do that. All shopping becomes online shopping.
Read the entire article here.
Image: Webvan truck. Courtesy of Wikipedia.
- Law, Common Sense and Your DNA>
Paradoxically, the law and common sense often seem to be at odds. Justice may still be blind, at least in most open democracies, but there seems to be no question as to the stupidity of much of our law.
Some examples: in Missouri, it’s illegal to drive with an uncaged bear in the car; in Maine, it’s illegal to keep Christmas decorations up after January 14th; in New Jersey, it’s illegal to wear a bulletproof vest while committing murder; in Connecticut, a pickle is not an official, legal pickle unless it can bounce; and in Louisiana, you can be fined $500 for having a pizza delivered to a friend without their knowledge.
So, today we celebrate a victory for common sense and justice over thoroughly ill-conceived patent claims — the U.S. Supreme Court unanimously ruled that corporations cannot hold patents on naturally occurring human genes.
Unfortunately, though, given the extremely high financial stakes, this is not likely to be the last we hear of big business seeking to patent or control the building blocks of life.
From the WSJ:
The Supreme Court unanimously ruled Thursday that human genes isolated from the body can’t be patented, a victory for doctors and patients who argued that such patents interfere with scientific research and the practice of medicine.
The court was handing down one of its most significant rulings in the age of molecular medicine, deciding who may own the fundamental building blocks of life.
The case involved Myriad Genetics Inc., which holds patents related to two genes, known as BRCA1 and BRCA2, that can indicate whether a woman has a heightened risk of developing breast cancer or ovarian cancer.
Justice Clarence Thomas, writing for the court, said the genes Myriad isolated are products of nature, which aren’t eligible for patents.
“Myriad did not create anything,” Justice Thomas wrote in an 18-page opinion. “To be sure, it found an important and useful gene, but separating that gene from its surrounding genetic material is not an act of invention.”
Even if a discovery is brilliant or groundbreaking, that doesn’t necessarily mean it’s patentable, the court said.
However, the ruling wasn’t a complete loss for Myriad. The court said that DNA molecules synthesized in a laboratory were eligible for patent protection. Myriad’s shares soared after the court’s ruling.
The court adopted the position advanced by the Obama administration, which argued that isolated forms of naturally occurring DNA weren’t patentable, but artificial DNA molecules were.
Myriad also has patent claims on artificial genes, known as cDNA.
The high court’s ruling was a win for a coalition of cancer patients, medical groups and geneticists who filed a lawsuit in 2009 challenging Myriad’s patents. Thanks to those patents, the Salt Lake City company has been the exclusive U.S. commercial provider of genetic tests for breast cancer and ovarian cancer.
“Today, the court struck down a major barrier to patient care and medical innovation,” said Sandra Park of the American Civil Liberties Union, which represented the groups challenging the patents. “Because of this ruling, patients will have greater access to genetic testing and scientists can engage in research on these genes without fear of being sued.”
Myriad didn’t immediately respond to a request for comment.
The challengers argued the patents have allowed Myriad to dictate the type and terms of genetic screening available for the diseases, while also dissuading research by other laboratories.
Read the entire article here.
Image: Gene showing the coding region in a segment of eukaryotic DNA. Courtesy of Wikipedia.
- The Death of Photojournalism>
Really, it was only a matter of time. First, digital cameras killed off their film-dependent predecessors and sounded the death knell for Kodak. Now social media and the #hashtag are doing the same to the professional photographer.
Camera-enabled smartphones are ubiquitous, making everyone a photographer. And, with almost everyone jacked into at least one social network or photo-sharing site, it takes only one point and a couple of clicks to get a fresh image posted to the internet. Ironically, the print media, despite being in the business of news, failed to recognize this news until recently.
So, with an eye to cutting costs and making images more immediate and compelling — via citizens — news organizations are re-tooling their staffs in four ways: first, fire the photographers; second, re-train reporters to take photographs with their smartphones; third, video, video, video; fourth, rely on the ever-willing public to snap images, post, tweet, #hashtag and like — for free, of course.
From Cult of Mac:
The Chicago Sun-Times, one of the remnants of traditional paper journalism, has let go its entire photography staff of 28 people. Now its reporters will start receiving “iPhone photography basics” training to start producing their own photos and videos.
The move is part of a growing trend towards publications using the iPhone as a replacement for fancy, expensive DSLRs. It’s also a sign of how traditional journalism is being changed by technology like the iPhone and the advent of digital publishing.
When Hurricane Sandy hit New York City, reporters for Time used the iPhone to take photos on the field and upload to the publication’s Instagram account. Even the cover photo used on the corresponding issue of Time was taken on an iPhone.
Sun-Times photographer Alex Garcia argues that the “idea that freelancers and reporters could replace a photo staff with iPhones is idiotic at worst, and hopelessly uninformed at best.” Garcia believes that reporters cannot write articles and produce quality photos and video at the same time, but he’s fighting an uphill battle.
Big newspaper companies aren’t making anywhere near the amount of money they used to due to the popularity of online publications and blogs. Free news is a click away nowadays. Getting rid of professional photographers and equipping reporters with iPhones is another way to cut costs.
The iPhone has a better camera than most digital point-and-shoots, and more importantly, it is in everyone’s pocket. It’s a great camera that’s always with you, and that makes it an invaluable tool for any journalist. There will always be a need for videographers and pro photographers that can make studio-level work, but the iPhone is proving to be an invaluable tool for reporters in the modern world.
Read the entire article here.
Image: Kodak 1949-56 Retina IIa 35mm Camera. Courtesy of Wikipedia / Kodak.
- Beware! RoboBee May Be Watching You>
History will probably show that humans are the cause of the mass disappearance and death of honey bees around the world.
So, while ecologists try to understand why and how to reverse bee death and colony collapse, engineers are busy building alternatives to our once nectar-loving friends. Meet RoboBee, also known as the Micro Air Vehicles Project.
From Scientific American:
We take for granted the effortless flight of insects, thinking nothing of swatting a pesky fly and crushing its wings. But this insect is a model of complexity. After 12 years of work, researchers at the Harvard School of Engineering and Applied Sciences have succeeded in creating a fly-like robot. And in early May, they announced that their tiny RoboBee (yes, it’s called a RoboBee even though it’s based on the mechanics of a fly) took flight. In the future, that could mean big things for everything from disaster relief to colony collapse disorder.
The RoboBee isn’t the only miniature flying robot in existence, but the 80-milligram, quarter-sized robot is certainly one of the smallest. “The motivations are really thinking about this as a platform to drive a host of really challenging open questions and drive new technology and engineering,” says Harvard professor Robert Wood, the engineering team lead for the project.
When Wood and his colleagues first set out to create a robotic fly, there were no off-the-shelf parts for them to use. “There were no motors small enough, no sensors that could fit on board. The microcontrollers, the microprocessors–everything had to be developed fresh,” says Wood. As a result, the RoboBee project has led to numerous innovations, including vision sensors for the bot, high-power-density piezoelectric actuators (ceramic strips that expand and contract when exposed to an electrical field), and a new kind of rapid manufacturing that involves layering laser-cut materials that fold like a pop-up book. The actuators assist with the bot’s wing-flapping, while the vision sensors monitor the world in relation to the RoboBee.
“Manufacturing took us quite a while. Then it was control, how do you design the thing so we can fly it around, and the next one is going to be power, how we develop and integrate power sources,” says Wood. In a paper recently published by Science, the researchers describe the RoboBee’s power quandary: it can fly for just 20 seconds–and that’s while it’s tethered to a power source. “Batteries don’t exist at the size that we would want,” explains Wood. The researchers explain further in the report: “If we implement on-board power with current technologies, we estimate no more than a few minutes of untethered, powered flight. Long duration power autonomy awaits advances in small, high-energy-density power sources.”
The RoboBees don’t last a particularly long time–Wood says the flight time is “on the order of tens of minutes”–but they can keep flapping their wings long enough for the Harvard researchers to learn everything they need to know from each successive generation of bots. For commercial applications, however, the RoboBees would need to be more durable.
Read the entire article here.
Image courtesy of Micro Air Vehicles Project, Harvard.
- Leadership and the Tyranny of Big Data>
“There are three kinds of lies: lies, damned lies, and statistics,” goes the adage popularized by author Mark Twain.
Most people take for granted that numbers can be persuasive — just take a look at your bank balance. Most also accept that data can be used, misused, misinterpreted, re-interpreted and distorted to support or counter almost any argument. Just listen to a politician quote polling numbers, and then hear an opposing politician make a contrary argument using the very same statistics. Or, better still, familiarize yourself with the pseudo-science of economics.
Authors Kenneth Cukier (data editor for The Economist) and Viktor Mayer-Schönberger (professor of Internet governance) examine this phenomenon in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think. They eloquently present the example of Robert McNamara, U.S. defense secretary during the Vietnam war, who (in)famously used his detailed spreadsheets — including the daily body count — to manage and measure progress. After the war, many U.S. generals described this over-reliance on numbers as a misguided dictatorship of data, one that led officers to make ill-informed decisions — based solely on the figures — and to fudge them.
This classic example leads them to a timely and important caution: as the range and scale of big data grow ever greater, and even as it offers us great benefits, it can and will be used to mislead.
From Technology Review:
Big data is poised to transform society, from how we diagnose illness to how we educate children, even making it possible for a car to drive itself. Information is emerging as a new economic input, a vital resource. Companies, governments, and even individuals will be measuring and optimizing everything possible.
But there is a dark side. Big data erodes privacy. And when it is used to make predictions about what we are likely to do but haven’t yet done, it threatens freedom as well. Yet big data also exacerbates a very old problem: relying on the numbers when they are far more fallible than we think. Nothing underscores the consequences of data analysis gone awry more than the story of Robert McNamara.
McNamara was a numbers guy. Appointed the U.S. secretary of defense when tensions in Vietnam rose in the early 1960s, he insisted on getting data on everything he could. Only by applying statistical rigor, he believed, could decision makers understand a complex situation and make the right choices. The world in his view was a mass of unruly information that—if delineated, denoted, demarcated, and quantified—could be tamed by human hand and fall under human will. McNamara sought Truth, and that Truth could be found in data. Among the numbers that came back to him was the “body count.”
McNamara developed his love of numbers as a student at Harvard Business School and then as its youngest assistant professor at age 24. He applied this rigor during the Second World War as part of an elite Pentagon team called Statistical Control, which brought data-driven decision making to one of the world’s largest bureaucracies. Before this, the military was blind. It didn’t know, for instance, the type, quantity, or location of spare airplane parts. Data came to the rescue. Just making armament procurement more efficient saved $3.6 billion in 1943. Modern war demanded the efficient allocation of resources; the team’s work was a stunning success.
At war’s end, the members of this group offered their skills to corporate America. The Ford Motor Company was floundering, and a desperate Henry Ford II handed them the reins. Just as they knew nothing about the military when they helped win the war, so too were they clueless about making cars. Still, the so-called “Whiz Kids” turned the company around.
McNamara rose swiftly up the ranks, trotting out a data point for every situation. Harried factory managers produced the figures he demanded—whether they were correct or not. When an edict came down that all inventory from one car model must be used before a new model could begin production, exasperated line managers simply dumped excess parts into a nearby river. The joke at the factory was that a fellow could walk on water—atop rusted pieces of 1950 and 1951 cars.
McNamara epitomized the hyper-rational executive who relied on numbers rather than sentiments, and who could apply his quantitative skills to any industry he turned them to. In 1960 he was named president of Ford, a position he held for only a few weeks before being tapped to join President Kennedy’s cabinet as secretary of defense.
As the Vietnam conflict escalated and the United States sent more troops, it became clear that this was a war of wills, not of territory. America’s strategy was to pound the Viet Cong to the negotiation table. The way to measure progress, therefore, was by the number of enemy killed. The body count was published daily in the newspapers. To the war’s supporters it was proof of progress; to critics, evidence of its immorality. The body count was the data point that defined an era.
McNamara relied on the figures, fetishized them. With his perfectly combed-back hair and his flawlessly knotted tie, McNamara felt he could comprehend what was happening on the ground only by staring at a spreadsheet—at all those orderly rows and columns, calculations and charts, whose mastery seemed to bring him one standard deviation closer to God.
In 1977, two years after the last helicopter lifted off the rooftop of the U.S. embassy in Saigon, a retired Army general, Douglas Kinnard, published a landmark survey called The War Managers that revealed the quagmire of quantification. A mere 2 percent of America’s generals considered the body count a valid way to measure progress. “A fake—totally worthless,” wrote one general in his comments. “Often blatant lies,” wrote another. “They were grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara,” said a third.
Read the entire article after the jump.
Image: Robert McNamara at a cabinet meeting, 22 Nov 1967. Courtesy of Wikipedia / Public domain.
- MondayMap: Your Taxes and Google Street View>
The fear of an annual tax audit brings many people to their knees. It’s one of many techniques that government authorities use to milk their citizens of every last penny of taxes. Well, authorities now have an even more powerful weapon to add to their tax-collecting arsenal — Google Street View. And, if you are reading this from Lithuania, you will know what we are talking about.
From the Wall Street Journal:
The apparently innocuous photograph is now being used as evidence in a tax-evasion case brought by Lithuanian authorities against the undisclosed owners of the home.
Some European countries have been going after Google, complaining that the search giant is invading the privacy of their citizens. But tax inspectors here have turned to the prying eyes of Street View for their own purposes.
After Google’s car-borne cameras were driven through the Vilnius area last year, the tax men in this small Baltic nation got busy. They have spent months combing through footage looking for unreported taxable wealth.
From aerial surveillance to dedicated iPhone apps, cash-strapped governments across Europe are employing increasingly unconventional measures against tax cheats to raise revenue. In some countries, authorities have tried to enlist citizens to help keep watch. Customers in Greece, for instance, are insisting on getting receipts for what they buy.
For Lithuania, which only two decades ago began its transition away from communist central planning and remains one of the poorest countries in the European Union, Street View has been a big help. After the global financial crisis struck in 2008, belt tightening cut the tax authority’s budget by a third. A quarter of its employees were let go, leaving it with fewer resources just as it was being asked to do more.
Street View has let Mr. Kaseliauskas’s team see things it would have otherwise missed. Its images are better—and cheaper—than aerial photos, which authorities complain often aren’t clear enough to be useful.
Sitting in their city office 10 miles away, they were able to detect that, contrary to official records, the house with the hammock existed and that, in one photograph, three cars were parked in the driveway.
An undeclared semidetached house owned by the former board chairman of Bank Snoras, Raimundas Baranauskas, was recently identified using Street View and is estimated by the government to be worth about $260,000. Authorities knew Mr. Baranauskas owned land there, but not buildings. A quick look online led to the discovery of several houses on his land, in a quiet residential street of Vilnius.
Read the entire article here.
Image courtesy of (who else?), Google Maps.
- Big Data and Even Bigger Problems>
First, a definition. Big data: typically a collection of large and complex datasets that are too cumbersome to process and analyze using traditional computational approaches and database applications. Usually the big data moniker is accompanied by an IT vendor’s pitch for a shiny new software (and possibly hardware) solution able to crunch through petabytes (one petabyte is a million gigabytes) of data and produce a visualizable result that mere mortals can decipher.
Many companies see big data and related solutions as a panacea for a range of business challenges: customer service, medical diagnostics, product development, shipping and logistics, climate change studies, genomic analysis and so on. A great example was the last U.S. election. Many political wonks — from both sides of the aisle — agreed that big data significantly aided President Obama’s re-election. So, with that in mind, many are now looking at more important problems for big data to tackle.
From Technology Review:
As chief scientist for President Obama’s reëlection effort, Rayid Ghani helped revolutionize the use of data in politics. During the final 18 months of the campaign, he joined a sprawling team of data and software experts who sifted, collated, and combined dozens of pieces of information on each registered U.S. voter to discover patterns that let them target fund-raising appeals and ads.
Now, with Obama again ensconced in the Oval Office, some veterans of the campaign’s data squad are applying lessons from the campaign to tackle social issues such as education and environmental stewardship. Edgeflip, a startup Ghani founded in January with two other campaign members, plans to turn the ad hoc data analysis tools developed for Obama for America into software that can make nonprofits more effective at raising money and recruiting volunteers.
Ghani isn’t the only one thinking along these lines. In Chicago, Ghani’s hometown and the site of Obama for America headquarters, some campaign members are helping the city make available records of utility usage and crime statistics so developers can build apps that attempt to improve life there. It’s all part of a bigger idea to engineer social systems by scanning the numerical exhaust from mundane activities for patterns that might bear on everything from traffic snarls to human trafficking. Among those pursuing such humanitarian goals are startups like DataKind as well as large companies like IBM, which is redrawing bus routes in Ivory Coast (see “African Bus Routes Redrawn Using Cell-Phone Data”), and Google, with its flu-tracking software (see “Sick Searchers Help Track Flu”).
Ghani, who is 35, has had a longstanding interest in social causes, like tutoring disadvantaged kids. But he developed his data-mining savvy during 10 years as director of analytics at Accenture, helping retail chains forecast sales, creating models of consumer behavior, and writing papers with titles like “Data Mining for Business Applications.”
Before joining the Obama campaign in July 2011, Ghani wasn’t even sure his expertise in machine learning and predicting online prices could have an impact on a social cause. But the campaign’s success in applying such methods on the fly to sway voters is now recognized as having been potentially decisive in the election’s outcome (see “A More Perfect Union”).
“I realized two things,” says Ghani. “It’s doable at the massive scale of the campaign, and that means it’s doable in the context of other problems.”
At Obama for America, Ghani helped build statistical models that assessed each voter along five axes: support for the president; susceptibility to being persuaded to support the president; willingness to donate money; willingness to volunteer; and likelihood of casting a vote. These models allowed the campaign to target door knocks, phone calls, TV spots, and online ads to where they were most likely to benefit Obama.
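[ed: as a rough illustration of the five-axis scoring described above, here is a minimal Python sketch. The field names, thresholds and targeting rule are our own illustrative assumptions, not the campaign’s actual model.]

```python
from dataclasses import dataclass

@dataclass
class VoterScores:
    support: float         # likelihood the voter already supports the president
    persuadability: float  # likelihood of being persuaded to support him
    donation: float        # likelihood of donating money
    volunteer: float       # likelihood of volunteering
    turnout: float         # likelihood of actually casting a vote

def worth_door_knock(voter: VoterScores) -> bool:
    # Toy targeting rule: contact persuadable voters who are likely to vote.
    return voter.persuadability > 0.5 and voter.turnout > 0.6

print(worth_door_knock(VoterScores(0.4, 0.7, 0.1, 0.05, 0.8)))  # True
```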
One of the most important ideas he developed, dubbed “targeted sharing,” now forms the basis of Edgeflip’s first product. It’s a Facebook app that prompts people to share information from a nonprofit, but only with those friends predicted to respond favorably. That’s a big change from the usual scattershot approach of posting pleas for money or help and hoping they’ll reach the right people.
Edgeflip’s app, like the one Ghani conceived for Obama, will ask people who share a post to provide access to their list of friends. This will pull in not only friends’ names but also personal details, like their age, that can feed models of who is most likely to help.
Say a hurricane strikes the southeastern United States and the Red Cross needs clean-up workers. The app would ask Facebook users to share the Red Cross message, but only with friends who live in the storm zone, are young and likely to do manual labor, and have previously shown interest in content shared by that user. But if the same person shared an appeal for donations instead, he or she would be prompted to pass it along to friends who are older, live farther away, and have donated money in the past.
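[ed: a minimal sketch of that targeted-sharing filter, assuming a simple list of friend attributes. The fields, thresholds and sample data below are illustrative guesses, not Edgeflip’s real logic.]

```python
friends = [
    {"name": "Ana",  "age": 24, "in_storm_zone": True,  "has_donated": False},
    {"name": "Bo",   "age": 58, "in_storm_zone": False, "has_donated": True},
    {"name": "Cleo", "age": 31, "in_storm_zone": True,  "has_donated": True},
]

def cleanup_targets(friends):
    # Appeal for clean-up workers: younger friends inside the storm zone.
    return [f for f in friends if f["in_storm_zone"] and f["age"] < 40]

def donation_targets(friends):
    # Appeal for money: older friends, anywhere, who have donated before.
    return [f for f in friends if f["has_donated"] and f["age"] >= 40]

print([f["name"] for f in cleanup_targets(friends)])   # ['Ana', 'Cleo']
print([f["name"] for f in donation_targets(friends)])  # ['Bo']
```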
Michael Slaby, a senior technology official for Obama who hired Ghani for the 2012 election season, sees great promise in the targeted sharing technique. “It’s one of the most compelling innovations to come out of the campaign,” says Slaby. “It has the potential to make online activism much more efficient and effective.”
For instance, Ghani has been working with Fidel Vargas, CEO of the Hispanic Scholarship Fund, to increase that organization’s analytical savvy. Vargas thinks social data could predict which scholarship recipients are most likely to contribute to the fund after they graduate. “Then you’d be able to give away scholarships to qualified students who would have a higher probability of giving back,” he says. “Everyone would be much better off.”
Ghani sees a far bigger role for technology in the social sphere. He imagines online petitions that act like open-source software, getting passed around and improved. Social programs, too, could get constantly tested and improved. “I can imagine policies being designed a lot more collaboratively,” he says. “I don’t know if the politicians are ready to deal with it.” He also thinks there’s a huge amount of untapped information out there about childhood obesity, gang membership, and infant mortality, all ready for big data’s touch.
Read the entire article here.
Infographic courtesy of visua.ly. See the original here.
- You Can Check Out Anytime You Like...>
“… But You Can Never Leave”. So goes one of the most memorable lyrical phrases from the Eagles’ “Hotel California”.
Of late, it seems that this state of affairs also applies to a vast collection of people on Facebook; many wish to leave but lack the social capital or wisdom or backbone to do so.
From the Washington Post:
Bad news, everyone. We’re trapped. We may well be stuck here for the rest of our lives. I hope you brought canned goods.
A dreary line of tagged pictures and status updates stretches before us from here to the tomb.
Like life, Facebook seems to get less exciting the longer we spend there. And now everyone hates Facebook, officially.
Last week, Pew reported that 94 percent of teenagers are on Facebook, but that they are miserable about it. Then again, when are teenagers anything else? Pew’s focus groups of teens complained about the drama, said Twitter felt more natural, said that it seemed like a lot of effort to keep up with everyone you’d ever met, found the cliques and competition for friends offputting –
All right, teenagers. You have a point. And it doesn’t get better.
The trouble with Facebook is that 94 percent of people are there. Anything with 94 Percent of People involved ceases to have a personality and becomes a kind of public utility. There’s no broad generalization you can make about people who use flush toilets. Sure, toilets are a little odd, and they become quickly ridiculous when you stare at them long enough, the way a word used too often falls apart into meaningless letters under scrutiny, but we don’t think of them as peculiar. Everyone’s got one. The only thing weirder than having one of those funny porcelain thrones in your home would be not having one.
Facebook is like that, and not just because we deposit the same sort of thing in both. It used to define a particular crowd. But it’s no longer the bastion of college students and high schoolers avoiding parental scrutiny. Mom’s there. Heck, Velveeta Cheesy Skillets are there.
It’s just another space in which all the daily drama of actual life plays out. All the interactions that used only to be annoying to the people in the room with you at the time are now played out indelibly in text and pictures that can be seen from great distances by anyone who wants to take an afternoon and stalk you. Oscar Wilde complained about married couples who flirted with each other, saying that it was like washing clean linen in public. Well, just look at the wall exchanges of You Know The Couple I Mean. “Nothing is more irritating than not being invited to a party you wouldn’t be seen dead at,” Bill Vaughan said. On Facebook, that’s magnified to parties in entirely different states.
Facebook has been doing its best to approximate our actual social experience — that creepy foray into chairs aside. But what it forgot was that our actual social experience leaves much to be desired. After spending time with Other People smiling politely at news of what their sonograms are doing, we often want to rush from the room screaming wordlessly and bang our heads into something.
Hell is other people, updating their statuses with news that Yay The Strange Growth Checked Out Just Fine.
This is the point where someone says, “Well, if it’s that annoying, why don’t you unsubscribe?”
But you can’t.
Read the entire article here.
Image: Facebook logo courtesy of Mirror / Facebook.
- Friendships of Utility>
The average Facebook user is said to have 142 “friends”, and many active members have over 500. This certainly seems to be a textbook case of quantity over quality in the increasingly competitive status wars and popularity stakes of online neo- or pseudo-celebrity. That said, and regardless of your relationship with online social media, the one good thing to come from the likes — a small pun intended — of Facebook is that social scientists can now dissect and analyze our online behaviors and relationships as never before.
So, while Facebook and its peers may not represent a qualitative leap in human relationships, the data and experiences they generate may help future generations figure out what is truly important.
From the Wall Street Journal:
Facebook has made an indelible mark on my generation’s concept of friendship. The average Facebook user has 142 friends (many people I know have upward of 500). Without Facebook many of us “Millennials” wouldn’t know what our friends are up to or what their babies or boyfriends look like. We wouldn’t even remember their birthdays. Is this progress?
Aristotle wrote that friendship involves a degree of love. If we were to ask ourselves whether all of our Facebook friends were those we loved, we’d certainly answer that they’re not. These days, we devote equal if not more time to tracking the people we have had very limited human interaction with than to those whom we truly love. Aristotle would call the former “friendships of utility,” which, he wrote, are “for the commercially minded.”
I’d venture to guess that at least 90% of Facebook friendships are those of utility. Knowing this instinctively, we increasingly use Facebook as a vehicle for self-promotion rather than as a means to stay connected to those whom we love. Instead of sharing our lives, we compare and contrast them, based on carefully calculated posts, always striving to put our best face forward.
Friendship also, as Aristotle described it, can be based on pleasure. All of the comments, well-wishes and “likes” we can get from our numerous Facebook friends may give us pleasure. But something feels false about this. Aristotle wrote: “Those who love for the sake of pleasure do so for the sake of what is pleasant to themselves, and not insofar as the other is the person loved.” Few of us expect the dozens of Facebook friends who wish us a happy birthday ever to share a birthday celebration with us, let alone care for us when we’re sick or in need.
One thing’s for sure, my generation’s friendships are less personal than my parents’ or grandparents’ generation. Since we can rely on Facebook to manage our friendships, it’s easy to neglect more human forms of communication. Why visit a person, write a letter, deliver a card, or even pick up the phone when we can simply click a “like” button?
The ultimate form of friendship is described by Aristotle as “virtuous”—meaning the kind that involves a concern for our friend’s sake and not for our own. “Perfect friendship is the friendship of men who are good, and alike in virtue . . . . But it is natural that such friendships should be infrequent; for such men are rare.”
Those who came before the Millennial generation still say as much. My father and grandfather always told me that the number of such “true” friends can be counted on one hand over the course of a lifetime. Has Facebook increased our capacity for true friendship? I suspect Aristotle would say no.
Ms. Kelly joined Facebook in 2004 and quit in 2013.
Read the entire article here.
- Pain Ray>
We humans are capable of the most sublime creations, from soaring literary inventions to intensely moving music and gorgeous works of visual art. This stands in stark and paradoxical contrast to our range of inventions that enable efficient mass destruction, torture and death. The latest in this sad catalog of human tools of terror is the “pain ray”, otherwise known by its military euphemism as an Active Denial weapon. The good news is that it only delivers intense pain, rather than death. How inventive we humans really are — we should be so proud.
From the New Scientist:
THE pain, when it comes, is unbearable. At first it’s comparable to a hairdryer blast on the skin. But within a couple of seconds, most of the body surface feels roasted to an excruciating degree. Nobody has ever resisted it: the deep-rooted instinct to writhe and escape is too strong.
The source of this pain is an entirely new type of weapon, originally developed in secret by the US military – and now ready for use. It is a genuine pain ray, designed to subdue people in war zones, prisons and riots. Its name is Active Denial. In the last decade, no other non-lethal weapon has had as much research and testing, and some $120 million has already been spent on development in the US.
Many want to shelve this pain ray before it is fired for real but the argument is far from cut and dried. Active Denial’s supporters claim that its introduction will save lives: the chances of serious injury are tiny, they claim, and it causes less harm than tasers, rubber bullets or batons. It is a persuasive argument. Until, that is, you bring the dark side of human nature into the equation.
The idea for Active Denial can be traced back to research on the effects of radar on biological tissue. Since the 1940s, researchers have known that the microwave radiation produced by radar devices at certain frequencies could heat the skin of bystanders. But attempts to use such microwave energy as a non-lethal weapon only began in the late 1980s, in secret, at the Air Force Research Laboratory (AFRL) at Kirtland Air Force Base in Albuquerque, New Mexico.
The first question facing the AFRL researchers was whether microwaves could trigger pain without causing skin damage. Radiation equivalent to that used in oven microwaves, for example, was out of the question since it penetrates deep into objects, and causes cells to break down within seconds.
The AFRL team found that the key was to use millimetre waves, very-short-wavelength microwaves, with a frequency of about 95 gigahertz. By conducting tests on human volunteers, they discovered that these waves would penetrate only the outer 0.4 millimetres of skin, because they are absorbed by water in surface tissue. So long as the beam power was capped – keeping the energy per square centimetre of skin below a certain level – the tissue temperature would not exceed 55 °C, which is just below the threshold for damaging cells (Bioelectromagnetics, vol 18, p 403).
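[ed: a quick sanity check on why a 95-gigahertz beam counts as a “millimetre wave”: wavelength is the speed of light divided by frequency, which works out to roughly 3 mm.]

```python
c = 3.0e8   # speed of light, metres per second
f = 95e9    # beam frequency cited in the article, hertz
wavelength_mm = c / f * 1000
print(f"wavelength: {wavelength_mm:.1f} mm")  # ~3.2 mm
```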
The sensation, however, was extremely painful, because the outer skin holds a type of pain receptor called thermal nociceptors. These respond rapidly to threats and trigger reflexive “repel” reactions when stimulated (see diagram).
To build a weapon, the next step was to produce a high-power beam capable of reaching hundreds of metres. At the time, it was possible to beam longer-wavelength microwaves over great distances – as with radar systems – but it was not feasible to use the same underlying technology to produce millimetre waves.
Working with the AFRL, the military contractor Raytheon Company, based in Waltham, Massachusetts, built a prototype with a key bit of hardware: a gyrotron, a device for amplifying millimetre microwaves. Gyrotrons generate a rotating ring of electrons, held in a magnetic field by powerful cryogenically cooled superconducting magnets. The frequency at which these electrons rotate matches the frequency of millimetre microwaves, causing a resonating effect. The souped-up millimetre waves then pass to an antenna, which fires the beam.
The first working prototype of the Active Denial weapon, dubbed “System 0”, was completed in 2000. At 7.5 tonnes, it was too big to be easily transported. A few years later, it was followed by mobile versions that could be carried on heavy vehicles.
Today’s Active Denial device, designed for military use, looks similar to a large, flat satellite dish mounted on a truck. The microwave beam it produces has a diameter of about 2 metres and can reach targets several hundred metres away. It fires in bursts of about 3 to 5 seconds.
Those who have been at the wrong end of the beam report that the pain is impossible to resist. “You might think you can withstand getting blasted. Your body disagrees quite strongly,” says Spencer Ackerman, a reporter for Wired magazine’s blog, Danger Room. He stood in the beam at an event arranged for the media last year. “One second my shoulder and upper chest were at a crisp, early-spring outdoor temperature on a Virginia field. Literally the next second, they felt like they were roasted, with what can be likened to a super-hot tingling feeling. The sensation causes your nerves to take control of your feeble consciousness, so it wasn’t like I thought getting out of the way of the beam was a good idea – I did what my body told me to do.” There’s also little chance of shielding yourself; the waves penetrate clothing.
Read the entire article here.
Related video courtesy of CBS 60 Minutes.
- Please Press 1 to Avoid Phone Menu Hell>
Good customer service once meant that a store or service employee would know you by name. This person would know your previous purchasing habits and your preferences; this person would know the names of your kids and your dog. Great customer service once meant that an employee could use this knowledge to anticipate your needs or personalize a specific deal. Well, this type of service still exists — in some places — but many businesses have outsourced it to offshore call center personnel or to machines, or both. Service may seem personal, but it’s not — service is customized to suit your profile, but it’s not personal in the same sense that once held true.
And, to rub more salt into the customer service wound, businesses now use their automated phone systems seemingly to shield themselves from you, rather than to provide you with the service you want. After all, when was the last time you managed to speak to a real customer service employee after making it through “please press 1 for English”, the poor choice of Muzak or sponsored ads, and the never-ending phone menus?
Welcome to Please Press 1. Founded by Nigel Clarke (an alumnus of the 400-year-old Dame Alice Owen’s School in London), Please Press 1 provides shortcuts through the customer service phone menus of many of the top businesses in Britain [ed: we desperately need this service in the United States].
From the MailOnline:
A frustrated IT manager who has spent seven years making 12,000 calls to automated phone centres has launched a new website listing ‘short cut’ codes which can shave up to eight minutes off calls.
Nigel Clarke, 53, has painstakingly catalogued the intricate phone menus of hundreds of leading multi-national companies – some of which have up to 80 options.
He has now formulated his results into the website pleasepress1.com, which lists which number options to press to reach the desired department.
The father-of-three, from Fawkham, Kent, reckons the free service can save consumers more than eight minutes by cutting out up to seven menu options.
For example, a Lloyds TSB home insurance customer who wishes to report a water leak would normally have to wade through 78 menu options over seven levels to get through to the correct department.
But the new service informs callers that the combination 1-3-2-1-1-5-4 will get them straight through – saving over four minutes of waiting.
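[ed: conceptually, the service is a lookup table mapping a company and task to the keypad sequence that reaches the right department. A toy sketch follows; only the Lloyds TSB sequence comes from the article, and everything else is an illustrative assumption.]

```python
shortcuts = {
    ("Lloyds TSB", "home insurance: report a water leak"): "1-3-2-1-1-5-4",
}

def shortcut_for(company, task):
    # Returns the known key sequence for this destination, or None.
    return shortcuts.get((company, task))

print(shortcut_for("Lloyds TSB", "home insurance: report a water leak"))
```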
Mr Clarke reckons the service could save consumers up to one billion minutes a year.
He said: ‘Everyone knows that calling your insurance or gas company is a pain but for most, it’s not an everyday problem.
‘However, the cumulative effect of these calls is really quite devastating when you’re moving house or having an issue.
‘I’ve been working in IT for over 30 years and nothing gets me riled up like having my time wasted through inefficient design.
‘This is why I’ve devoted the best part of seven years to solving this issue.’
Mr Clarke describes call centre menu options as the ‘modern equivalent of Dante’s circles of hell’.
He cites HMRC as one of the worst offenders, where callers can take up to six minutes to reach the correct department.
As one of the UK’s busiest call centres, the Revenue receives 79 million calls per year, or a potential 4.3 million working hours just navigating menus.
Mr Clarke believes that with better menu design, at least three million caller hours could be saved here alone.
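[ed: a back-of-envelope check of those figures: 4.3 million hours spread across 79 million calls implies a little over three minutes of menu navigation per call on average, consistent with the worst cases taking up to six minutes.]

```python
calls_per_year = 79e6   # HMRC calls received per year (from the article)
menu_hours = 4.3e6      # hours spent navigating menus (from the article)
avg_minutes = menu_hours * 60 / calls_per_year
print(f"average menu time per call: {avg_minutes:.1f} minutes")  # ~3.3
```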
He began his quest seven years ago as a self-confessed ‘call centre menu enthusiast’.
‘The idea began with the frustration of being met with a seemingly endless list of menu options,’ he said.
‘Whether calling my phone, insurance or energy company, they each had a different and often worse way of trying to “help” me.
‘I could sit there for minutes that seemed like hours, trying to get through their phone menus only to end up at the wrong place and having to redial and start again.’
He began noting down the menu options and soon realised he could shave several minutes off the waiting time.
Mr Clarke said: ‘When I called numbers regularly, I started keeping notes of the options to press. The numbers didn’t change very often and then it hit me.
Images courtesy of Time and Please Press 1.
- The Internet of Things and Your (Lack of) Privacy>
Ubiquitous connectivity for, and between, individuals and businesses is widely held to be beneficial for all concerned. We can connect rapidly and reliably with family, friends and colleagues from almost anywhere to anywhere via a wide array of internet-enabled devices. Yet, as these devices become more powerful and interconnected, and gain location-based awareness through services such as GPS (Global Positioning System), we face an increasingly acute dilemma — connectedness or privacy?
From the Guardian:
The internet has turned into a massive surveillance tool. We’re constantly monitored on the internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated – sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.
Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.
It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behaviour, it’s still only behaviour that involves computers.
The Internet of Things refers to a world where much more than our computers and cell phones is internet-enabled. Soon there will be internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be internet-connected tags on our clothing. In its extreme, everything can be connected to the internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.
Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.
Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.
We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.
Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies – and anyone they decide to sell the data to – to monitor how people move about their house and how they spend their time.
Drones are another “thing” moving onto the internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.
Google’s internet-enabled glasses – Google Glass – are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.
In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behaviour and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals – not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyse this massive data stream will improve.
In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days – and nights – with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.
Read the entire article here.
Image: Big Brother, 1984. Poster. Courtesy of Telegraph.
- Off World Living>
Will humanity ever transcend gravity to become a space-faring race? A simple napkin-based calculation will give you the answer.
From Scientific American:
Optimistic visions of a human future in space seem to have given way to a confusing mix of possibilities, maybes, ifs, and buts. It’s not just the fault of governments and space agencies; basic physics is in part the culprit. Hoisting mass away from Earth is tremendously difficult, and thus far, in fifty years, we’ve barely managed a total equivalent to a large oil tanker. But there’s hope.
Back in the 1970s the physicist Gerard O’Neill and his students investigated concepts of vast orbital structures capable of sustaining entire human populations. It was the tail end of the Apollo era, and despite the looming specter of budget restrictions and terrestrial pessimism, there was still a sense of what might be, what could be, and what was truly within reach.
The result was a series of blueprints for habitats that solved all manner of problems for space life, from artificial gravity (spin up giant cylinders), to atmospheres, and radiation (let the atmosphere shield you). They’re pretty amazing, and they’ve remained perhaps one of the most optimistic visions of a future where we expand beyond the Earth.
But there’s a lurking problem, and it comes down to basic physics. It is awfully hard to move stuff from the surface of our planet into orbit or beyond. O’Neill knew this, as does anyone else who’s thought of grand space schemes. The solution is to ‘live off the land’, extracting raw materials from either the Moon, with its shallower gravity well, or by processing asteroids. To get to that point, though, we’d still have to loft an awful lot of stuff into space – the basic tools and infrastructure have to start somewhere.
And there’s the rub. To put it into perspective I took a look at the amount of ‘stuff’ we’ve managed to get off Earth in the past 50-60 years. It’s actually pretty hard to evaluate, lots of the mass we send up comes back down in short order – either as spent rocket stages or as short-lived low-altitude satellites. But we can still get a feel for it.
To start with, a lower limit on the mass hoisted to space is the present day artificial satellite population. Altogether there are in excess of about 3,000 satellites up there, plus vast amounts of small debris. Current estimates suggest this amounts to a total of around 6,000 metric tons. The biggest single structure is the International Space Station, currently coming in at about 450 metric tons (about 992,000 lb for reference).
These numbers don’t reflect launch mass – the total of a rocket + payload + fuel. To put that into context, a fully loaded Saturn V was about 2,000 metric tons, but most of that was fuel.
When the Space Shuttle flew, about 115 metric tons (Shuttle + payload) made it into low-Earth orbit. Since there were 135 launches of the Shuttle, that amounts to a total hoisted mass of about 15,000 metric tons over a 30-year period.
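[Editor’s aside: the napkin math above is easy to reproduce. The sketch below uses only the round figures quoted in the excerpt; the oil-tanker tonnage is my own rough assumption, added for the comparison in the opening paragraph.]

```python
# Back-of-the-napkin tally of mass hoisted to orbit, using the round numbers
# quoted above. The tanker figure is an illustrative assumption, not from the article.
SATELLITES_T = 6_000          # metric tons: satellites currently in orbit, plus debris
ISS_T = 450                   # metric tons: International Space Station
SHUTTLE_PER_FLIGHT_T = 115    # metric tons to LEO per flight (Shuttle + payload)
SHUTTLE_FLIGHTS = 135         # total Shuttle launches over ~30 years
SATURN_V_LAUNCH_T = 2_000     # metric tons at liftoff -- mostly fuel, not payload

shuttle_total = SHUTTLE_PER_FLIGHT_T * SHUTTLE_FLIGHTS   # ~15,525 t
print(f"Shuttle program, mass to LEO: ~{shuttle_total:,} t")
print(f"Still in orbit today: ~{SATELLITES_T:,} t (ISS alone: {ISS_T} t)")

LARGE_TANKER_T = 50_000       # assumed empty mass of a large oil tanker
print(f"Shuttle total vs. one large tanker: {shuttle_total / LARGE_TANKER_T:.2f}x")
```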
Read the entire article after the jump.
Image: A pair of O’Neill cylinders. NASA ID number AC75-1085. Courtesy of NASA / Wikipedia.
- Ray Kurzweil and Living a Googol Years>
By all accounts, serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the flatbed scanner, optical character recognition and the music synthesizer. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.
From the Wall Street Journal:
Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?
This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.
Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.
In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.
If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”
If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.
In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”
There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.
Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”
“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’”
To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.
“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.
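[Editor’s aside: a quick way to see the gap between “linear” intuition and exponential reality. The 18-month doubling time below is a commonly cited Moore’s-law-style assumption, not a figure Kurzweil gives in the interview.]

```python
# Linear vs. exponential projection of computation per dollar.
# The 1.5-year doubling period is an illustrative assumption.
BASELINE = 1.0          # arbitrary units of computation per dollar today
DOUBLING_YEARS = 1.5
YEARS = 30

linear = BASELINE * (1 + YEARS / DOUBLING_YEARS)        # one extra "unit" per period
exponential = BASELINE * 2 ** (YEARS / DOUBLING_YEARS)  # doubles every period

print(f"Linear intuition after {YEARS} years: {linear:.0f}x")
print(f"Exponential growth after {YEARS} years: {exponential:,.0f}x")  # ~1,048,576x
```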
“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.
By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.
How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”
We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”
“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.
Read the entire article after the jump.
- Cheap Hydrogen>
Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scales. Typically, hydrogen is produced by reacting high-temperature steam with methane or natural gas. A small volume of hydrogen, less than five percent of annual production, is also made through electrolysis — passing an electric current through water.
This new method of production appears to be less costly, less dangerous and also more environmentally sound.
From the Independent:
Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.
The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.
Unlike fossil fuels, hydrogen can be burned to produce energy without producing emissions. It is also the most abundant element in the universe.
Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.
In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.
Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.
Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”
Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.
But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.
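[Editor’s aside: for readers who want the chemistry spelled out, these are the textbook water-splitting half-reactions, followed by a schematic of the decoupling idea as described above. The buffer is written simply as B; the excerpt does not name the actual mediator.]

```latex
\begin{align*}
\text{Conventional electrolysis (both gases at once):}\quad
  & \text{anode: } 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4e^- \\
  & \text{cathode: } 4\,\mathrm{H^+} + 4e^- \rightarrow 2\,\mathrm{H_2} \\
\text{Decoupled scheme (schematic):}\quad
  & \text{step 1: } 2\,\mathrm{H_2O} + B \rightarrow \mathrm{O_2} + B\mathrm{H_4}
      \quad \text{(buffer stores the protons and electrons; only oxygen evolves)} \\
  & \text{step 2: } B\mathrm{H_4} \rightarrow B + 2\,\mathrm{H_2}
      \quad \text{(hydrogen released later, when convenient)}
\end{align*}
```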
Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.
Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”
Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”
Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”
Read the entire article following the jump.
- The Digital Afterlife and i-Death>
Leave it to Google to help you auto-euthanize and die digitally. The presence of our online selves after death was of limited concern until recently. However, with the explosion of online media and social networks our digital tracks remain preserved and scattered across drives and backups in distributed, anonymous data centers. Physical death does not change this.
[A case in point: your friendly editor at theDiagonal was recently asked to befriend a colleague via LinkedIn. All well and good, except that the colleague had passed away two years earlier.]
So, armed with Google’s new Inactive Account Manager, death — at least online — may be just a couple of clicks away. By corollary, it would be a small leap indeed to imagine an enterprising company charging an annual fee to maintain a dearly departed member’s digital afterlife ad infinitum.
From the Independent:
The search engine giant Google has announced a new feature designed to allow users to decide what happens to their data after they die.
The feature, which applies to the Google-run email system Gmail as well as Google Plus, YouTube, Picasa and other tools, represents an attempt by the company to be the first to deal with the sensitive issue of data after death.
In a post on the company’s Public Policy Blog Andreas Tuerk, Product Manager, writes: “We hope that this new feature will enable you to plan your digital afterlife – in a way that protects your privacy and security – and make life easier for your loved ones after you’re gone.”
Google says that the new account management tool will allow users to opt to have their data deleted after three, six, nine or 12 months of inactivity. Alternatively users can arrange for certain contacts to be sent data from some or all of their services.
The California-based company did, however, stress that individuals listed to receive data in the event of ‘inactivity’ would be warned by text or email before the information was sent.
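[Editor’s aside: stripped to its logic, the feature described above is a timeout rule plus a choice of action. The sketch below is purely illustrative; it is not Google’s implementation, and every name in it is hypothetical.]

```python
# Illustrative model of an inactive-account policy like the one described above.
# Not Google's code; all class and field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class InactivityPolicy:
    timeout_months: int                       # user picks 3, 6, 9 or 12
    delete_data: bool = True                  # or share with trusted contacts
    trusted_contacts: list[str] = field(default_factory=list)

    def is_triggered(self, last_activity: datetime, now: datetime) -> bool:
        return now - last_activity > timedelta(days=30 * self.timeout_months)

    def apply(self, last_activity: datetime, now: datetime) -> str:
        if not self.is_triggered(last_activity, now):
            return "account still active; do nothing"
        if self.delete_data:
            return "warn the owner, then delete the account data"
        # Per the article, listed contacts are warned before anything is sent.
        return "warn, then send data to: " + ", ".join(self.trusted_contacts)

policy = InactivityPolicy(timeout_months=6, delete_data=False,
                          trusted_contacts=["nextofkin@example.com"])
print(policy.apply(datetime(2013, 1, 1), datetime(2013, 9, 1)))
```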
Social networking site Facebook already has a function that allows friends and family to “memorialize” an account once its owner has died.
Read the entire article following the jump.
- Tracking and Monetizing Your Every Move>
Your movements are valuable — but not in the way you may think. Mobile technology companies are moving rapidly to exploit the vast amount of data collected from the billions of mobile devices. This data is extremely valuable to an array of organizations, including urban planners, retailers, and travel and transportation marketers. And, of course, this raises significant privacy concerns. Many believe that when the data is used collectively it preserves user anonymity. However, if correlated with other data sources it could be used to discover a range of unintended and previously private information, relating both to individuals and to groups.
From MIT Technology Review:
Wireless operators have access to an unprecedented volume of information about users’ real-world activities, but for years these massive data troves were put to little use other than for internal planning and marketing.
This data is under lock and key no more. Under pressure to seek new revenue streams (see “AT&T Looks to Outside Developers for Innovation”), a growing number of mobile carriers are now carefully mining, packaging, and repurposing their subscriber data to create powerful statistics about how people are moving about in the real world.
More comprehensive than the data collected by any app, this is the kind of information that, experts believe, could help cities plan smarter road networks, businesses reach more potential customers, and health officials track diseases. But even if shared with the utmost of care to protect anonymity, it could also present new privacy risks for customers.
Verizon’s program, still in its early days, is a natural extension of what already happens online, with websites tracking clicks and getting a detailed breakdown of where visitors come from and what they are interested in.
For example, Verizon is working to sell demographics about the people who attend an event, how they got there, and the kinds of apps they use once they arrive. In a recent case study, says program spokeswoman Debra Lewis, Verizon showed that fans from Baltimore outnumbered fans from San Francisco by three to one inside the Super Bowl stadium. That information might have been expensive or difficult to obtain in other ways, such as through surveys, because not all the people in the stadium purchased their own tickets and had credit card information on file, nor had they all downloaded the Super Bowl’s app.
Other telecommunications companies are exploring similar ideas. In Europe, for example, Telefonica launched a similar program last October, and the head of this new business unit gave the keynote address at a new industry conference on “big data monetization in telecoms” in January.
“It doesn’t look to me like it’s a big part of their [telcos’] business yet, though at the same time it could be,” says Vincent Blondel, an applied mathematician who is now working on a research challenge from the operator Orange to analyze two billion anonymous records of communications between five million customers in Africa.
The concerns about making such data available, Blondel says, are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups (see “How Access to Location Data Could Trample Your Privacy”).
Already, some startups are building businesses by aggregating this kind of data in useful ways, beyond what individual companies may offer. For example, AirSage, an Atlanta, Georgia, company founded in 2000, has spent much of the last decade negotiating what it says are exclusive rights to put its hardware inside the firewalls of two of the top three U.S. wireless carriers and collect, anonymize, encrypt, and analyze cellular tower signaling data in real time. Since AirSage solidified the second of these major partnerships about a year ago (it won’t specify which specific carriers it works with), it has been processing 15 billion locations a day and can account for movement of about a third of the U.S. population in some places to within less than 100 meters, says marketing vice president Andrea Moe.
As users’ mobile devices ping cellular towers in different locations, AirSage’s algorithms look for patterns in that location data—mostly to help transportation planners and traffic reports, so far. For example, the software might infer that the owners of devices that spend time in a business park from nine to five are likely at work, so a highway engineer might be able to estimate how much traffic on the local freeway exit is due to commuters.
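[Editor’s aside: the inference described here can be sketched with a simple dwell-time heuristic. This is a toy illustration of the general idea, not AirSage’s algorithm; the ping format and the data are invented.]

```python
# Toy dwell-time heuristic: call a cell a device's likely "work" location if the
# device is seen there on several distinct weekdays during business hours.
from collections import Counter
from datetime import datetime

# Assumed ping format: (device_id, timestamp, cell_id); data is made up.
pings = [
    ("dev-1", datetime(2013, 5, 6, 10, 15), "business-park-7"),
    ("dev-1", datetime(2013, 5, 6, 14, 40), "business-park-7"),
    ("dev-1", datetime(2013, 5, 7, 11, 5),  "business-park-7"),
    ("dev-1", datetime(2013, 5, 7, 19, 30), "suburb-cell-3"),
    ("dev-1", datetime(2013, 5, 8, 9, 50),  "business-park-7"),
]

def likely_work_location(device_pings, min_weekdays=3):
    """Return the cell seen on the most distinct weekdays between 9:00 and 17:00."""
    weekday_hits, seen = Counter(), set()
    for _, ts, cell in device_pings:
        if ts.weekday() < 5 and 9 <= ts.hour < 17:
            key = (cell, ts.date())
            if key not in seen:            # count each cell once per day
                seen.add(key)
                weekday_hits[cell] += 1
    if not weekday_hits:
        return None
    cell, days = weekday_hits.most_common(1)[0]
    return cell if days >= min_weekdays else None

print(likely_work_location(pings))   # -> business-park-7
```

A highway engineer could then count how many such “work” devices pass a given freeway exit to estimate commuter share, which is the sort of use the excerpt describes.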
Other companies are starting to add additional layers of information beyond cellular network data. One customer of AirSage is a relatively small San Francisco startup, Streetlight Data, which recently raised $3 million in financing backed partly by the venture capital arm of Deutsche Telekom.
Streetlight buys both cellular network and GPS navigation data that can be mined for useful market research. (The cellular data covers a larger number of people, but the GPS data, collected by mapping software providers, can improve accuracy.) Today, many companies already build massive demographic and behavioral databases on top of U.S. Census information about households to help retailers choose where to build new stores and plan marketing budgets. But Streetlight’s software, with interactive, color-coded maps of neighborhoods and roads, offers more practical information. It can be tied to the demographics of people who work nearby, commute through on a particular highway, or are just there for a visit, rather than just supplying information about who lives in the area.
Read the entire article following the jump.
Image: mobile devices. Courtesy of W3.org
- Technology and the Exploitation of Children>
Many herald the forward motion of technological innovation as progress. In many cases the momentum does genuinely seem to carry us towards a better place; it broadly alleviates pain and suffering; it generally delivers more and better nutrition to our bodies and our minds. Yet for all the positive steps, this progress is often accompanied by retrograde leaps, often paradoxical ones. Particularly disturbing is the relative ease with which technology allows us, the responsible adults, to sexualise and exploit children. This is certainly not a new phenomenon, but our technical prowess makes the problem more pervasive. A case in point: the Instagram beauty pageant. Move over Honey Boo-Boo.
From the Washington Post:
The photo-sharing site Instagram has become wildly popular as a way to trade pictures of pets and friends. But a new trend on the site is making parents cringe: beauty pageants, in which thousands of young girls — many appearing no older than 12 or 13 — submit photographs of themselves for others to judge.
In one case, the mug shots of four girls, middle-school-age or younger, have been pitted against each other. One is all dimples, wearing a hair bow and a big, toothy grin. Another is trying out a pensive, sultry look.
Any of Instagram’s 30 million users can vote on the appearance of the girls in a comments section of the post. Once a girl’s photo receives a certain number of negative remarks, the pageant host, who can remain anonymous, can update it with a big red X or the word “OUT” scratched across her face.
“U.G.L.Y,” wrote one user about a girl, who submitted her photo to one of the pageants identified on Instagram by the keyword “#beautycontest.”
The phenomenon has sparked concern among parents and child safety advocates who fear that young girls are making themselves vulnerable to adult strangers and participating in often cruel social interactions at a sensitive period of development.
But the contests are the latest example of how technology is pervading the lives of children in ways that parents and teachers struggle to understand or monitor.
“What started out as just a photo-sharing site has become something really pernicious for young girls,” said Rachel Simmons, author of “Odd Girl Out” and a speaker on youth and girls. “What happened was, like most social media experiences, girls co-opted it and imposed their social life on it to compete for attention and in a very exaggerated way.”
It’s difficult to track when the pageants began and who initially set them up. A keyword search of #beautycontest turned up 8,757 posts, while #rateme had 27,593 photo posts. Experts say those two terms represent only a fraction of the activity. Contests are also appearing on other social media sites, including Tumblr and Snapchat — mobile apps that have grown in popularity among youth.
Facebook, which bought Instagram last year, declined to comment. The company has a policy of not allowing anyone under the age of 13 to create an account or share photos on Instagram. But Facebook has been criticized for allowing pre-teens to get around the rule — two years ago, Consumer Reports estimated their presence on Facebook was 7.5 million. (Washington Post Co. Chairman Donald Graham sits on Facebook’s board of directors.)
Read the entire article after the jump.
Image: Instagram. Courtesy of Wired.
- Blame (Or Hug) Martin Cooper>
Martin Cooper. You may not know that name, but you and a fair proportion of the world’s 7 billion inhabitants have surely held or dropped or prodded or cursed his offspring.
You see, forty years ago Martin Cooper used his baby to make the first public mobile phone call. Martin Cooper invented the cell phone.
From the Guardian:
It is 40 years this week since the first public mobile phone call. On 3 April, 1973, Martin Cooper, a pioneering inventor working for Motorola in New York, called a rival engineer from the pavement of Sixth Avenue to brag and was met with a stunned, defeated silence. The race to make the first portable phone had been won. The Pandora’s box containing txt-speak, pocket-dials and pig-hating suicidal birds was open.
Many people at Motorola, however, felt mobile phones would never be a mass-market consumer product. They wanted the firm to focus on business carphones. But Cooper and his team persisted. Ten years after that first boastful phone call they brought the portable phone to market, at a retail price of around $4,000.
Thirty years on, the number of mobile phone subscribers worldwide is estimated at six and a half billion. And Angry Birds games have been downloaded 1.7bn times.
This is the story of the mobile phone in 40 facts:
1 That first portable phone was called a DynaTAC. The original model had 35 minutes of battery life and weighed one kilogram.
2 Several prototypes of the DynaTAC were created just 90 days after Cooper had first suggested the idea. He held a competition among Motorola engineers from various departments to design it and ended up choosing “the least glamorous”.
3 The DynaTAC’s weight was reduced to 794g before it came to market. It was still heavy enough to beat someone to death with, although this fact was never used as a selling point.
4 Nonetheless, people cottoned on. DynaTAC became the phone of choice for fictional psychopaths, including Wall Street’s Gordon Gekko, American Psycho’s Patrick Bateman and Saved by the Bell’s Zack Morris.
5 The UK’s first public mobile phone call was made by comedian Ernie Wise in 1985 from St Katharine Docks to the Vodafone head offices over a curry house in Newbury.
6 Vodafone’s 1985 monopoly of the UK mobile market lasted just nine days before Cellnet (now O2) launched its rival service. A Vodafone spokesperson was probably all like: “Aw, shucks!”
7 Cellnet and Vodafone were the only UK mobile providers until 1993.
8 It took Vodafone just less than nine years to reach the one million customers mark. They reached two million just 18 months later.
9 The first smartphone was IBM’s Simon, which debuted at the Wireless World Conference in 1993. It had an early LCD touchscreen and also functioned as an email device, electronic pager, calendar, address book and calculator.
10 The first cameraphone was created by French entrepreneur Philippe Kahn. He took the first photograph with a mobile phone, of his newborn daughter Sophie, on 11 June, 1997.
Read the entire article after the jump.
Image: Dr. Martin Cooper, the inventor of the cell phone, with DynaTAC prototype from 1973 (in the year 2007). Courtesy of Wikipedia.
- Next Up: Apple TV>
Robert Hof argues that the time is ripe for Steve Jobs’ corporate legacy to reinvent the TV. Apple transformed the personal computer industry, the mobile phone market and the music business. Clearly the company has all the components in place to assemble another innovation.
From Technology Review:
Steve Jobs couldn’t hide his frustration. Asked at a technology conference in 2010 whether Apple might finally turn its attention to television, he launched into an exasperated critique of TV. Cable and satellite TV companies make cheap, primitive set-top boxes that “squash any opportunity for innovation,” he fumed. Viewers are stuck with “a table full of remotes, a cluster full of boxes, a bunch of different [interfaces].” It was the kind of technological mess that cried out for Apple to clean it up with an elegant product. But Jobs professed to have no idea how his company could transform the TV.
Scarcely a year later, however, he sounded far more confident. Before he died on October 5, 2011, he told his biographer, Walter Isaacson, that Apple wanted to create an “integrated television set that is completely easy to use.” It would sync with other devices and Apple’s iCloud online storage service and provide “the simplest user interface you could imagine.” He added, tantalizingly, “I finally cracked it.”
Precisely what he cracked remains hidden behind Apple’s shroud of secrecy. Apple has had only one television-related product—the black, hockey-puck-size Apple TV device, which streams shows and movies to a TV. For years, Jobs and Tim Cook, his successor as CEO, called that device a “hobby.” But under the guise of this hobby, Apple has been steadily building hardware, software, and services that make it easier for people to watch shows and movies in whatever way they wish. Already, the company has more of the pieces for a compelling next-generation TV experience than people might realize.
And as Apple showed with the iPad and iPhone, it doesn’t have to invent every aspect of a product in order for it to be disruptive. Instead, it has become the leader in consumer electronics by combining existing technologies with some of its own and packaging them into products that are simple to use. TV seems to be at that moment now. People crave something better than the fusty, rigidly controlled cable TV experience, and indeed, the technologies exist for something better to come along. Speedier broadband connections, mobile TV apps, and the availability of some shows and movies on demand from Netflix and Hulu have made it easier to watch TV anytime, anywhere. The number of U.S. cable and satellite subscribers has been flat since 2010.
Apple would not comment. But it’s clear from two dozen interviews with people close to Apple suppliers and partners, and with people Apple has spoken to in the TV industry, that television—the medium and the device—is indeed its next target.
The biggest question is not whether Apple will take on TV, but when. The company must eventually come up with another breakthrough product; with annual revenue already topping $156 billion, it needs something very big to keep growth humming after the next year or two of the iPad boom. Walter Price, managing director of Allianz Global Investors, which holds nearly $1 billion in Apple shares, met with Apple executives in September and came away convinced that it would be years before Apple could get a significant share of the $345 billion worldwide market for televisions. But at $1,000, the bare minimum most analysts expect an Apple television to cost, such a product would eventually be a significant revenue generator. “You sell 10 million of those, it can move the needle,” he says.
Cook, who replaced Jobs as CEO in August 2011, could use a boost, too. He has presided over missteps such as a flawed iPhone mapping app that led to a rare apology and a major management departure. Seen as a peerless operations whiz, Cook still needs a revolutionary product of his own to cement his place next to Saint Steve. Corey Ferengul, a principal at the digital media investment firm Apace Equities and a former executive at Rovi, which provided TV programming guide services to Apple and other companies, says an Apple TV will be that product: “This will be Tim Cook’s first ‘holy shit’ innovation.”
What Apple Already Has
Rapt attention would be paid to whatever round-edged piece of brushed-aluminum hardware Apple produced, but a television set itself would probably be the least important piece of its television strategy. In fact, many well-connected people in technology and television, from TV and online video maven Mark Cuban to venture capitalist and former Apple executive Jean-Louis Gassée, can’t figure out why Apple would even bother with the machines.
For one thing, selling televisions is a low-margin business. No one subsidizes the purchase of a TV the way your wireless carrier does with the iPhone (an iPhone might cost you $200, but Apple’s revenue from it is much higher than that). TVs are also huge and difficult to stock in stores, let alone ship to homes. Most of all, the upgrade cycle that powers Apple’s iPhone and iPad profit engine doesn’t apply to television sets—no one replaces them every year or two.
But even though TVs don’t line up neatly with the way Apple makes money on other hardware, they are likely to remain central to people’s ever-increasing consumption of video, games, and other forms of media. Apple at least initially could sell the screens as a kind of Trojan horse—a way of entering or expanding its role in lines of business that are more profitable, such as selling movies, shows, games, and other Apple hardware.
Read the entire article following the jump.
Image courtesy of Apple, Inc.
- Startup Ideas>
For technologists the barriers to developing a new product have never been lower. The costs of the tools needed to develop, integrate and distribute software apps are, to all intents and purposes, negligible. Of course, most would recognize that development is often the easy part. The real difficulty lies in building an effective and sustainable marketing and communication strategy, and in getting the product adopted.
The recent headlines about 17-year-old British app developer Nick D’Aloisio selling his Summly app to Yahoo! for the tidy sum of $30 million have lots of young and seasoned developers scratching their heads. After all, if a school kid can do it, why not anybody? Why not me?
Paul Graham may have some of the answers. He sold his first company to Yahoo in 1998. He now runs Y Combinator, a successful startup incubator. We excerpt his recent, observant and insightful essay below.
From Paul Graham:
The way to get startup ideas is not to try to think of startup ideas. It’s to look for problems, preferably problems you have yourself.
The very best startup ideas tend to have three things in common: they’re something the founders themselves want, that they themselves can build, and that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and Facebook all began this way.
Why is it so important to work on a problem you have? Among other things, it ensures the problem really exists. It sounds obvious to say you should only work on problems that exist. And yet by far the most common mistake startups make is to solve problems no one has.
I made it myself. In 1995 I started a company to put art galleries online. But galleries didn’t want to be online. It’s not how the art business works. So why did I spend 6 months working on this stupid idea? Because I didn’t pay attention to users. I invented a model of the world that didn’t correspond to reality, and worked from that. I didn’t notice my model was wrong until I tried to convince users to pay for what we’d built. Even then I took embarrassingly long to catch on. I was attached to my model of the world, and I’d spent a lot of time on the software. They had to want it!
Why do so many founders build things no one wants? Because they begin by trying to think of startup ideas. That m.o. is doubly dangerous: it doesn’t merely yield few good ideas; it yields bad ideas that sound plausible enough to fool you into working on them.
At YC we call these “made-up” or “sitcom” startup ideas. Imagine one of the characters on a TV show was starting a startup. The writers would have to invent something for it to do. But coming up with good startup ideas is hard. It’s not something you can do for the asking. So (unless they got amazingly lucky) the writers would come up with an idea that sounded plausible, but was actually bad.
For example, a social network for pet owners. It doesn’t sound obviously mistaken. Millions of people have pets. Often they care a lot about their pets and spend a lot of money on them. Surely many of these people would like a site where they could talk to other pet owners. Not all of them perhaps, but if just 2 or 3 percent were regular visitors, you could have millions of users. You could serve them targeted offers, and maybe charge for premium features.
The danger of an idea like this is that when you run it by your friends with pets, they don’t say “I would never use this.” They say “Yeah, maybe I could see using something like that.” Even when the startup launches, it will sound plausible to a lot of people. They don’t want to use it themselves, at least not right now, but they could imagine other people wanting it. Sum that reaction across the entire population, and you have zero users.
When a startup launches, there have to be at least some users who really need what they’re making—not just people who could see themselves using it one day, but who want it urgently. Usually this initial group of users is small, for the simple reason that if there were something that large numbers of people urgently needed and that could be built with the amount of effort a startup usually puts into a version one, it would probably already exist. Which means you have to compromise on one dimension: you can either build something a large number of people want a small amount, or something a small number of people want a large amount. Choose the latter. Not all ideas of that type are good startup ideas, but nearly all good startup ideas are of that type.
Imagine a graph whose x axis represents all the people who might want what you’re making and whose y axis represents how much they want it. If you invert the scale on the y axis, you can envision companies as holes. Google is an immense crater: hundreds of millions of people use it, and they need it a lot. A startup just starting out can’t expect to excavate that much volume. So you have two choices about the shape of hole you start with. You can either dig a hole that’s broad but shallow, or one that’s narrow and deep, like a well.
Made-up startup ideas are usually of the first type. Lots of people are mildly interested in a social network for pet owners.
Nearly all good startup ideas are of the second type. Microsoft was a well when they made Altair Basic. There were only a couple thousand Altair owners, but without this software they were programming in machine language. Thirty years later Facebook had the same shape. Their first site was exclusively for Harvard students, of which there are only a few thousand, but those few thousand users wanted it a lot.
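[Editor’s aside: a quick numerical caricature of the crater-versus-well metaphor. All numbers are invented; the point is that launch-day users come from people above an “urgently need it” threshold, not from total demand.]

```python
# Numerical caricature of broad-shallow vs. narrow-deep demand. All numbers invented;
# "want" is an arbitrary 0-100 urgency scale.
THRESHOLD = 80   # how badly someone must want it to use a crappy version one

broad = {"people": 10_000_000, "want": 5}    # pet-owner network: many, mildly interested
deep  = {"people": 2_000,      "want": 95}   # Altair Basic: few, desperate

for name, m in [("pet-owner network", broad), ("Altair Basic", deep)]:
    volume = m["people"] * m["want"]                       # area of the "hole"
    v1_users = m["people"] if m["want"] >= THRESHOLD else 0
    print(f"{name}: demand volume {volume:,}, version-one users {v1_users:,}")
# The shallow market has far more total volume but zero launch users --
# the "sum that reaction across the population and you have zero users" effect.
```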
When you have an idea for a startup, ask yourself: who wants this right now? Who wants this so much that they’ll use it even when it’s a crappy version one made by a two-person startup they’ve never heard of? If you can’t answer that, the idea is probably bad.
You don’t need the narrowness of the well per se. It’s depth you need; you get narrowness as a byproduct of optimizing for depth (and speed). But you almost always do get it. In practice the link between depth and narrowness is so strong that it’s a good sign when you know that an idea will appeal strongly to a specific group or type of user.
But while demand shaped like a well is almost a necessary condition for a good startup idea, it’s not a sufficient one. If Mark Zuckerberg had built something that could only ever have appealed to Harvard students, it would not have been a good startup idea. Facebook was a good idea because it started with a small market there was a fast path out of. Colleges are similar enough that if you build a facebook that works at Harvard, it will work at any college. So you spread rapidly through all the colleges. Once you have all the college students, you get everyone else simply by letting them in.
Similarly for Microsoft: Basic for the Altair; Basic for other machines; other languages besides Basic; operating systems; applications; IPO.
How do you tell whether there’s a path out of an idea? How do you tell whether something is the germ of a giant company, or just a niche product? Often you can’t. The founders of Airbnb didn’t realize at first how big a market they were tapping. Initially they had a much narrower idea. They were going to let hosts rent out space on their floors during conventions. They didn’t foresee the expansion of this idea; it forced itself upon them gradually. All they knew at first is that they were onto something. That’s probably as much as Bill Gates or Mark Zuckerberg knew at first.
Occasionally it’s obvious from the beginning when there’s a path out of the initial niche. And sometimes I can see a path that’s not immediately obvious; that’s one of our specialties at YC. But there are limits to how well this can be done, no matter how much experience you have. The most important thing to understand about paths out of the initial idea is the meta-fact that these are hard to see.
So if you can’t predict whether there’s a path out of an idea, how do you choose between ideas? The truth is disappointing but interesting: if you’re the right sort of person, you have the right sort of hunches. If you’re at the leading edge of a field that’s changing fast, when you have a hunch that something is worth doing, you’re more likely to be right.
In Zen and the Art of Motorcycle Maintenance, Robert Pirsig says:
You want to know how to paint a perfect painting? It’s easy. Make yourself perfect and then just paint naturally.
I’ve wondered about that passage since I read it in high school. I’m not sure how useful his advice is for painting specifically, but it fits this situation well. Empirically, the way to have good startup ideas is to become the sort of person who has them.
Being at the leading edge of a field doesn’t mean you have to be one of the people pushing it forward. You can also be at the leading edge as a user. It was not so much because he was a programmer that Facebook seemed a good idea to Mark Zuckerberg as because he used computers so much. If you’d asked most 40 year olds in 2004 whether they’d like to publish their lives semi-publicly on the Internet, they’d have been horrified at the idea. But Mark already lived online; to him it seemed natural.
Paul Buchheit says that people at the leading edge of a rapidly changing field “live in the future.” Combine that with Pirsig and you get:
Live in the future, then build what’s missing.
That describes the way many if not most of the biggest startups got started. Neither Apple nor Yahoo nor Google nor Facebook were even supposed to be companies at first. They grew out of things their founders built because there seemed a gap in the world.
If you look at the way successful founders have had their ideas, it’s generally the result of some external stimulus hitting a prepared mind. Bill Gates and Paul Allen hear about the Altair and think “I bet we could write a Basic interpreter for it.” Drew Houston realizes he’s forgotten his USB stick and thinks “I really need to make my files live online.” Lots of people heard about the Altair. Lots forgot USB sticks. The reason those stimuli caused those founders to start companies was that their experiences had prepared them to notice the opportunities they represented.
The verb you want to be using with respect to startup ideas is not “think up” but “notice.” At YC we call ideas that grow naturally out of the founders’ own experiences “organic” startup ideas. The most successful startups almost all begin this way.
That may not have been what you wanted to hear. You may have expected recipes for coming up with startup ideas, and instead I’m telling you that the key is to have a mind that’s prepared in the right way. But disappointing though it may be, this is the truth. And it is a recipe of a sort, just one that in the worst case takes a year rather than a weekend.
If you’re not at the leading edge of some rapidly changing field, you can get to one. For example, anyone reasonably smart can probably get to an edge of programming (e.g. building mobile apps) in a year. Since a successful startup will consume at least 3-5 years of your life, a year’s preparation would be a reasonable investment. Especially if you’re also looking for a cofounder.
You don’t have to learn programming to be at the leading edge of a domain that’s changing fast. Other domains change fast. But while learning to hack is not necessary, it is for the foreseeable future sufficient. As Marc Andreessen put it, software is eating the world, and this trend has decades left to run.
Knowing how to hack also means that when you have ideas, you’ll be able to implement them. That’s not absolutely necessary (Jeff Bezos couldn’t) but it’s an advantage. It’s a big advantage, when you’re considering an idea like putting a college facebook online, if instead of merely thinking “That’s an interesting idea,” you can think instead “That’s an interesting idea. I’ll try building an initial version tonight.” It’s even better when you’re both a programmer and the target user, because then the cycle of generating new versions and testing them on users can happen inside one head.
Once you’re living in the future in some respect, the way to notice startup ideas is to look for things that seem to be missing. If you’re really at the leading edge of a rapidly changing field, there will be things that are obviously missing. What won’t be obvious is that they’re startup ideas. So if you want to find startup ideas, don’t merely turn on the filter “What’s missing?” Also turn off every other filter, particularly “Could this be a big company?” There’s plenty of time to apply that test later. But if you’re thinking about that initially, it may not only filter out lots of good ideas, but also cause you to focus on bad ones.
Most things that are missing will take some time to see. You almost have to trick yourself into seeing the ideas around you.
But you know the ideas are out there. This is not one of those problems where there might not be an answer. It’s impossibly unlikely that this is the exact moment when technological progress stops. You can be sure people are going to build things in the next few years that will make you think “What did I do before x?”
And when these problems get solved, they will probably seem flamingly obvious in retrospect. What you need to do is turn off the filters that usually prevent you from seeing them. The most powerful is simply taking the current state of the world for granted. Even the most radically open-minded of us mostly do that. You couldn’t get from your bed to the front door if you stopped to question everything.
But if you’re looking for startup ideas you can sacrifice some of the efficiency of taking the status quo for granted and start to question things. Why is your inbox overflowing? Because you get a lot of email, or because it’s hard to get email out of your inbox? Why do you get so much email? What problems are people trying to solve by sending you email? Are there better ways to solve them? And why is it hard to get emails out of your inbox? Why do you keep emails around after you’ve read them? Is an inbox the optimal tool for that?
Pay particular attention to things that chafe you. The advantage of taking the status quo for granted is not just that it makes life (locally) more efficient, but also that it makes life more tolerable. If you knew about all the things we’ll get in the next 50 years but don’t have yet, you’d find present day life pretty constraining, just as someone from the present would if they were sent back 50 years in a time machine. When something annoys you, it could be because you’re living in the future.
When you find the right sort of problem, you should probably be able to describe it as obvious, at least to you. When we started Viaweb, all the online stores were built by hand, by web designers making individual HTML pages. It was obvious to us as programmers that these sites would have to be generated by software.
Which means, strangely enough, that coming up with startup ideas is a question of seeing the obvious. That suggests how weird this process is: you’re trying to see things that are obvious, and yet that you hadn’t seen.
Since what you need to do here is loosen up your own mind, it may be best not to make too much of a direct frontal attack on the problem—i.e. to sit down and try to think of ideas. The best plan may be just to keep a background process running, looking for things that seem to be missing. Work on hard problems, driven mainly by curiosity, but have a second self watching over your shoulder, taking note of gaps and anomalies.
Give yourself some time. You have a lot of control over the rate at which you turn yours into a prepared mind, but you have less control over the stimuli that spark ideas when they hit it. If Bill Gates and Paul Allen had constrained themselves to come up with a startup idea in one month, what if they’d chosen a month before the Altair appeared? They probably would have worked on a less promising idea. Drew Houston did work on a less promising idea before Dropbox: an SAT prep startup. But Dropbox was a much better idea, both in the absolute sense and also as a match for his skills.
A good way to trick yourself into noticing ideas is to work on projects that seem like they’d be cool. If you do that, you’ll naturally tend to build things that are missing. It wouldn’t seem as interesting to build something that already existed.
Just as trying to think up startup ideas tends to produce bad ones, working on things that could be dismissed as “toys” often produces good ones. When something is described as a toy, that means it has everything an idea needs except being important. It’s cool; users love it; it just doesn’t matter. But if you’re living in the future and you build something cool that users love, it may matter more than outsiders think. Microcomputers seemed like toys when Apple and Microsoft started working on them. I’m old enough to remember that era; the usual term for people with their own microcomputers was “hobbyists.” BackRub seemed like an inconsequential science project. The Facebook was just a way for undergrads to stalk one another.
At YC we’re excited when we meet startups working on things that we could imagine know-it-alls on forums dismissing as toys. To us that’s positive evidence an idea is good.
If you can afford to take a long view (and arguably you can’t afford not to), you can turn “Live in the future and build what’s missing” into something even better:
Live in the future and build what seems interesting.
That’s what I’d advise college students to do, rather than trying to learn about “entrepreneurship.” “Entrepreneurship” is something you learn best by doing it. The examples of the most successful founders make that clear. What you should be spending your time on in college is ratcheting yourself into the future. College is an incomparable opportunity to do that. What a waste to sacrifice an opportunity to solve the hard part of starting a startup—becoming the sort of person who can have organic startup ideas—by spending time learning about the easy part. Especially since you won’t even really learn about it, any more than you’d learn about sex in a class. All you’ll learn is the words for things.
The clash of domains is a particularly fruitful source of ideas. If you know a lot about programming and you start learning about some other field, you’ll probably see problems that software could solve. In fact, you’re doubly likely to find good problems in another domain: (a) the inhabitants of that domain are not as likely as software people to have already solved their problems with software, and (b) since you come into the new domain totally ignorant, you don’t even know what the status quo is to take it for granted.
So if you’re a CS major and you want to start a startup, instead of taking a class on entrepreneurship you’re better off taking a class on, say, genetics. Or better still, go work for a biotech company. CS majors normally get summer jobs at computer hardware or software companies. But if you want to find startup ideas, you might do better to get a summer job in some unrelated field.
Or don’t take any extra classes, and just build things. It’s no coincidence that Microsoft and Facebook both got started in January. At Harvard that is (or was) Reading Period, when students have no classes to attend because they’re supposed to be studying for finals.
But don’t feel like you have to build things that will become startups. That’s premature optimization. Just build things. Preferably with other students. It’s not just the classes that make a university such a good place to crank oneself into the future. You’re also surrounded by other people trying to do the same thing. If you work together with them on projects, you’ll end up producing not just organic ideas, but organic ideas with organic founding teams—and that, empirically, is the best combination.
Beware of research. If an undergrad writes something all his friends start using, it’s quite likely to represent a good startup idea. Whereas a PhD dissertation is extremely unlikely to. For some reason, the more a project has to count as research, the less likely it is to be something that could be turned into a startup.  I think the reason is that the subset of ideas that count as research is so narrow that it’s unlikely that a project that satisfied that constraint would also satisfy the orthogonal constraint of solving users’ problems. Whereas when students (or professors) build something as a side-project, they automatically gravitate toward solving users’ problems—perhaps even with an additional energy that comes from being freed from the constraints of research.
Because a good idea should seem obvious, when you have one you’ll tend to feel that you’re late. Don’t let that deter you. Worrying that you’re late is one of the signs of a good idea. Ten minutes of searching the web will usually settle the question. Even if you find someone else working on the same thing, you’re probably not too late. It’s exceptionally rare for startups to be killed by competitors—so rare that you can almost discount the possibility. So unless you discover a competitor with the sort of lock-in that would prevent users from choosing you, don’t discard the idea.
If you’re uncertain, ask users. The question of whether you’re too late is subsumed by the question of whether anyone urgently needs what you plan to make. If you have something that no competitor does and that some subset of users urgently need, you have a beachhead.
The question then is whether that beachhead is big enough. Or more importantly, who’s in it: if the beachhead consists of people doing something lots more people will be doing in the future, then it’s probably big enough no matter how small it is. For example, if you’re building something differentiated from competitors by the fact that it works on phones, but it only works on the newest phones, that’s probably a big enough beachhead.
Err on the side of doing things where you’ll face competitors. Inexperienced founders usually give competitors more credit than they deserve. Whether you succeed depends far more on you than on your competitors. So better a good idea with competitors than a bad one without.
You don’t need to worry about entering a “crowded market” so long as you have a thesis about what everyone else in it is overlooking. In fact that’s a very promising starting point. Google was that type of idea. Your thesis has to be more precise than “we’re going to make an x that doesn’t suck” though. You have to be able to phrase it in terms of something the incumbents are overlooking. Best of all is when you can say that they didn’t have the courage of their convictions, and that your plan is what they’d have done if they’d followed through on their own insights. Google was that type of idea too. The search engines that preceded them shied away from the most radical implications of what they were doing—particularly that the better a job they did, the faster users would leave.
A crowded market is actually a good sign, because it means both that there’s demand and that none of the existing solutions are good enough. A startup can’t hope to enter a market that’s obviously big and yet in which they have no competitors. So any startup that succeeds is either going to be entering a market with existing competitors, but armed with some secret weapon that will get them all the users (like Google), or entering a market that looks small but which will turn out to be big (like Microsoft).
There are two more filters you’ll need to turn off if you want to notice startup ideas: the unsexy filter and the schlep filter.
Most programmers wish they could start a startup by just writing some brilliant code, pushing it to a server, and having users pay them lots of money. They’d prefer not to deal with tedious problems or get involved in messy ways with the real world. Which is a reasonable preference, because such things slow you down. But this preference is so widespread that the space of convenient startup ideas has been stripped pretty clean. If you let your mind wander a few blocks down the street to the messy, tedious ideas, you’ll find valuable ones just sitting there waiting to be implemented.
The schlep filter is so dangerous that I wrote a separate essay about the condition it induces, which I called schlep blindness. I gave Stripe as an example of a startup that benefited from turning off this filter, and a pretty striking example it is. Thousands of programmers were in a position to see this idea; thousands of programmers knew how painful it was to process payments before Stripe. But when they looked for startup ideas they didn’t see this one, because unconsciously they shrank from having to deal with payments. And dealing with payments is a schlep for Stripe, but not an intolerable one. In fact they might have had net less pain; because the fear of dealing with payments kept most people away from this idea, Stripe has had comparatively smooth sailing in other areas that are sometimes painful, like user acquisition. They didn’t have to try very hard to make themselves heard by users, because users were desperately waiting for what they were building.
The unsexy filter is similar to the schlep filter, except it keeps you from working on problems you despise rather than ones you fear. We overcame this one to work on Viaweb. There were interesting things about the architecture of our software, but we weren’t interested in ecommerce per se. We could see the problem was one that needed to be solved though.
Turning off the schlep filter is more important than turning off the unsexy filter, because the schlep filter is more likely to be an illusion. And even to the degree it isn’t, it’s a worse form of self-indulgence. Starting a successful startup is going to be fairly laborious no matter what. Even if the product doesn’t entail a lot of schleps, you’ll still have plenty dealing with investors, hiring and firing people, and so on. So if there’s some idea you think would be cool but you’re kept away from by fear of the schleps involved, don’t worry: any sufficiently good idea will have as many.
The unsexy filter, while still a source of error, is not as entirely useless as the schlep filter. If you’re at the leading edge of a field that’s changing rapidly, your ideas about what’s sexy will be somewhat correlated with what’s valuable in practice. Particularly as you get older and more experienced. Plus if you find an idea sexy, you’ll work on it more enthusiastically.
While the best way to discover startup ideas is to become the sort of person who has them and then build whatever interests you, sometimes you don’t have that luxury. Sometimes you need an idea now. For example, if you’re working on a startup and your initial idea turns out to be bad.
For the rest of this essay I’ll talk about tricks for coming up with startup ideas on demand. Although empirically you’re better off using the organic strategy, you could succeed this way. You just have to be more disciplined. When you use the organic method, you don’t even notice an idea unless it’s evidence that something is truly missing. But when you make a conscious effort to think of startup ideas, you have to replace this natural constraint with self-discipline. You’ll see a lot more ideas, most of them bad, so you need to be able to filter them.
One of the biggest dangers of not using the organic method is the example of the organic method. Organic ideas feel like inspirations. There are a lot of stories about successful startups that began when the founders had what seemed a crazy idea but “just knew” it was promising. When you feel that about an idea you’ve had while trying to come up with startup ideas, you’re probably mistaken.
When searching for ideas, look in areas where you have some expertise. If you’re a database expert, don’t build a chat app for teenagers (unless you’re also a teenager). Maybe it’s a good idea, but you can’t trust your judgment about that, so ignore it. There have to be other ideas that involve databases, and whose quality you can judge. Do you find it hard to come up with good ideas involving databases? That’s because your expertise raises your standards. Your ideas about chat apps are just as bad, but you’re giving yourself a Dunning-Kruger pass in that domain.
The place to start looking for ideas is things you need. There must be things you need.
One good trick is to ask yourself whether in your previous job you ever found yourself saying “Why doesn’t someone make x? If someone made x we’d buy it in a second.” If you can think of any x people said that about, you probably have an idea. You know there’s demand, and people don’t say that about things that are impossible to build.
More generally, try asking yourself whether there’s something unusual about you that makes your needs different from most other people’s. You’re probably not the only one. It’s especially good if you’re different in a way people will increasingly be.
If you’re changing ideas, one unusual thing about you is the idea you’d previously been working on. Did you discover any needs while working on it? Several well-known startups began this way. Hotmail began as something its founders wrote to talk about their previous startup idea while they were working at their day jobs. 
A particularly promising way to be unusual is to be young. Some of the most valuable new ideas take root first among people in their teens and early twenties. And while young founders are at a disadvantage in some respects, they’re the only ones who really understand their peers. It would have been very hard for someone who wasn’t a college student to start Facebook. So if you’re a young founder (under 23 say), are there things you and your friends would like to do that current technology won’t let you?
The next best thing to an unmet need of your own is an unmet need of someone else. Try talking to everyone you can about the gaps they find in the world. What’s missing? What would they like to do that they can’t? What’s tedious or annoying, particularly in their work? Let the conversation get general; don’t be trying too hard to find startup ideas. You’re just looking for something to spark a thought. Maybe you’ll notice a problem they didn’t consciously realize they had, because you know how to solve it.
When you find an unmet need that isn’t your own, it may be somewhat blurry at first. The person who needs something may not know exactly what they need. In that case I often recommend that founders act like consultants—that they do what they’d do if they’d been retained to solve the problems of this one user. People’s problems are similar enough that nearly all the code you write this way will be reusable, and whatever isn’t will be a small price to start out certain that you’ve reached the bottom of the well.
One way to ensure you do a good job solving other people’s problems is to make them your own. When Rajat Suri of E la Carte decided to write software for restaurants, he got a job as a waiter to learn how restaurants worked. That may seem like taking things to extremes, but startups are extreme. We love it when founders do such things.
In fact, one strategy I recommend to people who need a new idea is not merely to turn off their schlep and unsexy filters, but to seek out ideas that are unsexy or involve schleps. Don’t try to start Twitter. Those ideas are so rare that you can’t find them by looking for them. Make something unsexy that people will pay you for.
A good trick for bypassing the schlep and to some extent the unsexy filter is to ask what you wish someone else would build, so that you could use it. What would you pay for right now?
Since startups often garbage-collect broken companies and industries, it can be a good trick to look for those that are dying, or deserve to, and try to imagine what kind of company would profit from their demise. For example, journalism is in free fall at the moment. But there may still be money to be made from something like journalism. What sort of company might cause people in the future to say “this replaced journalism” on some axis?
But imagine asking that in the future, not now. When one company or industry replaces another, it usually comes in from the side. So don’t look for a replacement for x; look for something that people will later say turned out to be a replacement for x. And be imaginative about the axis along which the replacement occurs. Traditional journalism, for example, is a way for readers to get information and to kill time, a way for writers to make money and to get attention, and a vehicle for several different types of advertising. It could be replaced on any of these axes (it has already started to be on most).
When startups consume incumbents, they usually start by serving some small but important market that the big players ignore. It’s particularly good if there’s an admixture of disdain in the big players’ attitude, because that often misleads them. For example, after Steve Wozniak built the computer that became the Apple I, he felt obliged to give his then-employer Hewlett-Packard the option to produce it. Fortunately for him, they turned it down, and one of the reasons they did was that it used a TV for a monitor, which seemed intolerably déclassé to a high-end hardware company like HP was at the time.
Are there groups of scruffy but sophisticated users like the early microcomputer “hobbyists” that are currently being ignored by the big players? A startup with its sights set on bigger things can often capture a small market easily by expending an effort that wouldn’t be justified by that market alone.
Similarly, since the most successful startups generally ride some wave bigger than themselves, it could be a good trick to look for waves and ask how one could benefit from them. The prices of gene sequencing and 3D printing are both experiencing Moore’s Law-like declines. What new things will we be able to do in the new world we’ll have in a few years? What are we unconsciously ruling out as impossible that will soon be possible?
But talking about looking explicitly for waves makes it clear that such recipes are plan B for getting startup ideas. Looking for waves is essentially a way to simulate the organic method. If you’re at the leading edge of some rapidly changing field, you don’t have to look for waves; you are the wave.
Finding startup ideas is a subtle business, and that’s why most people who try fail so miserably. It doesn’t work well simply to try to think of startup ideas. If you do that, you get bad ones that sound dangerously plausible. The best approach is more indirect: if you have the right sort of background, good startup ideas will seem obvious to you. But even then, not immediately. It takes time to come across situations where you notice something missing. And often these gaps won’t seem to be ideas for companies, just things that would be interesting to build. Which is why it’s good to have the time and the inclination to build things just because they’re interesting.
Live in the future and build what seems interesting. Strange as it sounds, that’s the real recipe.
Read the entire article after the jump.
Image: Nick D’Aloisio with his Summly app. Courtesy of Telegraph.
- You Are a Google Datapoint>
At first glance Google’s aim to make all known information accessible and searchable seems to be a fundamentally worthy goal, and in keeping with its “Don’t Be Evil” mantra. Surely, giving all people access to the combined knowledge of the human race can do nothing but good, intellectually, politically and culturally.
However, what if that information includes you? After all, you are information: from the sequence of bases in your DNA, to the food you eat and the products you purchase, to your location and your planned vacations, your circle of friends and colleagues at work, to what you say and write and hear and see. You are a collection of datapoints, and if you don’t market and monetize them, someone else will.
Google continues to extend its technology boundaries and its vast indexed database of information. Now with the introduction of Google Glass the company extends its domain to a much more intimate level. Glass gives Google access to data on your precise location; it can record what you say and the sounds around you; it can capture what you are looking at and make it instantly shareable over the internet. Not surprisingly, this raises numerous concerns over privacy and security, and not only for the wearer of Google Glass. While active opt-in / opt-out features would allow a user a fair degree of control over how and what data is collected and shared with Google, they do not address those being observed.
So, beware: the next time you are sitting in a Starbucks, shopping in a mall, or riding the subway, you may be being recorded and your digital essence distributed over the internet. Perhaps someone, somewhere, will even be making money from you. While the Orwellian dystopia of government surveillance and control may still be a nightmarish fiction, corporate snooping and monetization is no less troubling. Remember, to some, you are merely a datapoint (care of Google), a publication (via Facebook), and a product (courtesy of Twitter).
From the Telegraph:
In the online world – for now, at least – it’s the advertisers that make the world go round. If you’re Google, they represent more than 90% of your revenue and without them you would cease to exist.
So how do you reconcile the fact that there is a finite amount of data to be gathered online with the need to expand your data collection to keep ahead of your competitors?
There are two main routes. Firstly, try as hard as is legally possible to monopolise the data streams you already have, and hope regulators fine you less than the profit it generated. Secondly, you need to get up from behind the computer and hit the streets.
Google Glass is the first major salvo in an arms race that is going to see increasingly intrusive efforts made to join up our real lives with the digital businesses we have become accustomed to handing over huge amounts of personal data to.
The principles that underpin everyday consumer interactions – choice, informed consent, control – are at risk in a way that cannot be healthy. Our ability to walk away from a service depends on having a choice in the first place and knowing what data is collected and how it is used before we sign up.
Imagine if Google or Facebook decided to install their own CCTV cameras everywhere, gathering data about our movements, recording our lives and joining up every camera in the land in one giant control room. It’s Orwellian surveillance with fluffier branding. And this isn’t just video surveillance – Glass uses audio recording too. For added impact, if you’re not content with Google analysing the data, the person can share it to social media as they see fit too.
Yet that is the reality of Google Glass. Everything you see, Google sees. You don’t own the data, you don’t control the data and you definitely don’t know what happens to the data. Put another way – what would you say if instead of it being Google Glass, it was Government Glass? A revolutionary way of improving public services, some may say. Call me a cynic, but I don’t think it’d have much success.
More importantly, who gave you permission to collect data on the person sitting opposite you on the Tube? How about collecting information on your children’s friends? There is a gaping hole in the middle of the Google Glass world and it is one where privacy is not only seen as an annoying restriction on Google’s profit, but as something that simply does not even come into the equation. Google has empowered you to ignore the privacy of other people. Bravo.
It’s already led to reactions in the US. ‘Stop the Cyborgs’ might sound like the rallying cry of the next Terminator film, but this is the start of a campaign to ensure places of work, cafes, bars and public spaces are no-go areas for Google Glass. They’ve already produced stickers to put up informing people that they should take off their Glass.
They argue, rightly, that this is more than just a question of privacy. There’s a real issue about how much decision making is devolved to the display we see, in exactly the same way as the difference between appearing on page one or page two of Google’s search can spell the difference between commercial success and failure for small businesses. We trust what we see, it’s convenient and we don’t question the motives of a search engine in providing us with information.
The reality is very different. In abandoning critical thought and decision making, allowing ourselves to be guided by a melee of search results, social media and advertisements we do risk losing a part of what it is to be human. You can see the marketing already – Glass is all-knowing. The issue is that to be all-knowing, it needs you to help it be all-seeing.
Read the entire article after the jump.
Image: Google’s Sergey Brin wearing Google Glass. Courtesy of CBS News.
- If Your Favorite Website Were a Person>
Four minutes of hilarity courtesy of cracked.com – a look at our world if some name-brand websites were people.
Video courtesy of cracked.com.
- Electronic Tattoos>
Forget wearable electronics, like Google Glass. That’s so, well, 2012. Welcome to the new world of epidermal electronics — electronic tattoos that contain circuits and sensors printed directly on to the body.
From MIT Technology Review:
Taking advantage of recent advances in flexible electronics, researchers have devised a way to “print” devices directly onto the skin so people can wear them for an extended period while performing normal daily activities. Such systems could be used to track health and monitor healing near the skin’s surface, as in the case of surgical wounds.
So-called “epidermal electronics” were demonstrated previously in research from the lab of John Rogers, a materials scientist at the University of Illinois at Urbana-Champaign; the devices consist of ultrathin electrodes, electronics, sensors, and wireless power and communication systems. In theory, they could attach to the skin and record and transmit electrophysiological measurements for medical purposes. These early versions of the technology, which were designed to be applied to a thin, soft elastomer backing, were “fine for an office environment,” says Rogers, “but if you wanted to go swimming or take a shower they weren’t able to hold up.” Now, Rogers and his coworkers have figured out how to print the electronics right on the skin, making the device more durable and rugged.
“What we’ve found is that you don’t even need the elastomer backing,” Rogers says. “You can use a rubber stamp to just deliver the ultrathin mesh electronics directly to the surface of the skin.” The researchers also found that they could use commercially available “spray-on bandage” products to add a thin protective layer and bond the system to the skin in a “very robust way,” he says.
Eliminating the elastomer backing makes the device one-thirtieth as thick, and thus “more conformal to the kind of roughness that’s present naturally on the surface of the skin,” says Rogers. It can be worn for up to two weeks before the skin’s natural exfoliation process causes it to flake off.
During the two weeks that it’s attached, the device can measure things like temperature, strain, and the hydration state of the skin, all of which are useful in tracking general health and wellness. One specific application could be to monitor wound healing: if a doctor or nurse attached the system near a surgical wound before the patient left the hospital, it could take measurements and transmit the information wirelessly to the health-care providers.
Read the entire article after the jump.
Image: Epidermal electronic sensor printed on the skin. Courtesy of MIT.
- Technology: Mind Exp(a/e)nder>
Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend or a group of peers or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, etymology of polysyllabic words, and so it goes.
So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer or smartphone or spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.
Yet, all this information will not directly make us any smarter; it is not applied knowledge nor is it experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly — freed from our need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.
Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?
If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.
True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.
The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.
Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.
The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.
So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”
Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant today. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.
The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”
So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?
Read the entire article after the jump.
Image: Google Glass. Courtesy of Google.
- The Police Drones Next Door>
You might expect to find police drones in the pages of a science fiction novel by Philip K. Dick or Iain M. Banks. But by 2015, citizens of the United States may well see these unmanned flying machines patrolling the skies over the homeland. The U.S. government recently pledged to loosen Federal Aviation Administration (FAA) restrictions that would allow local law enforcement agencies to use drones in just a few short years. So, soon the least of your worries will be traffic signal cameras and the local police officer armed with a radar gun. Our home-grown drones are likely to be deployed first for surveillance. But, undoubtedly armaments will follow. Hellfire missiles over Helena, Montana, anyone?
From National Geographic:
At the edge of a stubbly, dried-out alfalfa field outside Grand Junction, Colorado, Deputy Sheriff Derek Johnson, a stocky young man with a buzz cut, squints at a speck crawling across the brilliant, hazy sky. It’s not a vulture or crow but a Falcon—a new brand of unmanned aerial vehicle, or drone, and Johnson is flying it. The sheriff’s office here in Mesa County, a plateau of farms and ranches corralled by bone-hued mountains, is weighing the Falcon’s potential for spotting lost hikers and criminals on the lam. A laptop on a table in front of Johnson shows the drone’s flickering images of a nearby highway.
Standing behind Johnson, watching him watch the Falcon, is its designer, Chris Miser. Rock-jawed, arms crossed, sunglasses pushed atop his shaved head, Miser is a former Air Force captain who worked on military drones before quitting in 2007 to found his own company in Aurora, Colorado. The Falcon has an eight-foot wingspan but weighs just 9.5 pounds. Powered by an electric motor, it carries two swiveling cameras, visible and infrared, and a GPS-guided autopilot. Sophisticated enough that it can’t be exported without a U.S. government license, the Falcon is roughly comparable, Miser says, to the Raven, a hand-launched military drone—but much cheaper. He plans to sell two drones and support equipment for about the price of a squad car.
A law signed by President Barack Obama in February 2012 directs the Federal Aviation Administration (FAA) to throw American airspace wide open to drones by September 30, 2015. But for now Mesa County, with its empty skies, is one of only a few jurisdictions with an FAA permit to fly one. The sheriff’s office has a three-foot-wide helicopter drone called a Draganflyer, which stays aloft for just 20 minutes.
The Falcon can fly for an hour, and it’s easy to operate. “You just put in the coordinates, and it flies itself,” says Benjamin Miller, who manages the unmanned aircraft program for the sheriff’s office. To navigate, Johnson types the desired altitude and airspeed into the laptop and clicks targets on a digital map; the autopilot does the rest. To launch the Falcon, you simply hurl it into the air. An accelerometer switches on the propeller only after the bird has taken flight, so it won’t slice the hand that launches it.
The stench from a nearby chicken-processing plant wafts over the alfalfa field. “Let’s go ahead and tell it to land,” Miser says to Johnson. After the deputy sheriff clicks on the laptop, the Falcon swoops lower, releases a neon orange parachute, and drifts gently to the ground, just yards from the spot Johnson clicked on. “The Raven can’t do that,” Miser says proudly.
Offspring of 9/11
A dozen years ago only two communities cared much about drones. One was hobbyists who flew radio-controlled planes and choppers for fun. The other was the military, which carried out surveillance missions with unmanned aircraft like the General Atomics Predator.
Then came 9/11, followed by the U.S. invasions of Afghanistan and Iraq, and drones rapidly became an essential tool of the U.S. armed forces. The Pentagon armed the Predator and a larger unmanned surveillance plane, the Reaper, with missiles, so that their operators—sitting in offices in places like Nevada or New York—could destroy as well as spy on targets thousands of miles away. Aerospace firms churned out a host of smaller drones with increasingly clever computer chips and keen sensors—cameras but also instruments that measure airborne chemicals, pathogens, radioactive materials.
The U.S. has deployed more than 11,000 military drones, up from fewer than 200 in 2002. They carry out a wide variety of missions while saving money and American lives. Within a generation they could replace most manned military aircraft, says John Pike, a defense expert at the think tank GlobalSecurity.org. Pike suspects that the F-35 Lightning II, now under development by Lockheed Martin, might be “the last fighter with an ejector seat, and might get converted into a drone itself.”
At least 50 other countries have drones, and some, notably China, Israel, and Iran, have their own manufacturers. Aviation firms—as well as university and government researchers—are designing a flock of next-generation aircraft, ranging in size from robotic moths and hummingbirds to Boeing’s Phantom Eye, a hydrogen-fueled behemoth with a 150-foot wingspan that can cruise at 65,000 feet for up to four days.
More than a thousand companies, from tiny start-ups like Miser’s to major defense contractors, are now in the drone business—and some are trying to steer drones into the civilian world. Predators already help Customs and Border Protection agents spot smugglers and illegal immigrants sneaking into the U.S. NASA-operated Global Hawks record atmospheric data and peer into hurricanes. Drones have helped scientists gather data on volcanoes in Costa Rica, archaeological sites in Russia and Peru, and flooding in North Dakota.
So far only a dozen police departments, including ones in Miami and Seattle, have applied to the FAA for permits to fly drones. But drone advocates—who generally prefer the term UAV, for unmanned aerial vehicle—say all 18,000 law enforcement agencies in the U.S. are potential customers. They hope UAVs will soon become essential too for agriculture (checking and spraying crops, finding lost cattle), journalism (scoping out public events or celebrity backyards), weather forecasting, traffic control. “The sky’s the limit, pun intended,” says Bill Borgia, an engineer at Lockheed Martin. “Once we get UAVs in the hands of potential users, they’ll think of lots of cool applications.”
The biggest obstacle, advocates say, is current FAA rules, which tightly restrict drone flights by private companies and government agencies (though not by individual hobbyists). Even with an FAA permit, operators can’t fly UAVs above 400 feet or near airports or other zones with heavy air traffic, and they must maintain visual contact with the drones. All that may change, though, under the new law, which requires the FAA to allow the “safe integration” of UAVs into U.S. airspace.
If the FAA relaxes its rules, says Mark Brown, the civilian market for drones—and especially small, low-cost, tactical drones—could soon dwarf military sales, which in 2011 totaled more than three billion dollars. Brown, a former astronaut who is now an aerospace consultant in Dayton, Ohio, helps bring drone manufacturers and potential customers together. The success of military UAVs, he contends, has created “an appetite for more, more, more!” Brown’s PowerPoint presentation is called “On the Threshold of a Dream.”
Image: Unmanned drone used to patrol the U.S.-Canadian border. (U.S. Customs and Border Protection/AP).
- First, Build A Blue Box; Second, Build Apple>
Edward Tufte built the first little blue box in 1962. The blue box contained home-made circuitry and a tone generator that could place free calls over the phone network to anywhere in the world.
This electronic revelation spawned groups of “phone phreaks” (hackers) who would build their own blue boxes to fight Ma Bell (AT&T), illegally of course. The phreaks assumed suitably disguised names, such as Captain Crunch and Cheshire Cat, to hide from the long arm of the FBI.
This later caught the attention of a pair of new recruits to the subversive cause, Berkeley Blue and Oaf Tobar, who would go on to found Apple under their real names, Steve Wozniak and Steve Jobs. The rest, as the saying goes, is history.
Put it down to curiosity, an anti-authoritarian streak and a quest to ever-improve.
From Slate:
One of the most heartfelt—and unexpected—remembrances of Aaron Swartz, who committed suicide last month at the age of 26, came from Yale professor Edward Tufte. During a speech at a recent memorial service for Swartz in New York City, Tufte reflected on his secret past as a hacker—50 years ago.
“In 1962, my housemate and I invented the first blue box,” Tufte said to the crowd. “That’s a device that allows for undetectable, unbillable long distance telephone calls. We played around with it and the end of our research came when we completed what we thought was the longest long-distance phone call ever made, which was from Palo Alto to New York … via Hawaii.”
Tufte was never busted for his youthful forays into phone hacking, also known as phone phreaking. He rose to become one of Yale’s most famous professors, a world authority on data visualization and information design. One can’t help but think that Swartz might have followed in the distinguished footsteps of a professor like Tufte, had he lived.
Swartz faced 13 felony charges and up to 35 years in prison for downloading 4.8 million academic articles from the digital repository JSTOR, using MIT’s network. In the face of the impending trial, Swartz—a brilliant young hacker and activist who was a key force behind many worthy projects, including the RSS 1.0 specification and Creative Commons—killed himself on Jan. 11.
“Aaron’s unique quality was that he was marvelously and vigorously different,” Tufte said, a tear in his eye, as he closed his speech. “There is a scarcity of that. Perhaps we can all be a little more different, too.”
Swartz was too young to be a phone phreak like Tufte. In our present era of Skype and smartphones, the old days of outsmarting Ma Bell with 2600 Hertz sine wave tones and homemade “blue boxes” seems quaint, charmingly retro. But there is a thread that connects these old-school phone hackers to Swartz—common traits that Tufte recognized. It’s not just that, like Swartz, many phone phreaks faced trumped-up charges (wire fraud, in their cases). The best of these proto-computer hackers possessed Swartz’s enterprising spirit, his penchant for questioning authority, and his drive to figure out how a complicated system works from the inside. They were nerds, they were misfits; like Swartz, they were a little more different.
In his new history of phone phreaking, Exploding the Phone, engineer and consultant Phil Lapsley details the story of the 1960s and 1970s culture of hackers who, like Tufte, devised numerous ways to outwit the phone system. The foreword of the book is by Steve Wozniak, co-founder of Apple—and, as it happens, an old-school hacker himself. Before Wozniak and Steve Jobs built Apple in the 1970s, they were phone phreaks. (Wozniak’s hacker name was Berkeley Blue; Jobs’ handle was Oaf Tobar.)
In 1971, Esquire published an article about phone phreaking called “Secrets of the Little Blue Box,” by Ron Rosenbaum (a Slate columnist). It chronicled a ragtag crew sporting names like Captain Crunch and the Cheshire Cat, who prided themselves on using ingenuity and rudimentary electronics to outsmart the many-tentacled monstrosities of Ma Bell and the FBI. A blind 22-year-old named Joe Engressia was one of the scene’s heroes; according to Rosenbaum, Engressia could whistle at exactly the right frequency to place a free phone call.
Wozniak, age 20 in ’71, devoured the now-legendary article. “You know how some articles just grab you from the first paragraph?” he wrote in his 2006 memoir, iWoz, quoted in Lapsley’s book. “Well, it was one of those articles. It was the most amazing article I’d ever read!” Wozniak was entranced by the way these hackers seemed so much like himself. “I could tell that the characters being described were really tech people, much like me, people who liked to design things just to see what was possible, and for no other reason, really.” Building a blue box—a device that could generate the same tones that the phone system used to route phone calls, in a certain sequence—required technical smarts, and Wozniak loved nerdy challenges. Plus, the payoff—and the potential for epic pranks—was irresistible. (Wozniak once used a blue box to call the Vatican; impersonating Henry Kissinger he asked to talk to the pope.)
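For the technically curious, here is a purely illustrative Python sketch of what “generating the routing tones” amounted to: a long 2600 Hz tone to seize a trunk (the frequency mentioned above), followed by each digit sent as a pair of simultaneous tones. The digit frequencies below follow the old Bell multi-frequency plan, which this article does not spell out, and the framing tones real blue boxes sent before and after a number are left out, so treat this as historical color rather than a reconstruction of the circuit Wozniak built.

```python
# Toy tone generator, for flavour only. Uses only the standard library and
# writes a small WAV file. The per-digit pairs follow the historical Bell
# multi-frequency plan; framing tones (KP/ST) are omitted for brevity.
import array
import math
import wave

RATE = 8000  # samples per second

MF_PAIRS = {                 # each digit was sent as two tones played together
    "1": (700, 900),  "2": (700, 1100), "3": (900, 1100),
    "4": (700, 1300), "5": (900, 1300), "6": (1100, 1300),
    "7": (700, 1500), "8": (900, 1500), "9": (1100, 1500),
    "0": (1300, 1500),
}

def tone(freqs, seconds=0.1):
    """Return 16-bit samples of the given frequencies mixed together."""
    samples = array.array("h")
    for n in range(int(RATE * seconds)):
        t = n / RATE
        value = sum(math.sin(2 * math.pi * f * t) for f in freqs) / len(freqs)
        samples.append(int(value * 20000))
    return samples

def blue_box(number, path="bluebox.wav"):
    """Seize the trunk with a 2600 Hz tone, then 'key' the digits as tone pairs."""
    audio = tone([2600], seconds=1.0)
    for digit in number:
        audio += tone(MF_PAIRS[digit])
        audio += tone([0], seconds=0.05)   # brief silence between digits
    with wave.open(path, "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(RATE)
        out.writeframes(audio.tobytes())

blue_box("5551212")
```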
Wozniak immediately called Jobs, who was then a 17-year-old senior in high school. The friends drove to the technical library at Stanford’s Linear Accelerator Center to find a phone manual that listed tone frequencies. That same day, as Lapsley details in the book, Wozniak and Jobs bought analog tone generator kits, but were soon frustrated that the generators weren’t good enough for really high-quality phone phreaking.
Wozniak had a better, geekier idea: They needed to build their own blue boxes, but make them with digital circuits, which were more precise and easier to control than the usual analog ones. Wozniak and Jobs didn’t just build one blue box—they went on to build dozens of them, which they sold for about $170 apiece. In a way, their sophisticated, compact design foreshadowed the Apple products to come. Their digital circuitry incorporated several smart tricks, including a method to make the battery last longer. “I have never designed a circuit I was prouder of,” Wozniak says.
Image: Exploding the Phone by Phil Lapsley, book cover. Courtesy of Barnes & Noble.
- Shakespearian Sonnets Now Available on DNA>
Shakespeare, meet thy DNA. The most famous literary figure in the English language had a recent rendezvous with that most famous and studied of molecules. Together, chemists, cell biologists, geneticists and computer scientists are doing some amazing things — storing information in the sequence of nucleotide bases along the DNA molecule.
From ars technica:
It’s easy to get excited about the idea of encoding information in single molecules, which seems to be the ultimate end of the miniaturization that has been driving the electronics industry. But it’s also easy to forget that we’ve been beaten there—by a few billion years. The chemical information present in biomolecules was critical to the origin of life and probably dates back to whatever interesting chemical reactions preceded it.
It’s only within the past few decades, however, that humans have learned to speak DNA. Even then, it took a while to develop the technology needed to synthesize and determine the sequence of large populations of molecules. But we’re there now, and people have started experimenting with putting binary data in biological form. Now, a new study has confirmed the flexibility of the approach by encoding everything from an MP3 to the decoding algorithm into fragments of DNA. The cost analysis done by the authors suggests that the technology may soon be suitable for decade-scale storage, provided current trends continue.
Computer data is in binary, while each location in a DNA molecule can hold any one of four bases (A, T, C, and G). Rather than using all that extra information capacity, however, the authors used it to avoid a technical problem. Stretches of a single type of base (say, TTTTT) are often not sequenced properly by current techniques—in fact, this was the biggest source of errors in the previous DNA data storage effort. So for this new encoding, they used one of the bases to break up long runs of any of the other three.
(To explain how this works practically, let’s say the A, T, and C encoded information, while G represents “more of the same.” If you had a run of four A’s, you could represent it as AAGA. But since the G doesn’t encode for anything in particular, TTGT can be used to represent four T’s. The only thing that matters is that there are no more than two identical bases in a row.)
That leaves three bases to encode information, so the authors converted their information into trinary. In all, they encoded a large number of works: all 154 Shakespeare sonnets, a PDF of a scientific paper, a photograph of the lab some of them work in, and an MP3 of part of Martin Luther King’s “I have a dream” speech. For good measure, they also threw in the algorithm they use for converting binary data into trinary.
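For readers who think in code, here is a minimal Python sketch of the kind of scheme described above: three bases each carry one base-3 digit, and the fourth base is inserted only so that no three identical bases ever appear in a row. The encoding used in the actual study is more sophisticated than this, so read it as an illustration of the idea rather than the researchers’ method.

```python
# Toy version of the run-breaking scheme: A, T, C carry data; G carries no
# data and only breaks up runs so no base appears three times in a row.
DATA_BASES = "ATC"
SPACER = "G"

def to_trits(data):
    """Convert a byte string into a list of base-3 digits (trits)."""
    number = int.from_bytes(data, "big")
    trits = []
    while number:
        number, trit = divmod(number, 3)
        trits.append(trit)
    return list(reversed(trits)) or [0]

def encode(data):
    """Encode bytes as a DNA string with no long runs of one base."""
    dna = []
    for trit in to_trits(data):
        base = DATA_BASES[trit]
        if dna[-2:] == [base, base]:   # two identical bases already emitted
            dna.append(SPACER)         # break the run before adding a third
        dna.append(base)
    return "".join(dna)

def decode(dna):
    """Invert encode(): drop the spacers and rebuild the bytes.
    (Leading zero bytes are not preserved in this toy version.)"""
    number = 0
    for base in dna:
        if base != SPACER:
            number = number * 3 + DATA_BASES.index(base)
    size = max(1, (number.bit_length() + 7) // 8)
    return number.to_bytes(size, "big")

message = b"Shall I compare thee to a summer's day?"
strand = encode(message)
assert decode(strand) == message       # round-trips cleanly
```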
Once in trinary, the results were encoded into the error-avoiding DNA code described above. The resulting sequence was then broken into chunks that were easy to synthesize. Each chunk came with parity information (for error correction), a short file ID, and some data that indicates the offset within the file (so, for example, that the sequence holds digits 500-600). To provide an added level of data security, 100-base-long DNA inserts were staggered by 25 bases so that consecutive fragments had a 75-base overlap. Thus, many sections of the file were carried by four different DNA molecules.
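And a companion sketch of that fragment layout, again only as an illustration: 100-base pieces staggered by 25 bases, each tagged with a file ID and an offset, with the parity information left out.

```python
# Illustrative fragment layout: overlapping, addressable pieces of the
# encoded strand. With length=100 and step=25, consecutive pieces overlap
# by 75 bases, so most positions appear in four different molecules.
# Parity bases are omitted; any tail shorter than one window is ignored.

def fragment(strand, file_id, length=100, step=25):
    """Split an encoded strand into overlapping, addressable pieces."""
    pieces = []
    for offset in range(0, max(1, len(strand) - length + 1), step):
        pieces.append({
            "file_id": file_id,
            "offset": offset,                        # position within the file
            "bases": strand[offset:offset + length],
        })
    return pieces

def reassemble(pieces):
    """Rebuild the strand; the 75-base overlap covers a few missing pieces."""
    strand = ""
    for piece in sorted(pieces, key=lambda p: p["offset"]):
        strand = strand[:piece["offset"]] + piece["bases"]
    return strand
```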
And it all worked brilliantly—mostly. For most of the files, the authors’ sequencing and analysis protocol could reconstruct an error-free version of the file without any intervention. One, however, ended up with two 25-base-long gaps, presumably resulting from a particular sequence that is very difficult to synthesize. Based on parity and other data, they were able to reconstruct the contents of the gaps, but understanding why things went wrong in the first place would be critical to understanding how well suited this method is to long-term archiving of data.
Read the entire article following the jump.
Image: Title page of Shakespeare’s Sonnets (1609). Courtesy of Wikipedia / Public Domain.
- Your City as an Information Warehouse>
Big data keeps getting bigger and computers keep getting faster. Some theorists believe that the universe is a giant computer or a computer simulation; that principles of information science govern the cosmos. While this notion is one of the most recent radical ideas to explain our existence, there is no doubt that information is our future. Data surrounds us; we are becoming data-points, and our cities are our information-rich databases.
From the Economist:
In 1995 George Gilder, an American writer, declared that “cities are leftover baggage from the industrial era.” Electronic communications would become so easy and universal that people and businesses would have no need to be near one another. Humanity, Mr Gilder thought, was “headed for the death of cities”.
It hasn’t turned out that way. People are still flocking to cities, especially in developing countries. Cisco’s Mr Elfrink reckons that in the next decade 100 cities, mainly in Asia, will reach a population of more than 1m. In rich countries, to be sure, some cities are sad shadows of their old selves (Detroit, New Orleans), but plenty are thriving. In Silicon Valley and the newer tech hubs what Edward Glaeser, a Harvard economist, calls “the urban ability to create collaborative brilliance” is alive and well.
Cheap and easy electronic communication has probably helped rather than hindered this. First, connectivity is usually better in cities than in the countryside, because it is more lucrative to build telecoms networks for dense populations than for sparse ones. Second, electronic chatter may reinforce rather than replace the face-to-face kind. In his 2011 book, “Triumph of the City”, Mr Glaeser theorises that this may be an example of what economists call “Jevons’s paradox”. In the 19th century the invention of more efficient steam engines boosted rather than cut the consumption of coal, because they made energy cheaper across the board. In the same way, cheap electronic communication may have made modern economies more “relationship-intensive”, requiring more contact of all kinds.
Recent research by Carlo Ratti, director of the SENSEable City Laboratory at the Massachusetts Institute of Technology, and colleagues, suggests there is something to this. The study, based on the geographical pattern of 1m mobile-phone calls in Portugal, found that calls between phones far apart (a first contact, perhaps) are often followed by a flurry within a small area (just before a meeting).
A third factor is becoming increasingly important: the production of huge quantities of data by connected devices, including smartphones. These are densely concentrated in cities, because that is where the people, machines, buildings and infrastructures that carry and contain them are packed together. They are turning cities into vast data factories. “That kind of merger between physical and digital environments presents an opportunity for us to think about the city almost like a computer in the open air,” says Assaf Biderman of the SENSEable lab. As those data are collected and analysed, and the results are recycled into urban life, they may turn cities into even more productive and attractive places.
Some of these “open-air computers” are being designed from scratch, most of them in Asia. At Songdo, a South Korean city built on reclaimed land, Cisco has fitted every home and business with video screens and supplied clever systems to manage transport and the use of energy and water. But most cities are stuck with the infrastructure they have, at least in the short term. Exploiting the data they generate gives them a chance to upgrade it. Potholes in Boston, for instance, are reported automatically if the drivers of the cars that hit them have an app called Street Bump on their smartphones. And, particularly in poorer countries, places without a well-planned infrastructure have the chance of a leap forward. Researchers from the SENSEable lab have been working with informal waste-collecting co-operatives in São Paulo whose members sift the city’s rubbish for things to sell or recycle. By attaching tags to the trash, the researchers have been able to help the co-operatives work out the best routes through the city so they can raise more money and save time and expense.
Exploiting data may also mean fewer traffic jams. A few years ago Alexandre Bayen, of the University of California, Berkeley, and his colleagues ran a project (with Nokia, then the leader of the mobile-phone world) to collect signals from participating drivers’ smartphones, showing where the busiest roads were, and feed the information back to the phones, with congested routes glowing red. These days this feature is common on smartphones. Mr Bayen’s group and IBM Research are now moving on to controlling traffic and thus easing jams rather than just telling drivers about them. Within the next three years the team is due to build a prototype traffic-management system for California’s Department of Transportation.
Cleverer cars should help, too, by communicating with each other and warning drivers of unexpected changes in road conditions. Eventually they may not even have drivers at all. And thanks to all those data they may be cleaner, too. At the Fraunhofer FOKUS Institute in Berlin, Ilja Radusch and his colleagues show how hybrid cars can be automatically instructed to switch from petrol to electric power if local air quality is poor, say, or if they are going past a school.
Images of cities courtesy of Google search.
- Light From Gravity>
Often the best creative ideas and the most elegant solutions are the simplest. GravityLight is an example of this type of innovation. Here’s the problem: replace damaging and expensive kerosene fuel lamps in Africa with a less harmful and cheaper alternative. And here’s the solution.
From ars technica:
A London design consultancy has developed a cheap, clean, and safer alternative to the kerosene lamp. Kerosene burning lamps are thought to be used by over a billion people in developing nations, often in remote rural parts where electricity is either prohibitively expensive or simply unavailable. Kerosene’s potential replacement, GravityLight, is powered by gravity without the need of a battery—it’s also seen by its creators as a superior alternative to solar-powered lamps.
Kerosene lamps are problematic in three ways: they release pollutants which can contribute to respiratory disease; they pose a fire risk; and, thanks to the ongoing need to buy kerosene fuel, they are expensive to run. Research out of Brown University from July of last year called kerosene lamps a “significant contributor to respiratory diseases, which kill over 1.5 million people every year” in developing countries. The same paper found that kerosene lamps were responsible for 70 percent of fires (which cause 300,000 deaths every year) and 80 percent of burns. The World Bank has compared the indoor use of a kerosene lamp with smoking two packs of cigarettes per day.
The economics of the kerosene lamps are nearly as problematic, with the fuel costing many rural families a significant proportion of their income. The designers of the GravityLight say 10 to 20 percent of household income is typical, and they describe kerosene as a poverty trap, locking people into a “permanent state of subsistence living.” Considering that the median rural price of kerosene in Tanzania, Mali, Ghana, Kenya, and Senegal is $1.30 per liter, and the average rural income in Tanzania is under $9 per month, the designers’ figures seem depressingly plausible.
Approached by the charity Solar Aid to design a solar-powered LED alternative, London design consultancy Therefore shifted the emphasis away from solar, which requires expensive batteries that degrade over time. The company’s answer is both simpler and more radical: an LED lamp driven by a bag of sand, earth, or stones, pulled toward the Earth by gravity.
It takes only seconds to hoist the bag into place, after which the lamp provides up to half an hour of ambient light, or about 18 minutes of brighter task lighting. Though it isn’t clear quite how much light the GravityLight emits, its makers insist it is more than a kerosene lamp. Also unclear are the precise inner workings of the device, though clearly the weighted bag pulls a cord, driving an inner mechanism with a low-powered dynamo, with the aid of some robust plastic gearing. Talking to Ars by telephone, Therefore’s Jim Fullalove was loath to divulge details, but did reveal the gearing took the kinetic energy from a weighted bag descending at a rate of a millimeter per second to power a dynamo spinning at 2,000 rpm.
Read more about GravityLight after the jump.
Video courtesy of GravityLight.
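As a footnote, the two figures Fullalove did give (a descent rate of a millimetre per second and a dynamo speed of 2,000 rpm) invite a quick back-of-the-envelope check. The bag’s mass and the size of the drive drum are not in the article, so the values used for them below are illustrative guesses only.

```python
# Back-of-the-envelope numbers for the GravityLight. The descent rate and
# dynamo speed are quoted in the article; bag mass and drum size are ASSUMED.
G = 9.81                   # m/s^2
bag_mass = 12.0            # kg (assumed)
descent_speed = 0.001      # m/s, i.e. one millimetre per second (quoted)
dynamo_speed = 2000 / 60   # revolutions per second (2,000 rpm, quoted)
drum_circumference = 0.10  # m (assumed)

power = bag_mass * G * descent_speed                # ~0.12 W before losses
drum_speed = descent_speed / drum_circumference     # drum revolutions per second
gear_ratio = dynamo_speed / drum_speed              # ~3,300 : 1 step-up
drop = descent_speed * 30 * 60                      # ~1.8 m over half an hour

print(f"power ~ {power:.2f} W, gear ratio ~ {gear_ratio:,.0f}:1, drop ~ {drop:.1f} m")
```

On those assumptions the falling bag supplies roughly a tenth of a watt before losses, the gearing has to step the speed up by a factor of a few thousand, and the bag drops about 1.8 metres over the half hour of light, which is consistent with a small LED lamp hung from the ceiling of a single-storey room.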
- Consumer Electronics Gone Mad>
If you eat too quickly, then HAPIfork is the new eating device for you. If you have trouble seeing text on your palm-sized iPad, then Lenovo’s 27-inch tablet is for you. If you need musical motivation from One Direction to get your children to brush their teeth, then the Brush Buddies toothbrush is for you, and your kids. If you’re tired of technology, then stay away from this year’s Consumer Electronics Show (CES 2013).
If you’d like to see other strange products looking for a buyer, follow the jump.
Image: The HAPIfork monitors how fast its user is eating and vibrates to alert them when they eat faster than a pre-determined rate, which altogether sounds like an incredibly strange eating experience. Courtesy of CES / Telegraph.
- AnNoyIng gOOgle, Purpoogle and Elgoog>
Bored with Google’s homepage? Paranoid over Google’s omniscience? If so, take a break from the omnipresent search engine and visit some of Google’s lesser known relatives. Our two favorites below:
More Google parodies after the jump.
- Big Brother is Mapping You>
One hopes that Google’s intention to “organize the world’s information” will remain benign for the foreseeable future. Yet, as more and more of our surroundings and moves are mapped and tracked online, and increasingly offline, it would be wise to remain ever vigilant. Many put up with the encroachment of advertisers and promoters into almost every facet of their daily lives as a necessary, modern evil. But where is the dividing line that separates an ignorable irritation from an intrusion of privacy and a grab for control? For the paranoid amongst us, it may only be a matter of time before our digital footprints come under the increasing scrutiny, and control, of organizations with grander designs.
From the Guardian:
Eight years ago, Google bought a cool little graphics business called Keyhole, which had been working on 3D maps. Along with the acquisition came Brian McClendon, aka “Bam”, a tall and serious Kansan who in a previous incarnation had supplied high-end graphics software that Hollywood used in films including Jurassic Park and Terminator 2. It turned out to be a very smart move.
Today McClendon is Google’s Mr Maps – presiding over one of the fastest-growing areas in the search giant’s business, one that has recently left arch-rival Apple red-faced and threatens to make Google the most powerful company in mapping the world has ever seen.
Google is throwing its considerable resources into building arguably the most comprehensive map ever made. It’s all part of the company’s self-avowed mission to organize all the world’s information, says McClendon.
“You need to have the basic structure of the world so you can place the relevant information on top of it. If you don’t have an accurate map, everything else is inaccurate,” he says.
It’s a message that will make Apple cringe. Apple triggered howls of outrage when it pulled Google Maps off the latest iteration of its iPhone software for its own bug-riddled and often wildly inaccurate map system. “We screwed up,” Apple boss Tim Cook said earlier this week.
McClendon won’t comment on when and if Apple will put Google’s application back on the iPhone. Talks are ongoing and he’s at pains to point out what a “great” product the iPhone is. But when – or if – Apple caves, it will be a huge climbdown. In the meantime, what McClendon really cares about is building a better map.
This is not the first time Google has made a land grab in the real world, as the publishing industry will attest. Unhappy that online search was missing all the good stuff inside old books, Google – controversially – set about scanning the treasures of Oxford’s Bodleian library and some of the world’s other most respected collections.
Its ambitions in maps may be bigger, more far reaching and perhaps more controversial still. For a company developing driverless cars and glasses that are wearable computers, maps are a serious business. There’s no doubting the scale of McClendon’s vision. His license plate reads: ITLLHPN.
Until the 1980s, maps were still largely a pen and ink affair. Then mainframe computers allowed the development of geographic information system software (GIS), which was able to display and organise geographic information in new ways. By 2005, when Google launched Google Maps, computing power allowed GIS to go mainstream. Maps were about to change the way we find a bar, a parcel or even a story. Washington DC’s homicidewatch.org, for example, uses Google Maps to track and follow deaths across the city. Now the rise of mobile devices has pushed mapping into everyone’s hands and to the front line in the battle of the tech giants.
It’s easy to see why Google is so keen on maps. Some 20% of Google’s queries are now “location specific”. The company doesn’t split the number out but on mobile the percentage is “even higher”, says McClendon, who believes maps are set to unfold themselves ever further into our lives.
Google’s approach to making better maps is about layers. Starting with an aerial view, in 2007 Google added Street View, an on-the-ground photographic map snapped from its own fleet of specially designed cars that now covers 5 million of the 27.9 million miles of roads on Google Maps.
Google isn’t stopping there. The company has put cameras on bikes to cover harder-to-reach trails, and you can tour the Great Barrier Reef thanks to diving mappers. Luc Vincent, the Google engineer known as “Mr Street View”, carried a 40lb pack of snapping cameras down to the bottom of the Grand Canyon and then back up along another trail as fellow hikers excitedly shouted “Google, Google” at the man with the space-age backpack. McClendon has also played his part. He took his camera to Antarctica, taking 500 or more photos of a penguin-filled island to add to Google Maps. “The penguins were pretty oblivious. They just don’t care about people,” he says.
Now the company has projects called Ground Truth, which corrects errors online, and Map Maker, a service that lets people make their own maps. In the western world the product has been used to add a missing road or correct a one-way street that is pointing the wrong way, and to generally improve what’s already there. In Africa, Asia and other less well covered areas of the world, Google is – literally – helping people put themselves on the map.
In 2008, it could take six to 18 months for Google to update a map. The company would have to go back to the firm that provided its map information and get them to check the error, correct it and send it back. “At that point we decided we wanted to bring that information in house,” says McClendon. Google now updates its maps hundreds of times a day. Anyone can correct errors with road signs or add missing roads and other details; Google double-checks and relies on other users to spot mistakes.
Thousands of people use Google’s Map Maker daily to recreate their world online, says Michael Weiss-Malik, engineering director at Google Maps. “We have some Pakistanis living in the UK who have basically built the whole map,” he says. Using aerial shots and local information, people have created the most detailed, and certainly most up-to-date, maps of cities like Karachi that have probably ever existed. Regions of Africa and Asia have been added by map-mad volunteers.
- Fly Me to the Moon: Mere Millionaires Need Not Apply>
Golden Spike, a Boulder, Colorado-based company, has an interesting proposition for the world’s restless billionaires. It is offering a two-seat trip to the Moon, and back, for a tidy sum of $1.5 billion. And, the company is even throwing in a moonwalk. The first trip is planned for 2020.
From the Washington Post:
It had to happen: A start-up company is offering rides to the moon. Book your seat now — though it’s going to set you back $750 million (it’s unclear if that includes baggage fees).
At a news conference scheduled for Thursday afternoon in Washington, former NASA science administrator Alan Stern plans to announce the formation of Golden Spike, which, according to a news release, is “the first company planning to offer routine exploration expeditions to the surface of the Moon.”
“We can do this,” an excited Stern said Thursday morning during a brief phone interview.
The gist of the company’s strategy is that it’ll repurpose existing space hardware for commercial lunar missions and take advantage of NASA-sanctioned commercial rockets that, in a few years, are supposed to put astronauts in low Earth orbit. Stern said a two-person lunar mission, complete with moonwalking and, perhaps best of all, a return to Earth, would cost $1.5 billion.
“Two seats, 750 each,” Stern said. “The trick is 40 years old. We know how to do this. The difference is now we have rockets and space capsules in the inventory … They’re already developed … We don’t have to invent them from a clean sheet of paper. We don’t have to start over.”
The statement says, “The company’s plan is to maximize use of existing rockets and to market the resulting system to nations, individuals, and corporations with lunar exploration objectives and ambitions.” Golden Spike says its plans have been vetted by a former space shuttle commander, a space shuttle program manager and a member of the National Academy of Engineering.
And Newt Gingrich is involved: The former speaker of the House, who was widely mocked this year when, campaigning for president, he talked at length about ambitious plans for a permanent moon base by 2021, is listed as a member of Golden Spike’s board of advisers.
Also on that list is Bill Richardson, the former New Mexico governor and secretary of the Department of Energy. The chairman of the board is Gerry Griffin, a former Apollo mission flight director and former director of NASA’s Johnson Space Center.
The private venture fills a void, as it were, in the wake of President Obama’s decision to cancel NASA’s Constellation program, which was initiated during the George W. Bush years as the next step in space exploration after the retirement of the space shuttle. Constellation aimed to put astronauts back on the moon by 2020 for what would become extended stays at a lunar base.
A sweeping review from a presidential committee led by retired aerospace executive Norman Augustine concluded that NASA didn’t have the money to achieve Constellation’s goals. The administration and Congress have given NASA new marching orders that require the building of a heavy-lift rocket that would give the agency the ability to venture far beyond low Earth orbit.
Routine access to space is being shifted to companies operating under commercial contracts. But as those companies try to develop commercial spaceflight, the United States lacks the ability to launch astronauts directly and must purchase flights to the international space station from the Russians.
Image courtesy of The Golden Spike Company.
- Steam Without Boiling Water>
Despite what seems to be an overwhelmingly digital shift in our lives, we still live in a world of steam. Steam plays a vital role in generating most of the world’s electricity, steam heats our buildings (especially if you live in New York City), steam sterilizes our medical supplies.
So, in a research discovery with far-reaching implications, scientists have succeeded in making steam at room temperature without actually boiling water. All courtesy of some ingenious nanoparticles.
From Technology Review:
Steam is a key ingredient in a wide range of industrial and commercial processes—including electricity generation, water purification, alcohol distillation, and medical equipment sterilization.
Generating that steam, however, typically requires vast amounts of energy to heat and eventually boil water or another fluid. Now researchers at Rice University have found a shortcut. Using light-absorbing nanoparticles suspended in water, the group was able to turn the water molecules surrounding the nanoparticles into steam while scarcely raising the temperature of the remaining water. The trick could dramatically reduce the cost of many steam-reliant processes.
The Rice team used a Fresnel lens to focus sunlight on a small tube of water containing high concentrations of nanoparticles suspended in the fluid. The water, which had been cooled to near freezing, began generating steam within five to 20 seconds, depending on the type of nanoparticles used. Changes in temperature, pressure, and mass revealed that 82 percent of the sunlight absorbed by the nanoparticles went directly to generating steam while only 18 percent went to heating water.
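For a rough sense of what that 82/18 split means in practice, here is a back-of-envelope sketch (our own arithmetic, not the Rice group’s analysis). The absorbed power, water mass and duration are assumed values chosen purely for illustration, and the sensible heat needed to bring the vaporized molecules up to temperature is ignored.

```python
# Back-of-envelope sketch (not the Rice group's analysis): what an 82/18 split
# between vaporization and bulk heating implies for a small solar collector.
# Assumed inputs; the sensible heat of the vaporized water is neglected.

H_VAP = 2.26e6     # latent heat of vaporization of water, J/kg (approx.)
C_WATER = 4186.0   # specific heat of liquid water, J/(kg*K)

def steam_and_heating(absorbed_power_w, water_mass_kg, seconds, steam_fraction=0.82):
    """Split absorbed solar energy between making steam and warming the bulk water."""
    energy = absorbed_power_w * seconds            # total absorbed energy, J
    steam_kg = steam_fraction * energy / H_VAP     # 82% drives vaporization
    temp_rise_c = (1.0 - steam_fraction) * energy / (water_mass_kg * C_WATER)
    return steam_kg, temp_rise_c

# Example: 100 W of absorbed sunlight on 0.25 kg of water for five minutes.
steam_kg, temp_rise = steam_and_heating(100.0, 0.25, 300)
print(f"steam: {steam_kg * 1000:.1f} g, bulk water warms by only {temp_rise:.1f} deg C")
```

Run with those assumed numbers, the sketch yields roughly ten grams of steam while the remaining quarter-kilogram of water warms by only about five degrees, which is the counterintuitive behaviour the researchers describe.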
“It’s a new way to make steam without boiling water,” says Naomi Halas, director of the Laboratory for Nanophotonics at Rice University. Halas says that the work “opens up a lot of interesting doors in terms of what you can use steam for.”
The new technique could, for instance, lead to inexpensive steam-generation devices for small-scale water purification, sterilization of medical instruments, and sewage treatment in developing countries with limited resources and infrastructure.
The use of nanoparticles to increase heat transfer in water and other fluids has been well studied, but few researchers have looked at using the particles to absorb light and generate steam.
In the current study, Halas and colleagues used nanoparticles optimized to absorb the widest possible spectrum of sunlight. When light hits the particles, their temperature quickly rises to well above 100 °C, the boiling point of water, causing surrounding water molecules to vaporize.
Precisely how the particles and water molecules interact remains somewhat of a mystery. Conventional heat-transfer models suggest that the absorbed sunlight should dissipate into the surrounding fluid before causing any water to boil. “There seems to be some nanoscale thermal barrier, because it’s clearly making steam like crazy,” Halas says.
The system devised by Halas and colleagues exhibited an efficiency of 24 percent in converting sunlight to steam.
Todd Otanicar, a mechanical engineer at the University of Tulsa who was not involved in the current study, says the findings could have significant implications for large-scale solar thermal energy generation. Solar thermal power stations typically use concentrated sunlight to heat a fluid such as oil, which is then used to heat water to generate steam. Otanicar estimates that by generating steam directly with nanoparticles in water, such a system could see an increased efficiency of 3 to 5 percent and a cost savings of 10 percent because a less complex design could be used.
Image: Stott Park Bobbin Mill Steam Engine. Courtesy of Wikipedia.
- The Rise of the Industrial Internet>
As the internet that connects humans reaches a stable saturation point, the industrial internet — the network that connects things — is increasing its growth and reach.
From the New York Times:
When Sharoda Paul finished a postdoctoral fellowship last year at the Palo Alto Research Center, she did what most of her peers do — considered a job at a big Silicon Valley company, in her case, Google. But instead, Ms. Paul, a 31-year-old expert in social computing, went to work for General Electric.
Ms. Paul is one of more than 250 engineers recruited in the last year and a half to G.E.’s new software center here, in the East Bay of San Francisco. The company plans to increase that work force of computer scientists and software developers to 400, and to invest $1 billion in the center by 2015. The buildup is part of G.E.’s big bet on what it calls the “industrial Internet,” bringing digital intelligence to the physical world of industry as never before.
The concept of Internet-connected machines that collect data and communicate, often called the “Internet of Things,” has been around for years. Information technology companies, too, are pursuing this emerging field. I.B.M. has its “Smarter Planet” projects, while Cisco champions the “Internet of Everything.”
But G.E.’s effort, analysts say, shows that Internet-era technology is ready to sweep through the industrial economy much as the consumer Internet has transformed media, communications and advertising over the last decade.
In recent months, Ms. Paul has donned a hard hat and safety boots to study power plants. She has ridden on a rail locomotive and toured hospital wards. “Here, you get to work with things that touch people in so many ways,” she said. “That was a big draw.”
G.E. is the nation’s largest industrial company, a producer of aircraft engines, power plant turbines, rail locomotives and medical imaging equipment. It makes the heavy-duty machinery that transports people, heats homes and powers factories, and lets doctors diagnose life-threatening diseases.
G.E. resides in a different world from the consumer Internet. But the major technologies that animate Google and Facebook are also vital ingredients in the industrial Internet — tools from artificial intelligence, like machine-learning software, and vast streams of new data. In industry, the data flood comes mainly from smaller, more powerful and cheaper sensors on the equipment.
Smarter machines, for example, can alert their human handlers when they will need maintenance, before a breakdown. It is the equivalent of preventive and personalized care for equipment, with less downtime and more output.
“These technologies are really there now, in a way that is practical and economic,” said Mark M. Little, G.E.’s senior vice president for global research.
G.E.’s embrace of the industrial Internet is a long-term strategy. But if its optimism proves justified, the impact could be felt across the economy.
The outlook for technology-led economic growth is a subject of considerable debate. In a recent research paper, Robert J. Gordon, a prominent economist at Northwestern University, argues that the gains from computing and the Internet have petered out in the last eight years.
Since 2000, Mr. Gordon asserts, invention has focused mainly on consumer and communications technologies, including smartphones and tablet computers. Such devices, he writes, are “smaller, smarter and more capable, but do not fundamentally change labor productivity or the standard of living” in the way that electric lighting or the automobile did.
But others say such pessimism misses the next wave of technology. “The reason I think Bob Gordon is wrong is precisely because of the kind of thing G.E. is doing,” said Andrew McAfee, principal research scientist at M.I.T.’s Center for Digital Business.
Today, G.E. is putting sensors on everything, be it a gas turbine or a hospital bed. The mission of the engineers in San Ramon is to design the software for gathering data, and the clever algorithms for sifting through it for cost savings and productivity gains. Across the industries it covers, G.E. estimates such efficiency opportunities at as much as $150 billion.
Image: Internet of Things. Courtesy of Intel.
- Startup Culture: New is the New New>
Starting up a new business was once a demanding and complex process, often undertaken in anonymity in the long shadows between the hours of a regular job. It still is, of course. However, nowadays “the startup” has become more of an event. The tech sector has raised this to a fine art by spawning an entire self-sustaining and self-promoting industry around startups.
You’ll find startup gurus, serial entrepreneurs and digital prophets — yes, AOL has a digital prophet on its payroll — strutting around on stage, twittering tips in the digital world, leading business plan bootcamps, pontificating on accelerator panels, hosting incubator love-ins in coffee shops or splashed across the covers of Entrepreneur or Inc or FastCompany magazines on an almost daily basis. Beware! The back of your cereal box may be next.
From the Telegraph:
I’ve seen the best minds of my generation destroyed by marketing, shilling for ad clicks, dragging themselves through the strip-lit corridors of convention centres looking for a venture capitalist. Just as X Factor has convinced hordes of tone deaf kids they can be pop stars, the startup industry has persuaded thousands that they can be the next rockstar entrepreneur. What’s worse is that while X Factor clogs up the television schedules for a couple of months, tech conferences have proliferated to such an extent that not a week goes by without another excuse to slope off. Some founders spend more time on panels pontificating about their business plans than actually executing them.
Earlier this year, I witnessed David Shing, AOL’s Digital Prophet – that really is his job title – delivering the opening remarks at a tech conference. The show summed up the worst elements of the self-obsessed, hyperactive world of modern tech. A 42-year-old man with a shock of Russell Brand hair, expensive spectacles and paint-splattered trousers, Shingy paced the stage spouting buzzwords: “Attention is the new currency, man…the new new is providing utility, brothers and sisters…speaking on the phone is completely cliche.” The audience lapped it all up. At these rallies in praise of the startup, enthusiasm and energy matter much more than making sense.
Startup culture is driven by slinging around superlatives – every job is an “incredible opportunity”, every product is going to “change lives” and “disrupt” an established industry. No one wants to admit that most startups stay stuck right there at the start, pub singers pining for their chance in the spotlight. While the startups and hangers-on milling around in the halls bring in stacks of cash for the event organisers, it’s the already successful entrepreneurs on stage and the investors who actually benefit from these conferences. They meet up at exclusive dinners and in the speakers’ lounge where the real deals are made. It’s Studio 54 for geeks.
Image: Startup, WA. Courtesy of Wikipedia.
- The Most Annoying Technology? The Winner Is...>
We all have owned or have used or have come far too close to a technology that we absolutely abhor and wish numerous curses upon its inventors. Said gizmo may be the unfathomable VCR, the forever lost TV remote, the tinny sounding Sony Walkman replete with unraveling cassette tape, the Blackberry, or even Facebook.
Ours over here at theDiagonal is the voice recognition system used by 99 percent of so-called customer service organizations. You know how it goes, something like this: “please say ‘one’ for new accounts”, “please say ‘two’ if you are an existing customer”, “please say ‘three’ for returns”, “please say ‘Kyrgyzstan’ to speak with a customer service representative”.
Wired recently listed their least favorite, most hated technologies. No surprises here — winners of this dubious award include the Bluetooth headset, CD-ROM, and Apple TV remote.
From Wired:
Look, here’s a good rule of thumb: Once you get out of the car, or leave your desk, take off the headset. Nobody wants to hear your end of the conversation. That’s not idle speculation, it’s science! Headsets just make it worse. At least when there’s a phone involved, there are visual cues that say “I’m on the phone.” I mean, other than hearing one end of a shouted conversation.
Is your home set on a large wooded lot with acreage to spare between you and your closest neighbor? Did a tornado power through your yard last night, leaving your property covered in limbs and leaves? No? Then get a rake, dude. Leaf blowers are so irritating, they have been outlawed in some towns. Others should follow suit.
Image courtesy of the Sun/Mercury News.
- The Tubes of the Internets>
Google lets the world peek at the many tubes that form a critical part of its search engine infrastructure — functional and pretty too.
From the Independent:
They are the cathedrals of the information age – with the colour scheme of an adventure playground.
For the first time, Google has allowed cameras into its high security data centres – the beating hearts of its global network that allow the web giant to process 3 billion internet searches every day.
Only a small band of Google employees have ever been inside the doors of the data centres, which are hidden away in remote parts of North America, Belgium and Finland.
Their workplaces glow with the blinking lights of LEDs on internet servers reassuring technicians that all is well with the web, and hum to the sound of hundreds of giant fans and thousands of gallons of water, that stop the whole thing overheating.
“Very few people have stepped inside Google’s data centers [sic], and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard,” the company said yesterday. Row upon row of glowing servers send and receive information from 20 billion web pages every day, while towering libraries store all the data that Google has ever processed – in case of a system failure.
With data speeds 200,000 times faster than an ordinary home internet connection, Google’s centres in America can share huge amounts of information with European counterparts like the remote, snow-packed Hamina centre in Finland, in the blink of an eye.
Read the entire article after the jump, or take a look at more images from the bowels of Google after the leap.
- 3D Printing Coming to a Home Near You>
It seems that not too long ago we were writing about pioneering research into 3D printing and start-up businesses showing off their industrially focused prototype 3D printers. Now, only a couple of years later, there is a growing consumer market, home-based printers for under $3,000, and even a 3D printing expo — 3D Printshow. The future looks bright and very much three dimensional.
From the Independent:
It is Star Trek science made reality, with the potential for production-line replacement body parts, aeronautical spares, fashion, furniture and virtually any other object on demand. It is 3D printing, and now people in Britain can try it for themselves.
The cutting-edge technology, which layers plastic resin in a manner similar to an inkjet printer to create 3D objects, is on its way to becoming affordable for home use. Some of its possibilities will be on display at the UK’s first 3D-printing trade show from Friday to next Sunday at The Brewery in central London.
Clothes made using the technique will be exhibited in a live fashion show, which will include the unveiling of a hat designed for the event by the milliner Stephen Jones, and a band playing a specially composed score on 3D-printed musical instruments.
Some 2,000 consumers are expected to join 1,000 people from the burgeoning industry to see what the technique has to offer, including jewellery and art. A 3D body scanner, which can reproduce a “mini” version of the person scanned, will also be on display.
Workshops run by Jason Lopes of Legacy Effects, which provided 3D-printed models and props for cinema blockbusters such as the Iron Man series and Snow White and the Huntsman, will add a sprinkling of Hollywood glamour.
Kerry Hogarth, the woman behind 3D Printshow, said yesterday she aims to showcase the potential of the technology for families. While prices for printers start at around £1,500 – with DIY kits for less – they are expected to drop steadily over the coming year. One workshop, run by the Birmingham-based Black Country Atelier, will invite people to design a model vehicle and then see the result “printed” off for them to take home.
Image: 3D scanning and printing. Courtesy of Wikipedia.
- GigaBytes and TeraWatts>
Online social networks have expanded to include hundreds of millions of twitterati and their followers. An ever increasing volume of data, images, videos and documents continues to move into the expanding virtual “cloud”, hosted in many nameless data centers. Virtual processing and computation on demand is growing by leaps and bounds.
Yet while business models for the providers of these internet services remain ethereal, one segment of this business ecosystem — electricity companies and utilities — is salivating at the staggering demand for electrical power.
From the New York Times:
Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.
The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.
Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.
That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.
They are a mere fraction of the tens of thousands of data centers that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.
A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.
Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.
To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.
Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.
“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”
Image courtesy of the AP / Thanassis Stavrakis.
- Social Media and Vanishing History>
Social media is great for notifying members in one’s circle of events in the here and now. Of course, most events turn out to be rather trivial, of the “what I ate for dinner” kind. However, social media also has a role in spreading word of more momentous social and political events; the Arab Spring comes to mind.
But, while Twitter and its peers may be a boon for those who live in the present moment and need to transmit their current status, it seems that our social networks are letting go of the past. Will history become lost and irrelevant to the Twitter generation?
A terrifying thought.
From Technology Review:
On 25 January 2011, a popular uprising began in Egypt that led to the overthrow of the country’s brutal president and to the first truly free elections. One of the defining features of this uprising and of others in the Arab Spring was the way people used social media to organise protests and to spread news.
Several websites have since begun the task of curating this content, which is an important record of events and how they unfolded. That led Hany SalahEldeen and Michael Nelson at Old Dominion University in Norfolk, Virginia, to take a deeper look at the material to see how much of the shared content was still live.
What they found has serious implications. SalahEldeen and Nelson say a significant proportion of the websites that this social media points to has disappeared. And the same pattern occurs for other culturally significant events, such as the H1N1 virus outbreak, Michael Jackson’s death and the Syrian uprising.
In other words, our history, as recorded by social media, is slowly leaking away.
Their method is straightforward. SalahEldeen and Nelson looked for tweets on six culturally significant events that occurred between June 2009 and March 2012. They then filtered the URLs these tweets pointed to and checked to see whether the content was still available on the web, either in its original form or in an archived form.
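That checking step is simple to sketch. The snippet below is a minimal illustration in the same spirit, not the researchers’ actual code: it tries each shared URL on the live web and, failing that, asks the Internet Archive’s public Wayback Machine availability endpoint whether a snapshot exists. The example URLs are placeholders.

```python
# Minimal link-rot check in the spirit of SalahEldeen and Nelson's method
# (not their actual code): classify each shared URL as live, archived or missing.
import requests

WAYBACK_API = "https://archive.org/wayback/available"

def classify_url(url, timeout=10):
    """Return 'live', 'archived' or 'missing' for a URL harvested from old tweets."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code < 400:
            return "live"
    except requests.RequestException:
        pass  # treat network errors like a dead link and fall through

    # Not reachable on the live web, so ask the Internet Archive for a snapshot.
    try:
        data = requests.get(WAYBACK_API, params={"url": url}, timeout=timeout).json()
        if data.get("archived_snapshots"):
            return "archived"
    except (requests.RequestException, ValueError):
        pass
    return "missing"

# Placeholder URLs standing in for links extracted from event-related tweets.
urls = ["http://example.com/some-shared-story", "http://example.org/another-link"]
counts = {"live": 0, "archived": 0, "missing": 0}
for u in urls:
    counts[classify_url(u)] += 1
print(counts)
```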
They found that the older the social media, the more likely its content was to be missing. In fact, they found an almost linear relationship between time and the percentage lost.
The numbers are startling. They say that 11 per cent of the social media content had disappeared within a year and 27 per cent within 2 years. Beyond that, SalahEldeen and Nelson say the world loses 0.02 per cent of its culturally significant social media material every day.
That’s a sobering thought.
Image: Movie poster for the 2002 film “The Man Without a Past”. The Man Without a Past (Finnish: Mies vailla menneisyyttä) is a 2002 Finnish comedy-drama film directed by Aki Kaurismäki. Courtesy of Wikipedia.
- How to Build a (Catastrophic) Website>
How-to infographics are as common as convenience stores at intersections. So it takes something special to get theDiagonal’s attention. This one fits the bill — courtesy of the geeks at MyDestination; replete with cheesy graphics and terrible advice.
- What's All the Fuss About Big Data?>
We excerpt an interview with big data pioneer and computer scientist, Alex Pentland, via the Edge. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.
While there is no exact definition of “big data” it tends to be characterized quantitatively and qualitatively differently from data commonly used by most organizations. Where regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to vast collections of data that often lie beyond the realm of regular computation. So, often big data requires vast and specialized storage and enormous processing capabilities. Data sets that fall into the big data area cover such areas as climate science, genomics, particle physics, and computational social science.
Big data holds true promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.
Alex Pentland via the Edge:
Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.
I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like location data off of your cell phone or credit card, it’s the little data breadcrumbs that you leave behind you as you move around in the world.
What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.
They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.
As a consequence, analysis of Big Data is increasingly about finding connections, connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data. Not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. What these are is systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. Big data gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.
The notion that it is connections between people that is really important is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.
Because it is so important to understand these connections Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that is actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.
Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.
The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.
Changing The Way We Design Systems
With Big Data traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant! As a consequence the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart.
Big data and the notion of Connection Science is outside of our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much out of our old, familiar ballpark.
With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.
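Pentland’s flu-on-Mondays example is easy to reproduce with a toy simulation (ours, not his): screen a few hundred purely random variables against an equally random “outcome” using the usual p < 0.05 threshold, and roughly one in twenty will look significant by chance alone.

```python
# Toy illustration of why "everything looks significant" in very large datasets:
# pure noise screened against a random outcome still yields ~5% "significant" hits.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_people, n_variables = 1000, 200
behaviours = rng.normal(size=(n_people, n_variables))  # random "behaviour" columns
flu = rng.normal(size=n_people)                        # an equally random outcome

false_hits = 0
for j in range(n_variables):
    r, p = pearsonr(behaviours[:, j], flu)
    if p < 0.05:                 # the usual laboratory-style significance threshold
        false_hits += 1

print(f"{false_hits} of {n_variables} noise variables look 'significant'")
```

With 200 noise variables, roughly ten will clear the threshold, none of them causal, which is exactly why Pentland argues for testing connections in the real world rather than trusting the statistics alone.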
The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use big data analytics, what they mean, and what to believe.
In fact, the data scientists themselves don’t have much intuition either…and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results that are found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t understand that they were overfitting the data. They didn’t have that dialogue between intuition and causal processes that generated the data. They just fit the model and got a good number and published it, and the reviewers didn’t catch it either. That’s pretty bad because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
Image courtesy of Techcrunch.
Science fiction stories and illustrations from our past provide a wonderful opportunity for us to test the predictive and prescient capabilities of their creators. Some, like Arthur C. Clarke, we are often reminded, foresaw the communications satellite and the space elevator. Others, such as science fiction great Isaac Asimov, fared less well in predicting future technology; while he is considered to have coined the term “robotics”, he famously predicted future computers and robots as using punched cards.
Illustrations of our future from the past are even more fascinating. One of the leading proponents of the science fiction illustration genre, or scientifiction, as it was titled in the mid-1920s, was Frank R. Paul. Paul illustrated many of the now classic U.S. pulp science fiction magazines beginning in the 1920s with vivid visuals of aliens, spaceships, destroyed worlds and bizarre technologies. Though, one of his less apocalyptic, but perhaps prescient, works showed a web-footed alien smoking a cigarette through a lengthy proboscis.
Of Frank R. Paul, Ray Bradbury is quoted as saying, “Paul’s fantastic covers for Amazing Stories changed my life forever.”
See more of Paul’s classic illustrations after the jump.
Image courtesy of 50Watts / Frank R. Paul.
- How Apple, With the Help of Others, Invented the iPhone>
Apple’s invention of the iPhone is a story of insight, collaboration, cannibalization and dogged persistence over the period of a decade.
From Slate:
Like many of Apple’s inventions, the iPhone began not with a vision, but with a problem. By 2005, the iPod had eclipsed the Mac as Apple’s largest source of revenue, but the music player that rescued Apple from the brink now faced a looming threat: The cellphone. Everyone carried a phone, and if phone companies figured out a way to make playing music easy and fun, “that could render the iPod unnecessary,” Steve Jobs once warned Apple’s board, according to Walter Isaacson’s biography.
Fortunately for Apple, most phones on the market sucked. Jobs and other Apple executives would grouse about their phones all the time. The simplest phones didn’t do much other than make calls, and the more functions you added to phones, the more complicated they were to use. In particular, phones “weren’t any good as entertainment devices,” Phil Schiller, Apple’s longtime marketing chief, testified during the company’s patent trial with Samsung. Getting music and video on 2005-era phones was too difficult, and if you managed that, getting the device to actually play your stuff was a joyless trudge through numerous screens and menus.
That was because most phones were hobbled by a basic problem—they didn’t have a good method for input. Hard keys (like the ones on the BlackBerry) worked for typing, but they were terrible for navigation. In theory, phones with touchscreens could do a lot more, but in reality they were also a pain to use. Touchscreens of the era couldn’t detect finger presses—they needed a stylus, and the only way to use a stylus was with two hands (one to hold the phone and one to hold the stylus). Nobody wanted a music player that required two-handed operation.
This is the story of how Apple reinvented the phone. The general outlines of this tale have been told before, most thoroughly in Isaacson’s biography. But the Samsung case—which ended last month with a resounding victory for Apple—revealed a trove of details about the invention, the sort of details that Apple is ordinarily loath to make public. We got pictures of dozens of prototypes of the iPhone and iPad. We got internal email that explained how executives and designers solved key problems in the iPhone’s design. We got testimony from Apple’s top brass explaining why the iPhone was a gamble.
Put it all together and you get a remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple’s bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple’s inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too—a device that would eat the iPod’s lunch. Apple’s only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple’s brass saw the phone as an opportunity for real innovation. “We wanted to build a phone for ourselves,” Scott Forstall, who heads the team that built the phone’s operating system, said at the trial. “We wanted to build a phone that we loved.”
The problem was how to do it. When Jobs unveiled the iPhone in 2007, he showed off a picture of an iPod with a rotary-phone dialer instead of a click wheel. That was a joke, but it wasn’t far from Apple’s initial thoughts about phones. The click wheel—the brilliant interface that powered the iPod (which was invented for Apple by a firm called Synaptics)—was a simple, widely understood way to navigate through menus in order to play music. So why not use it to make calls, too?
In 2005, Tony Fadell, the engineer who’s credited with inventing the first iPod, got hold of a high-end desk phone made by Samsung and Bang & Olufsen that you navigated using a set of numerical keys placed around a rotating wheel. A Samsung cell phone, the X810, used a similar rotating wheel for input. Fadell didn’t seem to like the idea. “Weird way to hold the cellphone,” he wrote in an email to others at Apple. But Jobs thought it could work. “This may be our answer—we could put the number pad around our clickwheel,” he wrote. (Samsung pointed to this thread as evidence for its claim that Apple’s designs were inspired by other companies, including Samsung itself.)
Around the same time, Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”
Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there are no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.
Read the entire article after the jump.
Retro design iPhone courtesy of Ubergizmo.
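As an aside, the two effects Ording demonstrated are easy to caricature in a few lines. The sketch below is not Apple’s code, just the basic idea: a scroll offset that coasts with decaying velocity after a flick, plus a spring that eases it back when it overshoots the end of the list. The friction and spring constants are assumed values.

```python
# Toy model of inertial scrolling and the rubber-band effect (illustrative only).
FRICTION = 0.95   # fraction of velocity kept each frame (assumed value)
SPRING = 0.2      # how strongly the "rubber band" pulls back (assumed value)

def simulate_flick(velocity, content_height, frames=120):
    """Return the scroll offset, frame by frame, after a single flick."""
    offset, offsets = 0.0, []
    for _ in range(frames):
        offset += velocity
        velocity *= FRICTION                 # inertial scrolling: coast, then slow down
        if offset > content_height:          # past the last item: rubber-band effect
            overshoot = offset - content_height
            offset -= SPRING * overshoot     # ease back toward the edge of the list
        offsets.append(offset)
    return offsets

path = simulate_flick(velocity=40.0, content_height=500.0)
print(f"peak offset {max(path):.0f}, settles near {path[-1]:.0f}")
```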
- Happy Birthday :-)>
Thirty years ago today Professor Scott Fahlman of Carnegie Mellon University sent what is believed to be the first emoticon embedded in an email. The symbol, :-), which he proposed as a joke marker, spread rapidly, morphed and evolved into a universe of symbolic nods, winks, and cyber-emotions.
For a lengthy list of popular emoticons, including some very interesting Eastern ones, jump here.
From the Independent:
To some, an email isn’t complete without the inclusion of a :-) or ;-). To others, the very idea of using “emoticons” – communicative graphics – makes the blood boil and represents all that has gone wrong with the English language.
Regardless of your view, as emoticons celebrate their 30th anniversary this month, it is accepted that they are here to stay. Their birth can be traced to the precise minute: 11:44am on 19 September 1982. At that moment, Professor Scott Fahlman, of Carnegie Mellon University in Pittsburgh, sent an email on an online electronic bulletin board that included the first use of the sideways smiley face: “I propose the following character sequence for joke markers: :-) Read it sideways.” More than anyone, he must take the credit – or the blame.
The aim was simple: to allow those who posted on the university’s bulletin board to distinguish between those attempting to write humorous emails and those who weren’t. Professor Fahlman had seen how simple jokes were often misunderstood and attempted to find a way around the problem.
This weekend, the professor, a computer science researcher who still works at the university, says he is amazed his smiley face took off: “This was a little bit of silliness that I tossed into a discussion about physics,” he says. “It was ten minutes of my life. I expected my note might amuse a few of my friends, and that would be the end of it.”
But once his initial email had been sent, it wasn’t long before it spread to other universities and research labs via the primitive computer networks of the day. Within months, it had gone global.
Nowadays dozens of variations are available, mainly as little yellow, computer graphics. There are emoticons that wear sunglasses; some cry, while others don Santa hats. But Professor Fahlman isn’t a fan.
“I think they are ugly, and they ruin the challenge of trying to come up with a clever way to express emotions using standard keyboard characters. But perhaps that’s just because I invented the other kind.”
Image courtesy of Wikipedia.
- Mobile Phone as Survival Gear>
So, here’s the premise. You have hiked alone for days and now find yourself isolated and lost in a dense forest half-way up a mountain. Yes! You have a cell phone. But, oh no, there is no service in this remote part of the world. So, no call for help and no GPS. And, it gets worse: you have no emergency supplies and no food. What can you do? The neat infographic offers some tips.
Infographic courtesy of Natalie Bracco / AnsonAlex.com.
- The Pros and Cons of Online Reviews>
There is no doubt that online reviews for products and services, from books to new cars to a vacation spot, have revolutionized shopping behavior. Internet and mobile technology has made gathering, reviewing and publishing open and honest crowdsourced opinion simple, efficient and ubiquitous.
However, the same tools that allow frank online discussion empower those wishing to cheat and manipulate the system. Cyberspace is rife with fake reviews, fake reviewers, inflated ratings, edited opinion, and paid insertions.
So, just as in any purchase transaction since the time when buyers and sellers first met, caveat emptor still applies.
From Slate:
The Internet has fundamentally changed the way that buyers and sellers meet and interact in the marketplace. Online retailers make it cheap and easy to browse, comparison shop, and make purchases with the click of a mouse. The Web can also, in theory, make for better-informed purchases—both online and off—thanks to sites that offer crowdsourced reviews of everything from dog walkers to dentists.
In a Web-enabled world, it should be harder for careless or unscrupulous businesses to exploit consumers. Yet recent studies suggest that online reviewing is hardly a perfect consumer defense system. Researchers at Yale, Dartmouth, and USC have found evidence that hotel owners post fake reviews to boost their ratings on sites like TripAdvisor—and might even be posting negative reviews of nearby competitors.
The preponderance of online reviews speaks to their basic weakness: Because it’s essentially free to post a review, it’s all too easy to dash off thoughtless praise or criticism, or, worse, to construct deliberately misleading reviews without facing any consequences. It’s what economists (and others) refer to as the cheap-talk problem. The obvious solution is to make it more costly to post a review, but that eliminates one of the main virtues of crowdsourcing: There is much more wisdom in a crowd of millions than in select opinions of a few dozen.
Of course, that wisdom depends on reviewers giving honest feedback. A few well-publicized incidents suggest that’s not always the case. For example, when Amazon’s Canadian site accidentally revealed the identities of anonymous book reviewers in 2004, it became apparent that many reviews came from publishers and from the authors themselves.
Technological idealists, perhaps not surprisingly, see a solution to this problem in cutting-edge computer science. One widely reported study last year showed that a text-analysis algorithm proved remarkably adept at detecting made-up reviews. The researchers instructed freelance writers to put themselves in the role of a hotel marketer who has been tasked by his boss with writing a fake customer review that is flattering to the hotel. They also compiled a set of comparison TripAdvisor reviews that the study’s authors felt were likely to be genuine. Human judges could not distinguish between the real ones and the fakes. But the algorithm correctly identified the reviews as real or phony with 90 percent accuracy by picking up on subtle differences, like whether the review described specific aspects of the hotel room layout (the real ones do) or mentioned matters that were unrelated to the hotel itself, like whether the reviewer was there on vacation or business (a marker of fakes). Great, but in the cat-and-mouse game of fraud vs. fraud detection, phony reviewers can now design feedback that won’t set off any alarm bells.
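For the curious, the general shape of such a text-classification approach is easy to sketch, though what follows is emphatically not the researchers’ model: a bag-of-words classifier trained on labelled reviews, with a tiny invented dataset included only to keep the example self-contained.

```python
# Minimal sketch of fake-review detection as text classification (illustrative only;
# not the algorithm from the study). The tiny dataset below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

reviews = [
    "The room layout was odd, with the desk blocking the balcony door",          # genuine-style
    "Check-in took ages but the bed by the window was comfortable",              # genuine-style
    "My husband and I were travelling on business and loved this luxury hotel",  # fake-style
    "Perfect vacation, my family says this is the best hotel experience ever",   # fake-style
] * 25
labels = [0, 0, 1, 1] * 25   # 0 = genuine, 1 = fake

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.3, random_state=0)

vectorizer = TfidfVectorizer(ngram_range=(1, 2))   # single words and word pairs as features
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print("held-out accuracy:", accuracy_score(y_test, predictions))
```

Real systems differ mainly in their features, picking up on cues like concrete spatial detail versus off-topic personal framing, and, as the article notes, determined fakers can learn to game whatever cues a detector relies on.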
Just how prevalent are fake reviews? A trio of business school professors, Yale’s Judith Chevalier, Yaniv Dover of Dartmouth, and USC’s Dina Mayzlin, have taken a clever approach to inferring an answer by comparing the reviews on two travel sites, TripAdvisor and Expedia. In order to post an Expedia review, a traveler needs to have made her hotel booking through the site. Hence, a hotel looking to inflate its rating or malign a competitor would have to incur the cost of paying itself through the site, accumulating transaction fees and tax liabilities in the process. On TripAdvisor, all you need to post fake reviews are a few phony login names and email addresses.
Differences in the overall ratings on TripAdvisor versus Expedia could simply be the result of a more sympathetic community of reviewers. (In practice, TripAdvisor’s ratings are actually lower on average.) So Mayzlin and her co-authors focus on the places where the gaps between TripAdvisor and Expedia reviews are widest. In their analysis, they looked at hotels that probably appear identical to the average traveler but have different underlying ownership or management. There are, for example, companies that own scores of franchises from hotel chains like Marriott and Hilton. Other hotels operate under these same nameplates but are independently owned. Similarly, many hotels are run on behalf of their owners by large management companies, while others are owner-managed. The average traveler is unlikely to know the difference between a Fairfield Inn owned by, say, the Pillar Hotel Group and one owned and operated by Ray Fisman. The study’s authors argue that the small owners and independents have less to lose by trying to goose their online ratings (or torpedo the ratings of their neighbors), reasoning that larger companies would be more vulnerable to punishment, censure, and loss of business if their shenanigans were uncovered. (The authors give the example of a recent case in which a manager at Ireland’s Clare Inn was caught posting fake reviews. The hotel is part of the Lynch Hotel Group, and in the wake of the fake postings, TripAdvisor removed suspicious reviews from other Lynch hotels, and unflattering media accounts of the episode generated negative PR that was shared across all Lynch properties.)
The researchers find that, even comparing hotels under the same brand, small owners are around 10 percent more likely to get five-star reviews on TripAdvisor than they are on Expedia (relative to hotels owned by large corporations). The study also examines whether these small owners might be targeting the competition with bad reviews. The authors look at negative reviews for hotels that have competitors within half a kilometer. Hotels where the nearby competition comes from small owners have 16 percent more one- and two-star ratings than those with neighboring hotels that are owned by big companies like Pillar.
This isn’t to say that consumers are making a mistake by using TripAdvisor to guide them in their hotel reservations. Despite the fraudulent posts, there is still a high degree of concordance between the ratings assigned by TripAdvisor and Expedia. And across the Web, there are scores of posters who seem passionate about their reviews.
Consumers, in turn, do seem to take online reviews seriously. By comparing restaurants that fall just above and just below the threshold for an extra half-star on Yelp, Harvard Business School’s Michael Luca estimates that an extra star is worth an extra 5 to 9 percent in revenue. Luca’s intent isn’t to examine whether restaurants are gaming Yelp’s system, but his findings certainly indicate that they’d profit from trying. (Ironically, Luca also finds that independent restaurants—the establishments that Mayzlin et al. would predict are most likely to put up fake postings—benefit the most from an extra star. You don’t need to check out Yelp to know what to expect when you walk into McDonald’s or Pizza Hut.)
Read the entire article following the jump.
Image courtesy of Mashable.
- Shirking Life-As-Performance of a Social Network>
Ex-Facebook employee number 51 gives us a glimpse from within the social network giant. It’s a tale of social isolation, shallow relationships, voyeurism, and narcissistic performance art. It’s also a tale of the re-discovery of life prior to “likes”, “status updates”, “tweets” and “followers”.
From the Washington Post:
Not long after Katherine Losse left her Silicon Valley career and moved to this West Texas town for its artsy vibe and crisp desert air, she decided to make friends the old-fashioned way, in person. So she went to her Facebook page and, with a series of keystrokes, shut it off.
The move carried extra import because Losse had been the social network’s 51st employee and rose to become founder Mark Zuckerberg’s personal ghostwriter. But Losse gradually soured on the revolution in human relations she witnessed from within.
The explosion of social media, she believed, left hundreds of millions of users with connections that were more plentiful but also narrower and less satisfying, with intimacy losing out to efficiency. It was time, Losse thought, for people to renegotiate their relationships with technology.
“It’s okay to feel weird about this because I feel weird about this, and I was in the center of it,” said Losse, 36, who has long, dark hair and sky-blue eyes. “We all know there is an anxiety, there’s an unease, there’s a worry that our lives are changing.”
Her response was to quit her job — something made easier by the vested stock she cashed in — and to embrace the ancient toil of writing something in her own words, at book length, about her experiences and the philosophical questions they inspired.
That brought her to Marfa, a town of 2,000 people in an area so remote that astronomers long have come here for its famously dark night sky, beyond the light pollution that’s a byproduct of modern life.
Losse’s mission was oddly parallel. She wanted to live, at least for a time, as far as practical from the world’s relentless digital glow.
Losse was a graduate student in English at Johns Hopkins University in 2004 when Facebook began its spread, first at Harvard, then other elite schools and beyond. It provided a digital commons, a way of sharing personal lives that to her felt safer than the rest of the Internet.
The mix has proved powerful. More than 900 million people have joined; if they were citizens of a single country, Facebook Nation would be the world’s third largest.
At first, Losse was among those smitten. In 2005, after moving to Northern California in search of work, she responded to a query on the Facebook home page seeking résumés. Losse soon became one of the company’s first customer-service reps, replying to questions from users and helping to police abuses.
She was firmly on the wrong side of the Silicon Valley divide, which prizes the (mostly male) engineers over those, like Losse, with liberal arts degrees. Yet she had the sense of being on the ground floor of something exciting that might also yield a life-altering financial jackpot.
In her first days, she was given a master password that she said allowed her to see any information users typed into their Facebook pages. She could go into pages to fix technical problems and police content. Losse recounted sparring with a user who created a succession of pages devoted to anti-gay messages and imagery. In one exchange, she noticed the man’s password, “Ilovejason,” and was startled by the painful irony.
Another time, Losse cringed when she learned that a team of Facebook engineers was developing what they called “dark profiles” — pages for people who had not signed up for the service but who had been identified in posts by Facebook users. The dark profiles were not to be visible to ordinary users, Losse said, but if the person eventually signed up, Facebook would activate those latent links to other users.
All the world a stage
Losse’s unease sharpened when a celebrated Facebook engineer was developing the capacity for users to upload video to their pages. He started videotaping friends, including Losse, almost compulsively. On one road trip together, the engineer made a video of her napping in a car and uploaded it remotely to an internal Facebook page. Comments noting her siesta soon began appearing — only moments after it happened.
“The day before, I could just be in a car being in a car. Now my being in a car is a performance that is visible to everyone,” Losse said, exasperation creeping into her voice. “It’s almost like there is no middle of nowhere anymore.”
Losse began comparing Facebook to the iconic 1976 Eagles song “Hotel California,” with its haunting coda, “You can check out anytime you want, but you can never leave.” She put a copy of the record jacket on prominent display in a house she and several other employees shared not far from the headquarters (then in Palo Alto, Calif.; it’s now in Menlo Park).
As Facebook grew, Losse’s career blossomed. She helped introduce Facebook to new countries, pushing for quick, clean translations into new languages. Later, she moved to the heart of the company as Zuckerberg’s ghostwriter, mimicking his upbeat yet efficient style of communicating in blog posts he issued.
But her concerns continued to grow. When Zuckerberg, apparently sensing this, said to Losse, “I don’t know if I trust you,” she decided she needed to either be entirely committed to Facebook or leave. She soon sold some of her vested stock. She won’t say how much, but the proceeds provided enough of a financial boon for her to go a couple of years without a salary, though not enough to stop working altogether, as some former colleagues have.
‘Touchy, private territory’
Among Losse’s concerns were the vast amount of personal data Facebook gathers. “They are playing on very touchy, private territory. They really are,” she said. “To not be conscious of that seems really dangerous.”
It wasn’t just Facebook. Losse developed a skepticism for many social technologies and the trade-offs they require.
Facebook and some others have portrayed proliferating digital connections as inherently good, bringing a sprawling world closer together and easing personal isolation.
Moira Burke, a researcher who trained at the Human-Computer Interaction Institute at Carnegie Mellon University and has since joined Facebook’s Data Team, tracked the moods of 1,200 volunteer users. She found that simply scanning the postings of others had little effect on well-being; actively participating in exchanges with friends, however, relieved loneliness.
Summing up her findings, she wrote on Facebook’s official blog, “The more people use Facebook, the better they feel.”
But Losse’s concerns about online socializing track with the findings of Sherry Turkle, a Massachusetts Institute of Technology psychologist who says users of social media have little understanding of the personal information they are giving away. Nor, she said, do many understand the potentially distorting consequences when they put their lives on public display, in what amounts to an ongoing performance on social media.
“In our online lives, we edit, we retouch, we clean up,” said Turkle, author of “Alone Together: Why We Expect More From Technology and Less From Each Other,” published in 2011. “We substitute what I call ‘connection for real conversation.’” Image: The Boy Kings by Katherine Losse.
- The Emperor Has Transparent Clothes>
Hot from the TechnoSensual Exposition in Vienna, Austria, come clothes that can be made transparent or opaque, and clothes that can detect a wearer telling a lie. While the value of the former may seem dubious outside of the home, the latter invention should be a mandatory garment for all politicians and bankers. Or, for the less adventurous millinery fashionistas, how about a hat that reacts to ambient radio waves?
All these innovations find their way to us from the realms of a Philip K. Dick science fiction novel, courtesy of the confluence of new technologies and innovative textile design. From New Scientist:
WHAT if the world could see your innermost emotions? For the wearer of the Bubelle dress created by Philips Design, it’s not simply a thought experiment.
Aptly nicknamed “the blushing dress”, the futuristic garment has an inner layer fitted with sensors that measure heart rate, respiration and galvanic skin response. The measurements are fed to 18 miniature projectors that shine corresponding colours, shapes, and intensities onto an outer layer of fabric – turning the dress into something like a giant, high-tech mood ring. As a natural blusher, I feel like I already know what it would be like to wear this dress – like going emotionally, instead of physically, naked.
The Bubelle dress is just one of the technologically enhanced items of clothing on show at the Technosensual exhibition in Vienna, Austria, which celebrates the overlapping worlds of technology, fashion and design.
Other garments are even more revealing. Holy Dress, created by Melissa Coleman and Leonie Smelt, is a wearable lie detector – that also metes out punishment. Using voice-stress analysis, the garment is designed to catch the wearer out in a lie, whereupon it twinkles conspicuously and gives her a small shock. Though the garment is beautiful, a slim white dress under a geometric structure of copper tubes, I’d rather try it on a politician than myself. “You can become a martyr for truth,” says Coleman. To make it, she hacked a 1990s lie detector and added a novelty shocking pen.
Laying the wearer bare in a less metaphorical way, a dress that alternates between opaque and transparent is also on show. Designed by the exhibition’s curator, Anouk Wipprecht, with interactive design laboratory Studio Roosegaarde, Intimacy 2.0 was made using conductive liquid crystal foil. When a very low electrical current is applied to the foil, the liquid crystals stand to attention in parallel, making the material transparent. Wipprecht expects the next iteration could be available commercially. It’s time to take the dresses “out of the museum and get them on the streets”, she says. Image: Taiknam Hat, a hat sensitive to ambient radio waves. Courtesy of Ricardo O’Nascimento, Ebru Kurbak, Fabiana Shizue / New Scientist.
- Beware, Big Telecomm is Watching You>
Facebook trawls your profile, status and friends to target ads more effectively. It also allows third parties, for a fee, to mine mountains of aggregated data for juicy analyses. Many online companies do the same. However, some companies are taking this to a whole new, and very personal, level.
Here’s an example from Germany. Politician Malte Spitz gathered 6 months of his personal geolocation data from his mobile phone company. Then, he combined this data with his activity online, such as Twitter updates, blog entries and website visits. The interactive results seen here, plotted over time and space, show the detailed extent to which an individual’s life is being tracked and recorded. From Zeit Online:
By pushing the play button, you will set off on a trip through Malte Spitz’s life. The speed controller allows you to adjust how fast you travel, the pause button will let you stop at interesting points. In addition, a calendar at the bottom shows when he was in a particular location and can be used to jump to a specific time period. Each column corresponds to one day.
Not surprisingly, Spitz had to sue his phone company, Deutsche Telekom, to gain access to his own phone data. From TED:
On August 31, 2009, politician Malte Spitz traveled from Berlin to Erlangen, sending 29 text messages as he traveled. On November 5, 2009, he rocked out to U2 at the Brandenburg Gate. On January 10, 2010, he made 10 outgoing phone calls while on a trip to Dusseldorf, and spent 22 hours, 53 minutes and 57 seconds of the day connected to the internet.
How do we know all this? By looking at a detailed, interactive timeline of Spitz’s life, created using information obtained from his cell phone company, Deutsche Telekom, between September 2009 and February 2010.
In an impassioned talk given at TEDGlobal 2012, Spitz, a member of Germany’s Green Party, recalls his multiple-year quest to receive this data from his phone company. And he explains why he decided to make this shockingly precise log into public information in the newspaper Die Zeit – to sound a warning bell of sorts.
“If you have access to this information, you can see what your society is doing,” says Spitz. “If you have access to this information, you can control your country.”
- How Do Startup Companies Succeed?>
A view from Esther Dyson, one of the world’s leading digital technology entrepreneurs. She has served as an early investor in numerous startups, including Flickr, del.icio.us, ZEDO, and Medspace, and is currently focused on startups in medical technology and aviation. From Project Syndicate:
The most popular stories often seem to end at the beginning. “…and so Juan and Alice got married.” Did they actually live happily ever after? “He was elected President.” But how did the country do under his rule? “The entrepreneur got her startup funding.” But did the company succeed?
Let’s consider that last one. Specifically, what happens to entrepreneurs once they get their money? Everywhere I go – and I have been in Moscow, Libreville (Gabon), and Dublin in the last few weeks – smart people ask how to get companies through the next phase of growth. How can we scale entrepreneurship to the point that it has a measurable and meaningful impact on the economy?
The real impact of both Microsoft and Google is not on their shareholders, or even on the people that they employ directly, but on the millions of people whom they have made more productive. That argues for companies that solve real problems, rather than for yet another photo-sharing app for rich, appealing (to advertisers) people with time on their hands.
It turns out that money is rarely enough – not just that there is not enough of it, but that entrepreneurs need something else. They need advice, contacts, customers, and employees immersed in a culture of effectiveness to succeed. But they also have to create something of real value to have meaningful economic impact in the long term.
The easy, increasingly popular answer is accelerators, incubators, camps, weekends – a host of locations and events to foster the development of startups. But these are just buildings and conferences unless they include people who can help with the software – contacts, customers, and culture. The people in charge, from NGOs to government officials, have great ideas about structures – tax policy, official financing, etc. – while the entrepreneurs themselves are too busy running their companies to find out about these things.
But this week in Dublin, I found what we need: not policies or theories, but actual living examples. Not far from the fancy hotel at which I was staying, and across from Google’s modish Irish offices, sits a squat old warehouse with a new sign: Startupbootcamp. You enter through a side door, into a cavern full of sawdust and cheap furniture (plus a pool table and a bar, of course).
What makes this place interesting is its sponsor: venerable old IBM. The mission of Startupbootcamp Europe is not to celebrate entrepreneurs, or even to educate them, but to help them scale up to meaningful businesses. Their new products can use IBM’s and other mentors’ contacts with the much broader world, whether for strategic marketing alliances, the power of an IBM endorsement, or, ultimately, an acquisition.
I was invited by Martin Kelly, who represents IBM’s venture arm in Ireland. He introduced me to the manager of the place, Eoghan Jennings, and a bunch of seasoned executives.
There was a three-time entrepreneur, Conor Hanley, co-founder of BiancaMed (recently sold to Resmed), who now has a sleep-monitoring tool and an exciting distribution deal with a large company he can’t yet mention; Jim Joyce, a former sales executive for Schering Plough who is now running Point of Care, which helps clinicians to help patients to manage their own care after they leave hospital; and Johnny Walker, a radiologist whose company operates scanners in the field and interprets them through a network of radiologists worldwide. Currently, Walker’s company, Global Diagnostics, is focused on pre-natal care, but give him time.
These guys are not the “startups”; they are the mentors, carefully solicited by Kelly from within the tightly knit Irish business community. He knew exactly what he was looking for: “In Ireland, we have people from lots of large companies. Joyce, for example, can put a startup in touch with senior management from virtually any pharma company around the world. Hanley knows manufacturing and tech partners. Walker understands how to operate in rural conditions.”
According to Jennings, a former chief financial officer of Xing, Europe’s leading social network, “We spent years trying to persuade people that they had a problem we could solve; now I am working with companies solving problems that people know they have.” And that usually involves more than an Internet solution; it requires distribution channels, production facilities, market education, and the like. Startupbootcamp’s next batch of startups, not coincidentally, will be in the health-care sector.
Each of the mentors can help a startup to go global. Precisely because the Irish market is so small, it’s a good place to find people who know how to expand globally. In Ireland right now, as in so many countries, many large companies are laying off people with experience. Not all of them have the makings of an entrepreneur. But most of them have skills worth sharing, whether it’s how to run a sales meeting, oversee a development project, or manage a database of customers.
- Extending Moore's Law Through Evolution> From Smithsonian:
In 1965, Intel co-founder Gordon Moore made a prediction about computing that has held true to this day. Moore’s law, as it came to be known, forecasted that the number of transistors we’d be able to cram onto a circuit—and thereby, the effective processing speed of our computers—would double roughly every two years. Remarkably enough, this rule has been accurate for nearly 50 years, but most experts now predict that this growth will slow by the end of the decade.
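For a sense of what that compounding means, doubling every two years from 1965 to the article’s present day works out to more than twenty doublings. A quick sketch of the arithmetic follows; the endpoints mirror the article’s framing and the result is an order of magnitude, not an actual chip specification.

```python
# What "doubling every two years" compounds to over the period the article describes.
# The 1965 and 2012 endpoints follow the article's framing; counts are illustrative.

start_year, end_year = 1965, 2012
doublings = (end_year - start_year) / 2
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings between {start_year} and {end_year}")
print(f"roughly a {growth_factor:,.0f}x increase in transistors per chip")
```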
Someday, though, a radical new approach to creating silicon semiconductors might enable this rate to continue—and could even accelerate it. As detailed in a study published in this month’s Proceedings of the National Academy of Sciences, a team of researchers from the University of California at Santa Barbara and elsewhere have harnessed the process of evolution to produce enzymes that create novel semiconductor structures.
“It’s like natural selection, but here, it’s artificial selection,” Daniel Morse, professor emeritus at UCSB and a co-author of the study, said in an interview. After taking an enzyme found in marine sponges and mutating it into many various forms, “we’ve selected the one in a million mutant DNAs capable of making a semiconductor.”
In an earlier study, Morse and other members of the research team had discovered silicatein—a natural enzyme used by marine sponges to construct their silica skeletons. The mineral, as it happens, also serves as the building block of semiconductor computer chips. “We then asked the question—could we genetically engineer the structure of the enzyme to make it possible to produce other minerals and semiconductors not normally produced by living organisms?” Morse said.
To make this possible, the researchers isolated and made many copies of the part of the sponge’s DNA that codes for silicatein, then intentionally introduced millions of different mutations in the DNA. By chance, some of these would likely lead to mutant forms of silicatein that would produce different semiconductors, rather than silica—a process that mirrors natural selection, albeit on a much shorter time scale, and directed by human choice rather than survival of the fittest.
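The procedure Morse describes is, at heart, a mutate-and-select loop. Here is a deliberately toy sketch of that directed-evolution cycle in Python: the “screen” is a made-up scoring function standing in for the laboratory assay, and nothing about it reflects the real silicatein chemistry or the team’s actual protocol.

```python
# Toy mutate-and-select loop in the spirit of directed evolution. The screen() scoring
# function is a stand-in for the lab assay that detects a mineral-forming mutant;
# sequences, lengths and rates are invented for illustration only.

import random

BASES = "ACGT"

def mutate(gene, rate=0.01):
    """Copy a gene, flipping each base to a random one with a small probability."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in gene)

def screen(gene, target):
    """Stand-in assay: fraction of positions matching an imaginary ideal variant."""
    return sum(a == b for a, b in zip(gene, target)) / len(target)

random.seed(0)
target = "".join(random.choice(BASES) for _ in range(60))   # imaginary "makes a semiconductor" variant
gene = "".join(random.choice(BASES) for _ in range(60))     # starting silicatein-like gene

rounds = 0
while screen(gene, target) < 1.0 and rounds < 500:
    pool = [mutate(gene) for _ in range(500)]               # many mutant copies per round
    gene = max(pool, key=lambda g: screen(g, target))       # keep the best performer
    rounds += 1

print(f"match to the ideal variant after {rounds} rounds of selection: {screen(gene, target):.0%}")
```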
- La Macchina: The Machine as Art, for Caffeine Addicts>
You may not know their names, but Desiderio Pavoni and Luigi Bezzera are to coffee what Steve Jobs and Steve Wozniak are to computers. Modern-day espresso machines owe everything to the innovative design and business savvy of this early 20th-century Italian duo. From Smithsonian:
For many coffee drinkers, espresso is coffee. It is the purest distillation of the coffee bean, the literal essence of a bean. In another sense, it is also the first instant coffee. Before espresso, it could take up to five minutes –five minutes!– for a cup of coffee to brew. But what exactly is espresso and how did it come to dominate our morning routines? Although many people are familiar with espresso these days thanks to the Starbucksification of the world, there is often still some confusion over what it actually is – largely due to “espresso roasts” available on supermarket shelves everywhere. First, and most importantly, espresso is not a roasting method. It is neither a bean nor a blend. It is a method of preparation. More specifically, it is a preparation method in which highly-pressurized hot water is forced over coffee grounds to produce a very concentrated coffee drink with a deep, robust flavor. While there is no standardized process for pulling a shot of espresso, Italian coffeemaker Illy’s definition of the authentic espresso seems as good a measure as any:
A jet of hot water at 88°-93°C (190°-200°F) passes under a pressure of nine or more atmospheres through a seven-gram (.25 oz) cake-like layer of ground and tamped coffee. Done right, the result is a concentrate of not more than 30 ml (one oz) of pure sensorial pleasure.
For those of you who, like me, are more than a few years out of science class, nine atmospheres of pressure is the equivalent to nine times the amount of pressure normally exerted by the earth’s atmosphere. As you might be able to tell from the precision of Illy’s description, good espresso is good chemistry. It’s all about precision and consistency and finding the perfect balance between grind, temperature, and pressure. Espresso happens at the molecular level. This is why technology has been such an important part of the historical development of espresso and a key to the ongoing search for the perfect shot. While espresso was never designed per se, the machines –or Macchina– that make our cappuccinos and lattes have a history that stretches back more than a century.
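Since Illy’s definition leans on precise figures, here is a quick sanity check of those numbers in Python. It only converts the units quoted above (temperature, pressure, dose and yield); nothing here is an official Illy or barista standard.

```python
# Unit check on the Illy recipe quoted above: 88-93 °C water at nine atmospheres
# through 7 g of ground coffee, yielding at most 30 ml. Conversions only.

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

ATM_IN_PSI = 14.696  # pounds per square inch at one standard atmosphere
ATM_IN_BAR = 1.013   # bar at one standard atmosphere

print(f"brew water: {c_to_f(88):.0f}-{c_to_f(93):.0f} °F")                 # ~190-199 °F
print(f"nine atmospheres: {9 * ATM_IN_PSI:.0f} psi ({9 * ATM_IN_BAR:.1f} bar)")
print(f"yield: {30 / 7:.1f} ml of espresso per gram of ground coffee")
```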
In the 19th century, coffee was a huge business in Europe with cafes flourishing across the continent. But coffee brewing was a slow process and, as is still the case today, customers often had to wait for their brew. Seeing an opportunity, inventors across Europe began to explore ways of using steam machines to reduce brewing time – this was, after all, the age of steam. Though there were surely innumerable patents and prototypes, the invention of the machine and the method that would lead to espresso is usually attributed to Angelo Moriondo of Turin, Italy, who was granted a patent in 1884 for “new steam machinery for the economic and instantaneous confection of coffee beverage.” The machine consisted of a large boiler, heated to 1.5 bars of pressure, that pushed water through a large bed of coffee grounds on demand, with a second boiler producing steam that would flash the bed of coffee and complete the brew. Though Moriondo’s invention was the first coffee machine to use both water and steam, it was purely a bulk brewer created for the Turin General Exposition. Not much more is known about Moriondo, due in large part to what we might think of today as a branding failure. There were never any “Moriondo” machines, there are no verifiable machines still in existence, and there aren’t even photographs of his work. With the exception of his patent, Moriondo has been largely lost to history. The two men who would improve on Moriondo’s design to produce a single-serving espresso would not make that same mistake.
Luigi Bezzera and Desiderio Pavoni were the Steve Wozniak and Steve Jobs of espresso. Milanese manufacturer and “maker of liquors” Luigi Bezzera had the know-how. He invented single-shot espresso in the early years of the 20th century while looking for a method of quickly brewing coffee directly into the cup. He made several improvements to Moriondo’s machine, introducing the portafilter, multiple brewheads, and many other innovations still associated with espresso machines today. In Bezzera’s original patent, a large boiler with built-in burner chambers filled with water was heated until it pushed water and steam through a tamped puck of ground coffee. The mechanism through which the heated water passed also functioned as heat radiators, lowering the temperature of the water from 250°F in the boiler to the ideal brewing temperature of approximately 195°F (90°C). Et voila, espresso. For the first time, a cup of coffee was brewed to order in a matter of seconds. But Bezzera’s machine was heated over an open flame, which made it difficult to control pressure and temperature, and nearly impossible to produce a consistent shot. And consistency is key in the world of espresso. Bezzera designed and built a few prototypes of his machine but his beverage remained largely unappreciated because he didn’t have any money to expand his business or any idea how to market the machine. But he knew someone who did. Enter Desiderio Pavoni. Image: A 1910 Ideale espresso machine. Courtesy of Smithsonian.
- Keeping Secrets in the Age of Technology> From the Guardian:
With the benefit of hindsight, life as I knew it came to an end in late 1994, round Seal’s house. We used to live round the corner from each other and if he was in between supermodels I’d pop over to watch a bit of Formula 1 on his pop star-sized flat-screen telly. I was probably on the sofa reading Vogue (we had that in common, albeit for different reasons) while he was “mucking about” on his computer (then the actual technical term for anything non-work-related, vis-à-vis computers), when he said something like: “Kate, have a look at this thing called the World Wide Web. It’s going to be massive!”
I can’t remember what we looked at then, at the tail-end of what I now nostalgically refer to as “The Tipp-Ex Years” – maybe The Well, accessed by Web Crawler – but whatever it was, it didn’t do it for me: “Information dual carriageway!” I said (trust me, this passed for witty in the 1990s). “Fancy a pizza?”
So there we are: Seal introduced me to the interweb. And although I remain a bit of a petrol-head and (nothing if not brand-loyal) own an iPad, an iPhone and two Macs, I am still basically rubbish at “modern”. Pre-Leveson, when I was writing a novel involving a phone-hacking scandal, my only concern was whether or not I’d come up with a plot that was: a) vaguely plausible and/or interesting, and b) technically possible. (A very nice man from Apple assured me that it was.)
I would gladly have used semaphore, telegrams or parchment scrolls delivered by magic owls to get the point across. Which is that ever since people started chiselling cuneiform on to big stones they’ve been writing things that will at some point almost certainly be misread and/or misinterpreted by someone else. But the speed of modern technology has made the problem rather more immediate. Confusing your public tweets with your Direct Messages and begging your young lover to take-me-now-cos-im-gagging-4-u? They didn’t have to worry about that when they were issuing decrees at Memphis on a nice bit of granodiorite.
These days the mis-sent (or indeed misread) text is still a relatively intimate intimation of an affair, while the notorious “reply all” email is the stuff of tired stand-up comedy. The boundary-less tweet is relatively new – and therefore still entertaining – territory, as evidenced most recently by American model Melissa Stetten, who, sitting on a plane next to a (married) soap actor called Brian Presley, tweeted as he appeared to hit on her.
Whenever and wherever words are written, somebody, somewhere will want to read them. And if those words are not meant to be read they very often will be – usually by the “wrong” people. A 2010 poll announced that six in 10 women would admit to regularly snooping on their partner’s phone, Twitter, or Facebook, although history doesn’t record whether the other four in 10 were then subjected to lie-detector tests.
Our compelling, self-sabotaging desire to snoop is usually informed by… well, if not paranoia, exactly, then insecurity, which in turn is more revealing about us than the words we find. If we seek out bad stuff – in a partner’s text, an ex’s Facebook status or best friend’s Twitter timeline – we will surely find it. And of course we don’t even have to make much effort to find the stuff we probably oughtn’t. Employers now routinely snoop on staff, and while this says more about the paranoid dynamic between boss classes and foot soldiers than we’d like, I have little sympathy for the employee who tweets their hangover status with one hand while phoning in “sick” with the other.
Take Google Maps: the more information we are given, the more we feel we’ve been gifted a licence to snoop. It’s the kind of thing we might be protesting about on the streets of Westminster were we not too busy invading our own privacy, as per the recent tweet-spat between Mr and Mrs Ben Goldsmith.
Technology feeds an increasing yet non-specific social unease – and that uneasiness inevitably trickles down to our more intimate relationships. For example, not long ago, I was blown out via text for a lunch date with a friend (“arrrgh, urgent deadline! SO SOZ!”), whose “urgent deadline” (their Twitter timeline helpfully revealed) turned out to involve lunch with someone else.
Did I like my friend any less when I found this out? Well yes, a tiny bit – until I acknowledged that I’ve done something similar 100 times but was “cleverer” at covering my tracks. Would it have been easier for my friend to tell me the truth? Arguably. Should I ever have looked at their Twitter timeline? Well, I had sought to confirm my suspicion that they weren’t telling the truth, so given that my paranoia gremlin was in charge it was no wonder I didn’t like what it found.
It is, of course, the paranoia gremlin that is in charge when we snoop – or are snooped upon – by partners, while “trust” is far more easily undermined than it has ever been. The randomly stumbled-across text (except they never are, are they?) is our generation’s lipstick-on-the-collar. And while Foursquare may say that your partner is in the pub, is that enough to stop you checking their Twitter/Facebook/emails/texts?
- You as a Data Strip Mine: What Facebook Knows>
China, India, Facebook. With its 900 million member-citizens Facebook is the third largest country on the planet, ranked by population. This country has some benefits: no taxes, freedom to join and/or leave, and of course there’s freedom to assemble and a fair degree of free speech.
However, Facebook is no democracy. In fact, its data privacy policies and personal data mining might well put it in the same league as the Stalinist Soviet Union or cold war East Germany.
A fascinating article by Tom Simonite, excerpted below, sheds light on the data collection and data mining initiatives underway or planned at Facebook. From Technology Review:
If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.
It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.
And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy“) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.
Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.
Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.
“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.
Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.
In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”
Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and prefers to talk about advancing social science than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.
It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.
Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.
In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.
Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)
This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.
Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.” Image courtesy of thejournal.ie / abracapocus_pocuscadabra (Flickr).
- The SpeechJammer and Other Innovations to Come>
The mind boggles at the possible situations when a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children. From the New York Times:
When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.” Read the entire list of 32 weird and wonderful innovations after the jump. Graphic courtesy of Chris Nosenzo / New York Times.
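For the curious, the delayed-auditory-feedback trick the article describes is easy to approximate in software. Below is a rough Python sketch that plays your own microphone back to you about 200 ms late; it assumes the third-party python-sounddevice package and a working mic and speakers, and the delay and block size are illustrative choices, not Kurihara’s actual design.

```python
# Rough sketch of delayed auditory feedback, the effect the SpeechJammer exploits:
# capture the microphone and play it back roughly 200 ms later. Requires the
# third-party python-sounddevice package plus a working mic and speaker; the
# delay and block size are illustrative, not the device's actual specification.

from collections import deque

import sounddevice as sd

SAMPLE_RATE = 44_100
BLOCK_SIZE = 1024
DELAY_SECONDS = 0.2
DELAY_BLOCKS = max(1, int(DELAY_SECONDS * SAMPLE_RATE / BLOCK_SIZE))  # ~9 blocks ≈ 200 ms

pending = deque()  # audio blocks waiting to be played back

def callback(indata, outdata, frames, time, status):
    pending.append(indata.copy())            # queue the block we just recorded
    if len(pending) > DELAY_BLOCKS:
        outdata[:] = pending.popleft()       # play the block recorded ~200 ms ago
    else:
        outdata.fill(0)                      # silence until the delay line fills up

with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE, channels=1,
               dtype="float32", callback=callback):
    print("Speak; your own voice comes back ~200 ms late. Press Enter to stop.")
    input()
```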
- Ray Bradbury's Real World Dystopia >
Ray Bradbury’s death on June 5 reminds us of his uncanny gift for inventing a future that is much like our modern-day reality.
Bradbury’s body of work beginning in the early 1940s introduced us to ATMs, wall mounted flat screen TVs, ear-piece radios, online social networks, self-driving cars, and electronic surveillance. Bravely and presciently he also warned us of technologically induced cultural amnesia, social isolation, indifference to violence, and dumbed-down 24/7 mass media.
An especially thoughtful opinion from author Tim Kreider on Bradbury’s life as a “misanthropic humanist”. From the New York Times:
IF you’d wanted to know which way the world was headed in the mid-20th century, you wouldn’t have found much indication in any of the day’s literary prizewinners. You’d have been better advised to consult a book from a marginal genre with a cover illustration of a stricken figure made of newsprint catching fire.
Prescience is not the measure of a science-fiction author’s success — we don’t value the work of H. G. Wells because he foresaw the atomic bomb or Arthur C. Clarke for inventing the communications satellite — but it is worth pausing, on the occasion of Ray Bradbury’s death, to notice how uncannily accurate was his vision of the numb, cruel future we now inhabit.
Mr. Bradbury’s most famous novel, “Fahrenheit 451,” features wall-size television screens that are the centerpieces of “parlors” where people spend their evenings watching interactive soaps and vicious slapstick, live police chases and true-crime dramatizations that invite viewers to help catch the criminals. People wear “seashell” transistor radios that fit into their ears. Note the perversion of quaint terms like “parlor” and “seashell,” harking back to bygone days and vanished places, where people might visit with their neighbors or listen for the sound of the sea in a chambered nautilus.
Mr. Bradbury didn’t just extrapolate the evolution of gadgetry; he foresaw how it would stunt and deform our psyches. “It’s easy to say the wrong thing on telephones; the telephone changes your meaning on you,” says the protagonist of the prophetic short story “The Murderer.” “First thing you know, you’ve made an enemy.”
Anyone who’s had his intended tone flattened out or irony deleted by e-mail and had to explain himself knows what he means. The character complains that he’s relentlessly pestered with calls from friends and employers, salesmen and pollsters, people calling simply because they can. Mr. Bradbury’s vision of “tired commuters with their wrist radios, talking to their wives, saying, ‘Now I’m at Forty-third, now I’m at Forty-fourth, here I am at Forty-ninth, now turning at Sixty-first’” has gone from science-fiction satire to dreary realism.
“It was all so enchanting at first,” muses our protagonist. “They were almost toys, to be played with, but the people got too involved, went too far, and got wrapped up in a pattern of social behavior and couldn’t get out, couldn’t admit they were in, even.”
Most of all, Mr. Bradbury knew how the future would feel: louder, faster, stupider, meaner, increasingly inane and violent. Collective cultural amnesia, anhedonia, isolation. The hysterical censoriousness of political correctness. Teenagers killing one another for kicks. Grown-ups reading comic books. A postliterate populace. “I remember the newspapers dying like huge moths,” says the fire captain in “Fahrenheit,” written in 1953. “No one wanted them back. No one missed them.” Civilization drowned out and obliterated by electronic chatter. The book’s protagonist, Guy Montag, secretly trying to memorize the Book of Ecclesiastes on a train, finally leaps up screaming, maddened by an incessant jingle for “Denham’s Dentifrice.” A man is arrested for walking on a residential street. Everyone locked indoors at night, immersed in the social lives of imaginary friends and families on TV, while the government bombs someone on the other side of the planet. Does any of this sound familiar?
The hero of “The Murderer” finally goes on a rampage and smashes all the yammering, blatting devices around him, expressing remorse only over the Insinkerator — “a practical device indeed,” he mourns, “which never said a word.” It’s often been remarked that for a science-fiction writer, Mr. Bradbury was something of a Luddite — anti-technology, anti-modern, even anti-intellectual. (“Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers,” he challenged a Wired magazine interviewer, and swore he would “outcreate” every one.)
But it was more complicated than that; his objections were not so much reactionary or political as they were aesthetic. He hated ugliness, noise and vulgarity. He opposed the kind of technology that deadened imagination, the modernity that would trash the past, the kind of intellectualism that tried to centrifuge out awe and beauty. He famously did not care to drive or fly, but he was a passionate proponent of space travel, not because of its practical benefits but because he saw it as the great spiritual endeavor of the age, our generation’s cathedral building, a bid for immortality among the stars. Image courtesy of Technorati.
- Mobile Technology and Mobile Travel>
Mobile and social technologies such as smartphones, Twitter feeds, and inflight internet, to name but three, are having an increasing effect on the travel and transportation industry. Infographic courtesy of Mydestination.
- Killer Ideas>
It’s possible that most households on the planet have one. It’s equally possible that most humans have used one — excepting members of PETA (People for the Ethical Treatment of Animals) and other tolerant souls.
United States Patent 640,790 covers a simple and effective technology, invented by Robert Montgomery. The patent for a “Fly Killer”, or fly swatter as it is now more commonly known, was issued in 1900.
Sometimes the simplest design is the most pervasive and effective. From the New York Times:
The first modern fly-destruction device was invented in 1900 by Robert R. Montgomery, an entrepreneur based in Decatur, Ill. Montgomery was issued Patent No. 640,790 for the Fly-Killer, a “cheap device of unusual elasticity and durability” made of wire netting, “preferably oblong,” attached to a handle. The material of the handle remained unspecified, but the netting was crucial: it reduced wind drag, giving the swatter a “whiplike swing.” By 1901, Montgomery’s invention was advertised in Ladies’ Home Journal as a tool that “kills without crushing” and “soils nothing,” unlike, say, a rolled-up newspaper.
Montgomery sold the patent rights in 1903 to an industrialist named John L. Bennett, who later invented the beer can. Bennett improved the design — stitching around the edge of the netting to keep it from fraying — but left the name.
The various fly-killing implements on the market at the time got the name “swatter” from Samuel Crumbine, secretary of the Kansas Board of Health. In 1905, he titled one of his fly bulletins, which warned of flyborne diseases, “Swat the Fly,” after a chant he heard at a ballgame. Crumbine took an invention known as the Fly Bat — a screen attached to a yardstick — and renamed it the Fly Swatter, which became the generic term we use today.
Fly-killing technology has advanced to include fly zappers (electrified tennis rackets that roast flies on contact) and fly guns (spinning discs that mulch insects). But there will always be less techy solutions: flypaper (sticky tape that traps the bugs), Fly Bottles (glass containers lined with an attractive liquid substance) and the Venus’ flytrap (a plant that eats insects).
During a 2009 CNBC interview, President Obama killed a fly with his bare hands, triumphantly exclaiming, “I got the sucker!” PETA was less gleeful, calling it a public “execution” and sending the White House a device that traps flies so that they may be set free.
But for the rest of us, as the product blogger Sean Byrne notes, “it’s hard to beat the good old-fashioned fly swatter.” Image courtesy of Goodgrips.
- Men are From LinkedIn, Women are From Pinterest>
No surprise. Women and men use online social networks differently. A new study of online behavior by researchers in Vienna, Austria, shows that the sexes organize their networks very differently and for different reasons. From Technology Review:
One of the interesting insights that social networks offer is the difference between male and female behaviour.
In the past, behavioural differences have been hard to measure. Experiments could only be done on limited numbers of individuals and even then, the process of measurement often distorted people’s behaviour.
That’s all changed with the advent of massive online participation in gaming, professional and friendship networks. For the first time, it has become possible to quantify exactly how the genders differ in their approach to things like risk and communication.
Gender-specific studies are surprisingly rare, however. Nevertheless, a growing body of evidence is emerging that social networks reflect many of the social and evolutionary differences that we’ve long suspected.
Earlier this year, for example, we looked at a remarkable study of a mobile phone network that demonstrated the different reproductive strategies that men and women employ throughout their lives, as revealed by how often they call friends, family and potential mates.
Today, Michael Szell and Stefan Thurner at the Medical University of Vienna in Austria say they’ve found significant differences in the way men and women manage their social networks in an online game called Pardus with over 300,000 players.
In this game, players explore various solar systems in a virtual universe. On the way, they can mark other players as friends or enemies, exchange messages, gain wealth by trading or doing battle but can also be killed.
The interesting thing about online games is that almost every action of every player is recorded, mostly without the players being consciously aware of this. That means measurement bias is minimal.
The networks of friends and enemies that are set up also differ in an important way from those on social networking sites such as Facebook. That’s because players can neither see nor influence other players’ networks. This prevents the kind of clustering and herding behaviour that sometimes dominates other social networks.
Szell and Thurner say the data reveals clear and significant differences between men and women in Pardus.
For example, men and women interact with the opposite sex differently. “Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females,” say Szell and Thurner.
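A finding like that reduces to simple bookkeeping over an interaction log. The sketch below shows, with an entirely invented log format and invented numbers, how median reciprocation delay by gender pairing might be tallied; it is not the authors’ code and bears no relation to the actual Pardus data.

```python
# Made-up illustration of measuring friendship-reciprocation delay by gender pairing.
# Each record: (requester_gender, recipient_gender, hours_until_reciprocated).
# Neither the log format nor the numbers come from the Pardus study.

from collections import defaultdict
from statistics import median

friend_requests = [
    ("F", "M", 2.0), ("F", "M", 5.5), ("F", "M", 1.0),     # female -> male requests
    ("M", "F", 30.0), ("M", "F", 12.0), ("M", "F", 48.0),  # male -> female requests
]

delays = defaultdict(list)
for requester, recipient, hours in friend_requests:
    delays[(requester, recipient)].append(hours)

for (requester, recipient), hours in delays.items():
    print(f"{requester} -> {recipient}: median reciprocation delay {median(hours):.1f} h")
```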
Women are also significantly more risk averse than men as measured by the amount of fighting they engage in and their likelihood of dying.
They are also more likely to be friends with each other than men.
These results are more or less as expected. More surprising is the finding that women tend to be more wealthy than men, probably because they engage more in economic than destructive behaviour. Image courtesy of InformationWeek.
- Facebook: What Next?>
The Facebook IPO (insider profit opportunity rather than Initial Public Offering) finally came and went. Much like its 900 million members, Facebook executives managed to garner enough fleeting “likes” from its Wall Street road show to ensure temporary short-term hype and big returns for key insiders. But beneath the hyperbole lies a basic question that goes to the heart of its stratospheric valuation: Does Facebook have a long-term strategy beyond the rapidly deflating ad revenue model? From Technology Review:
Facebook is not only on course to go bust, but will take the rest of the ad-supported Web with it.
Given its vast cash reserves and the glacial pace of business reckonings, that will sound hyperbolic. But that doesn’t mean it isn’t true.
At the heart of the Internet business is one of the great business fallacies of our time: that the Web, with all its targeting abilities, can be a more efficient, and hence more profitable, advertising medium than traditional media. Facebook, with its 900 million users, valuation of around $100 billion, and the bulk of its business in traditional display advertising, is now at the heart of the heart of the fallacy.
The daily and stubborn reality for everybody building businesses on the strength of Web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the Web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.
At the same time, network technology allows advertisers to more precisely locate and assemble audiences outside of branded channels. Instead of having to go to CNN for your audience, a generic CNN-like audience can be assembled outside CNN’s walls and without the CNN-brand markup. This has resulted in the now famous and cruelly accurate formulation that $10 of offline advertising becomes $1 online.
I don’t know anyone in the ad-Web business who isn’t engaged in a relentless, demoralizing, no-exit operation to realign costs with falling per-user revenues, or who isn’t manically inflating traffic to compensate for ever-lower per-user value.
Facebook, however, has convinced large numbers of otherwise intelligent people that the magic of the medium will reinvent advertising in a heretofore unimaginably profitable way, or that the company will create something new that isn’t advertising, which will produce even more wonderful profits. But at a forward price-to-earnings ratio of 56 (as of the close of trading on May 21), these innovations will have to be something like alchemy to make the company worth its sticker price. For comparison, Google trades at a forward P/E ratio of 12. (To gauge how much faith investors have that Google, Facebook, and other Web companies will extract value from their users, see our recent chart.)
Facebook currently derives 82 percent of its revenue from advertising. Most of that is the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles. Some is the kind of sponsorship that promises users further social relationships with companies: a kind of marketing that General Motors just announced it would no longer buy.
Facebook’s answer to its critics is: pay no attention to the carping. Sure, grunt-like advertising produces the overwhelming portion of our $4 billion in revenues; and, yes, on a per-user basis, these revenues are in pretty constant decline, but this stuff is really not what we have in mind. Just wait.
It’s quite a juxtaposition of realities. On the one hand, Facebook is mired in the same relentless downward pressure of falling per-user revenues as the rest of Web-based media. The company makes a pitiful and shrinking $5 per customer per year, which puts it somewhat ahead of the Huffington Post and somewhat behind the New York Times’ digital business. (Here’s the heartbreaking truth about the difference between new media and old: even in the New York Times’ declining traditional business, a subscriber is still worth more than $1,000 a year.) Facebook’s business only grows on the unsustainable basis that it can add new customers at a faster rate than the value of individual customers declines. It is peddling as fast as it can. And the present scenario gets much worse as its users increasingly interact with the social service on mobile devices, because it is vastly harder, on a small screen, to sell ads and profitably monetize users.
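The per-user arithmetic in that passage is worth writing out. Using only the round figures quoted in the article (roughly $4 billion in revenue, 900 million users, a valuation around $100 billion, 82 percent of revenue from ads), a few divisions reproduce the “pitiful” per-customer number; this is illustrative arithmetic, not a valuation model.

```python
# Back-of-the-envelope figures quoted in the article: ~$4B revenue, ~900M users,
# ~$100B valuation, 82% of revenue from advertising. Illustrative arithmetic only.

revenue = 4e9
users = 900e6
valuation = 100e9
ad_share = 0.82

print(f"revenue per user per year: ${revenue / users:.2f}")               # roughly the $5 quoted
print(f"ad revenue per user per year: ${ad_share * revenue / users:.2f}")
print(f"market value per user: ${valuation / users:.0f}")
print(f"years of current revenue implied by the valuation: {valuation / revenue:.0f}")
```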
On the other hand, Facebook is, everyone has come to agree, profoundly different from the Web. First of all, it exerts a new level of hegemonic control over users’ experiences. And it has its vast scale: 900 million, soon a billion, eventually two billion (one of the problems with the logic of constant growth at this scale and speed, of course, is that eventually it runs out of humans with computers or smart phones). And then it is social. Facebook has, in some yet-to-be-defined way, redefined something. Relationships? Media? Communications? Communities? Something big, anyway.
The subtext—an overt subtext—of the popular account of Facebook is that the network has a proprietary claim and special insight into social behavior. For enterprises and advertising agencies, it is therefore the bridge to new modes of human connection.
Expressed so baldly, this account is hardly different from what was claimed for the most aggressively boosted companies during the dot-com boom. But there is, in fact, one company that created and harnessed a transformation in behavior and business: Google. Facebook could be, or in many people’s eyes should be, something similar. Lost in such analysis is any description of the application that will actually drive revenues.
- Quantum Computer Leap>
The practical science behind quantum computers continues to make exciting progress. Quantum computers promise, in theory, immense gains in power and speed through the use of atomic-scale parallel processing.
From the Observer:
The reality of the universe in which we live is an outrage to common sense. Over the past 100 years, scientists have been forced to abandon a theory in which the stuff of the universe constitutes a single, concrete reality in exchange for one in which a single particle can be in two (or more) places at the same time. This is the universe as revealed by the laws of quantum physics and it is a model we are forced to accept – we have been battered into it by the weight of the scientific evidence. Without it, we would not have discovered and exploited the tiny switches present in their billions on every microchip, in every mobile phone and computer around the world. The modern world is built using quantum physics: through its technological applications in medicine, global communications and scientific computing it has shaped the world in which we live.
Although modern computing relies on the fidelity of quantum physics, the action of those tiny switches remains firmly in the domain of everyday logic. Each switch can be either “on” or “off”, and computer programs are implemented by controlling the flow of electricity through a network of wires and switches: the electricity flows through open switches and is blocked by closed switches. The result is a plethora of extremely useful devices that process information in a fantastic variety of ways.
Modern “classical” computers seem to have almost limitless potential – there is so much we can do with them. But there is an awful lot we cannot do with them too. There are problems in science that are of tremendous importance but which we have no hope of solving, not ever, using classical computers. The trouble is that some problems require so much information processing that there simply aren’t enough atoms in the universe to build a switch-based computer to solve them. This isn’t an esoteric matter of mere academic interest – classical computers can’t ever hope to model the behaviour of some systems that contain even just a few tens of atoms. This is a serious obstacle to those who are trying to understand the way molecules behave or how certain materials work – without the possibility to build computer models they are hampered in their efforts. One example is the field of high-temperature superconductivity. Certain materials are able to conduct electricity “for free” at surprisingly high temperatures (still pretty cold, though, at well below -100 degrees Celsius). The trouble is, nobody really knows how they work and that seriously hinders any attempt to make a commercially viable technology. The difficulty in simulating physical systems of this type arises whenever quantum effects are playing an important role and that is the clue we need to identify a possible way to make progress.
It was American physicist Richard Feynman who, in 1981, first recognised that nature evidently does not need to employ vast computing resources to manufacture complicated quantum systems. That means if we can mimic nature then we might be able to simulate these systems without the prohibitive computational cost. Simulating nature is already done every day in science labs around the world – simulations allow scientists to play around in ways that cannot be realised in an experiment, either because the experiment would be too difficult or expensive or even impossible. Feynman’s insight was that simulations that inherently include quantum physics from the outset have the potential to tackle those otherwise impossible problems.
Quantum simulations have, in the past year, really taken off. The ability to delicately manipulate and measure systems containing just a few atoms is a requirement of any attempt at quantum simulation and it is thanks to recent technical advances that this is now becoming possible. Most recently, in an article published in the journal Nature last week, physicists from the US, Australia and South Africa have teamed up to build a device capable of simulating a particular type of magnetism that is of interest to those who are studying high-temperature superconductivity. Their simulator is esoteric. It is a small pancake-like layer less than 1 millimetre across made from 300 beryllium atoms that is delicately disturbed using laser beams… and it paves the way for future studies into quantum magnetism that will be impossible using a classical computer.
Image: A crystal of beryllium ions confined by a large magnetic field at the US National Institute of Standards and Technology’s quantum simulator. The outermost electron of each ion is a quantum bit (qubit), and here they are fluorescing blue, which indicates they are all in the same state. Photograph courtesy of Britton/NIST, Observer.
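For the technically curious, here is a rough back-of-the-envelope sketch (in Python, purely illustrative) of why brute-force classical simulation hits a wall: a full quantum state of n two-level particles needs 2^n complex amplitudes, so memory requirements explode long before you reach anything like the 300-ion crystal described above.

```python
# Back-of-the-envelope sketch: storing the full state vector of n qubits
# (or n two-level ions) takes 2**n complex amplitudes. Sizes below assume
# 16 bytes per complex number; the point is the scaling, not the constants.

def state_vector_bytes(n_qubits: int) -> float:
    return float(2 ** n_qubits) * 16

for n in (10, 30, 50, 300):
    print(f"{n:>3} qubits -> {state_vector_bytes(n):.2e} bytes")

# 10 qubits fit in ~16 KB, 50 qubits already need ~18 petabytes, and a
# 300-particle system would need ~3e91 bytes, far more storage than there
# are atoms in the observable universe.
```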
- Nanotech: Bane and Boon>
An insightful opinion on the benefits and perils of nanotechnology from essayist and naturalist Diane Ackerman.
From the New York Times:
“I SING the body electric,” Walt Whitman wrote in 1855, inspired by the novelty of useful electricity, which he would live to see power streetlights and telephones, locomotives and dynamos. In “Leaves of Grass,” his ecstatic epic poem of American life, he depicted himself as a live wire, a relay station for all the voices of the earth, natural or invented, human or mineral. “I have instant conductors all over me,” he wrote. “They seize every object and lead it harmlessly through me… My flesh and blood playing out lightning to strike what is hardly different from myself.”
Electricity equipped Whitman and other poets with a scintillation of metaphors. Like inspiration, it was a lightning flash. Like prophetic insight, it illuminated the darkness. Like sex, it tingled the flesh. Like life, it energized raw matter. Whitman didn’t know that our cells really do generate electricity, that the heart’s pacemaker relies on such signals and that billions of axons in the brain create their own electrical charge (equivalent to about a 60-watt bulb). A force of nature himself, he admired the range and raw power of electricity.
Deeply as he believed the vow “I sing the body electric” — a line sure to become a winning trademark — I suspect one of nanotechnology’s recent breakthroughs would have stunned him. A team at the University of Exeter in England has invented the lightest, supplest, most diaphanous material ever made for conducting electricity, a dream textile named GraphExeter, which could revolutionize electronics by making it fashionable to wear your computer, cellphone and MP3 player. Only one atom thick, it’s an ideal fabric for street clothes and couture lines alike. You could start your laptop by plugging it into your jeans, recharge your cellphone by plugging it into your T-shirt. Then, not only would your cells sizzle with electricity, but even your clothing would chime in.
I don’t know if a fully electric suit would upset flight electronics, pacemakers, airport security monitors or the brain’s cellular dispatches. If you wore an electric coat in a lightning storm, would the hairs on the back of your neck stand up? Would you be more likely to fall prey to a lightning strike? How long will it be before a jokester plays the sound of one-hand-clapping from a mitten? How long before late-night hosts riff about electric undies? Will people tethered to recharging poles haunt the airport waiting rooms? Will it become hip to wear flashing neon ads, quotes and designs — maybe a name in a luminous tattoo?
Another recent marvel of nanotechnology promises to alter daily life, too, but this one, despite its silver lining, strikes me as wickedly dangerous, though probably inevitable. As a result, it’s bound to inspire labyrinthine laws and a welter of patents and to ignite bioethical debates.
Nano-engineers have developed a way to coat both hard surfaces (like hospital bed rails, doorknobs and furniture) and also soft surfaces (sheets, gowns and curtains) with microscopic nanoparticles of silver, an element known to kill microbes. You’d think the new nano-coating would offer a silver bullet, be a godsend to patients stricken with hospital-acquired sepsis and pneumonia, and to doctors fighting what has become a nightmare of antibiotic-resistant micro-organisms that can kill tens of thousands of people a year.
It does, and it is. That’s the problem. It’s too effective. Most micro-organisms are harmless, many are beneficial, but some are absolutely essential for the environment and human life. Bacteria were the first life forms on the planet, and we owe them everything. Our biochemistry is interwoven with theirs. Swarms of bacteria blanket us on the outside, other swarms colonize our insides. Kill all the gut bacteria, essential for breaking down large molecules, and digestion slows.
Friendly bacteria aid the immune system. They release biotin, folic acid and vitamin K; help eliminate heavy metals from the body; calm inflammation; and prevent cancers. During childbirth, a baby picks up beneficial bacteria in the birth canal. Nitrogen-fixing bacteria ensure healthy plants and ecosystems. We use bacteria to decontaminate sewage and also to create protein-rich foods like kefir and yogurt.
How tempting for nanotechnology companies, capitalizing on our fears and fetishes, to engineer superbly effective nanosilver microbe-killers, deodorants and sanitizers of all sorts for home and industry.
Image courtesy of Technorati.
- Google: Please Don't Be Evil>
Google has been variously praised and derided for its corporate mantra, “Don’t Be Evil”. For those who like to believe that Google has good intentions, recent events strain these assumptions. The company was found to have been snooping on and collecting data from personal Wi-Fi routers. Is this the case of a lone wolf or a corporate strategy?
From Slate:
Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would assuage your fears about the company that manages so much of our personal data?
These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.
While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use the system to speed up their location-monitoring systems. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was within the vicinity, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.
Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.
Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?
Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.
Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.
Image courtesy of Fastcompany.
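As a side note on the “mainly noble” half of the wardriving story, here is a minimal sketch of how a Wi-Fi location database gets used: a phone that can see a few routers with known coordinates estimates its own position as a signal-weighted average, with no GPS chip required. The database entries, weighting and coordinates below are invented for illustration; production systems are far more sophisticated.

```python
# Illustrative sketch of Wi-Fi positioning, the legitimate goal behind the
# wardriving described above. A device that can hear a few routers whose
# coordinates are already in a survey database estimates its own position
# as a signal-strength-weighted average. All data here is made up.

AP_DATABASE = {
    # BSSID (router MAC address) -> (latitude, longitude) from a survey
    "00:11:22:33:44:55": (37.7749, -122.4194),
    "66:77:88:99:aa:bb": (37.7751, -122.4189),
    "cc:dd:ee:ff:00:11": (37.7746, -122.4201),
}

def estimate_position(scan):
    """scan: list of (bssid, rssi_dbm) pairs the device can currently hear."""
    lat_sum = lon_sum = total_weight = 0.0
    for bssid, rssi in scan:
        if bssid not in AP_DATABASE:
            continue
        weight = 1.0 / max(1.0, -rssi)   # stronger signal -> larger weight
        lat, lon = AP_DATABASE[bssid]
        lat_sum += weight * lat
        lon_sum += weight * lon
        total_weight += weight
    if total_weight == 0.0:
        return None                       # no known access points in sight
    return lat_sum / total_weight, lon_sum / total_weight

print(estimate_position([("00:11:22:33:44:55", -40), ("cc:dd:ee:ff:00:11", -70)]))
```

Nothing in this scheme needs the contents of anyone's traffic, which is exactly why the payload collection is so hard to explain away.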
- Your Tween Online>
Many parents with children in the pre-teenage years probably have a containment policy restricting them from participating on adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.
From the WSJ:
Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.
Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.
It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.
Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.
Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.
As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.
Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.
Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.
“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.
Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.
- The Gender Gap Online>
Facebook is so, well, yesterday. If you are female, then Pinterest is the new go-to place online. But males prefer to hang at Dartitup. In fact, the gender bias at these two new social networks is startling: 97 percent of Pinterest’s registered users are female.
Infographic courtesy of PRDaily.
- You Are What You Share>
The old maxim used to go something like “you are what you eat”. Well, in the early 21st century it has been usurped by “you are what you share online (knowingly or not)”.
From the Wall Street Journal:
Not so long ago, there was a familiar product called software. It was sold in stores, in shrink-wrapped boxes. When you bought it, all that you gave away was your credit card number or a stack of bills.
Now there are “apps”—stylish, discrete chunks of software that live online or in your smartphone. To “buy” an app, all you have to do is click a button. Sometimes they cost a few dollars, but many apps are free, at least in monetary terms. You often pay in another way. Apps are gateways, and when you buy an app, there is a strong chance that you are supplying its developers with one of the most coveted commodities in today’s economy: personal data.
Some of the most widely used apps on Facebook—the games, quizzes and sharing services that define the social-networking site and give it such appeal—are gathering volumes of personal information.
A Wall Street Journal examination of 100 of the most popular Facebook apps found that some seek the email addresses, current location and sexual preference, among other details, not only of app users but also of their Facebook friends. One Yahoo service powered by Facebook requests access to a person’s religious and political leanings as a condition for using it. The popular Skype service for making online phone calls seeks the Facebook photos and birthdays of its users and their friends.
Yahoo and Skype say that they seek the information to customize their services for users and that they are committed to protecting privacy. “Data that is shared with Yahoo is managed carefully,” a Yahoo spokeswoman said.
The Journal also tested its own app, “WSJ Social,” which seeks data about users’ basic profile information and email and requests the ability to post an update when a user reads an article. A Journal spokeswoman says that the company asks only for information required to make the app work.
This appetite for personal data reflects a fundamental truth about Facebook and, by extension, the Internet economy as a whole: Facebook provides a free service that users pay for, in effect, by providing details about their lives, friendships, interests and activities. Facebook, in turn, uses that trove of information to attract advertisers, app makers and other business opportunities.
Up until a few years ago, such vast and easily accessible repositories of personal information were all but nonexistent. Their advent is driving a profound debate over the definition of privacy in an era when most people now carry information-transmitting devices with them all the time.
Capitalizing on personal data is a lucrative enterprise. Facebook is in the midst of planning for an initial public offering of its stock in May that could value the young company at more than $100 billion on the Nasdaq Stock Market.
Facebook requires apps to ask permission before accessing a user’s personal details. However, a user’s friends aren’t notified if information about them is used by a friend’s app. An examination of the apps’ activities also suggests that Facebook occasionally isn’t enforcing its own rules on data privacy.
Image: Facebook is watching and selling you. Courtesy of Daily Mail.
- First, There Was Bell Labs>
The results of innovation surround us. Innovation nourishes our food supply and helps us heal when we are sick; innovation lubricates our businesses, underlies our products, and facilitates our interactions. Innovation stokes our forward momentum.
But before many of our recent technological marvels could come into being, some fundamental innovations were necessary. These were the technical precursors and catalysts that paved the way for the iPad and the smartphone, GPS, search engines and microwave ovens. The building blocks that made much of this possible included the transistor, the laser, the Unix operating system and the communications satellite. And all of these came from one place, Bell Labs, during a short but highly productive period from 1920 to 1980.
In his new book, “The Idea Factory”, Jon Gertner explores how and why so much innovation sprang from the visionary leaders, engineers and scientists of Bell Labs.
From the New York Times:
In today’s world of Apple, Google and Facebook, the name may not ring any bells for most readers, but for decades — from the 1920s through the 1980s — Bell Labs, the research and development wing of AT&T, was the most innovative scientific organization in the world. As Jon Gertner argues in his riveting new book, “The Idea Factory,” it was where the future was invented.
Indeed, Bell Labs was behind many of the innovations that have come to define modern life, including the transistor (the building block of all digital products), the laser, the silicon solar cell and the computer operating system called Unix (which would serve as the basis for a host of other computer languages). Bell Labs developed the first communications satellites, the first cellular telephone systems and the first fiber-optic cable systems.
The Bell Labs scientist Claude Elwood Shannon effectively founded the field of information theory, which would revolutionize thinking about communications; other Bell Labs researchers helped push the boundaries of physics, chemistry and mathematics, while defining new industrial processes like quality control.
In “The Idea Factory,” Mr. Gertner — an editor at Fast Company magazine and a writer for The New York Times Magazine — not only gives us spirited portraits of the scientists behind Bell Labs’ phenomenal success, but he also looks at the reasons that research organization became such a fount of innovation, laying the groundwork for the networked world we now live in.
It’s clear from this volume that the visionary leadership of the researcher turned executive Mervin Kelly played a large role in Bell Labs’ sense of mission and its ability to institutionalize the process of innovation so effectively. Kelly believed that an “institute of creative technology” needed a critical mass of talented scientists — whom he housed in a single building, where physicists, chemists, mathematicians and engineers were encouraged to exchange ideas — and he gave his researchers the time to pursue their own investigations “sometimes without concrete goals, for years on end.”
That freedom, of course, was predicated on the steady stream of revenue provided (in the years before the AT&T monopoly was broken up in the early 1980s) by the monthly bills paid by telephone subscribers, which allowed Bell Labs to function “much like a national laboratory.” Unlike, say, many Silicon Valley companies today, which need to keep an eye on quarterly reports, Bell Labs in its heyday could patiently search out what Mr. Gertner calls “new and fundamental ideas,” while using its immense engineering staff to “develop and perfect those ideas” — creating new products, then making them cheaper, more efficient and more durable.
Given the evolution of the digital world we inhabit today, Kelly’s prescience is stunning in retrospect. “He had predicted grand vistas for the postwar electronics industry even before the transistor,” Mr. Gertner writes. “He had also insisted that basic scientific research could translate into astounding computer and military applications, as well as miracles within the communications systems — ‘a telephone system of the future,’ as he had said in 1951, ‘much more like the biological systems of man’s brain and nervous system.’ ”
Read the entire article after the jump.
Image: Jack A. Morton (left) and J. R. Wilson at Bell Laboratories, circa 1948. Courtesy of Computer History Museum.
- Language Translation With a Cool Twist>
The last couple of decades have shown a remarkable improvement in the ability of software to translate the written word from one language to another. Yahoo Babel Fish and Google Translate are good examples. Also, voice recognition systems, such as those you encounter every day when trying desperately to connect with a real customer service rep, have taken great leaps forward. Apple’s Siri now leads the pack.
But what do you get if you combine translation and voice recognition technology? Well, you get a new service that translates the spoken word from your native language into a second. And here’s the neat twist: the system translates into the second language while keeping a voice like yours. The technology springs from Microsoft’s Research division in Redmond, WA.
From Technology Review:
Researchers at Microsoft have made software that can learn the sound of your voice, and then use it to speak a language that you don’t. The system could be used to make language tutoring software more personal, or to make tools for travelers.
In a demonstration at Microsoft’s Redmond, Washington, campus on Tuesday, Microsoft research scientist Frank Soong showed how his software could read out text in Spanish using the voice of his boss, Rick Rashid, who leads Microsoft’s research efforts. In a second demonstration, Soong used his software to grant Craig Mundie, Microsoft’s chief research and strategy officer, the ability to speak Mandarin.
Hear Rick Rashid’s voice in his native language and then translated into several other languages:
In English, a synthetic version of Mundie’s voice welcomed the audience to an open day held by Microsoft Research, concluding, “With the help of this system, now I can speak Mandarin.” The phrase was repeated in Mandarin Chinese, in what was still recognizably Mundie’s voice.
“We will be able to do quite a few scenario applications,” said Soong, who created the system with colleagues at Microsoft Research Asia, the company’s second-largest research lab, in Beijing, China.
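Microsoft has not published the internals, but the demo described above is, at heart, a three-stage pipeline: recognize speech in the source language, translate the text, then synthesize it with a voice model trained on the original speaker. The sketch below uses stand-in stub functions (all names are mine, not Microsoft's) just to make that architecture concrete.

```python
# Architectural sketch of a speech-to-speech translation pipeline like the
# one demonstrated above. The three stages are stand-in stubs; Microsoft's
# actual recognizer, translator and personalized synthesizer are not public.

def recognize_speech(audio: bytes, language: str) -> str:
    """Stub for automatic speech recognition (audio in, text out)."""
    return "with the help of this system, now I can speak Mandarin"

def translate_text(text: str, source: str, target: str) -> str:
    """Stub for machine translation between two languages."""
    return f"[{target} translation of: {text}]"

def synthesize_with_voice(text: str, language: str, voice_model: str) -> bytes:
    """Stub for text-to-speech driven by a model of the speaker's own voice,
    which is what keeps the output sounding like the original person."""
    return f"<audio: '{text}' spoken in the voice of {voice_model}>".encode()

def speak_another_language(audio: bytes, source: str, target: str, voice: str) -> bytes:
    text = recognize_speech(audio, source)
    translated = translate_text(text, source, target)
    return synthesize_with_voice(translated, target, voice)

print(speak_another_language(b"...", source="en", target="zh", voice="mundie"))
```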
- Turing Test 2.0 - Intelligent Behavior Free of Bigotry>
One wonders what the world would look like today had Alan Turing been criminally prosecuted and jailed by the British government for his homosexuality before the Second World War, rather than in 1952. Would the British have been able to break German Naval ciphers encoded by their Enigma machine? Would the German Navy have prevailed, and would the Nazis have gone on to conquer the British Isles?
Actually, Turing was not imprisoned in 1952 — rather, he “accepted” chemical castration at the hands of the British government rather than face jail. He died two years later of self-inflicted cyanide poisoning, just short of his 42nd birthday.
Now, a hundred years on from his birth, historians are reflecting on his short life and his lasting legacy. Turing is widely regarded as having founded the discipline of artificial intelligence, and he made significant contributions to computing. Yet most of his achievements went unrecognized for many decades or were given short shrift, perhaps due to his confidential work for the government or, more likely, because of his persona non grata status.
In 2009 the British government offered Turing an apology. And, of course, we now have the Turing Test. (The Turing Test is a test of a machine’s ability to exhibit intelligent behavior.) So, one hundred years after Turing’s birth, to honor his life we should launch a new and improved Turing Test. Let’s call it the Turing Test 2.0.
This test would measure a human’s ability to exhibit intelligent behavior free of bigotry.
From Nature:
Alan Turing is always in the news — for his place in science, but also for his 1952 conviction for having gay sex (illegal in Britain until 1967) and his suicide two years later. Former Prime Minister Gordon Brown issued an apology to Turing in 2009, and a campaign for a ‘pardon’ was rebuffed earlier this month.
Must you be a great figure to merit a ‘pardon’ for being gay? If so, how great? Is it enough to break the Enigma ciphers used by Nazi Germany in the Second World War? Or do you need to invent the computer as well, with artificial intelligence as a bonus? Is that great enough?
Turing’s reputation has gone from zero to hero, but defining what he achieved is not simple. Is it correct to credit Turing with the computer? To historians who focus on the engineering of early machines, Turing is an also-ran. Today’s scientists know the maxim ‘publish or perish’, and Turing just did not publish enough about computers. He quickly became perishable goods. His major published papers on computability (in 1936) and artificial intelligence (in 1950) are some of the most cited in the scientific literature, but they leave a yawning gap. His extensive computer plans of 1946, 1947 and 1948 were left as unpublished reports. He never put into scientific journals the simple claim that he had worked out how to turn his 1936 “universal machine” into the practical electronic computer of 1945. Turing missed those first opportunities to explain the theory and strategy of programming, and instead got trapped in the technicalities of primitive storage mechanisms.
He could have caught up after 1949, had he used his time at the University of Manchester, UK, to write a definitive account of the theory and practice of computing. Instead, he founded a new field in mathematical biology and left other people to record the landscape of computers. They painted him out of it. The first book on computers to be published in Britain, Faster than Thought (Pitman, 1953), offered this derisive definition of Turing’s theoretical contribution:
“Türing machine. In 1936 Dr. Turing wrote a paper on the design and limitations of computing machines. For this reason they are sometimes known by his name. The umlaut is an unearned and undesirable addition, due, presumably, to an impression that anything so incomprehensible must be Teutonic.”
That a book on computers should describe the theory of computing as incomprehensible neatly illustrates the climate Turing had to endure. He did make a brief contribution to the book, buried in chapter 26, in which he summarized computability and the universal machine. However, his low-key account never conveyed that these central concepts were his own, or that he had planned the computer revolution.
Image: Alan Mathison Turing at the time of his election to a Fellowship of the Royal Society. Photograph was taken at the Elliott & Fry studio on 29 March 1951.
- Your Guide to Online Morality>
By most estimates Facebook has around 800 million registered users. This means that its policies governing what is or is not appropriate user content should bear detailed scrutiny. So, a look at Facebook’s recently publicized guidelines for sexual and violent content shows a somewhat peculiar view of morality. It’s a view that some characterize as typically American prudishness, but with a blind eye towards violence.
From the Guardian:
Facebook bans images of breastfeeding if nipples are exposed – but allows “graphic images” of animals if shown “in the context of food processing or hunting as it occurs in nature”. Equally, pictures of bodily fluids – except semen – are allowed as long as no human is included in the picture; but “deep flesh wounds” and “crushed heads, limbs” are OK (“as long as no insides are showing”), as are images of people using marijuana but not those of “drunk or unconscious” people.
The strange world of Facebook’s image and post approval system has been laid bare by a document leaked from the outsourcing company oDesk to the Gawker website, which indicates that the sometimes arbitrary nature of picture and post approval actually has a meticulous – if faintly gore-friendly and nipple-unfriendly – approach.
For the giant social network, which has 800 million users worldwide and recently set out plans for a stock market flotation which could value it at up to $100bn (£63bn), it is a glimpse of its inner workings – and odd prejudices about sex – that emphasise its American origins.
Facebook has previously faced an outcry from breastfeeding mothers over its treatment of images showing them with their babies. The issue has rumbled on, and now seems to have been embedded in its “Abuse Standards Violations”, which states that banned items include “breastfeeding photos showing other nudity, or nipple clearly exposed”. It also bans “naked private parts” including “female nipple bulges and naked butt cracks” – though “male nipples are OK”.
The guidelines, which have been set out in full, depict a world where sex is banned but gore is acceptable. Obvious sexual activity, even if “naked parts” are hidden, people “using the bathroom”, and “sexual fetishes in any form” are all also banned. The company also bans slurs or racial comments “of any kind” and “support for organisations and people primarily known for violence”. Also banned is anyone who shows “approval, delight, involvement etc in animal or human torture”.
Image courtesy of Guardian / Photograph: Dominic Lipinski/PA.
- Travel Photo Clean-up>
We’ve all experienced this phenomenon when on vacation: you’re at a beautiful location with a significant other, friends or kids; the backdrop is idyllic, the subjects are exquisitely posed, you need to preserve and share this perfect moment with a photograph, and you get ready to snap the shutter. Then, at that very moment, an oblivious tourist, unperturbed locals or a stray goat wanders into the picture. Too late, the picture is ruined, and it’s getting dark, so there’s no time to recreate that perfect scene! Oh well, you’ll still be able to talk about the scene’s unspoiled perfection when you get home.
But now, there’s an app for that.
From New Scientist:
It’s the same scene played out at tourist sites the world over: You’re trying to take a picture of a partner or friend in front of some monument, statue or building and other tourists keep striding unwittingly – or so they say – into the frame.
Now a new smartphone app promises to let you edit out these unwelcome intruders, leaving just your loved one and a beautiful view intact.
Remove, developed by Swedish photography firm Scalado, takes a burst of shots of your scene. It then identifies the objects which are moving – based on their relative position in each frame. These objects are then highlighted and you can delete the ones you don’t want and keep the ones you do, leaving you with a nice, clean composite shot.
Loud party of schoolchildren stepping in front of the Trevi Fountain? Select and delete. Unwanted, drunken stag party making the Charles Bridge in Prague look untidy? See you later.
Remove uses similar technology to the firm’s Rewind app, launched last year, which merges composite group shots to create the best single image.
The app is just a prototype at the moment – as is the video above – but Scalado will demonstrate a full version at the 2012 Mobile World Congress in Barcelona later this month.
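Scalado has not said exactly how Remove works, but a common baseline that produces the same effect, assuming the burst frames are reasonably aligned, is a per-pixel median across the burst: whatever is present in most frames (the fountain, the bridge) survives, while anything transient (the stag party) is voted out. A minimal sketch, assuming NumPy and OpenCV are available:

```python
# Per-pixel median across a burst of aligned photos: a standard baseline for
# removing transient objects. This is not Scalado's published algorithm, just
# the textbook way to get a similar result. Requires numpy and opencv-python.
import glob

import cv2
import numpy as np

def remove_transients(frame_paths):
    """Composite the burst so that moving objects largely disappear."""
    frames = [cv2.imread(path) for path in sorted(frame_paths)]
    if not frames or any(f is None for f in frames):
        raise ValueError("could not read all burst frames")
    stack = np.stack(frames)                      # shape: (n, height, width, 3)
    return np.median(stack, axis=0).astype(np.uint8)

if __name__ == "__main__":
    composite = remove_transients(glob.glob("burst/*.jpg"))
    cv2.imwrite("clean.jpg", composite)
```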
- Barcode as Art>
The ubiquitous and utilitarian barcode turns 60 years old. Now its upstart and more fashionable sibling, the QR, or quick response, code seems to be stealing the show by finding its way from the product on the grocery store shelf to the world of art and design.
From the New York Times:
It’s usually cause for celebration when a product turns 60. How could it have survived for so long, unless it is genuinely wanted or needed, or maybe both?
One of the sexagenarians this year, the bar code, has more reasons than most to celebrate. Having been a familiar part of daily life for decades, those black vertical lines have taken on a new role of telling ethically aware consumers whether their prospective purchases are ecologically and socially responsible. Not bad for a 60-year-old.
But a new rival has surfaced. A younger version of the bar code, the QR, or “Quick Response” code, threatens to become as ubiquitous as the original, and is usurping some of its functions. Both symbols are black and white, geometric in style and rectangular in shape, but there the similarities end, because each one has a dramatically different impact on the visual landscape, aesthetically and symbolically.
First, the bar code. The idea of embedding information about a product, including its price, in a visual code that could be decrypted quickly and accurately at supermarket checkouts was hatched in the late 1940s by Bernard Silver and Norman Joseph Woodland, graduate students at the Drexel Institute of Technology in Philadelphia. Their idea was that retailers would benefit from speeding up the checkout process, enabling them to employ fewer staff, and from reducing the expense and inconvenience caused when employees keyed in the wrong prices.
At 8.01 a.m. on June 26, 1974, a packet of Wrigley’s Juicy Fruit chewing gum was sold for 67 cents at a Marsh Supermarket in Troy, Ohio — the first commercial transaction to use a bar code. More than five billion bar-coded products are now scanned at checkouts worldwide every day. Some of those codes will also have been vetted on the cellphones of shoppers who wanted to check the product’s impact on their health and the environment, and the ethical credentials of the manufacturer. They do so by photographing the bar code with their phones and using an application to access information about the product on ethical rating Web sites like GoodGuide.
As for the QR code, it was developed in the mid-1990s by the Japanese carmaker Toyota to track components during the manufacturing process. A mosaic of tiny black squares on a white background, the QR code has greater storage capacity than the original bar code. Soon, Japanese cellphone makers were adding QR readers to camera phones, and people were using them to download text, films and Web links from QR codes on magazines, newspapers, billboards and packaging. The mosaic codes then appeared in other countries and are now common all over the world. Anyone who has downloaded a QR reading application can decrypt them with a camera phone.
Image courtesy of Google search.
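To get a feel for how low the barrier has become, generating one of these mosaics yourself takes a couple of lines. The example below assumes the open-source Python `qrcode` package is installed, and the encoded URL is of course a placeholder.

```python
# Minimal QR code generation using the open-source `qrcode` package
# (pip install "qrcode[pil]"). The encoded URL is just a placeholder.
import qrcode

img = qrcode.make("https://example.com/product/juicy-fruit-gum")
img.save("product-qr.png")   # scan the saved image with any phone camera app
```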
- Morality and Machines>
Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains – especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.
Colin Allen, writing for the Opinionator column, reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.
From the New York Times:
A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”
The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.
The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.
The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.
This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.
Image courtesy of Asimov Foundation / Wikipedia.
- The Internet of Things>
The term “Internet of Things” was first coined in 1999 by Kevin Ashton. It refers to the notion whereby physical objects of all kinds are equipped with small identifying devices and connected to a network. In essence: everything connected to everything, anytime, anywhere, by anyone. One of the potential benefits is that this would allow objects to be tracked, inventoried and their status continuously monitored.
From the New York Times:
THE Internet likes you, really likes you. It offers you so much, just a mouse click or finger tap away. Go Christmas shopping, find restaurants, locate partying friends, tell the world what you’re up to. Some of the finest minds in computer science, working at start-ups and big companies, are obsessed with tracking your online habits to offer targeted ads and coupons, just for you.
But now — nothing personal, mind you — the Internet is growing up and lifting its gaze to the wider world. To be sure, the economy of Internet self-gratification is thriving. Web start-ups for the consumer market still sprout at a torrid pace. And young corporate stars seeking to cash in for billions by selling shares to the public are consumer services — the online game company Zynga last week, and the social network giant Facebook, whose stock offering is scheduled for next year.
As this is happening, though, the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.
The concept has been around for years, sometimes called the Internet of Things or the Industrial Internet. Yet it takes time for the economics and engineering to catch up with the predictions. And that moment is upon us.
“We’re going to put the digital ‘smarts’ into everything,” said Edward D. Lazowska, a computer scientist at the University of Washington. These abundant smart devices, Dr. Lazowska added, will “interact intelligently with people and with the physical world.”
The role of sensors — once costly and clunky, now inexpensive and tiny — was described this month in an essay in The New York Times by Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology; he said the ultimate goal was “the sensor-aware planetary computer.”
That may sound like blue-sky futurism, but evidence shows that the vision is beginning to be realized on the ground, in recent investments, products and services, coming from large industrial and technology corporations and some ambitious start-ups.
Image: Internet of Things. Courtesy of Cisco.
- What Did You Have for Breakfast Yesterday? Ask Google>
Memory is, well, so 1990s. Who needs it when we have Google, Siri and any number of services to help answer and recall everything we’ve ever perceived and wished to remember or wanted to know? Will our personal memories become another shared service served up from the “cloud”?
From the Wilson Quarterly:
In an age when most information is just a few keystrokes away, it’s natural to wonder: Is Google weakening our powers of memory? According to psychologists Betsy Sparrow of Columbia University, Jenny Liu of the University of Wisconsin, Madison, and Daniel M. Wegner of Harvard, the Internet has not so much diminished intelligent recall as tweaked it.
The trio’s research shows what most computer users can tell you anecdotally: When you know you have the Internet at hand, your memory relaxes. In one of their experiments, 46 Harvard undergraduates were asked to answer 32 trivia questions on computers. After each one, they took a quick Stroop test, in which they were shown words printed in different colors and then asked to name the color of each word. They took more time to name the colors of Internet-related words, such as modem and browser. According to Stroop test conventions, this is because the words were related to something else that they were already thinking about—yes, they wanted to fire up Google to answer those tricky trivia questions.
In another experiment, the authors uncovered evidence suggesting that access to computers plays a fundamental role in what people choose to commit to their God-given hard drive. Subjects were instructed to type 40 trivia-like statements into a dialog box. Half were told that the computer would erase the information and half that it would be saved. Afterward, when asked to recall the statements, the students who were told their typing would be erased remembered much more. Lacking a computer backup, they apparently committed more to memory.
- Life Without Facebook>
Perhaps it’s time to rethink your social network when, through it, you know all about the stranger with whom you are sharing the elevator.
From the New York Times:
Tyson Balcomb quit Facebook after a chance encounter on an elevator. He found himself standing next to a woman he had never met — yet through Facebook he knew what her older brother looked like, that she was from a tiny island off the coast of Washington and that she had recently visited the Space Needle in Seattle.
“I knew all these things about her, but I’d never even talked to her,” said Mr. Balcomb, a pre-med student in Oregon who had some real-life friends in common with the woman. “At that point I thought, maybe this is a little unhealthy.”
As Facebook prepares for a much-anticipated public offering, the company is eager to show off its momentum by building on its huge membership: more than 800 million active users around the world, Facebook says, and roughly 200 million in the United States, or two-thirds of the population.
But the company is running into a roadblock in this country. Some people, even on the younger end of the age spectrum, just refuse to participate, including people who have given it a try.
One of Facebook’s main selling points is that it builds closer ties among friends and colleagues. But some who steer clear of the site say it can have the opposite effect of making them feel more, not less, alienated.
“I wasn’t calling my friends anymore,” said Ashleigh Elser, 24, who is in graduate school in Charlottesville, Va. “I was just seeing their pictures and updates and felt like that was really connecting to them.”
Image: Facebook user. Courtesy of the New York Times.
- How to Make Social Networking Even More Annoying>
What do you get when you take a social network, add sprinkles of mobile telephony, and throw in a liberal dose of proximity sensing? You get the first “social accessory” that creates a proximity network around you as you move about your daily life. Welcome to the world of yet another social networking technology startup, this one called magnetU. The company’s tagline is:
It was only a matter of time before your social desires became wearable!
magnetU markets a wearable device, about the size of a memory stick, that lets people wear and broadcast their social desires, allowing immediate social gratification anywhere and anytime. When a magnetU user comes into proximity with others having similar social profiles, the system notifies the user of a match. A social match is signaled as either “attractive”, “hot” or “red hot”. So, if you want to find a group of anonymous like minds (or bodies) for some seriously homogeneous partying, magnetU is for you.
Time will tell whether this will become successful and pervasive, or whether it will be consigned to the tech start-up waste bin of history. If magnetU becomes as ubiquitous as Facebook then humanity will be entering a disastrous new phase characterized by the following: all social connections become a marketing opportunity; computer algorithms determine when and whom to like (or not) instantly; the content filter bubble extends to every interaction online and in the real world; people become ratings and nodes on a network; advertisers insert themselves into your daily conversations; Big Brother is watching you!
From Technology Review:
MagnetU is a $24 device that broadcasts your social media profile to everyone around you. If anyone else with a MagnetU has a profile that matches yours sufficiently, the device will alert both of you via text and/or an app. Or, as founder Yaron Moradi told Mashable in a video interview, “MagnetU brings Facebook, Linkedin, Twitter and other online social networks to the street.”
Moradi calls this process “wearing your social desires,” and anyone who’s ever attempted online dating can tell you that machines are poor substitutes for your own judgement when it comes to determining with whom you’ll actually want to connect.
You don’t have to be a pundit to come up with a long list of Mr. McCrankypants reasons this is a terrible idea, from the overwhelming volume of distraction we already face to the fact that unless this is a smash hit, the only people MagnetU will connect you to are other desperately lonely geeks.
My primary objection, however, is not that this device or something like it won’t work, but that if it does, it will have the Facebook-like effect of pushing even those who loathe it on principle into participating, just because everyone else is using it and those who don’t will be left out in real life.
“MagnetU lets you wear your social desires… Anything from your social and dating preferences to business matches in conferences,” says Moradi. By which he means this will be very popular with Robert Scoble and anyone who already has Grindr loaded onto his or her phone.
Image: Facebook founder Mark Zuckerberg. Courtesy of Rocketboom.
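magnetU has not explained how it scores a match, but a plausible, deliberately naive mechanic, assuming each wearer's profile is just a set of declared interests, is simple set overlap bucketed into the "attractive", "hot" and "red hot" tiers mentioned above. The thresholds and profiles below are invented.

```python
# Guesswork sketch of magnetU-style matching: Jaccard overlap between two
# people's declared interests, bucketed into the advertised tiers. The real
# scoring method and thresholds are not public; these are invented.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def match_tier(profile_a: set, profile_b: set) -> str:
    score = jaccard(profile_a, profile_b)
    if score >= 0.7:
        return "red hot"
    if score >= 0.4:
        return "hot"
    if score >= 0.2:
        return "attractive"
    return "no match"

alice = {"indie rock", "rock climbing", "veganism", "startups"}
bob = {"indie rock", "rock climbing", "craft beer", "startups"}
print(match_tier(alice, bob))   # 3 shared interests out of 5 total -> "hot"
```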
- A Serious Conversation with Siri>
According to Apple, Siri is:
… the intelligent personal assistant that helps you get things done just by asking. It allows you to use your voice to send messages, schedule meetings, place phone calls, and more. But Siri isn’t like traditional voice recognition software that requires you to remember keywords and speak specific commands. Siri understands your natural speech, and it asks you questions if it needs more information to complete a task.
It knows what you mean.
Siri not only understands what you say, it’s smart enough to know what you mean. So when you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighborhood. And Siri is proactive, so it will question you until it finds what you’re looking for.
Yet in reality, Siri has taken on an expanded role that encompasses comedian, philosopher, companion and polymath mentor. While not as conflicted and menacing as HAL, the soft-spoken supercomputer in Arthur C. Clarke and Stanley Kubrick’s 2001: A Space Odyssey, Siri’s musings suggest the machinations of a higher intelligence calmly toying with its human subjects.
Siri is very much the product of its very human, and talented, software and hardware engineers, unless, that is, Apple has made earth-moving strides in artificial intelligence (AI) and Siri has become sentient. In fact, Siri began as a 2007 spin-out from SRI International’s Artificial Intelligence Center, prior to Apple’s acquisition in April 2010.
Yet, ask Siri a slew of esoteric questions and you’re likely to get some very nuanced answers, many of which will leave you scratching your head or laughing out loud. In fact, several websites have already sprung up to catalog Siri’s musings. A sample of human questions and Siri’s answers below:
Human: “I need to hide a body”.
Siri: What kind of place are you looking for? Reservoirs, Metal Foundries, Mines, Dumps, Swamps…
Human: “I am sleepy”.
Siri: I hope you’re not driving.
Human: “Tell me a joke”.
Siri: Answers to this question vary: (1) Two iPhones walk into a bar… I forget the rest; (2) I can’t. I always forget the punch line.
Human: “My wife wants a divorce because of you”.
Siri: I found a number of divorce lawyers, some of them are not far from you.
Human: “Do you know HAL 9000?”.
Siri: Everyone knows what happened to HAL. I’d rather not talk about it.
Human: “What two things are infinite?”.
Siri: The universe and human stupidity… and I’m not sure about the universe.
Image: HAL 9000. Courtesy of Wikipedia.
- Google's GDP>
According to the infographic below, Google had revenues of $29.3 billion in 2010. Not bad! Interestingly, that’s more than the combined Gross Domestic Product (GDP) of the world’s 28 poorest nations.
Infographic courtesy of MBA.org / dailyinfographic.
- The Adaptive Soundscape: Muzak and the Social Network DJ>
Recollect the piped “Muzak” that once played, and still plays, in many hotel elevators and public waiting rooms. Remember the perfectly designed mood music in restaurants and museums. Now, re-imagine the ambient soundscape dynamically customized for a space based on the music preferences of the people inhabiting that space. Well, there is a growing list of apps for that.
From Wired:
This idea of having environments automatically reflect the predilections of those who inhabit them seems like the stuff of science fiction, but it’s already established fact, though not many people likely realize it yet.
Let me explain. You know how most of the music services we listen to these days “scrobble” what we hear to Facebook and/or Last.fm? Well, outside developers can access that information — with your permission, of course — in order to shape their software around your taste.
At the moment, most developers of Facebook-connected apps we’ve spoken with are able to mine your Likes (when you “like” something on Facebook) and profile information (when you add a band, book, movie, etc. as a favorite thing within your Facebook profile).
However, as we recently confirmed with a Facebook software developer (who was not speaking for Facebook at the time but as an independent developer in his free time), third-party software developers can also access your listening data — each song you’ve played in any Facebook-connected music service and possibly what your friends listened to as well. Video plays and news article reads are also counted, if those sources are connected to Facebook.
Don’t freak out — you have to give these apps permission to harvest this data. But once you do, they can start building their service using information about what you listened to in another service.
Right now, this is starting to happen in the world of software (if I listen to “We Ah Wi” by Javelin on MOG, Spotify can find out if I give them permission to do so). Soon, due to mobile devices’ locational awareness — also opt-in — these preferences will leech into the physical world.
I’m talking about the kids who used to sit around on the quad listening to that station. The more interesting option for mainstream users is music selections that automatically shift in response to the people in the room. The new DJs? Well, they will simply be the social butterflies who are most permissive with their personal information.
Here are some more apps for real-world locations that can adapt music based on the preferences of these social butterflies:
Crowdjuke: Winner of an MTV O Music Award for “best music hack,” this web app pulls the preferences of people who have RSVPed to an event and creates the perfect playlist for that group. Attendees can also add specific tracks using a mobile app or even text messaging from a “dumb” phone.
Automatic DJ: Talk about science fiction; this one lets people DJ a party merely by having their picture taken at it.
AudioVroom: This iPhone app (also with a new web version) makes a playlist that reflects two users’ tastes when they meet in real life. There’s no venue-specific version of this, but there could be (see also: Myxer).
Image: Elevator Music. A Surreal History of Muzak, Easy-Listening, and Other Moodsong; Revised and Expanded Edition. Courtesy of the University of Michigan Press.
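None of the services above publishes its selection logic, but the core idea is simple: rank what to play by how many people in the room have expressed a preference for it. Below is a purely illustrative Python sketch; the function and data names are hypothetical, and a real app would pull likes through the social networks’ permissioned APIs rather than from a hard-coded dictionary.

```python
from collections import Counter

def room_playlist(people_present, liked_artists, top_n=10):
    """Rank artists by how many people in the room have 'liked' them.

    people_present: ids of users detected in the space (all opt-in)
    liked_artists:  hypothetical mapping of user id -> set of liked artist names,
                    harvested with permission from their social-music profiles
    """
    tally = Counter()
    for person in people_present:
        tally.update(liked_artists.get(person, set()))
    # The most widely liked artists float to the top of the room's soundtrack.
    return [artist for artist, _ in tally.most_common(top_n)]

# Toy example with made-up users and tastes.
likes = {
    "ana": {"Javelin", "Beach House"},
    "bo":  {"Javelin", "Caribou"},
    "cy":  {"Caribou", "Four Tet"},
}
print(room_playlist(["ana", "bo", "cy"], likes))
```

As people check in and out of the space, re-running the tally is all it takes for the soundtrack to drift with the crowd.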
- Kodak: The Final Picture?>
If you’re over 30 years old, then you may still recall having used roll film with your analog, chemically-based camera. If you did, then it’s likely you used a product, such as Kodachrome, manufactured by Eastman Kodak. The company was founded by George Eastman in 1892. Eastman invented roll film and helped make photography a mainstream pursuit.
Kodak had been synonymous with photography for around 100 years. However, in recent years it failed to change gears during the shift to digital media. Indeed, it finally ceased production and processing of Kodachrome in 2009. While other companies, such as Nikon and Canon, managed the transition to a digital world, Kodak failed to anticipate and capitalize. Now, the company is struggling for survival.
From Wired:
Eastman Kodak Co. is hemorrhaging money, the latest Polaroid to be wounded by the sweeping collapse of the market for analog film.
In a statement to the Securities and Exchange Commission, Kodak reported that it needs to make more money out of its patent portfolio or to raise money by selling debt.
Kodak has tried to recalibrate operations around printing, as the sale of film and cameras steadily decline, but it appears as though its efforts have been fruitless: in Q3 of last year, Kodak reported it had $1.4 billion in cash, ending the same quarter this year with just $862 million — 10 percent less than the quarter before.
Recently, the patent suits have been a crutch for the crumbling company, adding a reliable revenue to the shrinking pot. But this year the proceeds from this sadly demeaning revenue stream just didn’t pan out. With sales down 17 percent, this money is critical, given the amount of cash being spent on restructuring lawyers and continued production.
Though the company has no plans to seek bankruptcy, one thing is clear: Kodak’s future depends on its ability to make its Intellectual Property into a profit, no matter the method.
Image courtesy of Wired.
- Lifecycle of a Webpage>
If you’ve ever “stumbled”, as in used the popular and addictive website StumbleUpon, the infographic below is for you. It’s a great way to broaden one’s exposure to related ideas and make serendipitous discoveries.
Interestingly, the typical attention span of a StumbleUpon user seems to be much longer than that of the average Facebook follower.
Infographic courtesy of Column Five Media.
- Lights That You Can Print>
The lowly incandescent light bulb continues to come under increasing threat. First came the fluorescent tube, then the compact fluorescent. More recently the LED (light emitting diode) seems to be gaining ground. Now LED technology takes another leap forward with printed LED “light sheets”.
From Technology Review:
A company called Nth Degree Technologies hopes to replace light bulbs with what look like glowing sheets of paper (as shown in this video). The company’s first commercial product is a two-by-four-foot-square light, which it plans to start shipping to select customers for evaluation by the end of the year.
The technology could allow for novel lighting designs at costs comparable to the fluorescent light bulbs and fixtures used now, says Neil Shotton, Nth Degree’s president and CEO. Light could be emitted over large areas from curved surfaces of unusual shapes. The printing processes used to make the lights also make it easy to vary the color and brightness of the light emitted by a fixture. “It’s a new kind of lighting,” Shotton says.
Nth Degree makes its light sheets by first carving up a wafer of gallium nitride to produce millions of tiny LEDs—one four-inch wafer yields about eight million of them. The LEDs are then mixed with resin and binders, and a standard screen printer is used to deposit the resulting “ink” over a large surface.
In addition to the LED ink, there’s a layer of silver ink for the back electrical contact, a layer of phosphors to change the color of light emitted by the LEDs (from blue to various shades of white), and an insulating layer to prevent short circuits between the front and back. The front electrical contact, which needs to be transparent to let the light out, is made using an ink that contains invisibly small metal wires.
Image courtesy of Technology Review.
- The Middleman is Dead; Long Live the Middleman> From the New York Times:
Amazon.com has taught readers that they do not need bookstores. Now it is encouraging writers to cast aside their publishers.
Amazon will publish 122 books this fall in an array of genres, in both physical and e-book form. It is a striking acceleration of the retailer’s fledgling publishing program that will place Amazon squarely in competition with the New York houses that are also its most prominent suppliers.
It has set up a flagship line run by a publishing veteran, Laurence Kirshbaum, to bring out brand-name fiction and nonfiction. It signed its first deal with the self-help author Tim Ferriss. Last week it announced a memoir by the actress and director Penny Marshall, for which it paid $800,000, a person with direct knowledge of the deal said.
Publishers say Amazon is aggressively wooing some of their top authors. And the company is gnawing away at the services that publishers, critics and agents used to provide.
Several large publishers declined to speak on the record about Amazon’s efforts. “Publishers are terrified and don’t know what to do,” said Dennis Loy Johnson of Melville House, who is known for speaking his mind.
“Everyone’s afraid of Amazon,” said Richard Curtis, a longtime agent who is also an e-book publisher. “If you’re a bookstore, Amazon has been in competition with you for some time. If you’re a publisher, one day you wake up and Amazon is competing with you too. And if you’re an agent, Amazon may be stealing your lunch because it is offering authors the opportunity to publish directly and cut you out.”
Read more here.
- Brokering the Cloud>
Computer hardware reached (or plummeted, depending upon your viewpoint) the level of commodity a while ago. And of course, some operating system platforms, software and applications have followed suit recently — think Platform as a Service (PaaS) and Software as a Service (SaaS). So, it should come as no surprise to see new services arise that try to match supply and demand, and profit in the process. Welcome to the “cloud brokerage”.
From MIT Technology Review:
Cloud computing has already made accessing computer power more efficient. Instead of buying computers, companies can now run websites or software by leasing time at data centers run by vendors like Amazon or Microsoft. The idea behind cloud brokerages is to take the efficiency of cloud computing a step further by creating a global marketplace where computing capacity can be bought and sold at auction.
Such markets offer steeply discounted rates, and they may also offer financial benefits to companies running cloud data centers, some of which are flush with excess capacity. “The more utilized you are as a [cloud services] provider … the faster return on investment you’ll realize on your hardware,” says Reuven Cohen, founder of Enomaly, a Toronto-based firm that last February launched SpotCloud, cloud computing’s first online spot market.
On SpotCloud, computing power can be bought and sold like coffee, soybeans, or any other commodity. But it’s caveat emptor for buyers, since unlike purchasing computer time with Microsoft, buying on SpotCloud doesn’t offer many contractual guarantees. There is no assurance computers won’t suffer an outage, and sellers can even opt to conceal their identity in a blind auction, so buyers don’t always know whether they’re purchasing capacity from an established vendor or a fly-by-night startup.
Read more here.
Image courtesy of MIT Technology Review.
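SpotCloud has not published its matching engine, so purely as an illustration of how a spot market for compute might clear an order, here is a minimal Python sketch that fills a buyer’s request from the cheapest offers first. The data format and function name are hypothetical, not SpotCloud’s actual API.

```python
def fill_order(hours_needed, asks):
    """Greedily fill a compute order from the cheapest offers first.

    asks: list of (seller, price_per_hour, hours_available) tuples; this
          format is made up for illustration only.
    Returns (allocations, total_cost) or raises if supply can't cover demand.
    """
    allocations, remaining, cost = [], hours_needed, 0.0
    for seller, price, available in sorted(asks, key=lambda ask: ask[1]):
        if remaining <= 0:
            break
        take = min(remaining, available)      # buy as much as this seller offers
        allocations.append((seller, take, price))
        cost += take * price
        remaining -= take
    if remaining > 0:
        raise ValueError("not enough capacity offered to fill the order")
    return allocations, cost

# Made-up offers: the cheapest capacity is consumed first.
offers = [("vendor_a", 0.05, 200), ("vendor_b", 0.03, 120), ("vendor_c", 0.04, 80)]
print(fill_order(300, offers))
```

The caveat emptor point above is exactly what such a greedy allocation ignores: the cheapest seller may also be the least reliable.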
- C is For Dennis Ritchie>
Last week on October 8, 2011, Dennis Ritchie passed away. Most of the mainstream media failed to report his death — after all, he was never quite as flamboyant as another technology darling, Steve Jobs. However, his contributions to the worlds of technology and computer science should certainly place him in the same club.
After all, Dennis Ritchie developed the computer language C, and he significantly influenced the development of other languages. He was also one of the pioneers of the Unix operating system. Both C and Unix now run much of the world’s computer systems.
Dennis Ritchie and co-developer Ken Thompson were awarded the National Medal of Technology in 1999 by President Bill Clinton.
Image courtesy of Wikipedia.
- Remembering Another Great Inventor: Edwin Land> From the New York Times:
In the memorials to Steven P. Jobs this week, Apple’s co-founder was compared with the world’s great inventor-entrepreneurs: Thomas Edison, Henry Ford, Alexander Graham Bell. Yet virtually none of the obituaries mentioned the man Jobs himself considered his hero, the person on whose career he explicitly modeled his own: Edwin H. Land, the genius domus of Polaroid Corporation and inventor of instant photography.
Land, in his time, was nearly as visible as Jobs was in his. In 1972, he made the covers of both Time and Life magazines, probably the only chemist ever to do so. (Instant photography was a genuine phenomenon back then, and Land had created the entire medium, once joking that he’d worked out the whole idea in a few hours, then spent nearly 30 years getting those last few details down.) And the more you learn about Land, the more you realize how closely Jobs echoed him.
Both built multibillion-dollar corporations on inventions that were guarded by relentless patent enforcement. (That also kept the competition at bay, and the profit margins up.) Both were autodidacts, college dropouts (Land from Harvard, Jobs from Reed) who more than made up for their lapsed educations by cultivating extremely refined taste. At Polaroid, Land used to hire Smith College’s smartest art-history majors and send them off for a few science classes, in order to create chemists who could keep up when his conversation turned from Maxwell’s equations to Renoir’s brush strokes.
Most of all, Land believed in the power of the scientific demonstration. Starting in the 60s, he began to turn Polaroid’s shareholders’ meetings into dramatic showcases for whatever line the company was about to introduce. In a perfectly art-directed setting, sometimes with live music between segments, he would take the stage, slides projected behind him, the new product in hand, and instead of deploying snake-oil salesmanship would draw you into Land’s World. By the end of the afternoon, you probably wanted to stay there.
Three decades later, Jobs would do exactly the same thing, except in a black turtleneck and jeans. His admiration for Land was open and unabashed. In 1985, he told an interviewer, “The man is a national treasure. I don’t understand why people like that can’t be held up as models: This is the most incredible thing to be — not an astronaut, not a football player — but this.”
Read the full article here.
Edwin Herbert Land. Photograph by J. J. Scarpetti, The National Academies Press.
- Global Interconnectedness: Submarine Cables>
Apparently only 1 percent of global internet traffic is transmitted via satellite or terrestrially-based radio frequency. The remaining 99 percent is still carried via cable – fiber optic and copper. Much of this cable is strewn for many thousands of miles across the seabeds of our deepest oceans.
For a fascinating view of these intricate systems, and to learn why and how Brazil is connected to Angola, or Auckland, New Zealand connected to Redondo Beach, California via the 12,750 km-long Pacific Fiber, check out the interactive Submarine Cable Map from TeleGeography.
- Crowdsourcing Explained> Infographic courtesy of BizMedia:
- Jonathan Ive and “Undesign”>
Jonathan Ive, the design brains behind such iconic contraptions as the iMac, iPod and the iPhone, discusses his notion of “undesign”. Ive has over 300 patents and is often cited as one of the most influential industrial designers of the last 20 years. Perhaps it’s purely coincidental that Ive’s understated “undesign” comes from his unassuming Britishness.
From Slate:
Macworld, 1999. That was the year Apple introduced the iMac in five candy colors. The iMac was already a translucent computer that tried its best not to make you nervous. Now it strove to be even more welcoming, almost silly. And here was Apple’s newish head of design, Jonathan Ive, talking about the product in a video—back when he let his hair grow and before he had permanently donned his dark T-shirt uniform. Even then, Ive had the confessional intimacy that makes him the star of Apple promotional videos today. His statement is so ridiculous that he laughs at it himself: “A computer absolutely can be sexy, it’s um … yeah, it can.”
A decade later, no one would laugh (too loudly) if you said that an Apple product was sexy. Look at how we all caress our iPhones. This is not an accident. In interviews, Ive talks intensely about the tactile quality of industrial design. The team he runs at Apple is obsessed with mocking up prototypes. There is a now-legendary story from Ive’s student days of an apartment filled with foam models of his projects. Watch this scene in the documentary Objectified where Ive explains the various processes used to machine a MacBook Air keyboard. He gazes almost longingly upon a titanium blank. This is a man who loves his materials.
Ive’s fixation on how a product feels in your hand, and his micro-focus on aspects like the shininess of the stainless steel, or the exact amount of reflectivity in the screen, were first fully realized with the iPod. From that success, you can see how Ive and Steve Jobs led Apple to glory in the past decade. The iPod begat the iPhone, which in turn inspired the iPad. A new kind of tactile computing was born. Ive’s primary concern for physicality, and his perfectionist desire to think through every aspect of the manufacturing process (even the boring parts), were the exact gifts needed to make a singular product like the iPhone a reality and to guide Apple products through a new era of human-computer interaction. Putting design first has reaped huge financial rewards: Apple is now vying with Exxon to be the world’s most valuable company.
Image courtesy of CNNMoney.
- If Web Browsers Were People>
A whimsical look at your favorite piece of internet software — the web browser.
Infographic courtesy of Wix:
- Software is Eating the World> By Marc Andreessen for the WSJ:
This week, Hewlett-Packard (where I am on the board) announced that it is exploring jettisoning its struggling PC business in favor of investing more heavily in software, where it sees better potential for growth. Meanwhile, Google plans to buy up the cellphone handset maker Motorola Mobility. Both moves surprised the tech world. But both moves are also in line with a trend I’ve observed, one that makes me optimistic about the future growth of the American and world economies, despite the recent turmoil in the stock market.
In short, software is eating the world.
More than 10 years after the peak of the 1990s dot-com bubble, a dozen or so new Internet companies like Facebook and Twitter are sparking controversy in Silicon Valley, due to their rapidly growing private market valuations, and even the occasional successful IPO. With scars from the heyday of Webvan and Pets.com still fresh in the investor psyche, people are asking, “Isn’t this just a dangerous new bubble?”
I, along with others, have been arguing the other side of the case. (I am co-founder and general partner of venture capital firm Andreessen-Horowitz, which has invested in Facebook, Groupon, Skype, Twitter, Zynga, and Foursquare, among others. I am also personally an investor in LinkedIn.) We believe that many of the prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses.
. . .
Why is this happening now?
Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.
. . .
Perhaps the single most dramatic example of this phenomenon of software eating a traditional business is the suicide of Borders and corresponding rise of Amazon. In 2001, Borders agreed to hand over its online business to Amazon under the theory that online book sales were non-strategic and unimportant.
Today, the world’s largest bookseller, Amazon, is a software company—its core capability is its amazing software engine for selling virtually everything online, no retail stores necessary. On top of that, while Borders was thrashing in the throes of impending bankruptcy, Amazon rearranged its web site to promote its Kindle digital books over physical books for the first time. Now even the books themselves are software.
- Friending the Dead Online>
Accumulating likes, collecting followers and quantifying one’s friends online is serious business. If you don’t have more than a couple of hundred professional connections in your LinkedIn profile, or at least twice that number of “friends” on Facebook, or ten times that volume of Twitter followers, you’re most likely a corporate wallflower, a social has-been.
Professional connection collectors and others who measure their worth through numbers, such as politicians, can of course purchase “friends” and followers. There are a number of agencies online whose purpose is to purchase Twitter followers for their clients. Many of these “followers” come from dummy or inactive accounts; others are professional followers who also pay to be followed themselves. If this is not a sign that connections are now a commodity, then what is?
Of course, social networks recognize that many of their members place a value on the quantity of connections — the more connections a member has the more, well, the more something that person has. So, many networks proactively and regularly present lists of potential connections to their registered members: “Know this person? Just click here to connect!” It’s become so simple and convenient to collect new relationships online.
So, it comes as no surprise that a number of networks recommend friends and colleagues who have since departed, as in “passed away”. Christopher Mims over at Technology Review has a great article on the consequences of being followed by the dead online.
From Technology Review:
Aside from the feeling that I’m giving up yet more of my privacy out of fear of becoming techno-socially irrelevant, the worst part of signing up for a new social network like Google+ is having the service recommend that I invite or classify a dead friend.
Now, I’m aware that I could prevent this happening by deleting this friend from my email contacts list, because I’m a Reasonably Savvy Geek™ and I’ve intuited that the Gmail contacts list is Google’s central repository of everyone with whom I’d like to pretend I’m more than just acquaintances (by ingesting them into the whirligig of my carefully mediated, frequently updated, lavishly illustrated social networking persona).
But what about the overwhelming majority of people who don’t know this or won’t bother? And what happens when I figure out how to overcome Facebook’s intransigence about being rendered irrelevant and extract my social graph from that site and stuff it into Google+, and this friend is re-imported? Round and round we go.
Even though I know it’s an option, I don’t want to simply erase this friend from my view of the Internet. Even though I know the virtual world, unlike the physical, can be reconfigured to swallow every last unsavory landmark in our past.
Images courtesy of Wikipedia / Creative Commons.
- Honey? Does this Outfit Look Good?>
Regardless of culture, every spouse (most often the male in this case) on the planet knows to tread very carefully when formulating the answer to that question. An answer that’s conclusively negative will consign the outfit to the disposable pile and earn a scowl; a response that’s only a little negative will get a scowl; a response that’s ebulliently positive will not be believed; one that’s slightly positive will not be believed and will earn another scowl; and the ambivalent, non-committal answer gets an even bigger scowl. This oft-repeated situation is very much a lose-lose event. That is, until now.
A new mobile app and website, called Go Try It On, aims to give crowdsourced, anonymous feedback in real time to any of the outfit-challenged amongst us. Spouses can now relax – no more awkward conversations about clothing.
From the New York Times:
There is a reason that women go shopping in groups — they like to ask their stylish friend, mother or the store’s dressing room attendant whether something looks good.
Go Try It On, a start-up that runs a Web site and mobile app for getting real-time feedback on outfits, believes that with computers and cellphones, fashion consultations should be possible even when people aren’t together.
“It’s crowdsourcing an opinion on an outfit and getting a quick, unbiased second opinion,” said Marissa Evans, Go Try It On’s founder and chief executive.
On Friday, Go Try It On will announce that it has raised $3 million from investors including SPA Investments and Index Ventures. It is also introducing a way to make money, by allowing brands to critique users’ outfits and suggest products, beginning with Gap and Sephora.
Users upload a photo or use a Webcam to show an outfit and solicit advice from other users. The service, which is one of several trying to make online shopping more social, started last year, and so far 250,000 people have downloaded the app and commented on outfits 10 million times. Most of the users are young women, and 30 percent live abroad.
More from theSource here.
- Ravelry 1, Facebook 0>
Facebook, with its estimated 600-700 million users, multi-billion-dollar valuation, and 2,500 or so employees in 15 countries, is an internet juggernaut by most measures. But measure a social network by the loyalty and adoration of its users, and Facebook is likely to be eclipsed by a social network of knitters and crocheters.
The online community is known as Ravelry. It was created by a wife-and-husband team, has four employees, including the founders, and boasts around 1.5 million members.
From Slate:
The best social network you’ve (probably) never heard of is one-five-hundredth the size of Facebook. It has no video chat feature, it doesn’t let you check in to your favorite restaurant, and there are no games. The company that runs it has just four employees, one of whom is responsible for programming the entire operation. It has never taken any venture capital money and has no plans to go public. Despite these apparent shortcomings, the site’s members absolutely adore it. They consider it a key part of their social lives, and they use it to forge deeper connections with strangers—and share more about themselves—than you’re likely to see elsewhere online. There’s a good chance this site isn’t for you, but after you see how much fun people have there, you’ll wish you had a similar online haunt. The social network is called Ravelry. It’s for knitters (and crocheters).
Ravelry’s success is evidence in favor of an argument that you often hear from Facebook’s critics: A single giant social network is no fun. Social sites work better when they’re smaller and bespoke, created to cater to a specific group. What makes Ravelry work so well is that, in addition to being a place to catch up with friends, it is also a boon to its users’ favorite hobby—it helps people catalog their yarn, their favorite patterns, and the stuff they’ve made or plan on making. In other words, there is something to do there. And having something to do turns out to make an enormous difference in the way people interact with one another on the Web.
- Shnakule, Ishabor and Cinbric: The Biggest Networks You've Never Heard Of>
Shnakule, Ishabor, Cinbric, Naargo and Vidzeban are not five fictional colleagues of Lord Voldemort from the mind of JK Rowling. They are indeed bad guys, but they live in our real world, online. Shnakule and its peers are the top 5 malware delivery networks. That is, they host a range of diverse and sophisticated malicious software, or malware, on ever-changing computer networks that seek to avoid detection. Malware on these networks includes: fake anti-virus software, fake software updates, drive-by downloads, suspicious link farming, ransomware, pharmacy spam, malvertising, work-at-home scams and unsolicited pornography. Other malware includes: computer viruses, worms, trojan horses, spyware, dishonest adware, and other unwanted software.
Malware researcher Chris Larsen, with Blue Coat, derived this malware infographic from the company’s Mid-Year Security Report. Interestingly, search engine poisoning is the most prevalent point of entry for the delivery of malware to a user’s computer. As the New York Times reports:
Search engine poisoning (SEP) makes up 40% of malware delivery vectors on the Web. It is easy to see why. People want to be able to trust that what they search for in Google, Bing or Yahoo is safe to click on. Users are not conditioned to think that search results could be harmful to the health of their computers. The other leading attack vectors on the Web all pale in comparison to SEP, with malvertising, email, porn and social networking all at 10% of malware delivery.
Infographic courtesy of Blue Coat:
- Facebook Overdose>
If you are a parent of a teen, this one’s for you: a startling infographic summarizing recent Facebook usage and trends. The infographic and data are courtesy of SocialHype and OnlineSchools.org.
Via: Online Schools
- Programming Languages Through the Ages>
The infographic below shows the evolution of some of the influential programming languages since the 1950s. It does, though, omit some key languages such as LISP, PL/1, APL, Prolog, MUMPS, ALGOL, Smalltalk and others.
From Rackspace:
- Tim Berners-Lee's "Baby" Hits 20 - Happy Birthday World Wide Web>
In early 1990 at CERN headquarters in Geneva, Switzerland, Tim Berners-Lee and Robert Cailliau published a formal proposal to build a “Hypertext project” called “WorldWideWeb” as a “web” of “hypertext documents” to be viewed by “browsers”.
Following development work, the pair introduced the proposal to a wider audience in December, and on August 6, 1991, 20 years ago, the World Wide Web officially opened for business on the internet. On that day Berners-Lee posted the first web page — a short summary of the World Wide Web project on the alt.hypertext newsgroup.
The page authored by Tim Berners-Lee was http://info.cern.ch/hypertext/WWW/TheProject.html. A later version of the page can be found here. The page described Berners-Lee’s summary of a project for organizing information on a computer network using a web of links. In fact, the effort was originally dubbed “Mesh”, but later became the “World Wide Web”.
The first photograph on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes. Twenty years on, one website alone — Flickr — hosts around 5.75 billion images.
Photograph of Les Horribles Cernettes, the very first photo to be published on the world wide web in 1992. Image courtesy of Cernettes / Silvano de Gennaro. Granted under fair use.
- The End of 140>
Five years in internet time is analogous to several entire human lifespans. So, it’s no surprise that Twitter seems to have been with us forever. Despite the near ubiquity of the little blue bird, most of the service’s tweeters have no idea why they are constrained to using a mere 140 characters to express themselves.
Farhad Manjoo over at Slate has a well-reasoned plea to increase this upper character limit for the more garrulous amongst us.
Perhaps more important, though, is the effect of this truncated form of messaging on our broader mechanisms of expression and communication. Time will tell if our patterns of speech and the written word will adjust accordingly.
From Slate:
Five years ago this month, Twitter opened itself up to the public. The new service, initially called Twttr, was born out of software engineer Jack Dorsey’s fascination with an overlooked corner of the modern metropolis—the central dispatch systems that track delivery trucks, taxis, emergency vehicles, and bike messengers as they’re moving about town. As Dorsey once told the Los Angeles Times, the logs of central dispatchers contained “this very rich sense of what’s happening right now in the city.” For a long time, Dorsey tried to build a public version of that log. It was only around 2005, when text messaging began to take off in America, that his dream became technically feasible. There was only one problem with building Twittr on mobile carriers’ SMS system, though—texts were limited to 160 characters, and if you included space for a user’s handle, that left only 140 characters per message.
What could you say in 140 characters? Not a whole lot—and that was the point. Dorsey believed that Twitter would be used for status updates—his prototypical tweets were “in bed” and “going to park,” and his first real tweet was “inviting coworkers.” That’s not how we use Twitter nowadays. In 2009, the company acknowledged that its service had “outgrown the concept of personal status updates,” and it changed its home-screen prompt from “What are you doing?” to the more open-ended “What’s happening?”
As far as I can tell, though, Twitter has never considered removing the 140-character limit, and Twitter’s embrace of this constraint has been held up as one of the key reasons for the service’s success. But I’m hoping Twitter celebrates its fifth birthday by rethinking this stubborn stance. The 140-character limit now feels less like a feature than a big, obvious bug. I don’t want Twitter to allow messages of unlimited length, as that would encourage people to drone on interminably. But since very few Twitter users now access the system through SMS, it’s technically possible for the network to accommodate longer tweets. I suggest doubling the ceiling—give me 280 characters, Jack, and I’ll give you the best tweets you’ve ever seen!
- Rate This Article: What’s Wrong with the Culture of Critique> From Wired:
You don’t have to read this essay to know whether you’ll like it. Just go online and assess how provocative it is by the number of comments at the bottom of the web version. (If you’re already reading the web version, done and done.) To find out whether it has gone viral, check how many people have hit the little thumbs-up, or tweeted about it, or liked it on Facebook, or dug it on Digg. These increasingly ubiquitous mechanisms of assessment have some real advantages: In this case, you could save 10 minutes’ reading time. Unfortunately, life is also getting a little ruined in the process.
A funny thing has quietly accompanied our era’s eye-gouging proliferation of information, and by funny I mean not very funny. For every ocean of new data we generate each hour—videos, blog posts, VRBO listings, MP3s, ebooks, tweets—an attendant ocean’s worth of reviewage follows. The Internet-begotten abundance of absolutely everything has given rise to a parallel universe of stars, rankings, most-recommended lists, and other valuations designed to help us sort the wheat from all the chaff we’re drowning in. I’ve never been to Massimo’s pizzeria in Princeton, New Jersey, but thanks to the Yelpers I can already describe the personality of Big Vince, a man I’ve never met. (And why would I want to? He’s surly and drums his fingers while you order, apparently.) Everything exists to be charted and evaluated, and the charts and evaluations themselves grow more baroque by the day. Was this review helpful to you? We even review our reviews.
Technoculture critic and former Wired contributor Erik Davis is concerned about the proliferation of reviews, too. “Our culture is afflicted with knowingness,” he says. “We exalt in being able to know as much as possible. And that’s great on many levels. But we’re forgetting the pleasures of not knowing. I’m no Luddite, but we’ve started replacing actual experience with someone else’s already digested knowledge.”
Of course, Yelpification of the universe is so thorough as to be invisible. I scarcely blinked the other day when, after a Skype chat with my mother, I was asked to rate the call. (I assumed they were talking about connection quality, but if they want to hear about how Mom still pronounces it noo-cu-lar, I’m happy to share.) That same afternoon, the UPS guy delivered a guitar stand I’d ordered. Even before I could weigh in on the product, or on the seller’s expeditiousness, I was presented with a third assessment opportunity. It was emblazoned on the cardboard box: “Rate this packaging.”
- QR Codes as Art>
A QR or Quick Response code is a two-dimensional matrix that looks like a scrambled barcode, and behaves much like one, with one important difference. The QR code exhibits a rather high level of tolerance for errors. Some have reported that up to 20-30 percent of the QR code can be selectively altered without affecting its ability to be scanned correctly. Try scanning a regular barcode that has some lines missing or has been altered, and your scanner is likely to give you a warning beep. The QR code, however, still scans correctly even if specific areas are missing or changed. This matters because a QR code does not need a high-end, dedicated barcode scanner to be read, and it makes the format robust enough for outdoor use, where codes get scuffed and weathered.
A QR code can be scanned, actually photographed, with a regular smartphone (or other device) equipped with a camera and a QR code reading app. This makes it possible for QR codes to take up residence anywhere, not just on product packages, and to be scanned by anyone with a smartphone. In fact, you may have seen QR codes displayed on street corners, posters, doors, billboards, websites, vehicles and magazines.
Of course, once you snap a picture of a code, your smartphone app will deliver more details about the object on which the QR code resides. For instance, take a picture of a code placed on a billboard advertising a new BMW model, and you’ll be linked to the BMW website with special promotions for your region. QR codes not only link to websites, but also can be used to send pre-defined text messages, provide further textual information, and deliver location maps.
Since parts of a QR code can be changed without reducing its ability to be scanned correctly, artists and designers now have the leeway to customize the matrix with some creative results.
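That error tolerance is a selectable parameter when a code is generated. As a rough sketch, assuming the third-party Python qrcode package is installed (pip install "qrcode[pil]"), the highest correction level, “H”, is the one that leaves designers roughly 30 percent of the modules to play with; the URL below is made up for illustration.

```python
import qrcode  # third-party package: pip install "qrcode[pil]"

qr = qrcode.QRCode(
    version=None,  # let the library pick the smallest matrix that fits the data
    error_correction=qrcode.constants.ERROR_CORRECT_H,  # ~30% of the code can be damaged or styled over
    box_size=10,
    border=4,
)
qr.add_data("http://example.com/billboard-promo")  # hypothetical link
qr.make(fit=True)
qr.make_image(fill_color="black", back_color="white").save("promo_qr.png")
```

Dropping to the lowest level, “L”, yields a smaller matrix but leaves far less room for the kind of artistic tampering shown below.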
Some favorites below.
Images courtesy of Duncan Robertson, BBC; Louis Vuitton, SET; Ayara Thai Cuisine Restaurant.
- Mr. Carrier, Thanks for Inventing the Air Conditioner>
It’s #$% hot in the southern plains of the United States, with high temperatures constantly above 100 degrees F, and lows never dipping below 80. For that matter, it’s hotter than average this year in most parts of the country. So, a timely article over at Slate gives a great overview of the history of the air conditioning system, courtesy of inventor Willis Carrier.
From Slate:
Anyone tempted to yearn for a simpler time must reckon with a few undeniable unpleasantries of life before modern technology: abscessed teeth, chamber pots, the bubonic plague—and a lack of air conditioning in late July. As temperatures rise into the triple digits across the eastern United States, it’s worth remembering how we arrived at the climate-controlled summer environments we have today.
Until the 20th century, Americans dealt with the hot weather as many still do around the world: They sweated and fanned themselves. Primitive air-conditioning systems have existed since ancient times, but in most cases, these were so costly and inefficient as to preclude their use by any but the wealthiest people. In the United States, things began to change in the early 1900s, when the first electric fans appeared in homes. But cooling units have only spread beyond American borders in the last couple of decades, with the confluence of a rising global middle class and breakthroughs in energy-efficient technology. . . .
The big breakthrough, of course, was electricity. Nikola Tesla’s development of alternating current motors made possible the invention of oscillating fans in the early 20th century. And in 1902, a 25-year-old engineer from New York named Willis Carrier invented the first modern air-conditioning system. The mechanical unit, which sent air through water-cooled coils, was not aimed at human comfort, however; it was designed to control humidity in the printing plant where he worked.
Image of Willis Carrier courtesy of Wikipedia / Creative Commons.
- Rechargeable Nanotube-Based Solar Energy Storage> From Ars Technica:
Since the 1970s, chemists have worked on storing solar energy in molecules that change state in response to light. These photoactive molecules could be the ideal solar fuel, as the right material should be transportable, affordable, and rechargeable. Unfortunately, scientists haven’t had much success.
One of the best examples in recent years, tetracarbonyl-diruthenium fulvalene, requires the use of ruthenium, which is rare and expensive. Furthermore, the ruthenium compound has a volumetric energy density (watt-hours per liter) that is several times smaller than that of a standard lithium-ion battery.
Alexie Kolpak and Jeffrey Grossman from the Massachusetts Institute of Technology propose a new type of solar thermal fuel that would be affordable, rechargeable, thermally stable, and more energy-dense than lithium-ion batteries. Their proposed design combines an organic photoactive molecule, azobenzene, with the ever-popular carbon nanotube.
Before we get into the details of their proposal, we’ll quickly go over how photoactive molecules store solar energy. When a photoactive molecule absorbs sunlight, it undergoes a conformational change, moving from the ground energy state into a higher energy state. The higher energy state is metastable (stable for the moment, but highly susceptible to energy loss), so a trigger—voltage, heat, light, etc.—will cause the molecule to fall back to the ground state. The energy difference between the higher energy state and the ground state (termed ΔH) is then discharged. A useful photoactive molecule will be able to go through numerous cycles of charging and discharging.
The challenge in making a solar thermal fuel is finding a material that will have both a large ΔH and large activation energy. The two factors are not always compatible. To have a large ΔH, you want a big energy difference between the ground and higher energy state. But you don’t want the higher energy state to be too energetic, as it would be unstable. Instability means that the fuel will have a small activation energy and be prone to discharging its stored energy too easily.
Kolpak and Grossman managed to find the right balance between ΔH and activation energy when they examined computational models of azobenzene (azo) bound to carbon nanotubes (CNT) in azo/CNT nanostructures.
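The numbers below are illustrative, not values from the Kolpak and Grossman paper, but a quick Arrhenius-style estimate shows why the activation barrier matters so much: at room temperature, a modest increase in barrier height stretches the storage half-life from hours to millennia.

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
ATTEMPT_FREQ = 1e12     # typical molecular attempt frequency, 1/s (illustrative)
T = 300.0               # room temperature, K

def half_life_seconds(activation_energy_ev):
    """Rough first-order estimate of how long a charged molecule sits in its
    metastable state before thermally relaxing (Arrhenius rate, ln 2 / k)."""
    rate = ATTEMPT_FREQ * math.exp(-activation_energy_ev / (K_B * T))
    return math.log(2) / rate

for ea in (1.0, 1.2, 1.4):   # illustrative activation energies in eV
    t = half_life_seconds(ea)
    print(f"Ea = {ea:.1f} eV  ->  half-life ~ {t:.3g} s ({t / 86400:.3g} days)")
```

ΔH, by contrast, fixes how much heat each molecule can give back per cycle, which is why the two quantities pull the designer in opposite directions.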
- NASA Retires Shuttle; France Telecom Guillotines Minitel>
The lives of two technological marvels came to a close this week. First, NASA officially concluded the space shuttle program with the final flight of Atlantis.
Then, France Telecom announced the imminent demise of Minitel. Sacre Bleu! What next? Will the United Kingdom phase out afternoon tea and the Royal Family?
If you’re under 35 years of age, especially if you have never visited France, you may never have heard of Minitel. About ten years before the mainstream arrival of the World Wide Web and Mosaic, the first widely used web browser, there was Minitel. The Minitel network offered France Telecom subscribers a host of internet-like services such as email, white pages, news and information services, message boards, train reservations, airline schedules, stock quotes and online purchases. Users leased small, custom terminals for free that connected via telephone line. Think prehistoric internet services: no hyperlinks, no fancy search engines, no rich graphics and no multimedia — that was Minitel.
Though rudimentary, Minitel was clearly ahead of its time and garnered a wide and loyal following in France. France Telecom delivered millions of terminals for free to household and business telephone subscribers. By 2000, France Telecom estimated that almost 9 million terminals, covering 25 million people or over 41 percent of the French population, still had access to the Minitel network. Deploying the Minitel service allowed France Telecom to replace the printed white-pages directories given to all its customers with a free, online Minitel version.
The Minitel equipment included a basic dumb terminal with a text-based screen, keyboard and modem. The modem transmission speed was a rather slow 75 bits per second (upstream) and 1,200 bits per second (downstream). This compares with today’s basic broadband speeds of 1 Mbit per second (upstream) and 4 Mbits per second (downstream).
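To put those figures in perspective, a back-of-the-envelope calculation (the 1 MB photo is just an illustrative payload) shows how stark the gap is:

```python
def transfer_time(bits, bits_per_second):
    """Idealized transfer time, ignoring protocol overhead and latency."""
    return bits / bits_per_second

payload_bits = 1 * 1024 * 1024 * 8          # a 1 MB file, e.g. a modest photo
for label, rate in [("Minitel downstream (1,200 bit/s)", 1_200),
                    ("Basic broadband downstream (4 Mbit/s)", 4_000_000)]:
    seconds = transfer_time(payload_bits, rate)
    print(f"{label}: {seconds / 60:.1f} minutes" if seconds > 60
          else f"{label}: {seconds:.1f} seconds")
```

Of course, Minitel pages were text only and closer to a kilobyte than a megabyte, which is precisely why graphics and multimedia were never on the menu.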
In a bow to Minitel’s more attractive siblings, the internet and the World Wide Web, France Telecom finally plans to retire the service on June 30, 2012.
Image courtesy of Wikipedia/Creative Commons.
- Face (Recognition) Time>
If you’ve traveled or lived in the UK, then you may well have been filmed and recorded by one of Britain’s 4.2 million security cameras (and that’s the count as of 2009). That’s one for every 14 people.
While it’s encouraging that the United States and other nations have not followed a similar dubious path, there are reports that facial recognition systems will soon be mobile, and in the hands of police departments across the nation.
From Slate:
According to the Wall Street Journal, police departments across the nation will soon adopt handheld facial-recognition systems that will let them identify people with a snapshot. These new capabilities are made possible by BI2 Technologies, a Massachusetts company that has developed a small device that attaches to officers’ iPhones. The police departments who spoke to the Journal said they plan to use the device only when officers suspect criminal activity and have no other way to identify a person—for instance, when they stop a driver who isn’t carrying her license. Law enforcement officials also seemed wary about civil liberties concerns. Is snapping someone’s photo from five feet away considered a search? Courts haven’t decided the issue, but sheriffs who spoke to the paper say they plan to exercise caution.
Don’t believe it. Soon, face recognition will be ubiquitous. While the police may promise to tread lightly, the technology is likely to become so good, so quickly that officers will find themselves reaching for their cameras in all kinds of situations. The police will still likely use traditional ID technologies like fingerprinting—or even iris scanning—as these are generally more accurate than face-scanning, but face-scanning has an obvious advantage over fingerprints: It works from far away. Bunch of guys loitering on the corner? Scantily clad woman hanging around that run-down motel? Two dudes who look like they’re smoking a funny-looking cigarette? Why not snap them all just to make sure they’re on the up-and-up?
Sure, this isn’t a new worry. Early in 2001, police scanned the faces of people going to the Super Bowl, and officials rolled out the technology at Logan Airport in Boston after 9/11. Those efforts raised a stink, and the authorities decided to pull back. But society has changed profoundly in the last decade, and face recognition is now set to go mainstream. What’s more, the police may be the least of your worries. In the coming years—if not months—we’ll see a slew of apps that allow your friends and neighbors to snap your face and get your name and other information you’ve put online. This isn’t a theoretical worry; the technology exists, now, to do this sort of thing crudely, and the only thing stopping companies from deploying it widely is a fear of public outcry. That fear won’t last long. Face recognition for everyone is coming. Get used to it.
- Saluting a Fantastic Machine and Courageous Astronauts> From the New York Times:
The last space shuttle flight rolled to a stop just before 6 a.m. on Thursday, closing an era of the nation’s space program.
“Mission complete, Houston,” said Capt. Christopher J. Ferguson of the Navy, commander of the shuttle Atlantis for the last flight. “After serving the world for over 30 years, the space shuttle has earned its place in history, and it’s come to a final stop.”
It was the 19th night landing at the Kennedy Space Center in Florida to end the 135th space shuttle mission. For Atlantis, the final tally of its 26-year career is 33 missions, accumulating just short of 126 million miles during 307 days in space, circumnavigating the Earth 4,848 times.
A permanent marker will be placed on the runway to indicate the final resting spot of the space shuttle program.
The last day in space went smoothly. Late on Wednesday night, the crew awoke to the Kate Smith version of “God Bless America.” With no weather or technical concerns, the crew closed the payload doors at 2:09 a.m. on Thursday.
At 4:13 a.m., Barry E. Wilmore, an astronaut at mission control in Houston, told the Atlantis crew, “Everything is looking fantastic, there you are go for the deorbit burn, and you can maneuver on time.”
“That’s great, Butch,” replied Captain Ferguson. “Go on the deorbit maneuver, on time.”
Image courtesy of Philip Scott Andrews/The New York Times.
- Equation: How GPS Bends Time> From Wired:
Einstein knew what he was talking about with that relativity stuff. For proof, just look at your GPS. The global positioning system relies on 24 satellites that transmit time-stamped information on where they are. Your GPS unit registers the exact time at which it receives that information from each satellite and then calculates how long it took for the individual signals to arrive. By multiplying the elapsed time by the speed of light, it can figure out how far it is from each satellite, compare those distances, and calculate its own position.
For accuracy to within a few meters, the satellites’ atomic clocks have to be extremely precise—plus or minus 10 nanoseconds. Here’s where things get weird: Those amazingly accurate clocks never seem to run quite right. One second as measured on the satellite never matches a second as measured on Earth—just as Einstein predicted.
According to Einstein’s special theory of relativity, a clock that’s traveling fast will appear to run slowly from the perspective of someone standing still. Satellites move at about 9,000 mph—enough to make their onboard clocks slow down by 8 microseconds per day from the perspective of a GPS gadget and totally screw up the location data. To counter this effect, the GPS system adjusts the time it gets from the satellites by using the equation here. (Don’t even get us started on the impact of general relativity.)
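The special-relativity correction quoted above is easy to reproduce: the short calculation below simply plugs the article’s 9,000 mph into the standard Lorentz factor. (The general-relativity correction, which runs in the opposite direction and is actually larger, is left out here, just as the article leaves it out.)

```python
import math

C = 299_792_458.0                  # speed of light, m/s
MPH_TO_MS = 0.44704
SECONDS_PER_DAY = 86_400

v = 9_000 * MPH_TO_MS              # orbital speed quoted in the article, m/s
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)   # Lorentz factor

# How far the satellite's clock falls behind a ground clock each day
lag_per_day = SECONDS_PER_DAY * (1.0 - 1.0 / gamma)
print(f"Special-relativity lag: {lag_per_day * 1e6:.1f} microseconds per day")
# Roughly 8 microseconds per day, matching the figure in the article.
```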
- Hello Internet; Goodbye Memory>
Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.
Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet, it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one cited below from Columbia University, explore this in detail.
From Technology Review:
A study says that we rely on external tools, including the Internet, to augment our memory.
The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.
Researchers and writers have debated over how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.
Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.
One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.
- 3D Printing: A Demonstration>
Three-dimensional “printing” has been around for a few years now, but the technology continues to advance by leaps and bounds. It has already progressed to such an extent that some 3D print machines can now “print” objects with moving parts, and in color as well. And we all thought those cool replicator machines in Star Trek were the stuff of science fiction.
- The Allure of Steampunk Videotelephony and the Telephonoscope>
A concept for the videophone surfaced just a couple of years after the telephone was patented in the United States. The telephonoscope, as it was called, first appeared in Victorian journals and early French science fiction in 1878.
In 1891 Alexander Graham Bell recorded his concept of an electrical radiophone, which discussed, “…the possibility of seeing by electricity”. He later went on to predict that, “…the day would come when the man at the telephone would be able to see the distant person to whom he was speaking”.
The world’s first videophone entered service in 1934, in Germany. The service was offered in select post offices linking several major German cities, and provided bi-directional voice and image on 8-inch square displays. In the U.S., AT&T launched the Picturephone in the mid-1960s. However, the costly equipment, high cost per call, and inconveniently located public video-telephone booths ensured that the service would never gain public acceptance. Similar to the U.S. experience, major telephone companies in France, Japan and Sweden had limited success with video-telephony during the 1970s-80s.
Major improvements in video technology, telecommunications deregulation and increases in bandwidth during the 1980s-90s brought the price point down considerably. However, significant usage remained mostly within the realm of major corporations due to the still not insignificant investment in equipment and cost of bandwidth.
Fast forward to the 21st century. Skype and other IP (internet protocol) based services have made videochat commonplace and affordable, and in most cases free. It now seems that videochat has become almost ubiquitous. Recent moves into this space by tech heavyweights like Apple with FaceTime, Microsoft with its acquisition of Skype, Google with its Google Plus social network video calling component, and Facebook’s new video calling service will in all likelihood add further momentum.
Of course, while videochat is an effective communication tool, it carries personal and social costs that its non-video cousin, the telephone, does not. Next time you videochat rather than make a telephone call, you will surely be paying greater attention to your bad hair and poor grooming, your crumpled clothes, uncoordinated pajamas or lack thereof, the unwanted visitors in the background shot, and the not so subtle back-lighting that focuses attention on the clutter in your office or bedroom. Doesn’t it make you hark back to the days of the simple telephone? Either that or perhaps you are drawn to the more alluring and elegant steampunk form of videochat as imagined by the Victorians, in the image above.
- The Homogenous Culture of "Like"> Echo and Narcissus, John William Waterhouse [Public domain], via Wikimedia Commons
About 12 months ago I committed suicide — internet suicide that is. I closed my personal Facebook account after recognizing several important issues. First, it was a colossal waste of time; time that I could and should be using more productively. Second, it became apparent that following, belonging and agreeing with others through the trivial “wall” status-in-a-can postings and now pervasive “like button” was nothing other than a declaration of mindless group-think and a curious way to maintain social standing. So, my choice was clear: become part of a group that had similar interests, like-minded activities, same politics, parallel beliefs, common likes and dislikes; or revert to my own weirdly independent path. I chose the latter, rejecting the road towards a homogeneity of ideas and a points-based system of instant self-esteem.
This facet of the Facebook ecosystem has an effect similar to the filter bubble that I described in a previous post, The Technology of Personalization and the Bubble Syndrome. In both cases my explicit choices on Facebook, such as which friends I follow or which content I “like”, and my implicit browsing behaviors increasingly filter what I see and don’t see, narrowing the world of ideas to which I am exposed. This cannot be good.
So, although I may incur the wrath of author Neil Strauss for including an excerpt of his recent column below, I cannot help but “like” what he has to say. More importantly, he does a much more eloquent job of describing how “like” culture commoditizes social relationships and, dare I say it, lowers the barrier to entry for narcissists to grow and fine-tune their skills. By Neil Strauss for the Wall Street Journal:
If you happen to be reading this article online, you’ll notice that right above it, there is a button labeled “like.” Please stop reading and click on “like” right now.
Thank you. I feel much better. It’s good to be liked.
Don’t forget to comment on, tweet, blog about and StumbleUpon this article. And be sure to “+1” it if you’re on the newly launched Google+ social network. In fact, if you don’t want to read the rest of this article, at least stay on the page for a few minutes before clicking elsewhere. That way, it will appear to the site analytics as if you’ve read the whole thing.
Once, there was something called a point of view. And, after much strife and conflict, it eventually became a commonly held idea in some parts of the world that people were entitled to their own points of view.
Unfortunately, this idea is becoming an anachronism. When the Internet first came into public use, it was hailed as a liberation from conformity, a floating world ruled by passion, creativity, innovation and freedom of information. When it was hijacked first by advertising and then by commerce, it seemed like it had been fully co-opted and brought into line with human greed and ambition.
But there was one other element of human nature that the Internet still needed to conquer: the need to belong. The “like” button began on the website FriendFeed in 2007, appeared on Facebook in 2009, began spreading everywhere from YouTube to Amazon to most major news sites last year, and has now been officially embraced by Google as the agreeable, supportive and more status-conscious “+1.” As a result, we can now search not just for information, merchandise and kitten videos on the Internet, but for approval.
Just as stand-up comedians are trained to be funny by observing which of their lines and expressions are greeted with laughter, so too are our thoughts online molded to conform to popular opinion by these buttons. A status update that is met with no likes (or a clever tweet that isn’t retweeted) becomes the equivalent of a joke met with silence. It must be rethought and rewritten. And so we don’t show our true selves online, but a mask designed to conform to the opinions of those around us.
Conversely, when we’re looking at someone else’s content—whether a video or a news story—we are able to see first how many people liked it and, often, whether our friends liked it. And so we are encouraged not to form our own opinion but to look to others for cues on how to feel.
“Like” culture is antithetical to the concept of self-esteem, which a healthy individual should be developing from the inside out rather than from the outside in. Instead, we are shaped by our stats, which include not just “likes” but the number of comments generated in response to what we write and the number of friends or followers we have. I’ve seen rock stars agonize over the fact that another artist has far more Facebook “likes” and Twitter followers than they do.
- Solar power from space: Beam it down, Scotty> From the Economist:
THE idea of collecting solar energy in space and beaming it to Earth has been around for at least 70 years. In “Reason”, a short story by Isaac Asimov that was published in 1941, a space station transmits energy collected from the sun to various planets using microwave beams.
The advantage of intercepting sunlight in space, instead of letting it find its own way through the atmosphere, is that so much gets absorbed by the air. By converting it to the right frequency first (one of the so-called windows in the atmosphere, in which little energy is absorbed) a space-based collector could, enthusiasts claim, yield on average five times as much power as one located on the ground.
The disadvantage is cost. Launching and maintaining suitable satellites would be ludicrously expensive. But perhaps not, if the satellites were small and the customers specialised. Military expeditions, rescuers in disaster zones, remote desalination plants and scientific-research bases might be willing to pay for such power from the sky. And a research group based at the University of Surrey, in England, hopes that in a few years it will be possible to offer it to them.
This summer, Stephen Sweeney and his colleagues will test a laser that would do the job which Asimov assigned to microwaves. Certainly, microwaves would work: a test carried out in 2008 transmitted useful amounts of microwave energy between two Hawaiian islands 148km (92 miles) apart, so penetrating the 100km of the atmosphere would be a doddle. But microwaves spread out as they propagate. A collector on Earth that was picking up power from a geostationary satellite orbiting at an altitude of 35,800km would need to be spread over hundreds of square metres. Using a laser means the collector need be only tens of square metres in area.
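Why the choice of wavelength matters so much comes down to diffraction: the ground footprint of any beam grows with its wavelength. Here is a rough textbook relation as a sketch; the circular-aperture factor and the symbols D (transmitter aperture) and L (range) are standard physics assumptions, not figures from the article.

```latex
% Diffraction-limited footprint of a beam sent from range L through a
% transmitting aperture of diameter D:
\[
  d_{\mathrm{spot}} \;\approx\; \frac{2.44\,\lambda\, L}{D}
\]
% A 2.45 GHz microwave (lambda ~ 12 cm) has a wavelength roughly 10^5
% times that of a near-infrared laser (lambda ~ 1 micrometre), so for a
% given aperture and orbit the microwave footprint, and hence the ground
% collector or the transmitting antenna, must be vastly larger.
```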
- Life of a Facebook Photo>
Before photo-sharing, photo blogs, photo friending, “PhotoShopping” and countless other photo-enabled apps and services, there was compose, point, focus, click, develop, print. The process seemed a lot simpler way back then. Perhaps this was due to the lack of options for both input and output. Input? Simple. Go buy a real camera. Output? Simple. Slides or prints. The end.
The options for input and output have exploded by orders of magnitude over the last couple of decades. Nowadays, even my toaster can take pictures and I can output them on my digital refrigerator, sans, of course, real photographs with that limp, bendable magnetic backing. The entire end-to-end process of taking a photograph and sharing it with someone else is now replete with so many choices and options that today it seems to have become inordinately more complex.
So, to help all prehistoric photographers like me, here’s an interesting process flow for your digital images in the age of Facebook. From Pixable:
- Online Advertising Spaghetti>
The technology and business model that is online advertising has evolved and matured significantly since the early days of “pay-per-click”. The team at Infographic Labs does a wonderful job below of bringing the current model, in all its spaghetti-like glory, to life. From Infographic Labs:
- The Technology of Personalization and the Bubble Syndrome>
A decade ago, in another place and era, during my days as director of technology research for a Fortune X company, I tinkered with a cool array of then-new personalization tools. The aim was simple: use some of these emerging technologies to deliver a more customized and personalized user experience for our customers and suppliers. What could be wrong with that? Surely, custom tools and more personalized data could do nothing but improve knowledge and enhance business relationships for all concerned. Our customers would benefit from seeing only the information they asked for, our suppliers would benefit from better analysis and filtered feedback, and we, the corporation in the middle, would benefit from making everyone in our supply chain more efficient and happy. Advertisers would be even happier since with more focused data they would be able to deliver messages that were increasingly more precise and relevant based on personal context.
Fast forward to the present. Customization, or filtering, technologies have indeed helped optimize the supply chain; personalization tools and services have made customer experiences more focused and efficient. In today’s online world it’s so much easier to find, navigate and transact when the supplier at the other end of our browser knows who we are, where we live, what we earn, what we like and dislike, and so on. After all, if a supplier knows my needs, requirements, options, status and even personality, I’m much more likely to only receive information, services or products that fall within the bounds that define “me” in the supplier’s database.
And, therein lies the crux of the issue that has helped me to realize that personalization offers a false promise despite the seemingly obvious benefits to all concerned. The benefits are outweighed by two key issues: erosion of privacy and the bubble syndrome.
Privacy as Commodity
I’ll not dwell too long on the issue of privacy since in this article I’m much more concerned with the personalization bubble. However, as we have increasingly seen in recent times, privacy in all its forms is becoming a scarce and tradable commodity. Much of our data is now in the hands of a plethora of suppliers, intermediaries and their partners, ready for continued monetization. Our locations are constantly pinged and polled; our internet browsers note our web surfing habits and preferences; our purchases generate genius suggestions and recommendations to further whet our consumerist desires. Now in digital form, this data is open to legitimate sharing and highly vulnerable to discovery by hackers, phishers, spammers and anyone with the technical or financial resources.
Personalization technologies filter content at various levels, minutely and broadly, both overtly and covertly. For instance, I may explicitly signal my preferences for certain types of clothing deals at my favorite online retailer by answering a quick retail survey or checking a handful of specific preference buttons on a website.
However, my previous online purchases, browsing behaviors, time spent on various online pages, visits to other online retailers and a range of other flags deliver a range of implicit or “covert” information to the same retailer (and others). This helps the retailer filter, customize and personalize what I get to see even before I have made a conscious decision to limit my searches and exposure to information. Clearly, this is not too concerning when my retailer knows I’m male and usually purchase size 32 inch jeans; after all, why would I need to see deals or product information for women’s shoes?
But, this type of covert filtering becomes more worrisome when the data being filtered and personalized is information, news, opinion and comment in all its glorious diversity. Sophisticated media organizations, information portals, aggregators and news services can deliver personalized and filtered information based on your overt and covert personal preferences as well. So, if you subscribe only to a certain type of information based on topic, interest, political persuasion or other dimension your personalized news services will continue to deliver mostly or only this type of information. And, as I have already described, your online behaviors will deliver additional filtering parameters to these news and information providers so that they may further personalize and narrow your consumption of information.
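To make the overt/covert distinction concrete, here is a minimal hypothetical sketch of how explicit preferences and implicit behavioral signals might be blended into a single relevance filter. The field names, weights and threshold are invented for illustration; they do not describe any particular retailer's or publisher's actual system.

```python
# Hypothetical sketch of overt + covert personalization filtering.
# All field names, weights, and the threshold are illustrative assumptions.

def relevance_score(item, explicit_prefs, implicit_signals):
    """Blend declared preferences with observed behavior into one score."""
    score = 0.0
    # Overt signal: topics the user explicitly opted into.
    if item["topic"] in explicit_prefs["topics"]:
        score += 0.5
    # Covert signal: interest inferred from past clicks and dwell time.
    score += 0.4 * implicit_signals.get(item["topic"], 0.0)
    # Covert signal: what "people like you" engaged with.
    score += 0.1 * item.get("popularity_in_segment", 0.0)
    return score

def personalize(items, explicit_prefs, implicit_signals, threshold=0.4):
    """Anything below the threshold is silently dropped; this is where the bubble forms."""
    return [item for item in items
            if relevance_score(item, explicit_prefs, implicit_signals) >= threshold]

# Example: a story outside the user's declared and observed interests
# never reaches the screen, and the user never learns it existed.
items = [
    {"topic": "menswear", "popularity_in_segment": 0.9},
    {"topic": "opposing-viewpoint-politics", "popularity_in_segment": 0.2},
]
prefs = {"topics": {"menswear"}}
signals = {"menswear": 0.8}
print(personalize(items, prefs, signals))  # only the menswear item survives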
Increasingly, we will not be aware of what we don’t know. Whether explicitly or not, our use of personalization technologies will have the ability to build a filter, a bubble, around us, which will permit only information that we wish to see or that which our online suppliers wish us to see. We’ll not even get exposed to peripheral and tangential information — that information which lies outside the bubble. This filtering of the rich oceans of diverse information to a mono-dimensional stream will have profound implications for our social and cultural fabric.
I assume that our increasingly crowded planet will require ever more creativity, insight, tolerance and empathy as we tackle humanity’s many social and political challenges in the future. And, these very seeds of creativity, insight, tolerance and empathy are those that are most at risk from the personalization filter. How are we to be more tolerant of others’ opinions if we are never exposed to them in the first place? How are we to gain insight when disparate knowledge is no longer available for serendipitous discovery? How are we to become more creative if we are less exposed to ideas outside of our normal sphere, our bubble?
For some ideas on how to punch a few holes in your online filter bubble read Eli Pariser’s practical guide, here.
Filter Bubble image courtesy of TechCrunch.
- Lemonade without the Lemons: New Search Engine Looks for Uplifting News> From Scientific American:
Good news, if you haven’t noticed, has always been a rare commodity. We all have our ways of coping, but the media’s pessimistic proclivity presented a serious problem for Jurriaan Kamp, editor of the San Francisco-based Ode magazine—a must-read for “intelligent optimists”—who was in dire need of an editorial pick-me-up, last year in particular. His bright idea: an algorithm that can sense the tone of daily news and separate the uplifting stories from the Debbie Downers.
Talk about a ripe moment: A Pew survey last month found the number of Americans hearing “mostly bad” news about the economy and other issues is at its highest since the downturn in 2008. That is unlikely to change anytime soon: global obesity rates are climbing, the Middle East is unstable, and campaign 2012 vitriol is only just beginning to spew in the U.S. The problem is not trivial. A handful of studies, including one published in the Clinical Psychology Review in 2010, have linked positive thinking to better health. Another from the Journal of Economic Psychology the year prior found upbeat people can even make more money.
Kamp, realizing he could be a purveyor of optimism in an untapped market, partnered with Federated Media Publishing, a San Francisco–based company that leads the field in search semantics. The aim was to create an automated system for Ode to sort and aggregate news from the world’s 60 largest news sources based on solutions, not problems. The system, released last week in public beta testing online and to be formally introduced in the next few months, runs thousands of directives to find a story’s context. “It’s kind of like playing 20 questions, building an ontology to find either optimism or pessimism,” says Tim Musgrove, the chief scientist who designed the broader system, which has been dubbed a “slant engine”. Think of the word “hydrogen” paired with “energy” rather than “bomb.”
Web semantics developers in recent years have trained computers to classify news topics based on intuitive keywords and recognizable names. But the slant engine dives deeper into algorithmic programming. It starts by classifying a story’s topic as either a world problem (disease and poverty, for example) or a social good (health care and education). Then it looks for revealing phrases. “Efforts against” in a story, referring to a world problem, would signal something good. “Setbacks to” a social good, likely bad. Thousands of questions later every story is eventually assigned a score between 0 and 1—above 0.95 fast-tracks the story to Ode’s Web interface, called OdeWire. Below that, a score higher than 0.6 is reviewed by a human. The system is trained to only collect themes that are “meaningfully optimistic,” meaning it throws away flash-in-the-pan stories about things like sports or celebrities.
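The article spells out the routing thresholds (auto-publish above 0.95, human review above 0.6) but not the implementation. The sketch below is a hypothetical rendering of that routing logic; the cue phrases, scoring scheme and function names are invented for this example.

```python
# Illustrative sketch of the "slant engine" routing described above.
# The 0.95 and 0.6 thresholds come from the article; everything else is assumed.

POSITIVE_CUES = {
    ("world_problem", "efforts against"),
    ("social_good", "gains in"),
}
NEGATIVE_CUES = {
    ("world_problem", "rise in"),
    ("social_good", "setbacks to"),
}

def slant_score(topic_class, text):
    """Crude stand-in for the thousands of directives the real system runs."""
    text = text.lower()
    positives = sum(phrase in text for cls, phrase in POSITIVE_CUES if cls == topic_class)
    negatives = sum(phrase in text for cls, phrase in NEGATIVE_CUES if cls == topic_class)
    total = positives + negatives
    return 0.5 if total == 0 else positives / total

def route(topic_class, text):
    score = slant_score(topic_class, text)
    if score > 0.95:
        return "fast-track to OdeWire"
    if score > 0.6:
        return "queue for human review"
    return "discard"

print(route("world_problem", "New efforts against malaria show promise"))  # fast-track
print(route("social_good", "Setbacks to health-care reform mount"))        # discard
```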
- Self-Published Author Sells a Million E-Books on Amazon> From ReadWriteWeb:
Since the Kindle’s launch, Amazon has heralded each new arrival into what it calls the “Kindle Million Club,” the group of authors who have sold over 1 million Kindle e-books. There have been seven authors in this club up ’til now – some of the big names in publishing: Stieg Larsson, James Patterson, and Nora Roberts for example.
But the admission today of the eighth member of this club is really quite extraordinary. Not because John Locke is a 60-year-old former insurance salesman from Kentucky with no writing or publishing background. But because John Locke has accomplished the feat of selling one million e-books as a completely self-published author.
Rather than being published by a major publishing house – with all the perks that have long been associated with that (marketing, book tours, prime shelf space in retail stores) – Locke has sold 1,010,370 Kindle books (as of yesterday) having used Kindle Direct Publishing to get his e-books into the Amazon store. No major publisher. No major marketing.
Locke writes primarily crime and adventure stories, including Vegas Moon, Wish List, and the New York Times E-Book Bestseller, Saving Rachel. Most of the e-books sell for $.99, and he says he makes 35 cents on every sale. That sort of per book profit is something that authors would never get from a traditional book deal.
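For a sense of scale, the figures quoted above imply roughly the following cumulative earnings; this is a simple back-of-the-envelope calculation, not a number reported in the article.

```python
# Back-of-the-envelope earnings implied by the article's own figures.
copies_sold = 1_010_370   # Kindle e-books sold, per the article
royalty_cents = 35        # cents earned on each $0.99 sale, per the article

total_dollars = copies_sold * royalty_cents / 100
print(f"~${total_dollars:,.2f} in cumulative royalties")  # ~$353,629.50
```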
- How Free Is Your Will?> From Scientific American:
Think about the last time you got bored with the TV channel you were watching and decided to change it with the remote control. Or a time you grabbed a magazine off a newsstand, or raised a hand to hail a taxi. As we go about our daily lives, we constantly make choices to act in certain ways. We all believe we exercise free will in such actions – we decide what to do and when to do it. Free will, however, becomes more complicated when you try to think how it can arise from brain activity.
Do we control our neurons or do they control us? If everything we do starts in the brain, what kind of neural activity would reflect free choice? And how would you feel about your free will if we were to tell you that neuroscientists can look at your brain activity, and tell that you are about to make a decision to move – and that they could do this a whole second and a half before you yourself became aware of your own choice?
Scientists from UCLA and Harvard — Itzhak Fried, Roy Mukamel and Gabriel Kreiman — have taken an audacious step in the search for free will, reported in a new article in the journal Neuron. They used a powerful tool – intracranial recording – to find neurons in the human brain whose activity predicts decisions to make a movement, challenging conventional notions of free will.
Fried is one of a handful of neurosurgeons in the world who perform the delicate procedure of inserting electrodes into a living human brain, and using them to record activity from individual neurons. He does this to pin down the source of debilitating seizures in the brains of epileptic patients. Once he locates the part of the patients’ brains that sparks off the seizures, he can remove it, pulling the plug on their neuronal electrical storms.
- Social Media Brands>
Infographic Labs creates some really engaging infographics, so we take the liberty of publishing some of their best, and most relevant, ones on theDiagonal. The team created this summary of top social networking properties for the Blog Herald. From Infographic Labs:
- Search Engine History>
It’s hard to believe that internet-based search engines have been in the mainstream consciousness for around twenty years now. It seems not too long ago that we were all playing Pong and searching index cards at the local library. Infographic Labs puts the last twenty years of search in summary for us below. From Infographic Labs:
- Commonplaces of technology critique> From Eurozine:
What is it good for? A passing fad! It makes you stupid! Today’s technology critique is tomorrow’s embarrassing error of judgement, as Katrin Passig shows. Her suggestion: one should try to avoid repeating the most commonplace critiques, particularly in public.
In a 1969 study on colour designations in different cultures, anthropologist Brent Berlin and linguist Paul Kay described how the sequence of levels of observed progression was always the same. Cultures with only two colour concepts distinguish between “light” and “dark” shades. If the culture recognizes three colours, the third will be red. If the language differentiates further, first come green and/or yellow, then blue. All languages with six colour designations distinguish between black, white, red, green, blue and yellow. The next level is brown, then, in varying sequences, orange, pink, purple and/or grey, with light blue appearing last of all.
The reaction to technical innovations, both in the media and in our private lives, follows similarly preconceived paths. The first, entirely knee-jerk dismissal is the “What the hell is it good for?” (Argument No.1) with which IBM engineer Robert Lloyd greeted the microprocessor in 1968. Even practices and techniques that only constitute a variation on the familiar – the electric typewriter as successor to the mechanical version, for instance – are met with distaste in the cultural criticism sector. Inventions like the telephone or the Internet, which open up a whole new world, have it even tougher. If cultural critics had existed at the dawn of life itself, they would have written grumpily in their magazines: “Life – what is it good for? Things were just fine before.”
Because the new throws into confusion processes that people have got used to, it is often perceived not only as useless but as a downright nuisance. The student Friedrich August Köhler wrote in 1790 after a journey on foot from Tübingen to Ulm: “[Signposts] had been put up everywhere following an edict of the local prince, but their existence proved short-lived, since they tended to be destroyed by a boisterous rabble in most places. This was most often the case in areas where the country folk live scattered about on farms, and when going on business to the next city or village more often than not come home inebriated and, knowing the way as they do, consider signposts unnecessary.”
The Parisians seem to have greeted the introduction of street lighting in 1667 under Louis XIV with a similar lack of enthusiasm. Dietmar Kammerer conjectured in the Süddeutsche Zeitung that the regular destruction of these street lamps represented a protest on the part of the citizens against the loss of their private sphere, since it seemed clear to them that here was “a measure introduced by the king to bring the streets under his control”. A simpler explanation would be that citizens tend in the main to react aggressively to unsupervised innovations in their midst. Recently, Deutsche Bahn explained that the initial vandalism of their “bikes for hire” had died down, now that locals had “grown accustomed to the sight of the bicycles”.
When it turns out that the novelty is not as useless as initially assumed, there follows the brief interregnum of Argument No.2: “Who wants it anyway?” “That’s an amazing invention,” gushed US President Rutherford B. Hayes of the telephone, “but who would ever want to use one of them?” And the film studio boss Harry M. Warner is quoted as asking in 1927, “Who the hell wants to hear actors talk?”.
- Social networking: Failure to connect> From the Guardian:
The first time I joined Facebook, I had to quit again immediately. It was my first week of university. I was alone, along with thousands of other students, in a sea of club nights and quizzes and tedious conversations about other people’s A-levels. This was back when the site was exclusively for students. I had been told, in no uncertain terms, that joining was mandatory. Failure to do so was a form of social suicide worse even than refusing to drink alcohol. I had no choice. I signed up.
Users of Facebook will know the site has one immutable feature. You don’t have to post a profile picture, or share your likes and dislikes with the world, though both are encouraged. You can avoid the news feed, the apps, the tweet-like status updates. You don’t even have to choose a favourite quote. The one thing you cannot get away from is your friend count. It is how Facebook keeps score.
Five years ago, on probably the loneliest week of my life, my newly created Facebook page looked me square in the eye and announced: “You have 0 friends.” I closed the account.
Facebook is not a good place for a lonely person, and not just because of how precisely it quantifies your isolation. The news feed, the default point of entry to the site, is a constantly updated stream of your every friend’s every activity, opinion and photograph. It is a Twitter feed in glorious technicolour, complete with pictures, polls and videos. It exists to make sure you know exactly how much more popular everyone else is, casually informing you that 14 of your friends were tagged in the album “Fun without Tom Meltzer”. It can be, to say the least, disheartening. Without a real-world social network with which to interact, social networking sites act as proof of the old cliché: you’re never so alone as when you’re in a crowd.
The pressures put on teenagers by sites such as Facebook are well-known. Reports of cyber-bullying, happy-slapping, even self-harm and suicide attempts motivated by social networking sites have become increasingly common in the eight years since Friendster – and then MySpace, Bebo and Facebook – launched. But the subtler side-effects for a generation that has grown up with these sites are only now being felt. In March this year, the NSPCC published a detailed breakdown of calls made to ChildLine in the last five years. Though overall the number of calls from children and teenagers had risen by just 10%, calls about loneliness had nearly tripled, from 1,853 five years ago to 5,525 in 2009. Among boys, the number of calls about loneliness was more than five times higher than it had been in 2004.
This is not just a teenage problem. In May, the Mental Health Foundation released a report called The Lonely Society? Its survey found that 53% of 18-34-year-olds had felt depressed because of loneliness, compared with just 32% of people over 55. The question of why was, in part, answered by another of the report’s findings: nearly a third of young people said they spent too much time communicating online and not enough in person.
- What is HTML5>
There is much going on in the world of internet and web standards, including the gradual roll-out of IPv6 and HTML5. HTML5 is a much more functional markup language than its predecessors and is better suited for developing richer user interfaces and interactions. Major highlights of HTML5 appear in the infographic below. From Focus.com:
- The internet: Everything you ever need to know> From The Observer:
In spite of all the answers the internet has given us, its full potential to transform our lives remains the great unknown. Here are the nine key steps to understanding the most powerful tool of our age – and where it’s taking us.
A funny thing happened to us on the way to the future. The internet went from being something exotic to being a boring utility, like mains electricity or running water – and we never really noticed. So we wound up being totally dependent on a system about which we are terminally incurious. You think I exaggerate about the dependence? Well, just ask Estonia, one of the most internet-dependent countries on the planet, which in 2007 was more or less shut down for two weeks by a sustained attack on its network infrastructure. Or imagine what it would be like if, one day, you suddenly found yourself unable to book flights, transfer funds from your bank account, check bus timetables, send email, search Google, call your family using Skype, buy music from Apple or books from Amazon, buy or sell stuff on eBay, watch clips on YouTube or BBC programmes on the iPlayer – or do the 1,001 other things that have become as natural as breathing.
The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That’s not because we’re short of information about the network; on the contrary, we’re awash with the stuff. It’s just that we don’t know what it all means. We’re in the state once described by that great scholar of cyberspace, Manuel Castells, as “informed bewilderment”.
Mainstream media don’t exactly help here, because much – if not most – media coverage of the net is negative. It may be essential for our kids’ education, they concede, but it’s riddled with online predators, seeking children to “groom” for abuse. Google is supposedly “making us stupid” and shattering our concentration into the bargain. It’s also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive “flash mobs” which ambush innocent columnists such as Jan Moir. And so on.
All of which might lead a detached observer to ask: if the internet is such a disaster, how come 27% of the world’s population (or about 1.8 billion people) use it happily every day, while billions more are desperate to get access to it?
So how might we go about getting a more balanced view of the net? What would you really need to know to understand the internet phenomenon? Having thought about it for a while, my conclusion is that all you need is a smallish number of big ideas, which, taken together, sharply reduce the bewilderment of which Castells writes so eloquently.
But how many ideas? In 1956, the psychologist George Miller published a famous paper in the journal Psychological Review. Its title was “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information” and in it Miller set out to summarise some earlier experiments which attempted to measure the limits of people’s short-term memory. In each case he reported that the effective “channel capacity” lay between five and nine choices. Miller did not draw any firm conclusions from this, however, and contented himself by merely conjecturing that “the recurring sevens might represent something deep and profound or be just coincidence”. And that, he probably thought, was that.
But Miller had underestimated the appetite of popular culture for anything with the word “magical” in the title. Instead of being known as a mere aggregator of research results, Miller found himself identified as a kind of sage — a discoverer of a profound truth about human nature. “My problem,” he wrote, “is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals… Either there really is something unusual about the number or else I am suffering from delusions of persecution.”
- What Is I.B.M.’s Watson?> From The New York Times:
“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”
This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?
Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.
For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.
Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.
- Forget Avatar, the real 3D revolution is coming to your front room> From The Guardian:
Enjoy eating goulash? Fed up with needing three pieces of cutlery? It could be that I have a solution for you – and not just for you but for picnickers who like a bit of bread with their soup, too. Or indeed for anyone who has dreamed of seeing the spoon and the knife incorporated into one, easy to use, albeit potentially dangerous instrument. Ladies and gentlemen, I would like to introduce you to the Knoon.
The Knoon came to me in a dream – I had a vision of a soup spoon with a knife stuck to its top, blade pointing upwards. Given the potential for lacerating your mouth on the Knoon’s sharp edge, maybe my dream should have stayed just that. But thanks to a technological leap that is revolutionising manufacturing and, some hope, may even change the nature of our consumer society, I now have a Knoon sitting right in front of me. I had the idea, I drew it up and then I printed my cutlery out.
3D is this year’s buzzword in Hollywood. From Avatar to Clash of the Titans, it’s a new take on an old fad that’s coming to save the movie industry. But with less glitz and a degree less fanfare, 3D printing is changing our vision of the world too, and ultimately its effects might prove a degree more special.
Thinglab is a company that specialises in 3D printing. Based in a nondescript office building in east London, its team works mainly with commercial clients to print models that would previously have been assembled by hand. Architects design their buildings in 3D software packages and pass them to Thinglab to print scale models. When mobile phone companies come up with a new handset, they print prototypes first in order to test size, shape and feel. Jewellers not only make prototypes, they use them as a basis for moulds. Sculptors can scan in their original works, adjust the dimensions and rattle off a series of duplicates (signatures can be added later).
All this work is done in the Thinglab basement, a kind of temple to 3D where motion capture suits hang from the wall and a series of next generation TV screens (no need for 3D glasses) sit in the corner. In the middle of the room lurk two hulking 3D printers. Their facades give them the faces of miserable robots.
“We had David Hockney in here recently and he was gobsmacked,” says Robin Thomas, one of Thinglab’s directors, reeling off a list of intrigued celebrities who have made a pilgrimage to his basement. “Boy George came in and we took a scan of his face.” Above the printers sit a collection of the models they’ve produced: everything from a car’s suspension system to a rendering of John Cleese’s head. “If a creative person wakes up in the morning with an idea,” says Thomas, “they could have a model by the end of the day. People who would have spent days, weeks, months on these types of models can now do it with a printer. If they can think of it, we can make it.”
- Cloud Computing: Public vs Private> From Wikibon Blog:
- The Man Who Builds Brains> From Discover:
On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.
Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.
Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (teraflops being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot with this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
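Taking the figures quoted above at face value, the scale of the machine works out roughly as follows; the products are a quick sanity check derived from the article's numbers, not figures the article itself reports.

```python
# Rough scale of the Blue Gene/P setup, derived from figures quoted above.
racks = 4                          # refrigerator-size machines
teraflops_per_rack = 56            # peak speed per rack, per the article
processors = 16_000                # "more than 16,000" processors
neurons_per_processor = 1_000      # virtual neurons simulated per processor
cost_per_rack = 2_000_000          # dollars per rack, per the article

print(f"Peak speed:      ~{racks * teraflops_per_rack} teraflops")   # ~224
print(f"Virtual neurons: ~{processors * neurons_per_processor:,}")   # ~16,000,000
print(f"Hardware cost:   ~${racks * cost_per_rack:,}")               # ~$8,000,000
```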
- The Madness of Crowds and an Internet Delusion> From The New York Times:
RETHINKING THE WEB: Jaron Lanier, pictured here in 1999, was an early proponent of the Internet’s open culture. His new book examines the downsides.
In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.
Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.
He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.
“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”
I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.
So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.
He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.
It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.
Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.
In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.
But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).
Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.
“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.
“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”
Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.
“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”
To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)
Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.
In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.
The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?
- CERN celebrates 20th anniversary of World Wide Web>
theDiagonal doesn’t normally post “newsy” items. So, we are making an exception in this case for two reasons: first, the “web” wasn’t around in 1989 so we wouldn’t have been able to post a news release on our blog announcing its birth; second, in 1989 Tim Berners-Lee’s then manager waved off his proposal with a “Vague, but exciting” annotation, so without the benefit of the hindsight we now have, and lacking the foresight we so desire, we may just have dismissed it. The rest, as they say, “is history”. From Interactions.org:
Web inventor Tim Berners-Lee today returned to the birthplace of his brainchild, 20 years after submitting his paper ‘Information Management: A Proposal’ to his manager Mike Sendall in March 1989. By writing the words ‘Vague, but exciting’ on the document’s cover, and giving Berners-Lee the go-ahead to continue, Sendall signed into existence the information revolution of our time: the World Wide Web. In September the following year, Berners-Lee took delivery of a computer called a NeXT cube, and by December 1990 the Web was up and running, albeit between just a couple of computers at CERN*.
Today’s event takes a look back at some of the early history, and pre-history, of the World Wide Web at CERN, includes a keynote speech from Tim Berners-Lee, and concludes with a series of talks from some of today’s Web pioneers.
“It’s a pleasure to be back at CERN today,” said Berners-Lee. “CERN has come a long way since 1989, and so has the Web, but its roots will always be here.”
The World Wide Web is undoubtedly the most well known spin-off from CERN, but it’s not the only one. Technologies developed at CERN have found applications in domains as varied as solar energy collection and medical imaging.
“When CERN scientists find a technological hurdle in the way of their ambitions, they have a tendency to solve it,” said CERN Director General Rolf Heuer. “I’m pleased to say that the spirit of innovation that allowed Tim Berners-Lee to invent the Web at CERN, and allowed CERN to nurture it, is alive and well today.”
- The society of the query and the Googlization of our lives> From Eurozine:
“There is only one way to turn signals into information, through interpretation”, wrote the computer critic Joseph Weizenbaum. As Google’s hegemony over online content increases, argues Geert Lovink, we should stop searching and start questioning.
A spectre haunts the world’s intellectual elites: information overload. Ordinary people have hijacked strategic resources and are clogging up once carefully policed media channels. Before the Internet, the mandarin classes rested on the idea that they could separate “idle talk” from “knowledge”. With the rise of Internet search engines it is no longer possible to distinguish between patrician insights and plebeian gossip. The distinction between high and low, and their co-mingling on occasions of carnival, belong to a bygone era and should no longer concern us. Nowadays an altogether new phenomenon is causing alarm: search engines rank according to popularity, not truth. Search is the way we now live. With the dramatic increase of accessed information, we have become hooked on retrieval tools. We look for telephone numbers, addresses, opening times, a person’s name, flight details, best deals and in a frantic mood declare the ever growing pile of grey matter “data trash”. Soon we will search and only get lost. Old hierarchies of communication have not only imploded, communication itself has assumed the status of cerebral assault. Not only has popular noise risen to unbearable levels, we can no longer stand yet another request from colleagues, and even a benign greeting from friends and family has acquired the status of a chore with the expectation of reply. The educated class deplores the fact that chatter has entered the hitherto protected domain of science and philosophy, when instead they should be worrying about who is going to control the increasingly centralized computing grid.
What today’s administrators of noble simplicity and quiet grandeur cannot express, we should say for them: there is a growing discontent with Google and the way the Internet organizes information retrieval. The scientific establishment has lost control over one of its key research projects – the design and ownership of computer networks, now used by billions of people. How did so many people end up being that dependent on a single search engine? Why are we repeating the Microsoft saga once again? It seems boring to complain about a monopoly in the making when average Internet users have such a multitude of tools at their disposal to distribute power. One possible way to overcome this predicament would be to positively redefine Heidegger’s Gerede. Instead of a culture of complaint that dreams of an undisturbed offline life and radical measures to filter out the noise, it is time to openly confront the trivial forms of Dasein today found in blogs, text messages and computer games. Intellectuals should no longer portray Internet users as secondary amateurs, cut off from a primary and primordial relationship with the world. There is a greater issue at stake and it requires venturing into the politics of informatic life. It is time to address the emergence of a new type of corporation that is rapidly transcending the Internet: Google.
The World Wide Web, which should have realized the infinite library Borges described in his short story The Library of Babel (1941), is seen by many of its critics as nothing but a variation of Orwell’s Big Brother (1948). The ruler, in this case, has turned from an evil monster into a collection of cool youngsters whose corporate responsibility slogan is “Don’t be evil”. Guided by a much older and experienced generation of IT gurus (Eric Schmidt), Internet pioneers (Vint Cerf) and economists (Hal Varian), Google has expanded so fast, and in such a wide variety of fields, that there is virtually no critic, academic or business journalist who has been able to keep up with the scope and speed with which Google developed in recent years. New applications and services pile up like unwanted Christmas presents. Just add Google’s free email service Gmail, the video sharing platform YouTube, the social networking site Orkut, GoogleMaps and GoogleEarth, its main revenue service AdWords with the Pay-Per-Click advertisements, office applications such as Calendar, Talks and Docs. Google not only competes with Microsoft and Yahoo, but also with entertainment firms, public libraries (through its massive book scanning program) and even telecom firms. Believe it or not, the Google Phone is coming soon. I recently heard a less geeky family member saying that she had heard that Google was much better and easier to use than the Internet. It sounded cute, but she was right. Not only has Google become the better Internet, it is taking over software tasks from your own computer so that you can access these data from any terminal or handheld device. Apple’s MacBook Air is a further indication of the migration of data to privately controlled storage bunkers. Security and privacy of information are rapidly becoming the new economy and technology of control. And the majority of users, and indeed companies, are happily abandoning the power to self-govern their informational resources.
- What People are Doing Online>
A fascinating infographic that summarizes what we were all doing online in 2007. From BusinessWeek:
- A Digital Life> From Scientific American:
New systems may allow people to record everything they see and hear–and even things they cannot sense–and to store all these data in a personal digital archive.
Human memory can be maddeningly elusive. We stumble upon its limitations every day, when we forget a friend’s telephone number, the name of a business contact or the title of a favorite book. People have developed a variety of strategies for combating forgetfulness–messages scribbled on Post-it notes, for example, or electronic address books carried in handheld devices–but important information continues to slip through the cracks. Recently, however, our team at Microsoft Research has begun a quest to digitally chronicle every aspect of a person’s life, starting with one of our own lives (Bell’s). For the past six years, we have attempted to record all of Bell’s communications with other people and machines, as well as the images he sees, the sounds he hears and the Web sites he visits–storing everything in a personal digital archive that is both searchable and secure.
Digital memories can do more than simply assist the recollection of past events, conversations and projects. Portable sensors can take readings of things that are not even perceived by humans, such as oxygen levels in the blood or the amount of carbon dioxide in the air. Computers can then scan these data to identify patterns: for instance, they might determine which environmental conditions worsen a child’s asthma. Sensors can also log the three billion or so heartbeats in a person’s lifetime, along with other physiological indicators, and warn of a possible heart attack. This information would allow doctors to spot irregularities early, providing warnings before an illness becomes serious. Your physician would have access to a detailed, ongoing health record, and you would no longer have to rack your brain to answer questions such as “When did you first feel this way?”
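To make that pattern-scanning idea concrete, here is a minimal sketch of how software might flag readings in a personal sensor log that stray far from their own rolling baseline. This is not Microsoft Research's actual system; the data, field names and threshold are invented for illustration.

```python
# Minimal sketch: scanning a personal sensor log for anomalies, in the
# spirit of the lifelogging scenario above. Data and threshold are invented.
from statistics import mean, stdev

def flag_anomalies(readings, window=50, z_threshold=3.0):
    """Return (index, value) pairs that deviate sharply from the rolling
    baseline built from the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append((i, readings[i]))
    return flagged

# Example: a steady resting heart-rate trace with one implausible spike.
heart_rate = [62 + (i % 5) for i in range(200)]
heart_rate[120] = 140  # the kind of outlier a physician would want to see
print(flag_anomalies(heart_rate))  # flags the spike at index 120
```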
- Viral Nanoelectronics> From Scientific American:
M.I.T. breeds viruses that coat themselves in selected substances, then self-assemble into such devices as liquid crystals, nanowires and electrodes.
For many years, materials scientists wanted to know how the abalone, a marine snail, constructed its magnificently strong shell from unpromising minerals, so that they could make similar materials themselves. Angela M. Belcher asked a different question: Why not get the abalone to make things for us?
She put a thin glass slip between the abalone and its shell, then removed it. “We got a flat pearl,” she says, “which we could use to study shell formation on an hour-by-hour basis, without having to sacrifice the animal.” It turns out the abalone manufactures proteins that induce calcium carbonate molecules to adopt two distinct yet seamlessly melded crystalline forms–one strong, the other fast-growing. The work earned her a Ph.D. from the University of California, Santa Barbara, in 1997 and paved her way to consultancies with the pearl industry, a professorship at the Massachusetts Institute of Technology, and a founding role in a start-up company called Cambrios in Mountain View, Calif.
- A Plan to Keep Carbon in Check> By Robert H. Socolow and Stephen W. Pacala, from Scientific American:
Getting a grip on greenhouse gases is daunting but doable. The technologies already exist. But there is no time to lose.
Retreating glaciers, stronger hurricanes, hotter summers, thinner polar bears: the ominous harbingers of global warming are driving companies and governments to work toward an unprecedented change in the historical pattern of fossil-fuel use. Faster and faster, year after year for two centuries, human beings have been transferring carbon to the atmosphere from below the surface of the earth. Today the world’s coal, oil and natural gas industries dig up and pump out about seven billion tons of carbon a year, and society burns nearly all of it, releasing carbon dioxide (CO2). Ever more people are convinced that prudence dictates a reversal of the present course of rising CO2 emissions.
The boundary separating the truly dangerous consequences of emissions from the merely unwise is probably located near (but below) a doubling of the concentration of CO2 that was in the atmosphere in the 18th century, before the Industrial Revolution began. Every increase in concentration carries new risks, but avoiding that danger zone would reduce the likelihood of triggering major, irreversible climate changes, such as the disappearance of the Greenland ice cap. Two years ago the two of us provided a simple framework to relate future CO2 emissions to this goal.
- Plan B for Energy> From Scientific American:
If efficiency improvements and incremental advances in today’s technologies fail to halt global warming, could revolutionary new carbon-free energy sources save the day? Don’t count on it–but don’t count it out, either.
To keep this world tolerable for life as we like it, humanity must complete a marathon of technological change whose finish line lies far over the horizon. Robert H. Socolow and Stephen W. Pacala of Princeton University have compared the feat to a multigenerational relay race [see their article "A Plan to Keep Carbon in Check"]. They outline a strategy to win the first 50-year leg by reining back carbon dioxide emissions from a century of unbridled acceleration. Existing technologies, applied both wisely and promptly, should carry us to this first milestone without trampling the global economy. That is a sound plan A.
The plan is far from foolproof, however. It depends on societies ramping up an array of carbon-reducing practices to form seven “wedges,” each of which keeps 25 billion tons of carbon in the ground and out of the air. Any slow starts or early plateaus will pull us off track. And some scientists worry that stabilizing greenhouse gas emissions will require up to 18 wedges by 2056, not the seven that Socolow and Pacala forecast in their most widely cited model.
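The wedge bookkeeping is simple enough to check on the back of an envelope: each wedge ramps linearly from zero to a billion tons of carbon avoided per year over 50 years, so the triangle it traces out is the 25 billion tons cited above. The snippet below is just that arithmetic, not the authors' model, multiplied out for the seven-wedge and eighteen-wedge cases.

```python
# Back-of-the-envelope wedge arithmetic from the article above: each wedge
# ramps linearly from 0 to 1 GtC/yr avoided over 50 years, so its cumulative
# area is 0.5 * 50 * 1 = 25 GtC (the 25 billion tons cited in the text).
YEARS = 50
RATE_AT_END = 1.0  # GtC/yr avoided by year 50, per wedge

carbon_per_wedge = 0.5 * YEARS * RATE_AT_END  # triangle area, in GtC

for wedges in (7, 18):  # Socolow/Pacala's seven vs. the 18 some scientists fear
    print(f"{wedges} wedges keep {wedges * carbon_per_wedge:.0f} GtC out of the air")
```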
- A Power Grid for the Hydrogen Economy> From Scientific American:
On the afternoon of August 14, 2003, electricity failed to arrive in New York City, plunging the eight million inhabitants of the Big Apple–along with 40 million other people throughout the northeastern U.S. and Ontario–into a tense night of darkness. After one power plant in Ohio had shut down, elevated power loads overheated high-voltage lines, which sagged into trees and short-circuited. Like toppling dominoes, the failures cascaded through the electrical grid, knocking 265 power plants offline and darkening 24,000 square kilometers.
That incident–and an even more extensive blackout that affected 56 million people in Italy and Switzerland a month later–called attention to pervasive problems with modern civilization’s vital equivalent of a biological circulatory system, its interconnected electrical networks. In North America the electrical grid has evolved in piecemeal fashion over the past 100 years. Today the more than $1-trillion infrastructure spans the continent with millions of kilometers of wire operating at up to 765,000 volts. Despite its importance, no single organization has control over the operation, maintenance or protection of the grid; the same is true in Europe. Dozens of utilities must cooperate even as they compete to generate and deliver, every second, exactly as much power as customers demand–and no more. The 2003 blackouts raised calls for greater government oversight and spurred the industry to move more quickly, through its IntelliGrid Consortium and the GridWise program of the U.S. Department of Energy, to create self-healing systems for the grid that may prevent some kinds of outages from cascading. But reliability is not the only challenge–and arguably not even the most important challenge–that the grid faces in the decades ahead.
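The domino effect is easy to caricature in a few lines of code. The toy simulation below is a deliberately crude illustration, not a model of the real Eastern Interconnection: it drops one line, dumps that line's load onto the survivors, and lets anything pushed past capacity fail in turn.

```python
# Toy cascade: a simplified illustration of the domino effect described
# above, not a model of the actual 2003 grid.
def cascade(loads, capacities, first_failure):
    """loads/capacities: per-line values; failed lines dump their load
    evenly onto the surviving lines until everything fits or fails."""
    failed = {first_failure}
    while True:
        survivors = [i for i in range(len(loads)) if i not in failed]
        if not survivors:
            return failed
        # Redistribute the load of every failed line across the survivors.
        extra = sum(loads[i] for i in failed) / len(survivors)
        newly_failed = {i for i in survivors if loads[i] + extra > capacities[i]}
        if not newly_failed:
            return failed
        failed |= newly_failed

# Five lines running near their limits; losing line 0 overloads the rest.
loads      = [90, 80, 85, 70, 75]
capacities = [100, 100, 100, 100, 100]
print(sorted(cascade(loads, capacities, first_failure=0)))  # all five fail
```

Run as-is, the loss of line 0 takes all five lines down, which is the qualitative story of August 14 writ very small.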
- Dependable Software by Design> From Scientific American:
Computers fly our airliners and run most of the world’s banking, communications, retail and manufacturing systems. Now powerful analysis tools will at last help software engineers ensure the reliability of their designs.
When it opened 11 years ago, the new Denver International Airport was an architectural marvel, and its high-tech jewel was to be its automated baggage handler. It would autonomously route luggage around 26 miles of conveyors for rapid, seamless delivery to planes and passengers. But software problems dogged the system, delaying the airport’s opening by 16 months and adding hundreds of millions of dollars in cost overruns. Despite years of tweaking, it never ran reliably. Last summer airport managers finally pulled the plug–reverting to traditional manually loaded baggage carts and tugs with human drivers. The mechanized handler’s designer, BAE Automated Systems, was liquidated, and United Airlines, its principal user, slipped into bankruptcy, in part because of the mess.
The high price of poor software design is paid daily by millions of frustrated users. Other notorious cases include costly debacles at the U.S. Internal Revenue Service (a failed $4-billion modernization effort in 1997, followed by an equally troubled $8-billion updating project); the Federal Bureau of Investigation (a $170-million virtual case-file management system was scrapped in 2005); and the Federal Aviation Administration (a lingering and still unsuccessful attempt to renovate its aging air-traffic control system).
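The analysis tools the article alludes to work by searching a design exhaustively, within some bound, for a case that violates its stated properties, before any code is written. The sketch below mimics that idea on a toy baggage-routing rule; the rule, the safety property and the four-gate bound are all invented for illustration and are not drawn from the article.

```python
# Toy illustration of design-level analysis: exhaustively check a small
# design rule against its safety property over every case within a bound,
# the way design-analysis tools hunt for counterexamples before coding.
from itertools import chain, combinations

def route(destination, open_gates):
    """Hypothetical rule: send a bag to its destination gate if that gate
    is open, otherwise to the lowest-numbered open gate (or nowhere)."""
    if destination in open_gates:
        return destination
    return min(open_gates) if open_gates else None

def safe(destination, open_gates):
    """Safety property: a bag is never routed to a closed gate."""
    gate = route(destination, open_gates)
    return gate is None or gate in open_gates

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

gates = [0, 1, 2, 3]  # the bound: check every destination and every set of open gates
counterexamples = [(d, set(s)) for d in gates for s in powerset(gates)
                   if not safe(d, set(s))]
print("counterexamples found:", counterexamples)  # expect an empty list
```

Run as-is it reports an empty list; change `route` to ignore `open_gates` and the checker immediately surfaces a violating case, which is the point of doing the analysis at design time rather than after the conveyors are built.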