Category Archives: Technica

Journey to the Center of Consumerism

Our collective addiction to purchasing anything, anytime may be wonderfully satisfying for a culture that collects objects and values unrestricted choice and instant gratification. However, it comes at a human cost: not merely for those who produce our toys, clothes, electronics and furnishings in faraway, anonymous factories, but for those who get the products to our swollen mailboxes.

An intrepid journalist ventured to the very heart of the beast — an Amazon fulfillment center — to discover how the blood of internet commerce circulates; the Observer’s Carole Cadwalladr worked at Amazon’s warehouse in Swansea, UK, for a week. We excerpt her tale below.

From the Guardian:

The first item I see in Amazon’s Swansea warehouse is a package of dog nappies. The second is a massive pink plastic dildo. The warehouse is 800,000 square feet, or, in what is Amazon’s standard unit of measurement, the size of 11 football pitches (its Dunfermline warehouse, the UK’s largest, is 14 football pitches). It is a quarter of a mile from end to end. There is space, it turns out, for an awful lot of crap.

But then there are more than 100m items on its UK website: if you can possibly imagine it, Amazon sells it. And if you can’t possibly imagine it, well, Amazon sells it too. To spend 10½ hours a day picking items off the shelves is to contemplate the darkest recesses of our consumerist desires, the wilder reaches of stuff, the things that money can buy: a One Direction charm bracelet, a dog onesie, a cat scratching post designed to look like a DJ’s record deck, a banana slicer, a fake twig. I work mostly in the outsize “non-conveyable” section, the home of diabetic dog food, and bio-organic vegetarian dog food, and obese dog food; of 52in TVs, and six-packs of water shipped in from Fiji, and oversized sex toys – the 18in double dong (regular-sized sex toys are shelved in the sortables section).

On my second day, the manager tells us that we alone have picked and packed 155,000 items in the past 24 hours. Tomorrow, 2 December – the busiest online shopping day of the year – that figure will be closer to 450,000. And this is just one of eight warehouses across the country. Amazon took 3.5m orders on a single day last year. Christmas is its Vietnam – a test of its corporate mettle and the kind of challenge that would make even the most experienced distribution supply manager break down and weep. In the past two weeks, it has taken on an extra 15,000 agency staff in Britain. And it expects to double the number of warehouses in Britain in the next three years. It expects to continue the growth that has made it one of the most powerful multinationals on the planet.

Right now, in Swansea, four shifts will be working at least a 50-hour week, hand-picking and packing each item, or, as the Daily Mail put it in an article a few weeks ago, being “Amazon’s elves” in the “21st-century Santa’s grotto”.

If Santa had a track record in paying his temporary elves the minimum wage while pushing them to the limits of the EU working time directive, and sacking them if they take three sick breaks in any three-month period, this would be an apt comparison. It is probably reasonable to assume that tax avoidance is not “constitutionally” a part of the Santa business model as Brad Stone, the author of a new book on Amazon, The Everything Store: Jeff Bezos and the Age of Amazon, tells me it is in Amazon’s case. Neither does Santa attempt to bully his competitors, as Mark Constantine, the founder of Lush cosmetics, who last week took Amazon to the high court, accuses it of doing. Santa was not called before the Commons public accounts committee and called “immoral” by MPs.

For a week, I was an Amazon elf: a temporary worker who got a job through a Swansea employment agency – though it turned out I wasn’t the only journalist who happened upon this idea. Last Monday, BBC’s Panorama aired a programme that featured secret filming from inside the same warehouse. I wonder for a moment if we have committed the ultimate media absurdity and the show’s undercover reporter, Adam Littler, has secretly filmed me while I was secretly interviewing him. He didn’t, but it’s not a coincidence that the heat is on the world’s most successful online business. Because Amazon is the future of shopping; being an Amazon “associate” in an Amazon “fulfilment centre” – take that for doublespeak, Mr Orwell – is the future of work; and Amazon’s payment of minimal tax in any jurisdiction is the future of global business. A future in which multinational corporations wield more power than governments.

But then who hasn’t absent-mindedly clicked at something in an idle moment at work, or while watching telly in your pyjamas, and, in what’s a small miracle of modern life, received a familiar brown cardboard package dropping on to your doormat a day later. Amazon is successful for a reason. It is brilliant at what it does. “It solved these huge challenges,” says Brad Stone. “It mastered the chaos of storing tens of millions of products and figuring out how to get them to people, on time, without fail, and no one else has come even close.” We didn’t just pick and pack more than 155,000 items on my first day. We picked and packed the right items and sent them to the right customers. “We didn’t miss a single order,” our section manager tells us with proper pride.

At the end of my first day, I log into my Amazon account. I’d left my mum’s house outside Cardiff at 6.45am and got in at 7.30pm and I want some Compeed blister plasters for my toes and I can’t do it before work and I can’t do it after work. My finger hovers over the “add to basket” option but, instead, I look at my Amazon history. I made my first purchase, The Rough Guide to Italy, in February 2000 and remember that I’d bought it for an article I wrote on booking a holiday on the internet. It’s so quaint reading it now. It’s from the age before broadband (I itemise my phone bill for the day and it cost me £25.10), when Google was in its infancy. It’s littered with the names of defunct websites (remember Sir Bob Geldof’s deckchair.com, anyone?). It was a frustrating task and of pretty much everything I ordered, only the book turned up on time, as requested.

But then it’s a phenomenal operation. And to work in – and I find it hard to type these words without suffering irony seizure – a “fulfilment centre” is to be a tiny cog in a massive global distribution machine. It’s an industrialised process, on a truly massive scale, made possible by new technology. The place might look like it’s been stocked at 2am by a drunk shelf-filler: a typical shelf might have a set of razor blades, a packet of condoms and a My Little Pony DVD. And yet everything is systemised, because it has to be. It’s what makes it all the more unlikely that at the heart of the operation, shuffling items from stowing to picking to packing to shipping, are those flesh-shaped, not-always-reliable, prone-to-malfunctioning things we know as people.

It’s here, where actual people rub up against the business demands of one of the most sophisticated technology companies on the planet, that things get messy. It’s a system that includes unsystemisable things like hopes and fears and plans for the future and children and lives. And in places of high unemployment and low economic opportunities, places where Amazon deliberately sites its distribution centres – it received £8.8m in grants from the Welsh government for bringing the warehouse here – despair leaks around the edges. At the interview – a form-filling, drug- and alcohol-testing, general-checking-you-can-read session at a local employment agency – we’re shown a video. The process is explained and a selection of people are interviewed. “Like you, I started as an agency worker over Christmas,” says one man in it. “But I quickly got a permanent job and then promoted and now, two years later, I’m an area manager.”

Amazon will be taking people on permanently after Christmas, we’re told, and if you work hard, you can be one of them. In the Swansea/Neath/Port Talbot area, an area still suffering the body blows of Britain’s post-industrial decline, these are powerful words, though it all starts to unravel pretty quickly. There are four agencies who have supplied staff to the warehouse, and their reps work from desks on the warehouse floor. Walking from one training session to another, I ask one of them how many permanent employees work in the warehouse but he mishears me and answers another question entirely: “Well, obviously not everyone will be taken on. Just look at the numbers. To be honest, the agencies have to say that just to get people through the door.”

It does that. It’s what the majority of people in my induction group are after. I train with Pete – not his real name – who has been unemployed for the past three years. Before that, he was a care worker. He lives at the top of the Rhondda Valley, and his partner, Susan (not her real name either), an unemployed IT repair technician, has also just started. It took them more than an hour to get to work. “We had to get the kids up at five,” he says. After a 10½-hour shift, and about another hour’s drive back, before picking up the children from his parents, they got home at 9pm. The next day, they did the same, except Susan twisted her ankle on the first shift. She phones in but she will receive a “point”. If she receives three points, she will be “released”, which is how you get sacked in modern corporatese.

Read the entire article here.

Image: Amazon distribution warehouse in Milton Keynes, UK. Courtesy of Reuters / Dylan Martinez.

To Hype or To Over-Hype, That is the Question

The perennial optimists who form the backbone of the many tech start-ups and venture capital firms that populate California’s Silicon Valley have only one question on their minds: should they hype the future, or over-hype it?

From the NYT:

These are fabulous times in Silicon Valley.

Mere youths, who in another era would just be graduating from college or perhaps wondering what to make of their lives, are turning down deals that would make them and their great-grandchildren wealthy beyond imagining. They are confident that even better deals await.

“Man, it feels more and more like 1999 every day,” tweeted Bill Gurley, one of the valley’s leading venture capitalists. “Risk is being discounted tremendously.”

That was in May, shortly after his firm, Benchmark, led a $13.5 million investment in Snapchat, the disappearing-photo site that has millions of adolescent users but no revenue.

Snapchat, all of two years old, just turned down a multibillion-dollar deal from Facebook and, perhaps, an even bigger deal from Google. On paper, that would mean a fortyfold return on Benchmark’s investment in less than a year.

Benchmark is the venture capital darling of the moment, a backer not only of Snapchat but the photo-sharing app Instagram (sold for $1 billion to Facebook), the ride-sharing service Uber (valued at $3.5 billion) and Twitter ($22 billion), among many others. Ten of its companies have gone public in the last two years, with another half-dozen on the way. Benchmark seems to have a golden touch.

That is generating a huge amount of attention and an undercurrent of concern. In Silicon Valley, it may not be 1999 yet, but that fateful year — a moment when no one thought there was any risk to the wildest idea — can be seen on the horizon, drifting closer.

No one here would really mind another 1999, of course. As a legendary Silicon Valley bumper sticker has it, “Please God, just one more bubble.” But booms are inevitably followed by busts.

“All business activity is driven by either fear or greed, and in Silicon Valley we’re in a cycle where greed may be on the rise,” said Josh Green, a venture capitalist who is chairman of the National Venture Capital Association.

For Benchmark, that means walking a narrow line between hyping the future — second nature to everyone in Silicon Valley — and overhyping it.

Opinions differ here about exactly what stage of exuberance the valley is in. “Everyone feels like the valley has been in a boom cycle for quite some time,” said Jeremy Stoppelman, the chief executive of Yelp. “That makes people nervous.”

John Backus, a founding partner with New Atlantic Ventures, says he believes it is more like 1996: Things are just ramping up.

The numbers back him up. In 2000, just as the dot-com party was ending, a record number of venture capitalists invested a record amount of money in a record number of deals. Entrepreneurs received over $100 billion, a tenfold rise in dollars deployed in just four years.

Much of the money disappeared. So, eventually, did many of the entrepreneurs and most of the venture capitalists.

Recovery was fitful. Even with the stock market soaring since the recession, venture money invested fell in 2012 from 2011, and then fell again in the first half of this year. Predictions of the death of venture capital have been plentiful.

For one thing, it takes a lot less money to start a company now than it did in 1999. When apps like Instagram and Snapchat catch on, they do so in a matter of months. V.C.’s are no longer quite as essential, and they know it. Just last week, Tim Draper, a third-generation venture capitalist with Draper Fisher Jurvetson, said he was skipping the next fund to devote his time to his academy for young entrepreneurs.

But there are signs of life. Funding in the third quarter suddenly popped, up 17 percent from 2012. “I think this is the best time we’ve seen since 1999 to be a venture capitalist,” Mr. Backus said. He expects the returns on venture capital, which have been miserable since the bust, to greatly improve this year.

“Everyone talks about the mega win — who was in Facebook, Twitter, Pinterest,” he said. “But the bread and butter of venture firms is not those multibillion exits but the $200 million deals, and there are a lot of those.” As an example he pointed to GlobalLogic, which operates design and engineering centers. It was acquired in October in a deal that returned $75 million on New Atlantic’s $5 million investment.

Better returns would influence pension funds and other big investors to give more money to the V.C.’s, which would in turn increase the number of deals.

Read the entire article here.

4-D Printing and Self-Assembly

[tube]NV1blyzcdjE[/tube]

With the 3-D printing revolution firmly upon us comes word of the next logical extension — 3-D printing in time, or 4-D printing. This allows for “printing” of components that can self-assemble over time at a macro-scale. We are still a long way from Iain M. Banks’ self-assembling starships, but this heralds a small step in a very important direction.

From Slate:

Read the entire article here.

Video courtesy of MIT Self-Assembly Lab.

What’s Up With Bitcoin?

The digital internet currency Bitcoin seems to be garnering much attention recently from some surprising corners, well beyond speculators and computer geeks. Why?

From the Guardian:

The past weeks have seen a surprising meeting of minds between chairman of the US Federal Reserve Ben Bernanke, the Bank of England, the Olympic-rowing and Zuckerberg-bothering Winklevoss twins, and the US Department of Homeland Security. The connection? All have decided it’s time to take Bitcoin seriously.

Until now, what pundits called, in eye-rolling fashion, “the new peer-to-peer cryptocurrency” had been seen just as a digital form of gold, with all the associated speculation, stake-claiming and even “mining”; perfect for the digital wild west of the internet, but no use for real transactions.

Bitcoins are mined by computers solving fiendishly hard mathematical problems. The “coin” doesn’t exist physically: it is a virtual currency that exists only as a computer file. No one computer controls the currency. A network keeps track of all transactions made using Bitcoins but it doesn’t know what they were used for – just the ID of the computer “wallet” they move from and to.
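
Those “fiendishly hard mathematical problems” are essentially a brute-force search: a miner repeatedly hashes a candidate block of transaction data with a changing nonce until the hash falls below a target set by the network. Here is a minimal toy sketch of that idea in Python; it is illustrative only (real Bitcoin mining double-SHA-256-hashes an 80-byte block header against a vastly harder target, and the sample transaction string is invented):

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 20) -> tuple[int, str]:
    """Toy proof-of-work: find a nonce so that the double SHA-256 hash of
    (block_data + nonce) falls below a target with `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)      # smaller target = harder puzzle
    nonce = 0
    while True:
        payload = f"{block_data}{nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).hexdigest()
        if int(digest, 16) < target:           # puzzle "solved"
            return nonce, digest
        nonce += 1

# Hypothetical transaction data; 20 difficulty bits takes a few seconds on a laptop.
nonce, digest = mine("Alice pays Bob 0.5 BTC")
print(f"nonce={nonce}  hash={digest}")
```

Lowering `difficulty_bits` makes the toy puzzle trivial; the real network retunes its target every 2,016 blocks so that a solution appears roughly every ten minutes regardless of how much computing power joins in.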

Right now the currency is tricky to use, both in terms of the technological nous required to actually acquire Bitcoins, and finding somewhere to spend them. To get them, you have to first set up a wallet, probably online at a site such as Blockchain.info, and then pay someone hard currency to get them to transfer the coins into that wallet.

A Bitcoin payment address is a short string of random characters, and if used carefully, it’s possible to make transactions anonymously. That’s what made it the currency of choice for sites such as the Silk Road and Black Market Reloaded, which let users buy drugs anonymously over the internet. It also makes it very hard to tax transactions, despite the best efforts of countries such as Germany, which in August declared that Bitcoin was “private money” in which transactions should be taxed as normal.

It doesn’t have all the advantages of cash, though the fact you can’t forge it is a definite plus: Bitcoin is “peer-to-peer” and every coin “spent” is authenticated with the network. Thus you can’t spend the same coin in two different places. (But nor can you spend it without an internet connection.) You don’t have to spend whole Bitcoins: each one can be split into 100m pieces (each known as a satoshi), and spent separately.

Although most people have now vaguely heard of Bitcoin, you’re unlikely to find someone outside the tech community who really understands it in detail, let alone accepts it as payment. Nobody knows who invented it; its pseudonymous creator, Satoshi Nakamoto, hasn’t come forward. He or she may not even be Japanese but certainly knows a lot about cryptography, economics and computing.

It was first presented in November 2008 in an academic paper shared with a cryptography mailing list. It caught the attention of that community but took years to take off as a niche transaction tool. The first Bitcoin boom and bust came in 2011, and signalled that it had caught the attention of enough people for real money to get involved – but also posed the question of whether it could ever be more than a novelty.

The algorithm for mining Bitcoins means the number in circulation will never exceed 21m and this limit will be reached in around 2140. Already 57% of all Bitcoins have been created; by 2017, 75% will have been. If you tried to create a Bitcoin in 2141, every other computer on the network would reject it as fake because it would not have been made according to the rules of the currency.
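
Those percentages and the 21m cap fall straight out of the issuance schedule: the block reward started at 50 bitcoins and halves every 210,000 blocks, with a new block arriving roughly every ten minutes. A rough back-of-the-envelope sketch (the constants are the published protocol parameters; block heights are approximate, and the real total is a shade under 21m because rewards are rounded to whole satoshis):

```python
# Rough sketch of Bitcoin's issuance schedule: a 50 BTC reward per block at
# launch, halving every 210,000 blocks, one block roughly every 10 minutes.
HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50.0

def cumulative_supply(block_height: int) -> float:
    """Total bitcoins created after `block_height` blocks (ignoring satoshi rounding)."""
    supply, reward, remaining = 0.0, INITIAL_REWARD, block_height
    while remaining > 0:
        blocks = min(remaining, HALVING_INTERVAL)
        supply += blocks * reward
        remaining -= blocks
        reward /= 2
    return supply

# Asymptotic cap: the halvings form a geometric series summing to ~21m coins.
cap = INITIAL_REWARD * HALVING_INTERVAL * 2
print(f"cap ~ {cap:,.0f} BTC")                                      # ~21,000,000
# In late 2013 the chain was near block 270,000:
print(f"{cumulative_supply(270_000) / cap:.0%} already created")    # ~57%
```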

The number of companies taking Bitcoin payments is increasing from a small base, and a few payment processors such as Atlanta-based Bitpay are making real money from the currency. But it’s difficult to get accurate numbers on conventional transactions, and it still seems that the most popular uses of Bitcoins are buying drugs in the shadier parts of the internet, as people did on the Silk Road website, and buying the currency in the hope that in a few weeks’ time you will be able to sell it at a profit.

This is remarkable because there’s no fundamental reason why Bitcoin should have any value at all. The only reason people are willing to pay money for the currency is because other people are willing to as well. (Try not to think about it too hard.) Now, though, sensible economists are saying that Bitcoin might become part of our future economy. That’s quite a shift from October last year, when the European Central Bank said that Bitcoin was “characteristic of a Ponzi [pyramid] scheme”. This month, the Chicago Federal Reserve commented that the currency was “a remarkable conceptual and technical achievement, which may well be used by existing financial institutions (which could issue their own bitcoins) or even by governments themselves”.

It might not sound thrilling. But for a central banker, that’s like yelling “BITCOIIINNNN!” from the rooftops. And Bernanke, in a carefully dull letter to the US Senate committee on Homeland Security, said that when it came to virtual currencies (read: Bitcoin), the US Federal Reserve had “ongoing initiatives” to “identify additional areas of … concern that require heightened attention by the banking organisations we supervise”.

In other words, Bernanke is ready to make Bitcoin part of US currency regulation – the key step towards legitimacy.

Most reporting about Bitcoin until now has been of its extraordinary price ramp – from a low of $1 in 2011 to more than $900 earlier this month. That massive increase has sparked a classic speculative rush, with more and more people hoping to get a piece of the pie by buying and then selling Bitcoins. Others are investing thousands of pounds in custom “mining rigs”, computers specially built to solve the mathematical problems necessary to confirm a Bitcoin transaction.

But bubbles can burst: in 2011 it went from $33 to $1. The day after hitting that $900 high, Bitcoin’s value halved on MtGox, the biggest exchange. Then it rose again.

Speculative bubbles happen everywhere, though, from stock markets to Beanie Babies. All that’s needed is enough people who think that they are the smart money, and that everyone else is sufficiently stupid to buy from them. But the Bitcoin bubbles tell us as much about the usefulness of the currency itself as the tulip mania of 17th century Holland did about flower-arranging.

History does provide some lessons. While the Dutch were selling single tulip bulbs for 10 times a craftsman’s annual income, the British were panicking about their own economic crisis. The silver coinage that had been the basis of the national economy for centuries was rapidly becoming unfit for purpose: it was constrained in supply and too easy to forge. The economy was taking on the features of a modern capitalist state, and the currency simply couldn’t catch up.

Describing the problem Britain faced then, David Birch, a consultant specialising in electronic transactions, says: “We had a problem in matching the nature of the economy to the nature of the money we used.” Birch has been talking about electronic money for over two decades and is convinced that we find ourselves on the edge of the same shift that occurred 400 years ago.

The cause of that shift is the internet, because even though you might want to, you can’t use cash – untraceable, no-fee-charged cash – online. Existing payment systems such as PayPal and credit cards demand a cut. So for individuals looking for a digital equivalent of cash – no middleman, quick, easy – Bitcoin looks pretty good.

In 1613, as people looked for a replacement for silver, Birch says, “we might have been saying ‘the idea of tulip bulbs as an asset class looks pretty good, but this central bank nonsense will never catch on.’ We knew we needed a change, but we couldn’t tell which made sense.” Back then, the currency crisis was solved with the introduction first of Isaac Newton’s Royal Mint (“official” silver and gold) and later with the creation of the Bank of England (“official” paper money that could in theory be swapped for official silver or gold).

And now? Bitcoin offers unprecedented flexibility compared with what has gone before. “Some people in the mid-90s asked: ‘Why do we need the web when we have AOL and CompuServe?'” says Mike Hearn, who works on the programs that underpin Bitcoin. “And so now people ask the same of Bitcoin. The web came to dominate because it was flexible and open, so anyone could take part, innovate and build interesting applications like YouTube, Facebook or Wikipedia, none of which would have ever happened on the AOL platform. I think the same will be true of Bitcoin.”

For a small (but vocal) group in the US, Bitcoin represents the next best alternative to the gold standard, the 19th-century conception that money ought to be backed by precious metals rather than government printing presses and promises. This love of “hard money” is baked into Bitcoin itself, and is the reason why the owners who set computers to do the maths required to make the currency work are known as “miners”, and is why the total supply of Bitcoin is capped.

And for Tyler and Cameron Winklevoss, the twins who sued Mark Zuckerberg (claiming he stole their idea for Facebook; the case was settled out of court), it’s a handy vehicle for speculation. The two of them are setting up the “Winklevoss Bitcoin Trust”, letting conventional investors gamble on the price of the currency.

Some of the hurdles left between Bitcoin and widespread adoption can be fixed. But until and unless Bitcoin develops a fully fledged banking system, some things that we take for granted with conventional money won’t work.

Others are intrinsic to the currency. At some point in the early 22nd century, the last Bitcoin will be generated. Long before that, the creation of new coins will have dropped to near-zero. And through the next 100 or so years, it will follow an economic path laid out by “Nakamoto” in 2009 – a path that rejects the consensus view of modern economics that management by a central bank is beneficial. For some, that means Bitcoin can never achieve ubiquity. “Economies perform better when they have managed monetary policies,” the Bank of England’s chief cashier, Chris Salmon, said at an event to discuss Bitcoin last week. “As a result, it will never be more than an alternative [to state-backed money].” To macroeconomists, Bitcoin isn’t scary because it enables crime, or eases tax dodging. It’s scary because a world where it’s used for all transactions is one where the ability of a central bank to guide the economy is destroyed, by design.

Read the entire article here.

Image courtesy of Google Search.

Good, Old-Fashioned Spying

The spied-upon — and that’s most of us — must wonder how the spymasters of the NSA eavesdrop on their electronic communications. After all, we are led to believe that the agency with a voracious appetite for our personal data — phone records, financial transactions, travel reservations, texts and email conversations — gathered it all without permission. And, apparently, companies such as Google, Yahoo and AT&T, with vast data centers and sprawling interconnections between them, did not collude with the government.

So, there is growing speculation that the agency tapped into the physical cables that make up the very backbone of the Internet. It brings a whole new meaning to the phrase World Wide Web.

From the NYT:

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot — the fiber-optic cables that connect data centers around the world that are owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of the centralization and its value for surveillance was revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems that were able to filter data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

Read the entire article here.

Image: fiber-optic cables. Courtesy of Daily Mail.

Nooooooooooooooooooo!

The Federal Aviation Administration (FAA) recently relaxed rules governing the use of electronics onboard aircraft. We can now use our growing collection of electronic gizmos during take-off and landing, not just during the cruise portion of the flight. But during flight, said gizmos still need to be set to “airplane mode”, which shuts off a device’s wireless transceiver.

However, the FCC is considering relaxing the rule even further, allowing cell phone use during flight. Thus, many flyers will soon have yet another reason to hate airlines and hate flying. We’ll be able to add loud cell phone conversations to the lengthy list of aviation pain inducers: cramped seating, fidgety kids, screaming babies, business bores, snorers, Microsoft PowerPoint, body odor, non-existent or bad food, and, worst of all, travelers who still can’t figure out how to buckle the seat belt.

FCC, please don’t do it!

From WSJ:

If cellphone calling comes to airplanes, it is likely to be the last call for manners.

The prospect is still down the road a bit, and a good percentage of the population can be counted on to be polite. But etiquette experts who already are fuming over the proliferation of digital rudeness aren’t optimistic.

Jodi R.R. Smith, owner of Mannersmith Etiquette Consulting in Massachusetts, says the biggest problem is forced proximity. It is hard to be discreet when just inches separate passengers. And it isn’t possible to escape.

“If I’m on an airplane, and my seatmate starts making a phone call, there’s not a lot of places I can go,” she says.

Should the Federal Communications Commission allow cellphone calls on airplanes above 10,000 feet, and if the airlines get on board, one solution would be to create yakking and non-yakking sections of aircraft, or designate flights for either the chatty or the taciturn, as airlines used to do for smoking.

Barring such plans, there are four things you should consider before placing a phone call on an airplane, Ms. Smith says:

• Will you disturb those around you?

• Will you be ignoring companions you should be paying attention to?

• Will you be discussing confidential topics?

• Is it an emergency?

The answer to the last question needs to be “Yes,” she says, and even then, make the call brief.

“I find that the vast majority of people will get it,” she says. “It’s just the few that don’t who will make life uncomfortable for the rest of us.”

FCC Chairman Tom Wheeler said last week that there is no technical reason to maintain what has been a long-standing ban.

Airlines are approaching the issue cautiously because many customers have expressed strong feelings against cellphone use.

“I believe fistfights at 39,000 feet would become commonplace,” says Alan Smith, a frequent flier from El Dorado Hills, Calif. “I would be terrified that some very large fellow, after a few drinks, would beat up a passenger annoying him by using the phone.”

Minneapolis etiquette consultant Gretchen Ditto says cellphone use likely will become commonplace on planes since our expectations have changed about when people should be reachable.

Passengers will feel obliged to answer calls, she says. “It’s going to become more prevalent for returning phone calls, and it’s going to be more annoying to everybody.”

Electronic devices are taking over our lives, says Arden Clise, an etiquette expert in Seattle. We text during romantic dinners, answer email during meetings and shop online during Thanksgiving. Making a call on a plane is only marginally more rude.

“Are we saying that our tools are more important than the people in front of us?” she asks. Even if you don’t know your in-flight neighbor, ask yourself, “Do I want to be that annoying person?” Ms. Clise says.

If airlines decide to allow calls, punching someone’s lights out clearly wouldn’t be the best way to get some peace, says New Jersey etiquette consultant Mary Harris. But tensions often run high during flights, and fights could happen.

If someone is bothering you with a phone call, Ms. Harris advises asking politely for the person to end the conversation.

If that doesn’t work, you’re stuck.

In-flight cellphone calls have been possible in Europe for several years. But U.K. etiquette expert William Hanson says they haven’t caught on.

If you need to make a call, he advises leaving your seat for the area near the lavatory or door. If it is night and the lights are dimmed, “you should not make a call at your seat,” he says.

Calls used to be possible on U.S. flights using Airfone units installed on the planes, but the technology never became popular. When people made calls, they were usually brief, in part because they cost $2 a minute, says Tony Lent, a telecommunication consultant in Detroit who worked on Airfone products in the 1980s.

The situation might be different today. “People were much more prudent about using their mobile phones,” Mr. Lent says. “Nowadays, those social mores are gone.”

Several years ago, when the government considered lifting its cellphone ban, U.S. Rep. Tom Petri co-sponsored the Halting Airplane Noise to Give Us Peace Act of 2008. The bill would have allowed texting and other data applications but banned voice calls. He was motivated by “a sense of courtesy,” he says. The bill was never brought to a vote.

Mr. Petri says he will try again if the FCC allows calls this time around. What if his bill doesn’t pass? “I suppose you can get earplugs,” he says.

Read the entire article here.

Image: Smartphone user. Courtesy of CNN / Money.

Predicting the Future is Highly Overrated

Contrary to what political pundits, stock market talking heads and your local strip mall psychic would have you believe, no one, yet, can predict the future. And it is no more possible for the current generation of tech wunderkinds, Silicon Valley venture fund investors, or their armies of analysts.

From WSJ:

I believe the children aren’t our future. Teach them well, but when it comes to determining the next big thing in tech, let’s not fall victim to the ridiculous idea that they lead the way.

Yes, I’m talking about Snapchat.

Last week my colleagues reported that Facebook recently offered $3 billion to acquire the company behind the hyper-popular messaging app. Stunningly, Evan Spiegel, Snapchat’s 23-year-old co-founder and CEO, rebuffed the offer.

If you’ve never used Snapchat—and I implore you to try it, because Snapchat can be pretty fun if you’re into that sort of thing, which I’m not, because I’m grumpy and old and I have two small kids and no time for fun, which I think will be evident from the rest of this column, and also would you please get off my lawn?—there are a few things you should know about the app.

First, Snapchat’s main selling point is ephemerality. When I send you a photo and caption using the app, I can select how long I want you to be able to view the picture. After you look at it for the specified time—1 to 10 seconds—the photo and all trace of our having chatted disappear from your phone. (Or, at least, they are supposed to. Snapchat’s security measures have frequently been defeated.)

Second, and relatedly, Snapchat is used primarily by teens and people in college. This explains much of Silicon Valley’s obsession with the company.

The app doesn’t make any money—its executives have barely even mentioned any desire to make money—but in the ad-supported tech industry, youth is the next best thing to revenue. For tech execs, youngsters are the canaries in the gold mine.

That logic follows a widely shared cultural belief: We all tend to assume that young people are on the technological vanguard, that they somehow have got an inside scoop on what’s next. If today’s kids are Snapchatting instead of Facebooking, the thinking goes, tomorrow we’ll all be Snapchatting, too, because tech habits, like hairstyles, flow only one way: young to old.

There is only one problem with elevating young people’s tastes this way: Kids are often wrong. There is little evidence to support the idea that the youth have any closer insight on the future than the rest of us do. Sometimes they are first to flock to technologies that turn out to be huge; other times, the young pick products and services that go nowhere. They can even be late adopters, embracing innovations that older people understood first. To butcher another song: The kids could be all wrong.

Here’s a thought exercise. How many of the products and services that you use every day were created or first used primarily by people under 25?

A few will spring to mind, Facebook the biggest of all. Yet the vast majority of your most-used things weren’t initially popular among teens. The iPhone, the iPad, the iPod, the Google search engine, YouTube, Twitter, Gmail, Google Maps, Pinterest, LinkedIn, the Kindle, blogs, the personal computer, none of these were initially targeted to, or primarily used by, high-school or college-age kids. Indeed, many of the most popular tech products and services were burdened by factors that were actively off-putting to kids, such as high prices, an emphasis on productivity and a distinct lack of fun. Yet they succeeded anyway.

Even the exceptions suggest we should be wary of catering to youth. It is true that in 2004, Mark Zuckerberg designed Facebook for his Harvard classmates, and the social network was first made available only to college students. At the time, though, Facebook looked vastly more “grown up” than its competitors. The site prevented you from uglifying your page with your own design elements, something you could do with Myspace, which, incidentally, was the reigning social network among the pubescent set.

Mr. Zuckerberg deliberately avoided catering to this group. He often told his co-founders that he wanted Facebook to be useful, not cool. That is what makes the persistent worry about Facebook’s supposedly declining cachet among teens so bizarre; Facebook has never really been cool, but neither are a lot of other billion-dollar companies. Just ask Myspace how far being cool can get you.

Incidentally, though 20-something tech founders like Mr. Zuckerberg, Steve Jobs and Bill Gates get a lot of ink, they are unusual. A recent study by the VC firm Cowboy Ventures found that among tech startups that have earned a valuation of at least $1 billion since 2003, the average founder’s age was 34. “The twentysomething inexperienced founder is an outlier, not the norm,” wrote Cowboy’s founder Aileen Lee.

If you think about it for a second, the fact that young people aren’t especially reliable predictors of tech trends shouldn’t come as a surprise. Sure, youth is associated with cultural flexibility, a willingness to try new things that isn’t necessarily present in older folk. But there are other, less salutary hallmarks of youth, including capriciousness, immaturity, and a deference to peer pressure even at the cost of common sense. This is why high school is such fertile ground for fads. And it’s why, in other cultural areas, we don’t put much stock in teens’ choices. No one who’s older than 18, for instance, believes One Direction is the future of music.

That brings us back to Snapchat. Is the app just a youthful fad, just another boy band, or is it something more permanent; is it the Beatles?

To figure this out, we would need to know why kids are using it. Are they reaching for Snapchat for reasons that would resonate with older people—because, like the rest of us, they’ve grown wary of the public-sharing culture promoted by Facebook and Twitter? Or are they using it for less universal reasons, because they want to evade parental snooping, send risqué photos, or avoid feeling left out of a fad everyone else has adopted?

Read the entire article here.

Image: Snapchat logo. Courtesy of Snapchat / Wikipedia.

Retailing: An Engineering Problem

Traditional retailers look at retailing primarily as a problem of marketing, customer acquisition and relationship-building. For Amazon, it’s more of an engineering and IT problem, with solutions to be found in innovation and optimization.

From Technology Review:

Why do some stores succeed while others fail? Retailers constantly struggle with this question, battling one another in ways that change with each generation. In the late 1800s, architects ruled. Successful merchants like Marshall Field created palaces of commerce that were so gorgeous shoppers rushed to come inside. In the early 1900s, mail order became the “killer app,” with Sears Roebuck leading the way. Toward the end of the 20th century, ultra-efficient suburban discounters like Target and Walmart conquered all.

Now the tussles are fiercest in online retailing, where it’s hard to tell if anyone is winning. Retailers as big as Walmart and as small as Tweezerman.com all maintain their own websites, catering to an explosion of customer demand. Retail e-commerce sales expanded 15 percent in the U.S. in 2012—seven times as fast as traditional retail. But price competition is relentless, and profit margins are thin to nonexistent. It’s easy to regard this $186 billion market as a poisoned prize: too big to ignore, too treacherous to pursue.

Even the most successful online retailer, Amazon.com, has a business model that leaves many people scratching their heads. Amazon is on track to ring up $75 billion in worldwide sales this year. Yet it often operates in the red; last quarter, Amazon posted a $41 million loss. Amazon’s founder and chief executive officer, Jeff Bezos, is indifferent to short-term earnings, having once quipped that when the company achieved profitability for a brief stretch in 1995, “it was probably a mistake.”

Look more closely at Bezos’s company, though, and its strategy becomes clear. Amazon is constantly plowing cash back into its business. Its secretive advanced-research division, Lab 126, works on next-generation Kindles and other mobile devices. More broadly, Amazon spends heavily to create the most advanced warehouses, the smoothest customer-service channels, and other features that help it grab an ever-larger share of the market. As former Amazon manager Eugene Wei wrote in a recent blog post, “Amazon’s core business model does generate a profit with most every transaction … The reason it isn’t showing a profit is because it’s undertaken a massive investment to support an even larger sales base.”

Much of that investment goes straight into technology. To Amazon, retailing looks like a giant engineering problem. Algorithms define everything from the best way to arrange a digital storefront to the optimal way of shipping a package. Other big retailers spend heavily on advertising and hire a few hundred engineers to keep systems running. Amazon prefers a puny ad budget and a payroll packed with thousands of engineering graduates from the likes of MIT, Carnegie Mellon, and Caltech.

Other big merchants are getting the message. Walmart, the world’s largest retailer, two years ago opened an R&D center in Silicon Valley where it develops its own search engines and looks for startups to buy. But competing on Amazon’s terms doesn’t stop with putting up a digital storefront or creating a mobile app. Walmart has gone as far as admitting that it may have to rethink what its stores are for. To equal Amazon’s flawless delivery, this year it even floated the idea of recruiting shoppers out of its aisles to play deliveryman, whisking goods to customers who’ve ordered online.

Amazon is a tech innovator by necessity, too. The company lacks three of conventional retailing’s most basic elements: a showroom where customers can touch the wares; on-the-spot salespeople who can woo shoppers; and the means for customers to take possession of their goods the instant a sale is complete. In one sense, everything that Amazon’s engineers create is meant to make these fundamental deficits vanish from sight.

Amazon’s cunning can be seen in the company’s growing patent portfolio. Since 1994, Amazon.com and a subsidiary, Amazon Technologies, have won 1,263 patents. (By contrast, Walmart has just 53.) Each Amazon invention is meant to make shopping on the site a little easier, a little more seductive, or to trim away costs. Consider U.S. Patent No. 8,261,983, on “generating customized packaging,” which came into being in late 2012.

“We constantly try to drive down the percentage of air that goes into a shipment,” explains Dave Clark, the Amazon vice president who oversees the company’s nearly 100 warehouses, known as fulfillment centers. The idea of shipping goods in a needlessly bulky box (and paying a few extra cents to United Parcel Service or other carriers) makes him shudder. Ship nearly a billion packages a year, and those pennies add up. Amazon over the years has created more than 40 sizes of boxes – but even that isn’t enough. That’s the glory of Amazon’s packaging patent: when a customer’s odd pairing of items creates a one-of-a-kind shipment, Amazon now has systems that will compute the best way to pack that order and create a perfect box for it within 30 minutes.
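
Amazon’s actual packing system is proprietary, but the underlying calculation is easy to illustrate: given the dimensions of the items in an order, work out a box just big enough to hold them and compare the wasted “air” against the nearest stock carton. The sketch below is hypothetical: the stacking heuristic, item sizes and box sizes are invented for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Item:
    length: float  # all dimensions in cm
    width: float
    height: float

    @property
    def volume(self) -> float:
        return self.length * self.width * self.height

def custom_box(items: list[Item]) -> tuple[float, float, float]:
    """Naive heuristic: stack items vertically, so the box footprint is the
    largest item footprint and the box height is the sum of item heights."""
    length = max(i.length for i in items)
    width = max(i.width for i in items)
    height = sum(i.height for i in items)
    return length, width, height

def air_fraction(box: tuple[float, float, float], items: list[Item]) -> float:
    """Share of the box volume that is empty space ('air')."""
    box_volume = box[0] * box[1] * box[2]
    return 1 - sum(i.volume for i in items) / box_volume

order = [Item(30, 20, 4), Item(15, 10, 12)]   # e.g. a book and a mug
stock_box = (40, 30, 20)                      # nearest standard carton
print(f"stock box air:  {air_fraction(stock_box, order):.0%}")
print(f"custom box air: {air_fraction(custom_box(order), order):.0%}")
```

A real packing engine would consider item orientations, padding and carton strength, but even this crude version shows how a made-to-measure box cuts the shipped air, and the pennies, per order.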

For thousands of online merchants, it’s easier to live within Amazon’s ecosystem than to compete. So small retailers such as EasyLunchboxes.com have moved their inventory into Amazon’s warehouses, where they pay a commission on each sale for shipping and other services. That is becoming a highly lucrative business for Amazon, says Goldman Sachs analyst Heath Terry. He predicts Amazon will reap $3.5 billion in cash flow from third-party shipping in 2014, creating a very profitable side business that he values at $38 billion—about 20 percent of the company’s overall stock market value.

Jousting directly with Amazon is tougher. Researchers at Internet Retailer calculate that Amazon’s revenue exceeds that of its next 12 competitors combined. In a regulatory filing earlier this year, Target—the third-largest retailer in the U.S.—conceded that its “digital sales represented an immaterial amount of total sales.” For other online entrants, the most prudent strategies generally involve focusing on areas that the big guy hasn’t conquered yet, such as selling services, online “flash sales” that snare impulse buyers who can’t pass up a deal, or particularly challenging categories such as groceries. Yet many, if not most, of these upstarts are losing money.

Read the entire article here.

Image: Amazon fulfillment center, Scotland. Courtesy of Amazon / Wired.

Let the Sunshine In

An ingeniously simple and elegant idea brings sunshine to a small town in Norway.

From the Guardian:

On the market square in Rjukan stands a statue of the town’s founder, a noted Norwegian engineer and industrialist called Sam Eyde, sporting a particularly fine moustache. One hand thrust in trouser pocket, the other grasping a tightly rolled drawing, the great man stares northwards across the square at an almost sheer mountainside in front of him.

Behind him, to the south, rises the equally sheer 1,800-metre peak known as Gaustatoppen. Between the mountains, strung out along the narrow Vestfjord valley, lies the small but once mighty town that Eyde built in the early years of the last century, to house the workers for his factories.

He was plainly a smart guy, Eyde. He harnessed the power of the 100-metre Rjukanfossen waterfall to generate hydro-electricity in what was, at the time, the world’s biggest power plant. He pioneered new technologies – one of which bears his name – to produce saltpetre by oxidising nitrogen from air, and made industrial quantities of hydrogen by water electrolysis.

But there was one thing he couldn’t do: change the elevation of the sun. Deep in its east-west valley, surrounded by high mountains, Rjukan and its 3,400 inhabitants are in shadow for half the year. During the day, from late September to mid-March, the town, three hours’ north-west of Oslo, is not dark (well, it is almost, in December and January, but then so is most of Norway), but it’s certainly not bright either. A bit … flat. A bit subdued, a bit muted, a bit mono.

Since last week, however, Eyde’s statue has gazed out upon a sight that even the eminent engineer might have found startling. High on the mountain opposite, 450 metres above the town, three large, solar-powered, computer-controlled mirrors steadily track the movement of the sun across the sky, reflecting its rays down on to the square and bathing it in bright sunlight. Rjukan – or at least, a small but vital part of Rjukan – is no longer stuck where the sun don’t shine.

“It’s the sun!” grins Ingrid Sparbo, disbelievingly, lifting her face to the light and closing her eyes against the glare. A retired secretary, Sparbo has lived all her life in Rjukan and says people “do sort of get used to the shade. You end up not thinking about it, really. But this … This is so warming. Not just physically, but mentally. It’s mentally warming.”

Two young mothers wheel their children into the square, turn, and briefly bask: a quick hit. On a freezing day, an elderly couple sit wide-eyed on one of the half-dozen newly installed benches, smiling at the warmth on their faces. Children beam. Lots of people take photographs. A shop assistant, Silje Johansen, says it’s “awesome. Just awesome.”

Pushing his child’s buggy, electrical engineer Eivind Toreid is more cautious. “It’s a funny thing,” he says. “Not real sunlight, but very like it. Like a spotlight. I’ll go if I’m free and in town, yes. Especially in autumn and in the weeks before the sun comes back. Those are the worst: you look just a short way up the mountainside and the sun is right there, so close you can almost touch it. But not here.”

Pensioners Valborg and Eigil Lima have driven from Stavanger – five long hours on the road – specially to see it. Heidi Fieldheim, who lives in Oslo now but spent six years in Rjukan with her husband, a local man, says she heard all about it on the radio. “But it’s far more than I expected,” she says. “This will bring much happiness.”

Across the road in the Nyetider cafe, sporting – by happy coincidence – a particularly fine set of mutton chops, sits the man responsible for this unexpected access to happiness. Martin Andersen is a 40-year-old artist and lifeguard at the municipal baths who, after spells in Berlin, Paris, Mali and Oslo, pitched up in Rjukan in the summer of 2001.

The first inkling of an artwork Andersen dubbed the Solspeil, or sun mirror, came to him as the month of September began to fade: “Every day, we would take our young child for a walk in the buggy,” he says, “and every day I realised we were having to go a little further down the valley to find the sun.” By 28 September, Andersen realised, the sun completely disappears from Rjukan’s market square. The occasion of its annual reappearance, lighting up the bridge across the river by the old fire station, is a date indelibly engraved in the minds of all Rjukan residents: 12 March.

And throughout the seemingly endless intervening months, Andersen says: “We’d look up and see blue sky above, and the sun high on the mountain slopes, but the only way we could get to it was to go out of town. The brighter the day, the darker it was down here. And it’s sad, a town that people have to leave in order to feel the sun.”

A hundred years ago, Eyde had already grasped the gravity of the problem. Researching his own plan, Andersen discovered that, as early as 1913, Eyde was considering a suggestion by one of his factory workers for a system of mountain-top mirrors to redirect sunlight into the valley below.

The industrialist eventually abandoned the plan for want of adequate technology, but soon afterwards his company, Norsk Hydro, paid for the construction of a cable car to carry the long-suffering townsfolk, for a modest sum, nearly 500m higher up the mountain and into the sunlight. (Built in 1928, the Krossobanen is still running, incidentally; £10 for the return trip. The view is majestic and the coffee at the top excellent. A brass plaque in the ticket office declares the facility a gift from the company “to the people of Rjukan, because for six months of the year, the sun does not shine in the bottom of the valley”.)

Andersen unearthed a partially covered sports stadium in Arizona that was successfully using small mirrors to keep its grass growing. He learned that in the Middle East and other sun-baked regions of the world, vast banks of hi-tech tracking mirrors called heliostats concentrate sufficient reflected sunlight to heat steam turbines and drive whole power plants. He persuaded the town hall to come up with the cash to allow him to develop his project further. He contacted an expert in the field, Jonny Nersveen, who did the maths and told him it could probably work. He visited Viganella, an Italian village that installed a similar sun mirror in 2006.

And 12 years after he first dreamed of his Solspeil, a German company specialising in so-called CSP – concentrated solar power – helicoptered in the three 17 sq m glass mirrors that now stand high above the market square in Rjukan. “It took,” he says, “a bit longer than we’d imagined.” First, the municipality wasn’t used to dealing with this kind of project: “There’s no rubber stamp for a sun mirror.” But Andersen also wanted to be sure it was right – that Rjukan’s sun mirror would do what it was intended to do.

Viganella’s single polished steel mirror, he says, lights a much larger area, but with a far weaker, more diffuse light. “I wanted a smaller, concentrated patch of sunlight: a special sunlit spot in the middle of town where people could come for a quick five minutes in the sun.” The result, you would have to say, is pretty much exactly that: bordered on one side by the library and town hall, and on the other by the tourist office, the 600 sq m of Rjukan’s market square, to be comprehensively remodelled next year in celebration, now bathes in a focused beam of bright sunlight fully 80-90% as intense as the original.

Their efforts monitored by webcams up on the mountain and down in the square, their movement dictated by computer in a Bavarian town outside Munich, the heliostats generate the solar power they need to gradually tilt and rotate, following the sun on its brief winter dash across the sky.
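
The geometry behind that tracking is ordinary reflection: to keep the beam on a fixed spot, each mirror is oriented so that its surface normal bisects the angle between the direction to the sun and the direction to the target. A small illustrative sketch follows (the coordinates are made up, and a real heliostat controller would also compute the sun’s position from the time of day and the site’s latitude and longitude):

```python
import numpy as np

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def mirror_normal(to_sun: np.ndarray, to_target: np.ndarray) -> np.ndarray:
    """A flat mirror reflects sunlight onto the target when its normal
    bisects the angle between the two directions (both pointing away from the mirror)."""
    return unit(unit(to_sun) + unit(to_target))

# Hypothetical directions from a mirror 450 m up the north-side mountain
# (x = east, y = north, z = up): sun low in the southern sky, square below.
to_sun = np.array([0.0, -0.85, 0.53])      # from mirror towards the sun
to_target = np.array([0.0, -0.40, -0.90])  # from mirror towards the market square
n = mirror_normal(to_sun, to_target)

# Sanity check: reflecting the incoming sunlight about the normal
# should point the beam at the target.
incoming = -unit(to_sun)                            # light travels sun -> mirror
reflected = incoming - 2 * np.dot(incoming, n) * n  # standard reflection formula
print(np.allclose(reflected, unit(to_target)))      # True
```

Recomputing that normal every few seconds as the sun moves is exactly what the Bavarian control computer does for Rjukan’s three mirrors.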

It really works. Even the objectors – and there were, in town, plenty of them; petitions and letter-writing campaigns and a Facebook page organised against what a large number of locals saw initially as a vanity project and, above all, a criminal waste of money – now seem largely won over.

Read the entire article here.

Image: Light reflected by the mirrors of Rjukan, Norway. Courtesy of David Levene / Guardian.

Masters of the Universe: Silicon Valley Edition

As we all (should) know, the “real” masters of the universe (MOTU) center on He-Man and his supporting cast of characters, courtesy of toy maker Mattel. In the 1980s, we also found masters of the universe on Wall Street — bright young MBAs leading the charge towards the untold wealth (and eventual destruction) mined by investment banks. Ironically, many of those east coast MOTU have since disappeared from public view following the financial meltdown that many of them helped engineer. Now we seem to be at risk from another group of arrogant MOTU: this time, a select group of high-tech entrepreneurs from Silicon Valley.

From the WSJ:

At a startup conference in the San Francisco Bay area last month, a brash and brilliant young entrepreneur named Balaji Srinivasan took the stage to lay out a case for Silicon Valley’s independence.

According to Mr. Srinivasan, who co-founded a successful genetics startup and is now a popular lecturer at Stanford University, the tech industry is under siege from Wall Street, Washington and Hollywood, which he says he believes are harboring resentment toward Silicon Valley’s efforts to usurp their cultural and economic power.


On its surface, Mr. Srinivasan’s talk, called “Silicon Valley’s Ultimate Exit,” sounded like a battle cry of the libertarian, anti-regulatory sensibility long espoused by some of the tech industry’s leading thinkers. After arguing that the rest of the country wants to put a stop to the Valley’s rise, Mr. Srinivasan floated a plan for techies to build an “opt-in society, outside the U.S., run by technology.”

His idea seemed a more expansive version of Google Chief Executive Larry Page’s call for setting aside “a piece of the world” to try out controversial new technologies, and investor Peter Thiel’s “Seastead” movement, which aims to launch tech-utopian island nations.

But there was something more significant about Mr. Srinivasan’s talk than simply a rehash of Silicon Valley’s grievances. It was one of several recent episodes in which tech stars have sought to declare the Valley the nation’s leading center of power and to dismiss non-techies as unimportant to the nation’s future.

For instance, on “This Week in Start-Ups,” a popular tech podcast, the venture capitalist Chamath Palihapitiya recently argued that “it’s becoming excruciatingly, obviously clear to everyone else that where value is created is no longer in New York; it’s no longer in Washington; it’s no longer in L.A.; it’s in San Francisco and the Bay Area.”

This is Silicon Valley’s superiority complex, and it sure is an ugly thing to behold. As the tech industry has shaken off the memories of the last dot-com bust, its luminaries have become increasingly confident about their capacity to shape the future. And now they seem to have lost all humility about their place in the world.

Sure, they’re correct that whether you measure success financially or culturally, Silicon Valley now seems to be doing better than just about anywhere else. But there is a suggestion bubbling beneath the surface of every San Francisco networking salon that the industry is unstoppable, and that its very success renders it immune to legitimate criticism.

This is a dangerous idea. For Silicon Valley’s own sake, the triumphalist tone needs to be kept in check. Everyone knows that Silicon Valley aims to take over the world. But if they want to succeed, the Valley’s inhabitants would be wise to at least pretend to be more humble in their approach.

I tried to suggest this to Mr. Srinivasan when I met him at a Palo Alto, Calif., cafe a week after his incendiary talk. We spoke for two hours, and I found him to be disarming and charming.

He has a quick, capacious mind, the sort that flits effortlessly from discussions of genetics to economics to politics to history. (He is the kind of person who will refer to the Treaty of Westphalia in conversation.)

Contrary to press reports, Mr. Srinivasan says he wasn’t advocating Silicon Valley’s “secession.” And, in fact, he hadn’t used that word. Instead he was advocating a “peaceful exit,” something similar to what his father did when he emigrated from India to the U.S. in the past century. But when I asked him what harms techies faced that might prompt such a drastic response, he couldn’t offer much evidence.

He pointed to a few headlines in the national press warning that robots might be taking over people’s jobs. These, he said, were evidence of the rising resentment that technology will foster as it alters conditions across the country and why Silicon Valley needs to keep an escape hatch open.

But I found Mr. Srinivasan’s thesis to be naive. According to the industry’s own hype, technologies like robotics, artificial intelligence, data mining and ubiquitous networking are poised to usher in profound changes in how we all work and live. I believe, as Mr. Srinivasan argues, that many of these changes will eventually improve human welfare.

But in the short run, these technologies could cause enormous economic and social hardships for lots of people. And it is bizarre to expect, as Mr. Srinivasan and other techies seem to, that those who are affected wouldn’t criticize or move to stop the industry pushing them.

Tech leaders have a choice in how to deal with the dislocations their innovations cause. They can empathize and even work with stalwarts of the old economy to reduce the shock of new invention in sectors such as Hollywood, the news and publishing industries, the government, and finance—areas that Mr. Srinivasan collectively labels “the paper belt.”

They can continue to disrupt many of these institutions in the marketplace without making preening claims about the superiority of tech culture. (Apple’s executives rarely shill for the Valley, but still sometimes manage to change the world).

Or, tech leaders can adopt an oppositional tone: If you don’t recognize our superiority and the rightness of our ways, we’ll take our ball and go home.

Read the entire article here.

Image courtesy of Silicon Valley.

Zombie Technologies

Next time Halloween festivities roll around, consider dressing up as a fax machine — one of several technologies that seem unwilling to die.

From Wired:

One of the things we love about technology is how fast it moves. New products and new services are solving our problems all the time, improving our connectivity and user experience on a nigh-daily basis.

But underneath sit the technologies that just keep hanging on. Every flesh wound, every injury, every rupture of their carcass levied by a new device or new method of doing things doesn’t merit even so much as a flinch from them. They keep moving, slowly but surely, eating away at our livelihoods. They are the undead of the technology world, and they’re coming for your brains.

Below, you’ll find some of technology’s more persistent walkers—every time we seem to kill them off, more hordes still clinging to their past relevancy lumber up to distract you. It’s about time we lodged an axe in their skulls.

Oddly specific yet totally unhelpful error codes

It’s common when you’re troubleshooting hardware and software—something, somewhere throws an error code that pairs an incredibly specific alphanumerical code (“0x000000F4”) with a completely generic and unhelpful message like “an unknown error occurred” or “a problem has been detected.”

Back in computing’s early days, the desire to use these codes instead of providing detailed troubleshooting guides made sense—storage space was at a premium, Internet connectivity could not be assumed, and it was a safe bet that the software in question came with some tome-like manual to assist people in the event of problems. Now, with connectivity virtually omnipresent and storage space a non-issue, it’s not clear why codes like these don’t link to more helpful information in some way.

All too often, you’re left to take the law into your own hands. Armed with your error code, you head over to your search engine of choice and punch it in. At this point, one of two things can happen, and I’m not sure which is more infuriating: you either find an expanded, totally helpful explanation of the code and how to fix it on the official support website (could you really not have built that into the software itself?), or, alternatively, you find a bunch of desperate, inconclusive forum posts that offer no additional insight into the problem (though they do offer insight into the absurdity of the human condition). There has to be a better way.
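There is, in principle, a better way, and it is not complicated. The sketch below is purely hypothetical (the codes, messages and support URL are made up), but it shows how little it would take for software to pair an opaque code with a one-line explanation and a link, falling back to the generic message only when the code is genuinely unknown.

```python
# Hypothetical example: map opaque error codes to something a human can act on.
ERROR_HELP = {
    "0x000000F4": ("A critical system process terminated unexpectedly.",
                   "https://support.example.com/errors/0x000000F4"),
    "0x0000007B": ("The boot device could not be accessed.",
                   "https://support.example.com/errors/0x0000007B"),
}

def explain(code: str) -> str:
    if code in ERROR_HELP:
        message, url = ERROR_HELP[code]
        return f"Error {code}: {message} See {url}"
    return f"Error {code}: an unknown error occurred."  # the status quo, as a last resort

print(explain("0x000000F4"))
print(explain("0xDEADBEEF"))
```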

Copper landlines

I’ve been through the Northeast blackout, the 9-11 attacks, and Hurricane Sandy, all of which took out cell service at the same time family and friends were most anxious to get in touch. So I’m a prime candidate for maintaining a landline, which carries enough power to run phones, often provided by a facility with a backup generator. And, in fact, I’ve tried to retain one. But corporate indifference has turned copper wiring into the technology of the living dead.

Verizon really wants you to have two things: cellular service and FiOS. Except it doesn’t actually want to give you FiOS—the company has stopped expanding its fiber footprint, and it’s moving with the speed of a glacier to hook up neighborhoods that are FiOS accessible. That has left Verizon in a position where the company will offer you cell service, but, if you don’t want that, it will stick you with a technology it no longer wants to support: service over copper wires.

This was made explicit in the wake of Sandy when a shore community that had seen its wires washed out was offered cellular service as a replacement. When the community demanded wires, Verizon backed down and gave it FiOS. But the issue shows up in countless other ways. One of our editors recently decided to have DSL service over copper wire activated in his apartment; Verizon took two weeks to actually get the job done.

I stuck with Verizon DSL in the hope that I would be able to transfer directly to FiOS when it finally got activated. But Verizon’s indifference to wired service led to a six-month nightmare. I’d experience erratic DSL, call Verizon for help, and have it fixed through a process that cut off the phone service. Getting the phone service restored would degrade the DSL. On it went until I gave up and switched to cable—which was a good thing, because it took Verizon about two years to finally put fiber in place.

At the moment, AT&T still considers copper wiring central to its services, but it’s not clear how long that position will remain tenable. If AT&T’s position changes, then it’s likely that the company will also treat the copper just as Verizon has: like a technology that’s dead even as it continues to shamble around causing trouble.

The scary text mode insanity lying in wait beneath it all

PRESS DEL TO ENTER SETUP. Oh, BIOS, how I hate thee. Often the very first thing you have to deal with when dragging a new computer out of the box is the text mode BIOS setup screen, where you have to figure out how to turn on support for legacy USB devices, or change the boot order, or disable PXE booting, or force onboard video to work, or any number of other crazy things. It’s like being sucked into a time warp back into 1992.

Though slowly being replaced across the board by UEFI, BIOS setup screens are definitely still a thing even on new hardware—the small dual-Ethernet server I purchased just a month ago to serve as my new firewall required me to spend minutes figuring out which of its onboard USB ports were legacy-enabled and then which key summoned the setup screen (F2? Delete? F10? F1? IT’S NEVER THE SAME ONE!). Once in, I had to figure out how to enable USB device booting so that I could get Smoothwall installed, but the computer inexplicably wouldn’t boot from my carefully prepared USB stick, even though the stick worked great on the other servers in the closet. I ended up having to install from a USB CD-ROM drive instead.

Many motherboard OEMs now provide a way to adjust BIOS options from inside of Windows, which is great, but that won’t necessarily help you on a fresh Windows install (or on a computer you’ve parted together yourself and on which you haven’t installed the OEM’s universally hideous BIOS tweaking application). UEFI as a replacement has been steadily gaining ground for almost three years now, but we’ve likely got many more years of occasionally having to reboot and hold DEL to adjust some esoteric settings. Ugh.

Fax machines, and the general concept of faxing

Faxing has a longer and more venerable history than I would have guessed, based on how abhorrent it is in the modern day. The first commercial telefaxing service was established in France in 1865 via wire transmission, and we started sending faxes over phone lines circa 1964. For a long time, faxing was actually the best and fastest way to get a photographic clone of one piece of paper to an entirely different geographical location.

Then came e-mail. And digital cameras. And electronic signatures. And smartphones with digital cameras. And Passbook. And cloud storage. Yet people continue to ask me to fax them things.

When it comes to signing contracts or verifying or simply passing along information, digital copies, properly backed up with redundant files everywhere, are easier to deal with at literally every step in the process. On the very rare occasion that a physical piece of paper is absolutely necessary, here: e-mail it; I will sign it electronically and e-mail it back to you, and you print it out. You already sent me that piece of paper? I will sign it, take a picture with my phone, e-mail that picture to you, and you print it out. Everyone comes out ahead, no one has to deal with a fax machine.

That a business, let alone several businesses, has actually cropped up around the concept of allowing people to e-mail documents to a fax number is ludicrous. Get an e-mail address. They are free. Get a printer. It is cheaper than a fax machine. Don’t get a printer that is also a fax machine, because then you are just encouraging this technological concept to live on when, in fact, it needs to die.

Read the entire article here.

Image courtesy of Mobiledia.

Britain’s Genomics NHS

The United Kingdom is plotting a visionary strategy that will put its treasured National Health Service (NHS) at the heart of the new revolution in genomics-based medical care.

From Technology Review:

By sequencing the genomes of 100,000 patients and integrating the resulting data into medical care, the U.K. could become the first country to introduce genome sequencing into its mainstream health system. The U.K. government hopes that the investment will improve patient outcomes while also building a genomic medicine industry. But the project will test the practical challenges of integrating and safeguarding genomic data within an expansive health service.

Officials breathed life into the ambitious sequencing project in June when they announced the formation of Genomics England, a company set up to execute the £100 million project. The goal is to “transform how the NHS uses genomic medicine,” says the company’s chief scientist, Mark Caulfield.

Those changes will take many shapes. First, by providing whole-genome sequencing and analysis for National Health Service patients with rare diseases, Genomics England could help families understand the origin of these conditions and help doctors better treat them. Second, the company will sequence the genomes of cancer patients and their tumors, which could help doctors identify the best drugs to treat the disease. Finally, say leaders of the 100,000 genomes project, the efforts could uncover the basis for bacterial and viral resistance to medicines.

“We hope that the legacy at the end of 2017, when we conclude the 100,000 whole-genome sequences, will be a transformed capacity and capability in the NHS to use this data,” says Caulfield.

In the last few years, the cost and time required to sequence DNA have plummeted (see “Bases to Bytes”), making the technology more feasible to use as part of clinical care. Governments around the world are investing in large-scale projects to identify the best way to harness genome technology in a medical setting. For example, the Faroe Islands, a sovereign state within the Kingdom of Denmark, is offering sequencing to all of its citizens to understand the basis of genetic diseases prevalent in the isolated population. The U.S. has funded several large grants to study how to best use medical genomic data, and in 2011 it announced an effort to sequence thousands of veterans’ genomes. In 1999, the Chinese government helped establish the Beijing Genomics Institute, which would later become the world’s most prolific genome institute, providing sequences for projects based in China and abroad (see “Inside China’s Genome Factory”).

But the U.K. project stands out for the large number of genomes planned and the integration of the data into a national health-care system that serves more than 60 million people. The initial program will focus on rare inherited diseases, cancer, and infectious pathogens. Initially, the greatest potential will be in giving families long-sought-after answers as to why a rare disorder afflicts them or their children, and “in 10 or 20 years, there may be treatments sprung from it,” says Caulfield.

In addition to exploring how to best handle and use genomic data, the projects taking place in 2014 will give Genomics England time to explore different sequencing technologies offered by commercial providers. The San Diego-based sequencing company Illumina will provide sequencing at existing facilities in England, but Caulfield emphasizes that the project will want to use the sequencing services of multiple commercial providers. “We are keen to encourage competitiveness in this marketplace as a route to bring down the price for everybody.”

To help control costs for the lofty project, and to foster investment in genomic medicine in the U.K., Genomics England will ask commercial providers to set up sequencing centers in England. “Part of this program is to generate wealth, and that means U.K. jobs,” he says. “We want the sequencing providers to invest in the U.K.” The sequencing centers will be ready by 2015, when the project kicks off in earnest. “Then we will be sequencing 30,000 whole-genome sequences a year,” says Caulfield.

Read the entire article here.

Image: Argonne’s Midwest Center for Structural Genomics deposits 1,000th protein structure. Courtesy of Wikipedia.

The Outliner as Outlier

Outlining tools for the composition of text are intimately linked with the evolution of the personal computer industry. Yet while outliners were some of the earliest “apps” to appear, their true power, as mechanisms for thinking new thoughts, has yet to be fully realized.

From Technology Review:

In 1984, the personal-computer industry was still small enough to be captured, with reasonable fidelity, in a one-volume publication, the Whole Earth Software Catalog. It told the curious what was up: “On an unlovely flat artifact called a disk may be hidden the concentrated intelligence of thousands of hours of design.” And filed under “Organizing” was one review of particular note, describing a program called ThinkTank, created by a man named Dave Winer.

ThinkTank was outlining software that ran on a personal computer. There had been outline programs before (most famously, Doug Engelbart’s NLS or oNLine System, demonstrated in 1968 in “The Mother of All Demos,” which also included the first practical implementation of hypertext). But Winer’s software was outlining for the masses, on personal computers. The reviewers in the Whole Earth Software Catalog were enthusiastic: “I have subordinate ideas neatly indented under other ideas,” wrote one. Another enumerated the possibilities: “Starting to write. Writer’s block. Refining expositions or presentations. Keeping notes that you can use later. Brainstorming.” ThinkTank wasn’t just a tool for making outlines. It promised to change the way you thought.

It’s an elitist view of software, and maybe self-defeating. Perhaps most users, who just want to compose two-page documents and quick e-mails, don’t need the structure that Fargo, Winer’s latest browser-based outliner, imposes.

But I sympathize with Winer. I’m an outliner person. I’ve used many outliners over the decades. Right now, my favorite is the open-source Org-mode in the Emacs text editor. Learning an outliner’s commands is a pleasure, because the payoff—the ability to distill a bubbling cauldron of thought into a list, and then to expand that bulleted list into an essay, a report, anything—is worth it. An outliner treats a text as a set of Lego bricks to be pulled apart and reassembled until the most pleasing structure is found.
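That Lego-brick quality is easy to see in code. Here is a minimal, hypothetical sketch of the structure every outliner is built on: a tree of text nodes, where moving, promoting or demoting an idea is just re-parenting a node and re-rendering the indentation.

```python
class Node:
    def __init__(self, text, children=None):
        self.text = text
        self.children = children or []

    def render(self, depth=0):
        """Flatten the tree back into indented text, the way an outliner displays it."""
        lines = ["  " * depth + "- " + self.text]
        for child in self.children:
            lines.extend(child.render(depth + 1))
        return lines

# Build a tiny outline, then restructure it without retyping anything.
essay = Node("Essay", [
    Node("Introduction"),
    Node("Argument", [Node("Evidence"), Node("Counterpoint")]),
    Node("Conclusion"),
])

# "Move" the counterpoint up a level: pull the brick out, snap it back in elsewhere.
argument = essay.children[1]
counterpoint = argument.children.pop(1)
essay.children.insert(2, counterpoint)

print("\n".join(essay.render()))
```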

Fargo is an excellent outline editor, and it’s innovative because it’s a true Web application, running all its code inside the browser and storing versions of files in Dropbox. (Winer also recently released Concord, the outlining engine inside Fargo, under a free software license so that any developer can insert an outline into any Web application.) As you move words and ideas around, Fargo feels jaunty. Click on one of those lines in your outline and drag it, and arrows show you where else in the hierarchy that line might fit. They’re good arrows: fat, clear, obvious, informative.

For a while, bloggers using Fargo could publish posts with a free hosted service operated by Winer. But this fall the service broke, and Winer said he didn’t see how to fix it. Perhaps that’s just as well: an outline creates a certain unresolved tension with the dominant model for blogging. For Winer, a blog is a big outline of one’s days and intellectual development. But most blog publishing systems treat each post in isolation: a title, some text, maybe an image or video. Are bloggers ready to see a blog as one continuous document, a set of branches hanging off a common trunk? That’s the thing about outlines: they can become anything.

Read the entire article here.

Big Bad Data; Growing Discrimination

You may be an anonymous data point online, but that does not protect you from personal discrimination. As the technology to gather and track your every move online steadily improves, so do the opportunities to misuse that information. Many of us are already unwitting participants in the growing internet filter bubble — a phenomenon that amplifies our personal tastes, opinions and shopping habits by pre-screening and delivering only more of the same based on our online footprints. Many argue that this is benign and even beneficial — after all, isn’t it wonderful when Google’s ad network pops up product recommendations for you on “random” websites based on your previous searches, and isn’t it that much more effective when news organizations deliver only stories that match your previous browsing history, interests, affiliations or demographic?

Not so. We are in ever-increasing danger of allowing others to control what we see and hear online. So kiss discovery and serendipity goodbye. More troubling still, beyond delivering personalized experiences online, corporations that gather more and more data from and about you can decide whether you are of value. Your data may be aggregated and anonymized, but the results can still help a business target you (or avoid you), whether or not you are ever identified by name.

So, perhaps your previous online shopping history divulged a proclivity for certain medications; well, kiss goodbye to that pre-existing health condition waiver. Or, perhaps the online groups that you belong to are rather left-of-center or way out in left-field; well, say hello to a smaller annual bonus from your conservative employer. Perhaps, the news or social groups that you subscribe to don’t align very well with the values of your landlord or prospective employer. Or, perhaps, Amazon will not allow you to shop online any more because the company knows your annual take-home pay and that you are a potential credit risk. You get the idea.

Without adequate safeguards and controls, those who gather the data about you will be in the driver’s seat. Put simply, it should be the other way around — you should own the data that describes who you are and what you do, and you should determine who gets to see it and how it’s used. Welcome to the age of Big (Bad) Data and the new era of data-driven discrimination.

From Technology Review:

Data analytics are being used to implement a subtle form of discrimination, while anonymous data sets can be mined to reveal health data and other private information, a Microsoft researcher warned this morning at MIT Technology Review’s EmTech conference.

Kate Crawford, principal researcher at Microsoft Research, argued that these problems could be addressed with new legal approaches to the use of personal data.

In a new paper, she and a colleague propose a system of “due process” that would give people more legal rights to understand how data analytics are used in determinations made against them, such as denial of health insurance or a job. “It’s the very start of a conversation about how to do this better,” Crawford, who is also a visiting professor at the MIT Center for Civic Media, said in an interview before the event. “People think ‘big data’ avoids the problem of discrimination, because you are dealing with big data sets, but in fact big data is being used for more and more precise forms of discrimination—a form of data redlining.”

During her talk this morning, Crawford added that with big data, “you will never know what those discriminations are, and I think that’s where the concern begins.”

Health data is particularly vulnerable, the researcher says. Search terms for disease symptoms, online purchases of medical supplies, and even the RFID tags on drug packaging can provide websites and retailers with information about a person’s health.

As Crawford and Jason Schultz, a professor at New York University Law School, wrote in their paper: “When these data sets are cross-referenced with traditional health information, as big data is designed to do, it is possible to generate a detailed picture about a person’s health, including information a person may never have disclosed to a health provider.”

And a recent Cambridge University study, which Crawford alluded to during her talk, found that “highly sensitive personal attributes”— including sexual orientation, personality traits, use of addictive substances, and even parental separation—are highly predictable by analyzing what people click on to indicate they “like” on Facebook. The study analyzed the “likes” of 58,000 Facebook users.
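Mechanically, that sort of prediction is unremarkable, which is part of what makes it worrying. The sketch below is a toy illustration on invented data, not the Cambridge study's method or code: each user is a row of 0/1 "like" indicators, and an ordinary logistic regression learns which likes correlate with the attribute being predicted.

```python
# Toy illustration with made-up data: predict a private attribute from "likes".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 50

likes = rng.integers(0, 2, size=(n_users, n_pages))        # 0/1 like matrix
# Invent a hidden attribute that happens to correlate with a handful of pages.
signal = likes[:, [3, 17, 42]].sum(axis=1)
attribute = (signal + rng.normal(0, 0.5, n_users) > 1.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(likes[:800], attribute[:800])
print("held-out accuracy:", model.score(likes[800:], attribute[800:]))
```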

Similarly, purchasing histories, tweets, and demographic, location, and other information gathered about individual Web users, when combined with data from other sources, can result in new kinds of profiles that an employer or landlord might use to deny someone a job or an apartment.

In response to such risks, the paper’s authors propose a legal framework they call “big data due process.” Under this concept, a person who has been subject to some determination—whether denial of health insurance, rejection of a job or housing application, or an arrest—would have the right to learn how big data analytics were used.

This would entail the sorts of disclosure and cross-examination rights that are already enshrined in the legal systems of the United States and many other nations. “Before there can be greater social acceptance of big data’s role in decision-making, especially within government, it must also appear fair, and have an acceptable degree of predictability, transparency, and rationality,” the authors write.

Data analytics can also get things deeply wrong, Crawford notes. Even the formerly successful use of Google search terms to identify flu outbreaks failed last year, when actual cases fell far short of predictions. Increased flu-related media coverage and chatter about the flu in social media were mistaken for signs of people complaining they were sick, leading to the overestimates.  “This is where social media data can get complicated,” Crawford said.

Read the entire article here.

Bots That Build Themselves

[tube]6aZbJS6LZbs[/tube]

Wouldn’t it be a glorious breakthrough if your next furniture purchase could assemble itself? No more sifting through stepwise Scandinavian manuals describing your next “Fjell” or “Bestå” pieces from IKEA; no more looking for a magnifying glass to decipher strange text from Asia; no more searches for an Allen wrench that fits those odd hexagonal bolts. Now, to set your expectations, recent innovations at the macro-mechanical level are not yet quite in the same league as planet-sized self-assembling spaceships (from the mind of Iain Banks). But researchers and engineers are making progress.

From ars technica:

At a certain level of complexity and obligation, sets of blocks can easily go from fun to tiresome to assemble. Legos? K’Nex? Great. Ikea furniture? Bridges? Construction scaffolding? Not so much. To make things easier, three scientists at MIT recently exhibited a system of self-assembling cubic robots that could in theory automate the process of putting complex systems together.

The blocks, dubbed M-Blocks, use a combination of magnets and an internal flywheel to move around and stick together. The flywheels, running off an internal battery, generate angular momentum that allows the blocks to flick themselves at each other, spinning them through the air. Magnets on the surfaces of the blocks allow them to click into position.

Each flywheel inside the blocks can spin at up to 20,000 rotations per minute. Motion happens when the flywheel spins and then is suddenly braked by a servo motor that tightens a belt encircling the flywheel, imparting its angular momentum to the body of the blocks. That momentum sends the block flying at a certain velocity toward its fellow blocks (if there is a lot of it) or else rolling across the ground (if there’s less of it). Watching a video of the blocks self-assembling, the effect is similar to watching Sid’s toys rally in Toy Story—a little off-putting to see so many parts moving into a whole at once, unpredictably moving together like balletic dying fish.
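A rough back-of-the-envelope calculation shows why braking a 20,000 rpm flywheel is enough to flip a small cube. The masses and dimensions below are assumptions for illustration, not figures from the MIT paper: the flywheel's angular momentum is handed to the cube when the belt brakes it, and the cube tips if the resulting rotational energy can lift its centre of mass over a bottom edge.

```python
import math

# Assumed, illustrative parameters (not taken from the M-Blocks paper):
flywheel_mass = 0.03      # kg, treated as a solid disc
flywheel_radius = 0.02    # m
rpm = 20_000
cube_mass = 0.14          # kg
cube_side = 0.05          # m
g = 9.81

# Angular momentum stored in the spinning flywheel.
I_flywheel = 0.5 * flywheel_mass * flywheel_radius**2
omega_fly = rpm * 2 * math.pi / 60
L = I_flywheel * omega_fly

# When the belt brake stops the flywheel, that momentum rotates the cube about a bottom edge.
I_edge = (2 / 3) * cube_mass * cube_side**2          # solid cube about one edge
omega_cube = L / I_edge
kinetic = 0.5 * I_edge * omega_cube**2

# Energy needed to lift the centre of mass over the pivot edge.
lift = cube_mass * g * cube_side * (1 / math.sqrt(2) - 0.5)

print(f"rotational energy after braking: {kinetic:.3f} J")
print(f"energy needed to tip over the edge: {lift:.3f} J")
print("tips over?", kinetic > lift)
```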

Each of the blocks is controlled by a 32-bit ARM microprocessor and three 3.7 volt batteries that afford each one between 20 and 100 moves before the battery life is depleted. Rolling is the least complicated motion, though the blocks can also use their flywheels to turn corners, climb over each other, or even complete a leap from ground level to three blocks high, sticking the landing on top of a column 51 percent of the time.

The blocks use 6-axis inertial measurement units, like those found on planes, ships, or spacecraft, to figure out how they are oriented in space. Each cube has an IR LED and a photodiode that the cubes use to communicate with each other.

The authors note that the cubes’ motion is not very precise yet; one cube is considered to have moved successfully if it hits its goal position within three tries. The researchers found the RPMs needed to generate momentum for different movements through trial and error.

If the individual cube movements weren’t enough, groups of the cubes can also move together in either a cluster or as a row of cubes rolling in lockstep. A set of four cubes arranged in a square attempting to roll together in a block approaches the limits of the cubes’ hardware, the authors write. The cubes can even work together to get around an obstacle, rolling over each other and stacking together World War Z-zombie style until the bump in the road has been crossed.

Read the entire article here.

Video: M-Blocks. Courtesy of ars technica.

Personalized Care Courtesy of Big Data

The era of truly personalized medicine and treatment plans may still be a fair way off, but thanks to big data initiatives, predictive and preventative healthcare is making significant progress. This bodes well for over-stretched healthcare systems, medical professionals, and those who need care and/or pay for it.

That said, it is useful to keep in mind that similar data in other domains, such as shopping, travel and media, has been delivering personalized content and services for quite some time. Healthcare information technology certainly lags where it should be leading, and there is no single, agreed-upon reason why. Still, it is encouraging to see the healthcare and medical information industries catching up.

From Technology Review:

On the ground floor of the Mount Sinai Medical Center’s new behemoth of a research and hospital building in Manhattan, rows of empty black metal racks sit waiting for computer processors and hard disk drives. They’ll house the center’s new computing cluster, adding to an existing $3 million supercomputer that hums in the basement of a nearby building.

The person leading the design of the new computer is Jeff Hammerbacher, a 30-year-old known for being Facebook’s first data scientist. Now Hammerbacher is applying the same data-crunching techniques used to target online advertisements, but this time for a powerful engine that will suck in medical information and spit out predictions that could cut the cost of health care.

With $3 trillion spent annually on health care in the U.S., it could easily be the biggest job for “big data” yet. “We’re going out on a limb—we’re saying this can deliver value to the hospital,” says Hammerbacher.

Mount Sinai has 1,406 beds plus a medical school and treats half a million patients per year. Increasingly, it’s run like an information business: it’s assembled a biobank with 26,735 patient DNA and plasma samples, it finished installing a $120 million electronic medical records system this year, and it has been spending heavily to recruit computing experts like Hammerbacher.

It’s all part of a “monstrously large bet that [data] is going to matter,” says Eric Schadt, the computational biologist who runs Mount Sinai’s Icahn Institute for Genomics and Multiscale Biology, where Hammerbacher is based, and who was himself recruited from the gene sequencing company Pacific Biosciences two years ago.

Mount Sinai hopes data will let it succeed in a health-care system that’s shifting dramatically. Perversely, because hospitals bill by the procedure, they tend to earn more the sicker their patients become. But health-care reform in Washington is pushing hospitals toward a new model, called “accountable care,” in which they will instead be paid to keep people healthy.

Mount Sinai is already part of an experiment that the federal agency overseeing Medicare has organized to test these economic ideas. Last year it joined 250 U.S. doctor’s practices, clinics, and other hospitals in agreeing to track patients more closely. If the medical organizations can cut costs with better results, they’ll share in the savings. If costs go up, they can face penalties.

The new economic incentives, says Schadt, help explain the hospital’s sudden hunger for data, and its heavy spending to hire 150 people during the last year just in the institute he runs. “It’s become ‘Hey, use all your resources and data to better assess the population you are treating,’” he says.

One way Mount Sinai is doing that already is with a computer model where factors like disease, past hospital visits, even race, are used to predict which patients stand the highest chance of returning to the hospital. That model, built using hospital claims data, tells caregivers which chronically ill people need to be showered with follow-up calls and extra help. In a pilot study, the program cut readmissions by half; now the risk score is being used throughout the hospital.
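Stripped of the jargon, such a model can be as plain as a weighted checklist. The sketch below is purely hypothetical (the features, weights and threshold are invented, not Mount Sinai's): score each patient from a few claims-derived factors and flag anyone above a cutoff for follow-up calls.

```python
# Hypothetical readmission risk score; the weights and threshold are invented.
RISK_WEIGHTS = {
    "prior_admissions_12m": 0.9,
    "chronic_conditions": 0.6,
    "emergency_visit_last_30d": 1.2,
    "lives_alone": 0.4,
}
FOLLOW_UP_THRESHOLD = 2.0

def risk_score(patient: dict) -> float:
    """Sum each factor's value times its weight; missing factors count as zero."""
    return sum(RISK_WEIGHTS[k] * patient.get(k, 0) for k in RISK_WEIGHTS)

def needs_follow_up(patient: dict) -> bool:
    return risk_score(patient) >= FOLLOW_UP_THRESHOLD

patient = {"prior_admissions_12m": 2, "chronic_conditions": 3, "lives_alone": 1}
print(risk_score(patient), needs_follow_up(patient))
```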

Hammerbacher’s new computing facility is designed to supercharge the discovery of such insights. It will run a version of Hadoop, software that spreads data across many computers and is popular in industries, like e-commerce, that generate large amounts of quick-changing information.

Patient data are slim by comparison, and not very dynamic. Records get added to infrequently—not at all if a patient visits another hospital. That’s a limitation, Hammerbacher says. Yet he hopes big-data technology will be used to search for connections between, say, hospital infections and the DNA of microbes present in an ICU, or to track data streaming in from patients who use at-home monitors.

One person he’ll be working with is Joel Dudley, director of biomedical informatics at Mount Sinai’s medical school. Dudley has been running information gathered on diabetes patients (like blood sugar levels, height, weight, and age) through an algorithm that clusters them into a weblike network of nodes. In “hot spots” where diabetic patients appear similar, he’s then trying to find out if they share genetic attributes. That way DNA information might add to predictions about patients, too.

A goal of this work, which is still unpublished, is to replace the general guidelines doctors often use in deciding how to treat diabetics. Instead, new risk models—powered by genomics, lab tests, billing records, and demographics—could make up-to-date predictions about the individual patient a doctor is seeing, not unlike how a Web ad is tailored according to who you are and sites you’ve visited recently.

That is where the big data comes in. In the future, every patient will be represented by what Dudley calls “large dossier of data.” And before they are treated, or even diagnosed, the goal will be to “compare that to every patient that’s ever walked in the door at Mount Sinai,” he says. “[Then] you can say quantitatively what’s the risk for this person based on all the other patients we’ve seen.”

Read the entire article here.

Google Hacks

Some cool shortcuts to make the most of Google search.

From the Telegraph:

1. Calculator

Google’s calculator function is far more powerful than most people realise. As well as doing basic maths (5+6 or 3*2) it can do logarithmic calculations, and it knows constants (like e and pi), as well as functions like Cos and Sin. Google can also translate numbers into binary code – try typing ’12*3 in binary’.

2. Site search

By using the ‘site:’ keyword, you can make Google only return results from one site. So for example, you could search for “site:telegraph.co.uk manchester united” and only get stories on Manchester United from the Telegraph website.

3. Conversions

Currency conversions and unit conversions can be found by using the syntax: <amount> <unit1> in <unit2>. So for example, you could type ‘1 GBP in USD’, ’20 C in F’ or ’15 inches in cm’ and get an instant answer.

4. Time zones

Search for ‘time in <place>’ and you will get the local time for that place, as well as the time zone it is in.

5. Translations

A quick way to translate foreign words is to type ‘translate <word> to <language>’. So for example, ‘translate pomme to english’ returns the result apple, and ‘translate pomme to spanish’ returns the result ‘manzana’.

6. Search for a specific file type

If you know you are looking for a PDF or a Word file, you can search for specific file types by typing ‘<search term> filetype:pdf’ or ‘<search term> filetype:doc’

7. Check flight status

If you type in a flight number, the top result is the details of the flight and its status. So, for example, typing in BA 335 reveals that British Airways flight 335 departs Paris at 15.45 today and arrives at Heathrow Terminal 5 at 15.48 local time.

8. Search for local film showings

Search for film showings in your area by typing ‘films’ or ‘movies’ followed by your postcode. In the UK, this only narrows it down to your town or city. In the US this is more accurate, as results are displayed according to zip-code.

9. Weather forecasts

Type the name of a city followed by ‘forecast’, and Google will tell you the weather today, including levels of precipitation, humidity and wind, as well as the forecast for the next week, based on data from The Weather Channel.

10. Exclude search terms

When you enter a search term that has a second meaning, or a close association with something else, it can be difficult to find the results you want. Exclude irrelevant results using the ‘-’ sign. So for searches for ‘apple’ where the word ‘iPhone’ should not appear, enter ‘apple -iPhone’.
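These operators also compose, which makes them easy to generate programmatically. A minimal sketch, assuming nothing beyond Google's public search URL and the operators described above:

```python
from urllib.parse import urlencode

def google_query(terms, site=None, filetype=None, exclude=()):
    """Build a Google search URL from the operators described above."""
    parts = [terms]
    if site:
        parts.append(f"site:{site}")
    if filetype:
        parts.append(f"filetype:{filetype}")
    parts.extend(f"-{word}" for word in exclude)      # excluded terms get a leading '-'
    return "https://www.google.com/search?" + urlencode({"q": " ".join(parts)})

print(google_query("manchester united", site="telegraph.co.uk"))
print(google_query("annual report", filetype="pdf", exclude=["draft"]))
```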

Read the entire article here.

Image courtesy of Google.

100-Year Starship Project

As Voyager 1 embarks on its interstellar voyage, having recently left the confines of our solar system, NASA and the Pentagon are collaborating on the 100-Year Starship Project. This effort aims to make human interstellar travel a reality within the next 100 years. While this is an admirable goal, let’s not forget that the fastest outbound spacecraft ever launched — Voyager 1 — would still take more than 70,000 years to cover the distance to the nearest star beyond our sun. So NASA had better get its creative juices flowing.
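That figure is easy to sanity-check. A minimal back-of-the-envelope calculation, using Voyager 1's roughly 17 km/s cruise speed and the 4.24 light-year distance to Proxima Centauri (which the probe is not actually aimed at):

```python
speed_km_s = 17.0                      # Voyager 1's approximate speed relative to the sun
light_year_km = 9.461e12
distance_km = 4.24 * light_year_km     # distance to Proxima Centauri
seconds_per_year = 3.156e7

years = distance_km / speed_km_s / seconds_per_year
print(f"about {years:,.0f} years")     # on the order of 75,000 years
```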

From the Guardian:

It would be hard enough these days to find a human capable of playing a 12-inch LP, let alone an alien. So perhaps it is time for Nasa to update its welcome pack for extraterrestrials.

The agency announced earlier this month that its Voyager 1 probe has left the solar system, becoming the first object to enter interstellar space. On board is a gold-plated record from 1977.

It contains greetings in dozens of languages, sounds such as morse code, a tractor, a kiss, music – from Bach to Chuck Berry – and pictures of life on Earth, including a sperm fertilising an egg, athletes, and the Sydney Opera House.

Now, Jon Lomberg, the original Golden Record design director, has launched a project aiming to persuade Nasa to upload a current snapshot of Earth to one of its future interstellar craft as a sort of space-age message in a bottle.

The New Horizons spacecraft will reach Pluto in 2015, then is expected to leave the solar system in about three decades. The New Horizons Message Initiative wants to create a crowd-sourced “human fingerprint” for extra-terrestrial consumption that can be digitally uploaded to the probe as its journey continues. The message could be modified to reflect changes on Earth as years go by.

With the backing of numerous space experts, Lomberg is orchestrating a petition and fundraising campaign. The first stage will firm up what can be sent in a format that would be easy for aliens to decode; the second will be the online crowd-sourcing of material.

Especially given the remote possibility that the message will ever be read, Lomberg emphasises the benefits to earthlings of starting a debate about how we should introduce ourselves to interplanetary strangers.

“The Voyager record was our best foot forward. We just talked about what we were like on a good day … no wars or famine. It was a sanitised portrait. Should we go warts and all? That is a legitimate discussion that needs to be had,” he said.

“The previous messages were decided by elite groups … Everybody is equally entitled and qualified to do it. If you’re a human on Earth you have a right to decide how you’re presented.”

“Astronauts have said that you step off the Earth and look back and you see things differently. Looking at yourself with a different perspective is always useful. The Golden Record has had a tremendous effect in terms of making people think about the culture in ways they wouldn’t normally do.”

Buoyed by the Voyager news, scientists gathered in Houston last weekend for the annual symposium of the Nasa- and Pentagon-backed 100-Year Starship project, which aims to make human interstellar travel a reality within a century.

“I think it’s an incredible boost. I think it makes it much more plausible,” said Dr Mae Jemison, the group’s principal and the first African-American woman in space. “What it says is that we know we can get to interstellar space. We got to interstellar space with technologies that were developed 40 years ago. There is every reason to suspect that we can create and build vehicles that can go that far, faster.”

Jeff Nosanov, of Nasa’s Jet Propulsion Laboratory, near Los Angeles, hopes to persuade the agency to launch about ten interstellar probes to gather data from a variety of directions. They would be powered by giant sails that harness the sun’s energy, much like a boat on the ocean is propelled by wind. Solar sails are gaining credibility as a realistic way of producing faster spacecraft, given the limitations of existing rocket technology. Nasa is planning to launch a spacecraft with a 13,000 square-foot sail in November next year.
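The thrust involved is tiny but free and continuous. A rough, illustrative estimate for a perfectly reflecting 13,000 square-foot sail at Earth's distance from the sun (real sails reflect imperfectly and sag, so treat this as an upper bound):

```python
solar_constant = 1361.0          # W/m^2 of sunlight at 1 AU
c = 2.998e8                      # speed of light, m/s
area_m2 = 13_000 * 0.0929        # 13,000 sq ft converted to square metres

# A perfect reflector feels twice the photon momentum flux: F = 2 * I * A / c.
force_newtons = 2 * solar_constant * area_m2 / c
print(f"thrust at 1 AU: ~{force_newtons * 1000:.0f} millinewtons")
```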

“We have a starship and it’s 36 years old, so that’s really good. This is not as impossible as it sounds. Where the challenge becomes ludicrous and really astounding is the distances from one star to another,” Nosanov said.

Read the entire article here.

Image: USS Enterprise (NCC-1701). Courtesy of Star Trek franchise.

Above and Beyond

According to NASA, Voyager 1 officially left the protection of the solar system on or about August 25, 2012, and is now heading into interstellar space. It is the first and only human-made object to leave the solar system.

Perhaps, one day in the distant future real human voyagers — or their android cousins — will come across the little probe as it continues on its lonely journey.

From Space:

A spacecraft from Earth has left its cosmic backyard and taken its first steps in interstellar space.

After streaking through space for nearly 35 years, NASA’s robotic Voyager 1 probe finally left the solar system in August 2012, a study published today (Sept. 12) in the journal Science reports.

“Voyager has boldly gone where no probe has gone before, marking one of the most significant technological achievements in the annals of the history of science, and as it enters interstellar space, it adds a new chapter in human scientific dreams and endeavors,” NASA science chief John Grunsfeld said in a statement. “Perhaps some future deep-space explorers will catch up with Voyager, our first interstellar envoy, and reflect on how this intrepid spacecraft helped enable their future.”

A long and historic journey

Voyager 1 launched on Sept. 5, 1977, about two weeks after its twin, Voyager 2. Together, the two probes conducted a historic “grand tour” of the outer planets, giving scientists some of their first up-close looks at Jupiter, Saturn, Uranus, Neptune and the moons of these faraway worlds.

The duo completed its primary mission in 1989, and then kept on flying toward the edge of the heliosphere, the huge bubble of charged particles and magnetic fields that the sun puffs out around itself. Voyager 1 has now popped free of this bubble into the exotic and unexplored realm of interstellar space, scientists say.

They reached this historic conclusion with a little help from the sun. A powerful solar eruption caused electrons in Voyager 1’s location to vibrate significantly between April 9 and May 22 of this year. The probe’s plasma wave instrument detected these oscillations, and researchers used the measurements to figure out that Voyager 1’s surroundings contained about 1.3 electrons per cubic inch (0.08 electrons per cubic centimeter).

That’s far higher than the density observed in the outer regions of the heliosphere (roughly 0.03 electrons per cubic inch, or 0.002 electrons per cubic cm) and very much in line with the 1.6 electrons per cubic inch (0.10 electrons per cubic cm) or so expected in interstellar space.
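Those per-cubic-inch and per-cubic-centimetre figures are the same measurements in different units; a quick sanity check, using roughly 16.39 cubic centimetres to the cubic inch:

```python
CM3_PER_IN3 = 16.387

for per_cm3 in (0.08, 0.002, 0.10):
    # Multiplying by the number of cubic centimetres in a cubic inch converts the density.
    print(f"{per_cm3} electrons/cm^3 = {per_cm3 * CM3_PER_IN3:.2f} electrons/in^3")
```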

“We literally jumped out of our seats when we saw these oscillations in our data — they showed us that the spacecraft was in an entirely new region, comparable to what was expected in interstellar space, and totally different than in the solar bubble,” study lead author Don Gurnett of the University of Iowa, the principal investigator of Voyager 1’s plasma wave instrument, said in a statement.

It may seem surprising that electron density is higher beyond the solar system than in its extreme outer reaches. Interstellar space is, indeed, emptier than the regions in Earth’s neighborhood, but the density inside the solar bubble drops off dramatically at great distances from the sun, researchers said.

Calculating a departure date

The study team wanted to know if Voyager 1 left the solar system sometime before April 2013, so they combed through some of the probe’s older data. They found a monthlong period of electron oscillations in October-November 2012 that translated to a density of 0.1 electrons per cubic inch (0.006 electrons per cubic cm).

Using these numbers and the amount of ground that Voyager 1 covers — about 325 million miles (520 million kilometers) per year — the researchers calculated that the spacecraft likely left the solar system in August 2012.

That time frame matches up well with several other important changes Voyager 1 observed. On Aug. 25, 2012, the probe recorded a 1,000-fold drop in the number of charged solar particles while also measuring a 9 percent increase in fast-moving galactic cosmic rays, which originate beyond the solar system.

“These results, and comparison with previous heliospheric radio measurements, strongly support the view that Voyager 1 crossed the heliopause into the interstellar plasma on or about Aug. 25, 2012,” Gurnett and his colleagues write in the new study.

At that point, Voyager 1 was about 11.25 billion miles (18.11 billion km) from the sun, or roughly 121 times the distance between Earth and the sun. The probe is now 11.66 billion miles (18.76 billion km) from the sun. (Voyager 2, which took a different route through the solar system, is currently 9.54 billion miles, or 15.35 billion km, from the sun.)

Read the entire article here.

Image: Voyager Gold Disk. Courtesy of Wikipedia.

Filter Bubble on the Move

Personalization technology that allows marketers and media organizations to customize their products and content specifically to you seems to be a win-win for all: businesses win by addressing the needs — perceived or real — of specific customers; you win by seeing or receiving only items in which you’re interested.

But this is a rather simplistic calculation, for it fails to address the consequences of narrow targeting and a cycle of blinkered self-reinforcement, resulting in tunnel vision. More recently this has become known as the filter bubble. The filter bubble eliminates serendipitous discovery and reduces creative connections by limiting our exposure to contrarian viewpoints and the unexpected. Or, to put it more bluntly, it helps maintain a closed mind. This is true while you sit on the couch surfing the internet and, increasingly, while you travel.

From the New York Times:

I’m half a world from home, in a city I’ve never explored, with fresh sights and sounds around every corner. And what am I doing?

I’m watching exactly the kind of television program I might watch in my Manhattan apartment.

Before I left New York, I downloaded a season of “The Wire,” in case I wanted to binge, in case I needed the comfort. It’s on my iPad with a slew of books I’m sure to find gripping, a bunch of the music I like best, issues of favorite magazines: a portable trove of the tried and true, guaranteed to insulate me from the strange and new.

I force myself to quit “The Wire” after about 20 minutes and I venture into the streets, because Baltimore’s drug dealers will wait and Shanghai’s soup dumplings won’t. But I’m haunted by how tempting it was to stay put, by how easily a person these days can travel the globe, and travel through life, in a thoroughly customized cocoon.

I’m not talking about the chain hotels or chain restaurants that we’ve long had and that somehow manage to be identical from time zone to time zone, language to language: carbon-copy refuges for unadventurous souls and stomachs.

I’m talking about our hard drives, our wired ways, “the cloud” and all of that. I’m talking about our unprecedented ability to tote around and dwell in a snugly tailored reality of our own creation, a monochromatic gallery of our own curation.

This coddling involves more than earphones, touch pads, palm-sized screens and gigabytes of memory. It’s a function of how so many of us use this technology and how we let it use us. We tune out by tucking ourselves into virtual enclaves in which our ingrained tastes are mirrored and our established opinions reflected back at us.

In theory the Internet, along with its kindred advances, should expand our horizons, speeding us to aesthetic and intellectual territories we haven’t charted before. Often it does.

But at our instigation and with our assent, it also herds us into tribes of common thought and shared temperament, amplifying the timeless human tropism toward cliques. Cyberspace, like suburbia, has gated communities.

Our Web bookmarks and our chosen social-media feeds help us retreat deeper into our partisan camps. (Cable-television news lends its own mighty hand.) “It’s the great irony of the Internet era: people have more access than ever to an array of viewpoints, but also the technological ability to screen out anything that doesn’t reinforce their views,” Jonathan Martin wrote in Politico last year, explaining how so many strategists and analysts on the right convinced themselves, in defiance of polls, that Mitt Romney was about to win the presidency.

But this sort of echo chamber also exists on cultural fronts, where we’re exhorted toward sameness and sorted into categories. The helpful video-store clerk or bookstore owner has been replaced, refined, automated: we now have Netflix suggestions for what we should watch next, based on what we’ve watched before, and we’re given Amazon prods for purchasing novels that have been shown to please readers just like us. We’re profiled, then clustered accordingly.
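
To make the "profiled, then clustered" idea concrete, here is a minimal, purely illustrative Python sketch of neighborhood-style collaborative filtering. It is not Netflix's or Amazon's actual algorithm; the watch histories and the simple cosine-similarity scoring are assumptions for demonstration only.

from collections import defaultdict
from math import sqrt

# Hypothetical watch histories: user -> set of titles they have watched.
histories = {
    "alice": {"The Wire", "True Detective", "Broadchurch"},
    "bob":   {"The Wire", "True Detective", "Fargo"},
    "carol": {"Planet Earth", "Cosmos"},
}

def cosine(a, b):
    # Similarity of two viewing histories treated as binary vectors.
    if not a or not b:
        return 0.0
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b)))

def recommend(user, histories, top_n=3):
    # Score titles the user has not seen by how similar their viewers are to the user.
    seen = histories[user]
    scores = defaultdict(float)
    for other, other_seen in histories.items():
        if other == user:
            continue
        sim = cosine(seen, other_seen)
        if sim == 0.0:
            continue
        for title in other_seen - seen:
            scores[title] += sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", histories))   # -> ['Fargo']

The point of the sketch is the clustering itself: whatever you have already watched pulls you toward the people most like you, and their tastes become your suggestions.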

By joining particular threads on Facebook and Twitter, we can linger interminably on the one or two television shows that obsess us. Through music-streaming services and their formulas for our sweet spots, we meet new bands that might as well be reconfigurations of the old ones. Algorithms lead us to anagrams.

Read the entire article here.

All Conquering TV

In the almost 90 years since television was invented, it has done more to re-shape our world than conquering armies and pandemics. Whether you see TV as a force for good or evil — or, more recently, as a method for delivering absurd banality — you would be hard-pressed to find another human invention that has altered us so profoundly: psychologically, socially and culturally. What would its creator — John Logie Baird — think of his invention now, almost 70 years after his death?

From the Guardian:

Like most people my age – 51 – my childhood was in black and white. That’s because my memory of childhood is in black and white, and that’s because television in the 1960s (and most photography) was black and white. Bill and Ben, the Beatles, the Biafran war, Blue Peter, they were all black and white, and their images form the monochrome memories of my early years.

That’s one of the extraordinary aspects of television – its ability to trump reality. If seeing is believing, then there’s always a troubling doubt until you’ve seen it on television. A mass medium delivered to almost every household, it’s the communal confirmation of experience.

On 30 September it will be 84 years since the world’s first-ever television transmission. In Armchair Nation, his new social history of TV, Joe Moran, professor of English and cultural history at Liverpool John Moores University, recounts the events of that momentous day. A Yorkshire comedian named Sydney Howard performed a comic monologue and someone called Lulu Stanley sang “He’s tall, and dark, and handsome” in what was perhaps the earliest progenitor of The X Factor.

The images were broadcast by the BBC and viewed by a small group of invited guests on a screen about half the size of the average smartphone in the inventor John Logie Baird’s Covent Garden studio. Logie Baird may have been a visionary but even he would have struggled to comprehend just how much the world would be changed by his vision – television, the 20th century’s defining technology.

Every major happening is now captured by television, or it’s not a major happening. Politics and politicians are determined by how they play on television. Public knowledge, charity, humour, fashion trends, celebrity and consumer demand are all subject to its critical influence. More than the aeroplane or the nuclear bomb, the computer or the telephone, TV has determined what we know and how we think, the way we believe and how we perceive ourselves and the world around us (only the motor car is a possible rival and that, strictly speaking, was a 19th-century invention).

Not only did television re-envision our sense of the world, it remains, even in the age of the internet, Facebook and YouTube, the most powerful generator of our collective memories, the most seductive and shocking mirror of society, and the most virulent incubator of social trends. It’s also stubbornly unavoidable.

There is good television, bad television, too much television and even, for some cultural puritans, no television, but whatever the equation, there is always television. It’s ubiquitously there, radiating away in the corner, even when it’s not. Moran quotes a dumbfounded Joey Tribbiani (Matt LeBlanc) from Friends on learning that a new acquaintance doesn’t have a TV set: “But what does your furniture point at?”

Like all the best comic lines, it contains a profound truth. The presence of television is so pervasive that its very absence is a kind of affront to the modern way of life. Not only has television reshaped the layout of our sitting rooms, it has also reshaped the very fabric of our lives.

Just to take Friends as one small example. Before it was first aired back in 1994, the idea of groups of young people hanging out in a coffee bar talking about relationships in a language of comic neurosis was, at least as far as pubcentric Britain was concerned, laughable. Now it’s a high-street fact of life. Would Starbucks and Costa have enjoyed the same success if Joey and friends had not shown the way?

But in 1929 no one had woken up and smelled the coffee. The images were extremely poor quality, the equipment was dauntingly expensive and reception vanishingly limited. In short, it didn’t look like the future. One of the first people to recognise television’s potential – or at least the most unappealing part of it – was Aldous Huxley. Writing in Brave New World, published in 1932, he described a hospice of the future in which every bed had a TV set at its foot. “Television was left on, a running tap, from morning till night.”

All the same, television remained a London-only hobby for a tiny metropolitan elite right up until the Second World War. Then, for reasons of national security, the BBC switched off its television signal and the experiment seemed to come to a bleak end.

It wasn’t until after the war that television was slowly spread out across the country. Some parts of the Scottish islands did not receive a signal until deep into the 1960s, but the nation was hooked. Moran quotes revealing statistics from 1971 about the contemporary British way of life: “Ten per cent of homes still had no indoor lavatory or bath, 31% had no fridge and 62% had no telephone, but only 9% had no TV.”

My family, as it happened, fitted into that strangely incongruous sector that had no inside lavatory or bath but did have a TV. This seems bizarre, if you think about society’s priorities, but it’s a common situation today throughout large parts of the developing world.

I don’t recall much anxiety about the lack of a bath, at least on my part, but I can’t imagine what the sense of social exclusion would have been like, aged nine, if I hadn’t had access to Thunderbirds and The Big Match.

The strongest memory I have of watching television in the early 1970s is in my grandmother’s flat on wintry Saturday afternoons. Invariably the gas fire was roaring, the room was baking, and that inscrutable spectacle of professional wrestling, whose appeal was a mystery to me (if not Roland Barthes), lasted an eternity before the beautifully cadenced poetry of the football results came on.

Read the entire article here.

Image: John Logie Baird. Courtesy of Wikipedia.

A Post-PC, Post-Laptop World

Not too long ago the founders and shapers of much of our IT world were dreaming up new information technologies, tools and processes that we didn’t know we needed. These tinkerers became the establishment luminaries that we still love or hate — Microsoft, Dell, HP, Apple, Motorola and IBM. And, of course, they are still around.

But the world that they constructed is imploding and nobody really knows where it is heading. Will the leaders of the next IT revolution come from the likes of Google or Facebook? Or, as is more likely, is this just a prelude to a more radical shift, with seeds being sown in anonymous garages and labs across the U.S. and other tech hubs? Regardless, we are in for some unpredictable and exciting times.

From ars technica:

Change happens in IT whether you want it to or not. But even with all the talk of the “post-PC” era and the rise of the horrifically named “bring your own device” hype, change has happened in a patchwork. Despite the disruptive technologies documented on Ars and elsewhere, the fundamentals of enterprise IT have evolved slowly over the past decade.

But this, naturally, is about to change. The model that we’ve built IT on for the past 10 years is in the midst of collapsing on itself, and the companies that sold us the twigs and straw it was built with—Microsoft, Dell, and Hewlett-Packard to name a few—are facing the same sort of inflection points in their corporate life cycles that have ripped past IT giants to shreds. These corporate giants are faced with moments of truth despite making big bets on acquisitions to try to position themselves for what they saw as the future.

Predicting the future is hard, especially when you have an installed base to consider. But it’s not hard to identify the economic, technological, and cultural forces that are converging right now to shape the future of enterprise IT in the short term. We’re not entering a “post-PC” era in IT—we’re entering an era where the device we use to access applications and information is almost irrelevant. Nearly everything we do as employees or customers will be instrumented, analyzed, and aggregated.

“We’re not on a 10-year reinvention path anymore for enterprise IT,” said David Nichols, Americas IT Transformation Leader at Ernst & Young. “It’s more like [a] five-year or four-year path. And it’s getting faster. It’s going to happen at a pace we haven’t seen before.”

While the impact may be revolutionary, the cause is more evolutionary. A host of technologies that have been the “next big thing” for much of the last decade—smart mobile devices, the “Internet of Things,” deep analytics, social networking, and cloud computing—have finally reached a tipping point. The demand for mobile applications has turned what were once called “Web services” into a new class of managed application programming interfaces. These are changing not just how users interact with data, but the way enterprises collect and share data, write applications, and secure them.

Add the technologies pushed forward by government and defense in the last decade (such as facial recognition) and an abundance of cheap sensors, and you have the perfect “big data” storm. This sea of structured and unstructured data could change the nature of the enterprise or drown IT departments in the process. It will create social challenges as employees and customers start to understand the level to which they are being tracked by enterprises. And it will give companies more ammunition to continue to squeeze more productivity out of a shrinking workforce, as jobs once done by people are turned over to software robots.

There has been a lot of talk about how smartphones and tablets have supplanted the PC. In many ways, that talk is true. In fact, we’re still largely using smartphones and tablets as if they were PCs.

But aside from mobile Web browsing and the use of tablets as a replacement for notebook PCs in presentations, most enterprises still use mobile devices the same way they used the BlackBerry in 1999—for e-mail. Mobile apps are the new webpage: everybody knows they need one to engage customers, but few are really sure what to do with them beyond what customers use their websites for. And while companies are trying to engage customers using social media on mobile, they’re largely not using the communications tools available on smart mobile devices to engage their own employees.

“I think right now, mobile adoption has been greatly overstated in terms of what people say they do with mobile versus mobile’s potential,” said Nichols. “Every CIO out there says, ‘Oh, we have mobile-enabled our workforce using tablets and smartphones.’ They’ve done mobile enablement but not mobile integration. Mobility at this point has not fundamentally changed the way the majority of the workforce works, at least not in the last five to six years.”

Smartphones make very poor PCs. But they have something no desktop PC has—a set of sensors that can provide a constant flow of data about where their user is. There’s visual information pulled in through a camera, motion and acceleration data, and even proximity. When combined with backend analytics, they can create opportunities to change how people work, collaborate, and interact with their environment.

Machine-to-machine (M2M) communications is a big part of that shift, according to Nichols. “Allowing devices with sensors to interact in a meaningful way is the next step,” he said. That step spans from the shop floor to the data center to the boardroom, as the devices we carry track our movements and our activities and interact with the systems around us.

Retailers are beginning to catch on to that, using mobile devices’ sensors to help close sales. “Everybody gets the concept that a mobile app is a necessity for a business-to-consumer retailer,” said Brian Kirschner, the director of Apigee Institute, a research organization created by the application infrastructure vendor Apigee in collaboration with executives of large enterprises and academic researchers. “But they don’t always get the transformative force on business that apps can have. Some can be small. For example, Home Depot has an app to help you search the store you’re in for what you’re looking for. We know that failure to find something in the store is a cause of lost sales and that Web search is useful and signs over aisles are ineffective. So the mobile app has a real impact on sales.”

But if you’ve already got stock information, location data for a customer, and e-commerce capabilities, why stop at making the app useful only during business hours? “If you think of the full potential of a mobile app, why can’t you buy something at the store when it’s closed if you’re near the store?” Kirschner said. “Instead of dropping you to a traditional Web process and offering you free shipping, they could have you pick it up at the store where you are tomorrow.”

That’s a change that’s being forced on many retailers, as noted in an article from the most recent MIT Sloan Management Review by a trio of experts: Erik Brynjolfsson, a professor at MIT’s Sloan School of Management and the director of the MIT Center for Digital Business; Yu Jeffrey Hu of the Georgia Institute of Technology; and Mohammed Rahman of the University of Calgary. If retailers don’t offer a way to meet mobile-equipped customers, they’ll buy it online elsewhere—often while standing in their store. Offering customers a way to extend their experience beyond the store’s walls is the kind of mobile use that’s going to create competitive advantage from information technology. And it’s the sort of competitive advantage that has long been milked out of the old IT model.

Nichols sees the same sort of technology transforming not just relationships with customers but the workplace itself. Say, for example, you’re in New York, and you want to discuss something with two colleagues. You request an appointment using your mobile device, and based on your location data, the location data of your colleagues, and the timing of the meeting, backend systems automatically book you a conference room and set up a video link to a co-worker out of town.

Based on analytics and the title of the meeting, relevant documents are dropped into a collaboration space. Your device records the meeting to an archive and notes who has attended in person. And this conversation is automatically transcribed, tagged, and forwarded to team members for review.
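
A minimal sketch of how the location-driven logistics in that scenario might be wired together, assuming hypothetical inputs (each attendee's current city, as reported by their device). None of this reflects an actual Ernst & Young or vendor system; it only shows the shape of the decision the backend would make.

from collections import Counter

def plan_meeting(attendee_cities):
    # attendee_cities: {name: city reported by that person's device} (hypothetical).
    # Book a room where most attendees already are; everyone else gets a video link.
    cities = Counter(attendee_cities.values())
    host_city, _ = cities.most_common(1)[0]
    remote = [name for name, city in attendee_cities.items() if city != host_city]
    return {"room": "auto-booked conference room in " + host_city,
            "video_link_for": remote}

print(plan_meeting({"you": "New York", "colleague_1": "New York", "colleague_2": "Boston"}))
# -> {'room': 'auto-booked conference room in New York', 'video_link_for': ['colleague_2']}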

“Having location data to reserve conference rooms and calls and having all other logistics be handled in background changes the size of the organization I need to support that,” Nichols said.

The same applies to manufacturing, logistics, and other areas where applications can be tied into sensors and computing power. “If I have a factory where a machine has a belt that needs to be reordered every five years and it auto re-orders and it gets shipped without the need for human interaction, that changes the whole dynamics of how you operate,” Nichols said. “If you can take that and plug it into a proper workflow, you’re going to see an entirely new sort of workforce. That’s not that far away.”

Wearable devices like Google’s Glass will also feed into the new workplace. Wearable tech has been in use in some industries for decades, and in some cases it’s just an evolution from communication systems already used in many retail and manufacturing environments. But the ability to add augmented reality—a data overlay on top of a real world location—and to collect information without reaching for a device will quickly get traction in many enterprises.

Read the entire article here.

Image: Commodore PET (Personal Electronic Transactor) 2001 Series, circa 1977. Courtesy of Wikipedia.

Chameleon Syringes

How does a design aesthetic save lives? It’s simpler than you might think. Take a basic medical syringe, add a twist of color-change technology borrowed from the food industry, and you get a device that could help prevent many of the 1.3 million deaths caused each year by unsafe injections.

From the Guardian:

You might not want to hear this, but there’s a good reason to be scared of needles: the most deadly clinical procedure in the world is a simple injection.

Every year, 1.3 million deaths are caused by unsafe injections, due to the reuse of syringes. The World Health Organisation (WHO) estimates that up to 40% of the 40bn injections administered annually are delivered with syringes that have been reused without sterilisation, causing over 30% of hepatitis B and C cases and 5% of HIV cases – statistics that have put the problem at number five on the WHO priority list.
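
The scale implied by those WHO figures is worth spelling out; taking the upper estimate,

0.40 \times 40 \times 10^{9} \approx 1.6 \times 10^{10} \ \text{unsafe injections per year,}

that is, roughly 16 billion injections a year delivered with unsterilised, reused syringes.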

It is a call to arms that stirred Dr David Swann, reader in design at the University of Huddersfield, into action, to develop what he describes as a “behaviour-changing syringe” that would warn patients when the needle was unsafe – a design that is now in the running for the Index design awards.

“The difficulty for patients is that it is impossible to determine a visual difference between a used syringe that has been washed and a sterile syringe removed from its packaging,” says Swann. “Instigating a colour change would explicitly expose the risk and could indicate prior use without doubt.”

Keen to keep the price down to ensure accessibility, Swann turned to cheap technologies used in the food industry, using inks that react to carbon dioxide and packaging the syringes in nitrogen-filled packets – just the same as a bag of crisps. Once opened and exposed to the air, the syringe has a 60-second treatment window before turning bright red, while a faceted barrel design means that the piston will break if someone tries to replace it. Remarkably, the ABCs (A Behaviour Changing Syringe) cost only 0.16p more than a typical 2.5p disposable syringe.
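
In relative terms the cost premium quoted above is also tiny:

\frac{0.16\,\text{p}}{2.5\,\text{p}} = 0.064 \approx 6.4\%,

so the colour-changing version adds only about six per cent to the price of a standard disposable syringe.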

Swann is trialling the product in India, as the country is the largest consumer of syringes in the world, accounting for 83% of all injections – over 60% of which are deemed unsafe, and 30% of which transmit a disease in some form, according to the WHO.

“There are landfill scavengers searching piles of waste for syringe devices that are then sold on to medical establishments,” says Swann. “We want to break that cycle.” He estimates that after five years, the ABCs will have prevented 700,000 unsafe injections, saved 6.5 million life years and saved $130m in medical costs in India alone.

Colour-changing technology is increasingly finding medical applications, as designers look to transfer innovations in reactive ink towards potentially lifesaving ends. Husband and wife doctor/designer duo Gautam and Kanupriya Goel have developed a form of packaging for medicine that gradually changes its pattern as the product expires.

Read the entire article here.

Image: Red for danger — ABCs syringe. Courtesy of David Swann / Guardian.

Quantum Computation: Spooky Arithmetic

Quantum computation holds the promise of vastly superior performance over traditional digital systems based on bits that are either “on” or “off”. Yet for all the theory, quantum computation remains very much a research enterprise in its infancy. And, because of the peculiarities of the quantum world — think Schrödinger’s cat, both dead and alive — it’s even difficult to measure a quantum computer at work.

From Wired:

In early May, news reports gushed that a quantum computation device had for the first time outperformed classical computers, solving certain problems thousands of times faster. The media coverage sent ripples of excitement through the technology community. A full-on quantum computer, if ever built, would revolutionize large swaths of computer science, running many algorithms dramatically faster, including one that could crack most encryption protocols in use today.

Over the following weeks, however, a vigorous controversy surfaced among quantum computation researchers. Experts argued over whether the device, created by D-Wave Systems, in Burnaby, British Columbia, really offers the claimed speedups, whether it works the way the company thinks it does, and even whether it is really harnessing the counterintuitive weirdness of quantum physics, which governs the world of elementary particles such as electrons and photons.

Most researchers have no access to D-Wave’s proprietary system, so they can’t simply examine its specifications to verify the company’s claims. But even if they could look under its hood, how would they know it’s the real thing?

Verifying the processes of an ordinary computer is easy, in principle: At each step of a computation, you can examine its internal state — some series of 0s and 1s — to make sure it is carrying out the steps it claims.

A quantum computer’s internal state, however, is made of “qubits” — a mixture (or “superposition”) of 0 and 1 at the same time, like Schrödinger’s fabled quantum mechanical cat, which is simultaneously alive and dead. Writing down the internal state of a large quantum computer would require an impossibly large number of parameters. The state of a system containing 1,000 qubits, for example, could need more parameters than the estimated number of particles in the universe.
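
A back-of-the-envelope comparison makes the point. A general state of n qubits requires about 2^n amplitudes, and the commonly cited estimate for the number of particles in the observable universe is around 10^80, so

2^{1000} \approx 1.07 \times 10^{301} \gg 10^{80}.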

And there’s an even more fundamental obstacle: Measuring a quantum system “collapses” it into a single classical state instead of a superposition of many states. (When Schrödinger’s cat is measured, it instantly becomes alive or dead.) Likewise, examining the inner workings of a quantum computer would reveal an ordinary collection of classical bits. A quantum system, said Umesh Vazirani of the University of California, Berkeley, is like a person who has an incredibly rich inner life, but who, if you ask him “What’s up?” will just shrug and say, “Nothing much.”

“How do you ever test a quantum system?” Vazirani asked. “Do you have to take it on faith? At first glance, it seems that the obvious answer is yes.”

It turns out, however, that there is a way to probe the rich inner life of a quantum computer using only classical measurements, if the computer has two separate “entangled” components.

In the April 25 issue of the journal Nature, Vazirani, together with Ben Reichardt of the University of Southern California in Los Angeles and Falk Unger of Knight Capital Group Inc. in Santa Clara, showed how to establish the precise inner state of such a computer using a favorite tactic from TV police shows: Interrogate the two components in separate rooms, so to speak, and check whether their stories are consistent. If the two halves of the computer answer a particular series of questions successfully, the interrogator can not only figure out their internal state and the measurements they are doing, but also issue instructions that will force the two halves to jointly carry out any quantum computation she wishes.

“It’s a huge achievement,” said Stefano Pironio, of the Université Libre de Bruxelles in Belgium.

The finding will not shed light on the D-Wave computer, which is constructed along very different principles, and it may be decades before a computer along the lines of the Nature paper — or indeed any fully quantum computer — can be built. But the result is an important proof of principle, said Thomas Vidick, who recently completed his post-doctoral research at the Massachusetts Institute of Technology. “It’s a big conceptual step.”

In the short term, the new interrogation approach offers a potential security boost to quantum cryptography, which has been marketed commercially for more than a decade. In principle, quantum cryptography offers “unconditional” security, guaranteed by the laws of physics. Actual quantum devices, however, are notoriously hard to control, and over the past decade, quantum cryptographic systems have repeatedly been hacked.

The interrogation technique creates a quantum cryptography protocol that, for the first time, would transmit a secret key while simultaneously proving that the quantum devices are preventing any potential information leak. Some version of this protocol could very well be implemented within the next five to 10 years, predicted Vidick and his former adviser at MIT, the theoretical computer scientist Scott Aaronson.

“It’s a new level of security that solves the shortcomings of traditional quantum cryptography,” Pironio said.

Spooky Action

In 1964, the Irish physicist John Stewart Bell came up with a test to try to establish, once and for all, that the bafflingly counterintuitive principles of quantum physics are truly inherent properties of the universe — that the decades-long effort of Albert Einstein and other physicists to develop a more intuitive physics could never bear fruit.

Einstein was deeply disturbed by the randomness at the core of quantum physics — God “is not playing at dice,” he famously wrote to the physicist Max Born in 1926.

In 1935, Einstein, together with his colleagues Boris Podolsky and Nathan Rosen, described a strange consequence of this randomness, now called the EPR paradox (short for Einstein, Podolsky, Rosen). According to the laws of quantum physics, it is possible for two particles to interact briefly in such a way that their states become “entangled” as “EPR pairs.” Even if the particles then travel many light years away from each other, one particle somehow instantly seems to “know” the outcome of a measurement on the other particle: When asked the same question, it will give the same answer, even though quantum physics says that the first particle chose its answer randomly. Since the theory of special relativity forbids information from traveling faster than the speed of light, how does the second particle know the answer?

To Einstein, these “spooky actions at a distance” implied that quantum physics was an incomplete theory. “Quantum mechanics is certainly imposing,” he wrote to Born. “But an inner voice tells me that it is not yet the real thing.”

Over the remaining decades of his life, Einstein searched for a way that the two particles could use classical physics to come up with their answers — hidden variables that could explain the behavior of the particles without a need for randomness or spooky actions.

But in 1964, Bell realized that the EPR paradox could be used to devise an experiment that determines whether quantum physics or a local hidden-variables theory correctly explains the real world. Adapted five years later into a format called the CHSH game (after the researchers John Clauser, Michael Horne, Abner Shimony and Richard Holt), the test asks a system to prove its quantum nature by performing a feat that is impossible using only classical physics.

The CHSH game is a coordination game, in which two collaborating players — Bonnie and Clyde, say — are questioned in separate interrogation rooms. Their joint goal is to give either identical answers or different answers, depending on what questions the “detective” asks them. Neither player knows what question the detective is asking the other player.

If Bonnie and Clyde can use only classical physics, then no matter how many “hidden variables” they share, it turns out that the best they can do is decide on a story before they get separated and then stick to it, no matter what the detective asks them, a strategy that will win the game 75 percent of the time. But if Bonnie and Clyde share an EPR pair of entangled particles — picked up in a bank heist, perhaps — then they can exploit the spooky action at a distance to better coordinate their answers and win the game about 85.4 percent of the time.
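
The two figures quoted there come straight from the CHSH analysis: the best classical (hidden-variable) strategy wins with probability 3/4, while sharing an entangled pair lifts the optimal winning probability to

\cos^2\!\left(\frac{\pi}{8}\right) = \frac{2+\sqrt{2}}{4} \approx 0.854,

that is, about 85.4 percent.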

Bell’s test gave experimentalists a specific way to distinguish between quantum physics and any hidden-variables theory. Over the decades that followed, physicists, most notably Alain Aspect, currently at the École Polytechnique in Palaiseau, France, carried out this test repeatedly, in increasingly controlled settings. Almost every time, the outcome has been consistent with the predictions of quantum physics, not with hidden variables.

Aspect’s work “painted hidden variables into a corner,” Aaronson said. The experiments had a huge role, he said, in convincing people that the counterintuitive weirdness of quantum physics is here to stay.

If Einstein had known about the Bell test, Vazirani said, “he wouldn’t have wasted 30 years of his life looking for an alternative to quantum mechanics.” He simply would have convinced someone to do the experiment.

Read the whole article here.

Ethical Meat and Idiotic Media

Lab-grown meat is now possible, but it is not yet available on an industrial scale to satisfy the human desire for burgers, steak and ribs. While this does represent a breakthrough, it’s likely to be a while before the last cow or chicken or pig is slaughtered. Of course, the mainstream media picked up this important event and immediately labeled it with captivating headlines featuring the word “frankenburger”. Perhaps a well-intentioned lab will someday grow an intelligent form of media organization.

From the New York Times (dot earth):

I first explored livestock-free approaches to keeping meat on menus in 2008 in a piece titled “Can People Have Meat and a Planet, Too?”

It’s been increasingly clear since then that there are both environmental and — obviously — ethical advantages to using technology to sustain omnivory on a crowding planet. This presumes humans will not all soon shift to a purely vegetarian lifestyle, even though there are signs of what you might call “peak meat” (consumption, that is) in prosperous societies (Mark Bittman wrote a nice piece on this). Given dietary trends as various cultures rise out of poverty, I would say it’s a safe bet meat will remain a favored food for decades to come.

Now non-farmed meat is back in the headlines, with a patty of in-vitro beef – widely dubbed a “frankenburger” — fried and served in London earlier today.

The beef was grown in a lab by a pioneer in this arena — Mark Post of Maastricht University in the Netherlands. My colleague Henry Fountain has reported the details in a fascinating news article. Here’s an excerpt followed by my thoughts on next steps in what I see as an important area of research and development:

According to the three people who ate it, the burger was dry and a bit lacking in flavor. One taster, Josh Schonwald, a Chicago-based author of a book on the future of food, said “the bite feels like a conventional hamburger” but that the meat tasted “like an animal-protein cake.”

But taste and texture were largely beside the point: The event, arranged by a public relations firm and broadcast live on the Web, was meant to make a case that so-called in-vitro, or cultured, meat deserves additional financing and research…..

Dr. Post, one of a handful of scientists working in the field, said there was still much research to be done and that it would probably take 10 years or more before cultured meat was commercially viable. Reducing costs is one major issue — he estimated that if production could be scaled up, cultured beef made as this one burger was made would cost more than $30 a pound.

The two-year project to make the one burger, plus extra tissue for testing, cost $325,000. On Monday it was revealed that Sergey Brin, one of the founders of Google, paid for the project. Dr. Post said Mr. Brin got involved because “he basically shares the same concerns about the sustainability of meat production and animal welfare.”

The enormous potential environmental benefits of shifting meat production, where feasible, from farms to factories were estimated in “Environmental Impacts of Cultured Meat Production,” a 2011 study in Environmental Science and Technology.

Read the entire article here.

Image: Professor Mark Post holds the world’s first lab-grown hamburger. Courtesy of Reuters/David Parry / The Atlantic.

En Vie: Bio-Fabrication Expo

En Vie, French for “alive,” is an exposition like no other. It’s a fantastical place defined through a rich collaboration of material scientists, biologists, architects, designers and engineers. The premise of En Vie is quite elegant — put these disparate minds together and ask them to imagine what the future will look like. And it is quite a magical world: a world where biological fabrication replaces traditional mechanical and chemical fabrication, where shoes grow from plants, furniture grows from fungi and bees construct vases. The En Vie exhibit is open at the Space Foundation EDF in Paris until September 1.

From ars technica:

The natural world has, over millions of years, evolved countless ways to ensure its survival. The industrial revolution, in contrast, has given us just a couple hundred years to play catch-up using technology. And while we’ve been busily degrading the Earth since that revolution, nature continues to outdo us in the engineering of materials that are stronger, tougher, and multipurpose.

Take steel for example. According to the World Steel Association, for every ton produced, 1.8 tons of carbon dioxide is emitted into the atmosphere. In total in 2010, the iron and steel industries, combined, were responsible for 6.7 percent of total global CO2 emissions. Then there’s the humble spider, which produces silk that is—weight for weight—stronger than steel. Webs spun by Darwin’s bark spider in Madagascar, meanwhile, are 10 times tougher than steel and more durable than Kevlar, the synthetic fiber used in bulletproof vests. Material scientists savvy to this have ensured biomimicry is now high on the agenda at research institutions, and an exhibit currently on at the Space Foundation EDF in Paris is doing its best to popularize the notion that we should not just be salvaging the natural world but also learning from it.

En Vie (Alive), curated by Reader and Deputy Director of the Textile Futures Research Center at Central Saint Martins College Carole Collet, is an exposition for what happens when material scientists, architects, biologists, and engineers come together with designers to ask what the future will look like. According to them, it will be a world where plants grow our products, biological fabrication replaces traditional manufacturing, and genetically reprogrammed bacteria build new materials, energy, or even medicine.

It’s a fantastical place where plants are magnetic, a vase is built by 60,000 bees, furniture is made from fungi, and shoes from cellulose. You can print algae onto rice paper, then eat it, or encourage gourds to grow in the shape of plastic components found in things like torches or radios (you’ll have to wait a few months for the finished product, though). These are not fanciful designs but real products, grown or fashioned with nature’s direct help.

In other parts of the exhibit, biology is the inspiration and shows what might be. Eskin, for instance, provides visitors with a simulation of how a building’s exterior could mimic and learn from the human body in keeping it warm and cool.

Alive shows that, speculative or otherwise, design has a real role to play in bringing different research fields together, which will be essential if there’s any hope of propelling the field into mass commercialization.

“More than any other point in history, advances in science and engineering are making it feasible to mimic natural processes in the laboratory, which makes it a very exciting time,” Craig Vierra, Professor and Assistant Chair, Biological Sciences at University of the Pacific, tells Wired.co.uk. In his California lab, Vierra has for the past few years been growing spider silk proteins from bacteria in order to engineer fibers that are close, if not quite ready, to give steel a run for its money. The technique involves purifying the spider silk proteins away from the bacteria proteins before concentrating these using a freeze-dryer in order to render them into powder form. A solvent is then added, and the material is spun into fiber using wet spinning techniques and stretched to three times its original length.

“Although the mechanical properties of the synthetic spider fibers haven’t quite reached those of natural fibers, research scientists are rapidly approaching this level of performance. Our laboratory has been working on improving the composition of the spinning dope and spinning parameters of the fibers to enhance their performance.”

Vierra is a firm believer that nature will save us.

“Mother Nature has provided us with some of the most outstanding biomaterials that can be used for a plethora of applications in the textile industry. In addition to these, modern technological advances will also allow us to create new biocomposite materials that rely on the fundamentals of natural processes, elevating the numbers and types of materials that are available. But, more importantly, we can generate eco-friendly materials.

“As the population size increases, the availability of natural resources will become more scarce and limiting for humans. It will force society to develop new methods and strategies to produce larger quantities of materials at a faster pace to meet the demands of the world. We simply must find more cost-efficient methods to manufacture materials that are non-toxic for the environment. Many of the materials being synthesized today are very dangerous after they degrade and enter the environment, which is severely impacting the wildlife and disrupting the ecology of the animals on the planet.”

According to Vierra, the fact that funding in the field has become extremely competitive over the past ten years is proof of the quality of research today. “The majority of scientists are expected to justify how their research has a direct, immediate tie to applications in society in order to receive funding.”

We really have no alternative but to continue down this route, he argues. Without advances in material science, we will continue to produce “inferior materials” and damage the environment. “Ultimately, this will affect the way humans live and operate in society.”

We’re agreed that the field is a vital and rapidly growing one. But what value, if any, can a design-led project bring to the table, aside from highlighting the related issues? Vierra has assessed a handful of the incredible designs on display at Alive for us to see which he thinks could become a future biomanufacturing reality.

Read the entire article here.

Image: Radiant Soil, En Vie Exposition. Courtesy of Philip Beesley, En Vie / Wired.

Big Data and Your Career

If you’re a professional or like networking, but shun Facebook, then chances are good that you hang out on LinkedIn. And as you do, the company is trawling through your personal data and that of hundreds of millions of other members to turn human resources and career planning into a science — all with the help of big data.

From the Washington Post:

Every second, more than two people join LinkedIn’s network of 238 million members.
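
That growth rate compounds quickly. Even at exactly two sign-ups per second, a back-of-the-envelope calculation gives

2 \times 86{,}400 \times 365 \approx 63 \ \text{million new members per year.}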

They are head hunters in search of talent. They are the talent in search of a job. And sometimes, the career site for the professional class is just a hangout for the well-connected worker.

LinkedIn, using complex, carefully concocted algorithms, analyzes their profiles and site behavior to steer them to opportunity. And corporations parse that data to set business strategy. As the network grows moment by moment, LinkedIn’s rich trove of information also grows more detailed and more comprehensive.

It’s big data meeting human resources. And that data, core to LinkedIn’s potential, could catapult the company beyond building careers and into the realms of education, urban development and economic policy.

Chief executive Jeff Weiner put it this way in a recent blog post: “Our ultimate dream is to develop the world’s first economic graph,” a sort of digital map of skills, workers and jobs across the global economy.

Ambitions, in other words, that are a far cry from the industry’s early stabs at modernizing the old-fashioned jobs board (think Monster.com and CareerBuilder).

So far, LinkedIn’s data-driven strategy appears to be working: It turned its highest-ever profit in the second quarter, $364 million, and its stock price has grown sixfold since its 2011 initial public offering. Because its workforce has doubled in a year, it’s fast outgrowing its Mountain View headquarters, just down the street from Google. In 2014, it’ll move into Yahoo’s neighborhood with a new campus in Sunnyvale.

The company makes money three ways: members who pay for premium access; ad sales; and its gold mine, a suite of products created by its talent solutions division and sold to corporate clients, which accounted for $205 million in revenue last quarter.

When LinkedIn staffers talk about their network and products, they often refer to an “ecosystem.” It’s an apt metaphor, because the value of their offerings would seem to rely heavily on equilibrium.

LinkedIn’s usefulness to recruiters is deeply contingent on the quality and depth of its membership base. And its usefulness to members depends on the quality of their experience on the site. LinkedIn’s success, then, depends largely on its ability to do more than just amass new members. The company must get its users to maintain comprehensive, up-to-date profiles, and it must give them a reason to visit the site frequently.

To engage members, the company has deployed new strategies on all fronts: a redesigned site; stuff to read from the likes of Bill Gates, Jack Welch and Richard Branson; new mobile applications; status updates; targeted aggregated news stories and more.

By throwing more and more at users, of course, LinkedIn risks undermining the very thing that’s made it the go-to site for recruiters: a mass of high-quality candidates, sorted and evaluated and offered up.

“I think there’s a chance of people getting tired of it and checking out of it,” said Chris Collins, director of Cornell University’s Center for Advanced Human Resource Studies.

Read the entire article here.

Image courtesy of Telegraph / LinkedIn.

Digital Romance is Alive (and Texting)

The last fifty years have seen a tremendous shift in our personal communications. We have moved from voice conversations via rotary phones molded in Bakelite to anytime, anywhere texting via smartphones and public-private multimedia exposés held via social media. During all of this upheaval the process of romance may have changed too, but it remains alive and well, albeit in rather different form.

From Technology Review:

Boy meets girl; they grow up and fall in love. But technology interferes and threatens to destroy their blissful coupledom. The destructive potential of communication technologies is at the heart of Stephanie Jones’s self-published romance novel Dreams and Misunderstandings. Two childhood sweethearts, Rick and Jessie, use text messages, phone calls, and e-mail to manage the distance between them as Jessie attends college on the East Coast of the United States and Rick moves between Great Britain and the American West. Shortly before a summer reunion, their technological ties fail when Jessie is hospitalized after a traumatic attack. During her recovery, she loses access to her mobile phone, computer, and e-mail account. As a result, the lovers do not reunite and spend years apart, both thinking they have been deserted.

Jones blames digital innovations for the misunderstandings that prevent Rick and Jessie’s reunion. It’s no surprise this theme runs through a romance novel: it reflects a wider cultural fear that these technologies impede rather than strengthen human connection. One of the Internet’s earliest boosters, MIT professor Sherry Turkle, makes similar claims in her most recent book, Alone Together: Why We Expect More of Technology and Less from Each Other. She argues that despite their potential, communication technologies are threatening human relationships, especially intimate ones, because they offer “substitutes for connecting with each other face-to-face.”

If the technology is not fraying or undermining existing relationships, stories abound of how it is creating false or destructive ones among young people who send each other sexually explicit cell-phone photos or “catfish,” luring the credulous into online relationships with fabricated personalities. In her recent book about hookup culture, The End of Sex, Donna Freitas indicts mobile technologies for the ease with which they allow the hookup to happen.

It is true that communication technologies have been reshaping love, romance, and sex throughout the 2000s. The Internet, sociologists Michael Rosenfeld and Reuben Thomas have found, is now the third most common way to find a partner, after meeting through friends or in bars, restaurants, and other public places. Twenty-two percent of heterosexual couples now meet online. In many ways, the Internet has replaced families, churches, schools, neighborhoods, civic groups, and workplaces as a venue for finding romance. It has become especially important for those who have a “thin market” of potential romantic partners—middle-aged straight people, gays and lesbians of all ages, the elderly, and the geographically isolated. But even for those who are not isolated from current or potential partners, cell phones, social-network sites, and similar forms of communication now often play a central role in the formation, maintenance, and dissolution of intimate relationships.

While these developments are significant, fears about what they mean do not accurately reflect the complexity of how the technology is really used. This is not surprising: concerns about technology as a threat to the social order, particularly in matters of sexuality and intimacy, go back much further than Internet dating and cell phones. From the boxcar (critics worried that it could transport those of loose moral character from town to town) to the automobile (which gave young people a private space for sexual activity) to reproductive technologies like in vitro fertilization, technological innovations that affect intimate life have always prompted angst. Often, these fears have resulted in what sociologists call a “moral panic”—an episode of exaggerated public anxiety over a perceived threat to social order.

Moral panic is an appropriate description for the fears expressed by Jones, Turkle, and Freitas about the role of technology in romantic relationships. Rather than driving people apart, technology-mediated communication is likely to have a “hyperpersonal effect,” communications professor Joseph Walther has found. That is, it allows people to be more intimate with one another—sometimes more intimate than would be sustainable face to face. “John,” a college freshman in Chicago whom I interviewed for research that I published in a 2009 book, Hanging Out, Messing Around and Geeking Out: Kids Living and Learning with New Media, highlights this paradox. He asks, “What happens after you’ve had a great online flirtatious chat … and then the conversation sucks in person?”

In the initial getting-to-know-you phase of a relationship, the asynchronous nature of written communication—texts, e-mails, and messages or comments on dating or social-network sites, as opposed to phone calls or video chatting—allows people to interact more continuously and to save face in potentially vulnerable situations. As people flirt and get to know each other this way, they can plan, edit, and reflect upon flirtatious messages before sending them. As John says of this type of communication, “I can think about things more. You can deliberate and answer however you want.”

As couples move into committed relationships, they use these communication technologies to maintain a digital togetherness regardless of their physical distance. With technologies like mobile phones and social-network sites, couples need never be truly apart. Often, this strengthens intimate relationships: in a study on couples’ use of technology in romantic relationships, Borae Jin and Jorge Peña found that couples who are in greater cell-phone contact exhibit less uncertainty about their relationships and higher levels of commitment. This type of communication becomes a form of “relationship work” in which couples trade digital objects of affection such as text messages or comments on online photos. As “Champ,” a 19-year-old in New York, told one of my collaborators on Hanging Out, Messing Around and Geeking Out about his relationship with his girlfriend, “You send a little text message—‘Oh I’m thinking of you,’ or something like that—while she’s working … Three times out of the day, you probably send little comments.”

To be sure, some of today’s fears are based on the perfectly accurate observation that communication technologies don’t always lend themselves to constructive relationship work. The public nature of Facebook posts, for example, appears to promote jealousy and decrease intimacy. When the anthropologist Ilana Gershon interviewed college students about their romantic lives, several told her that Facebook threatens their relationships. As one of her interviewees, “Cole,” said: “There is so much drama. It’s adding another stress.”

Read the entire article here.

Image courtesy of Google search.

Read Something Longer Than 140 Characters

Unplugging from the conveniences and obsessions of our age can be difficult, but not impossible. For those of you who have a demanding boss or needful relationships, or who lack the will to do away with email, texts, tweets, voicemail, posts, SMS, likes and status messages, there may still be (some) hope without having to go completely cold turkey.

While we would recommend you retreat to a quiet cabin by a still pond in the dark woods, the tips below may help you unwind if you’re frazzled but shun the idea of a remote hideaway. And while you’re at it, why not immerse yourself in a copy of Walden?

From the Wall Street Journal:

You may never have read “Walden,” but you’re probably familiar with the premise: a guy with an ax builds a cabin in the woods and lives there for two years to tune out the inessential and discover himself. When Henry David Thoreau began his grand experiment, in 1845, he was about to turn 28—the age of a typical Instagram user today. Thoreau lived with his parents right before his move. During his sojourn, he returned home to do laundry.

Thoreau’s circumstances, in other words, weren’t so different from those of today’s 20-somethings—which is why seeking tech advice from a 19th-century transcendentalist isn’t as far-fetched as it may sound. “We do not ride on the railroad; it rides upon us,” he wrote in “Walden.” That statement still rings true for those of us who have lived with the latest high-tech wonders long enough to realize how much concentration they end up zapping. “We do not use the Facebook; it uses us,” we might say.

But even the average social-media curmudgeon’s views on gadgetry aren’t as extreme as those of Thoreau. Whereas he saw inventions “as improved means to an unimproved end,” most of us genuinely love our iPhones, Instagram feeds and on-demand video. We just don’t want them to take over our lives, lest we forget the joy of reading without the tempting interruption of email notifications, or the pleasure of watching just one good episode of a television show per sitting.

Thankfully, we don’t have to go off the grid to achieve more balance. We can arrive at a saner modern existence simply by tweaking a few settings on our gadgets and the services we rely on. Why renounce civilization when technology makes it so easy to duck out for short stretches?

Inspired by the writings of Thoreau, we looked for simple tools—the equivalent of Thoreau’s knife, ax, spade and wheelbarrow—to create the modern-day equivalent of a secluded cabin in the woods. Don’t worry: There’s still Wi-Fi.

1. Manage your Facebook ‘Friendships’

As your Facebook connections grow to include all 437 of the people you sort of knew in high school, it’s easy to get to the point where the site’s News Feed becomes a hub of oversharing—much of it accidental. (Your co-worker probably had no idea the site would post his results of the “Which Glee Character Are You?” quiz.) Adjusting a few settings will bring your feed back to a more Thoreauvian state.

Facebook tries to figure out which posts will be most interesting to you, but nothing beats getting in there yourself and decluttering by hand. The process is like playing Whac-A-Mole, with your hammer aimed at the irrelevant posts that pop up in your News Feed.

Start by removing serial offenders: On the website, hover your cursor over the person’s name as it appears above a post, hit the “Friends” button that pops up and then uncheck “Show in News Feed” to block future posts. If that feels too drastic, click “Acquaintances” from the pop-up screen instead. This relegates the person to a special “friends list” whose updates will appear lower in the News Feed. (Fear not, the person won’t be notified about either of the above demotions.)

You can go a step further and scale back the types of updates you receive from those you’ve added to Acquaintances (as well as any other friends lists you create). Hover your cursor over the News Feed’s “Friends” heading then click “More” and select the list name. Then click the “Manage Lists” button and, finally, “Choose Update Types.”

Unless you’re in the middle of a fierce match of Bejeweled Blitz, you can safely deselect “Games” and most likely “Music and Videos,” too. Go out on a limb and untick “Comments and Likes” to put the kibosh on musings and shout-outs about other people’s posts. You’ll probably want to leave the mysteriously named “Other Activity” checked, though; while it includes some yawn-inducing updates, the category also encompasses announcements of major life events, like engagements and births.

3. Read Something Longer Than 140 Characters

Computers, smartphones and tablets are perfect for skimming TMZ, but for hunkering down with the sort of thoughtful text Thoreau would endorse, a dedicated ereader is the tech equivalent of a wood-paneled reading room. Although there are fancier models out there, the classic Kindle and Kindle Paperwhite are still tough to beat. Because their screens aren’t backlit, they don’t cause eye strain the way a tablet or color ereader can. While Amazon sells discounted models that display advertisements (each costs $20 less), don’t fall for the trap: The ads undermine the tranquility of the device. (If you already own an ad-supported Kindle, remove the ads for $20 using the settings page.) Also be sure to install the Send to Kindle plug-in for the Chrome and Firefox Web browsers. It lets you beam long articles that you stumble upon online to the device, magically stripping away banner ads and other Web detritus in the process.

Read the entire article here.

Image: Henry David Thoreau, 1856. Courtesy of Wikipedia.

Listening versus Snooping

Many of your mobile devices already know where you are and what you’re doing. Increasingly, the devices you use will record your every step and every word (and those of any callers), and even know your mood and health status. Analysts and eavesdroppers at the U.S. National Security Agency (NSA) must be licking their collective lips.

From Technology Review:

The Moto X, the new smartphone from Google’s Motorola Mobility, might be remembered best someday for helping to usher in the era of ubiquitous listening.

Unlike earlier phones, the Moto X includes two low-power chips whose only function is to process data from a microphone and other sensors—without tapping the main processor and draining the battery. This is a big endorsement of the idea that phones could serve you better if they did more to figure out what is going on (see “Motorola Reveals First Google-Era Phone”). For instance, you might say “OK Google Now” to activate Google’s intelligent assistant software, rather than having to first tap the screen or press buttons to get an audio-processing function up and running.

This brings us closer to having phones that continually monitor their auditory environment to detect the phone owner’s voice, discern what room or other setting the phone is in, or pick up other clues from background noise. Such capacities make it possible for software to detect your moods, know when you are talking and not to disturb you, and perhaps someday keep a running record of everything you hear.
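To make "continually monitoring the auditory environment" concrete, here is a deliberately crude sketch, nothing like the dedicated low-power chips in the Moto X: it scans successive audio frames for ones energetic enough to contain speech, the first stage any always-listening system needs before handing audio to a recognizer. The threshold and the synthetic microphone buffer are invented for the example.

# Toy illustration of "always listening": flag frames loud enough to hold speech,
# which a real system would then pass on to a hotword detector or recognizer.
import numpy as np

FRAME = 1024          # samples per frame
THRESHOLD = 0.02      # RMS level above which we assume someone is speaking

def speech_frames(samples: np.ndarray) -> list[int]:
    """Return indices of frames whose RMS energy exceeds the threshold."""
    hits = []
    for i in range(0, len(samples) - FRAME, FRAME):
        frame = samples[i:i + FRAME]
        if np.sqrt(np.mean(frame ** 2)) > THRESHOLD:
            hits.append(i // FRAME)
    return hits

# Synthetic stand-in for a microphone buffer: silence with a short "utterance".
rng = np.random.default_rng(0)
audio = rng.normal(0, 0.002, 48_000)
audio[16_000:24_000] += 0.1 * np.sin(2 * np.pi * 220 * np.arange(8_000) / 16_000)
print(speech_frames(audio))   # frames covering the synthetic utterance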

“Devices of the future will be increasingly aware of the user’s current context, goals, and needs, will become proactive—taking initiative to present relevant information,” says Pattie Maes, a professor at MIT’s Media Lab. “Their use will become more integrated in our daily behaviors, becoming almost an extension of ourselves. The Moto X is definitely a step in that direction.”

Even before the Moto X, there were apps, such as the Shazam music-identification service, that could continually listen for a signal. When users enable a new feature called “auto-tagging” on a recent update to Shazam’s iPad app, Shazam listens to everything in the background, all the time. It’s seeking matches for songs and TV content that the company has stored on its servers, so you can go back and find information about something that you might have heard a few minutes ago. But the key change is that Shazam can now listen all the time, not just when you tap a button to ask it to identify something. The update is planned for other platforms, too.
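The matching rests on audio fingerprinting: reduce a stretch of audio to a compact signature, then look that signature up on a server. The sketch below is a drastic simplification of the peak-based approach, not Shazam's actual algorithm, but it shows why two noisy recordings of the same sound can still match.

# Heavily simplified fingerprinting: the dominant frequency bin in each short
# window becomes part of a signature that can be matched against a store.
import numpy as np

def fingerprint(samples: np.ndarray, window: int = 2048) -> tuple[int, ...]:
    peaks = []
    for i in range(0, len(samples) - window, window):
        spectrum = np.abs(np.fft.rfft(samples[i:i + window] * np.hanning(window)))
        peaks.append(int(np.argmax(spectrum)))   # dominant frequency bin
    return tuple(peaks)

# Two "recordings" of the same 440 Hz tone yield matching signatures even with
# different background noise: the essence of match-by-fingerprint.
rng = np.random.default_rng(1)
t = np.arange(44_100) / 44_100
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + rng.normal(0, 0.05, clean.size)
print(fingerprint(clean) == fingerprint(noisy))   # True (in this toy example)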

But other potential uses abound. Tanzeem Choudhury, a researcher at Cornell University, has demonstrated software that can detect whether you are talking faster than normal, or other changes in pitch or frequency that suggest stress. The StressSense app she is developing aims to do things like pinpoint the sources of your stress—is it the 9:30 a.m. meeting, or a call from Uncle Hank?
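Pitch tracking of this kind can be prototyped in a few lines. The following is illustrative only, not the StressSense implementation: it estimates the fundamental frequency of a voiced frame by autocorrelation and flags readings well above an assumed per-speaker baseline. The sample rate, search range and 20% margin are all invented for the example.

# Illustrative pitch-based "stress" check: estimate pitch by autocorrelation and
# flag a reading well above the speaker's usual baseline.
import numpy as np

RATE = 16_000

def estimate_pitch(frame: np.ndarray) -> float:
    """Very rough fundamental-frequency estimate via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = RATE // 400, RATE // 60          # search lags covering 60-400 Hz
    lag = lo + int(np.argmax(corr[lo:hi]))
    return RATE / lag

def stressed(frame: np.ndarray, baseline_hz: float, margin: float = 1.2) -> bool:
    return estimate_pitch(frame) > baseline_hz * margin

t = np.arange(2048) / RATE
calm_voice = np.sin(2 * np.pi * 120 * t)       # ~120 Hz, near baseline
tense_voice = np.sin(2 * np.pi * 180 * t)      # noticeably higher pitch
print(stressed(calm_voice, 120), stressed(tense_voice, 120))   # False True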

Similarly, audio analysis could allow the phone to understand where it is—and make fewer mistakes, says Vlad Sejnoha, the chief technology officer of Nuance Communications, which develops voice-recognition technologies. “I’m sure you’ve been in a situation where someone has a smartphone in their pocket and suddenly a little voice emerges from the pocket, asking how they can be helped,” he says. That’s caused when an assistance app like Apple’s Siri is accidentally triggered. If the phone’s always-on ears could accurately detect the muffled acoustical properties of a pocket or purse, it could eliminate this false start and stop phones from accidentally dialing numbers as well. “That’s a work in progress,” Sejnoha says. “And while it’s amusing, I think the general principle is serious: these devices have to try to understand the users’ world as much as possible.”

A phone might use ambient noise levels to decide how loud a ringtone should be: louder if you are out on the street, quiet if inside, says Chris Schmandt, director of the speech and mobility group at MIT’s Media Lab. Taking that concept a step further, a phone could detect an ambient conversation and recognize that one of the speakers was its owner. Then it might mute a potentially disruptive ringtone unless the call was from an important person, such as a spouse, Schmandt added.
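Schmandt's ringtone example boils down to a simple decision rule. A toy version, with entirely invented thresholds and a hypothetical VIP list, might look like this:

# Toy context-aware ringtone: louder in noise, silent mid-conversation unless
# the caller is on a short VIP list. All thresholds are invented for illustration.
VIP_CALLERS = {"spouse"}   # hypothetical allow-list

def ring_volume(ambient_db: float, owner_talking: bool, caller: str) -> int:
    """Return a ringer volume from 0 (silent) to 10."""
    if owner_talking and caller not in VIP_CALLERS:
        return 0                               # don't interrupt the conversation
    if ambient_db > 70:                        # noisy street
        return 10
    if ambient_db > 50:                        # office hum
        return 6
    return 3                                   # quiet room

print(ring_volume(75, owner_talking=False, caller="unknown"))   # 10
print(ring_volume(45, owner_talking=True, caller="unknown"))    # 0
print(ring_volume(45, owner_talking=True, caller="spouse"))     # 3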

Read the entire article here.