Category Archives: Technica

Your Current Dystopian Nightmare: In Just One Click

Amazon was supposed to give you back precious time by making shopping and spending painlessly simple. Apps on your smartphone were supposed to do the same for all manner of re-tooled on-demand services. What wonderful time-saving inventions! So, now you can live in the moment and make use of all this extra free time. It’s your time now. You’ve won it back and no one can take it away.

And, what do you spend this newly earned free time doing? Well, you sit at home in your isolated cocoon, you shop for more things online, you download some more great apps that promise to bring even greater convenience, you interact less with real humans, and, best of all, you spend more time working. Welcome to your new dystopian nightmare, and it’s happening right now. Click.

From Medium:

Angel the concierge stands behind a lobby desk at a luxe apartment building in downtown San Francisco, and describes the residents of this imperial, 37-story tower. “Ubers, Squares, a few Twitters,” she says. “A lot of work-from-homers.”

And by late afternoon on a Tuesday, they’re striding into the lobby at a just-get-me-home-goddammit clip, some with laptop bags slung over their shoulders, others carrying swank leather satchels. At the same time a second, temporary population streams into the building: the app-based meal delivery people hoisting thermal carrier bags and sacks. Green means Sprig. A huge M means Munchery. Down in the basement, Amazon Prime delivery people check in packages with the porter. The Instacart groceries are plunked straight into a walk-in fridge.

This is a familiar scene. Five months ago I moved into a spartan apartment a few blocks away, where dozens of startups and thousands of tech workers live. Outside my building there’s always a phalanx of befuddled delivery guys who seem relieved when you walk out, so they can get in. Inside, the place is stuffed with the goodies they bring: Amazon Prime boxes sitting outside doors, evidence of the tangible, quotidian needs that are being serviced by the web. The humans who live there, though, I mostly never see. And even when I do, there seems to be a tacit agreement among residents to not talk to one another. I floated a few “hi’s” in the elevator when I first moved in, but in return I got the monosyllabic, no-eye-contact mumble. It was clear: Lady, this is not that kind of building.

Back in the elevator in the 37-story tower, the messengers do talk, one tells me. They end up asking each other which apps they work for: Postmates. Seamless. EAT24. GrubHub. Safeway.com. A woman hauling two Whole Foods sacks reads the concierge an apartment number off her smartphone, along with the resident’s directions: “Please deliver to my door.”

“They have a nice kitchen up there,” Angel says. The apartments rent for as much as $5,000 a month for a one-bedroom. “But so much, so much food comes in. Between 4 and 8 o’clock, they’re on fire.”

I start to walk toward home. En route, I pass an EAT24 ad on a bus stop shelter, and a little further down the street, a Dungeons & Dragons–type dude opens the locked lobby door of yet another glass-box residential building for a Sprig deliveryman:

“You’re…”

“Jonathan?”

“Sweet,” Dungeons & Dragons says, grabbing the bag of food. The door clanks behind him.

And that’s when I realized: the on-demand world isn’t about sharing at all. It’s about being served. This is an economy of shut-ins.

In 1998, Carnegie Mellon researchers warned that the internet could make us into hermits. They released a study monitoring the social behavior of 169 people making their first forays online. The web-surfers started talking less with family and friends, and grew more isolated and depressed. “We were surprised to find that what is a social technology has such anti-social consequences,” said one of the researchers at the time. “And these are the same people who, when asked, describe the Internet as a positive thing.”

We’re now deep into the bombastic buildout of the on-demand economy — with investment in the apps, platforms and services surging exponentially. Right now Americans buy nearly eight percent of all their retail goods online, though that seems a wild underestimate in the most congested, wired, time-strapped urban centers.

Many services promote themselves as life-expanding — there to free up your time so you can spend it connecting with the people you care about, not standing at the post office with strangers. Rinse’s ad shows a couple chilling at a park, their laundry being washed by someone, somewhere beyond the picture’s frame. But plenty of the delivery companies are brutally honest that, actually, they never want you to leave home at all.

GrubHub’s advertising banks on us secretly never wanting to talk to a human again: “Everything great about eating, combined with everything great about not talking to people.” DoorDash, another food delivery service, goes for the all-caps, batshit extreme:

“NEVER LEAVE HOME AGAIN.”

Katherine van Ekert isn’t a shut-in, exactly, but there are only two things she ever has to run errands for any more: trash bags and saline solution. For those, she must leave her San Francisco apartment and walk two blocks to the drug store, “so woe is my life,” she tells me. (She realizes her dry humor about #firstworldproblems may not translate, and clarifies later: “Honestly, this is all tongue in cheek. We’re not spoiled brats.”) Everything else is done by app. Her husband’s office contracts with Washio. Groceries come from Instacart. “I live on Amazon,” she says, buying everything from curry leaves to a jogging suit for her dog, complete with hoodie.

She’s so partial to these services, in fact, that she’s running one of her own: A veterinarian by trade, she’s a co-founder of VetPronto, which sends an on-call vet to your house. It’s one of a half-dozen on-demand services in the current batch at Y Combinator, the startup factory, including a marijuana delivery app called Meadow (“You laugh, but they’re going to be rich,” she says). She took a look at her current clients — they skew late 20s to late 30s, and work in high-paying jobs: “The kinds of people who use a lot of on demand services and hang out on Yelp a lot.”

Basically, people a lot like herself. That’s the common wisdom: the apps are created by the urban young for the needs of urban young. The potential of delivery with a swipe of the finger is exciting for van Ekert, who grew up without such services in Sydney and recently arrived in wired San Francisco. “I’m just milking this city for all it’s worth,” she says. “I was talking to my father on Skype the other day. He asked, ‘Don’t you miss a casual stroll to the shop?’ Everything we do now is time-limited, and you do everything with intention. There’s not time to stroll anywhere.”

Suddenly, for people like van Ekert, the end of chores is here. After hours, you’re free from dirty laundry and dishes. (TaskRabbit’s ad rolls by me on a bus: “Buy yourself time — literally.”)

So here’s the big question. What does she, or you, or any of us do with all this time we’re buying? Binge on Netflix shows? Go for a run? Van Ekert’s answer: “It’s more to dedicate more time to working.”

Read the entire story here.

Circadian Misalignment and Your Smartphone


You take your portable electronics everywhere, all the time. You watch TV with or on your smartphone. You eat with a fork in one hand and your smartphone in the other. In fact, you probably wish you had two pairs of arms so you could eat, drink and use your smartphone and laptop at the same time. You use your smartphone in your car — hopefully, and sensibly, not while driving. You read texts on your smartphone while in the restroom. You use it at the movie theater and at the theater (much to the dismay of stage actors). It’s with you at the restaurant, on the bus or metro, on the plane, in the bath (despite the risk of electric shock). You check your smartphone first thing in the morning and last thing before going to sleep. And, if your home or work life demands it, you will check it periodically throughout the night.

Let’s leave aside for now the growing body of anecdotal and formal evidence that smartphones are damaging your physical wellbeing. This includes finger, hand and wrist problems (from texting), and neck and posture problems (from constantly bending over a small screen). Now there is evidence that constant use, especially at night, is damaging your mental wellbeing and increasing the likelihood of additional, chronic physical ailments. It appears that the light from our constant electronic companions is not healthy, particularly because it disrupts our regular rhythm of sleep.

From Wired:

For more than 3 billion years, life on Earth was governed by the cyclical light of sun, moon and stars. Then along came electric light, turning night into day at the flick of a switch. Our bodies and brains may not have been ready.

A fast-growing body of research has linked artificial light exposure to disruptions in circadian rhythms, the light-triggered releases of hormones that regulate bodily function. Circadian disruption has in turn been linked to a host of health problems, from cancer to diabetes, obesity and depression. “Everything changed with electricity. Now we can have bright light in the middle of night. And that changes our circadian physiology almost immediately,” says Richard Stevens, a cancer epidemiologist at the University of Connecticut. “What we don’t know, and what so many people are interested in, are the effects of having that light chronically.”

Stevens, one of the field’s most prominent researchers, reviews the literature on light exposure and human health in the latest Philosophical Transactions of the Royal Society B. The new article comes nearly two decades after Stevens first sounded the alarm about light exposure possibly causing harm; writing in 1996, he said the evidence was “sparse but provocative.” Since then, nighttime light has become even more ubiquitous: an estimated 95 percent of Americans regularly use screens shortly before going to sleep, and incandescent bulbs have been mostly replaced by LED and compact fluorescent lights that emit light in potentially more problematic wavelengths. Meanwhile, the scientific evidence is still provocative, but no longer sparse.

As Stevens says in the new article, researchers now know that increased nighttime light exposure tracks with increased rates of breast cancer, obesity and depression. Correlation isn’t causation, of course, and it’s easy to imagine all the ways researchers might mistake those findings. The easy availability of electric lighting almost certainly tracks with various disease-causing factors: bad diets, sedentary lifestyles, exposure to the array of chemicals that come along with modernity. Oil refineries and aluminum smelters, to be hyperbolic, also blaze with light at night.

Yet biology at least supports some of the correlations. The circadian system synchronizes physiological function—from digestion to body temperature, cell repair and immune system activity—with a 24-hour cycle of light and dark. Even photosynthetic bacteria thought to resemble Earth’s earliest life forms have circadian rhythms. Despite its ubiquity, though, scientists discovered only in the last decade what triggers circadian activity in mammals: specialized cells in the retina, the light-sensing part of the eye, rather than conveying visual detail from eye to brain, simply signal the presence or absence of light. Activity in these cells sets off a reaction that calibrates clocks in every cell and tissue in a body. Now, these cells are especially sensitive to blue wavelengths—like those in a daytime sky.

But artificial lights, particularly LCDs, some LEDs, and fluorescent bulbs, also favor the blue side of the spectrum. So even a brief exposure to dim artificial light can trick a night-subdued circadian system into behaving as though day has arrived. Circadian disruption in turn produces a wealth of downstream effects, including dysregulation of key hormones. “Circadian rhythm is being tied to so many important functions,” says Joseph Takahashi, a neurobiologist at the University of Texas Southwestern. “We’re just beginning to discover all the molecular pathways that this gene network regulates. It’s not just the sleep-wake cycle. There are system-wide, drastic changes.” His lab has found that tweaking a key circadian clock gene in mice gives them diabetes. And a tour-de-force 2009 study put human volunteers on a 28-hour day-night cycle, then measured what happened to their endocrine, metabolic and cardiovascular systems.

Crucially, that experiment investigated circadian disruption induced by sleep alteration rather than light exposure, which is also the case with the many studies linking clock-scrambling shift work to health problems. Whether artificial light is as problematic as disturbed sleep patterns remains unknown, but Stevens thinks that some and perhaps much of what’s now assumed to result from sleep issues is actually a function of light. “You can wake up in the middle of the night and your melatonin levels don’t change,” he says. “But if you turn on a light, melatonin starts falling immediately. We need darkness.” According to Stevens, most people live in a sort of “circadian fog.”

Read the entire article here.

Image courtesy of Google Search.

3D Printing Magic

[tube]UpH1zhUQY0c[/tube]

If you’ve visited this blog before you know I’m a great fan of 3D printing, though some uses, such as printing 3D selfies, seem dubious at best. So, when Carbon3D unveiled its fundamentally different, and better, approach to 3D printing I was intrigued. The company uses an approach called continuous liquid interface production (CLIP), which seems to construct objects from a magical ooze. Check out the video — you’ll be enthralled. The future is here.

Learn more about Carbon3D here.

From Wired:

Even if you have little interest in 3-D printing, you’re likely to find Carbon3D’s Continuous Liquid Interface Production (CLIP) technology fascinating. Rather than the time-intensive printing of a 3-D object layer by layer like most printers, Carbon3D’s technique works 25 to 100 times faster than what you may have seen before, and looks a bit like Terminator 2‘s liquid metal T-1000 in the process.

CLIP creations grow out of a pool of UV-sensitive resin in a process that’s similar to the way laser 3-D printers work, but at a much faster pace. Instead of the laser used in conventional 3-D printers, CLIP uses an ultraviolet projector on the underside of a resin tray to project an image for how each layer should form. Light shines through an oxygen-permeable window onto the resin, which hardens it. Areas of resin that are exposed to oxygen don’t harden, while those that are cut off form the 3-D printed shape.

In practice, all that physics translates to unprecedented 3-D printing speed. At this week’s TED Conference in Vancouver, Carbon3D CEO and co-founder Dr. Joseph DeSimone demonstrated the printer onstage with a bit of theatrical underselling, wagering that his creation could produce in 10 minutes a geometric ball shape that would take a regular 3-D printer up to 10 hours. The CLIP process churned out the design in a little under 7 minutes.
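As a quick sanity check of the figures quoted above (assuming the stated 10-hour estimate for a conventional printer and the just-under-7-minute CLIP run from the demo), the implied speedup lands comfortably inside the 25-to-100-times-faster range Carbon3D claims:

```python
# Rough sanity check of the speedup implied by the TED demo figures above.
conventional_minutes = 10 * 60   # up to 10 hours on a conventional 3-D printer
clip_minutes = 7                 # "a little under 7 minutes" with CLIP

speedup = conventional_minutes / clip_minutes
print(f"Observed speedup: about {speedup:.0f}x")   # ~86x, within the claimed 25-100x
```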

Read the entire story here.

Video courtesy of Carbon3D.

The Internet 0f Th1ngs


Technologist Marc Goodman describes a not too distant future in which all our appliances, tools, products… anything and everything is plugged into the so-called Internet of Things (IoT). The IoT describes a world where all things are connected to everything else, making for a global mesh of intelligent devices from your connected car and your WiFi enabled sneakers to your smartwatch and home thermostat. You may well believe it advantageous to have your refrigerator ping the local grocery store when it runs out of fresh eggs and milk or to have your toilet auto-call a local plumber when it gets stopped-up.

But, as our current Internet shows us — let’s call it the Internet of People — not all is rosy in this hyper-connected, 24/7, always-on digital ocean. What are you to do when hackers attack all your home appliances in a “denial of home service attack (DohS)”, or when your every move inside your home is scrutinized, collected, analyzed and sold to the nearest advertiser, or when your cooktop starts taking and sharing selfies with the neighbors?

Goodman’s new book on this important subject, excerpted here, is titled Future Crimes.

From the Guardian:

If we think of today’s internet metaphorically as about the size of a golf ball, tomorrow’s will be the size of the sun. Within the coming years, not only will every computer, phone and tablet be online, but so too will every car, house, dog, bridge, tunnel, cup, clock, watch, pacemaker, cow, streetlight, pipeline, toy and soda can. Though in 2013 there were only 13bn online devices, Cisco Systems has estimated that by 2020 there will be 50bn things connected to the internet, with room for exponential growth thereafter. As all of these devices come online and begin sharing data, they will bring with them massive improvements in logistics, employee efficiency, energy consumption, customer service and personal productivity.

This is the promise of the internet of things (IoT), a rapidly emerging new paradigm of computing that, when it takes off, may very well change the world we live in forever.

The Pew Research Center defines the internet of things as “a global, immersive, invisible, ambient networked computing environment built through the continued proliferation of smart sensors, cameras, software, databases, and massive data centres in a world-spanning information fabric”. Back in 1999, when the term was first coined by MIT researcher Kevin Ashton, the technology did not exist to make the IoT a reality outside very controlled environments, such as factory warehouses. Today we have low-powered, ultra-cheap computer chips, some as small as the head of a pin, that can be embedded in an infinite number of devices, some for mere pennies. These miniature computing devices only need milliwatts of electricity and can run for years on a minuscule battery or small solar cell. As a result, it is now possible to make a web server that fits on a fingertip for $1.
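To give a sense of how little code a “web server on a fingertip” actually needs, here is a minimal sketch of an HTTP responder of the kind such a device might run. It is illustrative only: the port and the sensor reading are made up, and while the standard-library socket calls below are plain CPython, the same pattern is commonly used, with small adjustments, on cheap MicroPython-class boards.

```python
import socket

def serve(port=8080):
    # Listen for plain HTTP requests and answer each with a tiny JSON payload.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", port))
    server.listen(1)
    while True:
        client, _addr = server.accept()
        client.recv(1024)                   # read and discard the request
        body = b'{"temperature_c": 21.5}'   # placeholder for a real sensor reading
        client.sendall(
            b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n\r\n" + body
        )
        client.close()

if __name__ == "__main__":
    serve()
```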

The microchips will receive data from a near-infinite range of sensors, minute devices capable of monitoring anything that can possibly be measured and recorded, including temperature, power, location, hydro-flow, radiation, atmospheric pressure, acceleration, altitude, sound and video. They will activate miniature switches, valves, servos, turbines and engines – and speak to the world using high-speed wireless data networks. They will communicate not only with the broader internet but with each other, generating unfathomable amounts of data. The result will be an always-on “global, immersive, invisible, ambient networked computing environment”, a mere prelude to the tidal wave of change coming next.

In the future all objects may be smart

The broad thrust sounds rosy. Because chips and sensors will be embedded in everyday objects, we will have much better information and convenience in our lives. Because your alarm clock is connected to the internet, it will be able to access and read your calendar. It will know where and when your first appointment of the day is and be able to cross-reference that information against the latest traffic conditions. Light traffic, you get to sleep an extra 10 minutes; heavy traffic, and you might find yourself waking up earlier than you had hoped.

When your alarm does go off, it will gently raise the lights in the house, perhaps turn up the heat or run your bath. The electronic pet door will open to let Fido into the backyard for his morning visit, and the coffeemaker will begin brewing your coffee. You won’t have to ask your kids if they’ve brushed their teeth; the chip in their toothbrush will send a message to your smartphone letting you know the task is done. As you walk out the door, you won’t have to worry about finding your keys; the beacon sensor on the key chain makes them locatable to within two inches. It will be as if the Jetsons era has finally arrived.

While the hype-o-meter on the IoT has been blinking red for some time, everything described above is already technically feasible. To be certain, there will be obstacles, in particular in relation to a lack of common technical standards, but a wide variety of companies, consortia and government agencies are hard at work to make the IoT a reality. The result will be our transition from connectivity to hyper-connectivity, and like all things Moore’s law related, it will be here sooner than we realise.

The IoT means that all physical objects in the future will be assigned an IP address and be transformed into information technologies. As a result, your lamp, cat or pot plant will be part of an IT network. Things that were previously silent will now have a voice, and every object will be able to tell its own story and history. The refrigerator will know exactly when it was manufactured, the names of the people who built it, what factory it came from, and the day it left the assembly line, arrived at the retailer, and joined your home network. It will keep track of every time its door has been opened and which one of your kids forgot to close it. When the refrigerator’s motor begins to fail, it can signal for help, and when it finally dies, it will tell us how to disassemble its parts and best recycle them. Buildings will know every person who has ever worked there, and streetlights every car that has ever driven by.

All of these objects will communicate with each other and have access to the massive processing and storage power of the cloud, further enhanced by additional mobile and social networks. In the future all objects may become smart, in fact much smarter than they are today, and as these devices become networked, they will develop their own limited form of sentience, resulting in a world in which people, data and things come together. As a consequence of the power of embedded computing, we will see billions of smart, connected things joining a global neural network in the cloud.

In this world, the unknowable suddenly becomes knowable. For example, groceries will be tracked from field to table, and restaurants will keep tabs on every plate, what’s on it, who ate from it, and how quickly the waiters are moving it from kitchen to customer. As a result, when the next E coli outbreak occurs, we won’t have to close 500 eateries and wonder if it was the chicken or beef that caused the problem. We will know exactly which restaurant, supplier and diner to contact to quickly resolve the problem. The IoT and its billions of sensors will create an ambient intelligence network that thinks, senses and feels and contributes profoundly to the knowable universe.

Things that used to make sense suddenly won’t, such as smoke detectors. Why do most smoke detectors do nothing more than make loud beeps if your life is in mortal danger because of fire? In the future, they will flash your bedroom lights to wake you, turn on your home stereo, play an MP3 audio file that loudly warns, “Fire, fire, fire.” They will also contact the fire department, call your neighbours (in case you are unconscious and in need of help), and automatically shut off flow to the gas appliances in the house.

The byproduct of the IoT will be a living, breathing, global information grid, and technology will come alive in ways we’ve never seen before, except in science fiction movies. As we venture down the path toward ubiquitous computing, the results and implications of the phenomenon are likely to be mind-blowing. Just as the introduction of electricity was astonishing in its day, it eventually faded into the background, becoming an imperceptible, omnipresent medium in constant interaction with the physical world. Before we let this happen, and for all the promise of the IoT, we must ask critically important questions about this brave new world. For just as electricity can shock and kill, so too can billions of connected things networked online.

One of the central premises of the IoT is that everyday objects will have the capacity to speak to us and to each other. This relies on a series of competing communications technologies and protocols, many of which are eminently hackable. Take radio-frequency identification (RFID) technology, considered by many the gateway to the IoT. Even if you are unfamiliar with the name, chances are you have already encountered it in your life, whether it’s the security ID card you use to swipe your way into your office, your “wave and pay” credit card, the key to your hotel room, your Oyster card.

Even if you don’t use an RFID card for work, there’s a good chance you either have it or will soon have it embedded in the credit card sitting in your wallet. Hackers have been able to break into these as well, using cheap RFID readers available on eBay for just $50, tools that allow an attacker to wirelessly capture a target’s credit card number, expiration date and security code. Welcome to pocket picking 2.0.

More productive and more prison-like

A much rarer breed of hacker targets the physical elements that make up a computer system, including the microchips, electronics, controllers, memory, circuits, components, transistors and sensors – core elements of the internet of things. These hackers attack a device’s firmware, the set of computer instructions present on every electronic device we encounter, including TVs, mobile phones, game consoles, digital cameras, network routers, alarm systems, CCTVs, USB drives, traffic lights, gas station pumps and smart home management systems. Before we add billions of hackable things and communicate with hackable data transmission protocols, important questions must be asked about the risks for the future of security, crime, terrorism, warfare and privacy.

In the same way our every move online can be tracked, recorded, sold and monetised today, so too will that be possible in the near future in the physical world. Real space will become just like cyberspace. With the widespread adoption of more networked devices, what people do in their homes, cars, workplaces, schools and communities will be subjected to increased monitoring and analysis by the corporations making these devices. Of course these data will be resold to advertisers, data brokers and governments, providing an unprecedented view into our daily lives. Unfortunately, just like our social, mobile, locational and financial information, our IoT data will leak, providing further profound capabilities to stalkers and other miscreants interested in persistently tracking us. While it would certainly be possible to establish regulations and build privacy protocols to protect consumers from such activities, the greater likelihood is that every IoT-enabled device, whether an iron, vacuum, refrigerator, thermostat or lightbulb, will come with terms of service that grant manufacturers access to all your data. More troublingly, while it may be theoretically possible to log off in cyberspace, in your well-connected smart home there will be no “opt-out” provision.

We may find ourselves interacting with thousands of little objects around us on a daily basis, each collecting seemingly innocuous bits of data 24/7, information these things will report to the cloud, where it will be processed, correlated, and reviewed. Your smart watch will reveal your lack of exercise to your health insurance company, your car will tell your insurer of your frequent speeding, and your dustbin will tell your local council that you are not following local recycling regulations. This is the “internet of stool pigeons”, and though it may sound far-fetched, it’s already happening. Progressive, one of the largest US auto insurance companies, offers discounted personalised rates based on your driving habits. “The better you drive, the more you can save,” according to its advertising. All drivers need to do to receive the lower pricing is agree to the installation of Progressive’s Snapshot black-box technology in their cars and to having their braking, acceleration and mileage persistently tracked.

The IoT will also provide vast new options for advertisers to reach out and touch you on every one of your new smart connected devices. Every time you go to your refrigerator to get ice, you will be presented with ads for products based on the food your refrigerator knows you’re most likely to buy. Screens too will be ubiquitous, and marketers are already planning for the bounty of advertising opportunities. In late 2013, Google sent a letter to the Securities and Exchange Commission noting, “we and other companies could [soon] be serving ads and other content on refrigerators, car dashboards, thermostats, glasses and watches, to name just a few possibilities.”

Knowing that Google can already read your Gmail, record your every web search, and track your physical location on your Android mobile phone, what new powerful insights into your personal life will the company develop when its entertainment system is in your car, its thermostat regulates the temperature in your home, and its smart watch monitors your physical activity?

Not only will RFID and other IoT communications technologies track inanimate objects, they will be used for tracking living things as well. The British government has considered implanting RFID chips directly under the skin of prisoners, as is common practice with dogs. School officials across the US have begun embedding RFID chips in student identity cards, which pupils are required to wear at all times. In Contra Costa County, California, preschoolers are now required to wear basketball-style jerseys with electronic tracking devices built in that allow teachers and administrators to know exactly where each student is. According to school district officials, the RFID system saves “3,000 labour hours a year in tracking and processing students”.

Meanwhile, the ability to track employees, how much time they take for lunch, the length of their toilet breaks and the number of widgets they produce will become easy. Moreover, even things such as words typed per minute, eye movements, total calls answered, respiration, time away from desk and attention to detail will be recorded. The result will be a modern workplace that is simultaneously more productive and more prison-like.

At the scene of a suspected crime, police will be able to interrogate the refrigerator and ask the equivalent of, “Hey, buddy, did you see anything?” Child social workers will know there haven’t been any milk or nappies in the home, and the only thing stored in the fridge has been beer for the past week. The IoT also opens up the world for “perfect enforcement”. When sensors are everywhere and all data is tracked and recorded, it becomes more likely that you will receive a moving violation for going 26 miles per hour in a 25-mile-per-hour zone and get a parking ticket for being 17 seconds over on your meter.

The former CIA director David Petraeus has noted that the IoT will be “transformational for clandestine tradecraft”. While the old model of corporate and government espionage might have involved hiding a bug under the table, tomorrow the very same information might be obtained by intercepting in real time the data sent from your Wi-Fi lightbulb to the lighting app on your smart phone. Thus the devices you thought were working for you may in fact be on somebody else’s payroll, particularly that of Crime, Inc.

A network of unintended consequences

For all the untold benefits of the IoT, its potential downsides are colossal. Adding 50bn new objects to the global information grid by 2020 means that each of these devices, for good or ill, will be able to potentially interact with the other 50bn connected objects on earth. The result will be 2.5 sextillion potential networked object-to-object interactions – a network so vast and complex it can scarcely be understood or modelled. The IoT will be a global network of unintended consequences and black swan events, ones that will do things nobody ever planned. In this world, it is impossible to know the consequences of connecting your home’s networked blender to the same information grid as an ambulance in Tokyo, a bridge in Sydney, or a Detroit auto manufacturer’s production line.
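The 2.5 sextillion figure follows directly from the 50bn estimate: counting ordered object-to-object pairs, 50 billion devices give 50bn × 50bn = 2.5 × 10^21 potential interactions. A one-line check, assuming that simple pairwise model:

```python
# Back-of-the-envelope check of the "2.5 sextillion" interactions figure.
devices = 50_000_000_000              # the 2020 estimate quoted above
interactions = devices * devices      # ordered object-to-object pairs
print(f"{interactions:.1e}")          # 2.5e+21, i.e. 2.5 sextillion
```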

The vast levels of cyber crime we currently face make it abundantly clear we cannot even adequately protect the standard desktops and laptops we presently have online, let alone the hundreds of millions of mobile phones and tablets we are adding annually. In what vision of the future, then, is it conceivable that we will be able to protect the next 50bn things, from pets to pacemakers to self-driving cars? The obvious reality is that we cannot.

Our technological threat surface area is growing exponentially and we have no idea how to defend it effectively. The internet of things will become nothing more than the Internet of things to be hacked.

Read the entire article here.

Image courtesy of Google Search.

Time For a New Body, Literally


Let me be clear. I’m not referring to a hair transplant, but a head transplant.

A disturbing story has been making the media rounds recently. Dr. Sergio Canavero from the Turin Advanced Neuromodulation Group in Italy suggests that the time is right to attempt the transplantation of a human head onto a different body. Canavero believes that advances in surgical techniques and immunotherapy are such that a transplantation could be attempted by 2017. Interestingly enough, he has already had several people volunteer for a new body.

Ethics aside, it certainly doesn’t stretch the imagination to believe Hollywood’s elite would clamor for this treatment. Now, I wonder if some people, liking their own body, would want a new head?

From New Scientist:

It’s heady stuff. The world’s first attempt to transplant a human head will be launched this year at a surgical conference in the US. The move is a call to arms to get interested parties together to work towards the surgery.

The idea was first proposed in 2013 by Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy. He wants to use the surgery to extend the lives of people whose muscles and nerves have degenerated or whose organs are riddled with cancer. Now he claims the major hurdles, such as fusing the spinal cord and preventing the body’s immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017.

Canavero plans to announce the project at the annual conference of the American Academy of Neurological and Orthopaedic Surgeons (AANOS) in Annapolis, Maryland, in June. Is society ready for such momentous surgery? And does the science even stand up?

The first attempt at a head transplant was carried out on a dog by Soviet surgeon Vladimir Demikhov in 1954. A puppy’s head and forelegs were transplanted onto the back of a larger dog. Demikhov conducted several further attempts but the dogs only survived between two and six days.

The first successful head transplant, in which one head was replaced by another, was carried out in 1970. A team led by Robert White at Case Western Reserve University School of Medicine in Cleveland, Ohio, transplanted the head of one monkey onto the body of another. They didn’t attempt to join the spinal cords, though, so the monkey couldn’t move its body, but it was able to breathe with artificial assistance. The monkey lived for nine days until its immune system rejected the head. Although few head transplants have been carried out since, many of the surgical procedures involved have progressed. “I think we are now at a point when the technical aspects are all feasible,” says Canavero.

This month, he published a summary of the technique he believes will allow doctors to transplant a head onto a new body (Surgical Neurology International, doi.org/2c7). It involves cooling the recipient’s head and the donor body to extend the time their cells can survive without oxygen. The tissue around the neck is dissected and the major blood vessels are linked using tiny tubes, before the spinal cords of each person are cut. Cleanly severing the cords is key, says Canavero.

The recipient’s head is then moved onto the donor body and the two ends of the spinal cord – which resemble two densely packed bundles of spaghetti – are fused together. To achieve this, Canavero intends to flush the area with a chemical called polyethylene glycol, and follow up with several hours of injections of the same stuff. Just like hot water makes dry spaghetti stick together, polyethylene glycol encourages the fat in cell membranes to mesh.

Next, the muscles and blood supply would be sutured and the recipient kept in a coma for three or four weeks to prevent movement. Implanted electrodes would provide regular electrical stimulation to the spinal cord, because research suggests this can strengthen new nerve connections.

When the recipient wakes up, Canavero predicts they would be able to move and feel their face and would speak with the same voice. He says that physiotherapy would enable the person to walk within a year. Several people have already volunteered to get a new body, he says.

The trickiest part will be getting the spinal cords to fuse. Polyethylene glycol has been shown to prompt the growth of spinal cord nerves in animals, and Canavero intends to use brain-dead organ donors to test the technique. However, others are sceptical that this would be enough. “There is no evidence that the connectivity of cord and brain would lead to useful sentient or motor function following head transplantation,” says Richard Borgens, director of the Center for Paralysis Research at Purdue University in West Lafayette, Indiana.

Read the entire article here.

Image: Theatrical poster for the movie The Brain That Wouldn’t Die (1962). Courtesy of Wikipedia.

Net Neutrality Lives!

The US Federal Communications Commission (FCC) took a giant step in the right direction, on February 26, 2015, when it voted to regulate internet broadband much like a public utility. This is a great victory for net neutrality advocates and consumers, who had long sought to protect equal access for all to online services and information. Tim Berners-Lee, inventor of the World Wide Web, offered his support and praise for the ruling, saying:

“It’s about consumer rights, it’s about free speech, it’s about democracy.”

From the Guardian:

Internet activists scored a landmark victory on Thursday as the top US telecommunications regulator approved a plan to govern broadband internet like a public utility.

Following one of the most intense – and bizarre – lobbying battles in the history of modern Washington politics, the Federal Communications Commission (FCC) passed strict new rules that give the body its greatest power over the cable industry since the internet went mainstream.

FCC chairman Tom Wheeler – a former telecom lobbyist turned surprise hero of net-neutrality supporters – thanked the 4m people who had submitted comments on the new rules. “Your participation has made this the most open process in FCC history,” he said. “We listened and we learned.”

Wheeler said that while other countries were trying to control the internet, the sweeping new US protections on net neutrality – the concept that all information and services should have equal access online – represented “a red-letter day for internet freedom”.

“The internet is simply too important to be left without rules and without a referee on the field,” said Wheeler. “Today’s order is more powerful and more expansive than any previously suggested.”

Broadband providers will be banned from creating so-called “fast lanes” and from blocking or slowing traffic online, and the new rules will cover mobile broadband as well as cable. The FCC would also have the authority to challenge unforeseen barriers broadband providers might create as the internet develops.

Activists and tech companies argue the new rules are vital to protect net neutrality – the concept that all information and services should have equal access to the internet. The FCC’s two Republican commissioners, Ajit Pai and Michael O’Rielly, voted against the plan but were overruled at a much anticipated meeting by three Democratic members on the panel.

Republicans have long fought the FCC’s net neutrality protections, arguing the rules will create an unnecessary burden on business. They have accused Barack Obama of bullying the regulator into the move in order to score political points, with conservative lawmakers and potential 2016 presidential candidates expected to keep the fight going well into that election campaign.

Pai said the FCC was flip-flopping for “one reason and one reason only: president Obama told us to do so”.

Wheeler dismissed accusations of a “secret” plan as “nonsense”. “This is no more a plan to regulate the internet than the first amendment is a plan to regulate free speech,” Wheeler said.

“This is the FCC using all the tools in our toolbox to protect innovators and consumers.”

Obama offered his support to the rules late last year, following an online activism campaign that pitched internet organisers and companies from Netflix and Reddit to the online craft market Etsy and I Can Has Cheezburger? – weblog home of the Lolcats meme – against Republican leaders and the cable and telecom lobbies.

Broadband will now be regulated under Title II of the Communications Act – the strongest legal authority the FCC has at its disposal. Obama called on the independent regulator to implement Title II last year, leading to charges that he unduly influenced Wheeler’s decision that are now being investigated in Congress.

A small band of protesters gathered in the snow outside the FCC’s Washington headquarters before the meeting on Thursday, in celebration of their success in lobbying for a dramatic U-turn in regulation. Wheeler and his Democratic colleagues, Mignon Clyburn and Jessica Rosenworcel, were cheered as they sat down for the meeting.

Joining the activists outside was Apple co-founder Steve Wozniak, who said the FCC also needed more power to prevent future attacks on the open internet.

“We have won on net neutrality,” Wozniak told the Guardian. “This is important because they don’t want the FCC to have oversight over other bad stuff.”

Tim Berners-Lee, inventor of the world wide web, addressed the meeting via video, saying he applauded the FCC’s decision to protect net neutrality: “More than anything else, the action you take today will preserve the reality of a permission-less innovation that is the heart of the internet.”

“It’s about consumer rights, it’s about free speech, it’s about democracy,” Berners-Lee said.

Clyburn compared the new rules to the Bill of Rights. “We are here to ensure that there is only one internet,” she said. “We want to ensure that those with deep pockets have the same opportunity as those with empty pockets to succeed.”

Read the entire story here.

Bit Rot Is in Your Future


If you are over the age of 55 or 60 you may well have some 8-track cassettes still stashed in the trunk (or boot if you’re a Brit) of your car. If you’re over 50 it’s possible that you may have some old floppy disks or regular music cassettes stored in a bottom drawer. If you’re over 40 you’re likely to have boxes of old VHS tapes and crate-loads of CDs (or even laser disks) under your bed. So, if you fall into one of these categories, most of the content stored on any of these media types is now very likely to be beyond your reach — your car (hopefully) does not have an 8-track player; you dumped your Sony Walkman for an iPod; and your CDs have been rendered obsolete by music that descends to your ears from the “cloud”.

[Of course, 45s and 33s still seem to have a peculiar and lasting appeal — and thanks to the analog characteristics of vinyl the music encoded in the spiral grooves is still relatively easily accessible. But this will be the subject of another post].

So our technological progress, paradoxically, comes at a cost. As our technologies become simpler to use and content becomes easier to create and disseminate, that content becomes “bit rot” for future generations. That is, our digital present will become lost to the more advanced technologies of the future. One solution would be to hold on to your 8-track player. But Vint Cerf, currently a VP at Google and one of the founding fathers of the internet, has other ideas.

From the Guardian:

Piles of digitised material – from blogs, tweets, pictures and videos, to official documents such as court rulings and emails – may be lost forever because the programs needed to view them will become defunct, Google’s vice-president has warned.

Humanity’s first steps into the digital world could be lost to future historians, Vint Cerf told the American Association for the Advancement of Science’s annual meeting in San Jose, California, warning that we faced a “forgotten generation, or even a forgotten century” through what he called “bit rot”, where old computer files become useless junk.

Cerf called for the development of “digital vellum” to preserve old software and hardware so that out-of-date files could be recovered no matter how old they are.

“When you think about the quantity of documentation from our daily lives that is captured in digital form, like our interactions by email, people’s tweets, and all of the world wide web, it’s clear that we stand to lose an awful lot of our history,” he said.

“We don’t want our digital lives to fade away. If we want to preserve them, we need to make sure that the digital objects we create today can still be rendered far into the future,” he added.

The warning highlights an irony at the heart of modern technology, where music, photos, letters and other documents are digitised in the hope of ensuring their long-term survival. But while researchers are making progress in storing digital files for centuries, the programs and hardware needed to make sense of the files are continually falling out of use.

“We are nonchalantly throwing all of our data into what could become an information black hole without realising it. We digitise things because we think we will preserve them, but what we don’t understand is that unless we take other steps, those digital versions may not be any better, and may even be worse, than the artefacts that we digitised,” Cerf told the Guardian. “If there are photos you really care about, print them out.”


Ancient civilisations suffered no such problems, because histories written in cuneiform on baked clay tablets, or rolled papyrus scrolls, needed only eyes to read them. To study today’s culture, future scholars would be faced with PDFs, Word documents, and hundreds of other file types that can only be interpreted with dedicated software and sometimes hardware too.

The problem is already here. In the 1980s, it was routine to save documents on floppy disks, upload Jet Set Willy from cassette to the ZX Spectrum, slaughter aliens with a Quickfire II joystick, and have Atari games cartridges in the attic. Even if the disks and cassettes are in good condition, the equipment needed to run them is mostly found only in museums.

The rise of gaming has its own place in the story of digital culture, but Cerf warns that important political and historical documents will also be lost to bit rot. In 2005, American historian Doris Kearns Goodwin wrote Team of Rivals: the Political Genius of Abraham Lincoln, describing how Lincoln hired those who ran against him for presidency. She went to libraries around the US, found the physical letters of the people involved, and reconstructed their conversations. “In today’s world those letters would be emails and the chances of finding them will be vanishingly small 100 years from now,” said Cerf.

He concedes that historians will take steps to preserve material considered important by today’s standards, but argues that the significance of documents and correspondence is often not fully appreciated until hundreds of years later. Historians have learned how the greatest mathematician of antiquity considered the concept of infinity and anticipated calculus in the third century BC after the Archimedes palimpsest was found hidden under the words of a Byzantine prayer book from the 13th century. “We’ve been surprised by what we’ve learned from objects that have been preserved purely by happenstance that give us insights into an earlier civilisation,” he said.

Researchers at Carnegie Mellon University in Pittsburgh have made headway towards a solution to bit rot, or at least a partial one. There, Mahadev Satyanarayanan takes digital snapshots of computer hard drives while they run different software programs. These can then be uploaded to a computer that mimics the one the software ran on. The result is a computer that can read otherwise defunct files. Under a project called Olive, the researchers have archived Mystery House, the original 1982 graphic adventure game for the Apple II, an early version of WordPerfect, and Doom, the original 1993 first person shooter game.

Inventing new technology is only half the battle, though. More difficult still could be navigating the legal permissions to copy and store software before it dies. When IT companies go out of business, or stop supporting their products, they may sell the rights on, making it a nightmarish task to get approval.

Read the entire article here.

Image: 1978 AMC Matador sedan red NC detail of factory AM-FM-stereo-8-track unit. Courtesy of CZmarlin / Wikipedia.

Yawn. Selfies Are So, Like, Yesterday!


If you know a dedicated and impassioned narcissist it’s time to convince him or her to ditch the selfie. Oh, and please ensure she or he discards the selfie-stick while they’re at it. You see, the selfie — that ubiquitous expression of the me-me-generation — is now rather passé.

So, where does a self-absorbed individual turn next? Enter the 3D printed version of yourself, courtesy of a German company called DOOB 3D, with its Dooblicator scanner and high-res 3D printer. Connoisseurs of self can now — for a mere $395 — replicate themselves with a 10-inch facsimile. If you’re a cheapskate, you can get a Playmobil-sized replica for $95, while a 14-inch selfie-doll will fetch you $695. Love it!

To learn more about DOOB 3D visit their website.

From Wired:

We love looking at images of ourselves. First there were Olan Mills portraits. Nowadays there are selfies and selfie-stick selfies and drone selfies.

If you’re wondering what comes next, Dusseldorf-based DOOB 3D thinks it has the answer—and contrary to what the company’s name suggests, it doesn’t involve getting high and watching Avatar.

DOOB 3D can produce a detailed, four-inch figurine of your body—yes, a 3-D selfie. Making one of these figurines requires a massive pile of hardware and software: 54 DSLRs, 54 lenses, a complex 3-D modeling pipeline, and an $80,000 full-color 3-D printer, not to mention a room-size scanning booth.

Factor that all in and the $95 asking price for a replica of yourself that’s roughly the size of most classic Star Wars action figures doesn’t seem so bad. A Barbie-esque 10-inch model goes for $395, while a 14-inch figure that’s more along the lines of an old-school G.I. Joe doll costs $695.

The company has eight 3-D scanning booths (called “Doob-licators”) scattered in strategic locations throughout the world. There’s one in Dusseldorf, one in Tokyo, one at Santa Monica Place in Los Angeles, and one in New York City’s Chelsea Market. The company also says they’re set to add more U.S. locations soon, although details aren’t public yet.

In New York, the pop-up DOOB shop in Chelsea Market was a pretty big hit. According to Michael Anderson, CEO of DOOB 3D USA, the Doob-licator saw about 500 customers over the winter holiday season. About 10 percent of the booth’s customers got their pets Doob-licated.

“At first, (people got DOOBs made) mostly on a whim,” says Anderson of the holiday-season spike. Most people just walk up and stand in line, but you can also book an appointment in advance.

“Now that awareness has been built,” Anderson says, “there has been a shift where at least two thirds of our customers have planned ahead to get a DOOB.”

Each Doob-licator is outfitted with 54 Canon EOS Rebel T5i DSLRs, arranged in nine columns of six cameras each. You can make an appointment or just wait in line: A customer steps in, strikes a pose, and the Doob-licator operator fires all the cameras at once. That creates a full-body scan in a fraction of a second. The next step involves feeding all those 18-megapixel images through the company’s proprietary software, which creates a 3-D model of the subject.

The printing process requires more patience. The company operates three high-end 3-D printing centers to support its scanning operations: One in Germany, one in Tokyo, and one in Brooklyn. They all use 3D Systems’ ProJet 660Pro, a high-resolution (600 x 540 dpi) 3-D printer that creates full-color objects on the fly. The printer uses a resin polymer material, and the full range of CMYK color is added to each powder layer as it’s printed.

With a top printing speed of 1.1 inches per hour and a process that sometimes involves thousands of layers of powder, the process takes a few hours for the smallest-size DOOB and half a day or more for the larger ones. And depending on how many DOOBs are lined up in the queue, your mini statue takes between two and eight weeks to arrive in the mail.
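Those waits line up with the stated 1.1 inches-per-hour ceiling. A quick check, assuming the figurine heights quoted earlier in the piece and ignoring setup and finishing time:

```python
# Estimated print time at the ProJet 660Pro's quoted top speed of 1.1 in/hour.
speed_in_per_hour = 1.1
for height_in in (4, 10, 14):         # the three DOOB figurine sizes
    hours = height_in / speed_in_per_hour
    print(f"{height_in}-inch figure: about {hours:.1f} hours")
# 4-inch: ~3.6 h ("a few hours"); 14-inch: ~12.7 h ("half a day or more")
```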

Once you step inside that Doob-licator, it’s like international waters: You are largely unbound by laws and restrictions. Do you want to get naked? Go right ahead. Along with your nude statue, the company will also send you a 3-D PDF and keep your data in its database in case you want additional copies made (you can request that your data be deleted if that sounds too creepy).

Read the entire article here.

Image courtesy of DOOB 3D.

Social Media Metes Out Social (Networking) Justice

Before the age of Facebook and Twitter if you were to say something utterly stupid, bigoted, sexist or racist among a small group of friends or colleagues it would, usually, have gone no further. Some members of your audience may have chastised you, while others may have agreed or ignored you. But then the comment would have been largely forgotten.

This is no longer so in our age of social networking and constant inter-connectedness. Our technologies distribute, repeat and amplify our words and actions, which now seem to take on lives of their very own. Love it or hate it — welcome to the age of social networking justice — a 21st century digital pillory.

Say something stupid or do something questionable today — and you’re likely to face a consequential backlash that stretches beyond the present and into your future. Just take the case of Justine Sacco.

From NYT:

As she made the long journey from New York to South Africa, to visit family during the holidays in 2013, Justine Sacco, 30 years old and the senior director of corporate communications at IAC, began tweeting acerbic little jokes about the indignities of travel. There was one about a fellow passenger on the flight from John F. Kennedy International Airport:

“‘Weird German Dude: You’re in First Class. It’s 2014. Get some deodorant.’ — Inner monologue as I inhale BO. Thank God for pharmaceuticals.”

Then, during her layover at Heathrow:

“Chilly — cucumber sandwiches — bad teeth. Back in London!”

And on Dec. 20, before the final leg of her trip to Cape Town:

“Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”

She chuckled to herself as she pressed send on this last one, then wandered around Heathrow’s international terminal for half an hour, sporadically checking her phone. No one replied, which didn’t surprise her. She had only 170 Twitter followers.

Sacco boarded the plane. It was an 11-hour flight, so she slept. When the plane landed in Cape Town and was taxiing on the runway, she turned on her phone. Right away, she got a text from someone she hadn’t spoken to since high school: “I’m so sorry to see what’s happening.” Sacco looked at it, baffled.

Then another text: “You need to call me immediately.” It was from her best friend, Hannah. Then her phone exploded with more texts and alerts. And then it rang. It was Hannah. “You’re the No. 1 worldwide trend on Twitter right now,” she said.

Sacco’s Twitter feed had become a horror show. “In light of @JustineSacco disgusting racist tweet, I’m donating to @care today” and “How did @JustineSacco get a PR job?! Her level of racist ignorance belongs on Fox News. #AIDS can affect anyone!” and “I’m an IAC employee and I don’t want @JustineSacco doing any communications on our behalf ever again. Ever.” And then one from her employer, IAC, the corporate owner of The Daily Beast, OKCupid and Vimeo: “This is an outrageous, offensive comment. Employee in question currently unreachable on an intl flight.” The anger soon turned to excitement: “All I want for Christmas is to see @JustineSacco’s face when her plane lands and she checks her inbox/voicemail” and “Oh man, @JustineSacco is going to have the most painful phone-turning-on moment ever when her plane lands” and “We are about to watch this @JustineSacco bitch get fired. In REAL time. Before she even KNOWS she’s getting fired.”

The furor over Sacco’s tweet had become not just an ideological crusade against her perceived bigotry but also a form of idle entertainment. Her complete ignorance of her predicament for those 11 hours lent the episode both dramatic irony and a pleasing narrative arc. As Sacco’s flight traversed the length of Africa, a hashtag began to trend worldwide: #HasJustineLandedYet. “Seriously. I just want to go home to go to bed, but everyone at the bar is SO into #HasJustineLandedYet. Can’t look away. Can’t leave” and “Right, is there no one in Cape Town going to the airport to tweet her arrival? Come on, Twitter! I’d like pictures #HasJustineLandedYet.”

A Twitter user did indeed go to the airport to tweet her arrival. He took her photograph and posted it online. “Yup,” he wrote, “@JustineSacco HAS in fact landed at Cape Town International. She’s decided to wear sunnies as a disguise.”

By the time Sacco had touched down, tens of thousands of angry tweets had been sent in response to her joke. Hannah, meanwhile, frantically deleted her friend’s tweet and her account — Sacco didn’t want to look — but it was far too late. “Sorry @JustineSacco,” wrote one Twitter user, “your tweet lives on forever.”

Read the entire article here.

Creative Destruction

Internet_map

Author Andrew Keen ponders the true value of the internet in his new book The Internet is Not the Answer. Quite rightly, he asserts that many billions of consumers have benefited from the convenience and usually lower prices of almost every product imaginable, delivered through a couple of clicks online. But there is a higher price to pay — one that touches on the values we want for our society and the deeper costs to our culture.

From the Guardian:

During every minute of every day of 2014, according to Andrew Keen’s new book, the world’s internet users – all three billion of them – sent 204m emails, uploaded 72 hours of YouTube video, undertook 4m Google searches, shared 2.46m pieces of Facebook content, published 277,000 tweets, posted 216,000 new photos on Instagram and spent $83,000 on Amazon.

By any measure, for a network that has existed recognisably for barely 20 years (the first graphical web browser, Mosaic, was released in 1993), those are astonishing numbers: the internet, plainly, has transformed all our lives, making so much of what we do every day – communicating, shopping, finding, watching, booking – unimaginably easier than it was. A Pew survey in the United States found last year that 90% of Americans believed the internet had been good for them.

So it takes a brave man to argue that there is another side to the internet; that stratospheric numbers and undreamed-of personal convenience are not the whole story. Keen (who was once so sure the internet was the answer that he sank all he had into a startup) is now a thoughtful and erudite contrarian who believes the internet is actually doing untold damage. The net, he argues, was meant to be “power to the people, a platform for equality”: an open, decentralised, democratising technology that liberates as it empowers as it informs.

Instead, it has handed extraordinary power and wealth to a tiny handful of people, while simultaneously, for the rest of us, compounding and often aggravating existing inequalities – cultural, social and economic – whenever and wherever it has found them. Individually, it may work wonders for us. Collectively, it’s doing us no good at all. “It was supposed to be win-win,” Keen declares. “The network’s users were supposed to be its beneficiaries. But in a lot of ways, we are its victims.”

This is not, Keen acknowledges, a very popular view, especially in Silicon Valley, where he has spent the best part of the past 30-odd years after an uneventful north London childhood (the family was in the rag trade). But The Internet is Not the Answer – Keen’s third book (the first questioned the value of user-generated content, the second the point of social media; you get where he’s coming from) – has been “remarkably well received”, he says. “I’m not alone in making these points. Moderate opinion is starting to see that this is a problem.”

What seems most unarguable is that, whatever else it has done, the internet – after its early years as a network for academics and researchers from which vulgar commercial activity was, in effect, outlawed – has been largely about the money. The US government’s decision, in 1991, to throw the nascent network open to private enterprise amounted, as one leading (and now eye-wateringly wealthy) Californian venture capitalist has put it, to “the largest creation of legal wealth in the history of the planet”.

The numbers Keen reels off are eye-popping: Google, which now handles 3.5bn searches daily and controls more than 90% of the market in some countries, including Britain, was valued at $400bn last year – more than seven times General Motors, which employs nearly four times more people. Its two founders, Larry Page and Sergey Brin, are worth $30bn apiece. Facebook’s Mark Zuckerberg, head of the world’s second biggest internet site – used by 19% of people in the world, half of whom access it six days a week or more – is sitting on a similar personal pile, while at $190bn in July last year, his company was worth more than Coca-Cola, Disney and AT&T.

Jeff Bezos of Amazon also has $30bn in his bank account. And even more recent online ventures look to be headed the same way: Uber, a five-year-old startup employing about 1,000 people and once succinctly described as “software that eats taxis”, was valued last year at more than $18bn – roughly the same as Hertz and Avis combined. The 700-staff lodging rental site Airbnb was valued at $10bn in February last year, not far off half as much as the Hilton group, which owns nearly 4,000 hotels and employs 150,000 people. The messaging app WhatsApp, bought by Facebook for $19bn, employs just 55, while the payroll of Snapchat – which turned down an offer of $3bn – numbers barely 20.

Part of the problem here, argues Keen, is that the digital economy is, by its nature, winner-takes-all. “There’s no inevitable or conspiratorial logic here; no one really knew it would happen,” he says. “There are just certain structural qualities that mean the internet lends itself to monopolies. The internet is a perfect global platform for free-market capitalism – a pure, frictionless, borderless economy … It’s a libertarian’s wet dream. Digital Milton Friedman.”

Nor are those monopolies confined to just one business. Keen cites San Francisco-based writer Rebecca Solnit’s incisive take on Google: imagine it is 100 years ago, and the post office, the phone company, the public libraries, the printing houses, Ordnance Survey maps and the cinemas were all controlled by the same secretive and unaccountable organisation. Plus, he adds, almost as an afterthought: “Google doesn’t just own the post office – it has the right to open everyone’s letters.”

This, Keen argues, is the net economy’s natural tendency: “Google is the search and information monopoly and the largest advertising company in history. It is incredibly strong, joining up the dots across more and more industries. Uber’s about being the transport monopoly; Airbnb the hospitality monopoly; TaskRabbit the labour monopoly. These are all, ultimately, monopoly plays – that’s the logic. And that should worry people.”

It is already having consequences, Keen says, in the real world. Take – surely the most glaring example – Amazon. Keen’s book cites a 2013 survey by the US Institute for Local Self-Reliance, which found that while it takes, on average, a regular bricks-and-mortar store 47 employees to generate $10m in turnover, Bezos’s many-tentacled, all-consuming and completely ruthless “Everything Store” achieves the same with 14. Amazon, that report concluded, probably destroyed 27,000 US jobs in 2012.

“And we love it,” Keen says. “We all use Amazon. We strike this Faustian deal. It’s ultra-convenient, fantastic service, great interface, absurdly cheap prices. But what’s the cost? Truly appalling working conditions; we know this. Deep hostility to unions. A massive impact on independent retail; in books, savage bullying of publishers. This is back to the early years of the 19th century. But we’re seduced into thinking it’s good; Amazon has told us what we want to hear. Bezos says, ‘This is about you, the consumer.’ The problem is, we’re not just consumers. We’re citizens, too.”

Read the entire article here.

Image: Visualization of routing paths through a portion of the Internet. Courtesy of the Opte Project.

FCC Flexes Title II

US-FCC-Seal.svg

Tom Wheeler, chairman of the US Federal Communications Commission (FCC), was once beholden to the pseudo-monopolies that are cable and wireless providers. Now, he seems to be fighting to keep the internet fair, neutral and open — for consumers. Hard to believe. But, let’s face it, if Comcast and other telecoms behemoths are against Wheeler’s proposal then it must be good for consumers.

From Wired:

After more than a decade of debate and a record-setting proceeding that attracted nearly 4 million public comments, the time to settle the Net Neutrality question has arrived. This week, I will circulate to the members of the Federal Communications Commission (FCC) proposed new rules to preserve the internet as an open platform for innovation and free expression. This proposal is rooted in long-standing regulatory principles, marketplace experience, and public input received over the last several months.

Broadband network operators have an understandable motivation to manage their network to maximize their business interests. But their actions may not always be optimal for network users. The Congress gave the FCC broad authority to update its rules to reflect changes in technology and marketplace behavior in a way that protects consumers. Over the years, the Commission has used this authority to the public’s great benefit.

The internet wouldn’t have emerged as it did, for instance, if the FCC hadn’t mandated open access for network equipment in the late 1960s. Before then, AT&T prohibited anyone from attaching non-AT&T equipment to the network. The modems that enabled the internet were usable only because the FCC required the network to be open.

Companies such as AOL were able to grow in the early days of home computing because these modems gave them access to the open telephone network.

I personally learned the importance of open networks the hard way. In the mid-1980s I was president of a startup, NABU: The Home Computer Network. My company was using new technology to deliver high-speed data to home computers over cable television lines. Across town Steve Case was starting what became AOL. NABU was delivering service at the then-blazing speed of 1.5 megabits per second—hundreds of times faster than Case’s company. “We used to worry about you a lot,” Case told me years later.

But NABU went broke while AOL became very successful. Why that is highlights the fundamental problem with allowing networks to act as gatekeepers.

While delivering better service, NABU had to depend on cable television operators granting access to their systems. Steve Case was not only a brilliant entrepreneur, but he also had access to an unlimited number of customers nationwide who only had to attach a modem to their phone line to receive his service. The phone network was open whereas the cable networks were closed. End of story.

The phone network’s openness did not happen by accident, but by FCC rule. How we precisely deliver that kind of openness for America’s broadband networks has been the subject of a debate over the last several months.

Originally, I believed that the FCC could assure internet openness through a determination of “commercial reasonableness” under Section 706 of the Telecommunications Act of 1996. While a recent court decision seemed to draw a roadmap for using this approach, I became concerned that this relatively new concept might, down the road, be interpreted to mean what is reasonable for commercial interests, not consumers.

That is why I am proposing that the FCC use its Title II authority to implement and enforce open internet protections.

Using this authority, I am submitting to my colleagues the strongest open internet protections ever proposed by the FCC. These enforceable, bright-line rules will ban paid prioritization, and the blocking and throttling of lawful content and services. I propose to fully apply—for the first time ever—those bright-line rules to mobile broadband. My proposal assures the rights of internet users to go where they want, when they want, and the rights of innovators to introduce new products without asking anyone’s permission.

All of this can be accomplished while encouraging investment in broadband networks. To preserve incentives for broadband operators to invest in their networks, my proposal will modernize Title II, tailoring it for the 21st century, in order to provide returns necessary to construct competitive networks. For example, there will be no rate regulation, no tariffs, no last-mile unbundling. Over the last 21 years, the wireless industry has invested almost $300 billion under similar rules, proving that modernized Title II regulation can encourage investment and competition.

Congress wisely gave the FCC the power to update its rules to keep pace with innovation. Under that authority my proposal includes a general conduct rule that can be used to stop new and novel threats to the internet. This means the action we take will be strong enough and flexible enough not only to deal with the realities of today, but also to establish ground rules for the as yet unimagined.

The internet must be fast, fair and open. That is the message I’ve heard from consumers and innovators across this nation. That is the principle that has enabled the internet to become an unprecedented platform for innovation and human expression. And that is the lesson I learned heading a tech startup at the dawn of the internet age. The proposal I present to the commission will ensure the internet remains open, now and in the future, for all Americans.

Read the entire article here.

Image: Official US FCC government seal.

A Higher Purpose

In a fascinating essay, excerpted below, Michael Malone wonders if the tech gurus of Silicon Valley should be solving bigger problems. We see venture capitalists scrambling over one another to find the next viral, mobile app — perhaps one that automatically writes your tweets, or one that vibrates your smartphone if you say too many bad words. Should our capital markets — now with an attention span of 15 seconds — reward the so-called innovators of these so-called essential apps with millions or even billions in company valuations?

Shouldn’t Silicon Valley be tackling the hard problems? Wouldn’t humanity be better served, not by a new killer SnapChat replacement app, but by more efficient reverse osmosis; mitigation for Alzheimer’s (and a host of other chronic ailments); progress with alternative energy sources and more efficient energy sinks; next-generation antibiotics; ridding the world of land-mines; growing and delivering nutritious food to those who need it most? Admittedly, these are some hard problems. But isn’t that the point?

From Technology Review:

The view from Mike Steep’s office on Palo Alto’s Coyote Hill is one of the greatest in Silicon Valley.

Beyond the black and rosewood office furniture, the two large computer monitors, and three Indonesian artifacts to ward off evil spirits, Steep looks out onto a panorama stretching from Redwood City to Santa Clara. This is the historic Silicon Valley, the birthplace of Hewlett-Packard and Fairchild Semiconductor, Intel and Atari, Netscape and Google. This is the home of innovations that have shaped the modern world. So is Steep’s employer: Xerox’s Palo Alto Research Center, or PARC, where personal computing and key computer-­networking technologies were invented, and where he is senior vice president of global business operations.

And yet Mike Steep is disappointed at what he sees out the windows.

“I see a community that acts like it knows where it’s going, but that seems to have its head in the sand,” he says. He gestures towards the Hewlett-Packard headquarters a few blocks away and Hoover Tower at Stanford University. “This town used to think big—the integrated circuit, personal computers, the Internet. Are we really leveraging all that intellectual power and creativity creating Instagram and dating apps? Is this truly going to change the world?”

After spending years at Microsoft, HP, and Apple, Steep joined PARC in 2013 to help the legendary ideas factory better capitalize on its work. As part of the job, he travels around the world visiting R&D executives in dozens of big companies, and increasingly he worries that the Valley will become irrelevant to them. Steep is one of 22 tech executives on a board the mayor of London set up to promote a “smart city”; they advise officials on how to allocate hundreds of millions of pounds for projects that would combine physical infrastructure such as new high-speed rail with sensors, databases, and analytics. “I know for a fact that China and an array of other countries are chasing this project, which will be the template for scores of similar big-city infrastructure projects around the world in years to come,” Steep says. “From the U.S.? IBM. From Silicon Valley? Many in England ask if anyone here has even heard of the London subway project. That’s unbelievable. Why don’t we leverage opportunities like this here in the Valley?”

Steep isn’t alone in asking whether Silicon Valley is devoting far too many resources to easy opportunities in mobile apps and social media at the expense of attacking bigger problems in energy, medicine, and transportation (see Q&A: Peter Thiel). But if you put that argument to many investors and technologists here, you get a reasonable comeback: has Silicon Valley really ever set out to directly address big problems? In fact, the classic Valley approach has been to size up which technologies it can quickly and ambitiously advance, and then let the world make of them what it will. That is how we got Facebook and Google, and it’s why the Valley’s clean-tech affair was a short-lived mismatch. And as many people point out with classic Silicon Valley confidence, the kind of work that made the area great is still going on in abundance.

The next wave

A small group of executives, surrounded by hundreds of bottles of wine, sits in the private dining room at Bella Vita, an Italian restaurant in Los Altos’s picturesque downtown of expensive tiny shops. Within a few miles, one can find the site of the original Fairchild Semiconductor, Steve Jobs’s house, and the saloon where Nolan Bushnell set up the first Atari game. The host of this gathering is Carl Guardino, CEO of the Silicon Valley Leadership Group, an industry association dedicated to the economic health of the Valley. The 400 organizations that belong to the group are mostly companies that were founded long before the mobile-app craze; only 10 percent are startups. That is evident at this dinner, to which Guardino has invited three of his board members: Steve Berglund, CEO of Trimble, a maker of GPS equipment; Tom Werner, CEO of the solar provider SunPower; and Greg Becker, CEO of Silicon Valley Bank.

These are people who, like Steep, spend much of their time meeting with people in governments and other companies. Asked whether the Valley is falling out of touch with what the world really needs, each disagrees, vehemently. They are almost surprised by the question. “This is the most adaptive and flexible business community on the planet,” says Becker. “It is always about innovation—and going where the opportunity leads next. If you’re worried that the Valley is overpursuing one market or another, then just wait a while and it will change direction again. That’s what we are all about.”

“This is the center of world capitalism, and capitalism is always in flux,” Werner adds. “Are there too many social-­networking and app companies out there right now? Probably. But what makes you think it’s going to stay that way for long? We have always undergone corrections. It’s the nature of who we are … But we’ll come out stronger than ever, and in a whole different set of markets and new technologies. This will still be the best place on the planet for innovation.”

Berglund contends that a generational change already under way will reduce the emphasis on apps. “Young people don’t seem to care as much about code as their generational elders,” he says. “They want to build things—stuff like robots and drones. Just go to the Maker Faire and watch them. They’re going to take this valley in a whole different direction.”

Berglund could be right. In the first half of 2014, according to CB Insights, Internet startups were the leading recipient of venture investment in San Francisco and Silicon Valley (the area got half of the U.S. total; New York was second at 10 percent). But investment in the Internet sector accounted for 59 percent of the total, down from a peak of 68 percent in 2011.

Doug Henton, who heads the consulting firm Collaborative Economics and oversaw an upcoming research report on the state of the Valley, argues that since 1950 the area has experienced five technological waves. Each has lasted about 10 to 20 years and encompassed a frenzy followed by a crash and shakeout and then a mature “deployment period.” Henton has identified these waves as defense (1950s and 1960s), integrated circuits (1960s and 1970s), personal computers (1970s and 1980s), Internet (1990s), and social media (2000s and 2010s). By these lights, the social-media wave, however dominant it is in the public eye, soon may be replaced by another wave. Henton suggests that it’s likely to involve the combination of software, hardware, and sensors in wearable devices and the “Internet of things.”

Read the entire essay here.

Silicon Death Valley

boo-com

Have you ever wondered what happens to the 99 percent of Silicon Valley startups that don’t make billionaires (or even millionaires) of their founders? It’s not all milk and honey in the land of sunshine. After all, for every Google or Facebook there are hundreds of humiliating failures — think: Webvan, Boo.com, Pets.com, Beautyjungle.com, Boxman, Flooz, eToys.

The valley’s venture capitalists tend to bury their business failures rather quietly, careful not to taint their reputations as omnipotent, infallible futurists. From the ashes of these failures some employees move on to well-established corporate serfdom and others find fresh challenges at new startups. But there is a fascinating middle-ground, between success and failure — an entrepreneurial twilight zone populated by zombie businesses.

From the Guardian:

It is probably Silicon Valley’s most striking mantra: “Fail fast, fail often.” It is recited at technology conferences, pinned to company walls, bandied in conversation.

Failure is not only invoked but celebrated. Entrepreneurs give speeches detailing their misfires. Academics laud the virtue of making mistakes. FailCon, a conference about “embracing failure”, launched in San Francisco in 2009 and is now an annual event, with technology hubs in Barcelona, Tokyo, Porto Alegre and elsewhere hosting their own versions.

While the rest of the world recoils at failure, in other words, technology’s dynamic innovators enshrine it as a rite of passage en route to success.

But what about those tech entrepreneurs who lose – and keep on losing? What about those who start one company after another, refine pitches, tweak products, pivot strategies, reinvent themselves … and never succeed? What about the angst masked behind upbeat facades?

Silicon Valley is increasingly asking such questions, even as the tech boom rewards some startups with billion-dollar valuations, sprinkling stardust on founders who talk of changing the world.

“It’s frustrating if you’re trying and trying and all you read about is how much money Airbnb and Uber are making,” said Johnny Chin, 28, who endured three startup flops but is hopeful for his fourth attempt. “The way startups are portrayed, everything seems an overnight success, but that’s a disconnect from reality. There can be a psychic toll.”

It has never been easier or cheaper to launch a company in the hothouse of ambition, money and software that stretches from San Francisco to Cupertino, Mountain View, Menlo Park and San Jose.

In 2012 the number of seed investment deals in US tech reportedly more than tripled, to 1,700, from three years earlier. Investment bankers are quitting Wall Street for Silicon Valley, lured by hopes of a cooler and more creative way to get rich.

Most startups fail. However, many entrepreneurs still overestimate the chances of success – and underestimate the cost of failure.

Some estimates put the failure rate at 90% – on a par with small businesses in other sectors. A similar proportion of alumni from Y Combinator, a legendary incubator which mentors bright prospects, are said to also struggle.

Companies typically die around 20 months after their last financing round and after having raised $1.3m, according to a study by the analytics firm CB Insights titled The RIP Report – startup death trends.

Failure is difficult to quantify because it does not necessarily mean liquidation. Many startups limp on for years, ignored by the market but sustained by founders’ savings or investors.

“We call them the walking dead,” said one manager at a tech behemoth, who requested anonymity. “They don’t necessarily die. They putter along.”

Software engineers employed by such zombies face a choice. Stay in hope the company will take off, turning stock options into gold. Or quit and take one of the plentiful jobs at other startups or giants like Apple and Google.

Founders face a more agonising dilemma. Continue working 100-hour weeks and telling employees and investors their dream is alive, that the metrics are improving, and hope it’s true, or pull the plug.

The loss aversion principle – the human tendency to strongly prefer avoiding losses to acquiring gains – tilts many towards the former, said Bruno Bowden, a former engineering manager at Google who is now a venture investor and entrepreneur.

“People will do a lot of irrational things to avoid losing even if it’s to their detriment. You push and push and exhaust yourself.”

Silicon Valley wannabes tell origin fables of startup founders who maxed out credit cards before dazzling Wall Street, the same way Hollywood’s struggling actors find solace in the fact Brad Pitt dressed as a chicken for El Pollo Loco before his breakthrough.

“It’s painful to be one of the walking dead. You lie to yourself and mask what’s not working. You amplify little wins,” said Chin, who eventually abandoned startups which offered micro, specialised versions of Amazon and Yelp.

That startup founders were Silicon Valley’s “cool kids”, glamorous buccaneers compared to engineers and corporate drones, could make failure tricky to recognise, let alone accept, he said. “People are very encouraging. Everything is amazing, cool, awesome. But then they go home and don’t use your product.”

Chin is bullish about his new company, Bannerman, an Uber-type service for event security and bodyguards, and has no regrets about rolling the tech dice. “I love what I do. I couldn’t do anything else.”

Read the entire story here.

Image: Boo.com, 1999. Courtesy of the WayBackMachine, Internet Archive.

How to Get Blazingly Fast Internet

Chattanooga,_Tennessee

It’s rather simple in theory, and only requires two steps. Step 1: Follow the lead of a city like Chattanooga, Tennessee. Step 2: Tell your monopolistic cable company what to do with its cables. Done. Now you have a 1 Gigabit Internet connection — around 50-100 times faster than your mother’s Wi-Fi.

This experiment is fueling a renaissance of sorts in the Southern U.S. city, and other metropolitan areas can only look on in awe. It comes as no surprise that the cable oligarchs at Comcast, Time Warner and AT&T are looking for any way to halt the city’s progress into the 21st Century.

The Guardian:

Loveman’s department store on Market Street in Chattanooga closed its doors in 1993 after almost a century in business, another victim of a nationwide decline in downtowns that hollowed out so many US towns. Now the opulent building is buzzing again, this time with tech entrepreneurs taking advantage of the fastest internet in the western hemisphere.

Financed by the cash raised from the sale of logistics group Access America, a group of thirty-something local entrepreneurs have set up Lamp Post, an incubator for a new generation of tech companies, in the building. A dozen startups are currently working out of the glitzy downtown office.

“We’re not Silicon Valley. No one will ever replicate that,” says Allan Davis, one of Lamp Post’s partners. “But we don’t need to be and not everyone wants that. The expense, the hassle. You don’t need to be there to create great technology. You can do it here.”

He’s not alone in thinking so. Lamp Post is one of several tech incubators in this mid-sized Tennessee city. Money is flowing in. Chattanooga has gone from close to zero venture capital in 2009 to more than five organized funds with investable capital over $50m in 2014 – not bad for a city of 171,000 people.

The city’s go-getting mayor Andy Berke, a Democrat tipped for higher office, is currently reviewing plans for a city center tech zone specifically designed to meet the needs of its new workforce.

In large part the success is being driven by The Gig. Thanks to an ambitious roll-out by the city’s municipally owned electricity company, EPB, Chattanooga is one of the only places on Earth with internet at speeds as fast as 1 gigabit per second – about 50 times faster than the US average.

The tech buildup comes after more than a decade of reconstruction in Chattanooga that has regenerated the city with a world-class aquarium, 12 miles of river walks along the Tennessee River, an arts district built around the Hunter Museum of American Arts, high-end restaurants and outdoor activities.

But it’s the city’s tech boom that has sparked interest from other municipalities across the world. It also comes as the Federal Communications Commission (FCC) prepares to address some of the biggest questions the internet has faced when it returns from the summer break. And while the FCC discusses whether Comcast, the world’s biggest cable company, should take over Time Warner, the US’s second largest cable operator, and whether to allow those companies to set up fast lanes (and therefore slow lanes) for internet traffic, Chattanooga is proof that another path is possible.

It’s a story that is being watched very closely by Big Cable’s critics. “In DC there is often an attitude that the only way to solve our problems is to hand them over to big business. Chattanooga is a reminder that the best solutions are often local and work out better than handing over control to Comcast or AT&T to do whatever they want with us,” said Chris Mitchell, director of community broadband networks at advocacy group the Institute for Local Self-Reliance.

On Friday, the US cable industry called on the FCC to block Chattanooga’s plan to expand, as well as a similar plan for Wilson, North Carolina.

“The success of public broadband is a mixed record, with numerous examples of failures,” USTelecom said in a blog post. “With state taxpayers on the financial hook when a municipal broadband network goes under, it is entirely reasonable for state legislatures to be cautious in limiting or even prohibiting that activity.”

Mayor Berke has dealt with requests for visits from everyone from tiny rural communities to “humungous international cities”. “You don’t see many mid-sized cities that have the kind of activity that we have right now in Chattanooga,” he said. “What the Gig did was change the idea of what our city could be. Mid-sized southern cities are not generally seen as being ahead of the technological curve, the Gig changed that. We now have people coming in looking to us as a leader.”

It’s still early days but there have already been notable successes. In addition to Access America’s sale for an undisclosed sum, last year restaurant booking site OpenTable bought a local company, QuickCue, for $11.5m. “That’s a great example of a story that just doesn’t happen in other mid-sized southern cities,” said Berke.

But it’s what Chattanooga can do next that has the local tech community buzzed.

EPB’s high-speed network came about after it decided to set up a smart electric grid in order to cut power outages. EPB estimated it would take 10 years to build the system and raised $170m through a municipal bond to pay for it. In 2009 president Barack Obama launched the American Recovery and Reinvestment Act, a stimulus programme aimed at getting the US economy back on track amid the devastation of the recession. EPB was awarded $111m to get its smart grid up and running. Less than three years later the whole service territory was built.

The fibre-optic network uses IntelliRupter PulseClosers, made by S&C Electric, that can reroute power during outages. The University of California at Berkeley estimates that power outages cost the US economy $80bn a year through business disruption with manufacturers stopping their lines and restaurants closing. Chattanooga’s share of that loss was about $100m, EPB estimates. The smart grid can detect a fault in milliseconds and route power around problems. Since the system was installed the duration of power outages has been cut in half.

But it was the other uses of that fiber that fired up enthusiasm in Chattanooga. “When we first started talking about this and the uses of the smart grid we would say to customers and community groups ‘Oh and it can also offer very high-speed internet, TV and phone.’ The electric power stuff was no longer of interest. This is what people got excited about and it’s the same today,” said EPB vice president Danna Bailey.

Read the entire story here.

Image: Chattanooga, TN skyline. Courtesy of Wikipedia.

The IBM Songbook

IBM Songbook

It would be fascinating to see a Broadway or West End show based on lyrics penned in honor of IBM and Thomas Watson, Sr., its first president. Makes you wonder if faithful employees of say, Facebook or Apple, would ever write a songbook — not in jest — for their corporate alma mater. I think not.

From ars technica:

“For thirty-seven years,” reads the opening passage in the book, “the gatherings and conventions of our IBM workers have expressed in happy songs the fine spirit of loyal cooperation and good fellowship which has promoted the signal success of our great IBM Corporation in its truly International Service for the betterment of business and benefit to mankind.”

That’s a hell of a mouthful, but it’s only the opening volley in the war on self-respect and decency that is the 1937 edition of Songs of the IBM, a booklet of corporate ditties first published in 1927 on the order of IBM company founder Thomas Watson, Sr.

The 1937 edition of the songbook is a 54-page monument to glassy-eyed corporate inhumanity, with every page overflowing with trite praise to The Company and Its Men. The booklet reads like a terrible parody of a hymnal—one that praises not the traditional Christian trinity but the new corporate triumvirate of IBM the father, Watson the son, and American entrepreneurship as the holy spirit:

Thomas Watson is our inspiration,
Head and soul of our splendid I.B.M.
We are pledged to him in every nation,
Our President and most beloved man.
His wisdom has guided each division
In service to all humanity
We have grown and broadened with his vision,
None can match him or our great company.
T. J. Watson, we all honor you,
You’re so big and so square and so true,
We will follow and serve with you forever,
All the world must know what I. B. M. can do.

—from “To Thos. J. Watson, President, I.B.M. Our Inspiration”

The wording transcends sense and sanity—these aren’t songs that normal human beings would choose to line up and sing, are they? Have people changed so much in the last 70-80 years that these songs—which seem expressly designed to debase their singers and deify their subjects—would be joyfully sung in harmony without complaint at company meetings? Were workers in the 1920s and 1930s so dehumanized by the rampaging robber barons of high industry that the only way to keep a desirable corporate job at a place like IBM was to toe the line and sing for your paycheck?

Surely no one would stand for this kind of thing in the modern world—to us, company songs seem like relics of a less-enlightened age. If anything, the mindless overflowing trite words sound like the kind of praises one would find directed at a cult of personality dictator in a decaying wreck of a country like North Korea.

Indeed, some of the songs in the book wouldn’t be out of place venerating the Juche ideal instead of IBM:

We don’t pretend we’re gay.
We always feel that way,
Because we’re filling the world with sunshine.
With I.B.M. machines,
We’ve got the finest means,
For brightly painting the clouds with sunshine.

—from “Painting the Clouds with Sunshine”

Tie an onion to your belt

All right, time to come clean: it’s incredibly easy to cherry pick terrible examples out of a 77-year old corporate songbook (though this songbook makes it easy because of how crazy it is to modern eyes). Moreover, to answer one of the rhetorical questions above, no—people have not changed so much over the past 80-ish years that they could sing mawkishly pro-IBM songs with an irony-free straight face. At least, not without some additional context.

There’s a decade-old writeup on NetworkWorld about the IBM corporate song phenomenon that provides a lot of the glue necessary to build a complete mental picture of what was going on in both employees’ and leadership’s heads. The key takeaway to deflate a lot of the looniness is that the majority of the songs came out of the Great Depression era, and employees lucky enough to be steadfastly employed by a company like IBM often were really that grateful.

The formal integration of singing as an aspect of IBM’s culture at the time was heavily encouraged by Thomas J. Watson Sr. Watson and his employees co-opted the era’s showtunes and popular melodies for their proto-filking, ensuring that everyone would know the way the song went, if not the exact wording. Employees belting out “To the International Ticketograph Division” to the tune of “My Bonnie Lies Over the Ocean” (“In I.B.M. There’s a division. / That’s known as the Ticketograph; / It’s peopled by men who have vision, / Progressive and hard-working staff”) really isn’t all that different from any other team-building exercise that modern companies do—in fact, in a lot of ways, it’s far less humiliating than a company picnic with Mandatory Interdepartmental Three-Legged Races.

Many of the songs mirror the kinds of things that university students of the same time period might sing in honor of their alma mater. When viewed from the perspective of the Depression and post-Depression era, the singing is still silly—but it also makes a lot more sense. Watson reportedly wanted to inspire loyalty and cohesion among employees—and, remember, this was also an era where “normal” employee behavior was to work at a single company for most of one’s professional life, and then retire with a pension. It’s certainly a lot easier to sing a company’s praises if there’s paid retirement at the end of the last verse.

Read the entire article and see more songs here.

Image: Page 99-100 of the IBM Songbook, 1937. Courtesy of IBM / ars technica.

Don’t Hitchhike, Unless You’re a Robot

hitchbot

A Canadian is trying valiantly to hitchhike across the nation, from coast-to-coast — Nova Scotia to British Columbia. While others have made this trek before, this journey is peculiar in one respect. The intrepid hitchhiker is a child-sized robot. She or he — we don’t really know — is named hitchBOT.

hitchBOT is currently still in eastern Canada; New Brunswick, to be more precise. So one has to wonder if (s)he would have made better progress by commandeering one of Google’s driverless cars to make the 3,781-mile journey.

Read the entire story and follow hitchBOT’s progress across Canada here.

Image courtesy of hitchBOT / Independent.

Computer Generated Reality

[tube]nLtmEjqzg7M[/tube]

Computer games have come a very long way since the pioneering days of Pong and Pacman. Games are now so realistic that many are indistinguishable from the real-world characters and scenarios they emulate. It is a testament to the skill and ingenuity of hardware and software engineers and the creativity of developers who bring all the diverse underlying elements of a game together. Now, however, they have a match in the form of a computer system that is able to generate richly imagined and rendered worlds for use in the games themselves. It’s all done through algorithms.

From Technology Review:

Read the entire story here.

Video: No Man’s Sky. Courtesy of Hello Games.

Goostman Versus Turing

eugene-goostman

Some computer scientists believe that “Eugene Goostman” may have overcome the famous hurdle proposed by Alan Turing, by cracking the eponymous Turing Test. Eugene is a 13-year-old Ukrainian “boy” constructed from computer algorithms designed to feign intelligence and mirror human thought processes. During a text-based exchange Eugene managed to convince his human interrogators that he was a real boy — and thus his creators claim to have broken the previously impenetrable Turing barrier.

Other researchers and philosophers disagree: they claim that it’s easier to construct an artificial intelligence that converses in good, but limited, English — Eugene is Ukrainian after all — than it would be to build one that passes as a native anglophone adult. So, the Turing Test barrier may yet stand.
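
The article doesn’t describe Eugene’s internals, but chatbots in this tradition typically lean on simple pattern-matching and canned deflections rather than understanding. A toy sketch of that general technique (not Goostman’s actual code) might look like this; the deflect-and-redirect pattern shows up clearly in the transcripts below.

```python
import random
import re

# A toy rule-based chatbot in the ELIZA tradition: match a keyword,
# reply with a canned line, and deflect with a question when stumped.
# This illustrates the general technique only; Eugene Goostman's actual
# implementation has not been published.
RULES = [
    (r"\bhow are you\b", ["I'm fine. My guinea pig is fine too."]),
    (r"\bwhere\b.*\bfrom\b", ["I live in Odessa, in Ukraine."]),
    (r"\bmusic\b", ["I like Eminem, by the way."]),
]

DEFLECTIONS = [
    "By the way, I still don't know your specialty - or, possibly, I've missed it?",
    "And I forgot to ask you where you are from...",
]

def reply(message: str) -> str:
    """Return a canned response if a rule matches, otherwise change the subject."""
    text = message.lower()
    for pattern, responses in RULES:
        if re.search(pattern, text):
            return random.choice(responses)
    return random.choice(DEFLECTIONS)

print(reply("Hi, how are you?"))
print(reply("What was the last film you saw at the cinema?"))
```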

From the Guardian:

From 2001: a Space Odyssey to Her, the idea of an intelligent computer that can hold conversations with humans has long been a dream of science-fiction writers, but that fantasy may just have taken a step closer to becoming reality. On Saturday, a computer algorithm claiming to be a 13-year-old boy called Eugene Goostman passed the Turing test, it has been claimed. However, some commentators have suggested that pretending to be a 13-year-old Ukrainian child is not as difficult as pretending to be an Anglophone adult, and that the result should not count.

The program convinced 33% of the human judges at an event at the Royal Society that it was actually a human. The Turing test, which was devised in 1950 by the pioneering computer scientist and second world war codebreaker Alan Turing, is a classic benchmark in artificial intelligence – it determines a machine’s ability to show intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Transcripts of the conversations from the latest Royal Society test are not yet available but below are transcripts from a previous event at Bletchley Park on 23 June 2012, with the same program.

Can you tell Eugene isn’t human?

Test 1

[12:43:37] Judge: Hi, I’m Cathy.

[12:43:43] Eugene: Cathy! Nice to meet you! You’re a woman, if I’m not mistaken.

[12:44:20] Judge: Right. What is your name?

[12:44:29] Eugene: People call me Eugene. Or “Zhenya”. It is a short name for Eugene in Russia and Ukraine.

[12:45:17] Judge: Nice name. Are you from either area?

[12:45:41] Eugene: Agree 🙂 Oh, what a fruitful conversation;-)

[12:45:52] Judge: Do you really think so?

[12:46:04] Eugene: No, your guess was wrong. I don’t think so. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[12:46:56] Judge: What speciality? What’s yours?

[12:47:01] Eugene: Huh? And I forgot to ask you where you are from.

[12:47:51] Judge: The universe and you?

[12:48:01] Eugene: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

Test 2

[16:11:25] Judge: what music do you like?

[16:11:32] Eugene: I like Eminem – because he doesn’t have a thing to do with “polite correctness”.

[16:11:46] Judge: OMG i love eminem, he’s my favourite artist!

[16:12:00] Eugene: I wonder what Eminem’s nick mean. I associate it only with a word “enema”. But he is a cool guy, I like his The Real Slim Shady.

[16:12:55] Judge: i know what his nickname means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.

[16:13:06] Eugene: Eminem is a famous crapper. No, raper… ripper… Yeah, I mean – rapper. I like his Stan and The Real Slim Shady.

[16:13:32] Judge: I like Hailies Song.

[16:13:46] Eugene: Interesting tastes. I like science-fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?

[16:14:22] Judge: Epsom, home of the derby. yourself?

[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.

[16:14:41] Judge: i shopuld [sic] talk with him sometime.

[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.

[16:15:04] Judge: That’s very rude.

[16:15:09] Eugene: Ummm … Frankly, I didn’t get your question 🙁

[16:15:49] Judge: “Ummm” isnt a word. i think you need typing lessons

[16:15:58] Eugene: My thoughts are same. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Test 3

[16:06:31] Judge: Hi there, do you prefer to wear a dress or trousers to work?

[16:06:37] Eugene: I am a scholar. I’m too young to make money.

[16:07:32] Judge: What was the last film you saw at the cinema?

[16:07:38] Eugene: Huh? Could you tell me what are you? I mean your profession.

[16:08:11] Judge: I am an IT manager, and you?

[16:08:16] Eugene: Manager? So how is it to work as manager?

[16:09:13] Judge: Hard work but rewarding, what do you do?

[16:09:18] Eugene: I am only 13, so I’m attending school so far.

[16:09:51] Judge: Have you seen teletubbies?

[16:10:04] Eugene: No, I haven’t seen it, unfortunately. But I hope it is better than these crappy Star Wars. And I forgot to ask you where you are from …

[16:10:15] Judge: Brighton, and you?

Read the entire story and Eugene’s conversation with real humans here.

Image: A conversation with Eugene Goostman. Courtesy of BBC.

I Think, Therefore I am, Not Robot

Robbie_the_Robot_2006

A sentient robot is the long-held dream of both artificial intelligence researchers and science fiction authors. Yet some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
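
To see the information loss Maguire points to, here is a minimal sketch of the XOR gate described above (an illustration only, not code from the paper):

```python
# XOR gate: output is 0 when the two input bits match, 1 when they differ.
def xor(a: int, b: int) -> int:
    return a ^ b

# Truth table: two input bits are integrated into a single output bit.
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a}, B={b} -> output {xor(a, b)}")

# Knowing only that the output is 1 tells you the inputs differed, but not
# whether they were (0, 1) or (1, 0); likewise an output of 0 could mean
# (0, 0) or (1, 1). One bit about the input pair is irreversibly discarded,
# which is the sense in which "you have put in two bits, and you get one out."
```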

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robbie the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.

The (Space) Explorers Club

clangers

Thirteen private companies recently met in New York City to present their plans and ideas for their commercial space operations. Ranging from space tourism to private exploration of the Moon and asteroid mining, the companies gathered at the Explorers Club to herald a new phase of human exploration.

From Technology Review:

It was a rare meeting of minds. Representatives from 13 commercial space companies gathered on May 1 at a place dedicated to going where few have gone before: the Explorers Club in New York.

Amid the mansions and high-end apartment buildings just off Central Park, executives from space-tourism companies, rocket-making startups, and even a business that hopes to make money by mining asteroids for useful materials showed off displays and gave presentations.

The Explorers Club event provided a snapshot of what may be a new industry in the making. In an era when NASA no longer operates manned space missions and government funding for unmanned missions is tight, a host of startups—most funded by space enthusiasts with very deep pockets—have stepped up in hope of filling the gap. In the past few years, several have proved themselves. Elon Musk’s SpaceX, for example, delivers cargo to the International Space Station for NASA. Both Richard Branson’s Virgin Galactic and rocket-plane builder XCOR Aerospace plan to perform demonstrations this year that will help catapult commercial spaceflight from the fringe into the mainstream.

The advancements being made by space companies could matter to more than the few who can afford tickets to space. SpaceX has already shaken incumbents in the $190 billion satellite launch industry by offering cheaper rides into space for communications, mapping, and research satellites.

However, space tourism also looks set to become significantly cheaper. “People don’t have to actually go up for it to impact them,” says David Mindell, an MIT professor of aeronautics and astronautics and a specialist in the history of engineering. “At $200,000 you’ll have a lot more ‘space people’ running around, and over time that could have a big impact.” One direct result, says Mindell, may be increased public support for human spaceflight, especially “when everyone knows someone who’s been into space.”

Along with reporters, Explorer Club members, and members of the public who had paid the $75 to $150 entry fee, several former NASA astronauts were in attendance to lend their endorsements—including the MC for the evening, Michael López-Alegría, veteran of the space shuttle and the ISS. Also on hand, highlighting the changing times with his very presence, was the world’s first second-generation astronaut, Richard Garriott. Garriott’s father flew missions on Skylab and the space shuttle in the 1970s and 1980s, respectively. However, Garriott paid his own way to the International Space Station in 2008 as a private citizen.

The evening was a whirlwind of activity, with customer testimonials and rapid-fire displays of rocket launches, spacecraft in orbit, and space ships under construction and being tested. It all painted a picture of an industry on the move, with multiple companies offering services from suborbital experiences and research opportunities to flights to Earth orbit and beyond.

The event also offered a glimpse at the plans of several key players.

Lauren De Niro Pipher, head of astronaut relations at Virgin Galactic, revealed that the company’s founder plans to fly with his family aboard the Virgin Galactic SpaceShipTwo rocket plane in November or December of this year. The flight will launch the company’s suborbital spaceflight business, for which De Niro Pipher said more than 700 customers have so far put down deposits on tickets costing $200,000 to $250,000.

The director of business development for Blue Origin, Bretton Alexander, announced his company’s intention to begin test flights of its first full-scale vehicle within the next year. “We have not publicly started selling rides in space as others have,” said Alexander during his question-and-answer session. “But that is our plan to do that, and we look forward to doing that, hopefully soon.”

Blue Origin is perhaps the most secretive of the commercial spaceflight companies, typically revealing little of its progress toward the services it plans to offer: suborbital manned spaceflight and, later, orbital flight. Like Virgin, it was founded by a wealthy entrepreneur, in this case Amazon founder Jeff Bezos. The company, which is headquartered in Kent, Washington, has so far conducted at least one supersonic test flight and a test of its escape rocket system, both at its West Texas test center.

Also on hand was the head of Planetary Resources, Chris Lewicki, a former spacecraft engineer and manager for Mars programs at NASA. He showed off a prototype of his company’s Arkyd 100, an asteroid-hunting space telescope the size of a toaster oven. If all goes according to plan, a fleet of Arkyd 100s will first scan the skies from Earth orbit in search of nearby asteroids that might be rich in mineral wealth and water, to be visited by the next generation of Arkyd probes. Water is potentially valuable for future space-based enterprises as rocket fuel (split into its constituent elements of hydrogen and oxygen) and for use in life support systems. Planetary Resources plans to “launch early, launch often,” Lewicki told me after his presentation. To that end, the company is building a series of CubeSat-size spacecraft dubbed Arkyd 3s, to be launched from the International Space Station by the end of this year.

Andrew Antonio, experience manager at a relatively new company, World View Enterprises, showed a computer-generated video of his company’s planned balloon flights to the edge of space. A manned capsule will ascend to 100,000 feet, or about 20 miles up, from which the curvature of Earth and the black sky of space are visible. At $75,000 per ticket (reduced to $65,000 for Explorers Club members), the flight will be more affordable than competing rocket-powered suborbital experiences but won’t go as high. Antonio said his company plans to launch a small test vehicle “in about a month.”

XCOR’s director of payload sales and operations, Khaki Rodway, showed video clips of the company’s Lynx suborbital rocket plane coming together in Mojave, California, as well as a profile of an XCOR spaceflight customer. Hangared just down the flight line at the same air and space port where Virgin Galactic’s SpaceShipTwo is undergoing flight testing, the Lynx offers seating for one paying customer per flight at $95,000. XCOR hopes the Lynx will begin flying by the end of this year.

Read the entire article here.

Image: Still from the Clangers TV show. Courtesy of BBC / Smallfilms.

Intimate Anonymity

A new mobile app lets you share all your intimate details with a stranger for 20 days. The fascinating part of this social experiment is that the stranger remains anonymous throughout. The app known as 20 Day Stranger is brought to us by the venerable MIT Media Lab. It may never catch on, but you can be sure that psychologists are gleefully awaiting some data.

From Slate:

Social media is all about connecting with people you know, people you sort of know, or people you want to know. But what about all those people you didn’t know you wanted to know? They’re out there, too, and the new iPhone app 20 Day Stranger wants to put you in touch with them. Created by the MIT Media Lab’s Playful Systems research group, the app connects strangers and allows them to update each other about any and every detail of their lives for 20 days. But the people are totally anonymous and can interact directly only at the end of their 20 days together, when they can exchange one message each.

20 Day Stranger uses information from the iPhone’s sensors to alert your stranger-friend when you wake up (and start moving the phone), when you’re in a car or bus (from GPS tracking), and where you are. But it isn’t totally privacy-invading: The app also takes steps to keep both people anonymous. When it shows your stranger-friend that you’re walking around somewhere, it accompanies the notification with images from a half-mile radius of where you actually are on Google Maps. Your stranger-friend might be able to figure out what area you’re in, or they might not.
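The half-mile blur is easy to picture in code. The Media Lab hasn’t published its implementation, so the Python below is only a rough sketch of the general idea — report a random point within half a mile of the true position instead of the position itself (the function name and the example coordinates are mine, purely for illustration):

```python
import math
import random

HALF_MILE_M = 804.67  # half a mile, in metres


def blur_location(lat, lon, radius_m=HALF_MILE_M):
    """Return a random point within radius_m of (lat, lon).

    Uses a flat-Earth approximation, which is fine at sub-mile scales.
    """
    # sqrt() keeps the sampled points uniformly spread over the disc
    distance = radius_m * math.sqrt(random.random())
    bearing = random.uniform(0, 2 * math.pi)

    # Convert the metre offsets into degrees of latitude / longitude
    d_lat = (distance * math.cos(bearing)) / 111_320.0
    d_lon = (distance * math.sin(bearing)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + d_lat, lon + d_lon


if __name__ == "__main__":
    # Example coordinates, roughly Cambridge, MA
    print(blur_location(42.3601, -71.0942))
```

Because the offset is re-randomized on every report, repeated updates hover loosely around the true location without ever pinning it down — roughly the effect the app achieves by showing map imagery from a half-mile radius rather than an exact point.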

Kevin Slavin, the director of Playful Systems, explained to Fast Company that the app’s goal is to introduce people online in a positive and empathetic way, rather than one that’s filled with suspicion or doubt. Though 20 Day Stranger is currently being beta tested, Playful Systems’ goal is to generally release it in the App Store. But the group is worried about getting people to adopt it all over instead of building up user bases in certain geographic areas. “There’s no one type of person that will make it useful,” Slavin said. “It’s the heterogeneous quality of everyone in aggregate. Which is a bad [promotional] strategy if you’re making commercial software.”

At this point it’s not that rare to interact frequently with someone you’ve never met in person on social media. What’s unusual is not to know their name or anything about who they are. But an honest window into another person’s life without the pressure of identity could expand your worldview and maybe even stimulate introspection. It sounds like a step up from Secret, that’s for sure.

Read the entire article here.

Plastic, Heal Thyself!


Blood is a remarkable substance: it transports vital oxygen to nourish our cells, it carries signalling chemicals that control our actions, and it delivers armies of substances, at a moment’s notice, to ward off infection and repair injury. Now imagine a similar, biomimetic process in plastic, one that allows the material to heal itself.

From New Scientist:

If you prick it, does it not bleed? Puncture this plastic and it will heal itself with oozing fluids, in a process that mimics the way blood clots form to repair wounds. The plastic could one day be used to automatically patch holes in distant spacecraft or repair fighter jets on the fly.

So far, efforts to develop materials that fix themselves the way biological tissue mends itself have been limited. Scott White at the University of Illinois at Urbana-Champaign and his colleagues developed one of the first versions in 2001, but that material could only heal microscopic cracks.

Now his team have created a plastic lined with a type of artificial vascular system that can heal damage large enough to be visible to the naked eye.

The key is a pair of liquids that react when they are mixed. One fluid contains long, thin molecules and the other contains three-sided molecules. When the fluids mix, the molecules join together to create a scaffold, similar to the way blood platelets and fibrin proteins join to form a clot.

After a few minutes of contact, the liquids turn into a thick gel that fills the damaged area. Over a few hours, other ingredients within the fluids cause the gel to harden.

Strength from weakness

To test the concept, the team ran separate channels of each liquid through a plastic square and punctured it, creating a 4-millimetre hole with 35 millimetres of surrounding cracks. This also tore open the fluid channels.

Pumps on the edge of the plastic square squirted the fluids into the channels, where they oozed out and mixed, filling the hole and the radiating cracks within 20 minutes. The material hardened in about 3 hours, and the resulting patch was around 60 per cent as strong as the original plastic.

Holes larger than 8 millimetres proved more difficult to fill, as gravity caused the gel to sag before it could harden. The team thinks using foams in place of fluids would fill larger gaps, but they haven’t tested that idea yet.

Eventually, White and his team envision plastics with multiple criss-crossing channels, to ensure that the fluids always overlap with a damaged area. Embedding this synthetic vascular network would weaken the original material, but not by much, they say.

“You pay the price for being able to repair this damage, but it is certainly one that nature has figured out how to tolerate,” says team member Jeff Moore, also at the University of Illinois. “If you just look to things like bone or trees, they are all vascularised.”

Read the entire article here.

Image: Self-healing materials fix large-scale damage. Courtesy of University of Illinois at Urbana-Champaign.

Paper is the Next Big Thing


Luddites and technophobes, rejoice: paper-bound books may be with us for quite some time. And there may be some genuinely scientific reasons why physical books will remain. Recent research shows that people learn more effectively when reading from paper than from its digital offspring.

From Wired:

Paper books were supposed to be dead by now. For years, information theorists, marketers, and early adopters have told us their demise was imminent. Ikea even redesigned a bookshelf to hold something other than books. Yet in a world of screen ubiquity, many people still prefer to do their serious reading on paper.

Count me among them. When I need to read deeply—when I want to lose myself in a story or an intellectual journey, when focus and comprehension are paramount—I still turn to paper. Something just feels fundamentally richer about reading on it. And researchers are starting to think there’s something to this feeling.

To those who see dead tree editions as successors to scrolls and clay tablets in history’s remainder bin, this might seem like literary Luddism. But I e-read often: when I need to copy text for research or don’t want to carry a small library with me. There’s something especially delicious about late-night sci-fi by the light of a Kindle Paperwhite.

What I’ve read on screen seems slippery, though. When I later recall it, the text is slightly translucent in my mind’s eye. It’s as if my brain better absorbs what’s presented on paper. Pixels just don’t seem to stick. And often I’ve found myself wondering, why might that be?

The usual explanation is that internet devices foster distraction, or that my late-thirty-something brain isn’t that of a true digital native, accustomed to screens since infancy. But I have the same feeling when I am reading a screen that’s not connected to the internet and Twitter or online Boggle can’t get in the way. And research finds that kids these days consistently prefer their textbooks in print rather than pixels. Whatever the answer, it’s not just about habit.

Another explanation, expressed in a recent Washington Post article on the decline of deep reading, blames a sweeping change in our lifestyles: We’re all so multitasked and attention-fragmented that our brains are losing the ability to focus on long, linear texts. I certainly feel this way, but if I don’t read deeply as often or easily as I used to, it does still happen. It just doesn’t happen on screen, and not even on devices designed specifically for that experience.

Maybe it’s time to start thinking of paper and screens another way: not as an old technology and its inevitable replacement, but as different and complementary interfaces, each stimulating particular modes of thinking. Maybe paper is a technology uniquely suited for imbibing novels and essays and complex narratives, just as screens are for browsing and scanning.

“Reading is human-technology interaction,” says literacy professor Anne Mangen of Norway’s University of Stavanger. “Perhaps the tactility and physical permanence of paper yields a different cognitive and emotional experience.” This is especially true, she says, for “reading that can’t be done in snippets, scanning here and there, but requires sustained attention.”

Mangen is among a small group of researchers who study how people read on different media. It’s a field that goes back several decades, but yields no easy conclusions. People tended to read slowly and somewhat inaccurately on early screens. The technology, particularly e-paper, has improved dramatically, to the point where speed and accuracy aren’t now problems, but deeper issues of memory and comprehension are not yet well-characterized.

Complicating the scientific story further, there are many types of reading. Most experiments involve short passages read by students in an academic setting, and for this sort of reading, some studies have found no obvious differences between screens and paper. Those don’t necessarily capture the dynamics of deep reading, though, and nobody’s yet run the sort of experiment, involving thousands of readers in real-world conditions who are tracked for years on a battery of cognitive and psychological measures, that might fully illuminate the matter.

In the meantime, other research does suggest possible differences. A 2004 study found that students more fully remembered what they’d read on paper. Those results were echoed by an experiment that looked specifically at e-books, and another by psychologist Erik Wästlund at Sweden’s Karlstad University, who found that students learned better when reading from paper.

Wästlund followed up that study with one designed to investigate screen reading dynamics in more detail. He presented students with a variety of on-screen document formats. The most influential factor, he found, was whether they could see pages in their entirety. When they had to scroll, their performance suffered.

According to Wästlund, scrolling had two impacts, the most basic being distraction. Even the slight effort required to drag a mouse or swipe a finger requires a small but significant investment of attention, one that’s higher than flipping a page. Text flowing up and down a page also disrupts a reader’s visual attention, forcing eyes to search for a new starting point and re-focus.

Read the entire electronic article here.

Image: Leicester or Hammer Codex, by Leonardo da Vinci (1452-1519). Courtesy of Wikipedia / Public domain.

 

Neuromorphic Chips

Neuromorphic chips are here. But don’t worry: these are not the brain implants you might expect to see in a William Gibson or Iain Banks novel. Neuromorphic processors are designed to simulate brain function, and to learn or mimic certain types of human processes such as sensory perception, image processing, and object recognition. The field is making tremendous advances, with companies like Qualcomm — better known for its mobile and wireless chips — leading the charge. Until recently, such complex sensory and mimetic processes had been the exclusive realm of supercomputers.

From Technology Review:

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.

Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
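Qualcomm hasn’t published the internals of the Zeroth chips, but the basic building block of most neuromorphic designs is a spiking neuron: it integrates its input over time, leaks charge, and fires when a threshold is crossed. Purely as an illustration (the parameter values below are arbitrary, not Qualcomm’s), here is a minimal leaky integrate-and-fire neuron in Python:

```python
import numpy as np


def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    input_current: one input value per time step of length dt (seconds).
    Returns the membrane-potential trace and the spike times (as step indices).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks back toward rest while integrating the input
        v += (-(v - v_rest) + i_in) * (dt / tau)
        if v >= v_thresh:      # threshold crossed: emit a spike and reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes


if __name__ == "__main__":
    drive = np.full(1000, 1.5)              # constant input for 1 s at 1 ms steps
    _, spike_times = simulate_lif(drive)
    print(f"{len(spike_times)} spikes in 1 s of simulated time")
```

Neuromorphic hardware wires up millions of units like this directly in silicon and runs them in parallel, rather than simulating each one as a sequence of instructions on a conventional processor — which is broadly where the efficiency gains described above are expected to come from.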

Read the entire article here.

Nuclear Codes and Floppy Disks

Sometimes a good case can be made for remaining a technological Luddite; sometimes eschewing the latest-and-greatest technical gizmo may actually work for you.

 

Take the case of the United States’ nuclear deterrent. A recent report on CBS 60 Minutes showed us how part of the computer system responsible for launch control of US intercontinental ballistic missiles (ICBM) still uses antiquated 8-inch floppy disks. This part of the national defense is so old and arcane that it’s actually more secure than most contemporary computing systems and communications infrastructure. So, next time your internet-connected, cloud-based tablet or laptop gets hacked, consider reverting to a pre-1980s device.

From ars technica:

In a report that aired on April 27, CBS 60 Minutes correspondent Leslie Stahl expressed surprise that part of the computer system responsible for controlling the launch of the Minuteman III intercontinental ballistic missiles relied on data loaded from 8-inch floppy disks. Most of the young officers stationed at the launch control center had never seen a floppy disk before they became “missileers.”

An Air Force officer showed Stahl one of the disks, marked “Top Secret,” which is used with the computer that handles what was once called the Strategic Air Command Digital Network (SACDIN), a communication system that delivers launch commands to US missile forces. Beyond the floppies, a majority of the systems in the Wyoming US Air Force launch control center (LCC) Stahl visited dated back to the 1960s and 1970s, offering the Air Force’s missile forces an added level of cyber security, ICBM forces commander Major General Jack Weinstein told 60 Minutes.

“A few years ago we did a complete analysis of our entire network,” Weinstein said. “Cyber engineers found out that the system is extremely safe and extremely secure in the way it’s developed.”

However, not all of the Minuteman launch control centers’ aging hardware is an advantage. The analog phone systems, for example, often make it difficult for the missileers to communicate with each other or with their base. The Air Force commissioned studies on updating the ground-based missile force last year, and it’s preparing to spend $19 million this year on updates to the launch control centers. The military has also requested $600 million next year for further improvements.

Read the entire article here.

Image: Various floppy disks. Courtesy: George Chernilevsky, 2009 / Wikipedia.

Google: The Standard Oil of Our Age

Google’s aim to organize the world’s information sounds benign enough. But delve a little deeper into its research and development efforts or witness its boundless encroachment into advertising, software, phones, glasses, cars, home automation, travel, internet services, artificial intelligence, robotics, online shopping (and so on), and you may get a more uneasy and prickly sensation. Is Google out to organize information or you? Perhaps it’s time to begin thinking about Google as a corporate hegemon: not quite a monopoly yet, but so powerful that counter-measures become warranted.

An open letter, excerpted below, from Mathias Döpfner, CEO of Axel Springer AG, does us all a service by raising the alarm bells.

From the Guardian:

Dear Eric Schmidt,

As you know, I am a great admirer of Google’s entrepreneurial success. Google’s employees are always extremely friendly to us and to other publishing houses, but we are not communicating with each other on equal terms. How could we? Google doesn’t need us. But we need Google. We are afraid of Google. I must state this very clearly and frankly, because few of my colleagues dare do so publicly. And as the biggest among the small, perhaps it is also up to us to be the first to speak out in this debate. You yourself speak of the new power of the creators, owners, and users.

In the long term I’m not so sure about the users. Power is soon followed by powerlessness. And this is precisely the reason why we now need to have this discussion in the interests of the long-term integrity of the digital economy’s ecosystem. This applies to competition – not only economic, but also political. As the situation stands, your company will play a leading role in the various areas of our professional and private lives – in the house, in the car, in healthcare, in robotronics. This is a huge opportunity and a no less serious threat. I am afraid that it is simply not enough to state, as you do, that you want to make the world a “better place”.

Google lists its own products, from e-commerce to pages from its own Google+ network, higher than those of its competitors, even if these are sometimes of less value for consumers and should not be displayed in accordance with the Google algorithm. It is not even clearly pointed out to the user that these search results are the result of self-advertising. Even when a Google service has fewer visitors than that of a competitor, it appears higher up the page until it eventually also receives more visitors.

You know very well that this would result in long-term discrimination against, and weakening of, any competition, meaning that Google would be able to develop its superior market position still further. And that this would further weaken the European digital economy in particular.

This also applies to the large and even more problematic set of issues concerning data security and data utilisation. Ever since Edward Snowden triggered the NSA affair, and ever since the close relations between major American online companies and the American secret services became public, the social climate – at least in Europe – has fundamentally changed. People have become more sensitive about what happens to their user data. Nobody knows as much about its customers as Google. Even private or business emails are read by Gmail and, if necessary, can be evaluated. You yourself said in 2010: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” This is a remarkably honest sentence. The question is: are users happy with the fact that this information is used not only for commercial purposes – which may have many advantages, yet a number of spooky negative aspects as well – but could end up in the hands of the intelligence services, and to a certain extent already has?

Google is sitting on the entire current data trove of humanity, like the giant Fafner in The Ring of the Nibelung: “Here I lie and here I hold.” I hope you are aware of your company’s special responsibility. If fossil fuels were the fuels of the 20th century, then those of the 21st century are surely data and user profiles. We need to ask ourselves whether competition can generally still function in the digital age, if data is so extensively concentrated in the hands of one party.

There is a quote from you in this context that concerns me. In 2009 you said: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” The essence of freedom is precisely the fact that I am not obliged to disclose everything that I am doing, that I have a right to confidentiality and, yes, even to secrets; that I am able to determine for myself what I wish to disclose about myself. The individual right to this is what makes a democracy. Only dictatorships want transparent citizens instead of a free press.

Against this background, it greatly concerns me that Google – which has just announced the acquisition of drone manufacturer Titan Aerospace – has been seen for some time as being behind a number of planned enormous ships and floating working environments that can cruise and operate in the open ocean. What is the reason for this development? You don’t have to be a conspiracy theorist to find this alarming.

Historically, monopolies have never survived in the long term. Either they have failed as a result of their complacency, which breeds its own success, or they have been weakened by competition – both unlikely scenarios in Google’s case. Or they have been restricted by political initiatives.

Another way would be voluntary self-restraint on the part of the winner. Is it really smart to wait until the first serious politician demands the breakup of Google? Or even worse – until the people refuse to follow?

Sincerely yours,

Mathias Döpfner

Read the entire article here.

 

Mesh Networks: Coming to a Phone Near You


Soon you’ll be able to text and chat online without the need of a cellular network or the Internet. There is a catch, though: you’ll need yet another chat app for your smartphone, and you will need to be within 100 or so yards of your chatting friend. But this is just the beginning of so-called “mesh networks” that can be formed through peer-to-peer device connections, avoiding the need for cellular communications. As mobile devices continue to proliferate, such local, device-to-device connections could become more practical.

From Technology Review:

Mobile app stores are stuffed with messaging apps from WhatsApp to Tango and their many imitators. But FireChat, released last week for the iPhone, stands out. It’s the only one that can be used without cell-phone reception.

FireChat makes use of a feature Apple introduced in the latest version of its iOS mobile software, iOS7, called multipeer connectivity. This feature allows phones to connect to one another directly using Bluetooth or Wi-Fi as an alternative to the Internet. If you’re using FireChat, its “nearby” chat room lets you exchange messages with other users within 100 feet without sending data via your cellular provider.

Micha Benoliel, CEO and cofounder of startup Open Garden, which made FireChat, says the app shows how smartphones can be set free from cellular networks. He hopes to enable many more Internet-optional apps with the upcoming release of software tools that will help developers build FireChat-style apps for iPhone, or for Android, Mac, and Windows devices. “This approach is very interesting for multiplayer gaming and all kinds of communication apps,” says Benoliel.

Anthony DiPasquale, a developer with consultancy Thoughtbot, says FireChat is the only app he’s aware of that’s been built to make use of multipeer connectivity, perhaps because the feature remains unfamiliar to most Apple developers. “I hope more people start to use it soon,” he says. “It’s an awesome framework with a lot of potential. There is probably a great use for multipeer connectivity in every situation where there are people grouped together wanting to share some sort of information.” DiPasquale has dabbled in using multipeer connectivity himself, creating an experimental app that streams music from one device to several others nearby.

The new feature of iOS7 currently only supports data moving directly from one device to another, and from one device to several others. However, Open Garden’s forthcoming software will extend the feature so that data can hop between two iPhones out of range of one another via intermediary devices. That approach, known as mesh networking, is at the heart of several existing projects to create disaster-proof or community-controlled communications networks (see “Build Your Own Internet with Mobile Mesh Networking”).
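Open Garden hasn’t released its routing code, but the relaying idea is simple enough to sketch. In the toy Python below (all names are invented for illustration), every node rebroadcasts any message it hasn’t already seen, so data can hop from A to C via B even though A and C are never in direct radio range:

```python
from dataclasses import dataclass, field


@dataclass
class Message:
    msg_id: str
    payload: str
    ttl: int = 5                 # maximum number of hops before the message dies


@dataclass
class Node:
    name: str
    neighbors: list = field(default_factory=list)   # nodes within radio range
    seen: set = field(default_factory=set)
    inbox: list = field(default_factory=list)

    def receive(self, msg: Message):
        if msg.msg_id in self.seen or msg.ttl <= 0:
            return                                   # drop duplicates and expired messages
        self.seen.add(msg.msg_id)
        self.inbox.append(msg.payload)
        # Re-broadcast to everyone in range, with the hop budget decremented
        relayed = Message(msg.msg_id, msg.payload, msg.ttl - 1)
        for peer in self.neighbors:
            peer.receive(relayed)


if __name__ == "__main__":
    a, b, c = Node("A"), Node("B"), Node("C")
    a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]   # A <-> B <-> C
    a.receive(Message("m1", "hello from A"))
    print(c.inbox)   # ['hello from A'], even though A and C never touch directly
```

Real mesh protocols add encryption, smarter routing, and congestion control, but this store-and-rebroadcast loop is the essence of how a message can outrun the radio range of any single phone.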

Apps built to exploit such device-to-device schemes can offer security and privacy benefits over those that rely on the Internet. For example, messages sent using FireChat to nearby devices don’t pass through any systems operated by either Open Garden or a wireless carrier (although they are broadcast to all FireChat users nearby).

That means the content of a message and metadata could not be harvested from a central communications hub by an attacker or government agency. “This method of communication is immune to firewalls like the ones installed in China and North Korea,” says Mattt Thompson, a software engineer who writes the iOS and Mac development blog NSHipster. Recent revelations about large-scale surveillance of online services and the constant litany of data breaches make this a good time for apps that don’t rely on central servers, he says. “As users become more mindful of the security and privacy implications of technologies they rely on, moving in the direction of local, ad-hoc networking makes a lot of sense.”

However, peer-to-peer and mesh networking apps also come with their own risks, since an eavesdropper could gain access to local traffic just by using a device within range.

Read the entire article here.

Image courtesy of Open Garden.

Father of Distributed Computing

Distributed computing is a foundational element for most modern-day computing. It paved the way for processing to be shared across multiple computers and, nowadays, within the cloud. Most technology companies, including IBM, Google, Amazon, and Facebook, use distributed computing to provide highly scalable and reliable computing power for their systems and services. Yet, Bill Gates did not invent distributed computing, nor did Steve Jobs. In fact, it was pioneered in the mid-1970s by an unsung hero of computer science, Leslie Lamport. Now, aged 73, Lamport has been recognized with this year’s Turing Award.

From Technology Review:

This year’s winner of the Turing Award—often referred to as the Nobel Prize of computing—was announced today as Leslie Lamport, a computer scientist whose research made possible the development of the large, networked computer systems that power, among other things, today’s cloud and Web services. The Association for Computing Machinery grants the award annually, with an associated prize of $250,000.

Lamport, now 73 and a researcher with Microsoft, was recognized for a series of major breakthroughs that began in the 1970s. He devised algorithms that make it possible for software to function reliably even if it is running on a collection of independent computers or components that suffer from delays in communication or sometimes fail altogether.

That work, within a field now known as distributed computing, remains crucial to the sprawling data centers used by Internet giants, and is also involved in coördinating the multiple cores of modern processors in computers and mobile devices. Lamport talked to MIT Technology Review’s Tom Simonite about why his ideas have lasted.

Why is distributed computing important?

Distribution is not something that you just do, saying “Let’s distribute things.” The question is, “How do you get it to behave coherently?”

My Byzantine Generals work [on making software fault-tolerant, in 1980] came about because I went to SRI and had a contract to build a reliable prototype computer for flying airplanes for NASA. That used multiple computers that could fail, and so there you have a distributed system. Today there are computers in Palo Alto and Beijing and other places, and we want to use them together, so we build distributed systems. Computers with multiple processors inside are also distributed systems.

We no longer use computers like those you worked with in the 1970s and ’80s. Why have your distributed-computing algorithms survived?

Some areas have had enormous changes, but the aspect of things I was looking at, the fundamental notions of synchronization, are the same.

Running multiple processes on a single computer is very different from a set of different computers talking over a relatively slow network, for example. [But] when you’re trying to reason mathematically about their correctness, there’s no fundamental difference between the two systems.

I [developed] Paxos [in 1989] because people at DEC [Digital Equipment Corporation] were building a distributed file system. The Paxos algorithm is very widely used now. Look inside of Bing or Google or Amazon—where they’ve got rooms full of computers, they’ll probably be running an instance of Paxos.
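For readers curious about what Paxos actually does, here is a drastically simplified single-decree sketch in Python — no networking, no lost messages, and only one round shown — so treat it as an illustration of the prepare/promise and accept phases rather than a faithful rendering of Lamport’s algorithm:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Acceptor:
    promised: int = -1                     # highest proposal number promised so far
    accepted_n: int = -1                   # number of the proposal it has accepted
    accepted_value: Optional[str] = None   # value of the proposal it has accepted

    def prepare(self, n):
        """Phase 1: promise to ignore proposals numbered below n."""
        if n > self.promised:
            self.promised = n
            return True, self.accepted_n, self.accepted_value
        return False, None, None

    def accept(self, n, value):
        """Phase 2: accept the value unless a higher-numbered promise was made."""
        if n >= self.promised:
            self.promised = n
            self.accepted_n, self.accepted_value = n, value
            return True
        return False


def propose(acceptors, n, value):
    """Run one round of single-decree Paxos; return the chosen value or None."""
    promises = [a.prepare(n) for a in acceptors]
    granted = [(an, av) for ok, an, av in promises if ok]
    if len(granted) <= len(acceptors) // 2:
        return None                                  # no majority of promises
    # If any acceptor has already accepted a value, we are obliged to propose it
    prior = [(an, av) for an, av in granted if av is not None]
    if prior:
        value = max(prior)[1]
    accepts = sum(a.accept(n, value) for a in acceptors)
    return value if accepts > len(acceptors) // 2 else None


if __name__ == "__main__":
    cluster = [Acceptor() for _ in range(3)]
    print(propose(cluster, n=1, value="use replica B"))   # chosen by the majority
    print(propose(cluster, n=2, value="use replica C"))   # forced to keep the first value
```

The property worth noticing is in the last two lines: once a majority has accepted a value, any later proposal is forced to carry that same value forward. That is what lets a room full of machines keep agreeing on a single answer even as individual machines fail or lag.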

More recently, you have worked on ways to improve how software is built. What’s wrong with how it’s done now?

People seem to equate programming with coding, and that’s a problem. Before you code, you should understand what you’re doing. If you don’t write down what you’re doing, you don’t know whether you understand it, and you probably don’t if the first thing you write down is code. If you’re trying to build a bridge or house without a blueprint—what we call a specification—it’s not going to be very pretty or reliable. That’s how most code is written. Every time you’ve cursed your computer, you’re cursing someone who wrote a program without thinking about it in advance.

There’s something about the culture of software that has impeded the use of specification. We have a wonderful way of describing things precisely that’s been developed over the last couple of millennia, called mathematics. I think that’s what we should be using as a way of thinking about what we build.

Read the entire story here.

Image: Leslie Lamport, 2005. Courtesy of Wikipedia.

Building The 1,000 Mile Per Hour Car


First, start with a jet engine. Then, perhaps add a second for auxiliary power. And, while you’re at it, throw in a rocket engine as well for some extra thrust. Add aluminum wheels with no tires. Hire a fighter pilot to “drive” it. Oh, and name it Bloodhound SSC (Supersonic Car). You’re on your way! Visit the official Bloodhound website here.

From ars technica:

Human beings achieved many ‘firsts’ in the 20th century. We climbed the planet’s highest mountains, dived its deepest undersea trench, flew over it faster than the speed of sound, and even escaped it altogether in order to visit the moon. Beyond visiting Mars, it may feel like there are no more milestones left to reach. Yet people are still trying to push the envelope, even if they have to travel a little farther to get there.

Richard Noble is one such person. He’s spearheading a project called Bloodhound SSC that will visit uncharted territory on its way to a new land speed record on the far side of 1,000mph. The idea of a car capable of 1,000mph might sound ludicrous at first blush, but consider Noble’s credentials. The British businessman is responsible for previous land speed records in 1983 and 1997, the first of which came with him behind the wheel.

Bloodhound’s ancestors

Noble had been captivated by speed as a child after watching Cobb attempt to break a water speed record on Loch Ness in Scotland. Inspired by the achievements of fellow countrymen Campbell and Cobb, he wanted to reclaim the record for Britain. After building—and then crashing—one of the UK’s first jet-powered cars (Thrust 1), he acquired a surplus engine from an English Electric Lightning. The Lightning was Britain’s late-1950s interceptor, designed to shoot down Soviet bombers over the North Sea. It was built around two powerful Rolls Royce Avon engines that gave it astonishing performance for the time. Just one of these engines was sufficient to convince John Ackroyd to accept Noble’s job offer as Thrust 2’s designer, and work began on the car in 1978, albeit on a shoestring.

Thrust 2, now with a more powerful variant of the Avon engine, went to Bonneville at the end of September 1981. Until now, Noble had only driven the car on runways in the UK, never faster than 260mph. For two weeks the team built up speed at Bonneville before the rain arrived, flooding the lake and ending any record attempts for the year. Thrust 2 had peaked at 500mph, but Gabelich’s record would stand for a while longer. Thrust 2 returned the following September to again find Bonneville’s flats under several inches of water. Once it was clear that Bonneville was no good for anything other than hovercraft, the search was on for a new location.

Noble and Thrust 2 found themselves in the Black Rock desert in Nevada, now best known as the site of the Burning Man festival. Helpfully, the surface of the alkaline playa was much better suited to Thrust 2’s solid metal wheels. (At Bonneville these had cut ruts into the salt, requiring a new track for each run.) 1982 wasn’t to be Thrust 2’s year either, averaging 590mph and teaching Noble and his team a lot before the weather came and stopped things. Finally in 1983 everything went according to plan, and on October 4, Thrust 2 reached a peak speed of 650mph, setting a new world land speed record of 633.5mph.

It’s easy to see how the mindset required to successfully break a land speed record wouldn’t be satisfied just doing it once; it seems everyone comes back for another bite at the cherry. Noble was no exception. He knew that Breedlove was planning on taking back the record and that the American had a pair of General Electric J-79 engines with which to do so. 700mph was the next headline speed, with the speed of sound not much further away. Eager not to lose the record, Noble planned to defend it with Thrust 2’s successor, Thrust SSC (the initials stand for SuperSonic Car).

Thrust 2’s success came despite the lack of any significant aerodynamic design or refinement. Going supersonic meant that aerodynamics couldn’t be ignored any longer though. In 1992, Noble met the man who would design his new car, a retired aerodynamicist called Ron Ayers. Ayers would learn much on Thrust SSC—and another land speed car, 2006’s diesel-powered JCB Dieselmax—that would inform his design for Bloodhound SSC. At first though, he was reluctant to get involved. “The first thing I told him was he’d kill himself,” Ayers told Ars. Yet curiosity got the better of Ayers, and he began to see solutions for the various problems that at first made this look like an impossible challenge. A second chance meeting between Noble and Ayers followed, and before long Ayers was Thrust SSC’s concept designer and aerodynamicist.

Now, Ayers had the problem of working out what shape a supersonic car ought to take. That came from computational fluid dynamics (CFD). No one had attempted to use computer modeling to design a land speed record car until then, but even now no wind tunnels capable of supersonic speeds also feature a rolling road, necessary to accurately account for the effect of having wheels at those speeds. The University of Swansea in Wales created a CFD simulation of a supersonic vehicle, but “the problem was, at that time neither I nor anyone else trusted [CFD],” Ayers explained. His skepticism vanished following tests with scale models fired down a rocket sled track belonging to the UK Defense establishment (located at Pendine Sands, the site of many 1920s land speed records). The CFD data matched that from the rocket sled track to within a few percent, something that astonished both Ayers and the other aerodynamicists with whom he shared his findings.

Thrust SSC would use a pair of Rolls Royce Spey engines, taken from a British F-4 Phantom, mounted quite far forward on either side of the car, with the driver’s cockpit in-between. Together with a long, pointed nose and a T-shaped tail fin and stabilizer, Thrust SSC looked much more like a jet fighter with no wings than a car. Fittingly, the car got a driver to suit its looks. Land speed records aren’t cheap, something Noble (and probably every other record chaser) knew from bitter experience. He managed to scrape together enough funding to make three record attempts with Thrust 2 even though his attention was split between fund-raising and learning how to operate and control the car. For the sequel he wisely decided to leave the driving to someone else, concentrating his efforts on leading the project and raising the money. Thirty people applied for the job, a mix of drag racers and fighter pilots. The successful candidate was one of the latter, RAF Wing Commander Andy Green. Green had plenty of supersonic experience in RAF Phantoms and Tornados; he also had a daredevil streak, evident in his choice of hobbies.

By 1997 the car was ready for Black Rock Desert. So, too, were Breedlove and his Spirit of America, setting the stage for a transatlantic, transonic shoot-out. Spirit of America narrowly escaped disaster the previous year, turning sharply right at ~675mph and rolling onto its side in the process. 1997 was to be no kinder to the Americans. On October 15, a sonic boom announced to the world that Green (backed by Noble) was now the fastest man on earth. Thrust SSC set a two-way average of 763mph, or Mach 1.015, exactly 50 years and a day after the first Mach 1 flight.

Noble, Green, and Ayers set another land speed record in 2006, albeit with a much slower car. JCB Dieselmax set a new world record for a diesel-powered vehicle, reaching just over 350mph. Even though Bloodhound SSC will go much faster, Ayers told me they gathered a lot of useful knowledge then that is being applied to the current project.

Bloodhound SSC

A number of factors appear to be necessary for a land speed record attempt: a car with a sufficiently powerful engine, a suitable location, and someone motivated enough to raise the money to make it happen. A little bit of competition helps with the last of these. Breedlove, Green, and Arfons spurred each other on in the 1960s, and it was the threat of Breedlove going supersonic that sparked Thrust SSC. As you might expect, competition was also the original impetus behind Bloodhound SSC. Noble learned that Steve Fossett was planning a land speed record attempt. The ballooning adventurer bought Spirit of America from Breedlove in 2006, and he set his sights on 800mph. Noble needed a new car that incorporated the lessons learned from Thrusts 2 and SSC.

What makes the car go?

The key to any land speed record car is its engine, and Bloodhound SSC is no exception. Rather than depend on decades-old surplus, Noble and Green approached the UK government to see if they could help. “We thought we’d earned the right to do this properly with the right technology,” Noble told the UK’s Director magazine. The Ministry of Defense agreed on the condition that Bloodhound SSC be exciting enough a project to rekindle the interest in science and technology that Apollo or Concorde created in the 1960s and 1970s. In return for inspiring a new generation of engineers, Bloodhound SSC could have an EJ200 jet engine, a type more often found in the Eurofighter Typhoon.

Thrust SSC needed the combined thrust of two Spey jet engines to break the sound barrier. To go 30 percent faster, Bloodhound SSC will need more power than a single EJ200 can provide—at full reheat just over 20,000lbf (90 kN), roughly as much as one of the two engines on its predecessor (albeit at half the weight). The Bloodhound team decided upon rocket power for the remaining thrust. We asked Ayers why they opted for this approach, and he explained that it had several advantages over a pair of jets. For one thing, it needs only one air intake, meaning a lower drag design than Thrust SSC’s twin engines. To reach the kind of performance target Bloodhound SSC is aiming at with a pair of jets, it would require designing variable geometry air intakes. While this sort of engineering solution is used by fighter aircraft, it would add unnecessary cost, complexity, and weight to Bloodhound SSC. What’s more, a rocket can provide much more thrust for its size and weight than a jet. Finally, using rocket power means being able to accelerate much more rapidly, which should help limit the length of track needed.
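As a quick back-of-the-envelope check on those figures (the conversion factor is the standard one; the thrust numbers are the ones quoted above):

```python
LBF_TO_N = 4.448222     # newtons per pound-force

ej200_lbf = 20_000      # EJ200 thrust at full reheat, per the article
ej200_kn = ej200_lbf * LBF_TO_N / 1000
print(f"EJ200: {ej200_lbf} lbf = {ej200_kn:.0f} kN")      # ~89 kN, i.e. the ~90 kN quoted

# One EJ200 roughly matches one of Thrust SSC's two Spey engines,
# so the predecessor's combined thrust was roughly double that figure.
print(f"Thrust SSC (two engines): ~{2 * ej200_kn:.0f} kN")
```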

Read the entire article here.

Image: Bloodhound SCC. Courtesy of Bloodhound.

 

Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom suggests that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators, on the other hand, follow a different mantra: they invent in hallways and cubes — everywhere.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open plan, novelty strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).

Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifesciences, and Elon Musk’s companies, Tesla Motors and SpaceX, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.