Tag Archives: innovation

The Death of Permissionless Innovation


The internet and its user-friendly interface, the World Wide Web (the Web), were founded on the principle of openness. The acronym soup of standards, such as TCP/IP, HTTP and HTML, paved the way for unprecedented connectivity and interoperability. Anyone armed with a computer and a connection, and adhering to these standards, could connect, browse and share data with anyone else.
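That openness is concrete: any program that speaks the shared protocols can participate, no registration or gatekeeper required. As a small illustration (a sketch using only Python's standard library; the host name is just an example), here is what a minimal HTTP/1.1 exchange over a raw TCP socket looks like:

```python
import socket

def build_request(host: str, path: str = "/") -> bytes:
    """Compose a minimal, standards-compliant HTTP/1.1 request."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

def fetch(host: str, port: int = 80) -> bytes:
    """Open a TCP/IP connection and speak HTTP to any compliant server."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_request(host))
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

# The request itself is just agreed-upon plain text:
print(build_request("example.com").decode("ascii"))
```

Nothing here requires anyone's permission: the same bytes work against any web server that honors the standards, which is precisely the openness Berners-Lee gave away.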

This is a simplified view of Sir Tim Berners-Lee's vision for the Web in 1989 — the same year that brought us Seinfeld and The Simpsons. Berners-Lee invented the Web. His invention fostered an entire global technological and communications revolution over the next quarter-century.

However, Berners-Lee did something much more important. Rather than keeping the Web to himself and his colleagues, and turning to Silicon Valley to found and fund the next billion dollar startup, he pursued a path to give the ideas and technologies away. Critically, the open standards of the internet and Web enabled countless others to innovate and to profit.

One of the innovators to reap the greatest rewards from this openness is Facebook's Mark Zuckerberg. Yet, in the ultimate irony, Facebook has turned the Berners-Lee model of openness and permissionless innovation on its head. Its billion-plus users are members of a private, corporate-controlled walled garden. Innovation, to a large extent, is now limited by the whims of Facebook. Increasingly, open innovation on the internet is stifled and extinguished by constraints manufactured and controlled for Facebook's own ends. This makes Zuckerberg's vision of making the world "more open and connected" thoroughly laughable.

From the Guardian:

If there were a Nobel prize for hypocrisy, then its first recipient ought to be Mark Zuckerberg, the Facebook boss. On 23 August, all his 1.7 billion users were greeted by this message: “Celebrating 25 years of connecting people. The web opened up to the world 25 years ago today! We thank Sir Tim Berners-Lee and other internet pioneers for making the world more open and connected.”

Aw, isn’t that nice? From one “pioneer” to another. What a pity, then, that it is a combination of bullshit and hypocrisy. In relation to the former, the guy who invented the web, Tim Berners-Lee, is as mystified by this “anniversary” as everyone else. “Who on earth made up 23 August?” he asked on Twitter. Good question. In fact, as the Guardian pointed out: “If Facebook had asked Berners-Lee, he’d probably have told them what he’s been telling people for years: the web’s 25th birthday already happened, two years ago.”

“In 1989, I delivered a proposal to Cern for the system that went on to become the worldwide web,” he wrote in 2014. It was that year, not this one, that he said we should celebrate as the web’s 25th birthday.

It’s not the inaccuracy that grates, however, but the hypocrisy. Zuckerberg thanks Berners-Lee for “making the world more open and connected”. So do I. What Zuck conveniently omits to mention, though, is that he is embarked upon a commercial project whose sole aim is to make the world more “connected” but less open. Facebook is what we used to call a “walled garden” and now call a silo: a controlled space in which people are allowed to do things that will amuse them while enabling Facebook to monetise their data trails. One network to rule them all. If you wanted a vision of the opposite of the open web, then Facebook is it.

The thing that makes the web distinctive is also what made the internet special, namely that it was designed as an open platform. It was designed to facilitate “permissionless innovation”. If you had a good idea that could be realised using data packets, and possessed the programming skills to write the necessary software, then the internet – and the web – would do it for you, no questions asked. And you didn’t need much in the way of financial resources – or to ask anyone for permission – in order to realise your dream.

An open platform is one on which anyone can build whatever they like. It’s what enabled a young Harvard sophomore, name of Zuckerberg, to take an idea lifted from two nice-but-dim oarsmen, translate it into computer code and launch it on an unsuspecting world. And in the process create an empire of 1.7 billion subjects with apparently limitless revenues. That’s what permissionless innovation is like.

The open web enabled Zuckerberg to do this. But – guess what? – the Facebook founder has no intention of allowing anyone to build anything on his platform that does not have his express approval. Having profited mightily from the openness of the web, in other words, he has kicked away the ladder that elevated him to his current eminence. And the whole thrust of his company’s strategy is to persuade billions of future users that Facebook is the only bit of the internet they really need.

Read the entire article here.

Image: The NeXT Computer used by Tim Berners-Lee at CERN. Courtesy: Science Museum, London. GFDL CC-BY-SA.


What Keeps NASA Going?

Apollo 17 Commander Gene Cernan on lunar rover

Apollo astronaut Eugene Cernan is the last human to have set foot on a world other than Earth. It's been 44 years since he stepped off the moon. In fact, in 1972 he drove around in the lunar rover and found time to scribble his daughter's initials in the lunar dust. Since then, other than forays to the International Space Station (ISS) and trips to service the Hubble Space Telescope (HST), NASA has kept humans firmly rooted to the home planet.

Of course, in the intervening decades the space agency has not rested on its laurels. NASA has sent probes and robots all over the Solar System and beyond: Voyager to the gas giants and on to interstellar space; Dawn to visit asteroids; Rosetta (in concert with the European Space Agency) to visit a comet; SOHO and its countless cousins to keep an eye on our home star; Galileo and Pioneer to Jupiter; a fleet of spacecraft, including the Curiosity rover, to Mars; Messenger to map Mercury; Magellan to probe the clouds of Venus; Cassini to survey Saturn and its fascinating moons; and, of course, New Horizons to Pluto and beyond.

Spiral galaxies together with irregular galaxies make up approximately 60% of the galaxies in the local Universe. However, despite their prevalence, each spiral galaxy is unique — like snowflakes, no two are alike. This is demonstrated by the striking face-on spiral galaxy NGC 6814, whose luminous nucleus and spectacular sweeping arms, rippled with an intricate pattern of dark dust, are captured in this NASA/ESA Hubble Space Telescope image. NGC 6814 has an extremely bright nucleus, a telltale sign that the galaxy is a Seyfert galaxy. These galaxies have very active centres that can emit strong bursts of radiation. The luminous heart of NGC 6814 is a highly variable source of X-ray radiation, causing scientists to suspect that it hosts a supermassive black hole with a mass about 18 million times that of the Sun. As NGC 6814 is a very active galaxy, many regions of ionised gas are studded along  its spiral arms. In these large clouds of gas, a burst of star formation has recently taken place, forging the brilliant blue stars that are visible scattered throughout the galaxy.
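As an aside, that 18-million-solar-mass figure can be put into perspective with a quick back-of-envelope calculation (my own illustration, not from the caption), using the Schwarzschild radius formula r_s = 2GM/c²:

```python
# Schwarzschild radius r_s = 2GM/c^2 for the supermassive black hole
# suspected at the heart of NGC 6814 (illustrative arithmetic only).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

M = 18e6 * M_SUN          # ~18 million solar masses
r_s = 2 * G * M / c**2    # event-horizon radius, in meters

print(f"Schwarzschild radius: about {r_s / 1e9:.0f} million km")
# That is roughly a third of the Earth-Sun distance.
```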

Our mechanical human proxies reach out a little farther each day to learn more about our universe and our place in it. Exploration and discovery is part of our human DNA; it’s what we do. NASA is our vehicle. So, it’s good to see what NASA is planning. The agency just funded eight advanced-technology programs that officials believe may help transform space exploration. The grants are part of the NASA Innovative Advanced Concepts (NIAC) program. The most interesting, perhaps, are a program to evaluate inducing hibernation in Mars-bound astronauts, and an assessment of directed energy propulsion for interstellar travel.

Our science and technology become more and more like science fiction each day.

Read more about NIAC programs here.

Image 1: Apollo 17 mission commander Eugene A. Cernan makes a short checkout of the Lunar Roving Vehicle during the early part of the first Apollo 17 extravehicular activity at the Taurus-Littrow landing site. Courtesy: NASA.

Image 2: Hubble Spies a Spiral Snowflake, galaxy NGC 6814. Courtesy: NASA/ESA Hubble Space Telescope.


First, Order a Pizza. Second, World Domination


Tech startups that plan to envelop the globe with their never-thought-of-before-but-cannot-do-without technologies and services have to begin somewhere. Usually, the path to worldwide domination begins with pizza.

From the Washington Post:

In an ordinary conference room in this city of start-ups, a group of engineers sat down to order pizza in an entirely new way.

“Get me a pizza from Pizz’a Chicago near my office,” one of the engineers said into his smartphone. It was their first real test of Viv, the artificial-intelligence technology that the team had been quietly building for more than a year. Everyone was a little nervous. Then, a text from Viv piped up: “Would you like toppings with that?”

The engineers, eight in all, started jumping in: “Pepperoni.” “Half cheese.” “Caesar salad.” Emboldened by the result, they peppered Viv with more commands: Add more toppings. Remove toppings. Change medium size to large.

About 40 minutes later — and after a few hiccups when Viv confused the office address — a Pizz’a Chicago driver showed up with four made-to-order pizzas.

The engineers erupted in cheers as the pizzas arrived. They had ordered pizza, from start to finish, without placing a single phone call and without doing a Google search — without any typing at all, actually. Moreover, they did it without downloading an app from Domino’s or Grubhub.

Of course, a pizza is just a pizza. But for Silicon Valley, a seemingly small change in consumer behavior or design can mean a tectonic shift in the commercial order, with ripple effects across an entire economy. Engineers here have long been animated by the quest to achieve the path of least friction — to use the parlance of the tech world — to the proverbial pizza.

The stealthy, four-year-old Viv is among the furthest along in an endeavor that many in Silicon Valley believe heralds that next big shift in computing — and digital commerce itself. Over the next five years, that transition will turn smartphones — and perhaps smart homes and cars and other devices — into virtual assistants with supercharged conversational capabilities, said Julie Ask, an expert in mobile commerce at Forrester.

Powered by artificial intelligence and unprecedented volumes of data, they could become the portal through which billions of people connect to every service and business on the Internet. It’s a world in which you can order a taxi, make a restaurant reservation and buy movie tickets in one long unbroken conversation — no more typing, searching or even clicking.

Viv, which will be publicly demonstrated for the first time at a major industry conference on Monday, is one of the most highly anticipated technologies expected to come out of a start-up this year. But Viv is by no means alone in this effort. The quest to define the next generation of artificial-intelligence technology has sparked an arms race among the five major tech giants: Apple, Google, Microsoft, Facebook and Amazon.com have all announced major investments in virtual-assistant software over the past year.
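The heart of any such assistant is turning free-form utterances into a structured, evolving order. A toy sketch of that idea follows (hypothetical keyword matching; Viv's real engine is a learned, far more sophisticated parser):

```python
import re

# A toy intent-and-slot tracker for the pizza dialogue described above.
# Real assistants use trained language models; this keyword matcher only
# shows the shape of the order state being maintained across turns.
order = {"vendor": None, "size": "medium", "toppings": []}

def handle(utterance: str) -> None:
    text = utterance.lower()
    if m := re.search(r"pizza from ([\w' ]+?) near", text):
        order["vendor"] = m.group(1)
    if m := re.search(r"change \w+ size to (\w+)", text):
        order["size"] = m.group(1)
    if m := re.search(r"add (\w+)", text):
        order["toppings"].append(m.group(1))
    if m := re.search(r"remove (\w+)", text):
        if m.group(1) in order["toppings"]:
            order["toppings"].remove(m.group(1))

for line in [
    "Get me a pizza from Pizz'a Chicago near my office",
    "Add pepperoni",
    "Change medium size to large",
]:
    handle(line)

print(order)  # vendor captured, size upgraded, pepperoni added
```

The hard part, of course, is not the state dictionary but reliably extracting intents from arbitrary speech, which is where the AI arms race described below comes in.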

Read the entire story here.

Image courtesy of Google Search.


Practice May Make You Perfect, But Not Creative

Practice will help you improve in a field with well-defined and well-developed tasks, processes and rules. This includes areas like sports and musicianship. Though, keep in mind that it may indeed take some accident of genetics to be really good at one of these disciplines in the first place.

But don't expect practice to make you better in all areas of life, particularly in creative endeavors. Creativity stems from original thought, not replicable behavior. Scott Kaufman, director of the Imagination Institute at the University of Pennsylvania, reminds us of this in a recent book review. The authors of Peak: Secrets from the New Science of Expertise, psychologist Anders Ericsson and journalist Robert Pool, review a swath of research on human learning and skill acquisition and conclude that deliberate, well-structured practice can help anyone master new skills. I think we can all agree with this conclusion.

But, like Kaufman, I believe that many creative "skills" lie in an area of human endeavor that is firmly beyond the reach of practice. Practice will most certainly help an artist hone and improve her brushstrokes; but practice alone will not bring forth her masterpiece. So, here is a brief summary of the 12 key elements that Kaufman distilled from over 50 years of research into creativity:

Excerpts from Creativity Is Much More Than 10,000 Hours of Deliberate Practice by Scott Kaufman:

  1. Creativity is often blind. If only creativity was all about deliberate practice… in reality, it’s impossible for creators to know completely whether their new idea or product will be well received.
  2. Creative people often have messy processes. While expertise is characterized by consistency and reliability, creativity is characterized by many false starts and lots and lots of trial-and-error.
  3. Creators rarely receive helpful feedback. When creators put something novel out into the world, the reactions are typically either acclaim or rejection.
  4. The “10-Year Rule” is not a rule. The idea that it takes 10 years to become a world-class expert in any domain is not a rule. [This is the so-called Ericsson rule from his original paper on deliberate practice amongst musicians.]
  5. Talent is relevant to creative accomplishment. If we define talent as simply the rate at which a person acquires expertise, then talent undeniably matters for creativity.
  6. Personality is relevant. Not only does the speed of expertise acquisition matter, but so do a whole host of other traits. People differ from one another in a multitude of ways… At the very least, research has shown that creative people do tend to have a greater inclination toward nonconformity, unconventionality, independence, openness to experience, ego strength, risk taking, and even mild forms of psychopathology.
  7. Genes are relevant. [M]odern behavioral genetics has discovered that virtually every single psychological trait — including the inclination and willingness to practice — is influenced by innate genetic endowment.
  8. Environmental experiences also matter. [R]esearchers have found that many other environmental experiences substantially affect creativity — including socioeconomic origins, and the sociocultural, political, and economic context in which one is raised.
  9. Creative people have broad interests. While the deliberate practice approach tends to focus on highly specialized training… creative experts tend to have broader interests and greater versatility compared to their less creative expert colleagues.
  10. Too much expertise can be detrimental to creative greatness. The deliberate practice approach assumes that performance is a linear function of practice. Some knowledge is good, but too much knowledge can impair flexibility.
  11. Outsiders often have a creative advantage. If creativity were all about deliberate practice, then outsiders who lack the requisite expertise shouldn’t be very creative. But many highly innovative individuals were outsiders to the field in which they contributed. Many marginalized people throughout history — including immigrants — came up with highly creative ideas not in spite of their experiences as an outsider, but because of their experiences as an outsider.
  12. Sometimes the creator needs to create a new path for others to deliberately practice. Creative people are not just good at solving problems, however. They are also good at finding problems.

In my view the most salient of Kaufman's dozen ingredients for creativity are #11 and #12 — and I can personally attest to their importance: fresh ideas are more likely to come from outsiders, and creativity in one domain often stems from experiences in another, unrelated realm.

Read Kaufman’s enlightening article in full here.


Re-Innovation: Silicon Valley’s Trivial Pursuit Problem

I read an increasing number of articles like the one excerpted below, and they cause me to sigh with exasperation yet again. Is Silicon Valley — that supposed beacon of global innovation — in danger of becoming a drainage ditch of regurgitated sameness, of me-too banality?

It's frustrating to watch many of our self-proclaimed brightest tech minds re-package colorful "new" solutions to our local trivialities, over and over. So, here we are, celebrating the arrival of the "next big thing": the next tech unicorn with a valuation above $1 billion, which proposes to upend and improve all our lives, yet again.

DoorDash. Seamless. Deliveroo. HelloFresh. HomeChef. SpoonRocket. Sprig. GrubHub. Instacart. These are all great examples of too much money chasing too few truly original ideas. I hope you'll agree: a cool compound name is a cool compound name, but it does not innovation make. By the way, whatever happened to Webvan?

Where are my slippers? Yawn.

From Wired:

Founded in 2013, DoorDash is a food delivery service. It’s also the latest startup to be eying a valuation of more than $1 billion. DoorDash already raised $40 million in March; according to Bloomberg, it may soon reap another round of funding that would put the company in the same lofty territory as Uber, Airbnb, and more than 100 other so-called unicorns.

Not that DoorDash is doing anything terribly original. Startups bringing food to your door are everywhere. There's Instacart, which wants to shop for groceries for you. Deliveroo and Postmates, like DoorDash, are looking to overtake Seamless as the way we get takeout at home. Munchery, SpoonRocket, and Sprig offer pre-made meals. Blue Apron, Gobble, HelloFresh, and HomeChef deliver ingredients to make the food ourselves. For the moment, investors are giddily rushing to subsidize this race to our doors. But skeptics say that the payout those investors are banking on might never come.

Even in a crowded field, funding for these delivery startups continues to grow. CB Insights, a research group that tracks startup investments, said this summer that the sector was “starting to get a little crowded.” Last year, venture-backed food delivery startups based in the US reaped more than $1 billion in equity funding; during first half of this year, they pulled in $750 million more, CB Insights found.

The enormous waves of funding may prove money poorly spent if Silicon Valley finds itself in a burst bubble. Bill Gurley, the well-known investor and a partner at venture firm Benchmark, believes delivery startups may soon be due for a rude awakening. Unlike the first dotcom bubble, he said, smartphones might offer help, because startups are able to collect more data. But he compared the optimism investors are showing for such low-margin operations to the misplaced enthusiasms of 1999. "It's the same shit," Gurley said during a recent appearance. (Gurley's own investment in the food delivery service GrubHub went public in April 2014 and is now valued at more than $2.2 billion.)

Read the entire article here.



Back to the Future

Just over a hundred years ago, at the turn of the 20th century, Jean-Marc Côté and some of his fellow French artists were commissioned to imagine what the world would look like in 2000. Their colorful sketches and paintings portrayed some interesting inventions, though all seem to be grounded in familiar principles and incremental innovations — mechanical helpers, ubiquitous propellers and wings. Interestingly, none of these artist-futurists imagined a world beyond Victorian dress, gender inequality and war. But these are gems nonetheless.

Some of their works found their way into cigar boxes and cigarette cases; others were exhibited at the 1900 World Exhibition in Paris. My three favorites: the Tailor of the Latest Fashion, the Aero-cab Station and the Whale Bus. See the full complement of these remarkable futuristic visions at the Public Domain Review, and check out the House Rolling Through the Countryside and At School.

I suspect our contemporary futurists — born in the late 20th or early 21st century — would fall prey to the same narrow visions if asked to sketch our planet in 3000. Despite the undoubted wealth of new gadgets and gizmos a thousand years from now, the real test would be whether their imagined worlds are at peace, with equality for all.
Images courtesy of the Public Domain Review, a project of the Open Knowledge Foundation. Public Domain.



The Tech Emperor Has No Clothes


Bill Hewlett. David Packard. Bill Gates. Paul Allen. Steve Jobs. Larry Ellison. Gordon Moore. Tech titans. Moguls of the microprocessor. Their names hold a key place in the founding and shaping of our technological evolution. That they catalyzed and helped create entire economic sectors is beyond doubt. Yet a deeper, objective analysis of market innovation shows that the view of the lone great man (or two) — combating and succeeding against all comers — may be more self-perpetuating myth than reality. The idea that a single, visionary individual drives history and shapes the future is itself a long and enduring invention.

From Technology Review:

Since Steve Jobs’s death, in 2011, Elon Musk has emerged as the leading celebrity of Silicon Valley. Musk is the CEO of Tesla Motors, which produces electric cars; the CEO of SpaceX, which makes rockets; and the chairman of SolarCity, which provides solar power systems. A self-made billionaire, programmer, and engineer—as well as an inspiration for Robert Downey Jr.’s Tony Stark in the Iron Man movies—he has been on the cover of Fortune and Time. In 2013, he was first on the Atlantic’s list of “today’s greatest inventors,” nominated by leaders at Yahoo, Oracle, and Google. To believers, Musk is steering the history of technology. As one profile described his mystique, his “brilliance, his vision, and the breadth of his ambition make him the one-man embodiment of the future.”

Musk’s companies have the potential to change their sectors in fundamental ways. Still, the stories around these advances—and around Musk’s role, in particular—can feel strangely outmoded.

The idea of "great men" as engines of change grew popular in the 19th century. In 1840, the Scottish philosopher Thomas Carlyle wrote that "the history of what man has accomplished in this world is at bottom the history of the Great Men who have worked here." It wasn't long, however, before critics questioned this one-dimensional view, arguing that historical change is driven by a complex mix of trends and not by any one person's achievements. "All of those changes of which he is the proximate initiator have their chief causes in the generations he descended from," Herbert Spencer wrote in 1873. And today, most historians of science and technology do not believe that major innovation is driven by "a lone inventor who relies only on his own imagination, drive, and intellect," says Daniel Kevles, a historian at Yale. Scholars are "eager to identify and give due credit to significant people but also recognize that they are operating in a context which enables the work." In other words, great leaders rely on the resources and opportunities available to them, which means they do not shape history as much as they are molded by the moments in which they live.

Musk’s success would not have been possible without, among other things, government funding for basic research and subsidies for electric cars and solar panels. Above all, he has benefited from a long series of innovations in batteries, solar cells, and space travel. He no more produced the technological landscape in which he operates than the Russians created the harsh winter that allowed them to vanquish Napoleon. Yet in the press and among venture capitalists, the great-man model of Musk persists, with headlines citing, for instance, “His Plan to Change the Way the World Uses Energy” and his own claim of “changing history.”

The problem with such portrayals is not merely that they are inaccurate and unfair to the many contributors to new technologies. By warping the popular understanding of how technologies develop, great-man myths threaten to undermine the structure that is actually necessary for future innovations.

Space cowboy

Elon Musk, the best-selling biography by business writer Ashlee Vance, describes Musk’s personal and professional trajectory—and seeks to explain how, exactly, the man’s repeated “willingness to tackle impossible things” has “turned him into a deity in Silicon Valley.”

Born in South Africa in 1971, Musk moved to Canada at age 17; he took a job cleaning the boiler room of a lumber mill and then talked his way into an internship at a bank by cold-calling a top executive. After studying physics and economics in Canada and at the Wharton School of the University of Pennsylvania, he enrolled in a PhD program at Stanford but opted out after a couple of days. Instead, in 1995, he cofounded a company called Zip2, which provided an online map of businesses—“a primitive Google maps meets Yelp,” as Vance puts it. Although he was not the most polished coder, Musk worked around the clock and slept “on a beanbag next to his desk.” This drive is “what the VCs saw—that he was willing to stake his existence on building out this platform,” an early employee told Vance. After Compaq bought Zip2, in 1999, Musk helped found an online financial services company that eventually became PayPal. This was when he “began to hone his trademark style of entering an ultracomplex business and not letting the fact that he knew very little about the industry’s nuances bother him,” Vance writes.

When eBay bought PayPal for $1.5 billion, in 2002, Musk emerged with the wherewithal to pursue two passions he believed could change the world. He founded SpaceX with the goal of building cheaper rockets that would facilitate research and space travel. Investing over $100 million of his personal fortune, he hired engineers with aeronautics experience, built a factory in Los Angeles, and began to oversee test launches from a remote island between Hawaii and Guam. At the same time, Musk cofounded Tesla Motors to develop battery technology and electric cars. Over the years, he cultivated a media persona that was “part playboy, part space cowboy,” Vance writes.

Musk sells himself as a singular mover of mountains and does not like to share credit for his success. At SpaceX, in particular, the engineers “flew into a collective rage every time they caught Musk in the press claiming to have designed the Falcon rocket more or less by himself,” Vance writes, referring to one of the company’s early models. In fact, Musk depends heavily on people with more technical expertise in rockets and cars, more experience with aeronautics and energy, and perhaps more social grace in managing an organization. Those who survive under Musk tend to be workhorses willing to forgo public acclaim. At SpaceX, there is Gwynne Shotwell, the company president, who manages operations and oversees complex negotiations. At Tesla, there is JB Straubel, the chief technology officer, responsible for major technical advances. Shotwell and Straubel are among “the steady hands that will forever be expected to stay in the shadows,” writes Vance. (Martin Eberhard, one of the founders of Tesla and its first CEO, arguably contributed far more to its engineering achievements. He had a bitter feud with Musk and left the company years ago.)

Likewise, Musk’s success at Tesla is undergirded by public-sector investment and political support for clean tech. For starters, Tesla relies on lithium-ion batteries pioneered in the late 1980s with major funding from the Department of Energy and the National Science Foundation. Tesla has benefited significantly from guaranteed loans and state and federal subsidies. In 2010, the company reached a loan agreement with the Department of Energy worth $465 million. (Under this arrangement, Tesla agreed to produce battery packs that other companies could benefit from and promised to manufacture electric cars in the United States.) In addition, Tesla has received $1.29 billion in tax incentives from Nevada, where it is building a “gigafactory” to produce batteries for cars and consumers. It has won an array of other loans and tax credits, plus rebates for its consumers, totaling another $1 billion, according to a recent series by the Los Angeles Times.

It is striking, then, that Musk insists on a success story that fails to acknowledge the importance of public-sector support. (He called the L.A. Times series “misleading and deceptive,” for instance, and told CNBC that “none of the government subsidies are necessary,” though he did admit they are “helpful.”)

If Musk's unwillingness to look beyond himself sounds familiar, Steve Jobs provides a recent antecedent. Like Musk, who obsessed over Tesla cars' door handles and touch screens and the layout of the SpaceX factory, Jobs brought a fierce intensity to product design, even if he did not envision the key features of the Mac, the iPod, or the iPhone. An accurate version of Apple's story would give more acknowledgment not only to the work of other individuals, from designer Jonathan Ive on down, but also to the specific historical context in which Apple's innovation occurred. "There is not a single key technology behind the iPhone that has not been state funded," says economist Mariana Mazzucato. This includes the wireless networks, "the Internet, GPS, a touch-screen display, and … the voice-activated personal assistant Siri." Apple has recombined these technologies impressively. But its achievements rest on many years of public-sector investment. To put it another way, do we really think that if Jobs and Musk had never come along, there would have been no smartphone revolution, no surge of interest in electric vehicles?

Read the entire story here.

Image: Titan Oceanus. Trevi Fountain, Rome. Public Domain.


A Patent to End All Patents

You've seen the "we'll help you file your patent application" infomercials on late night cable. The underlying promise is simple: your unique invention will find its way into every household on Earth, thrusting you into the financial stratosphere and making you the planet's first gazillionaire. Of course, this will happen only after you part with your hard-earned cash for help in filing the patent. Incidentally, filing a patent with the US Patent and Trademark Office (USPTO), with professional help, usually starts at around $10,000-$15,000.

Some patents are truly extraordinary in their optimistic silliness: a wind-harnessing bicycle, an apparatus for simulating a high-five, a flatulence deodorizer, a jet-powered surfboard, a thong diaper, a life-size interactive bowl of soup, nicotine-infused coffee, edible business cards, magnetic rings to promote immortality, and so it goes. Remember, though, this is the United States, where most crazy things are possible and profitable. So, you could well find yourself addicted to those 20 oz nicotine-infused lattes each time you pull up at the local coffee shop on your jet-powered surfboard.

But perhaps the most thoroughly earnest and wacky recent patent filing comes from Boeing, no less. It's for a laser-powered fusion-fission jet engine. The engine uses ultra-high-powered lasers to fuse pellets of hydrogen; the resulting fast neutrons cause uranium to fission, which generates heat and, subsequently, electricity. All of this to power your next flight to Seattle. So, the next time you fly on a Boeing aircraft, keep in mind what some of the company's engineers have in store for you 100 or 1,000 years from now. I think I'd prefer to be disassembled and beamed up.

From ars technica:

Assume the brace position: Boeing has received a patent for, I kid you not, a laser-powered fusion-fission jet propulsion system. Boeing envisions that this system could replace both rocket and turbofan engines, powering everything from spacecraft to missiles to airplanes.

The patent, US 9,068,562, combines inertial confinement fusion, fission, and a turbine that generates electricity. It sounds completely crazy because it is. Currently, this kind of engine is completely unrealistic given our mastery of fusion, or rather our lack thereof. Perhaps in the future (the distant, distant future that is), this could be a rather ingenious solution. For now, it’s yet another patent head-scratcher.

To begin with, imagine the silhouette of a big turbofan engine, like you’d see on a commercial jetliner. Somewhere in the middle of the engine there is a fusion chamber, with a number of very strong lasers focused on a single point. A hohlraum (pellet) containing a mix of deuterium and tritium (hydrogen isotopes) is placed at this focal point. The lasers are all turned on at the same instant, creating massive pressure on the pellet, which implodes and causes the hydrogen atoms to fuse. (This is called inertial confinement fusion, as opposed to the magnetic confinement fusion that is carried out in a tokamak.)

According to the patent, the hot gases produced by the fusion are pushed out of a nozzle at the back of the engine, creating thrust—but that’s not all! One of the by-products of hydrogen fusion is lots of fast neutrons. In Boeing’s patented design, there is a shield around the fusion chamber that’s coated with a fissionable material (uranium-238 is one example given). The neutrons hit the fissionable material, causing a fission reaction that generates lots of heat.

Finally, there’s some kind of heat exchanger system that takes the heat from the fission reaction and uses that heat (via a heated liquid or gas) to drive a turbine. This turbine generates the electricity that powers the lasers. Voilà: a fusion-fission rocket engine thing.
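The closed loop the excerpt describes can be sketched as a toy energy ledger: fusion pulse, neutron capture, fission heat, turbine electricity, and back to the lasers. Every figure and efficiency below is an invented placeholder purely to make the flow of energy explicit; the patent specifies no such numbers.

```python
# Toy energy ledger for the patented cycle. All values are hypothetical.
fusion_energy_j = 1.0e6        # energy released per pellet pulse (invented)
neutron_fraction = 0.8         # share of D-T fusion energy carried by fast neutrons
fission_multiplier = 2.0       # extra heat from the U-238 coating per unit neutron energy
turbine_efficiency = 0.35      # heat-to-electricity conversion (invented)
laser_wall_plug = 0.1          # electricity-to-laser-light efficiency (invented)

neutron_energy = fusion_energy_j * neutron_fraction
fission_heat = neutron_energy * fission_multiplier
electricity = fission_heat * turbine_efficiency
laser_light = electricity * laser_wall_plug

# For the cycle to be self-sustaining, the laser light available per pulse
# must exceed the energy needed to ignite the next pellet -- a bar no
# inertial-fusion experiment has yet cleared from wall-plug electricity.
print(f"laser light available per pulse: {laser_light:.0f} J")
```

Even with these generous made-up efficiencies, only a small fraction of each pulse returns as laser light, which is one way to see why the design is so far-fetched.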

Let’s talk a little bit about why this is such an outlandish idea. To begin with, this patented design involves placing a lump of material that’s made radioactive in an airplane engine—and these vehicles are known to sometimes crash. Today, the only way we know of efficiently harvesting energy from radioactive decay is a giant power plant, and we cannot get inertial fusion to fire more than once in a reasonable amount of time (much less on the short timescales needed to maintain thrust). This process requires building-sized lasers, like those found at the National Ignition Facility in California. Currently, the technique works only poorly. Those two traits are not conducive to air travel.

But this is the USA we’re talking about, where patents can be issued on firewalls (“being wielded in one of most outrageous trolling campaigns we have ever seen,” according to the EFF) and universities can claim such rights on “agent-based collaborative recognition-primed decision-making” (EFF: “The patent reads a little like what might result if you ate a dictionary filled with buzzwords and drank a bottle of tequila”). As far as patented products go, it is pretty hard to imagine this one actually being built in the real world. Putting aside the difficulties of inertial confinement fusion (we’re nowhere near hitting the break-even point), it’s also a bit far-fetched to shoehorn all of these disparate and rather difficult-to-work-with technologies into a small chassis that hangs from the wing of a commercial airplane.

Read the entire story here.



Innovating the Disruption Or Disrupting the Innovation

Corporate America has a wonderful knack of embracing a meaningful idea and then overusing it to such an extent that it becomes thoroughly worthless. Until recently, every advertiser, every manufacturer, every service, shamelessly promoted itself as an innovator. Everything a company did was driven by innovation: employees succeeded by innovating; the CEO was innovation incarnate; products were innovative; new processes drove innovation — in fact, the processes themselves were innovative. Any business worth its salt produced completely innovative stuff from cupcakes to tires, from hair color to drill bits, from paper towels to hoses. And consequently this overwhelming ocean of innovation — which upon closer inspection actually isn’t real innovation — becomes worthless, underwhelming drivel.

So, what next for corporate America? Well, latch on to the next meme of course — disruption. Yawn.


HBO’s Silicon Valley is back, with its pitch-perfect renderings of the culture and language of the tech world — like at the opening of the “Disrupt” startup competition run by the Tech Crunch website at the end of last season. “We’re making the world a better place through scalable fault-tolerant distributed databases” — the show’s writers didn’t have to exercise their imagination much to come up with those little arias of geeky self-puffery, or with the name Disrupt, which, as it happens, is what the Tech Crunch conferences are actually called. As is most everything else these days. “Disrupt” and “disruptive” are ubiquitous in the names of conferences, websites, business school degree programs and business book best-sellers. The words pop up in more than 500 TED Talks: “How to Avoid Disruption in Business and in Life,” “Embracing Disruption,” “Disrupting Higher Education,” “Disrupt Yourself.” It transcends being a mere buzzword. As the philosopher Jeremy Bentham said two centuries ago, there is a point where jargon becomes a species of the sublime.

To give “disruptive” its due, it actually started its life with some meat on its bones. It was popularized in a 1997 book by Clayton Christensen of the Harvard Business School. According to Christensen, the reason why established companies fail isn’t that they don’t keep up with new technologies, but that their business models are disrupted by scrappy, bottom-fishing startups that turn out stripped-down versions of existing products at prices the established companies can’t afford to match. That’s what created an entry point for “disruptive innovations” like the Model T Ford, Craigslist classifieds, Skype and no-frills airlines.

Christensen makes a nice point. Sometimes you can get the world to beat a path to your door by building a crappier mousetrap, too, if you price it right. Some scholars have raised questions about that theory, but it isn’t the details of the story that have put “disruptive” on everybody’s lips; it’s the word itself. Buzzwords feed off their emotional resonances, not their ideas. And for pure resonance, “disruptive” is hard to beat. It’s a word with deep roots. I suspect I first encountered it when my parents read me the note that the teacher pinned to my sweater when I was sent home from kindergarten. Or maybe it reminds you of the unruly kid who was always pushing over the juice table. One way or another, the word evokes obstreperous rowdies, the impatient people who are always breaking stuff. It says something that “disrupt” is from the Latin for “shatter.”

Disrupt or be disrupted. The consultants and business book writers have proclaimed that as the chronic condition of the age, and everybody is clamoring to be classed among the disruptors rather than the disruptees. The lists of disruptive companies in the business media include not just Amazon and Uber but also Procter and Gamble and General Motors. What company nowadays wouldn’t claim to be making waves? It’s the same with that phrase “disruptive technologies.” That might be robotics or next-generation genomics, sure. But CNBC also touts the disruptive potential of an iPhone case that converts to a video game joystick.

These days, people just use “disruptive” to mean shaking things up, though unlike my kindergarten teacher, they always infuse a note of approval. As those Tech Crunch competitors assured us, disruption makes the world a better place. Taco Bell has created a position called “Resident Disruptor,” and not to be outdone, McDonald’s is running radio ads describing its milkshake blenders as a disruptive technology. Well, OK, blenders really do shake things up. But by the time a tech buzzword has been embraced by the fast food chains, it’s getting a little frayed at the edges. “Disruption” was never really a new idea in the first place, just a new name for a fact of life as old as capitalism. Seventy years ago the economist Joseph Schumpeter was calling it the “gales of creative destruction,” and he just took the idea from Karl Marx.

Read the entire story here.


A New Mobile App or Genomic Understanding?


Silicon Valley has been a tremendous incubator for some of our most recent inventions: the first integrated transistor chip, which led to Intel; the first true personal computer, which led to Apple. Yet this esteemed venture capital (VC) community now seems to need a dose of its own innovation medicine. Aren’t we all getting a little jaded by yet another “new, great mobile app” — valued in the tens of billions (but having no revenue model) — courtesy of a bright and young group of 20-somethings?

It is indeed gratifying to see innovators, young and old, rewarded for their creativity and perseverance. Yet we should be encouraging more of our pioneers to look beyond the next cool smartphone invention. Perhaps our technological and industrial luminaries and their retinues of futurists could do us all a favor by channeling more of their speculative funds toward longer-term and more significant endeavors: cost-effective desalination; cheaper medications; understanding and curing our insidious diseases; antibiotic replacements; more effective recycling; cleaner power; cheaper and stronger infrastructure; more effective education. These are all difficult problems. But therein lies the reward.

Clearly some pioneering businesses are investing in these areas. But isn’t it time we insisted that the majority of our private and public intellectual (and financial) capital be invested in truly meaningful ways? Here’s an example from Iceland: its national human genome project.

From ars technica:

An Icelandic genetics firm has sequenced the genomes of 2,636 of its countrymen and women, finding genetic markers for a variety of diseases, as well as a new timeline for the paternal ancestor of all humans.

Iceland is, in many ways, perfectly suited to being a genetic case study. It has a small population with limited genetic diversity, a result of the population descending from a small number of settlers—between 8,000 and 20,000—who arrived just 1,100 years ago. It also has an unusually well-documented genealogical history, with information sometimes stretching all the way back to the initial settlement of the country. Combined with excellent medical records, it’s a veritable treasure trove for genetic researchers.

The researchers at genetics firm deCODE compared the complete genomes of participants with historical and medical records, publishing their findings in a series of four papers in Nature Genetics last Wednesday. The wealth of data allowed them to track down genetic mutations that are related to a number of diseases, some of them rare. Although few diseases are caused by a single genetic mutation, a combination of mutations can increase the risk for certain diseases. Having access to a large genetic sample with corresponding medical data can help to pinpoint certain risk-increasing mutations.

Among their headline findings was the identification of the gene ABCA7 as a risk factor for Alzheimer’s disease. Although previous research had established that a gene in this region was involved in Alzheimer’s, this result delivers a new level of precision. The researchers replicated their results in further groups in Europe and the United States.

Also identified was a genetic mutation that causes early-onset atrial fibrillation, a heart condition causing an irregular and often very fast heart rate. It’s the most common cardiac arrhythmia condition, and it’s considered early-onset if it’s diagnosed before the age of 60. The researchers found eight Icelanders diagnosed with the condition, all carrying a mutation in the same gene, MYL4.

The studies also turned up a gene with an unusual pattern of inheritance. It causes increased levels of thyroid stimulation when it’s passed down from the mother, but decreased levels when inherited from the father.

Genetic research in mice often involves “knocking out” or switching off a particular gene to explore the effects. However, mouse genetics aren’t a perfect approximation of human genetics. Obviously, doing this in humans presents all sorts of ethical problems, but a population such as Iceland provides the perfect natural laboratory to explore how knockouts affect human health.

The data showed that eight percent of people in Iceland have the equivalent of a knockout, one gene that isn’t working. This provides an opportunity to look at the data in a different way: rather than only looking for people with a particular diagnosis and finding out what they have in common genetically, the researchers can look for people who have genetic knockouts, and then examine their medical records to see how their missing genes affect their health. It’s then possible to start piecing together the story of how certain genes affect physiology.
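This knockout-first direction of analysis can be sketched in a few lines: instead of starting from a diagnosis and looking for shared genetics, start from people whose genotype shows both copies of a gene non-functional, then pull their medical records. The participants, genes, and records below are entirely hypothetical.

```python
# Hypothetical genotype data: for each person, each gene maps to the state
# of its two inherited copies ("ok" or "loss" of function).
genotypes = {
    "P1": {"GENE_A": ("loss", "loss"), "GENE_B": ("ok", "loss")},
    "P2": {"GENE_A": ("ok", "ok"),     "GENE_B": ("loss", "loss")},
    "P3": {"GENE_A": ("ok", "loss"),   "GENE_B": ("ok", "ok")},
}
# Hypothetical medical records keyed by the same participant IDs.
records = {"P1": ["anemia"], "P2": ["none noted"], "P3": ["asthma"]}

def knockout_carriers(gene):
    """People in whom both copies of `gene` are non-functional."""
    return [person for person, genes in genotypes.items()
            if genes.get(gene) == ("loss", "loss")]

# Reverse lookup: find the human "knockouts" first, then inspect their records.
for gene in ("GENE_A", "GENE_B"):
    for person in knockout_carriers(gene):
        print(gene, person, records[person])
```

The point of the sketch is the direction of the join: genotype first, phenotype second, which is only feasible when genomes and medical records cover the same well-defined population.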

Finally, the researchers used the data to explore human history, using Y chromosome data from 753 Icelandic males. Based on knowledge about mutation rates, Y chromosomes can be used to trace the male lineage of human groups, establishing dates of events like migrations. This technique has also been used to work out when the common ancestor of all humans was alive. The maternal ancestor, known as “Mitochondrial Eve,” is thought to have lived 170,000 to 180,000 years ago, while the paternal ancestor had previously been estimated to have lived around 338,000 years ago.

The Icelandic data allowed the researchers to calculate what they suggest is a more accurate mutation rate, placing the father of all humans at around 239,000 years ago. This is the estimate with the greatest likelihood, but the full range falls between 174,000 and 321,000 years ago. This estimate places the paternal ancestor closer in time to the maternal ancestor.
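The arithmetic behind such a date is, at its core, simple: two lineages that split T years ago each accumulate mutations independently, so the expected number of differences between them is D = 2rT, giving T = D / (2r) for per-lineage mutation rate r. A sketch with invented numbers (not deCODE’s actual figures):

```python
# Back-of-envelope Y-chromosome dating. Two lineages diverged T years ago
# each accumulate mutations at rate r per year, so differences D = 2*r*T.
def tmrca_years(differences, mutations_per_year):
    """Estimate time to most recent common ancestor from pairwise differences."""
    return differences / (2 * mutations_per_year)

# Hypothetical inputs: 430 observed differences, 9e-4 mutations/year/lineage.
print(round(tmrca_years(430, 9e-4)))  # about 239,000 years
```

This is why the estimate is so sensitive to the assumed mutation rate: halving r doubles the inferred age, which is roughly the spread between the earlier 338,000-year figure and the new 239,000-year one.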

Read the entire story here.

Image: Gígjökull, an outlet glacier extending from Eyjafjallajökull, Iceland. Courtesy of Andreas Tille / Wikipedia.


Cross-Connection Requires a Certain Daring

A previously unpublished essay by Isaac Asimov on the creative process shows us his well-reasoned thinking on the subject. While he believed that deriving new ideas could be done productively in a group, he seemed to gravitate toward the notion of the lone creative genius. Both approaches, however, require the innovator(s) to cross-connect thoughts, often from disparate sources.

From Technology Review:

How do people get new ideas?

Presumably, the process of creativity, whatever it is, is essentially the same in all its branches and varieties, so that the evolution of a new art form, a new gadget, a new scientific principle, all involve common factors. We are most interested in the “creation” of a new scientific principle or a new application of an old one, but we can be general here.

One way of investigating the problem is to consider the great ideas of the past and see just how they were generated. Unfortunately, the method of generation is never clear even to the “generators” themselves.

But what if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.

There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”

Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).

Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.

Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.

That is the crucial point that is the rare characteristic that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”

But why didn’t he think of it? The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table. Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”

It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.

A person willing to fly in the face of reason, authority, and common sense must be a person of considerable self-assurance. Since he occurs only rarely, he must seem eccentric (in at least that respect) to the rest of us. A person eccentric in one respect is often eccentric in others.

Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)

Once you have the people you want, the next question is: Do you want to bring them together so that they may discuss the problem mutually, or should you inform each of the problem and allow them to work in isolation?

My feeling is that as far as creativity is concerned, isolation is required. The creative person is, in any case, continually working at it. His mind is shuffling his information at all times, even when he is not conscious of it. (The famous example of Kekule working out the structure of benzene in his sleep is well-known.)

The presence of others can only inhibit this process, since creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.

Nevertheless, a meeting of such people may be desirable for reasons other than the act of creation itself.

Read the entire article here.


Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom suggests that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators, on the other hand, follow a different mantra: they invent everywhere, in hallways and cubes.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open plan, novelty strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).

Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifescience, and Elon Musk’s companies, Tesla Motors and Space X, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.


Under the Covers at Uber


A mere four years ago Uber was being used mostly by Silicon Valley engineers to reserve local limo rides. Now the Uber app is in the hands of millions of people, being used to book car transportation across sixty cities on six continents. Google recently invested $258 million in the company, giving Uber a value of around $3.5 billion. Those who have used the service — drivers and passengers alike — swear by it; the service is convenient and the app is simple and engaging. But that alone doesn’t seem to justify the enormous valuation. So, what’s going on?

From Wired:

When Uber cofounder and CEO Travis Kalanick was in sixth grade, he learned to code on a Commodore 64. His favorite things to program were videogames. But in the mid-’80s, getting the machine to do what he wanted still felt a lot like manual labor. “Back then you would have to do the graphics pixel by pixel,” Kalanick says. “But it was cool because you were like, oh my God, it’s moving across the screen! My monster is moving across the screen!” These days, Kalanick, 37, has lost none of his fascination with watching pixels on the move.

In Uber’s San Francisco headquarters, a software tool called God View shows all the vehicles on the Uber system moving at once. On a laptop web browser, tiny cars on a map show every Uber driver currently on the city’s streets. Tiny eyeballs on the same map show the location of every customer currently looking at the Uber app on their smartphone. In a way, the company anointed by Silicon Valley’s elite as the best hope for transforming global transportation couldn’t have a simpler task: It just has to bring those cars and those eyeballs together — the faster and cheaper, the better.
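The core matching problem that God View visualizes, pairing each rider with a nearby driver, can be sketched as a greedy nearest-driver search over great-circle distances. Real dispatch surely weighs ETAs, traffic, and much more; the coordinates and driver IDs below are made up.

```python
import math

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Hypothetical positions: two free drivers and one rider in San Francisco.
drivers = {"d1": (37.7749, -122.4194), "d2": (37.7849, -122.4094)}
rider = (37.7760, -122.4180)

# Greedy match: send the rider the closest available driver.
nearest = min(drivers, key=lambda d: haversine_km(drivers[d], rider))
print(nearest)  # d1 is closer to this rider
```

The “faster and cheaper, the better” framing in the article is exactly this objective, minimized continuously across every city on the map.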

“Uber should feel magical to the customer,” Kalanick says one morning in November. “They just push the button and the car comes. But there’s a lot going on under the hood to make that happen.”

A little less than four years ago, when Uber was barely more than a private luxury car service for Silicon Valley’s elite techies, Kalanick sat watching the cars crisscrossing San Francisco on God View and had a Matrix-y moment when he “started seeing the math.” He was going to make the monster move — not just across the screen but across cities around the globe. Since then, Uber has expanded to some 60 cities on six continents and grown to at least 400 employees. Millions of people have used Uber to get a ride, and revenue has increased at a rate of nearly 20 percent every month over the past year.

The company’s speedy ascent has taken place in parallel with a surge of interest in the so-called sharing economy — using technology to connect consumers with goods and services that would otherwise go unused. Kalanick had the vision to see potential profit in the empty seats of limos and taxis sitting idle as drivers wait for customers to call.

But Kalanick doesn’t put on the airs of a visionary. In business he’s a brawler. Reaching Uber’s goals has meant digging in against the established bureaucracy in many cities, where giving rides for money is heavily regulated. Uber has won enough of those fights to threaten the market share of the entrenched players. It not only offers a more efficient way to hail a ride but gives drivers a whole new way to see where demand is bubbling up. In the process, Uber seems capable of opening up sections of cities that taxis and car services never bothered with before.

In an Uber-fied future, fewer people own cars, but everybody has access to them.

In San Francisco, Uber has become its own noun — you “get an Uber.” But to make it a verb — to get to the point where everyone Ubers the same way they Google — the company must outperform on transportation the same way Google does on search.

No less than Google itself believes Uber has this potential. In a massive funding round in August led by the search giant’s venture capital arm, Uber received $258 million. The investment reportedly valued Uber at around $3.5 billion and pushed the company to the forefront of speculation about the next big tech IPO — and Kalanick as the next great tech leader.

The deal set Silicon Valley buzzing about what else Uber could become. A delivery service powered by Google’s self-driving cars? The new on-the-ground army for ferrying all things Amazon? Jeff Bezos also is an Uber investor, and Kalanick cites him as an entrepreneurial inspiration. “Amazon was just books and then some CDs,” Kalanick says. “And then they’re like, you know what, let’s do frickin’ ladders!” Then came the Kindle and Amazon Web Services — examples, Kalanick says, of how an entrepreneur’s “creative pragmatism” can defy expectations. He clearly enjoys daring the world to think of Uber as merely another way to get a ride.

“We feel like we’re still realizing what the potential is,” he says. “We don’t know yet where that stops.”

From the back of an Uber-summoned Mercedes GL450 SUV, Kalanick banters with the driver about which make and model will replace the discontinued Lincoln Town Car as the default limo of choice.

Mercedes S-Class? Too expensive, Kalanick says. Cadillac XTS? Too small.

So what is it?

“OK, I’m glad you asked,” Kalanick says. “This is going to blow you away, dude. Are you ready? Have you seen the 2013 Ford Explorer?” Spacious, like a Lexus crossover, but way cheaper.

As Uber becomes a dominant presence in urban transportation, it’s easy to imagine the company playing a role in making this prophecy self-fulfilling. It’s just one more sign of how far Uber has come since Kalanick helped create the company in 2009. In the beginning, it was just a way for him and his cofounder, StumbleUpon creator Garrett Camp, and their friends to get around in style.

They could certainly afford it. At age 21, Kalanick, born and raised in Los Angeles, had started a Napster-like peer-to-peer file-sharing search engine called Scour that got him sued for a quarter-trillion dollars by major media companies. Scour filed for bankruptcy, but Kalanick cofounded Red Swoosh to serve digital media over the Internet for the same companies that had sued him. Akamai bought the company in 2007 in a stock deal worth $19 million.

By the time he reached his thirties, Kalanick was a seasoned veteran in the startup trenches. But part of him wondered if he still had the drive to build another company. His breakthrough came when he was watching, of all things, a Woody Allen movie. The film was Vicky Cristina Barcelona, which Allen made in 2008, when he was in his seventies. “I’m like, that dude is old! And he is still bringing it! He’s still making really beautiful art. And I’m like, all right, I’ve got a chance, man. I can do it too.”

Kalanick charged into Uber and quickly collided with the muscular resistance of the taxi and limo industry. It wasn’t long before San Francisco’s transportation agency sent the company a cease-and-desist letter, calling Uber an unlicensed taxi service. Kalanick and Uber did neither, arguing vehemently that the company merely made the software that connected drivers and riders. The company kept offering rides and building its stature among tech types—a constituency city politicians have been loath to alienate—as the cool way to get around.

Uber has since faced the wrath of government and industry in other cities, notably New York, Chicago, Boston, and Washington, DC.

One councilmember opposed to Uber in the nation’s capital was self-described friend of the taxi industry Marion Barry (yes, that Marion Barry). Kalanick, in DC to lobby on Uber’s behalf, told The Washington Post he had an offer for the former mayor: “I will personally chauffeur him myself in his silver Jaguar to work every day of the week, if he can just make this happen.” Though that ride never happened, the council ultimately passed a legal framework that Uber called “an innovative model for city transportation legislation across the country.”

Though Kalanick clearly relishes a fight, he lights up more when talking about Uber as an engineering problem. To fulfill its promise—a ride within five minutes of the tap of a smartphone button—Uber must constantly optimize the algorithms that govern, among other things, how many of its cars are on the road, where they go, and how much a ride costs. While Uber offers standard local rates for its various options, times of peak demand send prices up, which Uber calls surge pricing. Some critics call it price-gouging, but Kalanick says the economics are far less insidious. To meet increased demand, drivers need extra incentive to get out on the road. Since they aren’t employees, the marketplace has to motivate them. “Most things are dynamically priced,” Kalanick points out, from airline tickets to happy hour cocktails.
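The supply-and-demand logic Kalanick describes can be sketched in a few lines. This is an illustrative toy model only; the function names, the linear ratio formula, and the cap are all assumptions, not Uber’s actual pricing algorithm:

```python
# Toy sketch of demand-responsive ("surge") pricing as described above:
# when ride requests outstrip available drivers, a multiplier raises the
# fare to draw more drivers onto the road. Purely illustrative.

def surge_multiplier(requests: int, available_drivers: int,
                     cap: float = 3.0) -> float:
    """Return a fare multiplier >= 1.0 based on the demand/supply ratio."""
    if available_drivers <= 0:
        return cap  # no supply at all: apply the maximum multiplier
    ratio = requests / available_drivers
    # No surge while supply covers demand; scale linearly beyond that.
    return min(cap, max(1.0, ratio))

def fare(base_fare: float, requests: int, available_drivers: int) -> float:
    return round(base_fare * surge_multiplier(requests, available_drivers), 2)

# Quiet evening: 40 requests, 50 drivers -> no surge.
print(fare(10.0, 40, 50))   # 10.0
# Concert lets out: 150 requests, 50 drivers -> 3x demand, capped at 3.0.
print(fare(10.0, 150, 50))  # 30.0
```

The point of the marketplace framing is visible in the second call: the higher fare is the signal that (in theory) pulls more drivers onto the road, which in turn pushes the multiplier back down.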

Kalanick employs a data-science team of PhDs from fields like nuclear physics, astrophysics, and computational biology to grapple with the number of variables involved in keeping Uber reliable. They stay busy perfecting algorithms that are dependable and flexible enough to be ported to hundreds of cities worldwide. When we met, Uber had just gone live in Bogotá, Colombia, as well as Shanghai, Dubai, and Bangalore.

And it’s no longer just black cars and yellow cabs. A newer option, UberX, offers lower-priced rides from drivers piloting their personal vehicles. According to Uber, only certain late-model cars are allowed, and drivers undergo the same background screening as others in the service. In an Uber-fied version of the future, far fewer people may own cars but everybody would have access to them. “You know, I hadn’t driven for a year, and then I drove over the weekend,” Kalanick says. “I had to jump-start my car to get going. It was a little awkward. So I think that’s a sign.”

Back at Uber headquarters, burly drivers crowd the lobby while nearby, coders sit elbow to elbow. Like other San Francisco startups on the cusp of something bigger, Uber is preparing to move to a larger space. Its new digs will be in the same building as Square, the mobile payments company led by Twitter mastermind Jack Dorsey. Twitter’s offices are across the street. The symbolism is hard to miss: Uber is joining the coterie of companies that define San Francisco’s latest tech boom.

Still, part of that image depends on Uber’s outsize potential to expand what it does. The logistical numbers it crunches to make it easier for people to get around would seem a natural fit for a transition into a delivery service. Uber coyly fuels that perception with publicity stunts like ferrying ice cream and barbecue to customers through its app. It’s easy to imagine such promotions quietly doubling as proofs of concept. News of Google’s massive investment prompted visions of a push-button delivery service powered by Google’s self-driving cars.

Kalanick acknowledges that the most recent round of investment is intended to fund Uber’s growth, but that’s as far as he’ll go. “In a lot of ways, it’s not the money that allows you to do new things. It’s the growth and the ability to find things that people want and to use your creativity to target those,” he says. “There are a whole hell of a lot of other things that we can do and intend on doing.”

But the calculus of delivery may not even be the hardest part. If Uber were to expand into delivery, its competition—for now other ride-sharing startups such as Lyft, Sidecar, and Hailo—would include Amazon, eBay, and Walmart too.

One way for Uber to skirt rivalry with such giants is to offer itself as the back-end technology that can power same-day online retail. In early fall, Google launched its Shopping Express service in San Francisco. The program lets customers shop online at local stores through a Google-powered app; Google sends a courier with their deliveries the same day.

David Krane, the Google Ventures partner who led the investment deal, says there’s nothing happening between Uber and Shopping Express. He also says self-driving delivery vehicles are nowhere near ready to be looked at seriously as part of Uber. “Those meetings will happen when the technology is ready for such discussion,” he says. “That is many moons away.”

Read the entire article here.

Image courtesy of Uber.




Famed architect Norman Foster has a brilliant and restless mind, so he is not content to stop imagining, even with some of the world’s most innovative and recognizable architectural designs to his credit — 30 St. Mary Axe (London’s “Gherkin” skyscraper), Hearst Tower, and the Millau Viaduct.

Foster is also an avid cyclist, which has led him to re-imagine the lowly bicycle lane as a loftier construct: SkyCycle, a network of some 220 km of raised bicycle lanes suspended above London, running mostly above railway lines. What a gorgeous idea.

From the Guardian:

Gliding through the air on a bike might so far be confined to the fantasy realms of singing nannies and aliens in baskets, but riding over rooftops could one day form part of your regular commute to work, if Norman Foster has his way.

Unveiled this week, in an appropriately light-headed vision for the holiday season, SkyCycle proposes a network of elevated bike paths hoisted aloft above railway lines, allowing you to zip through town blissfully liberated from the roads.

The project, which has the backing of Network Rail and Transport for London, would see over 220km of car-free routes installed above London’s suburban rail network, suspended on pylons above the tracks and accessed at over 200 entrance points. At up to 15 metres wide, each of the ten routes would accommodate 12,000 cyclists per hour and improve journey times by up to 29 minutes, according to the designers.

Lord Foster, who says that cycling is one of his great passions, describes the plan as “a lateral approach to finding space in a congested city.”

“By using the corridors above the suburban railways,” he said, “we could create a world-class network of safe, car-free cycle routes that are ideally located for commuters.”

Developed by landscape practice Exterior Architecture, with Foster and Partners and Space Syntax, the proposed network would cover a catchment area of six million people, half of whom live and work within 10 minutes of an entrance. But its ambitions stretch beyond London alone.

“The dream is that you could wake up in Paris and cycle to the Gare du Nord,” says Sam Martin of Exterior Architecture. “Then get the train to Stratford, and cycle straight into central London in minutes, without worrying about trucks and buses.”

Developed over the last two years, the initial idea came from the student project of one of Martin’s employees, Oli Clark, who proposed a network of elevated cycle routes weaving in and around Battersea power station. “It was a hobby in the office for a while,” says Martin. “Then we arranged a meeting at City Hall with the deputy mayor of transport – and bumped into Boris in the lift.”

Bumping into Boris has been the fateful beginning for some of the mayor’s other adventures in novelty infrastructure, including Anish Kapoor’s Orbit tower, apparently forged in a chance meeting with Lakshmi Mittal in the cloakrooms at Davos. Other encounters have resulted in cycle “superhighways” (which many blame for the recent increase in accidents) and a £60 million cable car that doesn’t really go anywhere. But could SkyCycle be different?

“It’s about having an eye on the future,” says Martin. “If London keeps growing and spreading itself out, with people forced to commute increasingly longer distances, then in 20 years it’s just going to be a ghetto for people in suits. After rail fare increases this week, a greater percentage of people’s income is being taken up with transport. There has to be another way to allow everyone access to the centre, and stop this doughnut effect.”

After meeting with Network Rail last year, the design team has focused on a 6.5km trial route from Stratford to Liverpool Street Station, following the path of the overground line, a stretch they estimate would cost around £220 million. Working with Roger Ridsdill-Smith, Foster’s head of structural engineering, responsible for the Millennium Bridge, they have developed what Martin describes as “a system akin to a tunnel-boring machine, but happening above ground”.

“It’s no different to the electrification of the lines west of Paddington,” he says. “It would involve a series of pylons installed along the outside edge of the tracks, from which a deck would project out. Trains could still run while the cycle decks were being installed.”

As for access, the proposal would see the installation of vertical hydraulic platforms next to existing railway stations, as well as ramps that took advantage of the raised topography around viaducts and cuttings. “It wouldn’t be completely seamless in terms of the cycling experience,” Martin admits. “But it could be a place for Boris Bike docking stations, to avoid people having to get their own equipment up there.” He says the structure could also be a source of energy creation, supporting solar panels and rain water collection.

The rail network has long been seen as a key to opening up cycle networks, given the amount of available land alongside rail lines, but no proposal has yet suggested launching cyclists into the air.

Read the entire article here.

Image: How the proposed SkyCycle tracks could look. Courtesy of Foster and Partners / Guardian.


Content Versus Innovation

The entertainment and media industry is not known for its innovation. Left to its own devices, we would all be consuming news from broadsheets and a town crier, and digesting shows at the theater. Not too long ago the industry, led by Hollywood heavyweights, was doing its utmost to kill emerging forms of media consumption, such as the video tape cassette and the VCR.

Following numerous regulatory, legal and political skirmishes, innovation finally triumphed over entrenched interests, allowing VHS tape, followed by the DVD, to flourish, albeit for a while. This of course paved the way for new forms of distribution — the rise of Blockbuster and a myriad of neighborhood video rental stores.

In a great ironic twist, the likes of Blockbuster failed to recognize market signals that, without significant and continual innovation, their own business models would crumble. Now Netflix and other streaming services have ended our weekend visits to the movie rental store.

A fascinating article excerpted below takes a look back at the lengthy, and continuing, fight between the conservative media empires and the market’s constant pull from technological innovation.

[For a fresh perspective on the future of media distribution, see our recent posting here.]

From TechCrunch:

The once iconic video rental giant Blockbuster is shutting down its remaining stores across the country. Netflix, meanwhile, is emerging as the leader in video rental, now primarily through online streaming. But Blockbuster, Netflix and home media consumption (VCR/DVD/Blu-ray) may never have existed at all in their current form if the content industry had been successful in banning or regulating them. In 1983, nearly 30 years before thousands of websites blacked out in protest of SOPA/PIPA, video stores across the country closed in protest against legislation that would bar their market model.

A Look Back

In 1977, the first video-rental store opened. It was 600 square feet and located on Wilshire Boulevard in Los Angeles. George Atkinson, the entrepreneur who decided to launch this idea, charged $50 for an “annual membership” and $100 for a “lifetime membership” but the memberships only allowed people to rent videos for $10 a day. Despite an unusual business model, Atkinson’s store was an enormous success, growing to 42 affiliated stores in fewer than 20 months and resulting in numerous competitors.

In retrospect, Atkinson’s success represented the emergence of an entirely new market: home consumption of paid content. It would become an $18 billion domestic market, and, rather than cannibalize the existing movie theater market, it would eclipse it and thereby become a massive revenue source for the industry.

Atkinson’s success in 1977 is particularly remarkable as the Sony Betamax (the first VCR) had only gone on sale domestically in 1975 at a cost of $1,400 (which in 2013 U.S. dollars is $6,093). As a comparison, the first DVD player in 1997 cost $1,458 in 2013 dollars and the first Blu-ray player in 2006 cost $1,161 in 2013 dollars. And unlike the DVD and Blu-ray player, it would take eight years, until 1983, for the VCR to reach 10 percent of U.S. television households. Atkinson’s success, and that of his early competitors, was in catering to a market of well under 10 percent of U.S. households.
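The “2013 dollars” figures above are standard inflation adjustments: multiply the nominal price by the ratio of the consumer price index then and now. A quick sketch, using rounded U.S. annual-average CPI-U values (approximations I have supplied, so the results land near, not exactly on, the article’s numbers):

```python
# CPI-based inflation adjustment: price_then * (CPI_2013 / CPI_then).
# The CPI values below are rounded U.S. annual averages (CPI-U).
CPI = {1975: 53.8, 1997: 160.5, 2006: 201.6, 2013: 233.0}

def in_2013_dollars(price: float, year: int) -> int:
    """Convert a nominal price from `year` into approximate 2013 dollars."""
    return round(price * CPI[2013] / CPI[year])

print(in_2013_dollars(1400, 1975))  # ~6063, close to the article's $6,093
```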

While many content companies realized this as a massive new revenue stream — e.g. 20th Century Fox buying one video rental company for $7.5 million in 1979 — the content industry lawyers and lobbyists tried to stop the home content market through litigation and regulation.

The content industry sued to ban the sale of the Betamax, the first VCR. This legal strategy was coupled with leveraging the overwhelming firepower of the content industry in Washington. If they lost in court to ban the technology and rental business model, then they would ban the technology and rental business model in Congress.

Litigation Attack

In 1976, the content industry filed suit against Sony, seeking an injunction to prevent the company from “manufacturing, distributing, selling or offering for sale Betamax or Betamax tapes.” Essentially granting this remedy would have banned the VCR for all Americans. The content industry’s motivation behind this suit was largely to deal with individuals recording live television, but the emergence of the rental industry was likely a contributing factor.

While Sony won at the district court level in 1979, in 1981 it lost at the Court of Appeals for the Ninth Circuit, where the court found Sony liable for copyright infringement committed by its users — recording broadcast television. The appellate court ordered the lower court to impose an appropriate remedy, advising in favor of an injunction to block the sale of the Betamax.

And in 1981, under normal circumstances, the VCR would have been banned then and there. Sony faced liability well beyond its net worth, so it may well have been the end of Sony, or at least its U.S. subsidiary, and the end of the VCR. Millions of private citizens could have been liable for damages for copyright infringement for recording television shows for personal use. But Sony appealed this ruling to the Supreme Court.

The Supreme Court is able to take very few cases. For example in 2009, 1.1 percent of petitions for certiorari were granted, and of these approximately 70 percent are cases where there is a conflict among different courts (here there was no conflict). But in 1982, the Supreme Court granted certiorari and agreed to hear the case.

After an oral hearing, the justices took an internal vote, and originally only one of them was persuaded that the VCR should remain legal (though after discussion, the number of justices in favor of the VCR would eventually increase to four).

With five votes in favor of affirming the previous ruling the Betamax (VCR) was to be illegal in the United States (see Justice Blackmun’s papers).

But then, something even more unusual happened – which is why we have the VCR and subsequent technologies: the Supreme Court ordered both sides to re-argue a portion of the case. Under the Burger Court (when he was Chief Justice), this only happened in 2.6 percent of the cases that received oral argument. In the re-argument of the case, a crucial vote switched sides, which resulted in a 5-4 decision in favor of Sony. The VCR was legal. There would be no injunction barring its sale.

The majority opinion characterized the lawsuit as an “unprecedented attempt to impose copyright liability upon the distributors of copying equipment” and rejected “[s]uch an expansion of the copyright privilege” as “beyond the limits” given by Congress. The Court even cited Mr. Rogers, who testified during the trial:

I have always felt that with the advent of all of this new technology that allows people to tape the ‘Neighborhood’ off-the-air . . . Very frankly, I am opposed to people being programmed by others.

On the absolute narrowest of legal grounds, through a highly unusual legal process (and significant luck), the VCR was saved by one vote at the Supreme Court in 1984.

Regulation Attack

In 1982 legislation was introduced in Congress to give copyright holders the exclusive right to authorize the rental of prerecorded videos. Legislation was reintroduced in 1983, the Consumer Video Sales Rental Act of 1983. This legislation would have allowed the content industry to shut down the rental market, or charge exorbitant fees, by making it a crime to rent out movies purchased commercially. In effect, this legislation would have ended the existing market model of rental stores. With 34 co-sponsors, major lobbyists and significant campaign contributions to support it, this legislation had substantial support at the time.

Video stores saw the Consumer Video Sales Rental Act as an existential threat, and on October 21, 1983, about 30 years before the SOPA/PIPA protests, video stores across the country closed down for several hours in protest. While the 1983 legislation died in committee, the legislation would be reintroduced in 1984. In 1984, similar legislation was enacted, The Record Rental Amendment of 1984, which banned the renting and leasing of music. In 1990, Congress banned the renting of computer software.

But in the face of public backlash from video retailers and customers, Congress did not pass the Consumer Video Sales Rental Act.

At the same time, the movie studios tried to ban the Betamax VCR through legislation. Eventually the content industry decided to support legislation that would require compulsory licensing rather than an outright ban. But such a compulsory licensing scheme would have drastically driven up the costs of video tape players and may have effectively banned the technology (similar regulations did ban other technologies).

For the content industry, banning the technology was a feature, not a bug.

Read the entire article here.

Image: Video Home System (VHS) cassette tape. Courtesy of Wikipedia.


Let the Sunshine In

An ingeniously simple and elegant idea brings sunshine to a small town in Norway.

From the Guardian:

On the market square in Rjukan stands a statue of the town’s founder, a noted Norwegian engineer and industrialist called Sam Eyde, sporting a particularly fine moustache. One hand thrust in trouser pocket, the other grasping a tightly rolled drawing, the great man stares northwards across the square at an almost sheer mountainside in front of him.

Behind him, to the south, rises the equally sheer 1,800-metre peak known as Gaustatoppen. Between the mountains, strung out along the narrow Vestfjord valley, lies the small but once mighty town that Eyde built in the early years of the last century, to house the workers for his factories.

He was plainly a smart guy, Eyde. He harnessed the power of the 100-metre Rjukanfossen waterfall to generate hydro-electricity in what was, at the time, the world’s biggest power plant. He pioneered new technologies – one of which bears his name – to produce saltpetre by oxidising nitrogen from air, and made industrial quantities of hydrogen by water electrolysis.

But there was one thing he couldn’t do: change the elevation of the sun. Deep in its east-west valley, surrounded by high mountains, Rjukan and its 3,400 inhabitants are in shadow for half the year. During the day, from late September to mid-March, the town, three hours’ north-west of Oslo, is not dark (well, it is almost, in December and January, but then so is most of Norway), but it’s certainly not bright either. A bit … flat. A bit subdued, a bit muted, a bit mono.

Since last week, however, Eyde’s statue has gazed out upon a sight that even the eminent engineer might have found startling. High on the mountain opposite, 450 metres above the town, three large, solar-powered, computer-controlled mirrors steadily track the movement of the sun across the sky, reflecting its rays down on to the square and bathing it in bright sunlight. Rjukan – or at least, a small but vital part of Rjukan – is no longer stuck where the sun don’t shine.

“It’s the sun!” grins Ingrid Sparbo, disbelievingly, lifting her face to the light and closing her eyes against the glare. A retired secretary, Sparbo has lived all her life in Rjukan and says people “do sort of get used to the shade. You end up not thinking about it, really. But this … This is so warming. Not just physically, but mentally. It’s mentally warming.”

Two young mothers wheel their children into the square, turn, and briefly bask: a quick hit. On a freezing day, an elderly couple sit wide-eyed on one of the half-dozen newly installed benches, smiling at the warmth on their faces. Children beam. Lots of people take photographs. A shop assistant, Silje Johansen, says it’s “awesome. Just awesome.”

Pushing his child’s buggy, electrical engineer Eivind Toreid is more cautious. “It’s a funny thing,” he says. “Not real sunlight, but very like it. Like a spotlight. I’ll go if I’m free and in town, yes. Especially in autumn and in the weeks before the sun comes back. Those are the worst: you look just a short way up the mountainside and the sun is right there, so close you can almost touch it. But not here.”

Pensioners Valborg and Eigil Lima have driven from Stavanger – five long hours on the road – specially to see it. Heidi Fieldheim, who lives in Oslo now but spent six years in Rjukan with her husband, a local man, says she heard all about it on the radio. “But it’s far more than I expected,” she says. “This will bring much happiness.”

Across the road in the Nyetider cafe, sporting – by happy coincidence – a particularly fine set of mutton chops, sits the man responsible for this unexpected access to happiness. Martin Andersen is a 40-year-old artist and lifeguard at the municipal baths who, after spells in Berlin, Paris, Mali and Oslo, pitched up in Rjukan in the summer of 2001.

The first inkling of an artwork Andersen dubbed the Solspeil, or sun mirror, came to him as the month of September began to fade: “Every day, we would take our young child for a walk in the buggy,” he says, “and every day I realised we were having to go a little further down the valley to find the sun.” By 28 September, Andersen realised, the sun completely disappears from Rjukan’s market square. The occasion of its annual reappearance, lighting up the bridge across the river by the old fire station, is a date indelibly engraved in the minds of all Rjukan residents: 12 March.

And throughout the seemingly endless intervening months, Andersen says: “We’d look up and see blue sky above, and the sun high on the mountain slopes, but the only way we could get to it was to go out of town. The brighter the day, the darker it was down here. And it’s sad, a town that people have to leave in order to feel the sun.”

A hundred years ago, Eyde had already grasped the gravity of the problem. Researching his own plan, Andersen discovered that, as early as 1913, Eyde was considering a suggestion by one of his factory workers for a system of mountain-top mirrors to redirect sunlight into the valley below.

The industrialist eventually abandoned the plan for want of adequate technology, but soon afterwards his company, Norsk Hydro, paid for the construction of a cable car to carry the long-suffering townsfolk, for a modest sum, nearly 500m higher up the mountain and into the sunlight. (Built in 1928, the Krossobanen is still running, incidentally; £10 for the return trip. The view is majestic and the coffee at the top excellent. A brass plaque in the ticket office declares the facility a gift from the company “to the people of Rjukan, because for six months of the year, the sun does not shine in the bottom of the valley”.)

Andersen unearthed a partially covered sports stadium in Arizona that was successfully using small mirrors to keep its grass growing. He learned that in the Middle East and other sun-baked regions of the world, vast banks of hi-tech tracking mirrors called heliostats concentrate sufficient reflected sunlight to heat steam turbines and drive whole power plants.

He persuaded the town hall to come up with the cash to allow him to develop his project further. He contacted an expert in the field, Jonny Nersveen, who did the maths and told him it could probably work. He visited Viganella, an Italian village that installed a similar sun mirror in 2006.

And 12 years after he first dreamed of his Solspeil, a German company specialising in so-called CSP – concentrated solar power – helicoptered in the three 17 sq m glass mirrors that now stand high above the market square in Rjukan. “It took,” he says, “a bit longer than we’d imagined.” First, the municipality wasn’t used to dealing with this kind of project: “There’s no rubber stamp for a sun mirror.” But Andersen also wanted to be sure it was right – that Rjukan’s sun mirror would do what it was intended to do.

Viganella’s single polished steel mirror, he says, lights a much larger area, but with a far weaker, more diffuse light. “I wanted a smaller, concentrated patch of sunlight: a special sunlit spot in the middle of town where people could come for a quick five minutes in the sun.” The result, you would have to say, is pretty much exactly that: bordered on one side by the library and town hall, and on the other by the tourist office, the 600 sq m of Rjukan’s market square, to be comprehensively remodelled next year in celebration, now bathes in a focused beam of bright sunlight fully 80-90% as intense as the original.

Their efforts monitored by webcams up on the mountain and down in the square, their movement dictated by computer in a Bavarian town outside Munich, the heliostats generate the solar power they need to gradually tilt and rotate, following the sun on its brief winter dash across the sky.
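The geometry those computers solve is simple to state: by the law of reflection, a heliostat’s mirror must face along the bisector of the direction to the sun and the direction to the target. A minimal sketch of that calculation, a generic illustration and not the Rjukan installation’s actual control software:

```python
# A heliostat keeps reflected sunlight on a fixed target as the sun moves.
# The mirror's normal must bisect the angle between the unit vector toward
# the sun and the unit vector toward the target. Illustrative sketch only.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def mirror_normal(to_sun, to_target):
    """Unit normal the mirror must face: the bisector of the two directions."""
    s, t = normalize(to_sun), normalize(to_target)
    return normalize(tuple(a + b for a, b in zip(s, t)))

# Sun low over the ridge, target (the market square) down in the valley.
n = mirror_normal(to_sun=(1.0, 0.0, 0.5), to_target=(0.0, 0.0, -1.0))
```

Recomputing this normal every few seconds as the sun’s position changes is all the “tracking” amounts to; the hard part in practice is aiming a large mirror that precisely from 450 metres away.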

It really works. Even the objectors – and there were, in town, plenty of them; petitions and letter-writing campaigns and a Facebook page organised against what a large number of locals saw initially as a vanity project and, above all, a criminal waste of money – now seem largely won over.

Read the entire article here.

Image: Light reflected by the mirrors of Rjukan, Norway. Courtesy of David Levene / Guardian.


Six Rules to Super-Charge Your Creativity

Creative minds are, by their very nature, all different. Yet on closer examination, some key elements and common routines underlie the working lives of many great, innovative thinkers. First and foremost, of course: be an early bird.

From the Guardian:

One morning this summer, I got up at first light – I’d left the blinds open the night before – then drank a strong cup of coffee, sat near-naked by an open window for an hour, worked all morning, then had a martini with lunch. I took a long afternoon walk, and for the rest of the week experimented with never working for more than three hours at a stretch.

This was all in an effort to adopt the rituals of some great artists and thinkers: the rising-at-dawn bit came from Ernest Hemingway, who was up at around 5.30am, even if he’d been drinking the night before; the strong coffee was borrowed from Beethoven, who personally counted out the 60 beans his morning cup required. Benjamin Franklin swore by “air baths”, which was his term for sitting around naked in the morning, whatever the weather. And the midday cocktail was a favourite of VS Pritchett (among many others). I couldn’t try every trick I discovered in a new book, Daily Rituals: How Great Minds Make Time, Find Inspiration And Get To Work; oddly, my girlfriend was unwilling to play the role of Freud’s wife, who put toothpaste on his toothbrush each day to save him time. Still, I learned a lot. For example: did you know that lunchtime martinis aren’t conducive to productivity?

As a writer working from home, of course, I have an unusual degree of control over my schedule – not everyone could run such an experiment. But for anyone who thinks of their work as creative, or who pursues creative projects in their spare time, reading about the habits of the successful can be addictive. Partly, that’s because it’s comforting to learn that even Franz Kafka struggled with the demands of his day job, or that Franklin was chronically disorganised. But it’s also because of a covert thought that sounds delusionally arrogant if expressed out loud: just maybe, if I took very hot baths like Flaubert, or amphetamines like Auden, I might inch closer to their genius.

Several weeks later, I’m no longer taking “air baths”, while the lunchtime martini didn’t last more than a day (I mean, come on). But I’m still rising early and, when time allows, taking long walks. Two big insights have emerged. One is how ill-suited the nine-to-five routine is to most desk-based jobs involving mental focus; it turns out I get far more done when I start earlier, end a little later, and don’t even pretend to do brain work for several hours in the middle. The other is the importance of momentum. When I get straight down to something really important early in the morning, before checking email, before interruptions from others, it beneficially alters the feel of the whole day: once interruptions do arise, they’re never quite so problematic. Another technique I couldn’t manage without comes from the writer and consultant Tony Schwartz: use a timer to work in 90-minute “sprints”, interspersed with significant breaks. (Thanks to this, I’m far better than I used to be at separating work from faffing around, rather than spending half the day flailing around in a mixture of the two.)

The one true lesson of the book, says its author, Mason Currey, is that “there’s no one way to get things done”. For every Joyce Carol Oates, industriously plugging away from 8am to 1pm and again from 4pm to 7pm, or Anthony Trollope, timing himself typing 250 words per quarter-hour, there’s a Sylvia Plath, unable to stick to a schedule. (Or a Friedrich Schiller, who could only write in the presence of the smell of rotting apples.) Still, some patterns do emerge. Here, then, are six lessons from history’s most creative minds.

1. Be a morning person

It’s not that there aren’t successful night owls: Marcel Proust, for one, rose sometime between 3pm and 6pm, immediately smoked opium powders to relieve his asthma, then rang for his coffee and croissant. But very early risers form a clear majority, including everyone from Mozart to Georgia O’Keeffe to Frank Lloyd Wright. (The 18th-century theologian Jonathan Edwards, Currey tells us, went so far as to argue that Jesus had endorsed early rising “by his rising from the grave very early”.) For some, waking at 5am or 6am is a necessity, the only way to combine their writing or painting with the demands of a job, raising children, or both. For others, it’s a way to avoid interruption: at that hour, as Hemingway wrote, “There is no one to disturb you and it is cool or cold and you come to your work and warm as you write.” There’s another, surprising argument in favour of rising early, which might persuade sceptics: that early-morning drowsiness might actually be helpful. At one point in his career, the novelist Nicholson Baker took to getting up at 4.30am, and he liked what it did to his brain: “The mind is newly cleansed, but it’s also befuddled… I found that I wrote differently then.”

Psychologists categorise people by what they call, rather charmingly, “morningness” and “eveningness”, but it’s not clear that either is objectively superior. There is evidence that morning people are happier and more conscientious, but also that night owls might be more intelligent. If you’re determined to join the ranks of the early risers, the crucial trick is to start getting up at the same time daily, but to go to bed only when you’re truly tired. You might sacrifice a day or two to exhaustion, but you’ll adjust to your new schedule more rapidly.

2. Don’t give up the day job

“Time is short, my strength is limited, the office is a horror, the apartment is noisy,” Franz Kafka complained to his fiancee, “and if a pleasant, straightforward life is not possible, then one must try to wriggle through by subtle manoeuvres.” He crammed in his writing between 10.30pm and the small hours of the morning. But in truth, a “pleasant, straightforward life” might not have been preferable, artistically speaking: Kafka, who worked in an insurance office, was one of many artists who have thrived on fitting creative activities around the edges of a busy life. William Faulkner wrote As I Lay Dying in the afternoons, before commencing his night shift at a power plant; TS Eliot’s day job at Lloyds bank gave him crucial financial security; William Carlos Williams, a paediatrician, scribbled poetry on the backs of his prescription pads. Limited time focuses the mind, and the self-discipline required to show up for a job seeps back into the processes of art. “I find that having a job is one of the best things in the world that could happen to me,” wrote Wallace Stevens, an insurance executive and poet. “It introduces discipline and regularity into one’s life.” Indeed, one obvious explanation for the alcoholism that pervades the lives of full-time authors is that it’s impossible to focus on writing for more than a few hours a day, and, well, you’ve got to make those other hours pass somehow.

3. Take lots of walks

There’s no shortage of evidence to suggest that walking – especially walking in natural settings, or just lingering amid greenery, even if you don’t actually walk much – is associated with increased productivity and proficiency at creative tasks. But Currey was surprised, in researching his book, by the sheer ubiquity of walking, especially in the daily routines of composers, including Beethoven, Mahler, Erik Satie and Tchaikovsky, “who believed he had to take a walk of exactly two hours a day and that if he returned even a few minutes early, great misfortunes would befall him”. It’s long been observed that doing almost anything other than sitting at a desk can be the best route to novel insights. These days, there’s surely an additional factor at play: when you’re on a walk, you’re physically removed from many of the sources of distraction – televisions, computer screens – that might otherwise interfere with deep thought.

Read the entire article here.

Image: Frank Lloyd Wright, architect, c. March 1, 1926. Courtesy of U.S. Library of Congress.

Send to Kindle

Bots That Build Themselves

Wouldn’t it be a glorious breakthrough if your next furniture purchase could assemble itself? No more sifting through stepwise Scandinavian manuals describing your next “Fjell” or “Bestå” pieces from IKEA; no more looking for a magnifying glass to decipher strange text from Asia; no more searches for an Allen wrench that fits those odd hexagonal bolts. Now, to set your expectations, recent innovations at the macro-mechanical level are not yet quite in the same league as planet-sized self-assembling spaceships (from the mind of Iain Banks). But researchers and engineers are making progress.

From ars technica:

At a certain level of complexity and obligation, sets of blocks can easily go from fun to tiresome to assemble. Legos? K’Nex? Great. Ikea furniture? Bridges? Construction scaffolding? Not so much. To make things easier, three scientists at MIT recently exhibited a system of self-assembling cubic robots that could in theory automate the process of putting complex systems together.

The blocks, dubbed M-Blocks, use a combination of magnets and an internal flywheel to move around and stick together. The flywheels, running off an internal battery, generate angular momentum that allows the blocks to flick themselves at each other, spinning them through the air. Magnets on the surfaces of the blocks allow them to click into position.

Each flywheel inside the blocks can spin at up to 20,000 rotations per minute. Motion happens when the flywheel spins and then is suddenly braked by a servo motor that tightens a belt encircling the flywheel, imparting its angular momentum to the body of the blocks. That momentum sends the block flying at a certain velocity toward its fellow blocks (if there is a lot of it) or else rolling across the ground (if there’s less of it). Watching a video of the blocks self-assembling, the effect is similar to watching Sid’s toys rally in Toy Story—a little off-putting to see so many parts moving into a whole at once, unpredictably moving together like balletic dying fish.

Each of the blocks is controlled by a 32-bit ARM microprocessor and three 3.7 volt batteries that afford each one between 20 and 100 moves before the battery life is depleted. Rolling is the least complicated motion, though the blocks can also use their flywheels to turn corners, climb over each other, or even complete a leap from ground level to three blocks high, sticking the landing on top of a column 51 percent of the time.

The blocks use 6-axis inertial measurement units, like those found on planes, ships, or spacecraft, to figure out how they are oriented in space. Each cube has an IR LED and a photodiode that cubes use to communicate with each other.

The authors note that the cubes’ motion is not very precise yet; one cube is considered to have moved successfully if it hits its goal position within three tries. The researchers found the RPMs needed to generate momentum for different movements through trial and error.

If the individual cube movements weren’t enough, groups of the cubes can also move together either as a cluster or as a row of cubes rolling in lockstep. A set of four cubes arranged in a square attempting to roll together in a block approaches the limits of the cubes’ hardware, the authors write. The cubes can even work together to get around an obstacle, rolling over each other and stacking together World War Z-zombie style until the bump in the road has been crossed.
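The flywheel-braking trick described in the excerpt can be sanity-checked with a rough back-of-envelope calculation. Only the 20,000 rpm flywheel speed comes from the article; every mass and dimension below is an assumed, illustrative figure, not a specification of the actual M-Blocks.

```python
import math

# Flywheel speed quoted in the article.
RPM_FLYWHEEL = 20_000
omega_fly = RPM_FLYWHEEL * 2 * math.pi / 60      # rad/s

# Assumed flywheel: an 80 g disc, 40 mm in diameter.
m_fly, r_fly = 0.08, 0.02                        # kg, m
I_fly = 0.5 * m_fly * r_fly**2                   # disc: I = 1/2 m r^2
L_fly = I_fly * omega_fly                        # angular momentum dumped on braking

# Assumed cube: 50 mm on a side, 140 g, pivoting about one bottom edge.
m_cube, a = 0.14, 0.05                           # kg, m
I_edge = (2 / 3) * m_cube * a**2                 # solid cube about an edge

# If braking transfers the flywheel's angular momentum to the cube body:
omega_cube = L_fly / I_edge                      # resulting cube rotation rate, rad/s
E_rot = 0.5 * I_edge * omega_cube**2             # resulting rotational energy, J

# Energy barrier to tip over the edge: the centre of mass must rise from
# a/2 to the half-diagonal a/sqrt(2).
E_tip = m_cube * 9.81 * a * (math.sqrt(2) - 1) / 2

print(f"rotational energy {E_rot:.2f} J vs tipping barrier {E_tip:.4f} J")
```

With these assumed numbers the transferred energy exceeds the tipping barrier by two orders of magnitude, which is why the blocks can not only roll but fling themselves several block-heights upward, as the article describes.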

Read the entire article here.

Video: M-Blocks. Courtesy of ars technica.


Sounds of Extinction

Camera aficionados will find themselves lamenting the demise of the film advance. Now that the world has moved on from film to digital, you will no longer hear that distinctive mechanical sound as you wind on the film, hoping that the teeth on the spool engage the plastic of the film.

Hardcore computer buffs will no doubt miss the beep-beep-hiss sound of the 56K modem — that now seemingly ancient box that once connected us to… well, who knows what it actually connected us to at that speed.

Our favorite arcane sounds, soon to be relegated to the audio graveyard: the telephone handset slam, the click and carriage return of the typewriter, the whir of reel-to-reel tape, the crackle of the diamond stylus as it first hits an empty groove on a 33.

More sounds you may (or may not) miss below.

From Wired:

The forward march of technology has a drum beat. These days, it’s custom text-message alerts, or your friend saying “OK, Glass” every five minutes like a tech-drunk parrot. And meanwhile, some of the most beloved sounds are falling out of the marching band.

The boops and beeps of bygone technology can be used to chart its evolution. From the zzzzzzap of the Tesla coil to the tap-tap-tap of Morse code being sent via telegraph, what were once the most important nerd sounds in the world are now just historical signposts. But progress marches forward, and for every irritatingly smug Angry Pigs grunt we have to listen to, we move further away from the sound of the Defender ship exploding.

Let’s celebrate the dying cries of technology’s past. The following sounds are either gone forever, or definitely on their way out. Bow your heads in silence and bid them a fond farewell.

The Telephone Slam

Ending a heated telephone conversation by slamming the receiver down in anger was so incredibly satisfying. There was no better way to punctuate your frustration with the person on the other end of the line. And when that receiver hit the phone, the clack of plastic against plastic was accompanied by a slight ringing of the phone’s internal bell. That’s how you knew you were really pissed — when you slammed the phone so hard, it rang.

There are other sounds we’ll miss from the phone. The busy signal died with the rise of voicemail (although my dad refuses to get voicemail or call waiting, so he’s still OG), and the rapid click-click-click of the dial on a rotary phone is gone. But none of those compare with hanging up the phone with a forceful slam.

Tapping a touchscreen just does not cut it. So the closest thing we have now is throwing the pitifully fragile smartphone against the wall.

The CRT Television

The only TVs left that still use cathode-ray tubes are stashed in the most depressing places — the waiting rooms of hospitals, used car dealerships, and the dusty guest bedroom at your grandparents’ house. But before we all fell prey to the magical resolution of zeros and ones, boxy CRT televisions warmed (literally) the living rooms of every home in America. The sounds they made when you turned them on warmed our hearts, too — the gentle whoosh of the degaussing coil as the set was brought to life with the heavy tug of a pull-switch, or the satisfying mechanical clunk of a power button. As the tube warmed up, you’d see the visuals slowly brighten on the screen, giving you ample time to settle into the couch to enjoy the latest episode of Seinfeld.

Read the entire article here.

Image courtesy of Wired.


Next Up: Apple TV

Robert Hof argues that the time is ripe for Steve Jobs’ corporate legacy to reinvent the TV. Apple transformed the personal computer industry, the mobile phone market and the music business. Clearly the company has all the components in place to assemble another innovation.

From Technology Review:

Steve Jobs couldn’t hide his frustration. Asked at a technology conference in 2010 whether Apple might finally turn its attention to television, he launched into an exasperated critique of TV. Cable and satellite TV companies make cheap, primitive set-top boxes that “squash any opportunity for innovation,” he fumed. Viewers are stuck with “a table full of remotes, a cluster full of boxes, a bunch of different [interfaces].” It was the kind of technological mess that cried out for Apple to clean it up with an elegant product. But Jobs professed to have no idea how his company could transform the TV.

Scarcely a year later, however, he sounded far more confident. Before he died on October 5, 2011, he told his biographer, ­Walter Isaacson, that Apple wanted to create an “integrated television set that is completely easy to use.” It would sync with other devices and Apple’s iCloud online storage service and provide “the simplest user interface you could imagine.” He added, tantalizingly, “I finally cracked it.”

Precisely what he cracked remains hidden behind Apple’s shroud of secrecy. Apple has had only one television-related product—the black, hockey-puck-size Apple TV device, which streams shows and movies to a TV. For years, Jobs and Tim Cook, his successor as CEO, called that device a “hobby.” But under the guise of this hobby, Apple has been steadily building hardware, software, and services that make it easier for people to watch shows and movies in whatever way they wish. Already, the company has more of the pieces for a compelling next-generation TV experience than people might realize.

And as Apple showed with the iPad and iPhone, it doesn’t have to invent every aspect of a product in order for it to be disruptive. Instead, it has become the leader in consumer electronics by combining existing technologies with some of its own and packaging them into products that are simple to use. TV seems to be at that moment now. People crave something better than the fusty, rigidly controlled cable TV experience, and indeed, the technologies exist for something better to come along. Speedier broadband connections, mobile TV apps, and the availability of some shows and movies on demand from Netflix and Hulu have made it easier to watch TV anytime, anywhere. The number of U.S. cable and satellite subscribers has been flat since 2010.

Apple would not comment. But it’s clear from two dozen interviews with people close to Apple suppliers and partners, and with people Apple has spoken to in the TV industry, that television—the medium and the device—is indeed its next target.

The biggest question is not whether Apple will take on TV, but when. The company must eventually come up with another breakthrough product; with annual revenue already topping $156 billion, it needs something very big to keep growth humming after the next year or two of the iPad boom. Walter Price, managing director of Allianz Global Investors, which holds nearly $1 billion in Apple shares, met with Apple executives in September and came away convinced that it would be years before Apple could get a significant share of the $345 billion worldwide market for televisions. But at $1,000, the bare minimum most analysts expect an Apple television to cost, such a product would eventually be a significant revenue generator. “You sell 10 million of those, it can move the needle,” he says.

Cook, who replaced Jobs as CEO in August 2011, could use a boost, too. He has presided over missteps such as a flawed iPhone mapping app that led to a rare apology and a major management departure. Seen as a peerless operations whiz, Cook still needs a revolutionary product of his own to cement his place next to Saint Steve. Corey Ferengul, a principal at the digital media investment firm Apace Equities and a former executive at Rovi, which provided TV programming guide services to Apple and other companies, says an Apple TV will be that product: “This will be Tim Cook’s first ‘holy shit’ innovation.”

What Apple Already Has

Rapt attention would be paid to whatever round-edged piece of brushed-aluminum hardware Apple produced, but a television set itself would probably be the least important piece of its television strategy. In fact, many well-connected people in technology and television, from TV and online video maven Mark Cuban to venture capitalist and former Apple executive Jean-Louis Gassée, can’t figure out why Apple would even bother with the machines.

For one thing, selling televisions is a low-margin business. No one subsidizes the purchase of a TV the way your wireless carrier does with the iPhone (an iPhone might cost you $200, but Apple’s revenue from it is much higher than that). TVs are also huge and difficult to stock in stores, let alone ship to homes. Most of all, the upgrade cycle that powers Apple’s iPhone and iPad profit engine doesn’t apply to television sets—no one replaces them every year or two.

But even though TVs don’t line up neatly with the way Apple makes money on other hardware, they are likely to remain central to people’s ever-increasing consumption of video, games, and other forms of media. Apple at least initially could sell the screens as a kind of Trojan horse—a way of entering or expanding its role in lines of business that are more profitable, such as selling movies, shows, games, and other Apple hardware.

Read the entire article following the jump.

Image courtesy of Apple, Inc.


RIP: Chief Innovation Officer

“Innovate or die” goes the business mantra. Embrace creativity or you and your company will fall by the wayside and wither into insignificance.

A leisurely skim through a couple of dozen TV commercials, print ads and online banners will reinforce the notion — we are surrounded by innovators.

Absolutely everyone is innovating: Subway innovates with a new type of sandwich; Campbell Soup innovates by bringing a new blend to market more quickly; Skyy vodka innovates by adding a splash of lemon flavoring; Mercedes innovates by adding blind spot technology in its car door mirrors; Delta Airlines innovates by adding an inch more legroom for weary fliers; Bank of America innovates by communicating with customers via Twitter; L’Oreal innovates by boosting lashes. Innovation is everywhere and all the time.

Or is it?

There was a time when innovation meant radical, disruptive change: think movable type, printing, telegraphy, light bulb, mass production, photographic film, transistor, frozen food processing, television.

Now, the word innovation is liberally applied to just about anything. Marketers and advertisers have co-opted the word in service of coolness and an entrepreneurial halo. But overuse of the label, and its attachment to almost every new product and service, has greatly diminished its value. Rather than connoting disruptive change, innovation in business is now no more than a corporate cliché, used to market the coolness of an incremental improvement. So, who needs a Chief Innovation Officer anymore? After all, we are now all innovators.

From the Wall Street Journal:

Got innovation? Just about every company says it does.

Businesses throw around the term to show they’re on the cutting edge of everything from technology and medicine to snacks and cosmetics. Companies are touting chief innovation officers, innovation teams, innovation strategies and even innovation days.

But that doesn’t mean the companies are actually doing any innovating. Instead they are using the word to convey monumental change when the progress they’re describing is quite ordinary.

Like the once ubiquitous buzzwords “synergy” and “optimization,” innovation is in danger of becoming a cliché—if it isn’t one already.

“Most companies say they’re innovative in the hope they can somehow con investors into thinking there is growth when there isn’t,” says Clayton Christensen, a professor at Harvard Business School and the author of the 1997 book, “The Innovator’s Dilemma.”

A search of annual and quarterly reports filed with the Securities and Exchange Commission shows companies mentioned some form of the word “innovation” 33,528 times last year, which was a 64% increase from five years before that.

More than 250 books with “innovation” in the title have been published in the last three months, most of them dealing with business, according to a search of Amazon.com.

The definition of the term varies widely depending on whom you ask. To Bill Hickey, chief executive of Bubble Wrap’s maker, Sealed Air Corp., it means inventing a product that has never existed, such as packing material that inflates on delivery.

To Ocean Spray Cranberries Inc. CEO Randy Papadellis, it is turning an overlooked commodity, such as leftover cranberry skins, into a consumer snack like Craisins.

To Pfizer Inc.’s research and development head, Mikael Dolsten, it is extending a product’s scope and application, such as expanding the use of a vaccine for infants that is also effective in older adults.

Scott Berkun, the author of the 2007 book “The Myths of Innovation,” which warns about the dilution of the word, says that what most people call an innovation is usually just a “very good product.”

He prefers to reserve the word for civilization-changing inventions like electricity, the printing press and the telephone—and, more recently, perhaps the iPhone.

Mr. Berkun, now an innovation consultant, advises clients to ban the word at their companies.

“It is a chameleon-like word to hide the lack of substance,” he says.

Mr. Berkun tracks innovation’s popularity as a buzzword back to the 1990s, amid the dot-com bubble and the release of James M. Utterback’s “Mastering the Dynamics of Innovation” and Mr. Christensen’s “Dilemma.”

The word appeals to large companies because it has connotations of being agile and “cool,” like start-ups and entrepreneurs, he says.

Read the entire article after the jump.

Image: Draisine, also called Laufmaschine (“running machine”), from around 1820. The Laufmaschine was invented by the German Baron Karl von Drais in Mannheim in 1817. Being the first means of transport to make use of the two-wheeler principle, the Laufmaschine is regarded as the archetype of the bicycle. Courtesy of Wikipedia.


The Death of Scientific Genius

There is a certain school of thought that asserts that scientific genius is a thing of the past. After all, we haven’t seen the recent emergence of pivotal talents such as Galileo, Newton, Darwin or Einstein. Is it possible that fundamentally new ways of looking at our world — a new mathematics, a new physics — are no longer possible?

In a recent essay in Nature, Dean Keith Simonton, professor of psychology at UC Davis, argues that such fundamental and singular originality is a thing of the past.

From ars technica:

Einstein, Darwin, Galileo, Mendeleev: the names of the great scientific minds throughout history inspire awe in those of us who love science. However, according to Dean Keith Simonton, a psychology professor at UC Davis, the era of the scientific genius may be over. In a comment paper published in Nature last week, he explains why.

The “scientific genius” Simonton refers to is a particular type of scientist; their contributions “are not just extensions of already-established, domain-specific expertise.” Instead, “the scientific genius conceives of a novel expertise.” Simonton uses words like “groundbreaking” and “overthrow” to illustrate the work of these individuals, explaining that they each contributed to science in one of two major ways: either by founding an entirely new field or by revolutionizing an already-existing discipline.

Today, according to Simonton, there just isn’t room to create new disciplines or overthrow the old ones. “It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline,” he writes. Furthermore, most scientific fields aren’t in the type of crisis that would enable paradigm shifts, according to Thomas Kuhn’s classic view of scientific revolutions. Simonton argues that instead of finding big new ideas, scientists currently work on the details in increasingly specialized and precise ways.

And to some extent, this argument is demonstrably correct. Science is becoming more and more specialized. The largest scientific fields are currently being split into smaller sub-disciplines: microbiology, astrophysics, neuroscience, and paleogeography, to name a few. Furthermore, researchers have more tools and the knowledge to home in on increasingly precise issues and questions than they did a century—or even a decade—ago.

But other aspects of Simonton’s argument are a matter of opinion. To me, separating scientists who “build on what’s already known” from those who “alter the foundations of knowledge” is a false dichotomy. Not only is it possible to do both, but it’s impossible to establish—or even make a novel contribution to—a scientific field without piggybacking on the work of others to some extent. After all, it’s really hard to solve the problems that require new solutions if other people haven’t done the work to identify them. Plate tectonics, for example, was built on observations that were already widely known.

And scientists aren’t done altering the foundations of knowledge, either. In science, as in many other walks of life, we don’t yet know everything we don’t know. Twenty years ago, exoplanets were hypothetical. Dark energy, as far as we knew, didn’t exist.

Simonton points out that “cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors” rather than a single great mind. This is almost certainly true, especially in genomics and physics. However, it’s this collaboration and cooperation between scientists, and between fields, that has helped science progress past where we ever thought possible. While Simonton uses “hybrid” fields like astrophysics and biochemistry to illustrate his argument that there is no room for completely new scientific disciplines, I see these fields as having room for growth. Here, diverse sets of ideas and methodologies can mix and lead to innovation.

Simonton is quick to assert that the end of scientific genius doesn’t mean science is at a standstill or that scientists are no longer smart. In fact, he argues the opposite: scientists are probably more intelligent now, since they must master more theoretical work, more complicated methods, and more diverse disciplines. In fact, Simonton himself would like to be wrong; “I hope that my thesis is incorrect. I would hate to think that genius in science has become extinct,” he writes.

Read the entire article after the jump.

Image: Einstein 1921 by F. Schmutzer. Courtesy of Wikipedia.


Light From Gravity

Often the best creative ideas and the most elegant solutions are the simplest. GravityLight is an example of this type of innovation. Here’s the problem: replace damaging and expensive kerosene fuel lamps in Africa with a less harmful and cheaper alternative. And, the solution:

From ars technica:

A London design consultancy has developed a cheap, clean, and safer alternative to the kerosene lamp. Kerosene burning lamps are thought to be used by over a billion people in developing nations, often in remote rural parts where electricity is either prohibitively expensive or simply unavailable. Kerosene’s potential replacement, GravityLight, is powered by gravity without the need of a battery—it’s also seen by its creators as a superior alternative to solar-powered lamps.

Kerosene lamps are problematic in three ways: they release pollutants which can contribute to respiratory disease; they pose a fire risk; and, thanks to the ongoing need to buy kerosene fuel, they are expensive to run. Research out of Brown University from July of last year called kerosene lamps a “significant contributor to respiratory diseases, which kill over 1.5 million people every year” in developing countries. The same paper found that kerosene lamps were responsible for 70 percent of fires (which cause 300,000 deaths every year) and 80 percent of burns. The World Bank has compared the indoor use of a kerosene lamp with smoking two packs of cigarettes per day.

The economics of the kerosene lamps are nearly as problematic, with the fuel costing many rural families a significant proportion of their money. The designers of the GravityLight say 10 to 20 percent of household income is typical, and they describe kerosene as a poverty trap, locking people into a “permanent state of subsistence living.” Considering that the median rural price of kerosene in Tanzania, Mali, Ghana, Kenya, and Senegal is $1.30 per liter, and the average rural income in Tanzania is under $9 per month, the designers’ figures seem depressingly plausible.

Approached by the charity Solar Aid to design a solar-powered LED alternative, London design consultancy Therefore shifted the emphasis away from solar, which requires expensive batteries that degrade over time. The company’s answer is both more simple and more radical: an LED lamp driven by a bag of sand, earth, or stones, pulled toward the Earth by gravity.

It takes only seconds to hoist the bag into place, after which the lamp provides up to half an hour of ambient light, or about 18 minutes of brighter task lighting. Though it isn’t clear quite how much light the GravityLight emits, its makers insist it is more than a kerosene lamp. Also unclear are the precise inner workings of the device, though clearly the weighted bag pulls a cord, driving an inner mechanism with a low-powered dynamo, with the aid of some robust plastic gearing. Talking to Ars by telephone, Therefore’s Jim Fullalove was loath to divulge details, but did reveal the gearing took the kinetic energy from a weighted bag descending at a rate of a millimeter per second to power a dynamo spinning at 2000rpm.
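The figures in the excerpt can be checked with simple arithmetic. The 1 mm/s descent rate is quoted from Therefore; the bag mass and drop height below are assumptions chosen for illustration, not the product's actual specifications.

```python
g = 9.81    # m/s^2, standard gravity
v = 0.001   # m/s, descent rate quoted by Therefore
m = 12.0    # kg, assumed mass of the sand-filled bag
h = 1.8     # m, assumed drop, roughly the height of a door frame

power_w = m * g * v          # mechanical power before dynamo and gearing losses
runtime_min = (h / v) / 60   # how long the bag takes to descend

print(f"about {power_w * 1000:.0f} mW for roughly {runtime_min:.0f} minutes")
```

Roughly a tenth of a watt of mechanical power, before losses, is in the right range to drive a single low-power LED, and a 1.8 m drop at 1 mm/s lasts 30 minutes — consistent with the "up to half an hour of ambient light" claimed in the article.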

Read more about GravityLight after the jump.

Video courtesy of GravityLight.


Innovation Before Its Time

Product driven companies, inventors from all backgrounds and market researchers have long studied how some innovations take off while others fizzle. So, why do some innovations gain traction? Given two similar but competing inventions, what factors lead to one eclipsing the other? Why do some pioneering ideas and inventions fail only to succeed from a different instigator years, sometimes decades, later? Answers to these questions would undoubtedly make many inventors household names, but as is the case in most human endeavors, the process of innovation is murky and more of an art than a science.

Author and columnist Matt Ridley offers some possible answers to the conundrum.

From the Wall Street Journal:

Bill Moggridge, who invented the laptop computer in 1982, died last week. His idea of using a hinge to attach a screen to a keyboard certainly caught on big, even if the first model was heavy, pricey and equipped with just 340 kilobytes of memory. But if Mr. Moggridge had never lived, there is little doubt that somebody else would have come up with the idea.

The phenomenon of multiple discovery is well known in science. Innovations famously occur to different people in different places at the same time. Whether it is calculus (Newton and Leibniz), or the planet Neptune (Adams and Le Verrier), or the theory of natural selection (Darwin and Wallace), or the light bulb (Edison, Swan and others), the history of science is littered with disputes over bragging rights caused by acts of simultaneous discovery.

As Kevin Kelly argues in his book “What Technology Wants,” there is an inexorability about technological evolution, expressed in multiple discovery, that makes it look as if technological innovation is an autonomous process with us as its victims rather than its directors.

Yet some inventions seem to have occurred to nobody until very late. The wheeled suitcase is arguably such a, well, case. Bernard Sadow applied for a patent on wheeled baggage in 1970, after a Eureka moment when he was lugging his heavy bags through an airport while a local worker effortlessly pushed a large cart past. You might conclude that Mr. Sadow was decades late. There was little to stop his father or grandfather from putting wheels on bags.

Mr. Sadow’s bags ran on four wheels, dragged on a lead like a dog. Seventeen years later a Northwest Airlines pilot, Robert Plath, invented the idea of two wheels on a suitcase held vertically, plus a telescopic handle to pull it with. This “Rollaboard,” now ubiquitous, also feels as if it could have been invented much earlier.

Or take the can opener, invented in the 1850s, eight decades after the can. Early 19th-century soldiers and explorers had to make do with stabbing bayonets into food cans. “Why doesn’t somebody come up with a wheeled cutter?” they must have muttered (or not) as they wrenched open the cans.

Perhaps there’s something that could be around today but hasn’t been invented and that will seem obvious to future generations. Or perhaps not. It’s highly unlikely that brilliant inventions are lying on the sidewalk ignored by the millions of entrepreneurs falling over each other to innovate. Plenty of terrible ideas are tried every day.

Understanding why inventions take so long may require mentally revisiting a long-ago time. For a poorly paid Napoleonic soldier who already carried a decent bayonet, adding a can opener to his limited kitbag was probably a waste of money and space. Indeed, going back to wheeled bags, if you consider the abundance of luggage porters with carts in the 1960s, the ease of curbside drop-offs at much smaller airports and the heavy iron casters then available, 1970 seems about the right date for the first invention of rolling luggage.

Read the entire article following the jump.

Image: Joseph Swan, inventor of the incandescent light bulb, which was first publicly demonstrated on 18 December 1878. Courtesy of Wikipedia.

Send to Kindle

Let the Wealthy Fund Innovation?

Nathan Myhrvold, former CTO of Microsoft, suggests that the wealthy should “think big” by funding large-scale and long-term innovation. Arguably, this would be a far preferable alternative to the wealthy using their millions to gain (more) political influence across much of the West, especially the United States. Myhrvold is now a backer of TerraPower, a nuclear energy startup.

From Technology Review:

For some technologists, it’s enough to build something that makes them financially successful. They retire happily. Others stay with the company they founded for years and years, enthralled with the platform it gives them. Think how different the work Steve Jobs did at Apple in 2010 was from the innovative ride he took in the 1970s.

A different kind of challenge is to start something new. Once you’ve made it, a new venture carries some disadvantages. It will be smaller than your last company, and more frustrating. Startups require a level of commitment not everyone is ready for after tasting success. On the other hand, there’s no better time than that to be an entrepreneur. You’re not gambling your family’s entire future on what happens next. That is why many accomplished technologists are out in the trenches, leading and funding startups in unprecedented areas.

Jeff Bezos has Blue Origin, a company that builds spaceships. Elon Musk has Tesla, an electric-car company, and SpaceX, another rocket-ship company. Bill Gates took on big challenges in the developing world—combating malaria, HIV, and poverty. He is also funding inventive new companies at the cutting edge of technology. I’m involved in some of them, including TerraPower, which we formed to commercialize a promising new kind of nuclear reactor.

There are few technologies more daunting to inventors (and investors) than nuclear power. On top of the logistics, science, and engineering, you have to deal with the regulations and politics. In the 1970s, much of the world became afraid of nuclear energy, and last year’s events in Fukushima haven’t exactly assuaged those fears.

So why would any rational group of people create a nuclear power company? Part of the reason is that Bill and I have been primed to think long-term. We have the experience and resources to look for game-changing ideas—and the confidence to act when we think we’ve found one. Other technologists who fund ambitious projects have similar motivations. Elon Musk and Jeff Bezos are literally reaching for the stars because they believe NASA and its traditional suppliers can’t innovate at the same rate they can.

In the next few decades, we need more technology leaders to reach for some very big advances. If 20 of us were to try to solve energy problems—with carbon capture and storage, or perhaps some other crazy idea—maybe one or two of us would actually succeed. If nobody tries, we’ll all certainly fail.

I believe the world will need to rely on nuclear energy. A looming energy crisis will force us to rework the underpinnings of our energy economy. That happened last in the 19th century, when we moved at unprecedented scale toward gas and oil. The 20th century didn’t require a big switcheroo, but looking into the 21st century, it’s clear that we have a much bigger challenge.

Read the entire article following the jump.

Image: Nathan Myhrvold. Courtesy of AllThingsD.

Send to Kindle

How Apple, With the Help of Others, Invented the iPhone

Apple’s invention of the iPhone is a story of insight, collaboration, cannibalization and dogged persistence over the course of a decade.

From Slate:

Like many of Apple’s inventions, the iPhone began not with a vision, but with a problem. By 2005, the iPod had eclipsed the Mac as Apple’s largest source of revenue, but the music player that rescued Apple from the brink now faced a looming threat: The cellphone. Everyone carried a phone, and if phone companies figured out a way to make playing music easy and fun, “that could render the iPod unnecessary,” Steve Jobs once warned Apple’s board, according to Walter Isaacson’s biography.

Fortunately for Apple, most phones on the market sucked. Jobs and other Apple executives would grouse about their phones all the time. The simplest phones didn’t do much other than make calls, and the more functions you added to phones, the more complicated they were to use. In particular, phones “weren’t any good as entertainment devices,” Phil Schiller, Apple’s longtime marketing chief, testified during the company’s patent trial with Samsung. Getting music and video on 2005-era phones was too difficult, and if you managed that, getting the device to actually play your stuff was a joyless trudge through numerous screens and menus.

That was because most phones were hobbled by a basic problem—they didn’t have a good method for input. Hard keys (like the ones on the BlackBerry) worked for typing, but they were terrible for navigation. In theory, phones with touchscreens could do a lot more, but in reality they were also a pain to use. Touchscreens of the era couldn’t detect finger presses—they needed a stylus, and the only way to use a stylus was with two hands (one to hold the phone and one to hold the stylus). Nobody wanted a music player that required two-handed operation.

This is the story of how Apple reinvented the phone. The general outlines of this tale have been told before, most thoroughly in Isaacson’s biography. But the Samsung case—which ended last month with a resounding victory for Apple—revealed a trove of details about the invention, the sort of details that Apple is ordinarily loath to make public. We got pictures of dozens of prototypes of the iPhone and iPad. We got internal email that explained how executives and designers solved key problems in the iPhone’s design. We got testimony from Apple’s top brass explaining why the iPhone was a gamble.

Put it all together and you get a remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple’s bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple’s inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too—a device that would eat the iPod’s lunch. Apple’s only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple’s brass saw the phone as an opportunity for real innovation. “We wanted to build a phone for ourselves,” Scott Forstall, who heads the team that built the phone’s operating system, said at the trial. “We wanted to build a phone that we loved.”

The problem was how to do it. When Jobs unveiled the iPhone in 2007, he showed off a picture of an iPod with a rotary-phone dialer instead of a click wheel. That was a joke, but it wasn’t far from Apple’s initial thoughts about phones. The click wheel—the brilliant interface that powered the iPod (which was invented for Apple by a firm called Synaptics)—was a simple, widely understood way to navigate through menus in order to play music. So why not use it to make calls, too?

In 2005, Tony Fadell, the engineer who’s credited with inventing the first iPod, got hold of a high-end desk phone made by Samsung and Bang & Olufsen that you navigated using a set of numerical keys placed around a rotating wheel. A Samsung cell phone, the X810, used a similar rotating wheel for input. Fadell didn’t seem to like the idea. “Weird way to hold the cellphone,” he wrote in an email to others at Apple. But Jobs thought it could work. “This may be our answer—we could put the number pad around our clickwheel,” he wrote. (Samsung pointed to this thread as evidence for its claim that Apple’s designs were inspired by other companies, including Samsung itself.)

Around the same time, Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there were no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.
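The two effects Ording built are easy to appreciate in miniature. A toy sketch of the physics described above, with purely illustrative constants (Apple’s actual tuning values are not public):

```python
# Toy sketch of the two scrolling behaviors described above:
# "inertial scrolling" (flick velocity decays as if under friction)
# and the "rubber-band effect" (overshoot past the list edge is
# pulled back toward the boundary). Constants are illustrative.


def inertial_scroll(position, velocity, friction=0.95, dt=1.0):
    """One simulation step: position advances, then velocity decays."""
    position += velocity * dt
    velocity *= friction
    return position, velocity


def rubber_band(position, max_position, stiffness=0.5):
    """Past the edge, only a fraction of the overshoot is shown,
    so the list appears to stretch and then snap back."""
    if position > max_position:
        overshoot = position - max_position
        position = max_position + overshoot * stiffness
    return position
```

Run `inertial_scroll` in a loop and the list glides to a stop on its own; feed an out-of-range position through `rubber_band` and the content bounces at the boundary instead of stopping dead, which is precisely the tactile quality Jobs reportedly found so convincing.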

Read the entire article after the jump.

Retro design iPhone courtesy of Ubergizmo.

Send to Kindle

Corporate R&D Meets Public Innovation

As corporate purse strings have drawn tighter, some companies have looked for innovation beyond the office cubicle.

From Technology Review:

Where does innovation come from? For one answer, consider the work of MIT professor Eric von Hippel, who has calculated that ordinary U.S. consumers spend $20 billion in time and money trying to improve on household products—for example, modifying a dog-food bowl so it doesn’t slide on the floor. Von Hippel estimates that these backyard Edisons collectively invest more in their efforts than the largest corporation anywhere does in R&D.

The low-tech kludges of consumers might once have had little impact. But one company, Procter & Gamble, has actually found a way to tap into them; it now gets many of its ideas for new Swiffers and toothpaste tubes from the general public. One way it has managed to do so is with the help of InnoCentive, a company in Waltham, Massachusetts, that specializes in organizing prize competitions over the Internet. Volunteer “solvers” can try to earn $500 to $1 million by coming up with answers to a company’s problems.

We like Procter & Gamble’s story because the company has discovered a creative, systematic way to pay for ideas originating far outside of its own development labs. It’s made an innovation in funding innovation, which is the subject of this month’s Technology Review business report.

How we pay for innovation is a question prompted, in part, by the beleaguered state of the venture capital industry. Over the long term, it’s the system that’s most often gotten the economic incentives right. Consider that although fewer than two of every 1,000 new American businesses are venture backed, these account for 11 percent of public companies and 6 percent of U.S. employment, according to Harvard Business School professor Josh Lerner. (Many of those companies, although not all, have succeeded because they’ve brought new technology to market.)

Yet losses since the dot-com boom in the late 1990s have taken a toll. In August, the nation’s largest public pension fund, the California Public Employees Retirement System, said it would basically stop investing with the state’s venture funds, citing returns of 0.0 percent over a decade.

The crisis has partly to do with the size of venture funds—$1 billion isn’t uncommon. That means they need big money plays at a time when entrepreneurs are headed on exactly the opposite course. On the Web, it’s never been cheaper to start a company. You can outsource software development, rent a thousand servers, and order hardware designs from China. That is significant because company founders can often get the money they need from seed accelerators, angel investors, or Internet-based funding mechanisms such as Kickstarter.

“We’re in a period of incredible change in how you fund innovation, especially entrepreneurial innovation,” says Ethan Mollick, a professor of management science at the Wharton School. He sees what’s happening as a kind of democratization—the bets are getting smaller, but also more spread out and numerous. He thinks this could be a good thing. “One of the ways we get more innovation is by taking more draws,” he says.

In an example of the changes ahead, Mollick cites plans by the U.S. Securities and Exchange Commission to allow “crowdfunding”—it will let companies raise $1 million or so directly from the public, every year, over the Internet. (This activity had previously been outlawed as a hazard to gullible investors.) Crowdfunding may lead to a major upset in the way inventions get financed, especially those with popular appeal and modest funding requirements, like new gadget designs.

Read the entire article after the jump.

Image courtesy of Louisiana Department of Education.

Send to Kindle

A View on Innovation

Joi Ito, director of the MIT Media Lab, muses on the subject of innovation in this article excerpted from the Edge.

From the Edge:

I grew up in Japan part of my life, and we were surrounded by Buddhists. If you read some of the interesting books from the Dalai Lama talking about happiness, there’s definitely a difference in the way that Buddhists think about happiness, the world and how it works, versus the West. I think that a lot of science and technology has this somewhat Western view, which is how do you control nature, how do you triumph over nature? Even if you look at the gardens in Europe, a lot of it is about look at what we made this hedge do.

What’s really interesting and important to think about is, as we start to realize that the world is complex, and as the science that we use starts to become complex and, Timothy Leary used this quote, “Newton’s laws work well when things are normal sized, when they’re moving at a normal speed.” You can predict the motion of objects using Newton’s laws in most circumstances, but when things start to get really fast, really big, and really complex, you find out that Newton’s laws are actually local ordinances, and there’s a bunch of other stuff that comes into play.

One of the things that we haven’t done very well is we’ve been looking at science and technology as trying to make things more efficient, more effective on a local scale, without looking at the system around it. We were looking at objects rather than the system, or looking at the nodes rather than the network. When we talk about big data, when we talk about networks, we understand this.

I’m an Internet guy, and I divide the world into my life before the Internet and after the Internet. I helped build one of the first commercial Internet service providers in Japan, and when we were building that, there was a tremendous amount of resistance. There were lawyers who wrote these big articles about how the Internet was illegal because there was no one in charge. There was a competing standard back then called X.25, which was being built by the telephone companies and the government. It was centrally-planned, huge specifications; it was very much under control.

The Internet was completely distributed. David Weinberger would use the term ‘small pieces loosely joined.’ But it was really a decentralized innovation that was somewhat of a kind of working anarchy. As we all know, the Internet won. What the Internet winning was, was the triumph of distributed innovation over centralized innovation. It was a triumph of chaos over control. There were a bunch of different reasons. Moore’s law, lowering the cost of innovation—it was this kind of complexity that was going on, the fact that you could change things later, that made this kind of distributed innovation work. What happened when the Internet happened is that the Internet combined with Moore’s law, kept on driving the cost of innovation lower and lower and lower and lower. When you think about the Googles or the Yahoos or the Facebooks of the world, those products, those services were created not in big, huge R&D labs with hundreds of millions of dollars of funding; they were created by kids in dorm rooms.

In the old days, you’d have to have an idea and then you’d write a proposal for a grant or a VC, and then you’d raise the money, you’d plan the thing, you would hire the people and build it. Today, what you do is you build the thing, you raise the money and then you figure out the plan and then you figure out the business model. It’s completely the opposite, you don’t have to ask permission to innovate anymore. What’s really important is, imagine if somebody came up to you and said, “I’m going to build the most popular encyclopedia in the world, and the trick is anyone can edit it.” You wouldn’t have given the guy a desk, you wouldn’t have given the guy five bucks. But the fact that he can just try that, and in retrospect it works, it’s fine, what we’re realizing is that a lot of the greatest innovations that we see today are things that wouldn’t have gotten approval, right?

The Internet, the DNA and the philosophy of the Internet is all about freedom to connect, freedom to hack, and freedom to innovate. It’s really lowering the cost of distribution and innovation. What’s really important about that is that when you started thinking about how we used to innovate was we used to raise money and we would make plans. Well, it’s an interesting coincidence because the world is now so complex, so fast, so unpredictable, that you can’t. Your plans don’t really work that well. Every single major thing that’s happened, both good and bad, was probably unpredicted, and most of our plans failed.

Today, what you want is you want to have resilience and agility, and you want to be able to participate in, and interact with the disruptive things. Everybody loves the word ‘disruptive innovation.’ Well, how does, and where does disruptive innovation happen? It doesn’t happen in the big planned R&D labs; it happens on the edges of the network. Many important ideas, especially in the consumer Internet space, but more and more now in other things like hardware and biotech, you’re finding it happening around the edges.

What does it mean, innovation on the edges? If you sit there and you write a grant proposal, basically what you’re doing is you’re saying, okay, I’m going to build this, so give me money. By definition it’s incremental because first of all, you’ve got to be able to explain what it is you’re going to make, and you’ve got to say it in a way that’s dumbed-down enough that the person who’s giving you money can understand it. By definition, incremental research isn’t going to be very disruptive. Scholarship is somewhat incremental. The fact that if you have a peer review journal, it means five other people have to believe that what you’re doing is an interesting thing. Some of the most interesting innovations that happen, happen when the person doing it doesn’t even know what’s going on. True discovery, I think, happens in a very undirected way, when you figure it out as you go along.

Look at YouTube. The first version of YouTube, if you saw it in 2005, was a dating site with video. It obviously didn’t work. The default was I am male, looking for anyone between 18 and 35, upload video. That didn’t work. They pivoted; it became Flickr for video. That didn’t work. Then eventually they latched onto Myspace and it took off like crazy. But they figured it out as they went along. This sort of discovery as you go along is a really, really important mode of innovation. The problem is, whether you’re talking about departments in academia or traditional R&D, anything under control is not going to exhibit that behavior.

If you apply that to what I’m trying to do at the Media Lab, the key thing about the Media Lab is that we have undirected funds. So if a kid wants to try something, he doesn’t have to write me a proposal. He doesn’t have to explain to me what he wants to do. He can just go, or she can just go, and do whatever they want, and that’s really important, this undirected research.

The other part that’s really important, as you start to look for opportunities is what I would call pattern recognition or peripheral vision. There’s a really interesting study, if you put a dot on a screen and you put images like colors around it. If you tell the person to look at the dot, they’ll see the stuff on the first reading, but the minute you give somebody a financial incentive to watch it, I’ll give you ten bucks to watch the dot, those peripheral images disappear. If you’ve ever gone mushroom hunting, it’s a very similar phenomenon. If you are trying to find mushrooms in a forest, the whole thing is you have to stop looking, and then suddenly your pattern recognition kicks in and the mushrooms pop out. Hunters do this same thing, archers looking for animals.

When you focus on something, what you’re actually doing is only seeing about one percent of your field of vision. Your brain is filling everything else in with what you think is there, but it’s usually wrong, right? So what’s really important is noticing those disruptive things that are happening in your periphery. If you’re a newspaper and you’re trying to figure out what the world is like without printing presses, well, if you’re staring at your printing press, you’re not looking at the stuff around you. So what’s really important is, how do you start to look around you?

Read the entire article following the jump.

Send to Kindle

D-School is the Place

Forget art school, engineering school, law school and B-school (business). For wannabe innovators, the current place to be is D-school. Design school, that is.

Design school teaches a problem-solving method known as “design thinking”. Before it was re-branded in corporatespeak, this used to be known as “trial and error”.

Many corporations are finding this approach to be both a challenge and a boon; after all, even in 2012, not many businesses encourage their employees to fail.

From the Wall Street Journal:

In 2007, Scott Cook, founder of Intuit Inc., the software company behind TurboTax, felt the company wasn’t innovating fast enough. So he decided to adopt an approach to product development that has grown increasingly popular in the corporate world: design thinking.

Loosely defined, design thinking is a problem-solving method that involves close observation of users or customers and a development process of extensive—often rapid—trial and error.

Mr. Cook said the initiative, termed “Design for Delight,” involves field research with customers to understand their “pain points”—an examination of what frustrates them in their offices and homes.

Intuit staffers then “painstorm” to come up with a variety of solutions to address the problems, and experiment with customers to find the best ones.

In one instance, a team of Intuit employees was studying how customers could take pictures of tax forms to reduce typing errors. Some younger customers, taking photos with their smartphones, were frustrated that they couldn’t just complete their taxes on their mobiles. Thus was born the mobile tax app SnapTax in 2010, which has been downloaded more than a million times in the past two years, the company said.

At SAP AG, hundreds of employees across departments work on challenges, such as building a raincoat out of a trash bag or designing a better coffee cup. The hope is that the sessions will train them in the tenets of design thinking, which they can then apply to their own business pursuits, said Carly Cooper, an SAP director who runs many of the sessions.

Last year, when SAP employees talked to sales representatives after closing deals, they found that one of the sales representatives’ biggest concerns was simply, when were they going to get paid. The insight led SAP to develop a new mobile product allowing salespeople to check on the status of their commissions.

Read the entire article after the jump.

Send to Kindle

The SpeechJammer and Other Innovations to Come

The mind boggles at the situations in which a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children.

From the New York Times:

When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.”
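The delayed-auditory-feedback trick at the heart of the device is conceptually simple: your own voice, replayed a few hundred milliseconds late. A minimal sketch of that delay line, with an illustrative sample rate (the SpeechJammer’s actual signal chain is not described in the article):

```python
from collections import deque


# Minimal sketch of the delayed-auditory-feedback idea behind the
# SpeechJammer: incoming audio samples are buffered and played back
# a few hundred milliseconds late. The 8 kHz rate is illustrative.
class DelayLine:
    def __init__(self, delay_ms, sample_rate=8000):
        # Pre-fill with silence so output lags input by exactly delay_ms.
        self.buffer = deque([0] * int(sample_rate * delay_ms / 1000))

    def process(self, sample):
        """Push the live sample in; pop the delayed sample out."""
        self.buffer.append(sample)
        return self.buffer.popleft()
```

At 8 kHz, a 200 ms delay means a 1,600-sample buffer: each sample you speak emerges 1,600 samples (one fifth of a second) later, which is enough lag for the brain’s speech-monitoring loop to trip over itself.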

Read the entire list of 32 weird and wonderful innovations after the jump.

Graphic courtesy of Chris Nosenzo / New York Times.

Send to Kindle