When Is a Novel Not a Novel?


In the eyes of many teachers and parents, a novel is not a novel when it is graphic — as in a graphic novel, with illustrations and images, not necessarily explicit in content (when did “graphic” come to connote negativity anyway?). Educators who tell their students to put the graphic novel back on the shelf — in favor of a wordier tome — still tend to perceive this form of literature as nothing more than a bound, cartoonish comic strip aimed at childish readers or nerdy boys. Not so! Graphic novels are not your father’s Dandy or Beano (though those are entertaining in their own right).

Some critically acclaimed and riveting works have recently debuted in graphic form, and the genre is holding its own and slowly proving its worth. The stories, both true and imagined, are rich and moving, and the illustrations, far from distracting the eye, add gravitas and depth. And the subjects now go far beyond the realm of superheroes, zombies and robots — they walk us through all that it is to be human: tragedy, atrocity, love, angst, guilt, loss, joy.

A few recent classics come to mind: Persepolis, a French-language, autobiographical graphic novel by Marjane Satrapi; Logicomix: An Epic Search for Truth, a graphic novel about the quest for logic and reason in mathematics, by Apostolos Doxiadis; Fun Home: A Family Tragicomic, by Alison Bechdel; Maus I: A Survivor’s Tale: My Father Bleeds History, by Art Spiegelman; and Blankets, an autobiographical graphic novel by Craig Thompson.

From the Washington Post:

A young girl, a primary grade-schooler with a well-worn library card, was enthusiastically reading a riveting memoir when a stern tone descended upon her.

“What is that?” the teacher asked/accused.

“It’s a graphic novel,” came the girl’s reply.

Such works, the girl was told, were unacceptable for classroom “reading time,” let alone for a book report. The teacher’s sharp ruling boiled down to a four-word excuse for banishment:

“Graphic. Novels. Aren’t. Books.”

Sigh.

Here we go again…

Really? Two decades after Art Spiegelman’s landmark Holocaust graphic novel “Maus” won the Pulitzer Prize and helped stake a fresh claim for comics as literature — paving the way for the appreciation of such works as “Persepolis” and “Blankets” and “American Born Chinese” — do a significant number of teachers and administrators remain mired in such backward thinking?

Unfortunately, the question is not merely rhetorical. These curricular “world-is-flat”-ers are still thick on our school grounds. But it’s time for the culture’s tectonic plates to more rapidly force a shift in academic thought.

As we step into 2014, this lingering bias in curriculum needs to cease. We fervently urge the least enlightened of our educators to catch up with the rest of the class. And to make our case, let us present Exhibit A:

The young girl who faced that rebuke of illustrated books was a relative of mine. And that book (a-hem) in question was “Stitches: A Memoir,” acclaimed author David Small’s poignant personal story of a dysfunctional childhood home — including his adolescent battle with throat cancer, which may have been caused by his doctor-father’s early over-embrace of X-ray radiation. In Small’s masterful prose and liquid pictures, we vividly experience the voiceless boy-patient’s raw emotions.

Even four years ago, quite a few people would have begged to differ with that grade-school teacher. “Stitches” climbed the bestseller list of the New York Times, which deemed the book worthy of review; was named one of the best books of the year by such outlets as Publishers Weekly; and was a finalist for the 2009 National Book Award for Young People’s Literature. No less than Pulitzer Prize-winning cartoonist/author/playwright/screenwriter Jules Feiffer said aptly of Small’s masterpiece: “It left me speechless.”

As for the teacher’s wrong-headed thinking, it left me speechless, too. Her decision was not a mere judgment against one book, but an ignorant indictment of all graphic novels. As blanket criticism, it was unabashedly threadbare.

Consider my commentary here, then, to be a criticism of that criticism. Because what the larger academic problem calls for is not damnation, but persuasion. A struck match. Into Plato’s cave, let us bring truer illumination.

What follows is not some broad indictment of modern American education. I was born into a brood of teachers — the family crest might as well be a chalkboard — and I deeply value what too often is one of the nation’s more thankless and underpaid cornerstone careers. Plus, as an artist who has spoken to thousands of impressive educators — many of whom appreciated my history-themed syndicated comic strip — I applaud those who thoughtfully and passionately help inform and shape young minds, while keeping an open mind themselves. On this front, so many of them “get” it.

What this essay is, at heart, is an extended hand in the name of better understanding — especially as our schools are filled with so-called “reluctant readers” and other struggling learners. We face an educational imperative: Why not use every effective teaching tool at our disposal? Decades of studies have shown the power of visual learning as an effective scholastic technique. Author Neil Gaiman (winner of the Newbery and Carnegie medals for children’s lit) recently noted that comics were once falsely accused of fostering illiteracy. We now know that comics — the marriage of word and picture in a dynamic relationship that fires synapses across the brain — can be a bridge to literacy and a path to learning. Armed with that knowledge, the last thing we need blocking that footbridge is the Reluctant Teacher.

Fortunately, 2013 rises to aid our cause. It was a banner year for graphic novels; top authors ranged from a young hip-hop fan to a heroic septuagenarian congressman writing his first comic — and in between were a couple of world-class cartoonists who also happen to be widely recognized educators.

Great works help beget great change. So here, then, is our examination of 10 stellar graphic novels and illustrated books from the year past (all equally fit for adult consumption, to boot). Because the writing is on the classroom wall. As generations are weaned on the Internet, our culture grows ever more visual. And the take-home lesson is this:

Let us meet our young minds where they live.

Let us smartly employ the resources of visual learning.

Read the entire article here.

Image: Persepolis 1 and Persepolis 2, book covers by Marjane Satrapi. Courtesy of Marjane Satrapi / Wikipedia.

 

How to Rendezvous With a Comet


First, you will need a significant piece of space hardware. Second, you will need to launch it, having meticulously planned its convoluted trajectory through the solar system. Third, wait 12 years for the craft to reach the comet. Fourth, and with fingers crossed, launch a landing probe from the craft onto the 2.5-mile-wide comet 67P/Churyumov-Gerasimenko, while all are hurtling through space at around 25,000 miles per hour.

So far so good. The Rosetta spacecraft woke up from its self-induced 30-month hibernation on January 20, having slumbered to conserve energy. Now it continues on the final leg of its journey — a year-long trek to catch the comet.

Visit the European Space Agency (ESA) Rosetta mission home page here.

From ars technica:

The Rosetta spacecraft is due to wake up on the morning of January 20 after a 30-month hibernation in deep space. For the past ten years, the three-ton spacecraft has been on a one-way trip to a 4 km-wide comet. When it arrives, it will set about performing a maneuver that has never been done before: landing on a comet’s surface.

The spacecraft has already achieved some success on its long journey through the solar system. It has passed by two asteroids—Steins in 2008 and Lutetia in 2010—and it tried out some of its instruments on them. Because Rosetta’s journey is so protracted, however, preserving energy has been of the utmost importance, which is why it was put into hibernation in June 2011. The journey has taken so long because the spacecraft needed to be “gravity-assisted” by many planets in order to reach the necessary velocity to match the comet’s orbit.

When it wakes up, Rosetta is expected to take a few hours to establish contact with Earth, 673 million km (396 million mi) away. The scientists involved will wait with bated breath. Dan Andrews, part of a team at the Open University who built one of Rosetta’s on-board instruments, said, “If there isn’t sufficient power, Rosetta will go back to sleep and try again later. The wake-up process is driven by software commands already on the spacecraft. It will wake itself up autonomously and spend some time warming up and orienting its antenna toward Earth to ‘phone home.’”

If multiple attempts fail to wake Rosetta, it could mean the end of the mission.

Rosetta should reach comet 67P/Churyumov-Gerasimenko in May 2014, at which point it will decelerate to match the speed of the comet. In August 2014, Rosetta will enter orbit around the comet to scout 67P’s surface in search of a landing spot. Then, in November 2014, Rosetta’s on-board lander, Philae, will be ejected from the orbiting spacecraft onto the surface of the comet. There are a lot of things that need to come together perfectly for this to go smoothly, but space endeavors are designed to chart unknown territories, and Rosetta will be doing just that.

If Rosetta manages this mission successfully, it will make history as the first spacecraft to land on the surface of a comet. Success is by no means assured, as scientists have no idea what to expect when Rosetta arrives at the comet. Will the comet’s surface be icy, soft, hard, or rocky? This information will affect what kind of landing the spacecraft can expect and whether it will sink into the comet or bounce off. Another problem is that comet 67P is small and has a weak gravitational field, which will make holding the spacecraft on its surface challenging, even after a successful landing.

At a cost of €1 billion ($1.36 billion), it’s important that we get some value for our money with this mission. To ensure we do, Rosetta was designed to help answer some of the most basic questions about Earth and our solar system, such as where water and life originated, even if the landing doesn’t work out as well as we hope it will.

Comets are thought to have delivered some of the chemicals needed for life, including water to Earth and possibly other planets. This is why comet ISON, which sadly did not survive its close encounter with the Sun, had created excitement among scientists. If it had survived, it would have been the closest scientists could get to a comet with modern instruments.

Comet ISON’s demise means Rosetta is more important than ever. Without measuring the composition of comets, we won’t fully understand the origin of our planet. Comet 67P is thought to have preserved the very earliest ingredients of the solar system, acting as a small, deep-freeze time capsule. The hope is that it will now reveal its long-held secrets to Rosetta.

Andrews said, “It will be the first time a spacecraft will approach a comet and actually stay with it for a prolonged period of time, studying the processes whereby a comet ‘switches on’ as it approaches the Sun.”

Once on the comet’s surface, the Philae lander will deploy instruments to measure different forms of the elements hydrogen, carbon, nitrogen, and oxygen in the comet ice. This will allow scientists to understand the composition of the water and organic components that were collected by the comet 4.6 billion years ago, at the very start of the Solar System.

Read the entire article here.

Video: Rosetta’s Twelve-Year Journey to Land on a Comet. Courtesy of European Space Agency (ESA) Space Science.

 

MondayMap: Best of the Worst


Today’s map is not for the faint of heart, but it is fascinating nonetheless. It tells us that if you are a resident of West Virginia you are more likely to die from a heart attack, whereas if you’re from Alabama you’ll die from a stroke; in Kentucky, well, cancer will get you first; and in Georgia you are more likely to contract the flu.

Utah seems to have the highest predilection for porn, while Rhode Islanders love their illicit drugs, Coloradans prefer only cocaine and residents of New Mexico have a penchant for alcohol. On the educational front, Maine tops the list with the lowest SAT scores, but Texas has the lowest high school graduation rates.

The map is based on a wide collection of published statistics, along with a few less objective measures, as in the case of North Dakota (ugliest residents).

Find more details about the map here.

Map courtesy of Jeff Wysaski over at his blog Pleated Jeans.

Zynga: Out to Pasture or Buying the Farm?

By one measure, Zynga’s FarmVille on Facebook (and MSN) is extremely successful: that measure is its dedicated, even addicted, players, who number in the millions each day. By another measure, making money, Zynga isn’t faring very well at all. Despite a valuation of over $3 billion, the company is struggling to find a way to convert virtual game currency into real dollar spend.

How the internet ecosystem manages to reward the lack of real and sustainable value creation is astonishing to those on the outside — but good for those on the inside. Would that all companies could bask in the glory of venture capital and IPO bubbles on such flimsy financial foundations. Quack!

Zynga has been on company deathwatch for a while. Read on to see some of its peers that seem to be on life support.

From ars technica:

HTC

To say that 2013 was a bad year for Taiwanese handset maker HTC is probably something of an understatement. The year was capped off by the indictment of six HTC employees on a variety of charges such as taking kickbacks, falsifying expenses, and leaking company trade secrets—including elements of HTC’s new interface for Android phones. Thomas Chien, the former vice president of design for HTC, was reportedly taking the information to a group in Beijing that was planning to form a new company, according to The Wall Street Journal.

On top of that, despite positive reviews for its flagship HTC One line, the company has been struggling to sell the phone. Blame it on bad marketing, bad execution, or just bad management, but HTC has been beaten down badly by Samsung.

The investigation of Chien started in August, but it was hardly the worst news HTC had last year as the company’s executive ranks thinned and losses mounted. There was reshuffling of deck chairs at the top of the company as CEO Peter Chou handed off chunks of his operational duties to co-founder and chairwoman Cher Wang—giving her control over marketing, sales, and the company’s supply chain in the wake of a parts shortage that hampered the launch of the HTC One. The Wall Street Journal reported that HTC couldn’t get camera parts for the One because suppliers believed “it is no longer a tier one customer,” according to an unnamed executive.

That’s a pretty dramatic fall from HTC’s peak, when the company vaulted from contract manufacturer to major mobile player. Way back in the heady days of 2011, HTC was second only to Apple in US cell phone market share, and it held 9.3 percent of the global market. Now it’s in fourth place in the US, with just 6.7 percent market share based on comScore numbers—behind Google’s Motorola and just ahead of LG Electronics by a hair. Its sales in the last quarter of 2013 were down by 40 percent from last year, and revenues for 2013 were down by 28.6 percent from 2012. With a patent infringement suit from Nokia over chips in the HTC One and One Mini still hanging over its head in the United Kingdom, the company could face a ban on selling some of its phones there.

Executives insist that HTC won’t be sold, especially to a Chinese buyer—the politics of such a deal being toxic to a Taiwanese company. But ironically, the Chinese market is perhaps HTC’s best hope in the long term—the company does more than a third of its business there. The company’s best bet may be going back to manufacturing phones with someone else’s name on the faceplate and leaving the marketing to someone else.

AMD

Advanced Micro Devices is still on deathwatch. Yes, AMD reported a quarterly profit of $48 million in September thanks to a gift from the game console gods (and IBM Power’s fall from grace). But that was hardly enough to jolt the chip company out of what has been a really bad year—and AMD is trying to manage expectations for the results for the final quarter of 2013.

AMD is caught between a rock and a hard place—or more specifically, between Intel and ARM. On the bright side, it probably has nothing to fear from ARM in the low-cost Windows device market considering how horrifically Windows RT fared in 2013. AMD actually gained in market share in the x86 space thanks to the Xbox One and PS4—both of which replace non-x86 consoles. And AMD still holds a substantial chunk of the graphics processor market—and all those potential sales in Bitcoin miners to go with it.

But in the PC space, AMD’s market share declined to a mere 15.8 percent (of what is a much smaller pie than it used to be). And in a future driven increasingly by mobile and low-power devices, AMD hasn’t been able to make any gains with the two low-power chips it introduced in 2013—Kabini and Temash. Those chips were supposed to finally give AMD a competitive footing with Intel on low-cost PCs and tablets, but they ended up being middling in comparison.

All that adds up to 2014 being a very important year for AMD—one that could end with AMD essentially being a graphics and specialty processor chip designer. The company has already divorced itself from its own fabrication capability and slashed its workforce, so there isn’t much more to cut but bone if the markets demand better margins.

Read the entire article here.

Image: FarmVille logo. Courtesy of Wikipedia.

The Diminishing Value of the Ever More Expensive College Degree

Paradoxically, the U.S. college degree is becoming less valuable while its cost continues an inexorable rise. With academic standards now generally lower than ever and grade inflation pervasive, most recent college graduates are in a bind — limited employment prospects and a huge debt burden. Something must give soon, and it’s likely to be the colleges.

From WSJ:

The American political class has long held that higher education is vital to individual and national success. The Obama administration has dubbed college “the ticket to the middle class,” and political leaders from Education Secretary Arne Duncan to Federal Reserve Chairman Ben Bernanke have hailed higher education as the best way to improve economic opportunity. Parents and high-school guidance counselors tend to agree.

Yet despite such exhortations, total college enrollment has fallen by 1.5% since 2012. What’s causing the decline? While changing demographics—specifically, a birth dearth in the mid-1990s—accounts for some of the shift, robust foreign enrollment offsets that lack. The answer is simple: The benefits of a degree are declining while costs rise.

A key measure of the benefits of a degree is the college graduate’s earning potential—and on this score, their advantage over high-school graduates is deteriorating. Since 2006, the gap between what the median college graduate and the median high-school graduate earned has narrowed by $1,387 for men over 25 working full time, a 5% fall. Women in the same category have fared worse, losing 7% of their income advantage ($1,496).

A college degree’s declining value is even more pronounced for younger Americans. According to data collected by the College Board, for those in the 25-34 age range the differential between college graduate and high school graduate earnings fell 11% for men, to $18,303 from $20,623. The decline for women was an extraordinary 19.7%, to $14,868 from $18,525.
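As a quick arithmetic check on those College Board figures (the editor’s calculation, not the Journal’s):

```latex
\frac{20{,}623 - 18{,}303}{20{,}623} \approx 11.2\%,
\qquad
\frac{18{,}525 - 14{,}868}{18{,}525} \approx 19.7\%.
```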

Meanwhile, the cost of college has increased 16.5% in 2012 dollars since 2006, according to the Bureau of Labor Statistics’ higher education tuition-fee index. Aggressive tuition discounting from universities has mitigated the hike, but not enough to offset the clear inflation-adjusted increase. Even worse, the lousy economy has caused household income levels to fall, limiting a family’s ability to finance a degree.

This phenomenon leads to underemployment. A study I conducted with my colleague Jonathan Robe, the 2013 Center for College Affordability and Productivity report, found explosive growth in the number of college graduates taking relatively unskilled jobs. We now have more college graduates working in retail than soldiers in the U.S. Army, and more janitors with bachelor’s degrees than chemists. In 1970, less than 1% of taxi drivers had college degrees. Four decades later, more than 15% do.

This is only partly the result of the Great Recession and botched public policies that have failed to produce employment growth. It’s also the result of an academic arms race in which universities have spent exorbitant sums on luxury dormitories, climbing walls, athletic subsidies and bureaucratic bloat. More significantly, it’s the result of sending more high-school graduates to college than professional fields can accommodate.

In 1970, when 11% of adult Americans had bachelor’s degrees or more, degree holders were viewed as the nation’s best and brightest. Today, with over 30% holding degrees, a significant portion of college graduates are similar to the average American—not demonstrably smarter or more disciplined. Declining academic standards and grade inflation add to employers’ perceptions that college degrees say little about job readiness.

There are exceptions. Applications to top universities are booming, as employers recognize these graduates will become our society’s future innovators and leaders. The earnings differential between bachelor’s and master’s degree holders has grown in recent years, as those holding graduate degrees are perceived to be sharper and more responsible.

But unless colleges plan to offer master’s degrees in janitorial studies, they will have to change. They currently have little incentive to do so, as they are often strangled by tenure rules, spoiled by subsidies from government and rich alumni, and more interested in trivial things—second-rate research by third-rate scholars; ball-throwing contests—than imparting knowledge. Yet dire financial straits from falling demand for their product will force two types of changes within the next five years.

Image: college graduates. Courtesy of Business Insider.

An Ode to the Sinclair ZX81

What do the PDP-11, Commodore PET, Apple II and Sinclair’s ZX81 have in common? And, more importantly for anyone under the age of 35, what on earth are they? Well, these are, respectively, the first time-share mainframe, personal computer, Apple machine, and home computer that theDiagonal’s friendly editor programmed, back in the pioneering days of computation.

The article below on technological nostalgia pushed the recall button, bringing back vivid memories of dot-matrix printers, FORTRAN, large floppy diskettes (5¼ inch), reel-to-reel tape storage, and the 1 KB of programmable memory on the ZX81. In fact, despite the tremendous and now laughable limitations of the ZX81 — one had to save and load programs via a tape cassette — programming the device at home was a true revelation.

Some would go so far as to say that the first computer is very much like the first kiss or the first date. Well, not so. But fun nonetheless, and responsible for much in the way of future career paths.

From ars technica:

Being a bunch of technology journalists who make our living on the Web, we at Ars all have a fairly intimate relationship with computers dating back to our childhood—even if for some of us, that childhood is a bit more distant than others. And our technological careers and interests are at least partially shaped by the devices we started with.

So when Cyborgology’s David Banks recently offered up an autobiography of himself based on the computing devices he grew up with, it started a conversation among us about our first computing experiences. And being the most (chronologically) senior of Ars’ senior editors, the lot fell to me to pull these recollections together—since, in theory, I have the longest view of the bunch.

Considering the first computer I used was a Digital Equipment Corp. PDP-10, that theory is probably correct.

The DEC PDP-10 and DECWriter II Terminal

In 1979, I was a high school sophomore at Longwood High School in Middle Island, New York, just a short distance from the Department of Energy’s Brookhaven National Labs. And it was at Longwood that I got the first opportunity to learn how to code, thanks to a time-share connection we had to a DEC PDP-10 at the State University of New York at Stony Brook.

The computer lab at Longwood, which was run by the math department and overseen by my teacher Mr. Dennis Schultz, connected over a leased line to SUNY. It had, if I recall correctly, six LA36 DECWriter II terminals connected back to the mainframe—essentially dot-matrix printers with keyboards on them. Turn one on while the mainframe was down, and it would print over and over:

PDP-10 NOT AVAILABLE

Time at the terminals was a precious resource, so we were encouraged to write out all of our code by hand first on graph paper and then take a stack of cards over to the keypunch. This process did wonders for my handwriting. I spent an inordinate amount of time just writing BASIC and FORTRAN code in block letters on graph-paper notebooks.

One of my first fully original programs was an aerial combat program that used three-dimensional arrays to track the movement of the player’s and the programmed opponent’s airplanes as each maneuvered to get the other in its sights. Since the program output to pin-fed paper, that could be a tedious process.

At a certain point, Mr. Schultz, who had been more than tolerant of my enthusiasm, had to crack down—my code was using up more than half the school’s allotted system storage. I can’t imagine how much worse it would have been if we had video terminals.

Actually, I can imagine, because in my senior year I was introduced to the Apple II, video, and sound. The vastness of 360 kilobytes of storage and the ability to code at the keyboard were such a huge luxury after the spartan world of punch cards that I couldn’t contain myself. I soon coded a student parking pass database for my school—while also coding a Dungeons & Dragons character tracking system, complete with combat resolution and hit point tracking.

—Sean Gallagher

A printer terminal and an acoustic coupler

I never saw the computer that gave me my first computing experience, and I have little idea what it actually was. In fact, if I ever knew where it was located, I’ve since forgotten. But I do distinctly recall the gateway to it: a locked door to the left of the teacher’s desk in my high school biology lab. Fortunately, the guardian—commonly known as Mr. Dobrow—was excited about introducing some of his students to computers, and he let a number of us spend our lunch hours experimenting with the system.

And what a system it was. Behind the physical door was another gateway, this one electronic. Since the computer was located in another town, you had to dial in by modem. The modems of the day were something different entirely from what you may recall from AOL’s dialup heyday. Rather than plugging straight in to your phone line, you dialed in manually—on a rotary phone, no less—then dropped the speaker and mic carefully into two rubber receptacles spaced to accept the standard-issue hardware of the day. (And it was standard issue; AT&T was still a monopoly at the time.)

That modem was hooked into a sort of combination of line printer and keyboard. When you were entering text, the setup acted just like a typewriter. But as soon as you hit the return key, it transmitted, and the mysterious machine at the other end responded, sending characters back that were dutifully printed out by the same machine. This meant that an infinite loop would unleash a spray of paper, and it had to be terminated by hanging up the phone.

It took us a while to get to infinite loops, though. Mr. Dobrow started us off on small simulations of things like stock markets and malaria control. Eventually, we found a way to list all the programs available and discovered a Star Trek game. Photon torpedoes were deadly, but the phasers never seemed to work, so before too long one guy had the bright idea of trying to hack the game (although that wasn’t the term that we used). We were off.

—John Timmer

Read the entire article here.

Image: Sinclair ZX81. Courtesy of Wikipedia.

A Window that Vacuums Sound

We are all familiar with double-glazed windows that reduce the transmission of sound by way of a sealed gap — sometimes a partial vacuum — between two or more panes of glass. However, open a double-glazed window to let in some fresh air and the benefit of the sound reduction is gone. So, what if you could invent a window that lets in air but cuts out the noise pollution? Sounds impossible. But not to materials scientists Sang-Hoon Kim and Seong-Hyun Lee from South Korea.

From Technology Review:

Noise pollution is one of the bugbears of modern life. The sound of machinery, engines, neighbours and the like can seriously affect our quality of life and that of the other creatures that share this planet.

But insulating against sound is a difficult and expensive business. Soundproofing generally works on the principle of transferring sound from the air into another medium which absorbs and attenuates it.

So the notion of creating a barrier that absorbs sound while allowing the free passage of air seems, at first thought, entirely impossible. But that’s exactly what Sang-Hoon Kim at the Mokpo National Maritime University in South Korea and Seong-Hyun Lee at the Korea Institute of Machinery and Materials have achieved.

These guys have come up with a way to separate sound from the air in which it travels and then to attenuate it. This has allowed them to build a window that allows air to flow but not sound.

The design is relatively simple and relies on two exotic acoustic phenomena. The first is to create a material with a negative bulk modulus.

A material’s bulk modulus is essentially its resistance to compression and this is an important factor in determining the speed at which sound moves through it. A material with a negative bulk modulus exponentially attenuates any sound passing through it.
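A one-line gloss from standard acoustics (the editor’s addition, not part of the Technology Review piece) makes that exponential attenuation plausible: the speed of sound c depends on the bulk modulus K and the density \rho, and a negative effective K makes the wavenumber imaginary, so a travelling wave becomes an evanescent, decaying one.

```latex
c = \sqrt{K/\rho},
\qquad
K < 0 \;\Rightarrow\; k = \frac{\omega}{c} \ \text{is imaginary}
\;\Rightarrow\; p(x) \propto e^{-|k|\,x} .
```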

However, it’s hard to imagine a solid material having a negative bulk modulus, which is where a bit of clever design comes in handy.

Kim and Lee’s idea is to design a sound resonance chamber in which the resonant forces oppose any compression. With careful design, this leads to a negative bulk modulus for a certain range of frequencies.

Their resonance chamber is actually very simple—it consists of two parallel plates of transparent acrylic plastic about 150 millimetres square and separated by 40 millimetres, rather like a section of double-glazing about the size of a paperback book.

This chamber is designed to ensure that any sound resonating inside it acts against the way the same sound compresses the chamber. When this happens the bulk modulus of the entire chamber is negative.

An important factor in this is how efficiently the sound can get into the chamber, and here Kim and Lee have another trick. To maximise this efficiency, they drill a 50 millimetre hole through each piece of acrylic. This acts as a diffraction element, causing any sound that hits the chamber to diffract strongly into it.

The result is a double-glazed window with a negative bulk modulus that strongly attenuates the sound hitting it.

Kim and Lee use their double-glazing unit as a building block to create larger windows. In tests with a 3x4x3 “wall” of building blocks, they say their window reduces sound levels by 20-35 decibels over a sound range of 700 Hz to 2,200 Hz. That’s a significant reduction.
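For a sense of scale (a routine decibel conversion, not a figure from the article), a reduction of d decibels corresponds to a factor of 10^{d/10} in sound intensity:

```latex
20\ \text{dB} \;\rightarrow\; 10^{2} = 100\times \ \text{less intense},
\qquad
35\ \text{dB} \;\rightarrow\; 10^{3.5} \approx 3200\times \ \text{less intense}.
```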

And by using extra building blocks with smaller holes, they can extend this range to cover lower frequencies.

What’s handy about these windows is that holes through them also allow the free flow of air, giving ample ventilation as well.

Read the entire article here.

God Is a Thermodynamicist

Physicists and cosmologists are constantly postulating and testing new ideas to explain the universe and everything within it. Over the last hundred years or so, two such ideas have grown to explain much about our cosmos, and do so very successfully — quantum mechanics, which describes the very small, and relativity, which describes the very large. However, these two views do not reconcile, leaving theoreticians and researchers looking for a more fundamental theory of everything. One possible idea banishes the notions of time and gravity — treating them both as emergent properties of a deeper reality.

From New Scientist:

As revolutions go, its origins were haphazard. It was, according to the ringleader Max Planck, an “act of desperation”. In 1900, he proposed the idea that energy comes in discrete chunks, or quanta, simply because the smooth delineations of classical physics could not explain the spectrum of energy re-radiated by an absorbing body.

Yet rarely was a revolution so absolute. Within a decade or so, the cast-iron laws that had underpinned physics since Newton’s day were swept away. Classical certainty ceded its stewardship of reality to the probabilistic rule of quantum mechanics, even as the parallel revolution of Einstein’s relativity displaced our cherished, absolute notions of space and time. This was complete regime change.

Except for one thing. A single relict of the old order remained, one that neither Planck nor Einstein nor any of their contemporaries had the will or means to remove. The British astrophysicist Arthur Eddington summed up the situation in 1915. “If your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation,” he wrote.

In this essay, I will explore the fascinating question of why, since their origins in the early 19th century, the laws of thermodynamics have proved so formidably robust. The journey traces the deep connections that were discovered in the 20th century between thermodynamics and information theory – connections that allow us to trace intimate links between thermodynamics and not only quantum theory but also, more speculatively, relativity. Ultimately, I will argue, those links show us how thermodynamics in the 21st century can guide us towards a theory that will supersede them both.

In its origins, thermodynamics is a theory about heat: how it flows and what it can be made to do (see diagram). The French engineer Sadi Carnot formulated the second law in 1824 to characterise the mundane fact that the steam engines then powering the industrial revolution could never be perfectly efficient. Some of the heat you pumped into them always flowed into the cooler environment, rather than staying in the engine to do useful work. That is an expression of a more general rule: unless you do something to stop it, heat will naturally flow from hotter places to cooler places to even up any temperature differences it finds. The same principle explains why keeping the refrigerator in your kitchen cold means pumping energy into it; only that will keep warmth from the surroundings at bay.

A few decades after Carnot, the German physicist Rudolph Clausius explained such phenomena in terms of a quantity characterising disorder that he called entropy. In this picture, the universe works on the back of processes that increase entropy – for example dissipating heat from places where it is concentrated, and therefore more ordered, to cooler areas, where it is not.

That predicts a grim fate for the universe itself. Once all heat is maximally dissipated, no useful process can happen in it any more: it dies a “heat death”. A perplexing question is raised at the other end of cosmic history, too. If nature always favours states of high entropy, how and why did the universe start in a state that seems to have been of comparatively low entropy? At present we have no answer, and later I will mention an intriguing alternative view.

Perhaps because of such undesirable consequences, the legitimacy of the second law was for a long time questioned. The charge was formulated with the most striking clarity by the British physicist James Clerk Maxwell in 1867. He was satisfied that inanimate matter presented no difficulty for the second law. In an isolated system, heat always passes from the hotter to the cooler, and a neat clump of dye molecules readily dissolves in water and disperses randomly, never the other way round. Disorder as embodied by entropy does always increase.

Maxwell’s problem was with life. Living things have “intentionality”: they deliberately do things to other things to make life easier for themselves. Conceivably, they might try to reduce the entropy of their surroundings and thereby violate the second law.

Information is power

Such a possibility is highly disturbing to physicists. Either something is a universal law or it is merely a cover for something deeper. Yet it was only in the late 1970s that Maxwell’s entropy-fiddling “demon” was laid to rest. Its slayer was the US physicist Charles Bennett, who built on work by his colleague at IBM, Rolf Landauer, using the theory of information developed a few decades earlier by Claude Shannon. An intelligent being can certainly rearrange things to lower the entropy of its environment. But to do this, it must first fill up its memory, gaining information as to how things are arranged in the first place.

This acquired information must be encoded somewhere, presumably in the demon’s memory. When this memory is finally full, or the being dies or otherwise expires, it must be reset. Dumping all this stored, ordered information back into the environment increases entropy – and this entropy increase, Bennett showed, will ultimately always be at least as large as the entropy reduction the demon originally achieved. Thus the status of the second law was assured, albeit anchored in a mantra of Landauer’s that would have been unintelligible to the 19th-century progenitors of thermodynamics: that “information is physical”.
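Landauer’s principle has a quantitative form (standard physics, added here for context rather than taken from the essay): erasing memory carries an unavoidable minimum entropy and heat cost, which is exactly the bill the demon can never dodge.

```latex
\Delta S_{\text{env}} \;\ge\; k_{B}\ln 2 \ \ \text{per bit erased},
\qquad
Q \;\ge\; k_{B}\,T \ln 2 \;\approx\; 3\times 10^{-21}\ \text{J per bit at room temperature}.
```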

But how does this explain that thermodynamics survived the quantum revolution? Classical objects behave very differently to quantum ones, so the same is presumably true of classical and quantum information. After all, quantum computers are notoriously more powerful than classical ones (or would be if realised on a large scale).

The reason is subtle, and it lies in a connection between entropy and probability contained in perhaps the most profound and beautiful formula in all of science. Engraved on the tomb of the Austrian physicist Ludwig Boltzmann in Vienna’s central cemetery, it reads simply S = k log W. Here S is entropy – the macroscopic, measurable entropy of a gas, for example – while k is a constant of nature that today bears Boltzmann’s name. Log W is the mathematical logarithm of a microscopic, probabilistic quantity W – in a gas, this would be the number of ways the positions and velocities of its many individual atoms can be arranged.

On a philosophical level, Boltzmann’s formula embodies the spirit of reductionism: the idea that we can, at least in principle, reduce our outward knowledge of a system’s activities to basic, microscopic physical laws. On a practical, physical level, it tells us that all we need to understand disorder and its increase is probabilities. Tot up the number of configurations the atoms of a system can be in and work out their probabilities, and what emerges is nothing other than the entropy that determines its thermodynamical behaviour. The equation asks no further questions about the nature of the underlying laws; we need not care if the dynamical processes that create the probabilities are classical or quantum in origin.
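A textbook illustration of the formula (the editor’s example, not the essay’s): for N particles that can each sit in one of two equally likely states, the number of configurations is W = 2^N, so

```latex
S = k \log W = k \log 2^{N} = N\,k \log 2 .
```

Entropy adds while the number of configurations multiplies, which is precisely what the logarithm is for; and nothing in the calculation cares whether the dynamics generating those configurations are classical or quantum.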

There is an important additional point to be made here. Probabilities are fundamentally different things in classical and quantum physics. In classical physics they are “subjective” quantities that constantly change as our state of knowledge changes. The probability that a coin toss will result in heads or tails, for instance, jumps from ½ to 1 when we observe the outcome. If there were a being who knew all the positions and momenta of all the particles in the universe – known as a “Laplace demon”, after the French mathematician Pierre-Simon Laplace, who first countenanced the possibility – it would be able to determine the course of all subsequent events in a classical universe, and would have no need for probabilities to describe them.

In quantum physics, however, probabilities arise from a genuine uncertainty about how the world works. States of physical systems in quantum theory are represented in what the quantum pioneer Erwin Schrödinger called catalogues of information, but they are catalogues in which adding information on one page blurs or scrubs it out on another. Knowing the position of a particle more precisely means knowing less well how it is moving, for example. Quantum probabilities are “objective”, in the sense that they cannot be entirely removed by gaining more information.

That casts thermodynamics, as originally and classically formulated, in an intriguing light. There, the second law is little more than impotence written down in the form of an equation. It has no deep physical origin itself, but is an empirical bolt-on to express the otherwise unaccountable fact that we cannot know, predict or bring about everything that might happen, as classical dynamical laws suggest we can. But this changes as soon as you bring quantum physics into the picture, with its attendant notion that uncertainty is seemingly hardwired into the fabric of reality. Rooted in probabilities, entropy and thermodynamics acquire a new, more fundamental physical anchor.

It is worth pointing out, too, that this deep-rooted connection seems to be much more general. Recently, together with my colleagues Markus Müller of the Perimeter Institute for Theoretical Physics in Waterloo, Ontario, Canada, and Oscar Dahlsten at the Centre for Quantum Technologies in Singapore, I have looked at what happens to thermodynamical relations in a generalised class of probabilistic theories that embrace quantum theory and much more besides. There too, the crucial relationship between information and disorder, as quantified by entropy, survives (arxiv.org/abs/1107.6029).

One theory to rule them all

As for gravity – the only one of nature’s four fundamental forces not covered by quantum theory – a more speculative body of research suggests it might be little more than entropy in disguise (see “Falling into disorder”). If so, that would also bring Einstein’s general theory of relativity, with which we currently describe gravity, firmly within the purview of thermodynamics.

Take all this together, and we begin to have a hint of what makes thermodynamics so successful. The principles of thermodynamics are at their roots all to do with information theory. Information theory is simply an embodiment of how we interact with the universe – among other things, to construct theories to further our understanding of it. Thermodynamics is, in Einstein’s term, a “meta-theory”: one constructed from principles over and above the structure of any dynamical laws we devise to describe reality’s workings. In that sense we can argue that it is more fundamental than either quantum physics or general relativity.

If we can accept this and, like Eddington and his ilk, put all our trust in the laws of thermodynamics, I believe it may even afford us a glimpse beyond the current physical order. It seems unlikely that quantum physics and relativity represent the last revolutions in physics. New evidence could at any time foment their overthrow. Thermodynamics might help us discern what any usurping theory would look like.

For example, earlier this year, two of my colleagues in Singapore, Esther Hänggi and Stephanie Wehner, showed that a violation of the quantum uncertainty principle – that idea that you can never fully get rid of probabilities in a quantum context – would imply a violation of the second law of thermodynamics. Beating the uncertainty limit means extracting extra information about the system, which requires the system to do more work than thermodynamics allows it to do in the relevant state of disorder. So if thermodynamics is any guide, whatever any post-quantum world might look like, we are stuck with a degree of uncertainty (arxiv.org/abs/1205.6894).

My colleague at the University of Oxford, the physicist David Deutsch, thinks we should take things much further. Not only should any future physics conform to thermodynamics, but the whole of physics should be constructed in its image. The idea is to generalise the logic of the second law as it was stringently formulated by the mathematician Constantin Carathéodory in 1909: that in the vicinity of any state of a physical system, there are other states that cannot physically be reached if we forbid any exchange of heat with the environment.

James Joule’s 19th century experiments with beer can be used to illustrate this idea. The English brewer, whose name lives on in the standard unit of energy, sealed beer in a thermally isolated tub containing a paddle wheel that was connected to weights falling under gravity outside. The wheel’s rotation warmed the beer, increasing the disorder of its molecules and therefore its entropy. But hard as we might try, we simply cannot use Joule’s set-up to decrease the beer’s temperature, even by a fraction of a millikelvin. Cooler beer is, in this instance, a state regrettably beyond the reach of physics.
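A back-of-envelope version of Joule’s bookkeeping (the numbers are invented, purely for illustration) makes the one-way nature of the experiment concrete: a weight of mass m falling a height h does work mgh, all of which ends up as heat in a tub holding mass M of liquid with specific heat capacity c.

```latex
\Delta T \;=\; \frac{mgh}{Mc}
\;=\; \frac{10\ \text{kg}\times 9.8\ \text{m/s}^{2}\times 2\ \text{m}}
{1\ \text{kg}\times 4200\ \text{J/(kg·K)}}
\;\approx\; 0.05\ \text{K}.
```

Because mgh can only be positive, the paddle wheel can only ever warm the beer; the cooler state is, as the essay says, out of reach so long as no heat is allowed to escape.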

God, the thermodynamicist

The question is whether we can express the whole of physics simply by enumerating possible and impossible processes in a given situation. This is very different from how physics is usually phrased, in both the classical and quantum regimes, in terms of states of systems and equations that describe how those states change in time. The blind alleys down which the standard approach can lead are easiest to understand in classical physics, where the dynamical equations we derive allow a whole host of processes that patently do not occur – the ones we have to conjure up the laws of thermodynamics expressly to forbid, such as dye molecules reclumping spontaneously in water.

By reversing the logic, our observations of the natural world can again take the lead in deriving our theories. We observe the prohibitions that nature puts in place, be it on decreasing entropy, getting energy from nothing, travelling faster than light or whatever. The ultimately “correct” theory of physics – the logically tightest – is the one from which the smallest deviation gives us something that breaks those taboos.

There are other advantages in recasting physics in such terms. Time is a perennially problematic concept in physical theories. In quantum theory, for example, it enters as an extraneous parameter of unclear origin that cannot itself be quantised. In thermodynamics, meanwhile, the passage of time is entropy increase by any other name. A process such as dissolved dye molecules forming themselves into a clump offends our sensibilities because it appears to amount to running time backwards as much as anything else, although the real objection is that it decreases entropy.

Apply this logic more generally, and time ceases to exist as an independent, fundamental entity, but one whose flow is determined purely in terms of allowed and disallowed processes. With it go problems such as that I alluded to earlier, of why the universe started in a state of low entropy. If states and their dynamical evolution over time cease to be the question, then anything that does not break any transformational rules becomes a valid answer.

Such an approach would probably please Einstein, who once said: “What really interests me is whether God had any choice in the creation of the world.” A thermodynamically inspired formulation of physics might not answer that question directly, but leaves God with no choice but to be a thermodynamicist. That would be a singular accolade for those 19th-century masters of steam: that they stumbled upon the essence of the universe, entirely by accident. The triumph of thermodynamics would then be a revolution by stealth, 200 years in the making.

Read the entire article here.

Under the Covers at Uber


A mere four years ago Uber was being used mostly by Silicon Valley engineers to reserve local limo rides. Now, the Uber app is in the hands of millions of people and being used to book car transportation across sixty cities on six continents. Google recently invested $258 million in the company, which gives Uber a value of around $3.5 billion. Those who have used the service — drivers and passengers alike — swear by it; the service is convenient and the app is simple and engaging. But that doesn’t seem to justify the enormous valuation. So, what’s going on?

From Wired:

When Uber cofounder and CEO Travis Kalanick was in sixth grade, he learned to code on a Commodore 64. His favorite things to program were videogames. But in the mid-’80s, getting the machine to do what he wanted still felt a lot like manual labor. “Back then you would have to do the graphics pixel by pixel,” Kalanick says. “But it was cool because you were like, oh my God, it’s moving across the screen! My monster is moving across the screen!” These days, Kalanick, 37, has lost none of his fascination with watching pixels on the move.

In Uber’s San Francisco headquarters, a software tool called God View shows all the vehicles on the Uber system moving at once. On a laptop web browser, tiny cars on a map show every Uber driver currently on the city’s streets. Tiny eyeballs on the same map show the location of every customer currently looking at the Uber app on their smartphone. In a way, the company anointed by Silicon Valley’s elite as the best hope for transforming global transportation couldn’t have a simpler task: It just has to bring those cars and those eyeballs together — the faster and cheaper, the better.
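At its simplest, “bringing those cars and those eyeballs together” is a nearest-match problem. The sketch below is the editor’s toy illustration, not Uber’s dispatch code (every function and field name is invented): it pairs a rider with the closest idle driver by great-circle distance.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_rider(rider, idle_drivers):
    """Return the idle driver closest to the rider (toy dispatcher, not Uber's)."""
    return min(
        idle_drivers,
        key=lambda d: haversine_km(rider["lat"], rider["lon"], d["lat"], d["lon"]),
    )

# Example: one rider (an "eyeball") and two idle cars in San Francisco.
rider = {"lat": 37.7749, "lon": -122.4194}
drivers = [
    {"id": "car_1", "lat": 37.78, "lon": -122.41},
    {"id": "car_2", "lat": 37.76, "lon": -122.45},
]
print(match_rider(rider, drivers)["id"])  # -> car_1, the nearer of the two
```

The real problem is of course harder: matching many riders and drivers at once, anticipating where demand will appear next, and doing it continuously across an entire city.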

“Uber should feel magical to the customer,” Kalanick says one morning in November. “They just push the button and the car comes. But there’s a lot going on under the hood to make that happen.”

A little less than four years ago, when Uber was barely more than a private luxury car service for Silicon Valley’s elite techies, Kalanick sat watching the cars crisscrossing San Francisco on God View and had a Matrix-y moment when he “started seeing the math.” He was going to make the monster move — not just across the screen but across cities around the globe. Since then, Uber has expanded to some 60 cities on six continents and grown to at least 400 employees. Millions of people have used Uber to get a ride, and revenue has increased at a rate of nearly 20 percent every month over the past year.

The company’s speedy ascent has taken place in parallel with a surge of interest in the so-called sharing economy — using technology to connect consumers with goods and services that would otherwise go unused. Kalanick had the vision to see potential profit in the empty seats of limos and taxis sitting idle as drivers wait for customers to call.

But Kalanick doesn’t put on the airs of a visionary. In business he’s a brawler. Reaching Uber’s goals has meant digging in against the established bureaucracy in many cities, where giving rides for money is heavily regulated. Uber has won enough of those fights to threaten the market share of the entrenched players. It not only offers a more efficient way to hail a ride but gives drivers a whole new way to see where demand is bubbling up. In the process, Uber seems capable of opening up sections of cities that taxis and car services never bothered with before.

In an Uber-fied future, fewer people own cars, but everybody has access to them.

In San Francisco, Uber has become its own noun — you “get an Uber.” But to make it a verb — to get to the point where everyone Ubers the same way they Google — the company must outperform on transportation the same way Google does on search.

No less than Google itself believes Uber has this potential. In a massive funding round in August led by the search giant’s venture capital arm, Uber received $258 million. The investment reportedly valued Uber at around $3.5 billion and pushed the company to the forefront of speculation about the next big tech IPO — and Kalanick as the next great tech leader.

The deal set Silicon Valley buzzing about what else Uber could become. A delivery service powered by Google’s self-driving cars? The new on-the-ground army for ferrying all things Amazon? Jeff Bezos also is an Uber investor, and Kalanick cites him as an entrepreneurial inspiration. “Amazon was just books and then some CDs,” Kalanick says. “And then they’re like, you know what, let’s do frickin’ ladders!” Then came the Kindle and Amazon Web Services — examples, Kalanick says, of how an entrepreneur’s “creative pragmatism” can defy expectations. He clearly enjoys daring the world to think of Uber as merely another way to get a ride.

“We feel like we’re still realizing what the potential is,” he says. “We don’t know yet where that stops.”

From the back of an Uber-summoned Mercedes GL450 SUV, Kalanick banters with the driver about which make and model will replace the discontinued Lincoln Town Car as the default limo of choice.

Mercedes S-Class? Too expensive, Kalanick says. Cadillac XTS? Too small.

So what is it?

“OK, I’m glad you asked,” Kalanick says. “This is going to blow you away, dude. Are you ready? Have you seen the 2013 Ford Explorer?” Spacious, like a Lexus crossover, but way cheaper.

As Uber becomes a dominant presence in urban transportation, it’s easy to imagine the company playing a role in making this prophecy self-fulfilling. It’s just one more sign of how far Uber has come since Kalanick helped create the company in 2009. In the beginning, it was just a way for him and his cofounder, StumbleUpon creator Garrett Camp, and their friends to get around in style.

They could certainly afford it. At age 21, Kalanick, born and raised in Los Angeles, had started a Napster-like peer-to-peer file-sharing search engine called Scour that got him sued for a quarter-trillion dollars by major media companies. Scour filed for bankruptcy, but Kalanick cofounded Red Swoosh to serve digital media over the Internet for the same companies that had sued him. Akamai bought the company in 2007 in a stock deal worth $19 million.

By the time he reached his thirties, Kalanick was a seasoned veteran in the startup trenches. But part of him wondered if he still had the drive to build another company. His breakthrough came when he was watching, of all things, a Woody Allen movie. The film was Vicky Cristina Barcelona, which Allen made in 2008, when he was in his seventies. “I’m like, that dude is old! And he is still bringing it! He’s still making really beautiful art. And I’m like, all right, I’ve got a chance, man. I can do it too.”

Kalanick charged into Uber and quickly collided with the muscular resistance of the taxi and limo industry. It wasn’t long before San Francisco’s transportation agency sent the company a cease-and-desist letter, calling Uber an unlicensed taxi service. Kalanick and Uber neither ceased nor desisted, arguing vehemently that the company merely made the software that connected drivers and riders. The company kept offering rides and building its stature among tech types—a constituency city politicians have been loath to alienate—as the cool way to get around.

Uber has since faced the wrath of government and industry in other cities, notably New York, Chicago, Boston, and Washington, DC.

One councilmember opposed to Uber in the nation’s capital was self-described friend of the taxi industry Marion Barry (yes, that Marion Barry). Kalanick, in DC to lobby on Uber’s behalf, told The Washington Post he had an offer for the former mayor: “I will personally chauffeur him myself in his silver Jaguar to work every day of the week, if he can just make this happen.” Though that ride never happened, the council ultimately passed a legal framework that Uber called “an innovative model for city transportation legislation across the country.”

Though Kalanick clearly relishes a fight, he lights up more when talking about Uber as an engineering problem. To fulfill its promise—a ride within five minutes of the tap of a smartphone button—Uber must constantly optimize the algorithms that govern, among other things, how many of its cars are on the road, where they go, and how much a ride costs. While Uber offers standard local rates for its various options, times of peak demand send prices up, which Uber calls surge pricing. Some critics call it price-gouging, but Kalanick says the economics are far less insidious. To meet increased demand, drivers need extra incentive to get out on the road. Since they aren’t employees, the marketplace has to motivate them. “Most things are dynamically priced,” Kalanick points out, from airline tickets to happy hour cocktails.
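
Uber has never published its pricing formula, but the idea of a demand-driven multiplier is easy to illustrate. The sketch below is a hypothetical, simplified model in Python (the function name, threshold, sensitivity and cap are all invented for illustration), not Uber’s actual algorithm: the fare multiplier rises as the ratio of open ride requests to available drivers grows, which is the extra incentive for drivers that Kalanick describes.

def surge_multiplier(open_requests, available_drivers,
                     threshold=1.2, sensitivity=0.5, cap=3.0):
    """Hypothetical surge multiplier: scale the fare with the
    demand-to-supply ratio once it passes a threshold."""
    if available_drivers == 0:
        return cap  # no supply at all: charge the maximum allowed
    ratio = open_requests / available_drivers
    if ratio <= threshold:
        return 1.0  # normal conditions: standard local rates apply
    # The multiplier grows with excess demand but never exceeds the cap.
    return min(cap, 1.0 + sensitivity * (ratio - threshold))

# Example: 90 open requests and 30 available drivers give a ratio of 3.0,
# so the fare works out to 1.9 times the standard local rate.
print(surge_multiplier(90, 30))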

Kalanick employs a data-science team of PhDs from fields like nuclear physics, astrophysics, and computational biology to grapple with the number of variables involved in keeping Uber reliable. They stay busy perfecting algorithms that are dependable and flexible enough to be ported to hundreds of cities worldwide. When we met, Uber had just gone live in Bogotá, Colombia, as well as Shanghai, Dubai, and Bangalore.

And it’s no longer just black cars and yellow cabs. A newer option, UberX, offers lower-priced rides from drivers piloting their personal vehicles. According to Uber, only certain late-model cars are allowed, and drivers undergo the same background screening as others in the service. In an Uber-fied version of the future, far fewer people may own cars but everybody would have access to them. “You know, I hadn’t driven for a year, and then I drove over the weekend,” Kalanick says. “I had to jump-start my car to get going. It was a little awkward. So I think that’s a sign.”

Back at Uber headquarters, burly drivers crowd the lobby while nearby, coders sit elbow to elbow. Like other San Francisco startups on the cusp of something bigger, Uber is preparing to move to a larger space. Its new digs will be in the same building as Square, the mobile payments company led by Twitter mastermind Jack Dorsey. Twitter’s offices are across the street. The symbolism is hard to miss: Uber is joining the coterie of companies that define San Francisco’s latest tech boom.

Still, part of that image depends on Uber’s outsize potential to expand what it does. The logistical numbers it crunches to make it easier for people to get around would seem a natural fit for a transition into a delivery service. Uber coyly fuels that perception with publicity stunts like ferrying ice cream and barbecue to customers through its app. It’s easy to imagine such promotions quietly doubling as proofs of concept. News of Google’s massive investment prompted visions of a push-button delivery service powered by Google’s self-driving cars.

Kalanick acknowledges that the most recent round of investment is intended to fund Uber’s growth, but that’s as far as he’ll go. “In a lot of ways, it’s not the money that allows you to do new things. It’s the growth and the ability to find things that people want and to use your creativity to target those,” he says. “There are a whole hell of a lot of other things that we can do and intend on doing.”

But the calculus of delivery may not even be the hardest part. If Uber were to expand into delivery, its competition—for now other ride-sharing startups such as Lyft, Sidecar, and Hailo—would include Amazon, eBay, and Walmart too.

One way to skirt rivalry with such giants is to offer itself as the back-end technology that can power same-day online retail. In early fall, Google launched its Shopping Express service in San Francisco. The program lets customers shop online at local stores through a Google-powered app; Google sends a courier with their deliveries the same day.

David Krane, the Google Ventures partner who led the investment deal, says there’s nothing happening between Uber and Shopping Express. He also says self-driving delivery vehicles are nowhere near ready to be looked at seriously as part of Uber. “Those meetings will happen when the technology is ready for such discussion,” he says. “That is many moons away.”

Read the entire article here.

Image courtesy of Uber.

SkyCycling

London-skycycle

Famed architect Norman Foster has a brilliant and restless mind. So, he’s not content to stop imagining, even with some of the world’s most innovative and recognizable architectural designs to his credit — 30 St. Mary Axe (London’s “gherkin” or pickle skyscraper), Hearst Tower, and the Millau Viaduct.

Foster is also an avid cyclist, which led him to re-imagine the lowly bicycle lane as a loftier construct: the SkyCycle, more than 200 kilometres of raised bicycle lanes suspended above London, running mostly above railway lines. What a gorgeous idea.

From the Guardian:

Gliding through the air on a bike might so far be confined to the fantasy realms of singing nannies and aliens in baskets, but riding over rooftops could one day form part of your regular commute to work, if Norman Foster has his way.

Unveiled this week, in an appropriately light-headed vision for the holiday season, SkyCycle proposes a network of elevated bike paths hoisted aloft above railway lines, allowing you to zip through town blissfully liberated from the roads.

The project, which has the backing of Network Rail and Transport for London, would see over 220km of car-free routes installed above London’s suburban rail network, suspended on pylons above the tracks and accessed at over 200 entrance points. At up to 15 metres wide, each of the ten routes would accommodate 12,000 cyclists per hour and improve journey times by up to 29 minutes, according to the designers.

Lord Foster, who says that cycling is one of his great passions, describes the plan as “a lateral approach to finding space in a congested city.”

“By using the corridors above the suburban railways,” he said, “we could create a world-class network of safe, car-free cycle routes that are ideally located for commuters.”

Developed by landscape practice Exterior Architecture, with Foster and Partners and Space Syntax, the proposed network would cover a catchment area of six million people, half of whom live and work within 10 minutes of an entrance. But its ambitions stretch beyond London alone.

“The dream is that you could wake up in Paris and cycle to the Gare du Nord,” says Sam Martin of Exterior Architecture. “Then get the train to Stratford, and cycle straight into central London in minutes, without worrying about trucks and buses.”

Developed over the last two years, the initial idea came from the student project of one of Martin’s employees, Oli Clark, who proposed a network of elevated cycle routes weaving in and around Battersea power station. “It was a hobby in the office for a while,” says Martin. “Then we arranged a meeting at City Hall with the deputy mayor of transport – and bumped into Boris in the lift.”

Bumping into Boris has been the fateful beginning for some of the mayor’s other adventures in novelty infrastructure, including Anish Kapoor’s Orbit tower, apparently forged in a chance meeting with Lakshmi Mittal in the cloakrooms at Davos. Other encounters have resulted in cycle “superhighways” (which many blame for the recent increase in accidents) and a £60 million cable car that doesn’t really go anywhere. But could SkyCycle be different?

“It’s about having an eye on the future,” says Martin. “If London keeps growing and spreading itself out, with people forced to commute increasingly longer distances, then in 20 years it’s just going to be a ghetto for people in suits. After rail fare increases this week, a greater percentage of people’s income is being taken up with transport. There has to be another way to allow everyone access to the centre, and stop this doughnut effect.”

After meeting with Network Rail last year, the design team has focused on a 6.5km trial route from Stratford to Liverpool Street Station, following the path of the overground line, a stretch they estimate would cost around £220 million. Working with Roger Ridsdill-Smith, Foster’s head of structural engineering, responsible for the Millennium Bridge, they have developed what Martin describes as “a system akin to a tunnel-boring machine, but happening above ground”.

“It’s no different to the electrification of the lines west of Paddington,” he says. “It would involve a series of pylons installed along the outside edge of the tracks, from which a deck would project out. Trains could still run while the cycle decks were being installed.”

As for access, the proposal would see the installation of vertical hydraulic platforms next to existing railway stations, as well as ramps that took advantage of the raised topography around viaducts and cuttings. “It wouldn’t be completely seamless in terms of the cycling experience,” Martin admits. “But it could be a place for Boris Bike docking stations, to avoid people having to get their own equipment up there.” He says the structure could also be a source of energy creation, supporting solar panels and rain water collection.

The rail network has long been seen as a key to opening up cycle networks, given the amount of available land alongside rail lines, but no proposal has yet suggested launching cyclists into the air.

Read the entire article here.

Image: How the proposed SkyCycle tracks could look. Courtesy of Foster and Partners / Guardian.

Dear IRS: My Tax Return is Late Because…

google-search-goldfish

Completing an annual tax return, and sending even more hard-earned cash to the government, is not much fun for anyone. So, it’s no surprise that many people procrastinate. In the UK, the organization entrusted with gathering pounds and pennies from the public is Her Majesty’s Revenue and Customs department — the equivalent of the Internal Revenue Service (IRS) in the US.

HMRC recently released a list of the worst excuses from taxpayers for not filing their returns on time. It includes such gems as “late due to death of a pet goldfish” and “late due to run in with a cow.” This re-confirms that the British are indeed the eighth wonder of the world.

From the Telegraph:

A builder who handed in his tax return late blamed the death of his pet goldfish, while a farmer said it was the fault of an unruly cow.

A third culprit said he failed to send in his forms after becoming obsessed with an erupting volcano on the television news.

They were among thousands of excuses used by individuals and businesses last year in a bid to avoid paying a penalty for a late tax return.

But, while HM Revenue & Customs says it considers genuine explanations, it has little regard for lame excuses.

As the top ten was disclosed, officials said all had been hit with £100 fines for late returns. They had all appealed, but lost their actions.

The list was released to encourage the self-employed, and other taxpayers, to meet this year’s January 31 deadline. In all, 10.9 million people are due to file tax returns this month. The number required to fill in a self-assessment form has been inflated by changes to Child Benefit. Any household with an individual earning more than £50,000 must now complete the form if they still receive the benefit.

Ruth Owen, the director general of personal tax, said: “There will always be unforeseen events that mean a taxpayer could not file their tax return on time.

“However, your pet goldfish passing away isn’t one of them.”

The ten worst excuses:

1. My pet goldfish died (self-employed builder)

2. I had a run-in with a cow (Midlands farmer)

3. After seeing a volcanic eruption on the news, I couldn’t concentrate on anything else (London woman)

4. My wife won’t give me my mail (self-employed trader)

5. My husband told me the deadline was March 31, and I believed him (Leicester hairdresser)

6. I’ve been far too busy touring the country with my one-man play (Coventry writer)

7. My bad back means I can’t go upstairs. That’s where my tax return is (a working taxi driver)

8. I’ve been cruising round the world in my yacht, and only picking up post when I’m on dry land (South East man)

9. Our business doesn’t really do anything (Kent financial services firm)

10. I’ve been too busy submitting my clients’ tax returns (London accountant)

Read the entire article here.

Image courtesy of Google Search.

The Military-Industrial-Ski-Resort-Complex

The demented machinations of the world’s greatest living despot, Kim Jong-un, continue. This time the great dictator is on the piste, inspecting a North Korean ski resort newly outfitted with two chair-lifts. And, no Dennis Rodman in sight.

From the Guardian:

It may not have the fir-lined pistes and abundant glühwein of the Swiss resorts of Linden or Wichtracht, close to where Kim Jong-un was educated, but the North Korean leader’s new ski resort at least has a ski lift.

In pictures released by the Korean Central News Agency on Tuesday, Kim can be seen riding the chair lift and admiring the empty pistes.

In August, Switzerland refused to supply machinery to North Korea in a £4.5m deal, describing it as a “propaganda” project, but North Korea has managed to acquire two ski lifts.

Kim took a test ride on one at the Masik Pass ski resort, which he said was “at the centre of the world’s attention”.

He noted “with great satisfaction” that everything was “impeccable” and ordered the authorities to serve the people well so that visitors may “keenly feel the loving care of the party”. He also commanded that the opening ceremony should be held at the earliest possible date.

The resort was described by the news agency as a “great monumental structure in the era of Songun,” referring to the nation’s “military first” policy.

Thousands of soldiers and workers, so called “shock brigades”, built the slopes, hotels and amenities. Earlier this year reporters witnessed workers pounding at the stone with hammers, young women marching with shovels over their shoulders and minivans equipped with loudspeakers blasting patriotic music into the mountain air.

Kim was educated in Berne, Switzerland, where mountains were the backdrop to his studies. Some have speculated that he must have skied during his time there as well as indulging in his often-reported love of basketball.

At the resort, Kim was accompanied by military leaders and Pak Myong-chol, a sports official known to have been associated with Kim’s late uncle who was executed this month.

Jang Song-thaek, Kim’s mentor, was put to death on charges including corruption and plotting to overthrow the state.

The execution was the biggest upheaval since Kim inherited power after the death of his father Kim Jong-il, in December 2011.

Kim visited the resort in June and commanded that work be finished by the end of the year. In the new photographs, he can be seen visiting a hotel room, a spa and a ski shop.

The 30-year-old likes to be associated with expensive, high-profile leisure projects as well as the more frequent party congresses and military inspections. Projects associated with him include a new water park, an amusement park and a horse riding club.

The Munsu water park in Pyongyang opened in October and Kim was photographed in a cinema in the newly-renovated Rungna people’s amusement park. State media also showed footage of Kim on a rollercoaster in the same park.

North Korea is one of the poorest countries in the world with an estimated per capita GDP of under £1,100. Government attempts to increase economic growth are often frustrated by the fear of opening the country to foreign influence.

Read the entire article here.

Image: North Korean leader Kim Jong-un inspects Masik Pass ski resort, Kangwon province. Courtesy of AFP/Getty Images, Guardian.

Teens and the Internet: Don’t Panic

Some view online social networks, smartphones and texting as nothing but bad news for the future socialization of our teens. After all, they’re usually hunched heads down, thumbs out, immersed in their own private worlds, oblivious to all else, all the while, paradoxically and simultaneously, publishing and sharing anything and everything with anyone.

Yet others, including Microsoft researcher Danah Boyd, have a more benign view of the technological maelstrom that surrounds our kids. In her book It’s Complicated: The Social Lives of Networked Teens, she argues that teenagers aren’t doing anything different today online than their parents and grandparents often did in person. Parents will take comfort from Boyd’s analysis that today’s teens will become much like their parents: behaving and worrying about many of the same issues that their parents did. Of course, teens will find this very, very uncool indeed.

From Technology Review:

Kids today! They’re online all the time, sharing every little aspect of their lives. What’s wrong with them? Actually, nothing, says Danah Boyd, a Microsoft researcher who studies social media. In a book coming out this winter, It’s Complicated: The Social Lives of Networked Teens, Boyd argues that teenagers aren’t doing much online that’s very different from what kids did at the sock hop, the roller rink, or the mall. They do so much socializing online mostly because they have little choice, Boyd says: parents now generally consider it unsafe to let kids roam their neighborhoods unsupervised. Boyd, 36, spoke with MIT Technology Review’s deputy editor, Brian Bergstein, at Microsoft Research’s offices in Manhattan.

I feel like you might have titled the book Everybody Should Stop Freaking Out.

It’s funny, because one of the early titles was Like, Duh. Because whenever I would show my research to young people, they’d say, “Like, duh. Isn’t this so obvious?” And it opens with the anecdote of a boy who says, “Can you just talk to my mom? Can you tell her that I’m going to be okay?” I found that refrain so common among young people.

You and your colleague Alice Marwick interviewed 166 teenagers for this book. But you’ve studied social media for a long time. What surprised you?

It was shocking how heavily constrained their mobility was. I had known it had gotten worse since I was a teenager, but I didn’t get it—the total lack of freedom to just go out and wander. Young people weren’t even trying to sneak out [of the house at night]. They were trying to get online, because that’s the place where they hung out with their friends.

And I had assumed based on the narratives in the media that bullying was on the rise. I was shocked that data showed otherwise.

Then why do narratives such as “Bullying is more common online” take hold?

It’s made more visible. There is some awful stuff out there, but it frustrates me when a panic distracts us from the reality of what’s going on. One of my frustrations is that there are some massive mental health issues, and we want to blame the technology [that brings them to light] instead of actually dealing with mental health issues.

I take your point that Facebook or Instagram is the equivalent of yesterday’s hangouts. But social media amplify everyday situations in difficult new ways. For example, kids might instantly see on Facebook that they’re missing out on something other kids are doing together.

That can be a blessing or a curse. These interpersonal conflicts ramp up much faster [and] can be much more hurtful. That’s one of the challenges for this cohort of youth: some of them have the social and emotional skills that are necessary to deal with these conflicts; others don’t. It really sucks when you realize that somebody doesn’t like you as much as you like them. Part of it is, then, how do you use that as an opportunity not to just wallow in your self-pity but to figure out how to interact and be like “Hey, let’s talk through what this friendship is like”?

You contend that teenagers are not cavalier about privacy, despite appearances, and adeptly shift sensitive conversations into chat and other private channels.

Many adults assume teens don’t care about privacy because they’re so willing to participate in social media. They want to be in public. But that doesn’t mean that they want to be public. There’s a big difference. Privacy isn’t about being isolated from others. It’s about having the capacity to control a social situation.

So if parents can let go of some common fears, what should they be doing?

One thing that I think is dangerous is that we’re trained that we are the experts at everything that goes on in our lives and our kids’ lives. So the assumption is that we should teach them by telling them. But I think the best way to teach is by asking questions: “Why are you posting that? Help me understand.” Using it as an opportunity to talk. Obviously there comes a point when your teenage child is going to roll their eyes and go, “I am not interested in explaining anything more to you, Dad.”

The other thing is being present. The hardest thing that I saw, overwhelmingly—the most unhealthy environments—were those where the parents were not present. They could be physically present and not actually present.

Read the entire article here.

Asimov Fifty Years On

1957-driverless-car

In 1964, Isaac Asimov wrote an essay for the New York Times entitled Visit to the World’s Fair of 2014. The essay was a free-wheeling opinion of things to come, viewed through the lens of New York’s World’s Fair of 1964. The essay shows that even a grand master of science fiction cannot predict the future — he got some things quite right and other things rather wrong. Some examples are below, along with a link to his full essay.

That said, what has captured recent attention is Asimov’s thinking on the complex and evolving relationship between humans and technology, and the challenges of environmental stewardship in an increasingly over-populated and resource-starved world.

So, while Asimov was certainly not a teller of fortunes, he had insights that many, even today, still lack.

Read the entire Isaac Asimov essay here.

What Asimov got right:

“Communications will become sight-sound and you will see as well as hear the person you telephone.”

“As for television, wall screens will have replaced the ordinary set…”

“Large solar-power stations will also be in operation in a number of desert and semi-desert areas…”

“Windows… will be polarized to block out the harsh sunlight. The degree of opacity of the glass may even be made to alter automatically in accordance with the intensity of the light falling upon it.”

What Asimov got wrong:

“The appliances of 2014 will have no electric cords, of course, for they will be powered by long- lived batteries running on radioisotopes.”

“…cars will be capable of crossing water on their jets…”

“For short-range travel, moving sidewalks (with benches on either side, standing room in the center) will be making their appearance in downtown sections.”

From the Atlantic:

In August of 1964, just more than 50 years ago, author Isaac Asimov wrote a piece in The New York Times, pegged to that summer’s World Fair.

In the essay, Asimov imagines what the World Fair would be like in 2014—his future, our present.

His notions were strange and wonderful (and conservative, as Matt Novak writes in a great run-down), in the way that dreams of the future from the point of view of the American mid-century tend to be. There will be electroluminescent walls for our windowless homes, levitating cars for our transportation, 3D cube televisions that will permit viewers to watch dance performances from all angles, and “Algae Bars” that taste like turkey and steak (“but,” he adds, “there will be considerable psychological resistance to such an innovation”).

He got some things wrong and some things right, as is common for those who engage in the sport of prediction-making. Keeping score is of little interest to me. What is of interest: what Asimov understood about the entangled relationships among humans, technological development, and the planet—and the implications of those ideas for us today, knowing what we know now.

Asimov begins by suggesting that in the coming decades, the gulf between humans and “nature” will expand, driven by technological development. “One thought that occurs to me,” he writes, “is that men will continue to withdraw from nature in order to create an environment that will suit them better.”

It is in this context that Asimov sees the future shining bright: underground, suburban houses, “free from the vicissitudes of weather, with air cleaned and light controlled, should be fairly common.” Windows, he says, “need be no more than an archaic touch,” with programmed, alterable, “scenery.” We will build our own world, an improvement on the natural one we found ourselves in for so long. Separation from nature, Asimov implies, will keep humans safe—safe from the irregularities of the natural world, and the bombs of the human one, a concern he just barely hints at, but that was deeply felt at the time.

But Asimov knows too that humans cannot survive on technology alone. Eight years before astronauts’ Blue Marble image of Earth would reshape how humans thought about the planet, Asimov sees that humans need a healthy Earth, and he worries that an exploding human population (6.5 billion, he accurately extrapolated) will wear down our resources, creating massive inequality.

Although technology will still keep up with population through 2014, it will be only through a supreme effort and with but partial success. Not all the world’s population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.

This troubled him, but the real problems lay yet further in the future, as “unchecked” population growth pushed urban sprawl to every corner of the planet, creating a “World-Manhattan” by 2450. But, he exclaimed, “society will collapse long before that!” Humans would have to stop reproducing so quickly to avert this catastrophe, he believed, and he predicted that by 2014 we would have decided that lowering the birth rate was a policy priority.

Asimov rightly saw the central role of the planet’s environmental health to a society: No matter how technologically developed humanity becomes, there is no escaping our fundamental reliance on Earth (at least not until we seriously leave Earth, that is). But in 1964 the environmental specters that haunt us today—climate change and impending mass extinctions—were only just beginning to gain notice. Asimov could not have imagined the particulars of this special blend of planetary destruction we are now brewing—and he was overly optimistic about our propensity to take action to protect an imperiled planet.

Read the entire article here.

Image: Driverless cars as imaged in 1957. Courtesy of America’s Independent Electric Light and Power Companies/Paleofuture.

The Future Tubes of the Internets

CerfKahnMedalOfFreedom

Back in 1973, when computer scientists Vint Cerf and Robert Kahn sketched out plans to connect a handful of government networks, little did they realize the scale of their invention — TCP/IP, a standard protocol for the interconnection of computer networks. Now, the two patriarchs of the Internet revolution — with no Al Gore in sight — prognosticate on the next 40 years of the Internet.

From the NYT:

Will 2014 be the year that the Internet is reined in?

When Edward J. Snowden, the disaffected National Security Agency contract employee, purloined tens of thousands of classified documents from computers around the world, his actions — and their still-reverberating consequences — heightened international pressure to control the network that has increasingly become the world’s stage. At issue is the technical principle that is the basis for the Internet, its “any-to-any” connectivity. That capability has defined the technology ever since Vinton Cerf and Robert Kahn sequestered themselves in the conference room of a Palo Alto, Calif., hotel in 1973, with the task of interconnecting computer networks for an elite group of scientists, engineers and military personnel.

The two men wound up developing a simple and universal set of rules for exchanging digital information — the conventions of the modern Internet. Despite many technological changes, their work prevails.

But while the Internet’s global capability to connect anyone with anything has affected every nook and cranny of modern life — with politics, education, espionage, war, civil liberties, entertainment, sex, science, finance and manufacturing all transformed — its growth increasingly presents paradoxes.

It was, for example, the Internet’s global reach that made classified documents available to Mr. Snowden — and made it so easy for him to distribute them to news organizations.

Yet the Internet also made possible widespread surveillance, a practice that alarmed Mr. Snowden and triggered his plan to steal and publicly release the information.

With the Snowden affair starkly highlighting the issues, the new year is likely to see renewed calls to change the way the Internet is governed. In particular, governments that do not favor the free flow of information, especially if it’s through a system designed by Americans, would like to see the Internet regulated in a way that would “Balkanize” it by preventing access to certain websites.

The debate right now involves two international organizations, usually known by their acronyms, with different views: Icann, the Internet Corporation for Assigned Names and Numbers, and the I.T.U., or International Telecommunication Union.

Icann, a nonprofit that oversees the Internet’s basic functions, like the assignment of names to websites, was established in 1998 by the United States government to create an international forum for “governing” the Internet. The United States continues to favor this group.

The I.T.U., created in 1865 as the International Telegraph Convention, is the United Nations telecommunications regulatory agency. Nations like Brazil, China and Russia have been pressing the United States to switch governance of the Internet to this organization.

Dr. Cerf, 70, and Dr. Kahn, 75, have taken slightly different positions on the matter. Dr. Cerf, who was chairman of Icann from 2000-7, has become known as an informal “Internet ambassador” and a strong proponent of an Internet that remains independent of state control. He has been one of the major supporters of the idea of “network neutrality” — the principle that Internet service providers should enable access to all content and applications, regardless of the source.

Dr. Kahn has made a determined effort to stay out of the network neutrality debate. Nevertheless, he has been more willing to work with the I.T.U., particularly in attempting to build support for a system, known as Digital Object Architecture, for tracking and authenticating all content distributed through the Internet.

Both men agreed to sit down, in separate interviews, to talk about their views on the Internet’s future. The interviews were edited and condensed.

The Internet Ambassador

After serving as a program manager at the Pentagon’s Defense Advanced Research Projects Agency, Vinton Cerf joined MCI Communications Corp., an early commercial Internet company that was purchased by Verizon in 2006, to lead the development of electronic mail systems for the Internet. In 2005, he became a vice president and “Internet evangelist” for Google. Last year he became the president of the Association for Computing Machinery, a leading international educational and scientific computing society.

Q. Edward Snowden’s actions have raised a new storm of controversy about the role of the Internet. Is it a significant new challenge to an open and global Internet?

A. The answer is no, I don’t think so. There are some similar analogues in history. The French historically copied every telex or every telegram that you sent, and they shared it with businesses in order to remain competitive. And when that finally became apparent, it didn’t shut down the telegraph system.

The Snowden revelations will increase interest in end-to-end cryptography for encrypting information both in transit and at rest. For many of us, including me, who believe that is an important capacity to have, this little crisis may be the trigger that induces people to spend time and energy learning how to use it.

You’ve drawn the analogy to a road or highway system. That brings to mind the idea of requiring a driver’s license to use the Internet, which raises questions about responsibility and anonymity.

I still believe that anonymity is an important capacity, that people should have the ability to speak anonymously. It’s argued that people will be encouraged to say untrue things, harmful things, especially if they believe they are anonymous.

There is a tension there, because in some environments the only way you will be able to behave safely is to have some anonymity.

Read the entire article here.

Image: Vinton Cerf and Robert Kahn receiving the Presidential Medal of Freedom from President George W. Bush in 2005. Courtesy of Wikipedia.

Content Versus Innovation

VHS-cassette

The entertainment and media industry is not known for its innovation. Left to its own devices we would all be consuming news from broadsheets and a town crier, and digesting shows at the theater. Not too long ago the industry, led by Hollywood heavyweights, was doing its utmost to kill emerging forms of media consumption, such as the video tape cassette and the VCR.

Following numerous regulatory, legal and political skirmishes, innovation finally triumphed over entrenched interests, allowing VHS tape, followed by the DVD, to flourish, albeit for a while. This of course paved the way for new forms of distribution — the rise of Blockbuster and a myriad of neighborhood video rental stores.

In a great ironic twist, the likes of Blockbuster failed to recognize signals from the market that without significant and continual innovation their business models would subsequently crumble. Now Netflix and other streaming services have managed to end our weekend visits to the movie rental store.

A fascinating article excerpted below takes a look back at the lengthy, and continuing, fight between the conservative media empires and the market’s constant pull from technological innovation.

[For a fresh perspective on the future of media distribution, see our recent posting here.]

From TechCrunch:

The once iconic video rental giant Blockbuster is shutting down its remaining stores across the country. Netflix, meanwhile, is emerging as the leader in video rental, now primarily through online streaming. But Blockbuster, Netflix and home media consumption (VCR/DVD/Blu-ray) may never have existed at all in their current form if the content industry had been successful in banning or regulating them. In 1983, nearly 30 years before thousands of websites blacked out in protest of SOPA/PIPA, video stores across the country closed in protest against legislation that would bar their market model.

A Look Back

In 1977, the first video-rental store opened. It was 600 square feet and located on Wilshire Boulevard in Los Angeles. George Atkinson, the entrepreneur who decided to launch this idea, charged $50 for an “annual membership” and $100 for a “lifetime membership” but the memberships only allowed people to rent videos for $10 a day. Despite an unusual business model, Atkinson’s store was an enormous success, growing to 42 affiliated stores in fewer than 20 months and resulting in numerous competitors.

In retrospect, Atkinson’s success represented the emergence of an entirely new market: home consumption of paid content. It would become an $18 billion domestic market, and, rather than cannibalize the existing movie theater market, it would eclipse it and thereby become a massive revenue source for the industry.

Atkinson’s success in 1977 is particularly remarkable as the Sony Betamax (the first VCR) had only gone on sale domestically in 1975 at a cost of $1,400 (which in 2013 U.S. dollars is $6,093). As a comparison, the first DVD player in 1997 cost $1,458 in 2013 dollars and the first Blu-ray player in 2006 cost $1,161 in 2013 dollars. And unlike the DVD and Blu-ray player, it would take eight years, until 1983, for the VCR to reach 10 percent of U.S. television households. Atkinson’s success, and that of his early competitors, was in catering to a market of well under 10 percent of U.S. households.
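
Those conversions are a standard consumer-price-index adjustment: multiply the original price by the ratio of the index in the target year to the index in the purchase year. A minimal sketch follows, using approximate annual CPI-U averages; the index values below are assumptions for illustration, so the outputs are only rough figures.

# Approximate US CPI-U annual averages (assumed values for illustration).
CPI = {1975: 53.8, 1997: 160.5, 2006: 201.6, 2013: 233.0}

def in_2013_dollars(price, year):
    # Convert a historical price to 2013 dollars via the CPI ratio.
    return price * CPI[2013] / CPI[year]

print(round(in_2013_dollars(1400, 1975)))  # Betamax VCR: about $6,100 in 2013 dollars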

While many content companies realized this as a massive new revenue stream — e.g. 20th Century Fox buying one video rental company for $7.5 million in 1979 — the content industry lawyers and lobbyists tried to stop the home content market through litigation and regulation.

The content industry sued to ban the sale of the Betamax, the first VCR. This legal strategy was coupled by leveraging the overwhelming firepower of the content industry in Washington. If they lost in court to ban the technology and rental business model, then they would ban the technology and rental business model in Congress.

Litigation Attack

In 1976, the content industry filed suit against Sony, seeking an injunction to prevent the company from “manufacturing, distributing, selling or offering for sale Betamax or Betamax tapes.” Essentially granting this remedy would have banned the VCR for all Americans. The content industry’s motivation behind this suit was largely to deal with individuals recording live television, but the emergence of the rental industry was likely a contributing factor.

While Sony won at the district court level in 1979, in 1981 it lost at the Court of Appeals for the Ninth Circuit where the court found that Sony was liable for copyright infringement by their users — recording broadcast television. The Appellate court ordered the lower court to impose an appropriate remedy, advising in favor of an injunction to block the sale of the Betamax.

And in 1981, under normal circumstances, the VCR would have been banned then and there. Sony faced liability well beyond its net worth, so it may well have been the end of Sony, or at least its U.S. subsidiary, and the end of the VCR. Millions of private citizens could have been liable for damages for copyright infringement for recording television shows for personal use. But Sony appealed this ruling to the Supreme Court.

The Supreme Court is able to take very few cases. For example in 2009, 1.1 percent of petitions for certiorari were granted, and of these approximately 70 percent are cases where there is a conflict among different courts (here there was no conflict). But in 1982, the Supreme Court granted certiorari and agreed to hear the case.

After an oral hearing, the justices took a vote internally, and originally only one of them was persuaded to keep the VCR as legal (but after discussion, the number of justices in favor of the VCR would eventually increase to four).

With five votes in favor of affirming the previous ruling, the Betamax (VCR) was to be illegal in the United States (see Justice Blackmun’s papers).

But then, something even more unusual happened – which is why we have the VCR and subsequent technologies: The Supreme Court decided for both sides to re-argue a portion of the case. Under the Burger Court (when he was Chief Justice), this only happened in 2.6 percent of the cases that received oral argument. In the re-argument of the case, a crucial vote switched sides, which resulted in a 5-4 decision in favor of Sony. The VCR was legal. There would be no injunction barring its sale.

The majority opinion characterized the lawsuit as an “unprecedented attempt to impose copyright liability upon the distributors of copying equipment” and rejected “[s]uch an expansion of the copyright privilege” as “beyond the limits” given by Congress. The Court even cited Mr. Rogers, who testified during the trial:

I have always felt that with the advent of all of this new technology that allows people to tape the ‘Neighborhood’ off-the-air . . . Very frankly, I am opposed to people being programmed by others.

On the absolute narrowest of legal grounds, through a highly unusual legal process (and significant luck), the VCR was saved by one vote at the Supreme Court in 1984.

Regulation Attack

In 1982 legislation was introduced in Congress to give copyright holders the exclusive right to authorize the rental of prerecorded videos. Legislation was reintroduced in 1983, the Consumer Video Sales Rental Act of 1983. This legislation would have allowed the content industry to shut down the rental market, or charge exorbitant fees, by making it a crime to rent out movies purchased commercially. In effect, this legislation would have ended the existing market model of rental stores. With 34 co-sponsors, major lobbyists and significant campaign contributions to support it, this legislation had substantial support at the time.

Video stores saw the Consumer Video Sales Rental Act as an existential threat, and on October 21, 1983, about 30 years before the SOPA/PIPA protests, video stores across the country closed down for several hours in protest. While the 1983 legislation died in committee, the legislation would be reintroduced in 1984. In 1984, similar legislation was enacted, The Record Rental Amendment of 1984, which banned the renting and leasing of music. In 1990, Congress banned the renting of computer software.

But in the face of public backlash from video retailers and customers, Congress did not pass the Consumer Video Sales Rental Act.

At the same time, the movie studios tried to ban the Betamax VCR through legislation. Eventually the content industry decided to support legislation that would require compulsory licensing rather than an outright ban. But such a compulsory licensing scheme would have drastically driven up the costs of video tape players and may have effectively banned the technology (similar regulations did ban other technologies).

For the content industry, banning the technology was a feature, not a bug.

Read the entire article here.

Image: Video Home System (VHS) cassette tape. Courtesy of Wikipedia.

MondayMap: The 124 States

USNeverWasBig

The slew of recent secessionist movements in the United States got Andrew Shears, a geography professor at Mansfield University, thinking — what would the nation look like if all previous state petitions and secessionist movements had succeeded? Well, our MondayMap shows the result: Texas would be a mere sliver of its current self; much of California would be named Reagan; the Navajo of the four corners region would have their own state; and North Dakota would cease to exist.

Read the entire article here.

Image: Map of the United States with 124 States. Courtesy of Andrew Shears.

Auf Wiedersehen, Pet

old VW campers parked next to each other

After almost 65 years rolling off the production line, the VW Camper nears the end of its current journey. Though didn’t Volkswagen once say the same of its iconic Beetle, which now has a new lease on life? Oh well. In the meantime, hippies and other traveling souls will mourn the passing of an iconic, albeit rather unreliable, mode of transport (and form of housing).

See more images here.

Image courtesy of the Guardian / Royston White.

The Benefits of BingeTV

Netflix_logo

Netflix’s content boss, Ted Sarandos, is planning to kill you, with kindness. Actually, joy. His plan to recast our consumption of television programming aims to deliver mountains of joyful entertainment in place of the current wasteland of incremental TV-dross punctuated with schlock-TV.

While the plan is still in the works, the fruits of Netflix’s labor are becoming apparent — binge-watching is rapidly assuming the momentum of a cultural wave in the US. The super-sized viewing gulp, from say thirteen successive hours of House of Cards or Orange is the New Black, is quickly replacing the national attention deficit disorder enabled by the once anxious and restless trigger finger on the TV remote, as viewers flip from one mindless show to the next. Yet, by making Netflix into the next great media and entertainment empire, Sarandos may be laying the foundation for an unintended, and more important, benefit — lengthening the attention span of the average adult.

From the Guardian:

Is Ted Sarandos a force for evil? It’s a theory. Consider the evidence. Britain is already a country beset by various health-ruining, bank balance-depleting behaviours – binge-drinking, chain-smoking, overeating, watching football. Since January 2012 when Netflix launched its UK operation, Sarandos, its chief content officer, has created a new demographic of bingeing Britons – 1.5 million subscribers who spend £5.99 a month to gorge on TV online.

To get a sense of what the 49-year-old Arizonan is doing to TV culture, imagine that you’ve just finished watching episode five of Joss Whedon’s Firefly on your laptop, courtesy of Netflix. You’ve got places to go, people to meet. But up pops a little box on screen saying the next episode starts in 12 seconds. Five hours later, you dimly realise that you’ve forgotten to pick up your kids from school and/or your boss has texted you 12 times wondering if you’re planning to show up today.

Does he feel responsible for creating this binge culture, not just here but across the world (Netflix has 38 million subscribers in 40 countries, who watch about a billion hours of TV shows and films each month), I ask Sarandos when we meet in a London hotel? He laughs. “I love it when it happens that you just have to watch. It only takes that little bit of prodding.”

Sarandos feels it is legitimate to prod the audience so that they can get what they want. Or what he thinks they want. All 13 episodes of, say, political thriller House of Cards with Kevin Spacey, or the same number of prison comedy drama Orange is the New Black with Taylor Schilling, released in one great virtual lump.

Why? “The television business is based on managed dissatisfaction. You’re watching a great television show you’re really wrapped up in? You might get 50 minutes of watching a week and then 18,000 minutes of waiting until the next episode comes along. I’d rather make it all about the joy.”

Sarandos says he got an intimation of the pleasures of binge-viewing as a teenager in Phoenix, Arizona. On Sundays in the mid-70s, he and his family would gather round the telly to watch Mary Hartman, Mary Hartman, a satire on soap operas. “If you worked, the only way you could catch up with the five episodes they showed in the week was watching them back to back on Sunday night. So bingeing was already big in my subconscious.”

Years later, Sarandos binged again. “I really loved the Sopranos but didn’t have HBO. So someone would send me tapes of the show with three or four episodes. I would watch one episode and go: ‘Oh my God, I’ve got to watch one more.’ I’d watch the whole tape and champ at the bit for the next one.”

The TV revolution for which Sarandos and Netflix are responsible involves eliminating bit-champing and monetising instant gratification. Netflix has done well from that revolution: its reported net income was $29.5m for the quarter ending 30 June. Profits quintupled compared with the same period in 2012 – in part due to its new UK operation.

Sarandos hasn’t done badly either. He and his wife, former US ambassador Nicole Avant, have a $5.4m Beverly Hills property and recently bought comedian David Spade’s beachside Malibu home for $10.2m. Sarandos argues viewers have long been battling schedulers bent on stopping them seeing what they want, when they want. “Before time shifting, they would use VCRs to collect episodes and view them whenever they wanted. And, more importantly, in whatever doses they wanted. Then DVD box sets and later DVRs made that self-dosing even more sophisticated.”

He began studying how viewers consumed TV while working part-time in a strip-mall video store in the early 1980s. By 30, he was an executive for a company supplying Blockbuster with videos. In 2000, he was hired by Netflix to develop its service posting rental DVDs to customers. “We saw that people would return those discs for TV series very quickly, given they had three hours of programming on them – more quickly than they would a movie. They wanted the next hit.”

Netflix mutated from a DVD-by-post service to an on-demand internet network for films and TV series, and Sarandos found himself cutting deals with traditional TV networks to broadcast shows online a season after they were originally shown, instead of waiting for several years for them to be available for syndication.

Thanks to Netflix and its competitors, the old TV set in the living room is becoming redundant. That living-room fixture has been replaced by a host of mobile surrogates – tablet, laptop, Xbox and even smart phone.

Were that all Sarandos had achieved, he would have been a minor player in the idiot box revolution. But a couple of years ago, he decided Netflix should commission its own drama series and release them globally in season-sized bundles. In making that happen, he radically changed not just how but what we watch.

Why bother? “Up till a couple of years ago, a network would make every pilot for a series into a one-off show. I started getting worried, thinking nobody’s going to make series any more, and so we wouldn’t be able to buy them [for Netflix] a season after they’ve been broadcast. So we said maybe we should develop that muscle ourselves.” Sarandos has a $2bn annual content budget, and spends as much as 10% on developing that muscle.

Strikingly, he didn’t spend that money on movies, but TV. Why? “Movies are becoming more global, which is making them less intimate. If you make a movie for the world, you don’t make it for any country.

“I think television is going in the opposite direction – richer characterisation, denser storylines – and so much more like reading a novel. It is a golden age for TV, but only because the best writers and directors increasingly like to work outside Hollywood.” Hence, perhaps, the successes of The Sopranos, The West Wing, The Wire, Downton Abbey, and by this time next year – he hopes – Netflix series such as the Wachowskis’ sci-fi drama Sense 8. TV, if Sarandos has his way, is the new Hollywood.

Netflix’s first foray into original drama came only last year with Lilyhammer, in which Steve Van Zandt, who played mobster Silvio Dante in The Sopranos, reprised his gangster chops as Frank “The Fixer” Tagliano, a mobster starting a new life in Lillehammer, Norway. The whole season of eight episodes was released on Netflix at the same time, delighting those suffering withdrawal symptoms after the end of The Sopranos. A second season is soon to be released.

Sarandos suggests Lilyhammer points the way to a new globalised future for TV drama – more than a fifth of Norway’s population watched the show. “It opened a world of possibilities – mainstream viewing of subtitled programming in the US and releasing in every language and every territory at the exact same moment. To me, that’s what the future will be like.”

Sarandos also tore up another page of the TV rulebook, the one that says each episode of a series must be the same length. “If you watched Arrested Development [the sitcom he recommissioned in May, seven years after it last broadcast] none of those episodes has the same running time – some were 28 minutes, some 47 minutes. I’m saying take as much time as you need to tell the story well. You couldn’t really do that on linear television because you have a grid, commercial breaks and the like.”

House of Cards, his second commissioned drama series whose first season was released in February, even better demonstrates Sarandos’s diabolical genius (if that is what it is). He once described himself as “a human algorithm” for his ability, developed in that Phoenix strip mall, for recommending movies based on a customer’s previous rentals. He did something similar when he commissioned House of Cards.

“It was generated by algorithm,” Sarandos says, grinning. But he’s not entirely joking. “I didn’t use data to make the show, but I used data to determine the potential audience to a level of accuracy very few people can do.”

It worked like this. In 2011, he learned that Hollywood director David Fincher, then working on his movie adaptation of The Girl with the Dragon Tattoo, was pitching his first TV series. Based on a script by Oscar-nominated writer Beau Willimon, it was to be a remake of the 1990 BBC series House of Cards, and would star Kevin Spacey as an amoral US senator.

Some networks were sceptical, but Sarandos – not least because he’s a political junkie who loves political thrillers and, along with his wife, helped raise nearly $700m for Obama’s re-election campaign – was tempted. He unleashed his spreadsheets, using Netflix data to determine how many subscribers watched political dramas such as The West Wing or the original House of Cards.

“We’ve been collecting data for a long time. It showed how many Netflix members love The West Wing and the original House of Cards. It also showed who loved David Fincher’s films and Kevin Spacey’s.”

Armed with this data, Sarandos made the biggest gamble of his life. He went to David Fincher’s West Hollywood office and announced he wanted to spend $100m on not one, but two 13-part seasons of House of Cards. Based on his calculations, he says: “I felt that sounded like a pretty safe bet.”
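
Sarandos doesn’t spell out the mechanics of that calculation, but the kind of estimate he describes can be sketched as simple set arithmetic over viewing histories: count the subscribers who sit in the overlap of the relevant audiences. The snippet below is an invented illustration in Python (the subscriber IDs and segment names are assumptions), not a description of Netflix’s actual systems.

# Hypothetical viewing histories: subscriber IDs grouped by audience segment.
watched_political_drama = {101, 102, 103, 105, 108}   # The West Wing, the original House of Cards
watched_fincher_films = {102, 103, 104, 108, 110}
watched_spacey_films = {101, 102, 103, 108, 109}

# Core audience: subscribers who appear in all three segments.
core = watched_political_drama & watched_fincher_films & watched_spacey_films

# Broader reach: subscribers who appear in at least one segment.
reach = watched_political_drama | watched_fincher_films | watched_spacey_films

print(len(core), len(reach))  # 3 8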

Read the entire article here.

Image: Netflix logo. Courtesy of Wikipedia / Netflix.

A Kid’s Book For Adults

book_BoneByBone

One of the most engaging new books for young children is a picture book that explains evolution. By way of whimsical illustrations and comparisons of animal skeletons the book — Bone By Bone — is able to deliver the story of evolutionary theory in an entertaining and compelling way.

Perhaps it could be used just as well for those adults who have trouble grappling with the fruits of the scientific method. The Texas State Board of Education would be an ideal place to begin.

Bone By Bone is written by veterinarian Sara Levine.

From Slate:

In some of the best children’s books, dandelions turn into stars, sharks and radishes merge, and pancakes fall from the sky. No one would confuse these magical tales for descriptions of nature. Small children can differentiate between “the real world and the imaginary world,” as psychologist Alison Gopnik has written. They just “don’t see any particular reason for preferring to live in the real one.”

Children’s nuanced understanding of the not-real surely extends to the towering heap of books that feature dinosaurs as playmates who fill buckets of sand or bake chocolate-chip cookies. The imaginative play of these books may be no different to kids than radishsharks and llama dramas.

But as a parent, friendly dinos never steal my heart. I associate them, just a little, with old creationist images of animals frolicking near the Garden of Eden, which carried the message that dinosaurs and man, both created by God on the sixth day, co-existed on the Earth until after the flood. (Never mind the evidence that dinosaurs went extinct millions of years before humans appeared.) The founder of the Creation Museum in Kentucky calls dinosaurs “missionary lizards,” and that phrase echoes in my head when I see all those goofy illustrations of dinosaurs in sunglasses and hats.

I’ve been longing for another kind of picture book: one that appeals to young children’s wildest imagination in service of real evolutionary thinking. Such a book could certainly include dinosaur skeletons or fossils. But Bone by Bone, by veterinarian and professor Sara Levine, fills the niche to near perfection by relying on dogs, rabbits, bats, whales, and humans. Levine plays with differences in their skeletons to groom kids for grand scientific concepts.

Bone by Bone asks kids to imagine what their bodies would look like if they had different configurations of bones, like extra vertebrae, longer limbs, or fewer fingers. “What if your vertebrae didn’t stop at your rear end? What if they kept going?” Levine writes, as a boy peers over his shoulder at the spinal column. “You’d have a tail!”

“What kind of animal would you be if your leg bones were much, much longer than your arm bones?” she wonders, as a girl in pink sneakers rises so tall her face disappears from the page. “A rabbit or a kangaroo!” she says, later adding a pika and a hare. “These animals need strong hind leg bones for jumping.” Levine’s questions and answers are delightfully simple for the scientific heft they carry.

With the lightest possible touch, Levine introduces the idea that bones in different vertebrates are related and that they morph over time. She starts with vertebrae, skulls and ribs. But other structures bear strong kinships in these animals, too. The bone in the center of a horse’s hoof, for instance, is related to a human finger. (“What would happen if your middle fingers and the middle toes were so thick that they supported your whole body?”) The bones that radiate out through a bat’s wing are linked to those in a human hand. (“A web of skin connects the bones to make wings so that a bat can fly.”) This is different from the wings of a bird or an insect; with bats, it’s almost as if they’re swimming through air.

Of course, human hands did not shape-shift into bats’ wings, or vice versa. Both derive from a common ancestral structure, which means they share an evolutionary past. Homology, as this kind of relatedness is called, is among “the first and in many ways the best evidence for evolution,” says Josh Rosenau of the National Center for Science Education. Comparing bones also paves the way for comparing genes and molecules, for grasping evolution at the next level of sophistication. Indeed, it’s hard to look at the bat wings and human hands as presented here without lighting up, at least a little, with these ideas. So many smart writers focus on preparing young kids to read or understand numbers. Why not do more to ready them for the big ideas of science? Why not pave the way for evolution? (This is easier to do with older kids, with books like The Evolution of Calpurnia Tate and Why Don’t Your Eyelashes Grow?)

Read the entire story here.

Image: Bone By Bone, book cover. Courtesy: Lerner Publishing Group

A Cry For Attention

Peter-Essick

If Mother Earth could post a handful of selfies to awaken us all to the damage, destruction and devastation wrought by her so-called intelligent inhabitants, these would be the images. Peter Essick, National Geographic photo-essayist, gives our host a helping hand with a stunning collection of photographs in his new book, Our Beautiful, Fragile World: images of sadness and loss.

See more of Essick’s photographs here.

From ars technica:

The first song on Roger Waters’ album Amused to Death, The Ballad of Bill Hubbard, begins with an anecdote: the story of a wounded soldier asking to be abandoned to die on the battlefield. Told in a matter-of-fact tone by the aged voice of the soldier who abandoned him, it creates a strong counterpoint to the emotion that underlies the story. It evokes sepia-toned images of pain and loss.

Matter-of-fact storytelling makes Peter Essick’s book, Our Beautiful, Fragile World, an emotional snapshot of environmental tragedies in progress. Essick is a photojournalist for National Geographic who has spent the last 25 years documenting man’s devastating impact on the environment. In this respect, Essick has an advantage over Waters: the visual imagery linked to each story leaves nothing to chance.

Essick has put about a hundred of his most evocative images in a coffee table book. The images range across the globe, from the wilds of Alaska, the Antarctic, and Torres Del Paine National Park in Chile to the everyday: a Home Depot parking lot in Baltimore and a picnic on the banks of the Patuxent River.

The storytelling complements the imagery very well. Indeed, Essick’s matter-of-fact voice lets the reader draw their emotional response from the photos and their relationship to the story. The strongest images are often the most mundane. The tragedy of incomplete and unsuccessful cleanup efforts in Chesapeake Bay is made all the more poignant by the image of recreational users enjoying the bay while adding further damage. This is the second theme of the book: even environmental damage can be made to look stunningly beautiful. The infinity room at the Idaho National Engineering and Environmental Laboratory dazzles the eye, while one can’t help but stare in wonder at the splendid desolation created by mining the Canadian oil sands.

Despite the beauty, though, the overriding tone is one of sadness. Sadness for what we have lost, what we are losing, and what will soon be lost. In some sense, these images are about documenting what we have thrown away. This is a sepia-toned book, even though the images are not. I consider myself to be environmentally aware. I have made efforts to reduce my carbon footprint; I don’t own a car; we have reduced the amount of meat in our diet; we read food labels to try to purchase from sustainable sources. Yet, this book makes me realise how much more we have to do, while my own life tells me how hard that actually is.

This book is really a cry for attention. It brings into stark relief the hidden consequences of modern life. Our appetite for energy, for plastics, for food, and for metals is, without doubt, causing huge damage to the Earth. Some of it is local: hard rock mining leaving water not just undrinkable but too acidic to touch, and nearby land unusable. Other problems are global: carbon emissions and climate change. Even amidst the evidence of this devastation, Essick remains sympathetic to the people caught up in the story: the hard rock mining is done by people who deserve to be treated with dignity. This aspect of Essick’s approach gives his book a humanity that a simple environmental-warrior story would lack.

In only one place does Essick’s matter-of-fact approach break down. The story of climate change is deeply troubling, and he lets his pessimism and anger leak through. Although these feelings are not discussed directly, Essick, like many of us, is deeply frustrated by the lack of political will. And although the climate vignettes are too short to capture the issues, the failure of our society to act is laid out in plain sight.

The images are, without exception, stunning, and Essick has done about as well as is possible given the format. And therein lies my only real complaint about the book: I don’t really get on with coffee table books. As you may have guessed from my effusiveness above, I love the photography. The central theme of the book is strong and compelling. The images, combined with the vignettes, are individually evocative. But, as with all coffee table books, the individual stories lack a certain… something. A good short story is evocative and complete, while still telling a complex story. The vignettes in coffee table books, however, are more like extended captions. What I want instead is a good short story.

Read the entire story here.

Image: Fertilizer: it helps more than just the plants grow. Unfortunately, all that is green is not good for you. The myth that because farmers use the land they are environmentally conscious is just that: a myth. Courtesy of Peter Essick, from his book Our Beautiful, Fragile World.

2014: The Year of Big Stuff

new-years-eve-2013

Over the closing days of each year, or the first few days of the next, prognosticators the world over tell us about the future. No one, to date, has been proven to have prescient skills (despite what your psychic tells you), yet we all like to dabble in the art of prediction. Google’s Eric Schmidt has one big prediction for 2014: big. Everything will be big: big data, big genomics, even bigger smartphones and, of course, even bigger mistakes.

So, with that, a big Happy New Year to all our faithful readers and seers across our fragile and beautiful blue planet.

From the Guardian:

What does 2014 hold? According to Eric Schmidt, Google’s executive chairman, it means smartphones everywhere – and also the possibility of genetics data being used to develop new cures for cancer.

In an appearance on Bloomberg TV, Schmidt laid out his thoughts about general technological change, Google’s biggest mistake, and how Google sees the economy going in 2014.

“The biggest change for consumers is going to be that everyone’s going to have a smartphone,” Schmidt says. “And the fact that so many people are connected to what is essentially a supercomputer means a whole new generation of applications around entertainment, education, social life, those kinds of things. The trend has been that mobile is winning; it’s now won. There are more tablets and phones being sold than personal computers – people are moving to this new architecture very fast.”

It’s certainly true that tablets and smartphones are outselling PCs – in fact smartphones alone have been doing that since the end of 2010. This year, it’s forecast that tablets will have passed “traditional” PCs (desktops, fixed-keyboard laptops) too.

Disrupting business

Next, Schmidt says there’s a big change – a disruption – coming for business through the arrival of “big data”: “The biggest disruptor that we’re sure about is the arrival of big data and machine intelligence everywhere – so the ability [for businesses] to find people, to talk specifically to them, to judge them, to rank what they’re doing, to decide what to do with your products, changes every business globally.”

But he also sees potential in the field of genomics – the parsing of all the data being collected from DNA and gene sequencing. That might not be surprising, given that Google is an investor in 23andme, a gene sequencing company which aims to collect the genomes of a million people so that it can do data-matching analysis on their DNA. (Unfortunately, that plan has hit a snag: 23andme has been told to cease operating by the US Food and Drug Administration because it has failed to respond to inquiries about its testing methods and publication of results.)

Here’s what Schmidt has to say on genomics: “The biggest disruption that we don’t really know what’s going to happen is probably in the genetics area. The ability to have personal genetics records and the ability to start gathering all of the gene sequencing into places will yield discoveries in cancer treatment and diagnostics over the next year that are unfathomably important.”

It may be worth mentioning that “we’ll find cures through genomics” has been the promise held up by scientists every year since the human genome was first sequenced. So far, it hasn’t happened – as much as anything because human gene variation is remarkably large, and there’s still a lot that isn’t known about the interaction between what appear to be non-functional parts of our DNA (which don’t seem to code for proteins) and the parts that do.

Biggest mistake

As for Google’s biggest past mistake, Schmidt says it’s missing the rise of Facebook and Twitter: “At Google the biggest mistake that I made was not anticipating the rise of the social networking phenomenon – not a mistake we’re going to make again. I guess in our defence we’re working on many other things, but we should have been in that area, and I take responsibility for that.” The results of that effort to catch up can be seen in the way that Google+ is popping up everywhere – though it’s wrong to think of Google+ as a social network, since it’s more of a way that Google creates a substrate on the web to track individuals.

And what is Google doing in 2014? “Google is very much investing, we’re hiring globally, we see strong growth all around the world with the arrival of the internet everywhere. It’s all green in that sense from the standpoint of the year. Google benefits from transitions from traditional industries, and shockingly even when things are tough in a country, because we’re “return-on-investment”-based advertising – it’s smarter to move your advertising from others to Google, so we win no matter whether the industries are in good shape or not, because people need our services, we’re very proud of that.”

For Google, the sky’s the limit: “the key limiter on our growth is our rate of innovation, how smart are we, how clever are we, how quickly can we get these new systems deployed – we want to do that as fast as we can.”

It’s worth noting that Schmidt has a shaky track record on predictions. At Le Web in 2011 he famously forecast that developers would be shunning iOS to start developing on Android first, and that Google TV would be installed on 50% of all TVs on sale by summer 2012.

It didn’t turn out that way: even now, many apps start on iOS, and Google TV fizzled out as companies such as Logitech found that it didn’t work well enough to tempt buyers.

Since then, Schmidt has been a lot more cautious about predicting trends and changes – although he hasn’t been above the occasional comment which seems calculated to get a rise from his audience, such as telling executives at a Gartner conference that Android was more secure than the iPhone – which they apparently found humorous.

Read the entire article here.

Image: Happy New Year, 2014 Google doodle. Courtesy of Google.