Tag Archives: ideas

Clowns, Ducks and Dancing Girls

OK, OK. I’ve had to break my own rule (again). You know, the one that states that I’m not supposed to write about politics. The subject is far too divisive, I’m told. However, as a US-based Brit, and hence a somewhat removed observer — though I can actually vote — I cannot stay on the sidelines.


For US politics, with its never-ending election season, is a process that must be observed, studied, dissected and savored. After all, it’s not really politics — it’s a hysterically entertaining reality TV show complete with dancing girls, duck hunting, character assassination, clowns, demagogues, guns, hypocrisy, plaid shirts, lies and so much more. Best of all, there are no policies or substantive ideas of any kind; just pure entertainment. Netflix should buy the exclusive rights!


Image, top: Phil Robertson, star of the Duck Dynasty reality TV show, says Cruz is the man for the job because he is godly, loves America, and is willing to kill a duck to make gumbo soup. Courtesy of the Guardian.

Image, bottom: Political rally for Donald Trump featuring gyrating dancing girls and warnings to the “enemy”. Courtesy of Fox News.

Myth Busting Silicon(e) Valley


Question: what do silicone implants and Silicon Valley have in common?  Answer: they are both instruments of a grandiose illusion. The first, on a mostly personal level, promises eternal youth and vigor; the second, on a much grander scale, promises eternal wealth and greatness for humanity.

So, let’s leave aside the human cosmetic question for another time and concentrate on the broad deception that is current Silicon Valley. It’s a deception at many different levels —  self-deception of Silicon Valley’s young geeks and code jockeys, and the wider delusion that promises us all a glittering future underwritten by rapturous tech.

And what better way to debunk the myths that envelop the Valley like San Francisco’s fog than to turn to Sam Biddle, former editor of Valleywag? He offers a scathing critique, which happens to be spot on. Quite rightly he asks if we need yet another urban, on-demand laundry app and what on earth is the value to society of “Yo”? But more importantly, he asks us to reconsider our misplaced awe and to knock Silicon Valley from its perch of self-fulfilling self-satisfaction. Yo and Facebook and Uber and Clinkle and Ringly and DogVacay and WhatsApp and the thousands of other trivial start-ups — despite astronomical valuations — will not be humanity’s saviors. We need better ideas and deeper answers.

From GQ:

I think my life is better because of my iPhone. Yours probably is, too. I’m grateful to live in a time when I can see my baby cousins or experience any album ever without getting out of bed. I’m grateful that I will literally never be lost again, so long as my phone has battery. And I’m grateful that there are so many people so much smarter than I am who devise things like this, which are magical for the first week they show up, then a given in my life a week later.

We live in an era of technical ability that would have nauseated our ancestors with wonder, and so much of it comes from one very small place in California. But all these unimpeachable humanoid upgrades—the smartphones, the Google-gifted knowledge—are increasingly the exception, rather than the rule, of Silicon Valley’s output. What was once a land of upstarts and rebels is now being led by the money-hungry and the unspirited. Which is why we have a start-up that mails your dog curated treats and an app that says “Yo.” The brightest minds in tech just lately seem more concerned with silly business ideas and innocuous “disruption,” all for the shot at an immense payday. And when our country’s smartest people are working on the dumbest things, we all lose out.

That gap between the Silicon Valley that enriches the world and the Silicon Valley that wastes itself on the trivial is widening daily. And one of the biggest contributing factors is that the Valley has lost touch with reality by subscribing to its own self-congratulatory mythmaking. That these beliefs are mostly baseless, or at least egotistically distorted, is a problem—not just for Silicon Valley but for the rest of us. Which is why we’re here to help the Valley tear down its own myths—these seven in particular.

Myth #1: Silicon Valley Is the Universe’s Only True Meritocracy

 Everyone in Silicon Valley has convinced himself he’s helped create a free-market paradise, the software successor to Jefferson’s brotherhood of noble yeomen. “Silicon Valley has this way of finding greatness and supporting it,” said a member of Greylock Partners, a major venture-capital firm with over $2 billion under management. “It values meritocracy more than anyplace else.” After complaints of the start-up economy’s profound whiteness reached mainstream discussion just last year, companies like Apple, Facebook, and Twitter reluctantly released internal diversity reports. The results were as homogenized as expected: At Twitter, 79 percent of the leadership is male and 72 percent of it is white. At Facebook, senior positions are 77 percent male and 74 percent white. Twitter—a company whose early success can be directly attributed to the pioneering downloads of black smartphone users—hosts an entirely white board of directors. It’s a pounding indictment of Silicon Valley’s corporate psyche that Mark Zuckerberg—a bourgeois white kid from suburban New York who attended Harvard—is considered the Horatio Alger 2.0 paragon. When Paul Graham, the then head of the massive start-up incubator Y Combinator, told The New York Times that he could “be tricked by anyone who looks like Mark Zuckerberg,” he wasn’t just talking about Zuck’s youth.

If there’s any reassuring news, it’s not that tech’s diversity crisis is getting better, but that in the face of so much dismal news, people are becoming angry enough and brave enough to admit that the state of things is not good. Silicon Valley loves data, after all, and with data readily demonstrating tech’s overwhelming white-guy problem, even the true believers in meritocracy see the circumstances as they actually are.

Earlier this year, Ellen Pao became the most mentioned name in Silicon Valley as her gender-discrimination suit against her former employer, Kleiner Perkins Caufield & Byers, played out in court. Although the jury sided with the legendary VC firm, the Pao case was a watershed moment, bringing sunlight and national scrutiny to the issue of unchecked Valley sexism. For every defeated Ellen Pao, we can hope there are a hundred other female technology workers who feel new courage to speak up against wrongdoing, and a thousand male co-workers and employers who’ll reconsider their boys’-club bullshit. But they’ve got their work cut out for them.

Myth #4: School Is for Suckers, Just Drop Out

 Every year PayPal co-founder, investor-guru, and rabid libertarian Peter Thiel awards a small group of college-age students the Thiel Fellowship, a paid offer to either drop out or forgo college entirely. In exchange, the students receive money, mentorship, and networking opportunities from Thiel as they pursue a start-up of their choice. We’re frequently reminded of the tech titans of industry who never got a degree—Steve Jobs, Bill Gates, and Mark Zuckerberg are the most cited, though the fact that they’re statistical exceptions is an aside at best. To be young in Silicon Valley is great; to be a young dropout is golden.

The virtuous dropout hasn’t just made college seem optional for many aspiring barons—formal education is now excoriated in Silicon Valley as an obsolete system dreamed up by people who’d never heard of photo filters or Snapchat. Mix this cynicism with the libertarian streak many tech entrepreneurs carry already and you’ve got yourself a legit anti-education movement.

And for what? There’s no evidence that avoiding a conventional education today grants business success tomorrow. The gifted few who end up dropping out and changing tech history would have probably changed tech history anyway—you can’t learn start-up greatness by refusing to learn in a college classroom. And given that most start-ups fail, do we want an appreciable segment of bright young people gambling so heavily on being the next Zuck? More important, do we want an economy of CEOs who never had to learn to get along with their dorm-mates? Who never had the opportunity to grow up and figure out how to be a human being functioning in society? Who went straight from a bedroom in their parents’ house to an incubator that paid for their meals? It’s no wonder tech has an antisocial rep.

Myth #7: Silicon Valley Is Saving the World

Two years ago an online list of “57 start-up lessons” made its way through the coder community, bolstered by a co-sign from Paul Graham. “Wow, is this list good,” he commented. “It has the kind of resonance you only get when you’re writing from a lot of hard experience.” Among the platitudinous menagerie was this gem: “If it doesn’t augment the human condition for a huge number of people in a meaningful way, it’s not worth doing.” In a mission statement published on Andreessen Horowitz’s website, Marc Andreessen claimed he was “looking for the companies who are going to be the big winners because they are going to cause a fundamental change in the world.” The firm’s portfolio includes Ringly (maker of rings that light up when your phone does something), Teespring (custom T-shirts), DogVacay (pet-sitters on demand), and Hem (the zombified corpse of the furniture store Fab.com). Last year, wealthy Facebook alum Justin Rosenstein told a packed audience at TechCrunch Disrupt, “We in this room, we in technology, have a greater capacity to change the world than the kings and presidents of even a hundred years ago.” No one laughed, even though Rosenstein’s company, Asana, sells instant-messaging software.

 This isn’t just a matter of preening guys in fleece vests building giant companies predicated on their own personal annoyances. It’s wasteful and genuinely harmful to have so many people working on such trivial projects (Clinkle and fucking Yo) under the auspices of world-historical greatness. At one point recently, there were four separate on-demand laundry services operating in San Francisco, each no doubt staffed by smart young people who thought they were carving out a place of small software greatness. And yet for every laundry app, there are smart people doing smart, valuable things: Among the most recent batch of Y Combinator start-ups featured during March’s “Demo Day” were Diassess (twenty-minute HIV tests), Standard Cyborg (3D-printed limbs), and Atomwise (using supercomputing to develop new medical compounds). Those start-ups just happen to be sharing desk space at the incubator with “world changers” like Lumi (easy logo printing) and Underground Cellar (“curated, limited-edition wines with a twist”).

Read the entire article here.

Map: Silicon Valley, CA. Courtesy of Google.

 

Innovating the Disruption Or Disrupting the Innovation

Corporate America has a wonderful knack for embracing a meaningful idea and then overusing it to such an extent that it becomes thoroughly worthless. Until recently, every advertiser, every manufacturer, every service shamelessly promoted itself as an innovator. Everything a company did was driven by innovation: employees succeeded by innovating; the CEO was innovation incarnate; products were innovative; new processes drove innovation — in fact, the processes themselves were innovative. Any business worth its salt produced completely innovative stuff, from cupcakes to tires, from hair color to drill bits, from paper towels to hoses. And consequently this overwhelming ocean of innovation — which upon closer inspection actually isn’t real innovation — becomes worthless, underwhelming drivel.

So, what next for corporate America? Well, latch on to the next meme of course — disruption. Yawn.

From NPR/TED:

HBO’s Silicon Valley is back, with its pitch-perfect renderings of the culture and language of the tech world — like at the opening of the “Disrupt” startup competition run by the Tech Crunch website at the end of last season. “We’re making the world a better place through scalable fault-tolerant distributed databases” — the show’s writers didn’t have to exercise their imagination much to come up with those little arias of geeky self-puffery, or with the name Disrupt, which, as it happens, is what the Tech Crunch conferences are actually called. As is most everything else these days. “Disrupt” and “disruptive” are ubiquitous in the names of conferences, websites, business school degree programs and business book best-sellers. The words pop up in more than 500 TED Talks: “How to Avoid Disruption in Business and in Life,” “Embracing Disruption,” “Disrupting Higher Education,” “Disrupt Yourself.” It transcends being a mere buzzword. As the philosopher Jeremy Bentham said two centuries ago, there is a point where jargon becomes a species of the sublime.

 To give “disruptive” its due, it actually started its life with some meat on its bones. It was popularized in a 1997 book by Clayton Christensen of the Harvard Business School. According to Christensen, the reason why established companies fail isn’t that they don’t keep up with new technologies, but that their business models are disrupted by scrappy, bottom-fishing startups that turn out stripped-down versions of existing products at prices the established companies can’t afford to match. That’s what created an entry point for “disruptive innovations” like the Model T Ford, Craigslist classifieds, Skype and no-frills airlines.

Christensen makes a nice point. Sometimes you can get the world to beat a path to your door by building a crappier mousetrap, too, if you price it right. Some scholars have raised questions about that theory, but it isn’t the details of the story that have put “disruptive” on everybody’s lips; it’s the word itself. Buzzwords feed off their emotional resonances, not their ideas. And for pure resonance, “disruptive” is hard to beat. It’s a word with deep roots. I suspect I first encountered it when my parents read me the note that the teacher pinned to my sweater when I was sent home from kindergarten. Or maybe it reminds you of the unruly kid who was always pushing over the juice table. One way or another, the word evokes obstreperous rowdies, the impatient people who are always breaking stuff. It says something that “disrupt” is from the Latin for “shatter.”

Disrupt or be disrupted. The consultants and business book writers have proclaimed that as the chronic condition of the age, and everybody is clamoring to be classed among the disruptors rather than the disruptees. The lists of disruptive companies in the business media include not just Amazon and Uber but also Procter and Gamble and General Motors. What company nowadays wouldn’t claim to be making waves? It’s the same with that phrase “disruptive technologies.” That might be robotics or next-generation genomics, sure. But CNBC also touts the disruptive potential of an iPhone case that converts to a video game joystick.

These days, people just use “disruptive” to mean shaking things up, though unlike my kindergarten teacher, they always infuse a note of approval. As those Tech Crunch competitors assured us, disruption makes the world a better place. Taco Bell has created a position called “Resident Disruptor,” and not to be outdone, McDonald’s is running radio ads describing its milkshake blenders as a disruptive technology. Well, OK, blenders really do shake things up. But by the time a tech buzzword has been embraced by the fast food chains, it’s getting a little frayed at the edges. “Disruption” was never really a new idea in the first place, just a new name for a fact of life as old as capitalism. Seventy years ago the economist Joseph Schumpeter was calling it the “gales of creative destruction,” and he just took the idea from Karl Marx.

Read the entire story here.

Why, Not What

Great leaders, be they individuals, organizations or companies, share a simple and yet common trait. Ethnographer Simon Sinek tells us what sets great leaders apart — think Wright brothers, Martin Luther King, Apple — and why some ideas take root and others don’t.

[tube]qp0HIF3SfI4[/tube]

Las Vegas, Tianducheng and Paris: Cultural Borrowing

These three locations in Nevada, China (near Hangzhou) and Paris, France, have something in common. People the world over travel to these three places to see what they share. But only one has the original. In this case, we’re talking about the Eiffel Tower.

Now, this architectural grand theft is the subject of a lengthy debate — the merits of mimicry on a vast scale. There is even a fascinating coffee-table-sized book dedicated to this growing trend: Original Copies: Architectural Mimicry in Contemporary China, by Bianca Bosker.

Interestingly, the copycat trend only seems worrisome if those doing the copying are in a powerful and growing nation, and the copying is done on a national scale, perhaps for some form of cultural assimilation. After all, we don’t hear similar cries when developers put up a copy of Venice in Las Vegas — that’s just for entertainment, we are told.

Yet haven’t civilizations borrowed, and stolen, ideas both good and bad throughout the ages? The answer of course is an unequivocal yes. Humans are avaricious collectors of memes that work — it’s more efficient to borrow than to invent. The Greeks borrowed from the Egyptians; the Romans borrowed from the Greeks; the Turks borrowed from the Romans; the Arabs borrowed from the Turks; the Spanish from the Arabs, the French from the Spanish, the British from the French, and so on. Of course what seems to be causing a more recent stir is that China is doing the borrowing, and on such a rapid and grand scale — the nation is copying not just buildings (and most other products) but entire urban landscapes. However, this is one way that empires emerge and evolve. In this case, China’s acquisitive impulses could, perhaps, be tempered if most nations of the world borrowed less from the Chinese — money that is. But that’s another story.

[div class=attrib]From the Atlantic:[end-div]

The latest and most famous case of Chinese architectural mimicry doesn’t look much like its predecessors. On December 28, German news weekly Der Spiegel reported that the Wangjing Soho, Zaha Hadid’s soaring new office and retail development under construction in Beijing, is being replicated, wall for wall and window for window, in Chongqing, a city in central China.

To most outside observers, this bold and quickly commissioned counterfeit represents a familiar form of piracy. In fashion, technology, and architecture, great ideas trickle down, often against the wishes of their progenitors. But in China, architectural copies don’t usually ape the latest designs.

In the vast space between Beijing and Chongqing lies a whole world of Chinese architectural simulacra that quietly aspire to a different ideal. In suburbs around China’s booming cities, developers build replicas of towns like Hallstatt, Austria and Dorchester, England. Individual homes and offices, too, are designed to look like Versailles or the Chrysler Building. The most popular facsimile in China is the White House. The fastest-urbanizing country in history isn’t scanning design magazines for inspiration; it’s watching movies.

At Beijing’s Palais de Fortune, two hundred chateaus sit behind gold-tipped fences. At Chengdu’s British Town, pitched roofs and cast-iron street lamps dot the streets. At Shanghai’s Thames Town, a Gothic cathedral has become a tourist attraction in itself. Other developments have names like “Top Aristocrat” (Beijing), “the Garden of Monet” (Shanghai), and “Galaxy Dante” (Shenzhen).

Architects and critics within and beyond China have treated these derivative designs with scorn, as shameless kitsch or simply trash. Others cite China’s larger knock-off culture, from handbags to housing, as evidence of the innovation gap between China and the United States. For a larger audience on the Internet, they are merely a punchline, another example of China’s endlessly entertaining wackiness.

In short, the majority of Chinese architectural imitation, oozing with historical romanticism, is not taken seriously.

But perhaps it ought to be.

In Original Copies: Architectural Mimicry in Contemporary China, the first detailed book on the subject, Bianca Bosker argues that the significance of these constructions has been unfairly discounted. Bosker, a senior technology editor at the Huffington Post, has been visiting copycat Chinese villages for some six years, and in her view, these distorted impressions of the West offer a glance at the hopes, dreams and contradictions of China’s middle class.

“Clearly there’s an acknowledgement that there’s something great about Paris,” says Bosker. “But it’s also: ‘We can do it ourselves.'”

Armed with firsthand observation, field research, interviews, and a solid historical background, Bosker’s book is an attempt to change the way we think about Chinese duplitecture. “We’re seeing the Chinese dream in action,” she says. “It has to do with this ability to take control of your life. There’s now this plethora of options to choose from.” That is something new in China, as is the role that private enterprise is taking in molding built environments that will respond to people’s fantasies.

While the experts scoff, the people who build and inhabit these places are quite proud of them. As the saying goes, “The way to live best is to eat Chinese food, drive an American car, and live in a British house. That’s the ideal life.” The Chinese middle class is living in Orange County, Beijing, the same way you listen to reggae music or lounge in Danish furniture.

In practice, though, the depth and scale of this phenomenon has few parallels. No one knows how many facsimile communities there are in China, but the number is increasing every day. “Every time I go looking for more,” Bosker says, “I find more.”

How many are there?

“At least hundreds.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Tianducheng, 13th arrondissement, Paris in China. Courtesy of Bianca Bosker/University of Hawaii Press.[end-div]

Mourning the Lost Art of Handwriting

In this age of digital everything, handwriting still matters. Some of you may even still have a treasured fountain pen. Novelist Philip Hensher suggests why handwriting has import and value in his new book, The Missing Ink.

[div class=attrib]From the Guardian:[end-div]

About six months ago, I realised that I had no idea what the handwriting of a good friend of mine looked like. I had known him for over a decade, but somehow we had never communicated using handwritten notes. He had left voice messages for me, emailed me, sent text messages galore. But I don’t think I had ever had a letter from him written by hand, a postcard from his holidays, a reminder of something pushed through my letter box. I had no idea whether his handwriting was bold or crabbed, sloping or upright, italic or rounded, elegant or slapdash.

It hit me that we are at a moment when handwriting seems to be about to vanish from our lives altogether. At some point in recent years, it has stopped being a necessary and inevitable intermediary between people – a means by which individuals communicate with each other, putting a little bit of their personality into the form of their message as they press the ink-bearing point on to the paper. It has started to become just one of many options, and often an unattractive, elaborate one.

For each of us, the act of putting marks on paper with ink goes back as far as we can probably remember. At some point, somebody comes along and tells us that if you make a rounded shape and then join it to a straight vertical line, that means the letter “a”, just like the ones you see in the book. (But the ones in the book have a little umbrella over the top, don’t they? Never mind that, for the moment: this is how we make them for ourselves.) If you make a different rounded shape, in the opposite direction, and a taller vertical line, then that means the letter “b”. Do you see? And then a rounded shape, in the same direction as the first letter, but not joined to anything – that makes a “c”. And off you go.

Actually, I don’t think I have any memory of this initial introduction to the art of writing letters on paper. Our handwriting, like ourselves, seems always to have been there.

But if I don’t have any memory of first learning to write, I have a clear memory of what followed: instructions in refinements, suggestions of how to purify the forms of your handwriting.

You longed to do “joined-up writing”, as we used to call the cursive hand when we were young. Instructed in print letters, I looked forward to the ability to join one letter to another as a mark of huge sophistication. Adult handwriting was unreadable, true, but perhaps that was its point. I saw the loops and impatient dashes of the adult hand as a secret and untrustworthy way of communicating that one day I would master.

There was, also, wanting to make your handwriting more like other people’s. Often, this started with a single letter or figure. In the second year at school, our form teacher had a way of writing a 7 in the European way, with a cross-bar. A world of glamour and sophistication hung on that cross-bar; it might as well have had a beret on, be smoking Gitanes in the maths cupboard.

Your hand is formed by aspiration to the hand of others – by the beautiful italic strokes of a friend which seem altogether wasted on a mere postcard, or a note on your door reading “Dropped by – will come back later”. It’s formed, too, by anti-aspiration, the desire not to be like Denise in the desk behind who reads with her mouth open and whose writing, all bulging “m”s and looping “p”s, contains the atrocity of a little circle on top of every i. Or still more horrible, on occasion, usually when she signs her name, a heart. (There may be men in the world who use a heart-shaped jot, as the dot over the i is called, but I have yet to meet one. Or run a mile from one.)

Those other writing apparatuses, mobile phones, occupy a little bit more of the same psychological space as the pen. Ten years ago, people kept their mobile phone in their pockets. Now, they hold them permanently in their hand like a small angry animal, gazing crossly into our faces, in apparent need of constant placation. Clearly, people do regard their mobile phones as, in some degree, an extension of themselves. And yet we have not evolved any of those small, pleasurable pieces of behaviour towards them that seem so ordinary in the case of our pens. If you saw someone sucking one while they thought of the next phrase to text, you would think them dangerously insane.

We have surrendered our handwriting for something more mechanical, less distinctively human, less telling about ourselves and less present in our moments of the highest happiness and the deepest emotion. Ink runs in our veins, and shows the world what we are like. The shaping of thought and written language by a pen, moved by a hand to register marks of ink on paper, has for centuries, millennia, been regarded as key to our existence as human beings. In the past, handwriting has been regarded as almost the most powerful sign of our individuality. In 1847, in an American case, a witness testified without hesitation that a signature was genuine, though he had not seen an example of the handwriting for 63 years: the court accepted his testimony.

Handwriting is what registers our individuality, and the mark which our culture has made on us. It has been seen as the unknowing key to our souls and our innermost nature. It has been regarded as a sign of our health as a society, of our intelligence, and as an object of simplicity, grace, fantasy and beauty in its own right. Yet at some point, the ordinary pleasures and dignity of handwriting are going to be replaced permanently.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Stipula fountain pen. Courtesy of Wikipedia.[end-div]

Innovation Before Its Time

Product driven companies, inventors from all backgrounds and market researchers have long studied how some innovations take off while others fizzle. So, why do some innovations gain traction? Given two similar but competing inventions, what factors lead to one eclipsing the other? Why do some pioneering ideas and inventions fail only to succeed from a different instigator years, sometimes decades, later? Answers to these questions would undoubtedly make many inventors household names, but as is the case in most human endeavors, the process of innovation is murky and more of an art than a science.

Author and columnist Matt Ridley offers some possible answers to the conundrum.

[div class=attrib]From the Wall Street Journal:[end-div]

Bill Moggridge, who invented the laptop computer in 1982, died last week. His idea of using a hinge to attach a screen to a keyboard certainly caught on big, even if the first model was heavy, pricey and equipped with just 340 kilobytes of memory. But if Mr. Moggridge had never lived, there is little doubt that somebody else would have come up with the idea.

The phenomenon of multiple discovery is well known in science. Innovations famously occur to different people in different places at the same time. Whether it is calculus (Newton and Leibniz), or the planet Neptune (Adams and Le Verrier), or the theory of natural selection (Darwin and Wallace), or the light bulb (Edison, Swan and others), the history of science is littered with disputes over bragging rights caused by acts of simultaneous discovery.

As Kevin Kelly argues in his book “What Technology Wants,” there is an inexorability about technological evolution, expressed in multiple discovery, that makes it look as if technological innovation is an autonomous process with us as its victims rather than its directors.

Yet some inventions seem to have occurred to nobody until very late. The wheeled suitcase is arguably such a, well, case. Bernard Sadow applied for a patent on wheeled baggage in 1970, after a Eureka moment when he was lugging his heavy bags through an airport while a local worker effortlessly pushed a large cart past. You might conclude that Mr. Sadow was decades late. There was little to stop his father or grandfather from putting wheels on bags.

Mr. Sadow’s bags ran on four wheels, dragged on a lead like a dog. Seventeen years later a Northwest Airlines pilot, Robert Plath, invented the idea of two wheels on a suitcase held vertically, plus a telescopic handle to pull it with. This “Rollaboard,” now ubiquitous, also feels as if it could have been invented much earlier.

Or take the can opener, invented in the 1850s, eight decades after the can. Early 19th-century soldiers and explorers had to make do with stabbing bayonets into food cans. “Why doesn’t somebody come up with a wheeled cutter?” they must have muttered (or not) as they wrenched open the cans.

Perhaps there’s something that could be around today but hasn’t been invented and that will seem obvious to future generations. Or perhaps not. It’s highly unlikely that brilliant inventions are lying on the sidewalk ignored by the millions of entrepreneurs falling over each other to innovate. Plenty of terrible ideas are tried every day.

Understanding why inventions take so long may require mentally revisiting a long-ago time. For a poorly paid Napoleonic soldier who already carried a decent bayonet, adding a can opener to his limited kitbag was probably a waste of money and space. Indeed, going back to wheeled bags, if you consider the abundance of luggage porters with carts in the 1960s, the ease of curbside drop-offs at much smaller airports and the heavy iron casters then available, 1970 seems about the right date for the first invention of rolling luggage.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Joseph Swan, inventor of the incandescent light bulb, which was first publicly demonstrated on 18 December 1878. Courtesy of Wikipedia.[end-div]

Old Concepts Die Hard

Regardless of how flawed old scientific concepts may be, researchers have found that it is remarkably difficult for people to give them up and accept sound new reasoning. Even scientists are creatures of habit.

[div class=attrib]From Scientific American:[end-div]

In one sense, science educators have it easy. The things they describe are so intrinsically odd and interesting — invisible fields, molecular machines, principles explaining the unity of life and origins of the cosmos — that much of the pedagogical attention-getting is built right in.  Where they have it tough, though, is in having to combat an especially resilient form of higher ed’s nemesis: the aptly named (if irredeemably clichéd) ‘preconceived idea.’ Worse than simple ignorance, naïve ideas about science lead people to make bad decisions with confidence. And in a world where many high-stakes issues fundamentally boil down to science, this is clearly a problem.

Naturally, the solution to the problem lies in good schooling — emptying minds of their youthful hunches and intuitions about how the world works, and repopulating them with sound scientific principles that have been repeatedly tested and verified. Wipe out the old operating system, and install the new. According to a recent paper by Andrew Shtulman and Joshua Valcarcel, however, we may not be able to replace old ideas with new ones so cleanly. Although science as a field discards theories that are wrong or lacking, Shtulman and Valcarcel’s work suggests that individuals —even scientifically literate ones — tend to hang on to their early, unschooled, and often wrong theories about the natural world. Even long after we learn that these intuitions have no scientific support, they can still subtly persist and influence our thought process. Like old habits, old concepts seem to die hard.

Testing for the persistence of old concepts can’t be done directly. Instead, one has to set up a situation in which old concepts, if present, measurably interfere with mental performance. To do this, Shtulman and Valcarcel designed a task that tested how quickly and accurately subjects verified short scientific statements (for example: “air is composed of matter.”). In a clever twist, the authors interleaved two kinds of statements — “consistent” ones that had the same truth-value under a naive theory and a proper scientific theory, and “inconsistent” ones. For example, the statement “air is composed of matter”  is inconsistent: it’s false under a naive theory (air just seems like empty space, right?), but is scientifically true. By contrast, the statement “people turn food into energy” is consistent: anyone who’s ever eaten a meal knows it’s true, and science affirms this by filling in the details about digestion, respiration and metabolism.

Shtulman and Valcarcel tested 150 college students on a battery of 200 such statements that included an equal and random mix of consistent and inconsistent statements from several domains, including astronomy, evolution, physiology, genetics, waves, and others. The scientists measured participants’ response speed and accuracy, and looked for systematic differences in how consistent vs. inconsistent statements were evaluated.

If scientific concepts, once learned, are fully internalized and don’t conflict with our earlier naive concepts, one would expect consistent and inconsistent statements to be processed similarly. On the other hand, if naive concepts are never fully supplanted, and are quietly threaded into our thought process, it should take longer to evaluate inconsistent statements. In other words, it should take a bit of extra mental work (and time) to go against the grain of a naive theory we once held.

This is exactly what Shtulman and Valcarcel found. While there was some variability between the different domains tested, inconsistent statements took almost a half second longer to verify, on average. Granted, there’s a significant wrinkle in interpreting this result. Specifically, it may simply be the case that scientific concepts that conflict with naive intuition are simply learned more tenuously than concepts that are consistent with our intuition. Under this view, differences in response times aren’t necessarily evidence of ongoing inner conflict between old and new concepts in our brains — it’s just a matter of some concepts being more accessible than others, depending on how well they were learned.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of New Scientist.[end-div]

What Happened to TED?

No, not Ted Nugent or Ted Koppel or Ted Turner; we are talking about the TED.

Alex Pareene over at Salon offers a well-rounded critique of TED. TED is a global forum of “ideas worth spreading” centered on annual conferences, loosely woven around the themes of technology, entertainment and design (TED).

Richard Wurman started TED in 1984 as a self-congratulatory networking event for Silicon Valley insiders. Since changing hands in 2002, TED has grown into a worldwide brand — still self-congratulatory, only more exclusive. Currently, it costs $6,000 annually to be admitted to the elite idea-sharing club.

By way of background, TED’s mission statement follows:

We believe passionately in the power of ideas to change attitudes, lives and ultimately, the world. So we’re building here a clearinghouse that offers free knowledge and inspiration from the world’s most inspired thinkers, and also a community of curious souls to engage with ideas and each other.

[div class=attrib]From Salon:[end-div]

There was a bit of a scandal last week when it was reported that a TED Talk on income inequality had been censored. That turned out to be not quite the entire story. Nick Hanauer, a venture capitalist with a book out on income inequality, was invited to speak at a TED function. He spoke for a few minutes, making the argument that rich people like himself are not in fact job creators and that they should be taxed at a higher rate.

The talk seemed reasonably well-received by the audience, but TED “curator” Chris Anderson told Hanauer that it would not be featured on TED’s site, in part because the audience response was mixed but also because it was too political and this was an “election year.”

Hanauer had his PR people go to the press immediately and accused TED of censorship, which is obnoxious — TED didn’t have to host his talk, obviously, and his talk was not hugely revelatory for anyone familiar with recent writings on income inequity from a variety of experts — but Anderson’s responses were still a good distillation of TED’s ideology.

In case you’re unfamiliar with TED, it is a series of short lectures on a variety of subjects that stream on the Internet, for free. That’s it, really, or at least that is all that TED is to most of the people who have even heard of it. For an elite few, though, TED is something more: a lifestyle, an ethos, a bunch of overpriced networking events featuring live entertainment from smart and occasionally famous people.

Before streaming video, TED was a conference — it is not named for a person, but stands for “technology, entertainment and design” — organized by celebrated “information architect” (fancy graphic designer) Richard Saul Wurman. Wurman sold the conference, in 2002, to a nonprofit foundation started and run by former publisher and longtime do-gooder Chris Anderson (not the Chris Anderson of Wired). Anderson grew TED from a woolly conference for rich Silicon Valley millionaire nerds to a giant global brand. It has since become a much more exclusive, expensive elite networking experience with a much more prominent public face — the little streaming videos of lectures.

It’s even franchising — “TEDx” events are licensed third-party TED-style conferences largely unaffiliated with TED proper — and while TED is run by a nonprofit, it brings in a tremendous amount of money from its members and corporate sponsorships. At this point TED is a massive, money-soaked orgy of self-congratulatory futurism, with multiple events worldwide, awards and grants to TED-certified high achievers, and a list of speakers that would cost a fortune if they didn’t agree to do it for free out of public-spiritedness.

According to a 2010 piece in Fast Company, the trade journal of the breathless bullshit industry, the people behind TED are “creating a new Harvard — the first new top-prestige education brand in more than 100 years.” Well! That’s certainly saying… something. (What it’s mostly saying is “This is a Fast Company story about some overhyped Internet thing.”)

To even attend a TED conference requires not just a donation of between $7,500 and $125,000, but also a complicated admissions process in which the TED people determine whether you’re TED material; so, as Maura Johnston says, maybe it’s got more in common with Harvard than is initially apparent.

Strip away the hype and you’re left with a reasonably good video podcast with delusions of grandeur. For most of the millions of people who watch TED videos at the office, it’s a middlebrow diversion and a source of factoids to use on your friends. Except TED thinks it’s changing the world, like if “This American Life” suddenly mistook itself for Doctors Without Borders.

The model for your standard TED talk is a late-period Malcolm Gladwell book chapter. Common tropes include:

  • Drastically oversimplified explanations of complex problems.
  • Technologically utopian solutions to said complex problems.
  • Unconventional (and unconvincing) explanations of the origins of said complex problems.

  • Staggeringly obvious observations presented as mind-blowing new insights.

What’s most important is a sort of genial feel-good sense that everything will be OK, thanks in large part to the brilliance and beneficence of TED conference attendees. (Well, that and a bit of Vegas magician-with-PowerPoint stagecraft.)

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Multi-millionaire Nick Hanauer delivers a speech at TED Talks. Courtesy of Time.[end-div]

Beautiful Explanations

Each year for the past 15 years, Edge has posed a weighty question to a group of scientists, researchers, philosophers, mathematicians and thinkers. For 2012, Edge put the question “What Is Your Favorite Deep, Elegant, or Beautiful Explanation?” to 192 of our best and brightest. Back came 192 different and no less wonderful answers. We can post but a snippet here, so please visit the Edge, and then make a note to buy the book (it’s not available yet).

Read the entire article here.

The Mysterious Coherence Between Fundamental Physics and Mathematics
Peter Woit, Mathematical Physicist, Columbia University; Author, Not Even Wrong

Any first course in physics teaches students that the basic quantities one uses to describe a physical system include energy, momentum, angular momentum and charge. What isn’t explained in such a course is the deep, elegant and beautiful reason why these are important quantities to consider, and why they satisfy conservation laws. It turns out that there’s a general principle at work: for any symmetry of a physical system, you can define an associated observable quantity that comes with a conservation law:

1. The symmetry of time translation gives energy
2. The symmetries of spatial translation give momentum
3. Rotational symmetry gives angular momentum
4. Phase transformation symmetry gives charge
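The general principle Woit alludes to is Noether’s theorem. As a minimal worked case (a single coordinate q and a Lagrangian L(q, q̇) with no explicit time dependence), the conserved quantity associated with time-translation symmetry is the energy:

\[
E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L,
\qquad
\frac{dE}{dt} = \dot{q}\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q}\right) = 0,
\]

where the right-hand side vanishes because of the Euler-Lagrange equation of motion. The other three conservation laws in the list follow from the same theorem applied to spatial translations, rotations and phase transformations.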

 

Einstein Explains Why Gravity Is Universal
Sean Carroll, Theoretical Physicist, Caltech; Author, From Eternity to Here: The Quest for the Ultimate Theory of Time

The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the ground. But a thought experiment by Galileo pointed out a flaw. Imagine taking the piece of paper and tying it to the stone. Together, the new system is heavier than either of its components, and should fall faster. But in reality, the piece of paper slows down the descent of the stone.

Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren’t for the interference of air resistance. Apollo 15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.

Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way, negative charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.
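A short worked comparison, in ordinary Newtonian terms, makes the contrast concrete. For a test body of mass m (and charge q) near a large mass M, or placed in an electric field E:

\[
ma = \frac{GMm}{r^{2}} \;\Rightarrow\; a = \frac{GM}{r^{2}},
\qquad\text{whereas}\qquad
ma = qE \;\Rightarrow\; a = \frac{q}{m}\,E.
\]

The gravitational acceleration is independent of the falling body because its mass cancels from both sides; the electrical acceleration depends on the charge-to-mass ratio q/m, which varies from particle to particle.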

Thinking about this problem led Albert Einstein to what he called “the happiest thought of my life.” Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside world. If the ship were far away from any stars or planets, everything inside would be in free fall; there would be no gravitational field to push them around. But put the ship in orbit around a massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away from any other one. Sticking just to what is observed inside the spaceship, there’s no way we could detect the existence of gravity.

 

True or False: Beauty Is Truth
Judith Rich Harris, Independent Investigator and Theoretician; Author, The Nurture Assumption; No Two Alike: Human Nature and Human Individuality

“Beauty is truth, truth beauty,” said John Keats. But what did he know? Keats was a poet, not a scientist.

In the world that scientists inhabit, truth is not always beautiful or elegant, though it may be deep. In fact, it’s my impression that the deeper an explanation goes, the less likely it is to be beautiful or elegant.

Some years ago, the psychologist B. F. Skinner proposed an elegant explanation of “the behavior of organisms,” based on the idea that rewarding a response—he called it reinforcement—increases the probability that the same response will occur again in the future. The theory failed, not because it was false (reinforcement generally does increase the probability of a response) but because it was too simple. It ignored innate components of behavior. It couldn’t even handle all learned behavior. Much behavior is acquired or shaped through experience, but not necessarily by means of reinforcement. Organisms learn different things in different ways.

 

The Power Of One, Two, Three
Charles Seife, Professor of Journalism, New York University; formerly journalist, Science Magazine; Author, Proofiness: The Dark Arts of Mathematical Deception

Sometimes even the simple act of counting can tell you something profound.

One day, back in the late 1990s, when I was a correspondent for New Scientist magazine, I got an e-mail from a flack waxing rhapsodic about an extraordinary piece of software. It was a revolutionary data-compression program so efficient that it would squash every digital file by 95% or more without losing a single bit of data. Wouldn’t my magazine jump at the chance to tell the world about the computer program that will make their hard drives hold 20 times more information than ever before.

No, my magazine wouldn’t.

No such compression algorithm could possibly exist; it was the algorithmic equivalent of a perpetual motion machine. The software was a fraud.

The reason: the pigeonhole principle.
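For readers who want the counting argument spelled out, here is a minimal version. A lossless compressor must send distinct inputs to distinct outputs, but for inputs of N bits compressed by 95% or more:

\[
\#\{\text{possible inputs}\} = 2^{N},
\qquad
\#\{\text{outputs of at most } N/20 \text{ bits}\} = \sum_{k=0}^{\lfloor N/20 \rfloor} 2^{k} = 2^{\lfloor N/20 \rfloor + 1} - 1 \;\ll\; 2^{N}.
\]

With far fewer possible outputs than inputs, at least two different files must share the same compressed form, so no decompressor could recover both. A program that shrinks every file by 95% without losing a bit cannot exist.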

 

Watson and Crick Explain How DNA Carries Genetic Information
Gary Klein, Cognitive Psychologist; Author, Sources of Power; Streetlights and Shadows: Searching for Keys to Adaptive Decision Making

In 1953, when James Watson pushed around some two-dimensional cut-outs and was startled to find that an adenine-thymine pair had an isomorphic shape to the guanine-cytosine pair, he solved eight mysteries simultaneously. In that instant he knew the structure of DNA: a helix. He knew how many strands: two. It was a double helix. He knew what carried the information: the nucleic acids in the gene, not the protein. He knew what maintained the attraction: hydrogen bonds. He knew the arrangement: The sugar-phosphate backbone was on the outside and the nucleic acids were in the inside. He knew how the strands match: through the base pairs. He knew the arrangement: the two identical chains ran in opposite directions. And he knew how genes replicated: through a zipper-like process.

The discovery that Watson and Crick made is truly impressive, but I am also interested in what we can learn from the process by which they arrived at their discovery. On the surface, the Watson-Crick story fits in with five popular claims about innovation, as presented below. However, the actual story of their collaboration is more nuanced than these popular claims suggest.

It is important to have clear research goals. Watson and Crick had a clear goal, to describe the structure of DNA, and they succeeded.

But only the first two of their eight discoveries had to do with this goal. The others, arguably the most significant, were unexpected byproducts.