The Angry Letter, Not Sent

Most people over the age of 40 have probably written, and not sent, an angry letter.

The unsent letter may have been intended for a boss or an ex-boss. It may have been for a colleague or a vendor or a business associate. It may have been for your electrician or the plumber who failed to fix the problem. It may have been for a local restaurant that served up an experience far below your expectations; it may have been intended for Microsoft because your Windows XP laptop failed again, and this time you lost all your documents. We’ve all written an angry letter.

The angry letter has probably, for the most part, been replaced by the angry email — after all, you can still keep an email as a draft and not hit send. Younger generations may not be as fortunate — write an angry Facebook post or fire off a tweet and it’s sent, shared, gone. Thus, social network users may never realize what they are missing: the satisfaction of writing an angry letter, or email, and not sending it.

From NYT:

WHENEVER Abraham Lincoln felt the urge to tell someone off, he would compose what he called a “hot letter.” He’d pile all of his anger into a note, “put it aside until his emotions cooled down,” Doris Kearns Goodwin once explained on NPR, “and then write: ‘Never sent. Never signed.’ ” Which meant that Gen. George G. Meade, for one, would never hear from his commander in chief that Lincoln blamed him for letting Robert E. Lee escape after Gettysburg.

Lincoln was hardly unique. Among public figures who need to think twice about their choice of words, the unsent angry letter has a venerable tradition. Its purpose is twofold. It serves as a type of emotional catharsis, a way to let it all out without the repercussions of true engagement. And it acts as a strategic catharsis, an exercise in saying what you really think, which Mark Twain (himself a notable non-sender of correspondence) believed provided “unallowable frankness & freedom.”

Harry S. Truman once almost informed the treasurer of the United States that “I don’t think that the financial advisor of God Himself would be able to understand what the financial position of the Government of the United States is, by reading your statement.” In 1922, Winston Churchill nearly warned Prime Minister David Lloyd George that when it came to Iraq, “we are paying eight millions a year for the privilege of living on an ungrateful volcano out of which we are in no circumstances to get anything worth having.” Mark Twain all but chastised Russians for being too passive when it came to the czar’s abuses, writing, “Apparently none of them can bear to think of losing the present hell entirely, they merely want the temperature cooled down a little.”

But while it may be the unsent mail of politicians and writers that is saved for posterity, that doesn’t mean that they somehow hold a monopoly on the practice. Lovers carry on impassioned correspondence that the beloved never sees; family members vent their mutual frustrations. We rail against the imbecile who elbowed past us on the subway platform.

Personally, when I’m working on an article with an editor, I have a habit of using the “track changes” feature in Microsoft Word for writing retorts to suggested editorial changes. I then cool off and promptly delete the comments — and, usually, make the changes. (As far as I know, the uncensored me hasn’t made it into a final version.)

In some ways, little has changed in the art of the unsent letter since Lincoln thought better of excoriating Meade. We may have switched the format from paper to screen, but the process is largely the same. You feel angry. And you construct a retort — only to find yourself thinking better of taking it any further. Emotions cooled, you proceed in a more reasonable, and reasoned, fashion. It’s the opposite of the glib rejoinder that you think of just a bit too late and never quite get to say.


But it strikes me that in other, perhaps more fundamental, respects, the art of the unsent angry letter has changed beyond recognition in the world of social media. For one thing, the Internet has made the enterprise far more public. Truman, Lincoln and Churchill would file away their unsent correspondence. No one outside their inner circle would read what they had written. Now we have the option of writing what should have been our unsent words for all the world to see. There are threads on reddit and many a website devoted to those notes you’d send if only you were braver, not to mention the habit of sites like Thought Catalog of phrasing entire articles as letters that were never sent.

Want to express your frustration with your ex? Just submit a piece called “An Open Letter to the Girl I Loved and Lost,” and hope that she sees it and recognizes herself. You, of course, have taken none of the risk of sending it to her directly.

A tweet about “that person,” a post about “restaurant employees who should know better”; you put in just enough detail to make the insinuation fairly obvious, but not enough that, if caught, you couldn’t deny the whole thing. It’s public shaming with an escape hatch. Does knowing that we can expect a collective response to our indignation make it more satisfying?

Not really. Though we create a safety net, we may end up tangled all the same. We have more avenues to express immediate displeasure than ever before, and may thus find ourselves more likely to hit send or tweet when we would have done better to hit save or delete. The ease of venting drowns out the possibility of recanting, and the speed of it all prevents a deeper consideration of what exactly we should say and why, precisely, we should say it.

When Lincoln wanted to voice his displeasure, he had to find a secretary or, at the very least, a pen. That process alone was a way of exercising self-control — twice over. It allowed him not only to express his thoughts in private (so as not to express them by mistake in public), but also to determine which was which: the anger that should be voiced versus the anger that should be kept quiet.

Now we need only click a reply button to rattle off our displeasures. And in the heat of the moment, we find the line between an appropriate response and one that needs a cooling-off period blurring. We toss our reflexive anger out there, but we do it publicly, without the private buffer that once would have let us separate what needed to be said from what needed only to be felt. It’s especially true when we see similarly angry commentary coming from others. Our own fury begins to feel more socially appropriate.

We may also find ourselves feeling less satisfied. Because the angry email (or tweet or text or whatnot) takes so much less effort to compose than a pen-and-paper letter, it may in the end offer us a less cathartic experience, in just the same way that pressing the end call button on your cellphone will never be quite the same as slamming down an old-fashioned receiver.

Perhaps that’s why we see so much vitriol online, so many anonymous, bitter comments, so many imprudent tweets and messy posts. Because creating them is less cathartic, you feel the need to do it more often. When your emotions never quite cool, they keep coming out in other ways.

Read the entire article here.

Image courtesy the Guardian.


Ten Greatest Works


I would take issue with Jonathan Jones’ list of the ten best works of art ever, though some of his chosen artists are perhaps a fair representation of la crème de la crème — Rembrandt, da Vinci, Michelangelo and Velasquez for sure.

One work that clearly does belong in the list is Guernica. Picasso summed up the truth of fascism and war in this masterpiece.

See more of Jones’ top ten here.

Image: Guernica, Pablo Picasso, 1937. Museo Reina Sofía, Madrid. Courtesy of Wikipedia.

The Inflaton and the Multiverse


Last week’s announcement that cosmologists had found signals of primordial gravitational waves in the cosmic microwave background left over from the Big Bang made many headlines, even on cable news. If verified by separate experiments this will be ground-breaking news indeed — much like the discovery of the Higgs boson in 2012. Should the result stand, it may well pave the way for new physics and lend greater support to the multiverse theory. So, in addition to the notion that we may not be alone in the vast cosmos, we will now have to consider that our universe itself may not be alone either — just one among many in a sprawling multiverse.

From the New Scientist:

Wave hello to the multiverse? Ripples in the very fabric of the cosmos, unveiled this week, are allowing us to peer further back in time than anyone thought possible, showing us what was happening in the first slivers of a second after the big bang.

The discovery of these primordial waves could solidify the idea that our young universe went through a rapid growth spurt called inflation. And that theory is linked to the idea that the universe is constantly giving birth to smaller “pocket” universes within an ever-expanding multiverse.

The waves in question are called gravitational waves, and they appear in Einstein’s highly successful theory of general relativity (see “A surfer’s guide to gravitational waves”). On 17 March, scientists working with the BICEP2 telescope in Antarctica announced the first indirect detection of primordial gravitational waves. This version of the ripples was predicted to be visible in maps of the cosmic microwave background (CMB), the earliest light emitted in the universe, roughly 380,000 years after the big bang.

Repulsive gravity

The BICEP2 team had spent three years analysing CMB data, looking for a distinctive curling pattern called B-mode polarisation. These swirls indicate that the light of the CMB has been twisted, or polarised, into specific curling alignments. In two papers published online on the BICEP project website, the team said they have high confidence the B-mode pattern is there, and that they can rule out alternative explanations such as dust in our own galaxy, distortions caused by the gravity of other galaxies and errors introduced by the telescope itself. That suggests the swirls could have been left only by the very first gravitational waves being stretched out by inflation.

“If confirmed, this result would constitute the most important breakthrough in cosmology over the past 15 years. It will open a new window into the beginning of our universe and have fundamental implications for extensions of the standard model of physics,” says Avi Loeb at Harvard University. “If it is real, the signal will likely lead to a Nobel prize.”

And for some theorists, simply proving that inflation happened at all would be a sign of the multiverse.

“If inflation is there, the multiverse is there,” said Andrei Linde of Stanford University in California, who is not on the BICEP2 team and is one of the originators of inflationary theory. “Each observation that brings better credence to inflation brings us closer to establishing that the multiverse is real.” (Watch video of Linde being surprised with the news that primordial gravitational waves have been detected.)

The simplest models of inflation, which the BICEP2 results seem to support, require a particle called an inflaton to push space-time apart at high speed.

“Inflation depends on a kind of material that turns gravity on its head and causes it to be repulsive,” says Alan Guth at the Massachusetts Institute of Technology, another author of inflationary theory. Theory says the inflaton particle decays over time like a radioactive element, so for inflation to work, these hypothetical particles would need to last longer than the period of inflation itself. Afterwards, inflatons would continue to drive inflation in whatever pockets of the universe they inhabit, repeatedly blowing new universes into existence that then rapidly inflate before settling down. This “eternal inflation” produces infinite pocket universes to create a multiverse.

Quantum harmony

For now, physicists don’t know how they might observe the multiverse and confirm that it exists. “But when the idea of inflation was proposed 30 years ago, it was a figment of theoretical imagination,” says Marc Kamionkowski at Johns Hopkins University in Baltimore, Maryland. “What I’m hoping is that with these results, other theorists out there will start to think deeply about the multiverse, so that 20 years from now we can have a press conference saying we’ve found evidence of it.”

In the meantime, studying the properties of the swirls in the CMB might reveal details of what the cosmos was like just after its birth. The power and frequency of the waves seen by BICEP2 show that they were rippling through a particle soup with an energy of about 10^16 gigaelectronvolts, or 10 trillion times the peak energy expected at the Large Hadron Collider. At such high energies, physicists expect that three of the four fundamental forces in physics – the strong, weak and electromagnetic forces – would be merged into one.

The detection is also the first whiff of quantum gravity, one of the thorniest puzzles in modern physics. Right now, theories of quantum mechanics can explain the behaviour of elementary particles and those three fundamental forces, but the equations fall apart when the fourth force, gravity, is added to the mix. Seeing gravitational waves in the CMB means that gravity is probably linked to a particle called the graviton, which in turn is governed by quantum mechanics. Finding these primordial waves won’t tell us how quantum mechanics and gravity are unified, says Kamionkowski. “But it does tell us that gravity obeys quantum laws.”

“For the first time, we’re directly testing an aspect of quantum gravity,” says Frank Wilczek at MIT. “We’re seeing gravitons imprinted on the sky.”

Waiting for Planck

Given the huge potential of these results, scientists will be eagerly anticipating polarisation maps from projects such as the POLARBEAR experiment in Chile or the South Pole Telescope. The next full-sky CMB maps from the Planck space telescope are also expected to include polarisation data. Seeing a similar signal from one or more of these experiments would shore up the BICEP2 findings, make a firm case for inflation, and boost hints of the multiverse and quantum gravity.

One possible wrinkle is that previous temperature maps of the CMB suggested that the signal from primordial gravitational waves should be much weaker than what BICEP2 is seeing. Those results set theorists bickering about whether inflation really happened and whether it could create a multiverse. Several physicists suggested that we scrap the idea entirely for a new model of cosmic birth.

Taken alone, the BICEP2 results give a strong-enough signal to clinch inflation and put the multiverse back in the game. But the tension with previous maps is worrying, says Paul Steinhardt at Princeton University, who helped to develop the original theory of inflation but has since grown sceptical of it.

“If you look at the best-fit models with the new data added, they’re bizarre,” Steinhardt says. “If it remains like that, it requires adding extra fields, extra parameters, and you get really rather ugly-looking models.”

Forthcoming data from Planck should help resolve the issue, and we may not have long to wait. Olivier Doré at the California Institute of Technology is a member of the Planck collaboration. He says that the BICEP2 results are strong and that his group should soon be adding their data to the inflation debate: “Planck in particular will have something to say about it as soon as we publish our polarisation result in October 2014.”

Read the entire article here.

Image: Multiverse illustration. Courtesy of National Geographic.

Father of Distributed Computing

Distributed computing is a foundational element of most modern-day computing. It paved the way for processing to be shared across multiple computers and, nowadays, within the cloud. Most technology companies, including IBM, Google, Amazon, and Facebook, use distributed computing to provide highly scalable and reliable computing power for their systems and services. Yet Bill Gates did not invent distributed computing, nor did Steve Jobs. In fact, it was pioneered in the mid-1970s by an unsung hero of computer science, Leslie Lamport. Now aged 73, Lamport has been recognized with this year’s Turing Award.

From Technology Review:

This year’s winner of the Turing Award—often referred to as the Nobel Prize of computing—was announced today as Leslie Lamport, a computer scientist whose research made possible the development of the large, networked computer systems that power, among other things, today’s cloud and Web services. The Association for Computing Machinery grants the award annually, with an associated prize of $250,000.

Lamport, now 73 and a researcher with Microsoft, was recognized for a series of major breakthroughs that began in the 1970s. He devised algorithms that make it possible for software to function reliably even if it is running on a collection of independent computers or components that suffer from delays in communication or sometimes fail altogether.

That work, within a field now known as distributed computing, remains crucial to the sprawling data centers used by Internet giants, and is also involved in coördinating the multiple cores of modern processors in computers and mobile devices. Lamport talked to MIT Technology Review’s Tom Simonite about why his ideas have lasted.

Why is distributed computing important?

Distribution is not something that you just do, saying “Let’s distribute things.” The question is “How do you get it to behave coherently?”

My Byzantine Generals work [on making software fault-tolerant, in 1980] came about because I went to SRI and had a contract to build a reliable prototype computer for flying airplanes for NASA. That used multiple computers that could fail, and so there you have a distributed system. Today there are computers in Palo Alto and Beijing and other places, and we want to use them together, so we build distributed systems. Computers with multiple processors inside are also distributed systems.

We no longer use computers like those you worked with in the 1970s and ’80s. Why have your distributed-computing algorithms survived?

Some areas have had enormous changes, but the aspect of things I was looking at, the fundamental notions of synchronization, are the same.

Running multiple processes on a single computer is very different from a set of different computers talking over a relatively slow network, for example. [But] when you’re trying to reason mathematically about their correctness, there’s no fundamental difference between the two systems.

I [developed] Paxos [in 1989] because people at DEC [Digital Equipment Corporation] were building a distributed file system. The Paxos algorithm is very widely used now. Look inside of Bing or Google or Amazon—where they’ve got rooms full of computers, they’ll probably be running an instance of Paxos.
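The interview stays at the level of ideas, but Lamport’s logical clocks, introduced in his 1978 paper “Time, Clocks, and the Ordering of Events in a Distributed System,” are perhaps the simplest concrete example of the synchronization reasoning he describes: each process keeps a counter, and message timestamps let causally related events be ordered consistently without any shared physical clock. The sketch below is purely illustrative; the class and the two-process example are mine, not code from Lamport or from the article.

```python
# Illustrative sketch of a Lamport logical clock (not from the article).
class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        """Any internal event: just advance the counter."""
        self.time += 1
        return self.time

    def send(self):
        """Advance the counter and use it as the timestamp of an outgoing message."""
        self.time += 1
        return self.time

    def receive(self, msg_timestamp):
        """On receipt, jump ahead of the sender's timestamp if necessary."""
        self.time = max(self.time, msg_timestamp) + 1
        return self.time

# Two independent processes exchanging a single message:
a, b = LamportClock(), LamportClock()
a.local_event()        # a.time == 1
ts = a.send()          # a.time == 2; the message carries timestamp 2
b.local_event()        # b.time == 1
b.receive(ts)          # b.time == max(1, 2) + 1 == 3
print(a.time, b.time)  # prints: 2 3
```

The guarantee is one-way: if one event causally precedes another, its timestamp is smaller. That property holds whether the “processes” are cores on a single chip or servers on different continents, which is exactly the point Lamport makes above about reasoning mathematically about both kinds of system.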

More recently, you have worked on ways to improve how software is built. What’s wrong with how it’s done now?

People seem to equate programming with coding, and that’s a problem. Before you code, you should understand what you’re doing. If you don’t write down what you’re doing, you don’t know whether you understand it, and you probably don’t if the first thing you write down is code. If you’re trying to build a bridge or house without a blueprint—what we call a specification—it’s not going to be very pretty or reliable. That’s how most code is written. Every time you’ve cursed your computer, you’re cursing someone who wrote a program without thinking about it in advance.

There’s something about the culture of software that has impeded the use of specification. We have a wonderful way of describing things precisely that’s been developed over the last couple of millennia, called mathematics. I think that’s what we should be using as a way of thinking about what we build.

Read the entire story here.

Image: Leslie Lamport, 2005. Courtesy of Wikipedia.

Meet the Indestructible Life-form


Meet the water bear, or tardigrade. It may not be pretty, but it’s as close to indestructible as any life-form may ever come.

Cool it to a mere 1 degree above absolute zero or -458 F and it lives on. Heat it to 300 F and it lives on. Throw it out into the vacuum of space and it lives on. Irradiate it with hundreds of times the radiation that would kill a human and it lives on. Dehydrate it to 3 percent of its normal water content and it lives on.

From Wired:

In 1933, the owner of a New York City speakeasy and three cronies embarked on a rather unoriginal scheme to make a quick couple grand: Take out three life insurance policies on the bar’s deepest alcoholic, Mike Malloy, then kill him.

First, they pumped him full of ungodly amounts of liquor. When that didn’t work, they poisoned the hooch. Mike didn’t mind. Then came the sandwiches of rotten sardines and broken glass and metal shavings. Mike reportedly loved them. Next they dropped him in the snow and poured cold water on him. It didn’t faze Mike. Then they ran him over with a cab, which only broke his arm. The conspirators finally succeeded when they boozed Mike up, ran a tube down his throat, and pumped him full of carbon monoxide.

They don’t come much tougher than Mike the Durable, as he is remembered. Except in the microscopic world beneath our feet, where there lives what is perhaps the toughest creature on Earth: the tardigrade. Also known as the water bear (because it looks like an adorable little many-legged bear), this exceedingly tiny critter has an incredible resistance to just about everything. Go ahead and boil it, freeze it, irradiate it, and toss it into the vacuum of space — it won’t die. If it were big enough to eat a glass sandwich, it probably could survive that too.

The water bear’s trick is something called cryptobiosis, in which it brings its metabolic processes nearly to a halt. In this state it can dehydrate to 3 percent of its normal water content in what is called desiccation, becoming a husk of its former self. But just add water and the tardigrade roars back to life like Mike the Durable emerging from a bender and continues trudging along, puncturing algae and other organisms with a mouthpart called a stylet and sucking out the nutrients.

“They are probably the most extreme survivors that we know of among animals,” said biologist Bob Goldstein of the University of North Carolina at Chapel Hill. “People talk about cockroaches surviving anything. I think long after the cockroaches would be killed we’d still have dried water bears that could be rehydrated and be alive.”

“Is It Cold in Here?” Asked a Water Bear NEVER

This hibernation of sorts isn’t happening for a single season, like a true bear (tardigrades are invertebrates). As far as scientists can tell, water bears can be dried out for at least a decade and still revivify, only to find their clothes are suddenly out of style.

Mike the Durable did just fine in the freezing cold, but the temperatures the water bear endures in cryptobiosis defy belief. It can survive in a lab environment of just 1 degree kelvin. That’s an astonishing -458 degrees Fahrenheit, where matter goes bizarro, with gases becoming liquids and liquids becoming solids.

At this temperature the movements of the normally frenzied atoms inside the water bear come almost to a standstill, yet the creature endures. And that’s all the more incredible when you consider that the water bear indeed has a brain, a relatively simple one, sure, but a brain that somehow emerges from this unscathed.

Water bears also can tolerate pressures six times that of the deepest oceans. And a few of them once survived an experiment that subjected them to 10 days exposed to the vacuum of space. (While we’re on the topic, humans can survive for a couple minutes, max. One poor fellow at NASA accidentally depressurized his suit in a vacuum chamber in 1965 and lost consciousness after 15 seconds. When he woke up, he said his last memory was feeling the water on his tongue boiling, which I’m guessing felt a bit like Pop Rocks, only somehow even worse for your body.)

Anyway, tardigrades. They can take hundreds of times the radiation that would kill a human. Water bears don’t mind hot water either–like, 300 degrees Fahrenheit hot. So the question is: why? Why evolve to survive the kind of cold that only scientists can create in a lab, and pressures that have never even existed on our planet?

Water bears don’t even necessarily inhabit extreme habitats like, say, boiling springs where certain bacteria proliferate. Therefore the term “extremophile” that has been applied to tardigrades over the years isn’t entirely accurate. Just because they’re capable of surviving these harsh environments doesn’t mean they seek them out.

They actually prefer regular old dirt and sand and moss all over the world. I mean, would you rather stay in a Motel 6 in a lake of boiling acidic water or lounge around on a beach resort and drink algae cocktails? (Why this isn’t a BuzzFeed quiz yet is beyond me. It’s gold. There’s untold billions of water bears on Earth. Page views, BuzzFeed. What’s the sound of a billion water bears clicking? Boom, another quiz.)

But that isn’t to say there aren’t troubles in the tardigrade version of paradise. “If you’re living in dirt,” said Goldstein, “there’s a danger of desiccation all the time.” If, say, the sun starts drying out the surface, one option is to move farther down into the soil. But “if you go too far down, there’s not going to be much food. So they really probably have to live in a fringe where they need to get food, but there’s always danger of drying out.”

A Tiny Superhero That Could One Day Save Your Life

And so it could be that the water bear’s incredible feats of survival may simply stem from a tough life in the dirt. But there’s also the question of how it does this, and it’s a perplexing one at that. Goldstein’s lab is researching this, and he reckons that water bears don’t just have one simple trick, but a range of strategies to be able to endure drying out and eventually reanimating.

“There’s one that we know of, which is some animals that survive drying make a sugar called trehalose,” he said. “And trehalose sort of replaces water as they dry down, so it will make glassy surfaces where normally water would be sitting. That probably helps prevent a lot of the damage that normally occurs when you dry something down or when you rehydrate it.” Not all of the 1,000 or so species of water bears produce this sugar though, he says, so there must be some other trick going on.

Ironically enough, these incredibly hardy creatures are very difficult to grow in the lab, but Goldstein has had great success where many others have failed. And, like so many great things in this world, it all began in a shed in England, where a regular old chap mastered their breeding to sell them to local schools for scientific experiments. He was so good at it, in fact, that he never needed to venture out to recollect specimens. And their descendants now crawl around Goldstein’s lab, totally unaware of how incredibly lucky they are to not be tortured by school children day in and day out.

[Image caption: A scanning electron micrograph of three awkwardly cuddling water bears. Image: Willow Gabriel]

“Some organisms just can’t be raised in labs,” Goldstein said. “You bring them in and try to mimic what’s going on outside and they just don’t grow up. So we were lucky, actually, people were having a hard time growing water bears in labs continuously. And this guy in England had figured it out.”

Thanks to this breakthrough, Goldstein and other scientists are exploring the possibility of utilizing the water bear as science’s next fruit fly, that ubiquitous test subject that has yielded so many momentous discoveries. The water bear’s small size means you can pack a ton of them into a lab, plus they reproduce quickly and have a relatively compact genome to work with. Also, they’re way cuter than fruit flies and they don’t fall into your sodas and stuff.

Read the entire article here.

Image: A scanning electron micrograph of a water bear.  Courtesy: Bob Goldstein and Vicky Madden / Wired.

Building The 1,000 Mile Per Hour Car


First, start with a jet engine. Then, perhaps, add a second engine for auxiliary power. And, while you’re at it, throw in a rocket engine as well for some extra thrust. Add aluminum wheels with no tires. Hire a fighter pilot to “drive” it. Oh, and name it Bloodhound SSC (Supersonic Car). You’re on your way! Visit the official Bloodhound website here.

From ars technica:

Human beings achieved many ‘firsts’ in the 20th century. We climbed the planet’s highest mountains, dived its deepest undersea trench, flew over it faster than the speed of sound, and even escaped it altogether in order to visit the moon. Beyond visiting Mars, it may feel like there are no more milestones left to reach. Yet people are still trying to push the envelope, even if they have to travel a little farther to get there.

Richard Noble is one such person. He’s spearheading a project called Bloodhound SSC that will visit uncharted territory on its way to a new land speed record on the far side of 1,000mph. The idea of a car capable of 1,000mph might sound ludicrous at first blush, but consider Noble’s credentials. The British businessman is responsible for previous land speed records in 1983 and 1997, the first of which came with him behind the wheel.

Bloodhound’s ancestors

Noble had been captivated by speed as a child after watching Cobb attempt to break a water speed record on Loch Ness in Scotland. Inspired by the achievements of fellow countrymen Campbell and Cobb, he wanted to reclaim the record for Britain. After building—and then crashing—one of the UK’s first jet-powered cars (Thrust 1), he acquired a surplus engine from an English Electric Lightning. The Lightning was Britain’s late-1950s interceptor, designed to shoot down Soviet bombers over the North Sea. It was built around two powerful Rolls Royce Avon engines that gave it astonishing performance for the time. Just one of these engines was sufficient to convince John Ackroyd to accept Noble’s job offer as Thrust 2’s designer, and work began on the car in 1978, albeit on a shoestring.

Thrust 2, now with a more powerful variant of the Avon engine, went to Bonneville at the end of September 1981. Until now, Noble had only driven the car on runways in the UK, never faster than 260mph. For two weeks the team built up speed at Bonneville before the rain arrived, flooding the lake and ending any record attempts for the year. Thrust 2 had peaked at 500mph, but Gabelich’s record would stand for a while longer. Thrust 2 returned the following September to again find Bonneville’s flats under several inches of water. Once it was clear that Bonneville was no good for anything other than hovercraft, the search was on for a new location.

Noble and Thrust 2 found themselves in the Black Rock desert in Nevada, now best known as the site of the Burning Man festival. Helpfully, the surface of the alkaline playa was much better suited to Thrust 2’s solid metal wheels. (At Bonneville these had cut ruts into the salt, requiring a new track for each run.) 1982 wasn’t to be Thrust 2’s year either, averaging 590mph and teaching Noble and his team a lot before the weather came and stopped things. Finally in 1983 everything went according to plan, and on October 4, Thrust 2 reached a peak speed of 650mph, setting a new world land speed record of 633.5mph.

It’s easy to see how the mindset required to successfully break a land speed record wouldn’t be satisfied just doing it once; it seems everyone comes back for another bite at the cherry. Noble was no exception. He knew that Breedlove was planning on taking back the record and that the American had a pair of General Electric J-79 engines with which to do so. 700mph was the next headline speed, with the speed of sound not much further away. Eager not to lose the record, Noble planned to defend it with Thrust 2’s successor, Thrust SSC (the initials stand for SuperSonic Car).

Thrust 2’s success came despite the lack of any significant aerodynamic design or refinement. Going supersonic meant that aerodynamics couldn’t be ignored any longer though. In 1992, Noble met the man who would design his new car, a retired aerodynamicist called Ron Ayers. Ayers would learn much on Thrust SSC—and another land speed car, 2006’s diesel-powered JCB Dieselmax—that would inform his design for Bloodhound SSC. At first though, he was reluctant to get involved. “The first thing I told him was he’d kill himself,” Ayers told Ars. Yet curiosity got the better of Ayers, and he began to see solutions for the various problems that at first made this look like an impossible challenge. A second chance meeting between Noble and Ayers followed, and before long Ayers was Thrust SSC’s concept designer and aerodynamicist.

Now, Ayers had the problem of working out what shape a supersonic car ought to take. That came from computational fluid dynamics (CFD). No one had attempted to use computer modeling to design a land speed record car until then, but even now no wind tunnels capable of supersonic speeds also feature a rolling road, necessary to accurately account for the effect of having wheels at those speeds. The University of Swansea in Wales created a CFD simulation of a supersonic vehicle, but “the problem was, at that time neither I nor anyone else trusted [CFD],” Ayers explained. His skepticism vanished following tests with scale models fired down a rocket sled track belonging to the UK Defense establishment (located at Pendine Sands, the site of many 1920s land speed records). The CFD data matched that from the rocket sled track to within a few percent, something that astonished both Ayers and the other aerodynamicists with whom he shared his findings.

Thrust SSC would use a pair of Rolls Royce Spey engines, taken from a British F-4 Phantom, mounted quite far forward on either side of the car, with the driver’s cockpit in-between. Together with a long, pointed nose and a T-shaped tail fin and stabilizer, Thrust SSC looked much more like a jet fighter with no wings than a car. Fittingly, the car got a driver to suit its looks. Land speed records aren’t cheap, something Noble (and probably every other record chaser) knew from bitter experience. He managed to scrape together enough funding to make three record attempts with Thrust 2 even though his attention was split between fund-raising and learning how to operate and control the car. For the sequel he wisely decided to leave the driving to someone else, concentrating his efforts on leading the project and raising the money. Thirty people applied for the job, a mix of drag racers and fighter pilots. The successful candidate was one of the latter, RAF Wing Commander Andy Green. Green had plenty of supersonic experience in RAF Phantoms and Tornados; he also had a daredevil streak, evident in his choice of hobbies.

By 1997 the car was ready for Black Rock Desert. So, too, were Breedlove and his Spirit of America, setting the stage for a transatlantic, transonic shoot-out. Spirit of America narrowly escaped disaster the previous year, turning sharply right at ~675mph and rolling onto its side in the process. 1997 was to be no kinder to the Americans. On October 15, a sonic boom announced to the world that Green (backed by Noble) was now the fastest man on earth. Thrust SSC set a two-way average of 763mph, or Mach 1.015, exactly 50 years and a day after the first Mach 1 flight.

Noble, Green, and Ayers set another land speed record in 2006, albeit with a much slower car. JCB Dieselmax set a new world record for a diesel-powered vehicle, reaching just over 350mph. Even though Bloodhound SSC will go much faster, Ayers told me they gathered a lot of useful knowledge then that is being applied to the current project.

Bloodhound SSC

A number of factors appear to be necessary for a land speed record attempt: a car with a sufficiently powerful engine, a suitable location, and someone motivated enough to raise the money to make it happen. A little bit of competition helps with the last of these. Breedlove, Green, and Arfons spurred each other on in the 1960s, and it was the threat of Breedlove going supersonic that sparked Thrust SSC. As you might expect, competition was also the original impetus behind Bloodhound SSC. Noble learned that Steve Fossett was planning a land speed record attempt. The ballooning adventurer bought Spirit of America from Breedlove in 2006, and he set his sights on 800mph. Noble needed a new car that incorporated the lessons learned from Thrusts 2 and SSC.

What makes the car go?

The key to any land speed record car is its engine, and Bloodhound SSC is no exception. Rather than depend on decades-old surplus, Noble and Green approached the UK government to see if they could help. “We thought we’d earned the right to do this properly with the right technology,” Noble told the UK’s Director magazine. The Ministry of Defense agreed on the condition that Bloodhound SSC be exciting enough a project to rekindle the interest in science and technology that Apollo or Concorde created in the 1960s and 1970s. In return for inspiring a new generation of engineers, Bloodhound SSC could have an EJ200 jet engine, a type more often found in the Eurofighter Typhoon.

Thrust SSC needed the combined thrust of two Spey jet engines to break the sound barrier. To go 30 percent faster, Bloodhound SSC will need more power than a single EJ200 can provide—at full reheat just over 20,000lbf (90 kN), roughly as much as one of the two engines on its predecessor (albeit at half the weight). The Bloodhound team decided upon rocket power for the remaining thrust. We asked Ayers why they opted for this approach, and he explained that it had several advantages over a pair of jets. For one thing, it needs only one air intake, meaning a lower drag design than Thrust SSC’s twin engines. To reach the kind of performance target Bloodhound SSC is aiming at with a pair of jets, it would require designing variable geometry air intakes. While this sort of engineering solution is used by fighter aircraft, it would add unnecessary cost, complexity, and weight to Bloodhound SSC. What’s more, a rocket can provide much more thrust for its size and weight than a jet. Finally, using rocket power means being able to accelerate much more rapidly, which should help limit the length of track needed.

Read the entire article here.

Image: Bloodhound SSC. Courtesy of Bloodhound.


Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom suggests that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators, on the other hand, follow a different mantra: they invent in hallways and cubes — everywhere.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open plan, novelty strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).


Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifescience, and Elon Musk’s companies, Tesla Motors and Space X, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.

Gravity Makes Some Waves

[tube]ZlfIVEy_YOA[/tube]

Gravity, the movie, made some “waves” at the recent Academy Awards ceremony in Hollywood. But the real star in this case is real gravity, the force that seems to hold all macroscopic things in the cosmos together. And the waves in this case are real gravitational waves. A long-running experiment based at the South Pole has discerned a signal in the cosmic microwave background that points to the existence of gravitational waves. This is a discovery of great significance and, if upheld, it would confirm the inflationary theory of our universe’s exponential expansion just after the Big Bang. Theorists who first proposed this remarkable hypothesis — Alan Guth (1979) and Andrei Linde (1981) — are probably popping some champagne right now.

From the New Statesman:

The announcement yesterday that scientists working on the BICEP2 experiment in Antarctica had detected evidence of “inflation” may not appear incredible, but it is. It appears to confirm longstanding hypotheses about the Big Bang and the earliest moments of our universe, and could open a new path to resolving some of physics’ most difficult mysteries.

Here’s the explainer. BICEP2, near the South Pole (where the sky is clearest of pollution), was scanning the visible universe for cosmic background radiation – that is, the fuzzy warmth left over from the Big Bang. It’s the oldest light in the universe, and as such our maps of it are our oldest glimpses of the young universe. Here’s a map created with data collected by the ESA’s Planck Surveyor probe last year:

[Image: map of the cosmic microwave background from ESA’s Planck Surveyor probe]

What should be clear from this is that the universe is remarkably flat and regular – that is, there aren’t massive clumps of radiation in some areas and gaps in others. This doesn’t quite make intuitive sense.

If the Big Bang really was a chaotic event, with energy and matter being created and destroyed within tiny fractions of nanoseconds, then we would expect the net result to be a universe that’s similarly chaotic in its structure. Something happened to smooth everything out, and that something is inflation.

Inflation assumes that something must have happened to the rate of expansion of the universe, somewhere between 10^-35 and 10^-32 seconds after the Big Bang, to make it massively increase. It would mean that the size of the “lumps” would outpace the rate at which they appear in the cosmos, smoothing them out.

For an analogy, imagine if the Moon was suddenly stretched out to the size of the Sun. You’d see – just before it collapsed in on itself – that its rifts and craters had become, relative to its new size, barely perceptible. Just like a sheet being pulled tightly on a bed, a chaotic structure becomes more uniform.
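To put a rough number on that stretching (a standard back-of-the-envelope from inflationary cosmology, not a figure taken from the article): during inflation the scale factor a(t) grows roughly exponentially, and the commonly quoted minimum of about 60 e-folds is enough to iron out any initial lumpiness.

```latex
a(t) \simeq a(t_i)\, e^{H (t - t_i)}, \qquad
N \equiv H\,\Delta t \;\Rightarrow\; \frac{a_{\mathrm{end}}}{a_{\mathrm{start}}} = e^{N},
\qquad e^{60} \approx 10^{26}.
```

A stretch factor of roughly 10^26 applied to the Moon in the analogy above would blow it up to something far larger than the observable universe, which is why whatever unevenness existed beforehand ends up looking flat and smooth.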

Inflation, first theorised by Alan Guth in 1979 and refined by Andrei Linde in 1981, became the best hypothesis to explain what we were observing in the universe. It also seemed to offer a way to better understand how dark energy drove the expansion of the Big Bang, and even possibly lead the way towards unifying quantum mechanics with general relativity. That is, if it was correct. And there have been plenty of theories which tied up some loose ends only to come apart with further observation.

The key evidence needed to verify inflation would be in the form of gravitational waves – that is, ripples in spacetime. Such waves were a part of Einstein’s theory of general relativity, and in the 90s scientists observed some for the first time, but until now there’s never been any evidence of them from inside the cosmic background radiation.

BICEP2, though, has found that evidence, and with it scientists now have a crucial piece of fact that can falsify other theories about the early universe and potentially open up entirely new areas of investigation. This is why it’s being compared with the discovery of the Higgs Boson last year, as, just as that particle was fundamental to our understanding of molecular physics, so too is inflation to our understanding of the wider universe.

Read the entire article here.

Video: Physicist Chao-Lin Kuo delivers news of the results from his gravitational wave experiment, and Professor Andrei Linde reacts to the discovery, March 17, 2014. Courtesy of Stanford University.

Big Data Knows What You Do and When

Data scientists are getting to know more about you and your fellow urban dwellers as you move around your neighborhood and your city. As smartphones and cell towers become more ubiquitous and data collection and analysis gather pace, researchers (and advertisers) will come to know your daily habits and schedule rather intimately. So, questions from a significant other along the lines of, “and where were you at 11:15 last night?” may soon be consigned to history.

From Technology Review:

Mobile phones have generated enormous insight into the human condition thanks largely to the study of the data they produce. Mobile phone companies record the time of each call, the caller and receiver IDs, as well as the locations of the cell towers involved, among other things.

The combined data from millions of people produces some fascinating new insights into the nature of our society.

Anthropologists have crunched it to reveal human reproductive strategies, a universal law of commuting, and even the distribution of wealth in Africa.

Today, computer scientists have gone one step further by using mobile phone data to map the structure of cities and how people use them throughout the day. “These results point towards the possibility of a new, quantitative classification of cities using high resolution spatio-temporal data,” say Thomas Louail at the Institut de Physique Théorique in Paris and a few pals.

They say their work is part of a new science of cities that aims to objectively measure and understand the nature of large population centers.

These guys begin with a database of mobile phone calls made by people in the 31 Spanish cities that have populations larger than 200,000. The data consists of the number of unique individuals using a given cell tower (whether making a call or not) for each hour of the day over almost two months.

Given the area that each tower covers, Louail and co work out the density of individuals in each location and how it varies throughout the day. And using this pattern, they search for “hotspots” in the cities where the density of individuals passes some specially chosen threshold at certain times of the day.
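The paper’s actual code isn’t quoted here, but the procedure just described (per-tower counts of unique users for each hour, converted to densities using each tower’s coverage area, then thresholded to flag hotspots) is straightforward to sketch. The Python below is illustrative only: the function name, the data layout and the simple mean-based threshold are assumptions of mine, not the authors’ method.

```python
# Illustrative sketch (not the authors' code): flag hourly "hotspots" from
# per-tower counts of unique phone users, as described above.
from collections import defaultdict

def hourly_hotspots(counts, tower_area_km2, threshold_factor=1.5):
    """counts: dict mapping (tower_id, hour) -> unique users seen at that tower in that hour.
    tower_area_km2: dict mapping tower_id -> approximate coverage area in km^2.
    A tower is a hotspot for a given hour if its user density exceeds
    threshold_factor times that hour's city-wide mean density (an assumed rule)."""
    # Convert raw counts into densities (individuals per km^2).
    density = {
        (tower, hour): n / tower_area_km2[tower]
        for (tower, hour), n in counts.items()
    }
    # Group densities by hour so each hour of the day gets its own threshold.
    by_hour = defaultdict(list)
    for (tower, hour), d in density.items():
        by_hour[hour].append((tower, d))
    hotspots = defaultdict(set)
    for hour, towers in by_hour.items():
        mean_density = sum(d for _, d in towers) / len(towers)
        for tower, d in towers:
            if d > threshold_factor * mean_density:
                hotspots[hour].add(tower)
    return dict(hotspots)

# Toy example: three towers observed at midday and at 6 p.m.
counts = {("A", 12): 900, ("B", 12): 150, ("C", 12): 200,
          ("A", 18): 200, ("B", 18): 800, ("C", 18): 700}
areas = {"A": 1.0, "B": 2.0, "C": 1.0}
print(hourly_hotspots(counts, areas))  # e.g. {12: {'A'}, 18: {'C'}}
```

Running something like this over a whole day of data for each city would reproduce the kind of hourly structure the authors describe: hotspot counts swelling toward midday and early evening, and a growing number of distinct hotspots in the larger, polycentric cities.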

The results reveal some fascinating patterns in city structure. For a start, every city undergoes a kind of respiration in which people converge into the center and then withdraw on a daily basis, almost like breathing. And this happens in all cities. This “suggests the existence of a single ‘urban rhythm’ common to all cities,” says Louail and co.

During the week, the number of phone users peaks at about midday and then again at about 6 p.m. During the weekend the numbers peak a little later: at 1 p.m. and 8 p.m. Interestingly, the second peak starts about an hour later in western cities, such as Sevilla and Cordoba.

The data also reveals that small cities tend to have a single center that becomes busy during the day, such as the cities of Salamanca and Vitoria.

But it also shows that the number of hotspots increases with city size; so-called polycentric cities include Spain’s largest, such as Madrid, Barcelona, and Bilbao.

That could turn out to be useful for automatically classifying cities.

Read the entire article here.

Time Traveling Camels


Camels are out of place in the Middle East of early biblical times. Forensic scientists, biologists, archeologists, geneticists and paleontologists all seem to agree that domesticated camels could not have been present at the time of the early Jewish stories of Genesis and the Old Testament — camels trotted into the land many hundreds of years later.

From the NYT:

There are too many camels in the Bible, out of time and out of place.

Camels probably had little or no role in the lives of such early Jewish patriarchs as Abraham, Jacob and Joseph, who lived in the first half of the second millennium B.C., and yet stories about them mention these domesticated pack animals more than 20 times. Genesis 24, for example, tells of Abraham’s servant going by camel on a mission to find a wife for Isaac.

These anachronisms are telling evidence that the Bible was written or edited long after the events it narrates and is not always reliable as verifiable history. These camel stories “do not encapsulate memories from the second millennium,” said Noam Mizrahi, an Israeli biblical scholar, “but should be viewed as back-projections from a much later period.”

Dr. Mizrahi likened the practice to a historical account of medieval events that veers off to a description of “how people in the Middle Ages used semitrailers in order to transport goods from one European kingdom to another.”

For two archaeologists at Tel Aviv University, the anachronisms were motivation to dig for camel bones at an ancient copper smelting camp in the Aravah Valley in Israel and in Wadi Finan in Jordan. They sought evidence of when domesticated camels were first introduced into the land of Israel and the surrounding region.

The archaeologists, Erez Ben-Yosef and Lidar Sapir-Hen, used radiocarbon dating to pinpoint the earliest known domesticated camels in Israel to the last third of the 10th century B.C. — centuries after the patriarchs lived and decades after the kingdom of David, according to the Bible. Some bones in deeper sediments, they said, probably belonged to wild camels that people hunted for their meat. Dr. Sapir-Hen could identify a domesticated animal by signs in leg bones that it had carried heavy loads.

The findings were published recently in the journal Tel Aviv and in a news release from Tel Aviv University. The archaeologists said that the origin of the domesticated camel was probably in the Arabian Peninsula, which borders the Aravah Valley. Egyptians exploited the copper resources there and probably had a hand in introducing the camels. Earlier, people in the region relied on mules and donkeys as their beasts of burden.

“The introduction of the camel to our region was a very important economic and social development,” Dr. Ben-Yosef said in a telephone interview. “The camel enabled long-distance trade for the first time, all the way to India, and perfume trade with Arabia. It’s unlikely that mules and donkeys could have traversed the distance from one desert oasis to the next.”

Dr. Mizrahi, a professor of Hebrew culture studies at Tel Aviv University who was not directly involved in the research, said that by the seventh century B.C. camels had become widely employed in trade and travel in Israel and through the Middle East, from Africa as far as India. The camel’s influence on biblical research was profound, if confusing, for that happened to be the time that the patriarchal stories were committed to writing and eventually canonized as part of the Hebrew Bible.

“One should be careful not to rush to the conclusion that the new archaeological findings automatically deny any historical value from the biblical stories,” Dr. Mizrahi said in an email. “Rather, they established that these traditions were indeed reformulated in relatively late periods after camels had been integrated into the Near Eastern economic system. But this does not mean that these very traditions cannot capture other details that have an older historical background.”

Read the entire article here.

Image: Camels at the Great Pyramid of Giza, Egypt. Courtesy of Wikipedia.

Is Your City Killing You?

The stresses of modern-day living are taking a toll on your mind and body, and more so if you happen to live in a concrete jungle. The effects are even more pronounced for those of us living in the largest urban centers. That’s the finding of some fascinating new brain research out of Germany. The researchers’ simple answer to a lower-stress life: move to the countryside.

From The Guardian:

You are lying down with your head in a noisy and tightfitting fMRI brain scanner, which is unnerving in itself. You agreed to take part in this experiment, and at first the psychologists in charge seemed nice.

They set you some rather confusing maths problems to solve against the clock, and you are doing your best, but they aren’t happy. “Can you please concentrate a little better?” they keep saying into your headphones. Or, “You are among the worst performing individuals to have been studied in this laboratory.” Helpful things like that. It is a relief when time runs out.

Few people would enjoy this experience, and indeed the volunteers who underwent it were monitored to make sure they had a stressful time. Their minor suffering, however, provided data for what became a major study, and a global news story. The researchers, led by Dr Andreas Meyer-Lindenberg of the Central Institute of Mental Health in Mannheim, Germany, were trying to find out more about how the brains of different people handle stress. They discovered that city dwellers’ brains, compared with people who live in the countryside, seem not to handle it so well.

To be specific, while Meyer-Lindenberg and his accomplices were stressing out their subjects, they were looking at two brain regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are known to be involved in assessing threats and generating fear, while the pACC in turn helps to regulate the amygdalas. In stressed city dwellers, the amygdalas appeared more active on the scanner; in people who lived in small towns, less so; in people who lived in the countryside, least of all.

And something even more intriguing was happening in the pACC. Here the important relationship was not with where the subjects lived at the time, but where they grew up. Again, those with rural childhoods showed the least active pACCs, those with urban ones the most. In the urban group, moreover, there seemed not to be the same smooth connection between the behaviour of the two brain regions that was observed in the others. An erratic link between the pACC and the amygdalas is often seen in those with schizophrenia too. And schizophrenic people are much more likely to live in cities.

When the results were published in Nature, in 2011, media all over the world hailed the study as proof that cities send us mad. Of course it proved no such thing – but it did suggest it. Even allowing for all the usual caveats about the limitations of fMRI imaging, the small size of the study group and the huge holes that still remained in our understanding, the results offered a tempting glimpse at the kind of urban warping of our minds that some people, at least, have linked to city life since the days of Sodom and Gomorrah.

The year before the Meyer-Lindenberg study was published, the existence of that link had been established still more firmly by a group of Dutch researchers led by Dr Jaap Peen. In their meta-analysis (essentially a pooling together of many other pieces of research) they found that living in a city roughly doubles the risk of schizophrenia – around the same level of danger that is added by smoking a lot of cannabis as a teenager.

At the same time urban living was found to raise the risk of anxiety disorders and mood disorders by 21% and 39% respectively. Interestingly, however, a person’s risk of addiction disorders seemed not to be affected by where they live. At one time it was considered that those at risk of mental illness were just more likely to move to cities, but other research has now more or less ruled that out.
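
Those percentages are relative risks, so it can help to see what they mean in absolute terms. The short Python sketch below is purely illustrative: the baseline prevalences are assumed round numbers, not figures from the Peen meta-analysis.

# Illustrative only: convert relative risks into absolute risks using assumed baseline rates.
baseline = {"schizophrenia": 0.007, "anxiety disorders": 0.10, "mood disorders": 0.10}  # assumed
relative_risk = {"schizophrenia": 2.0, "anxiety disorders": 1.21, "mood disorders": 1.39}

for condition, base in baseline.items():
    urban = base * relative_risk[condition]
    print(f"{condition}: {base:.1%} assumed baseline vs. {urban:.1%} with urban living")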

So why is it that the larger the settlement you live in, the more likely you are to become mentally ill? Another German researcher and clinician, Dr Mazda Adli, is a keen advocate of one theory, which implicates that most paradoxical urban mixture: loneliness in crowds. “Obviously our brains are not perfectly shaped for living in urban environments,” Adli says. “In my view, if social density and social isolation come at the same time and hit high-risk individuals … then city-stress related mental illness can be the consequence.”

Read the entire story here.

Mining Minecraft


If you have a child under the age of 13, it’s likely that you’ve heard of, seen or even used Minecraft. More than just a typical online game, Minecraft is a playground for aspiring architects — despite the Creepers. Minecraft began in 2011 with a simple premise — place and remove blocks to fend off unwanted marauders. Now it has become a blank canvas for young minds to design and collaborate on building fantastical structures. My own twin 11-year-olds have designed their dream homes, complete with basement stables, glass stairways and a roof-top pool.

From the Guardian:

I couldn’t pinpoint exactly when I became aware of my eight-year-old son’s fixation with Minecraft. I only know that the odd reference to zombies and pickaxes burgeoned until it was an omnipresent force in our household, the dominant topic of conversation and, most bafflingly, a game he found so gripping that he didn’t just want to play it, he wanted to watch YouTube videos of others playing it too.

This was clearly more than any old computer game – for Otis and, judging by discussion at the school gates, his friends too. I felt as if he’d joined a cult, albeit a reasonably benign one, though as someone who last played a computer game when Jet Set Willy was the height of technological wizardry, I hardly felt in a position to judge.

Minecraft, I realised, was something I knew nothing about. It was time to become acquainted. I announced my intention to give myself a crash course in the game to Otis one evening, interrupting his search for Obsidian to build a portal to the Nether dimension. As you do. “Why would you want to play Minecraft?” he asked, as if I’d confided that I was taking up a career in trapeze-artistry.

For anyone as mystified about it as I was, Minecraft is now one of the world’s biggest computer games, a global phenomenon that’s totted up 14,403,011 purchases as I write; 19,270 in the past 24 hours – live statistics they update on their website, as if it were Children in Need night.

Trying to define the objective of the game isn’t easy. When I ask Otis, he shrugs. “I’m not sure there is one. But that’s what’s brilliant. You can do anything you like.”

This doesn’t seem like much of an insight, though to be fair, the developers themselves, Mojang, define it succinctly as, “a game about breaking and placing blocks”. This sounds delightfully simple, an impression echoed by its graphics. In sharp contrast to the rich, more cinematic style of other games, this is unapologetically old school, the sort of computer game of the future that Marty McFly would have played.

In this case, looks are deceptive. “The pixelated style might appear simple but it masks a huge amount of depth and complexity,” explains Alex Wiltshire, former editor of Edge magazine and author of forthcoming Minecraft guide, Block-o-pedia. “Its complex nature doesn’t lie in detailed art assets, but in how each element of the game interrelates.”

It’s this that gives players the potential to produce elaborate constructions on a lavish scale; fans have made everything from 1:1 scale re-creations of the Lord of the Rings’ Mines of Moria, to models of entire cities.

I’m a long way from that. “Don’t worry, Mum – when I first went on it when I was six, I had no idea what I was doing,” Otis reassures, shaking his head at the thought of those naive days, way back when.

Otis’s device of choice is his iPod, ideal for on-the-move sessions, though this once caused him serious grief after being caught on it under his duvet after lights out. I take one look at the lightning speed with which his fingers move and decide to download it on to my MacBook instead. The introduction of an additional version of the game into our household is greeted very much like Walter Raleigh’s return from the New World.

We open up the game and he tells me that I am “Steve”, the default player, and that we get a choice of modes in which to play: creative or survival. He suggests I start with the former on the basis that this is the best place for those who aren’t very good at it.

In creative mode, you are dropped into a newly generated world (an island in our case) and gifted a raft of resources – everything from coal and lapis lazuli to cake and beds.

At the risk of sounding like a dunce, it isn’t at all obvious what I’m supposed to do. So instead of springing into action, I’m left standing, looking around lamely as if I’m on the edge of a dance floor waiting for someone to come and put me out of my misery. Despite knowing that the major skill required in this game is building, before Otis intervenes, the most I can accomplish is to dig a few holes.

“When it first came out everyone was confused as the developer gave little or no guidance,” says Wiltshire. “It didn’t specifically say you had to cut down a tree to get some wood, whereas games that are produced by big companies give instructions – the last thing they want is for people not to understand how to play. With Minecraft, which had an indie developer, the player had to work things out for themselves. It was quite a tonic.”

He believes that this is why a game not specifically designed for children has become so popular with them. “Because you learn so much when you’re young, kids are used to the idea of a world they don’t fully understand, so they’re comfortable with having to find things out for themselves.”

For the moment, I’m happy to take instruction from my son, who begins his demonstration by creating a rollercoaster – an obvious priority when you’ve just landed on a desert island. He quickly installs its tracks, weaving them through trees and into the sea, before sending Steve for a ride. He asks me if I feel ready to have a go. I feel as if I’m on a nursing home word processing course.

Familiarising yourself takes a little time but once you get going – and have worked out the controls – being able to run, fly, swim and build is undeniably absorbing. I also finally manage to construct something, a slightly disappointing shipping container-type affair that explodes Wiltshire’s assertion that it’s “virtually impossible to build something that looks terrible in Minecraft”. Still, I’m enjoying it, I can’t deny it. Aged eight, I’d have loved it every bit as much as my son does.

The more I play it, the more I also start to understand why this game has been championed for its educational possibilities, with some schools in the US using it as a tool to teach maths and science.

Dr Helen O’Connor, who runs UK-based Childchology – which provides children and their families with support for common psychological problems via the internet – said: “Minecraft offers some strong positives for children. It works on a cognitive level in that it involves problem solving, imagination, memory, creativity and logical sequencing. There is a good educational element to the game, and it also requires some number crunching.

“Unlike lots of other games, there is little violence, with the exception of fighting off a few zombies and creepers. This is perhaps one of the reasons why it is fairly gender neutral and girls enjoy playing it as well as boys.”

The next part of Otis’s demonstration involves switching to survival mode. He explains: “You’ve got to find the resources yourself here. You’re not just given them. Oh and there are villains too. Zombie pigmen and that kind of thing.”

It’s clear that life in survival mode is a significantly hairier prospect than in creative, particularly when Otis changes the difficulty setting to its highest notch. He says he doesn’t do this often because, after spending three weeks creating a house from wood and cobblestones, zombies nearly trashed the place. I make a mental note to remind him of this conversation next time he has a sleepover.

One of the things that’s so appealing about Minecraft is that there is no obvious start and end; it’s a game of infinite possibilities, which is presumably why it’s often compared to Lego. Yet, the addictive nature of the game is clearly vexing many parents: internet talkboards are awash with people seeking advice on how to prize their children away from it.

Read the entire story here.

Image courtesy of Minecraft.

United Kingdom Without the United


There is increasing noise in the media about Scottish independence. With the referendum a mere six months away — September 18, 2014 to be precise — what would the United Kingdom look like without the anchor nation to the north? An immediate consequence would be the need to redraw the UK’s Union Jack flag.

Avid vexillophiles will know that the Union Jack is a melding of the nations that make up the union — with one key omission. Wales does not feature on today’s flag. So, perhaps, if Scotland were to leave the UK, the official flag designers could make up for the gross omission and add Wales as they remove Saint Andrew’s cross, which represents Scotland.

Would-be designers have been letting their imaginations run wild with some fascinating and humorous designs — though one suspects that Her Majesty the Queen, sovereign of this fair isle, is certainly not amused by the possible break-up of her royal domain.

From the Atlantic:

Long after the Empire’s collapse, the Union Jack remains an internationally recognized symbol of Britain. But all that could change soon. Scotland, one of the four countries that make up the United Kingdom (along with England, Northern Ireland, and Wales), will hold a referendum on independence this September. If it succeeds, Britain’s iconic flag may need a makeover.

The Flag Institute, the U.K.’s national flag charity and the largest membership-based vexillological organization in the world, recently polled its members and found that nearly 65 percent of respondents felt the Union Jack should be changed if Scotland becomes independent. And after the poll, the organization found itself flooded with suggested replacements for the flag.

“We are not advocating changing the flag. We are not advising changing the flag. We are not encouraging a change to the flag. We are not discouraging a change to the flag,” Charles Ashburner, the Flag Institute’s chief executive and trustee, told me. “We are simply here to facilitate and inform the debate if there is an appetite for such a thing.”

“As this subject has generated the largest post bag of any single subject in our history, however,” Ashburner noted, “there is clearly such an appetite.”

The Union Jack’s history is closely intertwined with the U.K.’s history. After Elizabeth I died in 1603, her cousin, King James VI of Scotland, ascended to the English throne as James I of England. With Britain united under one king for the first time, James sought to symbolize his joint rule of the two countries with a new flag in 1606. The design placed the traditional English flag, known as the cross of Saint George, over the traditional Scottish flag, known as the cross of Saint Andrew.

England and Scotland remained independent countries with separate parliaments, royal courts, and flags until they fully merged under the Act of Union in 1707. Queen Anne then adopted James I’s symbolic flag as the national banner of Great Britain. When Ireland merged with Britain in 1801 to form the modern United Kingdom, the British flag incorporated Ireland’s cross of Saint Patrick to create the modern Union Jack. The flag’s design did not change after Irish independence in the mid-20th century because Saint Patrick’s cross still represents Northern Ireland, which remained part of the U.K.

The Union Jack doesn’t represent everyone, though. England, Scotland, and Northern Ireland are included, but Wales, the fourth U.K. country, isn’t. Because Wales was considered part of the English crown in 1606 (with the title “Prince of Wales” reserved for that crown’s heir) after its annexation by England centuries earlier, neither James I’s original design nor any subsequent design based on it bears any influence of the culturally distinct, Celtic-influenced territory.

British authorities granted Wales’ red-dragon flag, or Y Ddraig Goch in Welsh, official status in 1959. But attempts to add Welsh symbolism to the Union Jack haven’t succeeded; in 2007, a member of Parliament from Wales proposed adding the Welsh dragon to the flag, to no avail. Iconography could involve more than just the dragon: Like the U.K.’s other three countries, Wales has a patron saint, Saint David, and a black-and-gold flag to represent him.

If Scotland stays in the U.K., incorporating Wales into the British flag could be as simple as adding yellow borders.

Read the entire article here.

Image: A Royal Standard influenced design for the replacement of the Union Jack should Scotland secede from the United Kingdom. Courtesy of the UK Flag Institute.

Which is Your God?

Is your God the one to be feared from the Old Testament? Or is yours the God who brought forth the angel Moroni? Or are your Gods those revered by Hindus or Ancient Greeks or the Norse? Theists have continuing trouble answering these fundamental questions, much to the consternation, and satisfaction, of atheists.

In a thoughtful interview with Gary Gutting, Louise Antony, a professor of philosophy at the University of Massachusetts, structures these questions in the broader context of morality and social justice.

From the NYT:

Gary Gutting: You’ve taken a strong stand as an atheist, so you obviously don’t think there are any good reasons to believe in God. But I imagine there are philosophers whose rational abilities you respect who are theists. How do you explain their disagreement with you? Are they just not thinking clearly on this topic?

Louise Antony: I’m not sure what you mean by saying that I’ve taken a “strong stand as an atheist.” I don’t consider myself an agnostic; I claim to know that God doesn’t exist, if that’s what you mean.

G.G.: That is what I mean.

L.A.: O.K. So the question is, why do I say that theism is false, rather than just unproven? Because the question has been settled to my satisfaction. I say “there is no God” with the same confidence I say “there are no ghosts” or “there is no magic.” The main issue is supernaturalism — I deny that there are beings or phenomena outside the scope of natural law.

That’s not to say that I think everything is within the scope of human knowledge. Surely there are things not dreamt of in our philosophy, not to mention in our science – but that fact is not a reason to believe in supernatural beings. I think many arguments for the existence of a God depend on the insufficiencies of human cognition. I readily grant that we have cognitive limitations. But when we bump up against them, when we find we cannot explain something — like why the fundamental physical parameters happen to have the values that they have — the right conclusion to draw is that we just can’t explain the thing. That’s the proper place for agnosticism and humility.

But getting back to your question: I’m puzzled why you are puzzled how rational people could disagree about the existence of God. Why not ask about disagreements among theists? Jews and Muslims disagree with Christians about the divinity of Jesus; Protestants disagree with Catholics about the virginity of Mary; Protestants disagree with Protestants about predestination, infant baptism and the inerrancy of the Bible. Hindus think there are many gods while Unitarians think there is at most one. Don’t all these disagreements demand explanation too? Must a Christian Scientist say that Episcopalians are just not thinking clearly? Are you going to ask a Catholic if she thinks there are no good reasons for believing in the angel Moroni?

G.G.: Yes, I do think it’s relevant to ask believers why they prefer their particular brand of theism to other brands. It seems to me that, at some point of specificity, most people don’t have reasons beyond being comfortable with one community rather than another. I think it’s at least sometimes important for believers to have a sense of what that point is. But people with many different specific beliefs share a belief in God — a supreme being who made and rules the world. You’ve taken a strong stand against that fundamental view, which is why I’m asking you about that.

L.A.: Well I’m challenging the idea that there’s one fundamental view here. Even if I could be convinced that supernatural beings exist, there’d be a whole separate issue about how many such beings there are and what those beings are like. Many theists think they’re home free with something like the argument from design: that there is empirical evidence of a purposeful design in nature. But it’s one thing to argue that the universe must be the product of some kind of intelligent agent; it’s quite something else to argue that this designer was all-knowing and omnipotent. Why is that a better hypothesis than that the designer was pretty smart but made a few mistakes? Maybe (I’m just cribbing from Hume here) there was a committee of intelligent creators, who didn’t quite agree on everything. Maybe the creator was a student god, and only got a B- on this project.

In any case though, I don’t see that claiming to know that there is no God requires me to say that no one could have good reasons to believe in God. I don’t think there’s some general answer to the question, “Why do theists believe in God?” I expect that the explanation for theists’ beliefs varies from theist to theist. So I’d have to take things on a case-by-case basis.

I have talked about this with some of my theist friends, and I’ve read some personal accounts by theists, and in those cases, I feel that I have some idea why they believe what they believe. But I can allow there are arguments for theism that I haven’t considered, or objections to my own position that I don’t know about. I don’t think that when two people take opposing stands on any issue that one of them has to be irrational or ignorant.

G.G.: No, they may both be rational. But suppose you and your theist friend are equally adept at reasoning, equally informed about relevant evidence, equally honest and fair-minded — suppose, that is, you are what philosophers call epistemic peers: equally reliable as knowers. Then shouldn’t each of you recognize that you’re no more likely to be right than your peer is, and so both retreat to an agnostic position?

L.A.: Yes, this is an interesting puzzle in the abstract: How could two epistemic peers — two equally rational, equally well-informed thinkers — fail to converge on the same opinions? But it is not a problem in the real world. In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe.

G.G.: So is your point that we always have reason to think that people who disagree are not epistemic peers?

L.A.: It’s worse than that. The whole notion of epistemic peers belongs only to the abstract study of knowledge, and has no role to play in real life. Take the notion of “equal cognitive powers”: speaking in terms of real human minds, we have no idea how to seriously compare the cognitive powers of two people.

Read the entire article here.

The Magnificent Seven


Actually, these seven will not save your village from bandits. Nor will they ride triumphant into the sunset on horseback. These seven are more mundane, but they are nonetheless shrouded in a degree of mystery, albeit of a rather technical kind. These are the seven holders of the seven keys that control the Internet’s core directory — the Domain Name System. Without it, the Internet’s billions of users would not be able to browse or search or shop or email or text.

From the Guardian:

In a nondescript industrial estate in El Segundo, a boxy suburb in south-west Los Angeles just a mile or two from LAX international airport, 20 people wait in a windowless canteen for a ceremony to begin. Outside, the sun is shining on an unseasonably warm February day; inside, the only light comes from the glare of halogen bulbs.

There is a strange mix of accents – predominantly American, but smatterings of Swedish, Russian, Spanish and Portuguese can be heard around the room, as men and women (but mostly men) chat over pepperoni pizza and 75-cent vending machine soda. In the corner, an Asteroids arcade machine blares out tinny music and flashing lights.

It might be a fairly typical office scene, were it not for the extraordinary security procedures that everyone in this room has had to complete just to get here, the sort of measures normally reserved for nuclear launch codes or presidential visits. The reason we are all here sounds like the stuff of science fiction, or the plot of a new Tom Cruise franchise: the ceremony we are about to witness sees the coming together of a group of people, from all over the world, who each hold a key to the internet. Together, their keys create a master key, which in turn controls one of the central security measures at the core of the web. Rumours about the power of these keyholders abound: could their key switch off the internet? Or, if someone somehow managed to bring the whole system down, could they turn it on again?

The keyholders have been meeting four times a year, twice on the east coast of the US and twice here on the west, since 2010. Gaining access to their inner sanctum isn’t easy, but last month I was invited along to watch the ceremony and meet some of the keyholders – a select group of security experts from around the world. All have long backgrounds in internet security and work for various international institutions. They were chosen for their geographical spread as well as their experience – no one country is allowed to have too many keyholders. They travel to the ceremony at their own, or their employer’s, expense.

What these men and women control is the system at the heart of the web: the domain name system, or DNS. This is the internet’s version of a telephone directory – a series of registers linking web addresses to a series of numbers, called IP addresses. Without these addresses, you would need to know a long sequence of numbers for every site you wanted to visit. To get to the Guardian, for instance, you’d have to enter “77.91.251.10” instead of theguardian.com.
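
To make the phone-book analogy concrete, here is a minimal Python sketch that performs the same name-to-number translation using the standard library’s resolver. Note that the address printed today may well differ from the 77.91.251.10 quoted in the article, since sites change hosting over time.

import socket

# Ask the operating system's DNS resolver to translate a human-readable name
# into the numeric IP address that actually routes the request.
hostname = "theguardian.com"
print(f"{hostname} resolves to {socket.gethostbyname(hostname)}")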

The master key is part of a new global effort to make the whole domain name system secure and the internet safer: every time the keyholders meet, they are verifying that each entry in these online “phone books” is authentic. This prevents a proliferation of fake web addresses which could lead people to malicious sites, used to hack computers or steal credit card details.

The east and west coast ceremonies each have seven keyholders, with a further seven people around the world who could access a last-resort measure to reconstruct the system if something calamitous were to happen. Each of the 14 primary keyholders owns a traditional metal key to a safety deposit box, which in turn contains a smartcard, which in turn activates a machine that creates a new master key. The backup keyholders have something a bit different: smartcards that contain a fragment of code needed to build a replacement key-generating machine. Once a year, these shadow holders send the organisation that runs the system – the Internet Corporation for Assigned Names and Numbers (Icann) – a photograph of themselves with that day’s newspaper and their key, to verify that all is well.
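
The arrangement of keys, smartcards and backup fragments described above is, at heart, a way of splitting secret material so that no single person can use or rebuild it alone. ICANN’s real ceremony relies on hardware security modules activated by smartcards rather than anything this simple, so the toy XOR split below is only a sketch of the general idea; every function name and parameter here is invented for illustration.

import os
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret, n):
    # n-1 random shares, plus a final share chosen so that all n XOR back to the secret.
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def recombine(shares):
    return reduce(xor_bytes, shares)

key_material = os.urandom(32)           # stand-in for sensitive key material
shares = split_secret(key_material, 7)  # hand one share to each of seven keyholders
assert recombine(shares) == key_material  # every share is needed to reconstruct it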

The fact that the US-based, not-for-profit organisation Icann – rather than a government or an international body – has one of the biggest jobs in maintaining global internet security has inevitably come in for criticism. Today’s occasionally over-the-top ceremony (streamed live on Icann’s website) is intended to prove how seriously they are taking this responsibility. It’s one part The Matrix (the tech and security stuff) to two parts The Office (pretty much everything else).

For starters: to get to the canteen, you have to walk through a door that requires a pin code, a smartcard and a biometric hand scan. This takes you into a “mantrap”, a small room in which only one door at a time can ever be open. Another sequence of smartcards, handprints and codes opens the exit. Now you’re in the break room.

Already, not everything has gone entirely to plan. Leaning next to the Atari arcade machine, ex-state department official Rick Lamb, smartly suited and wearing black-rimmed glasses (he admits he’s dressed up for the occasion), is telling someone that one of the on-site guards had asked him out loud, “And your security pin is 9925, yes?” “Well, it was…” he says, with an eye-roll. Looking in our direction, he says it’s already been changed.

Lamb is now a senior programme manager for Icann, helping to roll out the new, secure system for verifying the web. This is happening fast, but it is not yet fully in play. If the master key were lost or stolen today, the consequences might not be calamitous: some users would receive security warnings, some networks would have problems, but not much more. But once everyone has moved to the new, more secure system (this is expected in the next three to five years), the effects of losing or damaging the key would be far graver. While every server would still be there, nothing would connect: it would all register as untrustworthy. The whole system, the backbone of the internet, would need to be rebuilt over weeks or months. What would happen if an intelligence agency or hacker – the NSA or Syrian Electronic Army, say – got hold of a copy of the master key? It’s possible they could redirect specific targets to fake websites designed to exploit their computers – although Icann and the keyholders say this is unlikely.

Standing in the break room next to Lamb is Dmitry Burkov, one of the keyholders, a brusque and heavy-set Russian security expert on the boards of several internet NGOs, who has flown in from Moscow for the ceremony. “The key issue with internet governance is always trust,” he says. “No matter what the forum, it always comes down to trust.” Given the tensions between Russia and the US, and Russia’s calls for new organisations to be put in charge of the internet, does he have faith in this current system? He gestures to the room at large: “They’re the best part of Icann.” I take it he means he likes these people, and not the wider organisation, but he won’t be drawn further.

It’s time to move to the ceremony room itself, which has been cleared for the most sensitive classified information. No electrical signals can come in or out. Building security guards are barred, as are cleaners. To make sure the room looks decent for visitors, an east coast keyholder, Anne-Marie Eklund Löwinder of Sweden, has been in the day before to vacuum with a $20 dustbuster.

We’re about to begin a detailed, tightly scripted series of more than 100 actions, all recorded to the minute using the GMT time zone for consistency. These steps are a strange mix of high-security measures lifted straight from a thriller (keycards, safe combinations, secure cages), coupled with more mundane technical details – a bit of trouble setting up a printer – and occasional bouts of farce. In short, much like the internet itself.

Read the entire article here.

Image: The Magnificent Seven, movie poster. Courtesy of Wikia.

Unification of Byzantine Fault Tolerance

The title reads rather elegantly. However, I have no idea what it means, and I challenge you to find meaning as well. You see, while your friendly editor typed the title, the words themselves came from a non-human author, who goes by the name SCIgen.

SCIgen is an automated scientific paper generator. Accessible via the internet, the SCIgen program generates utterly random nonsense, which includes an abstract, hypothesis, test results, detailed diagrams and charts, and even academic references. At first glance the output seems highly convincing. In fact, unscrupulous individuals have been using it to author fake submissions to scientific conferences and to generate bogus research papers for publication in academic journals.
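
SCIgen works, broadly speaking, by randomly expanding a hand-written context-free grammar until only words remain. The tiny Python sketch below is not SCIgen’s actual grammar; the rules and vocabulary are invented here purely to illustrate the mechanism.

import random

# Toy context-free grammar: each non-terminal maps to a list of possible expansions.
GRAMMAR = {
    "SENTENCE": [["We", "VERB", "the", "NOUN", "of", "NOUN", "."]],
    "VERB": [["confirm"], ["disprove"], ["visualize"]],
    "NOUN": [["refinement"], ["Byzantine fault tolerance"], ["rasterization"],
             ["the producer-consumer problem"]],
}

def expand(symbol):
    # Recursively replace non-terminals with a randomly chosen production.
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

print(expand("SENTENCE").replace(" .", "."))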

This says a great deal about the quality of some academic conferences and their peer-review processes (or lack thereof).

Access the SCIgen generator here.

Read more about the Unification of Byzantine Fault Tolerance — our very own scientific paper — below.

The Effect of Perfect Modalities on Hardware and Architecture

Bob Widgleton, Jordan LeBouth and Apropos Smythe

Abstract

The implications of pseudorandom archetypes have been far-reaching and pervasive. After years of confusing research into e-commerce, we demonstrate the refinement of rasterization, which embodies the confusing principles of cryptography [21]. We propose new modular communication, which we call Tither.

Table of Contents

1) Introduction
2) Principles
3) Implementation
4) Evaluation

5) Related Work

6) Conclusion

1  Introduction

The transistor must work. Our mission here is to set the record straight. On the other hand, a typical challenge in machine learning is the exploration of simulated annealing. Furthermore, an intuitive quandary in robotics is the confirmed unification of Byzantine fault tolerance and thin clients. Clearly, XML and Moore’s Law [22] interact in order to achieve the visualization of the location-identity split. This at first glance seems unexpected but has ample historical precedence.
We confirm not only that IPv4 can be made game-theoretic, homogeneous, and signed, but that the same is true for write-back caches. In addition, we view operating systems as following a cycle of four phases: location, location, construction, and evaluation. It should be noted that our methodology turns the stable communication sledgehammer into a scalpel. Despite the fact that it might seem unexpected, it always conflicts with the need to provide active networks to experts. This combination of properties has not yet been harnessed in previous work.
Nevertheless, this solution is fraught with difficulty, largely due to perfect information. In the opinions of many, the usual methods for the development of multi-processors do not apply in this area. By comparison, it should be noted that Tither studies event-driven epistemologies. By comparison, the flaw of this type of solution, however, is that red-black trees can be made efficient, linear-time, and replicated. This combination of properties has not yet been harnessed in existing work.
Here we construct the following contributions in detail. We disprove that although the well-known unstable algorithm for the compelling unification of I/O automata and interrupts by Ito et al. is recursively enumerable, the acclaimed collaborative algorithm for the investigation of 802.11b by Davis et al. [4] runs in ?( n ) time. We prove not only that neural networks and kernels are generally incompatible, but that the same is true for DHCP. we verify that while the foremost encrypted algorithm for the exploration of the transistor by D. Nehru [23] runs in ?( n ) time, the location-identity split and the producer-consumer problem are always incompatible.
The rest of this paper is organized as follows. We motivate the need for the partition table. Similarly, to fulfill this intent, we describe a novel approach for the synthesis of context-free grammar (Tither), arguing that IPv6 and write-back caches are continuously incompatible. We argue the construction of multi-processors. This follows from the understanding of the transistor that would allow for further study into robots. Ultimately, we conclude.

2  Principles

In this section, we present a framework for enabling model checking. We show our framework’s authenticated management in Figure 1. We consider a methodology consisting of n spreadsheets. The question is, will Tither satisfy all of these assumptions? Yes, but only in theory.

dia0.png

Figure 1: An application for the visualization of DHTs [24].

Furthermore, we assume that electronic theory can prevent compilers without needing to locate the synthesis of massive multiplayer online role-playing games. This is a compelling property of our framework. We assume that the foremost replicated algorithm for the construction of redundancy by John Kubiatowicz et al. follows a Zipf-like distribution. Along these same lines, we performed a day-long trace confirming that our framework is solidly grounded in reality. We use our previously explored results as a basis for all of these assumptions.

dia1.png

Figure 2: A decision tree showing the relationship between our framework and the simulation of context-free grammar.

Reality aside, we would like to deploy a methodology for how Tither might behave in theory. This seems to hold in most cases. Figure 1 depicts the relationship between Tither and linear-time communication. We postulate that each component of Tither enables active networks, independent of all other components. This is a key property of our heuristic. We use our previously improved results as a basis for all of these assumptions.

3  Implementation

Though many skeptics said it couldn’t be done (most notably Wu et al.), we propose a fully-working version of Tither. It at first glance seems unexpected but is supported by prior work in the field. We have not yet implemented the server daemon, as this is the least private component of Tither. We have not yet implemented the homegrown database, as this is the least appropriate component of Tither. It is entirely a significant aim but is derived from known results.

4  Evaluation

As we will soon see, the goals of this section are manifold. Our overall evaluation method seeks to prove three hypotheses: (1) that the World Wide Web no longer influences performance; (2) that an application’s effective ABI is not as important as median signal-to-noise ratio when minimizing median signal-to-noise ratio; and finally (3) that USB key throughput behaves fundamentally differently on our system. Our logic follows a new model: performance might cause us to lose sleep only as long as usability takes a back seat to simplicity constraints. Furthermore, our logic follows a new model: performance might cause us to lose sleep only as long as scalability constraints take a back seat to performance constraints. Only with the benefit of our system’s legacy code complexity might we optimize for performance at the cost of signal-to-noise ratio. Our evaluation approach will show that increasing the instruction rate of concurrent symmetries is crucial to our results.

4.1  Hardware and Software Configuration

figure0.png

Figure 3: Note that popularity of multi-processors grows as complexity decreases – a phenomenon worth exploring in its own right.

Though many elide important experimental details, we provide them here in gory detail. We instrumented a deployment on our network to prove the work of Italian mad scientist K. Ito. Had we emulated our underwater cluster, as opposed to deploying it in a chaotic spatio-temporal environment, we would have seen weakened results. For starters, we added 3 2GB optical drives to MIT’s decommissioned UNIVACs. This configuration step was time-consuming but worth it in the end. We removed 2MB of RAM from our 10-node testbed [15]. We removed more 2GHz Intel 386s from our underwater cluster. Furthermore, steganographers added 3kB/s of Internet access to MIT’s planetary-scale cluster.

figure1.png

Figure 4: These results were obtained by Noam Chomsky et al. [23]; we reproduce them here for clarity.

Tither runs on autogenerated standard software. We implemented our model checking server in x86 assembly, augmented with collectively wireless, noisy extensions. Our experiments soon proved that automating our Knesis keyboards was more effective than instrumenting them, as previous work suggested. Second, all of these techniques are of interesting historical significance; R. Tarjan and Andrew Yao investigated an orthogonal setup in 1967.

figure2.png

Figure 5: The average distance of our application, compared with the other applications.

4.2  Experiments and Results

figure3.png

Figure 6: The expected instruction rate of our application, as a function of popularity of replication.

figure4.png

Figure 7: Note that hit ratio grows as interrupt rate decreases – a phenomenon worth studying in its own right.

We have taken great pains to describe out evaluation setup; now, the payoff, is to discuss our results. That being said, we ran four novel experiments: (1) we ran von Neumann machines on 15 nodes spread throughout the underwater network, and compared them against semaphores running locally; (2) we measured database and instant messenger performance on our planetary-scale cluster; (3) we ran 87 trials with a simulated DHCP workload, and compared results to our courseware deployment; and (4) we ran 58 trials with a simulated RAID array workload, and compared results to our bioware simulation. All of these experiments completed without LAN congestion or access-link congestion.
Now for the climactic analysis of the second half of our experiments. Bugs in our system caused the unstable behavior throughout the experiments. Continuing with this rationale, bugs in our system caused the unstable behavior throughout the experiments. These expected time since 1935 observations contrast to those seen in earlier work [29], such as Alan Turing’s seminal treatise on RPCs and observed block size.
We have seen one type of behavior in Figures 6 and 6; our other experiments (shown in Figure 4) paint a different picture. Operator error alone cannot account for these results. Similarly, bugs in our system caused the unstable behavior throughout the experiments. Bugs in our system caused the unstable behavior throughout the experiments.
Lastly, we discuss the first two experiments. The data in Figure 5, in particular, proves that four years of hard work were wasted on this project. Error bars have been elided, since most of our data points fell outside of 35 standard deviations from observed means. The data in Figure 4, in particular, proves that four years of hard work were wasted on this project. Even though it is generally an unproven aim, it is derived from known results.

5  Related Work

Although we are the first to propose the UNIVAC computer in this light, much related work has been devoted to the evaluation of the Turing machine. Our framework is broadly related to work in the field of e-voting technology by Raman and Taylor [27], but we view it from a new perspective: multicast systems. A comprehensive survey [3] is available in this space. Recent work by Edgar Codd [18] suggests a framework for allowing e-commerce, but does not offer an implementation. Moore et al. [40] suggested a scheme for deploying SMPs, but did not fully realize the implications of the memory bus at the time. Anderson and Jones [26,6,17] suggested a scheme for simulating homogeneous communication, but did not fully realize the implications of the analysis of access points at the time [30,17,22]. Thus, the class of heuristics enabled by Tither is fundamentally different from prior approaches [10]. Our design avoids this overhead.

5.1  802.11 Mesh Networks

Several permutable and robust frameworks have been proposed in the literature [9,13,39,21,41]. Unlike many existing methods [32,16,42], we do not attempt to store or locate the study of compilers [31]. Obviously, comparisons to this work are unreasonable. Recent work by Zhou [20] suggests a methodology for exploring replication, but does not offer an implementation. Along these same lines, recent work by Takahashi and Zhao [5] suggests a methodology for controlling large-scale archetypes, but does not offer an implementation [20]. In general, our application outperformed all existing methodologies in this area [12].

5.2  Compilers

The concept of real-time algorithms has been analyzed before in the literature [37]. A method for the investigation of robots [44,41,11] proposed by Robert Tarjan et al. fails to address several key issues that our solution does answer. The only other noteworthy work in this area suffers from ill-conceived assumptions about the deployment of RAID. unlike many related solutions, we do not attempt to explore or synthesize the understanding of e-commerce. Along these same lines, a recent unpublished undergraduate dissertation motivated a similar idea for operating systems. Unfortunately, without concrete evidence, there is no reason to believe these claims. Ultimately, the application of Watanabe et al. [14,45] is a practical choice for operating systems [25]. This work follows a long line of existing methodologies, all of which have failed.

5.3  Game-Theoretic Symmetries

A major source of our inspiration is early work by H. Suzuki [34] on efficient theory [35,44,28]. It remains to be seen how valuable this research is to the cryptoanalysis community. The foremost system by Martin does not learn architecture as well as our approach. An analysis of the Internet [36] proposed by Ito et al. fails to address several key issues that Tither does answer [19]. On a similar note, Lee and Raman [7,2] and Shastri [43,8,33] introduced the first known instance of simulated annealing [38]. Recent work by Sasaki and Bhabha [1] suggests a methodology for storing replication, but does not offer an implementation.

6  Conclusion

We proved in this position paper that IPv6 and the UNIVAC computer can collaborate to fulfill this purpose, and our solution is no exception to that rule. Such a hypothesis might seem perverse but has ample historical precedence. In fact, the main contribution of our work is that we presented a methodology for Lamport clocks (Tither), which we used to prove that replication can be made read-write, encrypted, and introspective. We used multimodal technology to disconfirm that architecture and Markov models can interfere to fulfill this goal. we showed that scalability in our method is not a challenge. Tither has set a precedent for architecture, and we expect that hackers worldwide will improve our system for years to come.

References

[1]
Anderson, L. Constructing expert systems using symbiotic modalities. In Proceedings of the Symposium on Encrypted Modalities (June 1990).
[2]
Bachman, C. The influence of decentralized algorithms on theory. Journal of Homogeneous, Autonomous Theory 70 (Oct. 1999), 52-65.
[3]
Bachman, C., and Culler, D. Decoupling DHTs from DHCP in Scheme. Journal of Distributed, Distributed Methodologies 97 (Oct. 1999), 1-15.
[4]
Backus, J., and Kaashoek, M. F. The relationship between B-Trees and Smalltalk with Paguma. Journal of Omniscient Technology 6 (June 2003), 70-99.
[5]
Cocke, J. Deconstructing link-level acknowledgements using Samlet. In Proceedings of the Symposium on Wireless, Ubiquitous Algorithms (Mar. 2003).
[6]
Cocke, J., and Williams, J. Constructing IPv7 using random models. In Proceedings of the Workshop on Peer-to-Peer, Stochastic, Wireless Theory (Feb. 1999).
[7]
Dijkstra, E., and Rabin, M. O. Decoupling agents from fiber-optic cables in the transistor. In Proceedings of PODS (June 1993).
[8]
Engelbart, D., Lee, T., and Ullman, J. A case for active networks. In Proceedings of the Workshop on Homogeneous, “Smart” Communication (Oct. 1996).
[9]
Engelbart, D., Shastri, H., Zhao, S., and Floyd, S. Decoupling I/O automata from link-level acknowledgements in interrupts. Journal of Relational Epistemologies 55 (May 2004), 51-64.
[10]
Estrin, D. Compact, extensible archetypes. Tech. Rep. 2937/7774, CMU, Oct. 2001.
[11]
Fredrick P. Brooks, J., and Brooks, R. The relationship between replication and forward-error correction. Tech. Rep. 657/1182, UCSD, Nov. 2004.
[12]
Garey, M. I/O automata considered harmful. In Proceedings of NDSS (July 1999).
[13]
Gupta, P., Newell, A., McCarthy, J., Martinez, N., and Brown, G. On the investigation of fiber-optic cables. In Proceedings of the Symposium on Encrypted Theory (July 2005).
[14]
Hartmanis, J. Constant-time, collaborative algorithms. Journal of Metamorphic Archetypes 34 (Oct. 2003), 71-95.
[15]
Hennessy, J. A methodology for the exploration of forward-error correction. In Proceedings of SIGMETRICS (Mar. 2002).
[16]
Kahan, W., and Ramagopalan, E. Deconstructing 802.11b using FUD. In Proceedings of OOPSLA (Oct. 2005).
[17]
LeBout, J., and Anderson, T. a. The relationship between rasterization and robots using Faro. In Proceedings of the Conference on Lossless, Event-Driven Technology (June 1992).
[18]
LeBout, J., and Jones, V. O. IPv7 considered harmful. Journal of Heterogeneous, Low-Energy Archetypes 20 (July 2005), 1-11.
[19]
Lee, K., Taylor, O. K., Martinez, H. G., Milner, R., and Robinson, N. E. Capstan: Simulation of simulated annealing. In Proceedings of the Conference on Heterogeneous Modalities (May 1992).
[20]
Nehru, W. The impact of unstable methodologies on e-voting technology. In Proceedings of NDSS (July 1994).
[21]
Reddy, R. Improving fiber-optic cables and reinforcement learning. In Proceedings of the Workshop on Lossless Modalities (Mar. 1999).
[22]
Ritchie, D., Ritchie, D., Culler, D., Stearns, R., Bose, X., Leiserson, C., Bhabha, U. R., and Sato, V. Understanding of the Internet. In Proceedings of IPTPS (June 2001).
[23]
Sato, Q., and Smith, A. Decoupling Moore’s Law from hierarchical databases in SCSI disks. In Proceedings of IPTPS (Dec. 1997).
[24]
Shenker, S., and Thomas, I. Deconstructing cache coherence. In Proceedings of the Workshop on Scalable, Relational Modalities (Feb. 2004).
[25]
Simon, H., Tanenbaum, A., Blum, M., and Lakshminarayanan, K. An exploration of RAID using BordelaisMisuser. Tech. Rep. 98/30, IBM Research, May 1998.
[26]
Smith, R., Estrin, D., Thompson, K., Brown, X., and Adleman, L. Architecture considered harmful. In Proceedings of the Workshop on Flexible, “Fuzzy” Theory (Apr. 2005).
[27]
Sun, G. On the study of telephony. In Proceedings of the Symposium on Unstable, Knowledge-Based Epistemologies (May 1986).
[28]
Sutherland, I. Deconstructing systems. In Proceedings of ASPLOS (June 2000).
[29]
Suzuki, F. Y., Leary, T., Shastri, C., Lakshminarayanan, K., and Garcia-Molina, H. Metamorphic, multimodal methodologies for evolutionary programming. In Proceedings of the Workshop on Stable, Embedded Algorithms (Aug. 2005).
[30]
Takahashi, O., Gupta, W., and Hoare, C. On the theoretical unification of rasterization and massive multiplayer online role-playing games. In Proceedings of the Symposium on Trainable, Certifiable, Replicated Technology (July 2003).
[31]
Taylor, H., Morrison, R. T., Harris, Y., Bachman, C., Nygaard, K., Einstein, A., and Gupta, a. Byzantine fault tolerance considered harmful. In Proceedings of ASPLOS (Mar. 2003).
[32]
Thomas, X. K. Real-time, cooperative communication for e-business. In Proceedings of POPL (May 2004).
[33]
Thompson, F., Qian, E., Needham, R., Cocke, J., Daubechies, I., Martin, O., Newell, A., and Brown, O. Towards the understanding of consistent hashing. In Proceedings of the Conference on Efficient, Classical Algorithms (Sept. 1992).
[34]
Thompson, K. Simulating hash tables and DNS. IEEE JSAC 7 (Apr. 2001), 75-82.
[35]
Turing, A. Deconstructing IPv6 with ELOPS. In Proceedings of the Workshop on Atomic, Random Technology (Feb. 1995).
[36]
Turing, A., Minsky, M., Bhabha, C., and Sun, P. A methodology for the construction of courseware. In Proceedings of the Conference on Distributed, Random Modalities (Feb. 2004).
[37]
Ullman, J., and Ritchie, D. Distributed communication. In Proceedings of IPTPS (Nov. 2004).
[38]
Welsh, M., Schroedinger, E., Daubechies, I., and Shastri, W. A methodology for the analysis of hash tables. In Proceedings of OSDI (Oct. 2002).
[39]
White, V., and White, V. The influence of encrypted configurations on networking. Journal of Semantic, Flexible Theory 4 (July 2004), 154-198.
[40]
Wigleton, B., Anderson, G., Wang, Q., Morrison, R. T., and Codd, E. A synthesis of Web services. In Proceedings of IPTPS (Mar. 1999).
[41]
Wirth, N., and Hoare, C. A. R. Comparing DNS and checksums. OSR 310 (Jan. 2001), 159-191.
[42]
Zhao, B., Smith, A., and Perlis, A. Deploying architecture and Internet QoS. In Proceedings of NOSSDAV (July 2001).
[43]
Zhao, H. The effect of “smart” theory on hardware and architecture. In Proceedings of the USENIX Technical Conference (Apr. 2001).
[44]
Zheng, N. A methodology for the understanding of superpages. In Proceedings of SOSP (Dec. 2005).
[45]
Zheng, R., Smith, J., Chomsky, N., and Chandrasekharan, B. X. Comparing systems and redundancy with CandyUre. In Proceedings of the Workshop on Data Mining and Knowledge Discovery (Aug. 2003).

Apocalypse Now or Later?

Americans love their apocalypses. So, should our demise come at the hands of a natural catastrophe, hastened by human (in)action, or should it come courtesy of an engineered biological or nuclear disaster? You choose. Isn’t this so much fun, thinking about absolute extinction?

Ira Chernus, Professor of Religious Studies at the University of Colorado at Boulder, brings us a much-needed scholarly account of our love affair with all things apocalyptic. But our fascination with Armageddon — often driven by hope — does nothing to resolve the ultimate conundrum: regardless of the type of ending, it is unlikely that Bruce Willis will feature.

From TomDispatch / Salon:

Wherever we Americans look, the threat of apocalypse stares back at us.

Two clouds of genuine doom still darken our world: nuclear extermination and environmental extinction. If they got the urgent action they deserve, they would be at the top of our political priority list.

But they have a hard time holding our attention, crowded out as they are by a host of new perils also labeled “apocalyptic”: mounting federal debt, the government’s plan to take away our guns, corporate control of the Internet, the Comcast-Time Warner mergerocalypse, Beijing’s pollution airpocalypse, the American snowpocalypse, not to speak of earthquakes and plagues. The list of topics, thrown at us with abandon from the political right, left, and center, just keeps growing.

Then there’s the world of arts and entertainment where selling the apocalypse turns out to be a rewarding enterprise. Check out the website “Romantically Apocalyptic,” Slash’s album “Apocalyptic Love,” or the history-lite documentary “Viking Apocalypse” for starters. These days, mathematicians even have an “apocalyptic number.”

Yes, the A-word is now everywhere, and most of the time it no longer means “the end of everything,” but “the end of anything.” Living a life so saturated with apocalypses undoubtedly takes a toll, though it’s a subject we seldom talk about.

So let’s lift the lid off the A-word, take a peek inside, and examine how it affects our everyday lives. Since it’s not exactly a pretty sight, it’s easy enough to forget that the idea of the apocalypse has been a container for hope as well as fear. Maybe even now we’ll find some hope inside if we look hard enough.

A Brief History of Apocalypse

Apocalyptic stories have been around at least since biblical times, if not earlier. They show up in many religions, always with the same basic plot: the end is at hand; the cosmic struggle between good and evil (or God and the Devil, as the New Testament has it) is about to culminate in catastrophic chaos, mass extermination, and the end of the world as we know it.

That, however, is only Act I, wherein we wipe out the past and leave a blank cosmic slate in preparation for Act II: a new, infinitely better, perhaps even perfect world that will arise from the ashes of our present one. It’s often forgotten that religious apocalypses, for all their scenes of destruction, are ultimately stories of hope; and indeed, they have brought it to millions who had to believe in a better world a-comin’, because they could see nothing hopeful in this world of pain and sorrow.

That traditional religious kind of apocalypse has also been part and parcel of American political life since, in Common Sense, Tom Paine urged the colonies to revolt by promising, “We have it in our power to begin the world over again.”

When World War II — itself now sometimes called an apocalypse — ushered in the nuclear age, it brought a radical transformation to the idea. Just as novelist Kurt Vonnegut lamented that the threat of nuclear war had robbed us of “plain old death” (each of us dying individually, mourned by those who survived us), the theologically educated lamented the fate of religion’s plain old apocalypse.

After this country’s “victory weapon” obliterated two Japanese cities in August 1945, most Americans sighed with relief that World War II was finally over. Few, however, believed that a permanently better world would arise from the radioactive ashes of that war. In the 1950s, even as the good times rolled economically, America’s nuclear fear created something historically new and ominous — a thoroughly secular image of the apocalypse.  That’s the one you’ll get first if you type “define apocalypse” into Google’s search engine: “the complete final destruction of the world.” In other words, one big “whoosh” and then… nothing. Total annihilation. The End.

Apocalypse as utter extinction was a new idea. Surprisingly soon, though, most Americans were (to adapt the famous phrase of filmmaker Stanley Kubrick) learning how to stop worrying and get used to the threat of “the big whoosh.” With the end of the Cold War, concern over a world-ending global nuclear exchange essentially evaporated, even if the nuclear arsenals of that era were left ominously in place.

Meanwhile, another kind of apocalypse was gradually arising: environmental destruction so complete that it, too, would spell the end of all life.

This would prove to be brand new in a different way. It is, as Todd Gitlin has so aptly termed it, history’s first “slow-motion apocalypse.” Climate change, as it came to be called, had been creeping up on us “in fits and starts,” largely unnoticed, for two centuries. Since it was so different from what Gitlin calls “suddenly surging Genesis-style flood” or the familiar “attack out of the blue,” it presented a baffling challenge. After all, the word apocalypse had been around for a couple of thousand years or more without ever being associated in any meaningful way with the word gradual.
The eminent historian of religions Mircea Eliade once speculated that people could grasp nuclear apocalypse because it resembled Act I in humanity’s huge stock of apocalypse myths, where the end comes in a blinding instant — even if Act II wasn’t going to follow. This mythic heritage, he suggested, remains lodged in everyone’s unconscious, and so feels familiar.

But in a half-century of studying the world’s myths, past and present, he had never found a single one that depicted the end of the world coming slowly. This means we have no unconscious imaginings to pair it with, nor any cultural tropes or traditions that would help us in our struggle to grasp it.

That makes it so much harder for most of us even to imagine an environmentally caused end to life. The very category of “apocalypse” doesn’t seem to apply. Without those apocalyptic images and fears to motivate us, a sense of the urgent action needed to avert such a slowly emerging global catastrophe lessens.

All of that (plus of course the power of the interests arrayed against regulating the fossil fuel industry) might be reason enough to explain the widespread passivity that puts the environmental peril so far down on the American political agenda. But as Dr. Seuss would have said, that is not all! Oh no, that is not all.

Apocalypses Everywhere

When you do that Google search on apocalypse, you’ll also get the most fashionable current meaning of the word: “Any event involving destruction on an awesome scale; [for example] ‘a stock market apocalypse.’” Welcome to the age of apocalypses everywhere.

With so many constantly crying apocalyptic wolf or selling apocalyptic thrills, it’s much harder now to distinguish between genuine threats of extinction and the cheap imitations. The urgency, indeed the very meaning, of apocalypse continues to be watered down in such a way that the word stands in danger of becoming virtually meaningless. As a result, we find ourselves living in an era that constantly reflects premonitions of doom, yet teaches us to look away from the genuine threats of world-ending catastrophe.

Oh, America still worries about the Bomb — but only when it’s in the hands of some “bad” nation. Once that meant Iraq (even if that country, under Saddam Hussein, never had a bomb and in 2003, when the Bush administration invaded, didn’t even have a bomb program). Now, it means Iran — another country without a bomb or any known plan to build one, but with the apocalyptic stare focused on it as if it already had an arsenal of such weapons — and North Korea.

These days, in fact, it’s easy enough to pin the label “apocalyptic peril” on just about any country one loathes, even while ignoring friends, allies, and oneself. We’re used to new apocalyptic threats emerging at a moment’s notice, with little (or no) scrutiny of whether the A-word really applies.

What’s more, the Cold War era fixed a simple equation in American public discourse: bad nation + nuclear weapon = our total destruction. So it’s easy to buy the platitude that Iran must never get a nuclear weapon or it’s curtains. That leaves little pressure on top policymakers and pundits to explain exactly how a few nuclear weapons held by Iran could actually harm Americans.

Meanwhile, there’s little attention paid to the world’s largest nuclear arsenal, right here in the U.S. Indeed, America’s nukes are quite literally impossible to see, hidden as they are underground, under the seas, and under the wraps of “top secret” restrictions. Who’s going to worry about what can’t be seen when so many dangers termed “apocalyptic” seem to be in plain sight?

Environmental perils are among them: melting glaciers and open-water Arctic seas, smog-blinded Chinese cities, increasingly powerful storms, and prolonged droughts. Yet most of the time such perils seem far away and like someone else’s troubles. Even when dangers in nature come close, they generally don’t fit the images in our apocalyptic imagination. Not surprisingly, then, voices proclaiming the inconvenient truth of a slowly emerging apocalypse get lost in the cacophony of apocalypses everywhere. Just one more set of boys crying wolf and so remarkably easy to deny or stir up doubt about.

Death in Life

Why does American culture use the A-word so promiscuously? Perhaps we’ve been living so long under a cloud of doom that every danger now readily takes on the same lethal hue.

Psychiatrist Robert Lifton predicted such a state years ago when he suggested that the nuclear age had put us all in the grips of what he called “psychic numbing” or “death in life.” We can no longer assume that we’ll die Vonnegut’s plain old death and be remembered as part of an endless chain of life. Lifton’s research showed that the link between death and life had become, as he put it, a “broken connection.”

As a result, he speculated, our minds stop trying to find the vitalizing images necessary for any healthy life. Every effort to form new mental images only conjures up more fear that the chain of life itself is coming to a dead end. Ultimately, we are left with nothing but “apathy, withdrawal, depression, despair.”

If that’s the deepest psychic lens through which we see the world, however unconsciously, it’s easy to understand why anything and everything can look like more evidence that The End is at hand. No wonder we have a generation of American youth and young adults who take a world filled with apocalyptic images for granted.

Think of it as, in some grim way, a testament to human resiliency. They are learning how to live with the only reality they’ve ever known (and with all the irony we’re capable of, others are learning how to sell them cultural products based on that reality). Naturally, they assume it’s the only reality possible. It’s no surprise that “The Walking Dead,” a zombie apocalypse series, is their favorite TV show, since it reveals (and revels in?) what one TV critic called the “secret life of the post-apocalyptic American teenager.”

Perhaps the only thing that should genuinely surprise us is how many of those young people still manage to break through psychic numbing in search of some way to make a difference in the world.

Yet even in the political process for change, apocalypses are everywhere. Regardless of the issue, the message is typically some version of “Stop this catastrophe now or we’re doomed!” (An example: Stop the Keystone XL pipeline or it’s “game over”!) A better future is often implied between the lines, but seldom gets much attention because it’s ever harder to imagine such a future, no less believe in it.

No matter how righteous the cause, however, such a single-minded focus on danger and doom subtly reinforces the message of our era of apocalypses everywhere: abandon all hope, ye who live here and now.

Read the entire article here.

Image: Armageddon movie poster. Courtesy of Touchstone Pictures.

The Joy of New Technology

prosthetic-hand

We are makers. We humans love to create and invent. Some of our inventions are hideous, laughable or just plain evil — Twinkies, collateralized debt obligations and subprime mortgages, Agent Orange, hair extensions, spray-on tans, cluster bombs, diet water.

However, for every misguided invention there comes something truly great. This time, it’s a prosthetic hand that provides a sense of real feeling, courtesy of researchers at the Veterans Affairs Medical Center in Cleveland, Ohio.

From Technology Review:

Igor Spetic’s hand was in a fist when it was severed by a forging hammer three years ago as he made an aluminum jet part at his job. For months afterward, he felt a phantom limb still clenched and throbbing with pain. “Some days it felt just like it did when it got injured,” he recalls.

He soon got a prosthesis. But for amputees like Spetic, these are more tools than limbs. Because the prosthetics can’t convey sensations, people wearing them can’t feel when they have dropped or crushed something.

Now Spetic, 48, is getting some of his sensation back through electrodes that have been wired to residual nerves in his arm. Spetic is one of two people in an early trial that takes him from his home in Madison, Ohio, to the Cleveland Veterans Affairs Medical Center. In a basement lab, his prosthetic hand is rigged with force sensors that are plugged into 20 wires protruding from his upper right arm. These lead to three surgically implanted interfaces, seven millimeters long, with as many as eight electrodes apiece encased in a polymer, that surround three major nerves in Spetic’s forearm.

On a table, a nondescript white box of custom electronics does a crucial job: translating information from the sensors on Spetic’s prosthesis into a series of electrical pulses that the interfaces can translate into sensations. This technology “is 20 years in the making,” says the trial’s leader, Dustin Tyler, a professor of biomedical engineering at Case Western Reserve University and an expert in neural interfaces.

As of February, the implants had been in place and performing well in tests for more than a year and a half. Tyler’s group, drawing on years of neuroscience research on the signaling mechanisms that underlie sensation, has developed a library of patterns of electrical pulses to send to the arm nerves, varied in strength and timing. Spetic says that these different stimulus patterns produce distinct and realistic feelings in 20 spots on his prosthetic hand and fingers. The sensations include pressing on a ball bearing, pressing on the tip of a pen, brushing against a cotton ball, and touching sandpaper, he says. A surprising side effect: on the first day of tests, Spetic says, his phantom fist felt open, and after several months the phantom pain was “95 percent gone.”
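To make that translation step a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of lookup-and-scale logic such a box might perform: a force reading from one sensor site on the prosthesis is matched to a stimulation pattern from a small library and scaled by how hard the hand is pressing. The site names, frequencies and amplitudes below are invented for the sketch; none of them come from the Cleveland trial itself.

```python
# Illustrative sketch only: map force-sensor readings from a prosthetic hand
# to stimulation pulse parameters drawn from a small pattern "library".
# All names and numbers are hypothetical, not taken from the trial above.

from dataclasses import dataclass

@dataclass
class PulsePattern:
    frequency_hz: float       # how often pulses are delivered
    base_amplitude_ma: float  # baseline pulse strength, in milliamps

# Hypothetical library: one pattern per spot on the hand that should produce
# a distinct sensation (the article mentions 20 such spots).
PATTERN_LIBRARY = {
    "index_fingertip": PulsePattern(frequency_hz=100.0, base_amplitude_ma=0.8),
    "thumb_pad":       PulsePattern(frequency_hz=60.0,  base_amplitude_ma=1.0),
    "palm_center":     PulsePattern(frequency_hz=30.0,  base_amplitude_ma=1.2),
}

def force_to_pulses(sensor_site: str, force_newtons: float, max_force: float = 20.0) -> dict:
    """Scale one force reading into a (frequency, amplitude) stimulation command."""
    pattern = PATTERN_LIBRARY[sensor_site]
    # Clamp and normalize so light touches give weak stimulation and firm
    # grips give stronger, but bounded, stimulation.
    level = max(0.0, min(force_newtons, max_force)) / max_force
    return {
        "site": sensor_site,
        "frequency_hz": pattern.frequency_hz,
        "amplitude_ma": round(pattern.base_amplitude_ma * level, 3),
    }

if __name__ == "__main__":
    print(force_to_pulses("index_fingertip", 2.0))   # a light touch
    print(force_to_pulses("palm_center", 15.0))      # a firm squeeze
```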

On this day, Spetic faces a simple challenge: seeing whether he can feel a foam block. He dons a blindfold and noise-­canceling headphones (to make sure he’s relying only on his sense of touch), and then a postdoc holds the block inside his wide-open prosthetic hand and taps him on the shoulder. Spetic closes his prosthesis—a task made possible by existing commercial interfaces to residual arm muscles—and reports the moment he touches the block: success.

Read the entire article here.

Image: Prosthetic hand. Courtesy of MIT Technology Review / Veterans Affairs Medical Center.

Abraham Lincoln Was a Sham President


This is not the opinion of theDiagonal. Rather, it’s the view of the revisionist thinkers over at the so-called “News Leader”, Fox News. I purposely avoid commenting on news and political events, but once in a while a story is so jaw-droppingly incredible that your friendly editor cannot keep away from his keyboard. Which brings me to Fox News.

The latest diatribe from the 24/7 conservative think tank is that Lincoln actually caused the Civil War. According to Fox analyst Andrew Napolitano the Civil War was an unnecessary folly, and could have been avoided by Lincoln had he chosen to pay off the South or let slavery come to a natural end.

This is yet another example of the mindless, ideological drivel dished out on a daily basis by Fox. Are we next likely to see Fox defend Hitler’s “cleansing” of Europe as sound economic policy that the Allies should have let run its course? Ugh! One has to suppose that, to the collective imaginarium that is Fox, the present-day statistic of 30 million enslaved humans around the world is just another figment.

The one bright note to ponder about Fox and its finely-tuned propaganda machine comes from looking at its commercials. When the majority of its TV ads are for the over-60s — think Viagra, statins and catheters — you can sense that its aging demographic will soon sublimate to meet its alternate, heavenly reality.

From Salon:

“The Daily Show” had one of its best segments in a while on Monday night, ruthlessly and righteously taking Fox News legal analyst and libertarian Andrew Napolitano to task for using the airwaves to push his clueless and harmful revisionist understanding of the Civil War.

Jon Stewart and “senior black correspondent” Larry Wilmore criticized Napolitano for a Feb. 14 appearance on the Fox Business channel during which he called himself a “contrarian” when it comes to estimating former President Abraham Lincoln’s legacy and argued that the Civil War was unnecessary — and may not have even been about slavery, anyway!

“At the time that [Lincoln] was the president of the United States, slavery was dying a natural death all over the Western world,” Napolitano said. “Instead of allowing it to die, or helping it to die, or even purchasing the slaves and then freeing them — which would have cost a lot less money than the Civil War cost — Lincoln set about on the most murderous war in American history.”

Stewart quickly shredded this argument to pieces, noting that Lincoln spent much of 1862 trying (and failing) to convince border states to accept compensated emancipation, as well as the fact that the South’s relationship with chattel slavery was fundamentally not just an economic but also a social system, one that it would never willingly abandon.

Soon after, Stewart turned to Wilmore, who noted that the Confederacy was “so committed to slavery that Lincoln didn’t die of natural causes.” Wilmore next pointed out that people who “think Lincoln started the Civil War because the North was ready to kill to end slavery” are mistaken. “[T]he truth was,” Wilmore said, “the South was ready to die to keep slavery.”

Stewart and Wilmore next highlighted that Napolitano doesn’t hate all wars, and in fact has a history of praising the Revolutionary War as necessary and just. “So it was heroic to fight for the proposition that all men are created equal, but when there’s a war to enforce that proposition, that’s wack?” Wilmore asked. “You know, there’s something not right when you feel the only black thing worth fighting for is tea.”

As the final dagger, Stewart and Wilmore noted that Napolitano has ranted at length on Fox about how taxation is immoral and unjust, prompting Wilmore to elegantly outline the problems with Napolitano-style libertarianism in a single paragraph. Speaking to Napolitano, Wilmore said:

You think it’s immoral for the government to reach into your pocket, rip your money away from its warm home and claim it as its own property, money that used to enjoy unfettered freedom is now conscripted to do whatever its new owner tells it to. Now, I know this is going to be a leap, but you know that sadness and rage you feel about your money? Well, that’s the way some of us feel about people.

Read the entire story here.

Video courtesy of The Daily Show with Jon Stewart, Comedy Central.


FOMO Reshaping You and Your Network

In online social networks, fear of missing out (FOMO) and other negative feelings are greatly disproportionate to the good ones. The phenomenon is widespread and well documented. Compound this with the counterintuitive observation that your online friends will, on average, have more friends and be more successful than you, and you have a recipe for a growing, deep-seated inferiority complex. Add other behavioral characteristics that are peculiar to, or exaggerated by, online social networks and you have a more fundamental recipe — one that threatens the very fabric of the network itself. Just consider how online trolling, status lurking, persona curation, passive monitoring, stalking and deferred (dis-)liking are refashioning our behaviors and the networks themselves.

From ars technica:

I found out my new college e-mail address in 2005 from a letter in the mail. Right after opening the envelope, I went straight to the computer. I was part of a LiveJournal group made of incoming students, and we had all been eagerly awaiting our college e-mail addresses, which had a use above and beyond corresponding with professors or student housing: back then, they were required tokens for entry to the fabled thefacebook.com.

That was nine years ago, and Facebook has now been in existence for 10. But even in those early days, Facebook’s cultural impact can’t be overstated. A search for “Facebook” on Google Scholar alone now produces 1.2 million results from 2006 on; “Physics” only returns 456,000.

But in terms of presence, Facebook is flopping around a bit now. The ever-important “teens” despise it, and it’s not the runaway success, happy addiction, or awe-inspiring source of information it once was. We’ve curated our identities so hard and had enough experiences with unforeseen online conflict that Facebook can now feel more isolating than absorbing. But what we are dissatisfied with is what Facebook has been, not what it is becoming.

Even if the grand sociological experiment that was Facebook is now running a little dry, the company knows this—which is why it’s transforming Facebook into a completely different entity. And the cause of all this built-up disarray that’s pushing change? It’s us. To prove it, let’s consider the social constructs and weirdnesses Facebook gave rise to, how they ultimately undermined the site, and how these ideas are shaping Facebook into the company it is now and will become.

Cue that Randy Newman song

Facebook arrived late to the concept of online friending, long after researchers started wondering about the structure of these social networks. What Facebook did for friending, especially reciprocal friending, was write it so large that it became a common concern. How many friends you had, who did and did not friend you back, and who should friend each other first all became things that normal people worried about.

Once Facebook opened beyond colleges, it became such a one-to-one representation of an actual social network that scientists started to study it. They applied social theories like those of weak ties or identity creation to see how they played out sans, or in supplement to, face-to-face interactions.

In a 2007 study, when Facebook was still largely campus-bound, a group of researchers said that Facebook “appears to play an important role in the process by which students form and maintain social capital.” They were using it to keep in touch with old friends and “to maintain or intensify relationships characterized by some form of offline connection.”

This sounds mundane now, since Facebook is so integrated into much of our lives. Seeing former roommates or childhood friends posting updates to Facebook feels as commonplace as literally seeing them nearly every day back when we were still roommates at 20 or friends at eight.

But the ability to keep tabs on someone without having to be proactive about it—no writing an e-mail, making a phone call, etc.—became the unique selling factor of Facebook. Per the 2007 study above, Facebook became a rich opportunity for “convert[ing] latent ties into weak ties,” connections that are valuable because they are with people who are sufficiently distant socially to bring in new information and opportunities.

Some romantic pixels have been spilled about the way no one is ever lost to anyone anymore; most people, including ex-lovers, estranged family members, or missed connections are only a Wi-Fi signal away.

“Modern technology has made our worlds smaller, but perhaps it also has diminished life’s mysteries, and with them, some sense of romance,” writes David Vecsey in The New York Times. Vecsey cites a time when he tracked down a former lover “across two countries and an ocean,” something he would not have done in the absence of passive social media monitoring. “It was only in her total absence, in a total vacuum away from her, that I was able to appreciate the depth of love I felt.”

The art of the Facebook-stalk

While plenty of studies have been conducted on the productive uses of Facebook—forming or maintaining weak ties, supplementing close relationships, or fostering new, casual ones—there are plenty that also touch on the site as a means for passive monitoring. Whether it was someone we’d never met, a new acquaintance, or an unrequited infatuation, Facebook eventually had enough breadth that you could call up virtually anyone’s profile, if only to see how fat they’ve gotten.

One study referred to this process as “social investigation.” We developed particular behaviors to avoid creating suspicion: do not “like” anything by the object of a stalking session, or if we do like it, don’t “like” too quickly; be careful not to type a name we want to search into the status field by accident; set an object of monitoring as a “close friend,” even if they aren’t, so their updates show up without fail; friend their friends; surreptitiously visit profile pages multiple times a day in case we missed anything.

This passive monitoring is one of the more utilitarian uses of Facebook. It’s also one of the most addictive. The (fictionalized) movie The Social Network closes with Facebook’s founder, Mark Zuckerberg, gazing at the Facebook profile of a high-school crush. Facebook did away with the necessity of keeping tabs on anyone. You simply had all of the tabs, all of the time, with the most recent information whenever you wanted to look at them.

The book Digital Discourse cites a classic example of the Facebook stalk in an IM conversation between two teenagers:

“I just saw what Tanya Eisner wrote on your Facebook wall. Go to her house,” one says.
“Woah, didn’t even see that til right now,” replies the other.
“Haha it looks like I stalk you… which I do,” says the first.
“I stalk u too its ok,” comforts the second.

But even innocent, casual information recon in the form of a Facebook stalk can rub us the wrong way. Any instance of a Facebook interaction that ends with an unexpected third body’s involvement can taint the rest of users’ Facebook behavior, making us feel watched.

Digital Discourse states that “when people feel themselves to be the objects of stalking, creeping, or lurking by third parties, they express annoyance or even moral outrage.” It cites an example of another teenager who gets a wall post from a person she barely knows, and it explains something she wrote about in a status update. “Don’t stalk my status,” she writes in mocking command to another friend, as if talking to the interloper.

You are who you choose to be

“The advent of the Internet has changed the traditional conditions of identity production,” reads a study from 2008 on how people presented themselves on Facebook. People had been curating their presences online for a long time before Facebook, but the fact that Facebook required real names and, for a long time after its inception, association with an educational institution made researchers wonder if it would make people hew a little closer to reality.

But beyond the bounds of being tied to a real name, users still projected an idealized self to others; a type of “possible self,” or many possible selves, depending on their sharing settings. Rather than try to describe themselves to others, users projected a sort of aspirational identity.

People were more likely to associate themselves with cultural touchstones, like movies, books, or music, than really identify themselves. You might not say you like rock music, but you might write Led Zeppelin as one of your favorite bands, and everyone else can infer your taste in music as well as general taste and coolness from there.

These identity proxies also became vectors for seeking approval. “The appeal is as much to the likeability of my crowd, the desirability of my boyfriend, or the magic of my music as it is to the personal qualities of the Facebook users themselves,” said the study. The authors also noted that, for instance, users tended to post photos of themselves mostly in groups in social situations. Even the profile photos, which would ostensibly have a single subject, were socially styled.

As the study concluded, “identity is not an individual characteristic; it is not an expression of something innate in a person, it is rather a social product, the outcome of a given social environment and hence performed differently in varying contexts.” Because Facebook was so susceptible to this “performance,” so easily controlled and curated, it quickly became less about real people and more about highlight reels.

We came to Facebook to see other real people, but everyone, even casual users, saw it could be gamed for personal benefit. Inflicting our groomed identities on each other soon became its own problem.

Fear of missing out

A long-time problem of social networks has been that the bad feelings they can generate are greatly disproportional to good ones.

In strict terms of self-motivation, posting something and getting a good reception feels good. But most of Facebook use is watching other people post about their own accomplishments and good times. For a social network of 300 friends with an even distribution of auspicious life events, you are seeing 300 times as many good things happen to others as happen to you (of course, everyone has the same amount of good luck, but in bulk for the consumer, it doesn’t feel that way). If you were happy before looking at Facebook, or even after posting your own good news, you’re not now.
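The arithmetic in that last point is easy to check with a toy simulation: assuming, purely for illustration, that you and each of 300 friends post good news at the same daily rate, the feed shows roughly 300 times as much good news from others as from you. The probabilities and counts below are arbitrary assumptions, not figures from the article.

```python
# Toy simulation of the "300 times as many good things" arithmetic.
# All rates here are arbitrary assumptions chosen only to illustrate the point.

import random

def simulate_feed(num_friends: int = 300, days: int = 365,
                  daily_good_news_prob: float = 0.02, seed: int = 1) -> float:
    """Return the ratio of friends' good-news posts to your own over a year."""
    rng = random.Random(seed)
    my_events = 0
    friends_events = 0
    for _ in range(days):
        if rng.random() < daily_good_news_prob:
            my_events += 1
        for _ in range(num_friends):
            if rng.random() < daily_good_news_prob:
                friends_events += 1
    return friends_events / max(my_events, 1)

if __name__ == "__main__":
    ratio = simulate_feed()
    print(f"Good news seen from others vs. your own: roughly {ratio:.0f}x")
```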

The feelings of inadequacy did start to drive people back to Facebook. Even in the middle of our own vacations, celebration dinners, or weddings, we might check Facebook during or after to compare notes and see if we really had the best time possible.

That feeling became known as FOMO, “fear of missing out.” As Jenna Wortham wrote in The New York Times, “When we scroll through pictures and status updates, the worry that tugs at the corners of our minds is set off by the fear of regret… we become afraid that we’ve made the wrong decision about how to spend our time.”

Even if you had your own great stuff to tell Facebook about, someone out there is always doing better. And Facebook won’t let you forget. The brewing feeling of inferiority means users don’t post about stuff that might be too lame. They might start to self-censor, and then the bar for what is worth the “risk” of posting rises higher and higher. As people stop posting, there is less to see, less reason to come back and interact, like, or comment on other people’s material. Ultimately, people, in turn, have less reason to post.

Read the entire article here.

Gephyrophobes Not Welcome

Royal_Gorge_Bridge

A gephyrophobic person has a fear of crossing bridges. So, if that describes you, we’d strongly recommend avoiding the structures on this list of some of the world’s scariest bridges. For those who suffer no anxiety from either bridges or heights, and who crave endless vistas both horizontal and vertical, this list is for you. Our favorite: the suspension bridge over the Royal Gorge in Colorado.

From the Guardian:

From rickety rope walkways to spectacular feats of engineering, we take a look at some of the world’s scariest bridges.

Until 2001, the Royal Gorge bridge in Colorado was the highest bridge in the world. Built in 1929, the 291m-high structure is now a popular tourist attraction, not least because of the fact that it is situated within a theme park.

Read the entire story and see more images here.

Image: Royal Gorge, Colorado. Courtesy of Wikipedia / Hustvedt.


Influencing and Bullying

We sway our co-workers. We coach teams. We cajole our spouses and we parent our kids. But what distinguishes this behavior from more overt and negative forms of influence, such as bullying? It’s a question well worth exploring, since we are all bullies at some point — far more often than we tend to think. And, not surprisingly, this goes hand in hand with deceit.

From the NYT:

WHAT is the chance that you could get someone to lie for you? What about vandalizing public property at your suggestion?

Most of us assume that others would go along with such schemes only if, on some level, they felt comfortable doing so. If not, they’d simply say “no,” right?

Yet research suggests that saying “no” can be more difficult than we believe — and that we have more power over others’ decisions than we think.

Social psychologists have spent decades demonstrating how difficult it can be to say “no” to other people’s propositions, even when they are morally questionable — consider Stanley Milgram’s infamous experiments, in which participants were persuaded to administer what they believed to be dangerous electric shocks to a fellow participant.

Countless studies have subsequently shown that we find it similarly difficult to resist social pressure from peers, friends and colleagues. Our decisions regarding everything from whether to turn the lights off when we leave a room to whether to call in sick to take a day off from work are affected by the actions and opinions of our neighbors and colleagues.

But what about those times when we are the ones trying to get someone to act unethically? Do we realize how much power we wield with a simple request, suggestion or dare? New research by my students and me suggests that we don’t.

We examined this question in a series of studies in which we had participants ask strangers to perform unethical acts. Before making their requests, participants predicted how many people they thought would comply. In one study, 25 college students asked 108 unfamiliar students to vandalize a library book. Targets who complied wrote the word “pickle” in pen on one of the pages.

As in the Milgram studies, many of the targets protested. They asked the instigators to take full responsibility for any repercussions. Yet, despite their hesitation, a large portion still complied.

Most important for our research question, more targets complied than participants had anticipated. Our participants predicted that an average of 28.5 percent would go along. In fact, fully half of those who were approached agreed. Moreover, 87 percent of participants underestimated the number they would be able to persuade to vandalize the book.

In another study, we asked 155 participants to think about a series of ethical dilemmas — for example, calling in sick to work to attend a baseball game. One group was told to think about these misdeeds from the perspective of a person deciding whether to commit them, and to imagine receiving advice from a colleague suggesting they do it or not. Another group took the opposite side, and thought about them from the perspective of someone advising another person about whether or not to do each deed.

Those in the first group were strongly influenced by the advice they received. When they were urged to engage in the misdeed, they said they would be more comfortable doing so than when they were advised not to. Their average reported comfort level fell around the midpoint of a 7-point scale after receiving unethical advice, but fell closer to the low end after receiving ethical advice.

However, participants in the “advisory” role thought that their opinions would hold little sway over the other person’s decision, assuming that participants in the first group would feel equally comfortable regardless of whether they had received unethical or ethical advice.

Taken together, our research, which was recently published in the journal Personality and Social Psychology Bulletin, suggests that we often fail to recognize the power of social pressure when we are the ones doing the pressuring.

Notably, this tendency may be especially pronounced in cultures like the United States’, where independence is so highly valued. American culture idolizes individuals who stand up to peer pressure. But that doesn’t mean that most do; in fact, such idolatry may hide, and thus facilitate, compliance under social pressure, especially when we are the ones putting on the pressure.

Consider the roles in the Milgram experiments: Most people have probably fantasized about being one of the subjects and standing up to the pressure. But in daily life, we play the role of the metaphorical experimenter in those studies as often as we play the participant. We bully. We pressure others to blow off work to come out for a drink or stiff a waitress who is having a bad night. These suggestions are not always wrong or unethical, but they may impact others’ behaviors more than we realize.

Read the entire story here.

Mars Emigres Beware

The planners behind the proposed, private Mars One mission are still targeting 2024 for an initial settlement on the Red Planet. That’s now a mere 10 years away. As of this writing, the field of potential settlers has been whittled down to around 2,000 from an initial pool of about 250,000 would-be explorers. While the selection process and planning continue, other objects continue to target Mars as well. Large space rocks seem to be hitting the planet more frequently, and more recently, than was first thought. So, while such impacts are both beautiful and scientifically valuable, they may be rather unwelcome to the forthcoming human Martians.

From ars technica:

Yesterday [February 5, 2014], the team that runs the HiRISE camera on the Mars Reconnaissance Orbiter released the photo shown above. It’s a new impact crater on Mars, formed sometime early this decade. The crater at the center is about 30 meters in diameter, and the material ejected during its formation extends out as far as 15 kilometers.

The impact was originally spotted by the MRO’s Context Camera, a wide-field imaging system that (wait for it) provides the context—an image of the surrounding terrain—for the high-resolution images taken by HiRISE. The time window on the impact, between July 2010 and May 2012, simply represents the time between two different Context Camera photos of the same location. Once the crater was spotted, it took until November of 2013 for another pass of the region, at which point HiRISE was able to image it.
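The detective work here boils down to comparing two pictures of the same patch of ground taken at different times. As a purely illustrative sketch (not the MRO team’s actual pipeline), the snippet below flags pixels whose brightness changed markedly between two registered images; the arrays, sizes and threshold are arbitrary assumptions.

```python
# Toy before/after change detection: flag pixels that changed markedly between
# two registered images of the same terrain. Not the real MRO pipeline; the
# threshold and synthetic data are assumptions for illustration only.

import numpy as np

def flag_changes(before: np.ndarray, after: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return (row, col) coordinates where brightness changed by more than threshold."""
    diff = np.abs(after.astype(float) - before.astype(float))
    return np.argwhere(diff > threshold)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    before = rng.uniform(0.4, 0.6, size=(100, 100))  # bland "old" terrain
    after = before.copy()
    after[45:55, 45:55] -= 0.3                       # a fresh, darker patch of ejecta
    changed = flag_changes(before, after)
    print(f"{len(changed)} pixels changed; first few: {changed[:3].tolist()}")
```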

Read the entire article here.

Image: Impact crater from Mars Reconnaissance Orbiter. Courtesy of NASA / JPL.



A Quest For Skeuomorphic Noise

Toyota_Prius_III

Your Toyota Prius, or other hybrid or electric vehicle, is a good environmental citizen. It helps reduce pollution and carbon emissions, and does so rather efficiently. You and other eco-conscious owners should be proud.

But wait, not so fast. Your electric car may have a low carbon footprint, but it is a silent killer in waiting. It may be efficient, but it is far too quiet, and is thus something of a hazard for pedestrians, cyclists and other motorists — they don’t hear it approaching.

Cars like the Prius are quiet — in fact, too quiet for our own safety. So enterprising engineers are working to add artificial noise to the next generations of almost-silent cars. The irony is not lost: after years of trying to make cars quieter, engineers are now looking to make them noisier.

Perhaps the added noise could be configurable as an option for customers — a base option would sound like a Citroën 2CV, while a high-end model could sound like, well, a Ferrari or a classic Bugatti. Much better.

From Technology Review:

It was a pleasant June day in Munich, Germany. I was picked up at my hotel and driven to the country, farmland on either side of the narrow, two-lane road. Occasional walkers strode by, and every so often a bicyclist passed. We parked the car on the shoulder and joined a group of people looking up and down the road. “Okay, get ready,” I was told. “Close your eyes and listen.” I did so and about a minute later I heard a high-pitched whine, accompanied by a low humming sound: an automobile was approaching. As it came closer, I could hear tire noise. After the car had passed, I was asked my judgment of the sound. We repeated the exercise numerous times, and each time the sound was different. What was going on? We were evaluating sound designs for BMW’s new electric vehicles.

Electric cars are extremely quiet. The only sounds they make come from the tires, the air, and occasionally from the high-pitched whine of the electronics. Car lovers really like the silence. Pedestrians have mixed feelings, but blind people are greatly concerned. After all, they cross streets in traffic by relying upon the sounds of vehicles. That’s how they know when it is safe to cross. And what is true for the blind might also be true for anyone stepping onto the street while distracted. If the vehicles don’t make any sounds, they can kill. The United States National Highway Traffic Safety Administration determined that pedestrians are considerably more likely to be hit by hybrid or electric vehicles than by those with an internal-combustion engine. The greatest danger is when the hybrid or electric vehicles are moving slowly: they are almost completely silent.

Adding sound to a vehicle to warn pedestrians is not a new idea. For many years, commercial trucks and construction equipment have had to make beeping sounds when backing up. Horns are required by law, presumably so that drivers can use them to alert pedestrians and other drivers when the need arises, although they are often used as a way of venting anger and rage instead. But adding a continuous sound to a normal vehicle because it would otherwise be too quiet is a challenge.

What sound would you want? One group of blind people suggested putting some rocks into the hubcaps. I thought this was brilliant. The rocks would provide a natural set of cues, rich in meaning and easy to interpret. The car would be quiet until the wheels started to turn. Then the rocks would make natural, continuous scraping sounds at low speeds, change to the pitter-patter of falling stones at higher speeds. The frequency of the drops would increase with the speed of the car until the rocks ended up frozen against the circumference of the rim, silent. Which is fine: the sounds are not needed for fast-moving vehicles, because then the tire noise is audible. The lack of sound when the vehicle is not moving would be a problem, however.

The marketing divisions of automobile manufacturers thought the addition of artificial sounds would be a wonderful branding opportunity, so each car brand or model should have its own unique sound that captured just the car personality the brand wished to convey. Porsche added loudspeakers to its electric car prototype to give it the same throaty growl as its gasoline-powered cars. Nissan wondered whether a hybrid automobile should sound like tweeting birds. Some manufacturers thought all cars should sound the same, with standardized noises and sound levels, making it easier for everyone to learn how to interpret them. Some blind people thought they should sound like cars—you know, gasoline engines.

Skeuomorphic is the technical term for incorporating old, familiar ideas into new technologies, even though they no longer play a functional role. Skeuomorphic designs are often comfortable for traditionalists, and indeed the history of technology shows that new technologies and materials often slavishly imitate the old for no apparent reason except that it’s what people know how to do. Early automobiles looked like horse-driven carriages without the horses (which is also why they were called horseless carriages); early plastics were designed to look like wood; folders in computer file systems often look like paper folders, complete with tabs. One way of overcoming the fear of the new is to make it look like the old. This practice is decried by design purists, but in fact, it has its benefits in easing the transition from the old to the new. It gives comfort and makes learning easier. Existing conceptual models need only be modified rather than replaced. Eventually, new forms emerge that have no relationship to the old, but the skeuomorphic designs probably helped the transition.

When it came to deciding what sounds the new silent automobiles should generate, those who wanted differentiation ruled the day, yet everyone also agreed that there had to be some standards. It should be possible to determine that the sound is coming from an automobile, to identify its location, direction, and speed. No sound would be necessary once the car was going fast enough, in part because tire noise would be sufficient. Some standardization would be required, although with a lot of leeway. International standards committees started their procedures. Various countries, unhappy with the normally glacial speed of standards agreements and under pressure from their communities, started drafting legislation. Companies scurried to develop appropriate sounds, hiring psychologists, Hollywood sound designers, and experts in psychoacoustics.

The United States National Highway Traffic Safety Administration issued a set of principles along with a detailed list of requirements, including sound levels, spectra, and other criteria. The full document is 248 pages. The document states:

This standard will ensure that blind, visually-impaired, and other pedestrians are able to detect and recognize nearby hybrid and electric vehicles by requiring that hybrid and electric vehicles emit sound that pedestrians will be able to hear in a range of ambient environments and contain acoustic signal content that pedestrians will recognize as being emitted from a vehicle. The proposed standard establishes minimum sound requirements for hybrid and electric vehicles when operating under 30 kilometers per hour (km/h) (18 mph), when the vehicle’s starting system is activated but the vehicle is stationary, and when the vehicle is operating in reverse. The agency chose a crossover speed of 30 km/h because this was the speed at which the sound levels of the hybrid and electric vehicles measured by the agency approximated the sound levels produced by similar internal combustion engine vehicles. (Department of Transportation, 2013.)
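Purely as an illustration of the thresholds quoted above, the sketch below encodes the decision logic in a few lines: an alert sound is required when the vehicle is switched on and stationary, when it is reversing, or when it is moving forward below the 30 km/h crossover speed; above that speed, tire noise is deemed sufficient. The function and gear names are assumptions made for the sketch, not anything specified in the regulation.

```python
# Minimal sketch of the speed/state logic in the proposed standard quoted above.
# What sound to play, and at what level, is deliberately left out.

CROSSOVER_SPEED_KMH = 30.0  # above this, tire noise is considered sufficient

def pedestrian_alert_required(speed_kmh: float, gear: str, system_on: bool) -> bool:
    """Return True if a hybrid or electric vehicle must emit an alert sound."""
    if not system_on:
        return False
    if gear == "reverse":
        return True
    if speed_kmh == 0.0:
        return True  # stationary, but the starting system is activated
    return speed_kmh < CROSSOVER_SPEED_KMH

if __name__ == "__main__":
    print(pedestrian_alert_required(0.0, "drive", system_on=True))    # True
    print(pedestrian_alert_required(12.0, "drive", system_on=True))   # True
    print(pedestrian_alert_required(55.0, "drive", system_on=True))   # False
    print(pedestrian_alert_required(5.0, "reverse", system_on=True))  # True
```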

As I write this, sound designers are still experimenting. The automobile companies, lawmakers, and standards committees are still at work. Standards are not expected until 2014 or later, and then it will take considerable time for the millions of vehicles across the world to meet them. What principles should be used for the sounds of electric vehicles (including hybrids)? The sounds have to meet several criteria:

Alerting. The sound will indicate the presence of an electric vehicle.

Orientation. The sound will make it possible to determine where the vehicle is located, roughly how fast it is going, and whether it is moving toward or away from the listener.

Lack of annoyance. Because these sounds will be heard frequently even in light traffic and continually in heavy traffic, they must not be annoying. Note the contrast with sirens, horns, and backup signals, all of which are intended to be aggressive warnings. Such sounds are deliberately unpleasant, but because they are infrequent and relatively short in duration, they are acceptable. The challenge for electric vehicles is to make sounds that alert and orient, not annoy.

Standardization versus individualization. Standardization is necessary to ensure that all electric-vehicle sounds can readily be interpreted. If they vary too much, novel sounds might confuse the listener. Individualization has two functions: safety and marketing. From a safety point of view, if there were many vehicles on the street, individualization would allow them to be tracked. This is especially important at crowded intersections. From a marketing point of view, individualization can ensure that each brand of electric vehicle has its own unique characteristic, perhaps matching the quality of the sound to the brand image.

Read the entire article here.

Image: Toyota Prius III. Courtesy of Toyota / Wikipedia.

It’s a Woman’s World

[tube]V4UWxlVvT1A[/tube]

Well, not really. Though there is no doubting that the planet would look rather different if the genders had truly equal opportunities and rewards, or if women held all the power that tends to be concentrated in masculine hands.

A short movie by French actor and film-maker Eleonoré Pourriat imagines what our Western culture might resemble if the traditional female-male roles were reversed.

A portent of the future? Perhaps not, but thought-provoking nonetheless. One has to believe that if women held all the levers and trappings of power, they could do a better job than men. Or perhaps not. It may just be possible that power corrupts — regardless of the gender of the empowered.

From the Independent:

Imagine a world where it is the women who pee in the street, jog bare-chested and harass and physically assault the men. Such a world has just gone viral on the internet. A nine-minute satirical film made by Eleonoré Pourriat, the French actress, script-writer and director, has clocked up hundreds of thousands of views in recent days.

The movie, Majorité Opprimée or “Oppressed Majority”, was made in 2010. It caused a flurry of interest when it was first posted on YouTube early last year. But now its time seems to have come. “It is astonishing, just incredible that interest in my film has suddenly exploded in this way,” Ms Pourriat told The Independent. “Obviously, I have touched a nerve. Women in France, but not just in France, feel that everyday sexism has been allowed to go on for too long.”

The star of the short film is Pierre, who is played very convincingly by Pierre Bénézit. He is a slightly gormless stay-at-home father, who spends a day besieged by the casual or aggressive sexism of women in a female-dominated planet. The film, in French with English subtitles, begins in a jokey way and turns gradually, and convincingly, nasty. It is not played for cheap laughs. It has a Swiftian capacity to disturb by the simple trick of reversing roles.

Pierre, pushing his baby-buggy, is casually harassed by a bare-breasted female jogger. He meets a male, Muslim babysitter, who is forced by his wife to wear a balaclava in public. He is verbally abused – “Think I don’t see you shaking your arse at me?” – by a drunken female down-and-out. He is sexually assaulted and humiliated by a knife-wielding girl gang. (“Say your dick is small or I’ll cut off your precious jewels.”)

He is humiliated a second time by a policewoman, who implies that he invented the gang assault. “Daylight and no witnesses, that’s strange,” she says. As she takes Pierre’s statement, the policewoman patronises a pretty, young policeman. “I need a coffee, cutie.”

Pierre’s self-important working wife arrives to collect him. She comforts him at first, calling him “kitten” and “pumpkin”. When he complains that he can no longer stand the permanent aggression of a female-dominated society, she says that he is to blame because of the way he dresses: in short sleeves, flip-flops and Bermudas.

At the second, or third, time of asking, interest in Ms Pourriat’s highly charged little movie has exploded in recent days on social media and on feminist and anti-feminist websites on both sides of the Channel and on both sides of the Atlantic. Some men refuse to see the point. “Sorry, but I would adore to live such a life,” said one French male blogger. “To be raped by a gang of girls. Great! That’s every man’s fantasy.”

Ms Pourriat, 42, acts and writes scripts for comedy movies in France. This was her first film as director. “It is rooted absolutely in my own experience as a woman living in France,” she tells me. “I think French men are worse than men elsewhere, but the incredible success of the movie suggests that it is not just a French problem.

“What angers me is that many women seem to accept this kind of behaviour from men or joke about it. I had long wanted to make a film that would turn the situation on its head.”

Read the entire article here.

Video: Majorité Opprimée, or “Oppressed Majority,” by Eleonoré Pourriat.


Daddy, What Is Snow?

No-snow-on-slopes

Adults living at higher latitudes will remember snow falling during the cold seasons, but most will recall having seen more of it when they were younger. As climate change continues to shift global weather patterns and raise global temperatures, our children and grandchildren may have to make do with artificially made snow, or watch a historical documentary about the real thing, when they reach adulthood.

Our glaciers are retreating and snowcaps are melting. The snow is disappearing. This may be a boon to local governments, which can save precious dollars by discontinuing snow and ice removal. But for those of us who love to ski, snowboard and skate, or just throw snowballs, build snowmen with our kids or gasp in awe at an icy panorama — snow, you’ll be sorely missed.

From the NYT:

OVER the next two weeks, hundreds of millions of people will watch Americans like Ted Ligety and Mikaela Shiffrin ski for gold on the downhill alpine course. Television crews will pan across epic vistas of the rugged Caucasus Mountains, draped with brilliant white ski slopes. What viewers might not see is the 16 million cubic feet of snow that was stored under insulated blankets last year to make sure those slopes remained white, or the hundreds of snow-making guns that have been running around the clock to keep them that way.

Officials canceled two Olympic test events last February in Sochi after several days of temperatures above 60 degrees Fahrenheit and a lack of snowfall had left ski trails bare and brown in spots. That situation led the climatologist Daniel Scott, a professor of global change and tourism at the University of Waterloo in Ontario, to analyze potential venues for future Winter Games. His thought was that with a rise in the average global temperature of more than 7 degrees Fahrenheit possible by 2100, there might not be that many snowy regions left in which to hold the Games. He concluded that of the 19 cities that have hosted the Winter Olympics, as few as 10 might be cold enough by midcentury to host them again. By 2100, that number shrinks to 6.

The planet has warmed 1.4 degrees Fahrenheit since the 1800s, and as a result, snow is melting. In the last 47 years, a million square miles of spring snow cover has disappeared from the Northern Hemisphere. Europe has lost half of its Alpine glacial ice since the 1850s, and if climate change is not reined in, two-thirds of European ski resorts will be likely to close by 2100.

The same could happen in the United States, where in the Northeast, more than half of the 103 ski resorts may no longer be viable in 30 years because of warmer winters. As for the Western part of the country, it will lose an estimated 25 to 100 percent of its snowpack by 2100 if greenhouse gas emissions are not curtailed — reducing the snowpack in Park City, Utah, to zero and relegating skiing to the top quarter of Ajax Mountain in Aspen.

The facts are straightforward: The planet is getting hotter. Snow melts above 32 degrees Fahrenheit. The Alps are warming two to three times faster than the worldwide average, possibly because of global circulation patterns. Since 1970, the rate of winter warming per decade in the United States has been triple the rate of the previous 75 years, with the strongest trends in the Northern regions of the country. Nine of the 10 hottest years on record have occurred since 2000, and this winter is already looking to be one of the driest on record — with California at just 12 percent of its average snowpack in January, and the Pacific Northwest at around 50 percent.

To a skier, snowboarder or anyone who has spent time in the mountains, the idea of brown peaks in midwinter is surreal. Poets write of the grace and beauty by which snowflakes descend and transform a landscape. Powder hounds follow the 100-odd storms that track across the United States every winter, then drive for hours to float down a mountainside in the waist-deep “cold smoke” that the storms leave behind.

The snow I learned to ski on in northern Maine was more blue than white, and usually spewed from snow-making guns instead of the sky. I didn’t like skiing at first. It was cold. And uncomfortable.

Then, when I was 12, the mystical confluence of vectors that constitute a ski turn aligned, and I was hooked. I scrubbed toilets at my father’s boatyard on Mount Desert Island in high school so I could afford a ski pass and sold season passes in college at Mad River Glen in Vermont to get a free pass for myself. After graduating, I moved to Jackson Hole, Wyo., for the skiing. Four years later, Powder magazine hired me, and I’ve been an editor there ever since.

My bosses were generous enough to send me to five continents over the last 15 years, with skis in tow. I’ve skied the lightest snow on earth on the northern Japanese island of Hokkaido, where icy fronts spin off the Siberian plains and dump 10 feet of powder in a matter of days. In the high peaks of Bulgaria and Morocco, I slid through snow stained pink by grains of Saharan sand that the crystals formed around.

In Baja, Mexico, I skied a sliver of hardpack snow at 10,000 feet on Picacho del Diablo, sandwiched between the Sea of Cortez and the Pacific Ocean. A few years later, a crew of skiers and I journeyed to the whipsaw Taurus Mountains in southern Turkey to ski steep couloirs alongside caves where troglodytes lived thousands of years ago.

At every range I traveled to, I noticed a brotherhood among mountain folk: Say you’re headed into the hills, and the doors open. So it has been a surprise to see the winter sports community, as one of the first populations to witness effects of climate change in its own backyard, not reacting more vigorously and swiftly to reverse the fate we are writing for ourselves.

It’s easy to blame the big oil companies and the billions of dollars they spend on influencing the media and popular opinion. But the real reason is a lack of knowledge. I know, because I, too, was ignorant until I began researching the issue for a book on the future of snow.

I was floored by how much snow had already disappeared from the planet, not to mention how much was predicted to melt in my lifetime. The ski season in parts of British Columbia is four to five weeks shorter than it was 50 years ago, and in eastern Canada, the season is predicted to drop to less than two months by midcentury. At Lake Tahoe, spring now arrives two and a half weeks earlier, and some computer models predict that the Pacific Northwest will receive 40 to 70 percent less snow by 2050. If greenhouse gas emissions continue to rise — they grew 41 percent between 1990 and 2008 — then snowfall, winter and skiing will no longer exist as we know them by the end of the century.

The effect on the ski industry has already been significant. Between 1999 and 2010, low snowfall years cost the industry $1 billion and up to 27,000 jobs. Oregon took the biggest hit out West, with 31 percent fewer skier visits during low snow years. Next was Washington at 28 percent, Utah at 14 percent and Colorado at 7.7 percent.

Read the entire story here.

Image courtesy of USA Today.

13.6 Billion Versus 4004 BCE

The first number, 13.6 billion, is the age in years of the oldest known star in the cosmos. It was discovered recently by astronomers at the Australian National University’s Mount Stromlo Observatory using the SkyMapper telescope. The star lies in our Milky Way galaxy, about 6,000 light years away. A little closer to home, in Kentucky at the aptly named Creation Museum, the Synchronological Chart places the beginning of time and all things at 4004 BCE.

Interestingly enough, both Australia and Kentucky should not exist according to the flat-earth myth or the widespread pre-Columbus view of a world with an edge at the visible horizon. Yet the evolution versus creationism debates continue unabated, and the chasm between the two camps remains a mere 13.6 billion years, give or take a handful of millennia. Perhaps over time those who subscribe to reason and the scientific method will prevail — an apt example of survival of the most adaptable at work.

Hitch, we still miss you!

From ars technica:

In 1878, the American scholar and minister Sebastian Adams put the final touches on the third edition of his grandest project: a massive Synchronological Chart that covers nothing less than the entire history of the world in parallel, with the deeds of kings and kingdoms running along together in rows over 25 horizontal feet of paper. When the chart reaches 1500 BCE, its level of detail becomes impressive; at 400 CE it becomes eyebrow-raising; at 1300 CE it enters the realm of the wondrous. No wonder, then, that in their 2013 book Cartographies of Time: A History of the Timeline, authors Daniel Rosenberg and Anthony Grafton call Adams’ chart “nineteenth-century America’s surpassing achievement in complexity and synthetic power… a great work of outsider thinking.”

The chart is also the last thing that visitors to Kentucky’s Creation Museum see before stepping into the gift shop, where full-sized replicas can be purchased for $40.

That’s because, in the world described by the museum, Adams’ chart is more than a historical curio; it remains an accurate timeline of world history. Time is said to have begun in 4004 BCE with the creation of Adam, who went on to live for 930 more years. In 2348 BCE, the Earth was then reshaped by a worldwide flood, which created the Grand Canyon and most of the fossil record even as Noah rode out the deluge in an 81,000 ton wooden ark. Pagan practices at the eight-story high Tower of Babel eventually led God to cause a “confusion of tongues” in 2247 BCE, which is why we speak so many different languages today.

Adams notes on the second panel of the chart that “all the history of man, before the flood, extant, or known to us, is found in the first six chapters of Genesis.”

Ken Ham agrees. Ham, CEO of Answers in Genesis (AIG), has become perhaps the foremost living young Earth creationist in the world. He has authored more books and articles than seems humanly possible and has built AIG into a creationist powerhouse. He also made national headlines when the slickly modern Creation Museum opened in 2007.

He has also been looking for the opportunity to debate a prominent supporter of evolution.

And so it was that, as a severe snow and sleet emergency settled over the Cincinnati region, 900 people climbed into cars and wound their way out toward the airport to enter the gates of the Creation Museum. They did not come for the petting zoo, the zip line, or the seasonal camel rides, nor to see the animatronic Noah chortle to himself about just how easy it had really been to get dinosaurs inside his ark. They did not come to see The Men in White, a 22-minute movie that plays in the museum’s halls in which a young woman named Wendy sees that what she’s been taught about evolution “doesn’t make sense” and is then visited by two angels who help her understand the truth of six-day special creation. They did not come to see the exhibits explaining how all animals had, before the Fall of humanity into sin, been vegetarians.

They came to see Ken Ham debate TV presenter Bill Nye the Science Guy—an old-school creation v. evolution throwdown for the PowerPoint age. Even before it began, the debate had been good for both men. Traffic to AIG’s website soared by 80 percent, Nye appeared on CNN, tickets sold out in two minutes, and post-debate interviews were lined up with Piers Morgan Live and MSNBC.

While plenty of Ham supporters filled the parking lot, so did people in bow ties and “Bill Nye is my Homeboy” T-shirts. They all followed the stamped dinosaur tracks to the museum’s entrance, where a pack of AIG staffers wearing custom debate T-shirts stood ready to usher them into “Discovery Hall.”

Security at the Creation Museum is always tight; the museum’s security force is made up of sworn (but privately funded) Kentucky peace officers who carry guns, wear flat-brimmed state trooper-style hats, and operate their own K-9 unit. For the debate, Nye and Ham had agreed to more stringent measures. Visitors passed through metal detectors complete with secondary wand screenings, packages were prohibited in the debate hall itself, and the outer gates were closed 15 minutes before the debate began.

Inside the hall, packed with bodies and the blaze of high-wattage lights, the temperature soared. The empty stage looked—as everything at the museum does—professionally designed, with four huge video screens, custom debate banners, and a pair of lecterns sporting Mac laptops. 20 different video crews had set up cameras in the hall, and 70 media organizations had registered to attend. More than 10,000 churches were hosting local debate parties. As AIG technical staffers made final preparations, one checked the YouTube-hosted livestream—242,000 people had already tuned in before start time.

An AIG official took the stage eight minutes before start time. “We know there are people who disagree with each other in this room,” he said. “No cheering or—please—any disruptive behavior.”

At 6:59pm, the music stopped and the hall fell silent but for the suddenly prominent thrumming of the air conditioning. For half a minute, the anticipation was electric, all eyes fixed on the stage, and then the countdown clock ticked over to 7:00pm and the proceedings snapped to life. Nye, wearing his traditional bow tie, took the stage from the left; Ham appeared from the right. The two shook hands in the center to sustained applause, and CNN’s Tom Foreman took up his moderating duties.

Ham had won the coin toss backstage and so stepped to his lectern to deliver brief opening remarks. “Creation is the only viable model of historical science confirmed by observational science in today’s modern scientific era,” he declared, blasting modern textbooks for “imposing the religion of atheism” on students.

“We’re teaching people to think critically!” he said. “It’s the creationists who should be teaching the kids out there.”

And we were off.

Two kinds of science

Digging in the fossil fields of Colorado or North Dakota, scientists regularly uncover the bones of ancient creatures. No one doubts the existence of the bones themselves; they lie on the ground for anyone to observe or weigh or photograph. But in which animal did the bones originate? How long ago did that animal live? What did it look like? One of Ham’s favorite lines is that the past “doesn’t come with tags”—so the prehistory of a stegosaurus thigh bone has to be interpreted by scientists, who use their positions in the present to reconstruct the past.

For mainstream scientists, this is simply an obvious statement of our existential position. Until a real-life Dr. Emmett “Doc” Brown finds a way to power a DeLorean with a 1.21 gigawatt flux capacitor in order to shoot someone back through time to observe the flaring-forth of the Universe, the formation of the Earth, or the origins of life, the prehistoric past can’t be known except by interpretation. Indeed, this isn’t true only of prehistory; as Nye tried to emphasize, forensic scientists routinely use what they know of nature’s laws to reconstruct past events like murders.

For Ham, though, science is broken into two categories, “observational” and “historical,” and only observational science is trustworthy. In the initial 30-minute presentation of his position, Ham hammered the point home.

“You don’t observe the past directly,” he said. “You weren’t there.”

Ham spoke with the polish of a man who has covered this ground a hundred times before, has heard every objection, and has a smooth answer ready for each one.

When Bill Nye talks about evolution, Ham said, that’s “Bill Nye the Historical Science Guy” speaking—with “historical” being a pejorative term.

In Ham’s world, only changes that we can observe directly are the proper domain of science. Thus, when confronted with the issue of speciation, Ham readily admits that contemporary lab experiments on fast-breeding creatures like mosquitoes can produce new species. But he says that’s simply “micro-evolution” below the family level. He doesn’t believe that scientists can observe “macro-evolution,” such as the alteration of a lobe-finned fish into a tiger over millions of years.

Because they can’t see historical events unfold, scientists must rely on reconstructions of the past. Those might be accurate, but they simply rely on too many “assumptions” for Ham to trust them. When confronted during the debate with evidence from ancient trees which have more rings than there are years on the Adams Synchronological Chart, Ham simply shrugged.

“We didn’t see those layers laid down,” he said.

To him, the calculus of “one ring, one year” is merely an assumption when it comes to the past—an assumption possibly altered by cataclysmic events such as Noah’s flood.

In other words, “historical science” is dubious; we should defer instead to the “observational” account of someone who witnessed all past events: God, said to have left humanity an eyewitness account of the world’s creation in the book of Genesis. All historical reconstructions should thus comport with this more accurate observational account.

Mainstream scientists don’t recognize this divide between observational and historical ways of knowing (much as they reject Ham’s distinction between “micro” and “macro” evolution). Dinosaur bones may not come with tags, but neither does observed contemporary reality—think of a doctor presented with a set of patient symptoms, who then has to interpret what she sees in order to arrive at a diagnosis.

Given that the distinction between two kinds of science provides Ham’s key reason for accepting the “eyewitness account” of Genesis as a starting point, it was unsurprising to see Nye take generous whacks at the idea. You can’t observe the past? “That’s what we do in astronomy,” said Nye in his opening presentation. Since light takes time to get here, “All we can do in astronomy is look at the past. By the way, you’re looking at the past right now.”

Those in the present can study the past with confidence, Nye said, because natural laws are generally constant and can be used to extrapolate into the past.

“This idea that you can separate the natural laws of the past from the natural laws you have now is at the heart of our disagreement,” Nye said. “For lack of a better word, it’s magical. I’ve appreciated magic since I was a kid, but it’s not what we want in mainstream science.”

How do scientists know that these natural laws are correctly understood in all their complexity and interplay? What operates as a check on their reconstructions? That’s where the predictive power of evolutionary models becomes crucial, Nye said. Those models of the past should generate predictions which can then be verified—or disproved—through observations in the present.

Read the entire article here.
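Nye’s point about astronomy is worth a quick back-of-the-envelope check against the numbers in this post: light from a star roughly 6,000 light years away left it roughly 6,000 years ago, which puts its departure at around 4000 BCE, right at the edge of the chart’s proposed beginning of time. The short Python sketch below merely restates that arithmetic; the observation year and the print-out are mine, not anything from the article.

    # Back-of-the-envelope sketch (mine, not the article's): light travel time
    # means telescopes always look into the past. One light year of distance
    # equals one year of lookback, so the arithmetic is trivial.

    OBSERVATION_YEAR = 2014      # roughly when the star was observed
    star_distance_ly = 6_000     # distance quoted for the ancient star

    lookback = star_distance_ly                 # years of lookback
    year = OBSERVATION_YEAR - lookback          # astronomical year the light left
    label = f"{-year} BCE" if year < 0 else f"{year} CE"
    print(f"The light we see today left the star around {label}")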

MondayMap: Mississippi is Syria; Colorado is Slovenia

US-life-expectancy

A fascinating map re-imagines life expectancy in the United States, courtesy of Olga Khazan over at the Atlantic, drawing on data from measureofamerica.org. The premise of the map is a simple one: match the average life expectancy of each state of the union with that of a country having a similar figure. Voila. The lowest life expectancy, 75 years, belongs to Mississippi, which equates with Syria. The highest, 81.3 years, is found in Hawaii, which matches Cyprus.

From the Atlantic:

American life expectancy has leapt up some 30 years in the past century, and we now live roughly 79.8 years on average. That’s not terrible, but it’s not fantastic either: We rank 35th in the world as far as lifespan, nestled right between Costa Rica and Chile. But looking at life expectancy by state, it becomes clear that where you live in America, at least to some extent, determines when you’ll die.

Here, I’ve found the life expectancy for every state to the tenth of a year using the data and maps from the Measure of America, a nonprofit group that tracks human development. Then, I paired it up with the nearest country by life expectancy from the World Health Organization’s 2013 data. When there was no country with that state’s exact life expectancy, I paired it with the nearest matching country, which was always within two-tenths of a year.

There’s profound variation by state, from a low of 75 years in Mississippi to a high of 81.3 in Hawaii. Mostly, we resemble tiny, equatorial hamlets like Kuwait and Barbados. At our worst, we look more like Malaysia or Oman, and at our best, like the United Kingdom. No state approaches the life expectancies of most European countries or some Asian ones. Icelandic people can expect to live a long 83.3 years, and that’s nothing compared to the Japanese, who live well beyond 84.

Life expectancy can be causal, a factor of diet, environment, medical care, and education. But it can also be recursive: People who are chronically sick are less likely to become wealthy, and thus less likely to live in affluent areas and have access to the great doctors and Whole-Foods kale that would have helped them live longer.

It’s worth noting that the life expectancy for certain groups within the U.S. can be much higher—or lower—than the norm. The life expectancy for African Americans is, on average, 3.8 years shorter than that of whites. Detroit has a life expectancy of just 77.6 years, but that city’s Asian Americans can expect to live 89.3 years.

Read the entire article here.
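The pairing approach Khazan describes is simple enough to sketch: for each state, find the country whose life expectancy is closest. A minimal Python sketch follows; the handful of figures in it are illustrative placeholders drawn from the numbers quoted above, not the full Measure of America or WHO datasets, and the helper name is my own.

    # Minimal sketch of the state-to-country pairing described above: for each
    # US state, pick the country with the closest life expectancy. The figures
    # below are placeholders for illustration, not the real datasets.

    state_life_expectancy = {
        "Mississippi": 75.0,
        "Colorado": 80.0,      # placeholder value
        "Hawaii": 81.3,
    }

    country_life_expectancy = {
        "Syria": 75.0,
        "Slovenia": 80.0,      # placeholder value
        "United Kingdom": 81.0,
        "Cyprus": 81.3,
    }

    def nearest_country(state_value: float) -> str:
        """Return the country whose life expectancy is closest to state_value."""
        return min(country_life_expectancy,
                   key=lambda country: abs(country_life_expectancy[country] - state_value))

    for state, years in state_life_expectancy.items():
        print(f"{state} ({years} years) -> {nearest_country(years)}")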

Business Decision-Making Welcomes Science

data-visualization-ayasdi

It is likely that business will never eliminate gut instinct from the decision-making process. However, as data, now big data, increasingly pervades every crevice of every organization, data-driven decision-making will become the norm. As this happens, more and more businesses find themselves employing data scientists to help filter, categorize, mine and analyze these mountains of data in meaningful ways.

The caveat, of course, is that big data, and an even bigger reliance on it, require subject matter expertise and analysts with critical thinking skills and sound judgement — data cannot be used blindly.

From Technology Review:

Throughout history, innovations in instrumentation—the microscope, the telescope, and the cyclotron—have repeatedly revolutionized science by improving scientists’ ability to measure the natural world. Now, with human behavior increasingly reliant on digital platforms like the Web and mobile apps, technology is effectively “instrumenting” the social world as well. The resulting deluge of data has revolutionary implications not only for social science but also for business decision making.

As enthusiasm for “big data” grows, skeptics warn that overreliance on data has pitfalls. Data may be biased and is almost always incomplete. It can lead decision makers to ignore information that is harder to obtain, or make them feel more certain than they should. The risk is that in managing what we have measured, we miss what really matters—as Vietnam-era Secretary of Defense Robert McNamara did in relying too much on his infamous body count, and as bankers did prior to the 2007–2009 financial crisis in relying too much on flawed quantitative models.

The skeptics are right that uncritical reliance on data alone can be problematic. But so is overreliance on intuition or ideology. For every Robert McNamara, there is a Ron Johnson, the CEO whose disastrous tenure as the head of JC Penney was characterized by his dismissing data and evidence in favor of instincts. For every flawed statistical model, there is a flawed ideology whose inflexibility leads to disastrous results.

So if data is unreliable and so is intuition, what is a responsible decision maker supposed to do? While there is no correct answer to this question—the world is too complicated for any one recipe to apply—I believe that leaders across a wide range of contexts could benefit from a scientific mind-set toward decision making.

A scientific mind-set takes as its inspiration the scientific method, which at its core is a recipe for learning about the world in a systematic, replicable way: start with some general question based on your experience; form a hypothesis that would resolve the puzzle and that also generates a testable prediction; gather data to test your prediction; and finally, evaluate your hypothesis relative to competing hypotheses.

The scientific method is largely responsible for the astonishing increase in our understanding of the natural world over the past few centuries. Yet it has been slow to enter the worlds of politics, business, policy, and marketing, where our prodigious intuition for human behavior can always generate explanations for why people do what they do or how to make them do something different. Because these explanations are so plausible, our natural tendency is to want to act on them without further ado. But if we have learned one thing from science, it is that the most plausible explanation is not necessarily correct. Adopting a scientific approach to decision making requires us to test our hypotheses with data.

While data is essential for scientific decision making, theory, intuition, and imagination remain important as well—to generate hypotheses in the first place, to devise creative tests of the hypotheses that we have, and to interpret the data that we collect. Data and theory, in other words, are the yin and yang of the scientific method—theory frames the right questions, while data answers the questions that have been asked. Emphasizing either at the expense of the other can lead to serious mistakes.

Also important is experimentation, which doesn’t mean “trying new things” or “being creative” but quite specifically the use of controlled experiments to tease out causal effects. In business, most of what we observe is correlation—we do X and Y happens—but often what we want to know is whether or not X caused Y. How many additional units of your new product did your advertising campaign cause consumers to buy? Will expanded health insurance coverage cause medical costs to increase or decline? Simply observing the outcome of a particular choice does not answer causal questions like these: we need to observe the difference between choices.

Replicating the conditions of a controlled experiment is often difficult or impossible in business or policy settings, but increasingly it is being done in “field experiments,” where treatments are randomly assigned to different individuals or communities. For example, MIT’s Poverty Action Lab has conducted over 400 field experiments to better understand aid delivery, while economists have used such experiments to measure the impact of online advertising.

Although field experiments are not an invention of the Internet era—randomized trials have been the gold standard of medical research for decades—digital technology has made them far easier to implement. Thus, as companies like Facebook, Google, Microsoft, and Amazon increasingly reap performance benefits from data science and experimentation, scientific decision making will become more pervasive.

Nevertheless, there are limits to how scientific decision makers can be. Unlike scientists, who have the luxury of withholding judgment until sufficient evidence has accumulated, policy makers or business leaders generally have to act in a state of partial ignorance. Strategic calls have to be made, policies implemented, reward or blame assigned. No matter how rigorously one tries to base one’s decisions on evidence, some guesswork will be required.

Exacerbating this problem is that many of the most consequential decisions offer only one opportunity to succeed. One cannot go to war with half of Iraq and not the other just to see which policy works out better. Likewise, one cannot reorganize the company in several different ways and then choose the best. The result is that we may never know which good plans failed and which bad plans worked.

Read the entire article here.

Image: Screenshot of Iris, Ayasdi’s data-visualization tool. Courtesy of Ayasdi / Wired.
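The article’s distinction between correlation and causation lends itself to a small worked example. The Python sketch below follows the recipe it describes: form a hypothesis (a campaign increases purchases), randomly assign simulated users to treatment and control, and estimate the causal effect as the difference in group averages. Everything here is simulated, and the effect size is invented purely for illustration; this is a sketch of the general technique, not of any real campaign or dataset.

    import random

    # Minimal sketch of a randomized (A/B) field experiment, as described in the
    # excerpt. All data are simulated; the "campaign lift" is an invented
    # parameter, not a measurement.

    random.seed(42)

    def simulate_purchases(saw_campaign: bool) -> int:
        """Simulated purchases: baseline behaviour plus a small lift if exposed."""
        baseline = random.gauss(2.0, 1.0)        # purchases without any campaign
        lift = 0.3 if saw_campaign else 0.0      # hypothetical causal effect
        return max(0, round(baseline + lift))

    # Random assignment is what lets the difference be read as causal,
    # rather than as mere correlation.
    treatment = [simulate_purchases(True) for _ in range(10_000)]
    control = [simulate_purchases(False) for _ in range(10_000)]

    def mean(values):
        return sum(values) / len(values)

    estimated_effect = mean(treatment) - mean(control)
    print(f"Estimated additional purchases per exposed user: {estimated_effect:.2f}")

In practice the same comparison would be accompanied by a significance test, but the logic of the design is the randomized comparison itself.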

Mr. Magorium’s Real Life Toy Emporium

tim-rowett

We are all children at heart. Unfortunately many of us are taught to suppress or abandon our dreams and creativity as a prerequisite for entering adulthood. However, a few manage to keep the wonder of their inner child alive.

Tim Rowett is one such person; through his toys he brings smiles and re-awakens memories in many of us who have since forgotten how to play and imagine. Though I would take issue with Wired’s characterization of Mr. Rowett as an “eccentric”: eccentricity is not a label that I’d apply to a person who remains true to his or her earlier self.

From Wired (UK):

When Wired.co.uk visited Tim Rowett’s flat in Twickenham, nothing had quite prepared us for the cabinet of curiosities we found ourselves walking into. Old suitcases overflowing with toys and knick-knacks were meticulously labelled, dated and stacked on top of one another from room to room, floor to ceiling. Every bookshelf, corner and cupboard had been stripped of whatever its original purpose might have been, and replaced with the task of storing Tim’s 25,000 toys, which he’s been collecting for over 50 years.

For the last five years Tim has been entertaining a vast and varied audience of millions on YouTube, becoming a perhaps surprising viral success. Taking a small selection of his toys each week to and from a studio in Buckinghamshire — which also happens to be an 18th century barn — he’s steadily built up a following of the curious, the charmed and the fanatic.

If you’re a regular user of Reddit, or perhaps occasionally find yourself in “the weird place” on YouTube after one too many clicks through the website’s dubious “related videos” section, then you’ve probably already come across Tim in one form or another. With more than 28 million views and hundreds of thousands of subscribers, he’s certainly no small presence.

You won’t know him as Tim, though. In fact, unless you’ve deliberately gone out of your way, you won’t know very much about Tim at all — he’s a private man, who’s far more interested in entertaining and educating viewers with his endless collection of toys and gadgets, which often have mathematically or scientifically curious fundamental principles, than he is in bothering you with fussy details like his full name.

Greeted with a warm and familiar hello, Tim offered us a cup of tea, a biscuit and a seat by the fire. “Toys, everywhere, toys,” he said, looking round the room as he sat down. “I see myself as an hourglass. A large part of me is 112, a small part is my physical age and the last part is a 12-year-old boy.”

This unique mix of old and new — both literally and figuratively — certainly displays itself in his videos, of which there are upwards of 500, rarely more than 10 minutes in length. The formula is refreshingly simple. Tim sits at a table, demonstrates how a particular toy works, and provides background information on the piece before explaining how the mechanism inside (if it has one) functions — a particular delight for the scientifically minded collector: “The mechanism is the key thing,” he explained, “and some of them are quite remarkable. If a child breaks a toy I often think ‘oh wonderful’ because it means I can get into it.”

The apparently simple facade of the show is slightly deceptive, however — Tim works with two ex-BBC producers, Hendrik Ball and George Auckland, who are responsible for editing and filming the videos. Hendrik’s passion for science (fuelled by his BSc at Bristol) ultimately landed him a job as a producer at the BBC, which he kept for 25 years, specialising in science and educational material. Hendrik has his own remarkable history in tech, having written the first website for the BBC that ever accompanied a television programme (called Multimedia Business), back in 1996, making him and George “a little nucleus of knowledge of multimedia in our department at that time”.

With few opportunities presenting themselves at the BBC to expand their newly developed skills in HTML, the two hatched a plan to create a website called Grand Illusions, which would not only sell many of the various toys and gadgets Tim came across in his collection, but would also experiment with video, with Tim as the presenter: “George and I wanted to get some more first-hand experience of running a website, which would feed into our BBC work,” said Hendrik, “so we had this idea, which closely involved a bottle of Rioja — wilder rumours say there were two bottles — and we came up with Grand Illusions. Within about a week we’d finished the website and at one point we were getting more hits than the BBC education website.”

Having spent only two hours with Tim, we could see why Hendrik and George were so keen to get him in front of the camera. During our time together, Tim played up to his role as the restless prestidigitator, which has afforded him such great success online — “I’m a child philosopher,” he said, as he waved a parallax-inspired business card in front of us. “You can either explore the world outside, as people do,” he placed a tiny cylindrical metal tube in my hand, “or you can explore the world inside, which is equally meaningful in my mind — there are still dragons and dangers and treasures on the inside as well as the outside world.” He then suggested throwing the cylinder in the air, and it burst into a large magic wand.

This constant conjuring was what initially piqued Hendrik’s interest: “He’s a master at it. Whenever he goes anywhere he’ll have a few toys on him. If there’s ever a lull he’ll produce one and give a quick demonstration and then everyone wants a go but, just as the excitement is peaking, Tim will bring out the next one.”

On one occasion, after a meal, Tim inflated a large balloon outside of a restaurant using a helium cylinder he stores in the boot of his car. He attached a sparkler to the balloon, lit it and then let the balloon float off into the sky. “It was an impressive end to the evening,” says Hendrik.

When we asked Hendrik what he thought the appeal of Tim’s channel was (a channel on which nearly two million people have watched a video about Japanese zip bags, and a further million one about a spinning gun), he stressed that sometimes Tim’s apparent innocence worked in their favour. “Tim produced a toy some while ago, which looked like a revolver but in black rubber. It has a wire coming out of it and there’s a battery at the other end — when you press a button the end of the revolver sort of wiggles,” says Hendrik, who assures us that Tim bought this from a toy shop and has the original packaging to prove it. He also bought a rather large rubbery heart, which kind of throbs when you push a button.

Read the entire story here.

Image: Tim Rowett / Grand Illusions. Courtesy of Wired UK.