120 Years of Best Movies

So, if you have some time to spare, mine the IMDb movie database for trends and patterns buried in the gazillions of movie reviews. Then parse the results for the most positively mentioned movie of each year — going back to the beginning of public cinema. Then post the results on Reddit. That’s what monoglot did for us a couple of weeks ago. The results show the best movies by popular consent, not by critical acclaim — but fascinating nonetheless. My favorite goes to the vintage year of 1964: Stanley Kubrick’s Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb. It’s a classic, very dark comedy, and all the more hysterical because it’s very close to the truth.

From Reddit:

Year Film Top Votes All Votes Rating
1894 Edison Kinetoscopic Record of a Sneeze 181 824 6.1
1895 Employees Leaving the Lumière Factory 449 2809 6.9
1896 Arrival of a Train at La Ciotat 735 3676 7.3
1897 Leaving Jerusalem by Railway 53 334 6.6
1898 Four Heads Are Better Than One 326 1254 7.7
1899 The Kiss in the Tunnel 51 505 5.9
1900 The One-Man Band 82 1021 7.1
1901 The India Rubber Head 91 1133 7.2
1902 A Trip to the Moon 7563 17189 8.2
1903 The Great Train Robbery 1403 7795 7.4
1904 An Impossible Voyage 388 1615 7.7
1905 Le diable noir 163 1016 7.2
1906 Dream of a Rarebit Fiend 93 931 6.8
1907 Ben Hur 101 336 5.7
1908 Fantasmagorie 102 1015 7.0
1909 The Devilish Tenant 159 661 7.5
1910 Frankenstein 144 1805 6.6
1911 Winsor McCay, the Famous Cartoonist of the N.Y. Herald and His Moving Comics 155 860 7.3
1912 The Revenge of a Kinematograph Cameraman 400 1332 7.9
1913 Fantomas 111 1110 6.8
1914 Tillie’s Punctured Romance 892 2230 7.4
1915 The Birth of a Nation 4121 13736 6.9
1916 Intolerance 3280 8632 8.1
1917 The Immigrant 966 3715 7.8
1918 A Dog’s Life 860 3307 7.8
1919 Broken Blossoms 2089 5804 7.7
1920 The Cabinet of Dr. Caligari 13131 28545 8.1
1921 The Kid 18501 40219 8.4
1922 Nosferatu 21126 55596 8.0
1923 Safety Last! 4569 9933 8.3
1924 Sherlock Jr. 7707 16754 8.3
1925 The Gold Rush 20720 45044 8.3
1926 The General 17175 37337 8.3
1927 Metropolis 37077 80602 8.4
1928 The Passion of Joan of Arc 9826 19651 8.3
1929 Un chien andalou 9507 25019 7.9
1930 All Quiet on the Western Front 18611 40458 8.1
1931 City Lights 38960 69572 8.7
1932 Freaks 7740 25801 8.0
1933 King Kong 21296 56042 8.0
1934 It Happened One Night 21284 46270 8.3
1935 Bride of Frankenstein 9697 25518 7.9
1936 Modern Times 50487 90156 8.6
1937 Snow White and the Seven Dwarfs 25843 92297 7.7
1938 Bringing Up Baby 14224 37432 8.1
1939 The Wizard of Oz 79226 208490 8.2
1940 The Great Dictator 40192 87374 8.5
1941 Citizen Kane 127586 227833 8.5
1942 Casablanca 165578 295675 8.6
1943 Shadow of a Doubt 10359 36995 8.0
1944 Double Indemnity 32626 70925 8.5
1945 Brief Encounter 9876 21469 8.1
1946 It’s a Wonderful Life 114199 196894 8.7
1947 Miracle on 34th Street 9205 24223 7.9
1948 Bicycle Thieves 29153 63377 8.4
1949 The Third Man 39394 85640 8.4
1950 Sunset Blvd. 54848 101571 8.6
1951 A Streetcar Named Desire 29419 63954 8.1
1952 Singin’ in the Rain 58094 107582 8.4
1953 Roman Holiday 31896 69340 8.1
1954 Seven Samurai 113482 171942 8.8
1955 The Night of the Hunter 21862 47527 8.1
1956 The Searchers 19109 50286 8.0
1957 12 Angry Men 192641 291880 8.9
1958 Vertigo 80687 175406 8.5
1959 North by Northwest 76067 165364 8.5
1960 Psycho 135723 295051 8.6
1961 Breakfast at Tiffany’s 25338 90494 7.8
1962 Lawrence of Arabia 75643 140080 8.4
1963 The Great Escape 54982 119526 8.3
1964 Dr. Strangelove 146779 262105 8.6
1965 For a Few Dollars More 45628 103701 8.4
1966 The Good, the Bad and the Ugly 233024 353066 9.0
1967 The Graduate 61087 160755 8.1
1968 2001: A Space Odyssey 164849 294374 8.3
1969 Butch Cassidy and the Sundance Kid 39801 110558 8.2
1970 Patton 29139 63345 8.1
1971 A Clockwork Orange 208014 385212 8.4
1972 The Godfather 604775 817264 9.2
1973 The Exorcist 87140 229317 8.0
1974 The Godfather: Part II 355223 538216 9.1
1975 One Flew Over the Cuckoo’s Nest 313325 489570 8.8
1976 Taxi Driver 160636 349209 8.4
1977 Star Wars 364912 629158 8.7
1978 The Deer Hunter 79985 173881 8.2
1979 Alien 179377 389950 8.5
1980 The Empire Strikes Back 325241 560760 8.8
1981 Raiders of the Lost Ark 217026 471795 8.6
1982 Blade Runner 160519 348955 8.3
1983 Return of the Jedi 203856 443166 8.4
1984 The Terminator 148056 411266 8.1
1985 Back to the Future 218885 475838 8.5
1986 Aliens 162067 352320 8.5
1987 Full Metal Jacket 126512 332925 8.4
1988 Die Hard 154758 429882 8.3
1989 Indiana Jones and the Last Crusade 159712 362981 8.3
1990 Goodfellas 327955 512429 8.8
1991 The Silence of the Lambs 313522 580596 8.6
1992 Reservoir Dogs 208201 452611 8.4
1993 Schindler’s List 345845 596284 8.9
1994 The Shawshank Redemption 870630 1176527 9.3
1995 Se7en 371390 687759 8.7
1996 Trainspotting 130009 342128 8.2
1997 Titanic 213075 560724 7.7
1998 Saving Private Ryan 322571 597354 8.6
1999 Fight Club 519243 895246 8.9
2000 Memento 325477 602735 8.6
2001 The Fellowship of the Ring 572464 867369 8.9
2002 The Two Towers 438736 756441 8.8
2003 The Return of the King 554928 840800 8.9
2004 Eternal Sunshine of the Spotless Mind 217921 473742 8.4
2005 Batman Begins 302015 656554 8.3
2006 The Departed 321600 595555 8.5
2007 No Country for Old Men 198718 431995 8.2
2008 The Dark Knight 753903 1142277 9.0
2009 Inglourious Basterds 256945 558576 8.3
2010 Inception 618118 936543 8.8
2011 Intouchables 181019 282842 8.6
2012 The Dark Knight Rises 437472 754262 8.6
2013 Gravity 151512 329373 8.2
2014 The Lego Movie 25934 48025 8.4
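As a rough illustration (this is not monoglot's actual script, which isn't shown; the row format is simply assumed from the table above), rows like these can be parsed, and the top-rated film per decade extracted, with a few lines of Python:

```python
# Parse rows shaped like the table above:
# "<year> <title words...> <top votes> <all votes> <rating>"
def parse_row(line):
    parts = line.split()
    year = int(parts[0])
    top_votes = int(parts[-3])       # the last three fields are numeric,
    all_votes = int(parts[-2])       # so everything between the year and
    rating = float(parts[-1])        # them is the (multi-word) title
    title = " ".join(parts[1:-3])
    return year, title, top_votes, all_votes, rating

# A small sample of rows from the table
rows = [
    "1972 The Godfather 604775 817264 9.2",
    "1977 Star Wars 364912 629158 8.7",
    "1994 The Shawshank Redemption 870630 1176527 9.3",
]

# Keep the highest-rated film seen in each decade
best = {}
for line in rows:
    year, title, top, total, rating = parse_row(line)
    decade = year // 10 * 10
    if decade not in best or rating > best[decade][1]:
        best[decade] = (title, rating)

for decade in sorted(best):
    print(decade, best[decade])
```

Taking the numeric fields from the right keeps multi-word titles intact, since only the last three columns are guaranteed to be numbers.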

Read the entire post here.

Image: Slim Pickens as Major T.J. “King” Kong riding a nuclear bomb to oblivion, from the movie Dr. Strangelove. Courtesy of Wikipedia.


We Are Part Selfie, Part Voyeur

I would take issue with the Atlantic’s story below: citizen journalist as documentarian. Without doubt, filming someone in danger or an emergency and then posting the video on YouTube adds an in-the-moment authenticity. The news event becomes more personal, more identifiable. Yet it is more troubling than positive. It removes us from direct involvement in the event, turning us all into passive observers. And in legitimizing the role of the observer — through pageviews, likes and re-tweets — it lessens the impetus to participate actively, to assist and to help. Selfie replaces selflessness.

From the Atlantic:

Yesterday, as a five-alarm fire engulfed a new apartment complex in Houston, a construction worker found himself in pretty much the last place he’d want to be: trapped on a ledge, feet from the flames. As he waited, helplessly, to be rescued, others waited with him. The construction site was across the street from an office building, and workers flocked to the windows to see the drama unfold. One of them filmed it. You can see some of their images reflected in the video that resulted, above.

Things ended as well as they could have for the trapped man; he escaped, and no injuries were reported as a result of the fire. In the video, the scene playing out on that ledge vaguely foreshadows this outcome: The person whose life is in danger—who is standing, trapped, as flames lick at the walls next to him—seems relatively calm.

What we hear, instead, is the commentary—the exchanges of people who are watching the scene unfold from a safe distance. And that commentary is … banal. Deeply (and almost profoundly) so. In the same way that your commentary, or mine, might well be were we watching the same scene. Here are some of the sentiments expressed by the onlookers of this terrifyingly unfolding drama:

“OMG.”

“Oh, Jesus.”

“This guy is on the frickin’ ledge.”

“He can’t get out, ‘cuz he can’t get out the door.”

This is not to criticize the people watching the scene unfold—the people whose commentary, almost literally, upstages the drama of the burning building and the man trapped on its ledge. Again, my own comments, on witnessing the same scene, would probably sound similar. (Though I do like to flatter myself that I’d save the “cheap apartment” hilarity until after the threat of a man being burned alive had officially ended.)

It’s worth noting, though, what the real estate humor here hints at: the chaos of tragedy as it’s experienced by real people, in real time. The confusion that is so aptly captured by a video like this, shot on a smartphone and posted to YouTube. The same kind of caught chaos we saw with that fertilizer plant in Texas. And with that asteroid exploding in the skies above Russia. And with, for that matter, the Hindenburg disaster.

Compare those ad hoc representations of tragedy to our more traditional ways of knowing tragedy as an aesthetic, and video-taped, reality: through moving images provided by TV news, by Hollywood, by professionals who are trained to keep their mouths shut. On YouTube, as shot by amateurs on the scene, our experience of disaster instead features a Greek chorus of “OMGs” and “Unbelievables.” More and more of our portrayals of catastrophe—and of the dramas that prevent catastrophe—are now mediated in this way: by other people. People who are shocked and scared and empathetic and, in the best and worst of ways, unthinking. People who, even if they tried, couldn’t keep quiet.

Read the entire story here.


Need Some Exercise? Laugh

Your sense of humor and wit will keep your brain active and nimble. It will endear you to friends (often), family (usually) and bosses (sometimes). In addition, there is growing evidence that being an amateur (or professional) comedian, or just a connoisseur of good jokes, will help you physically as well.

From WSJ:

“I just shot an elephant in my pajamas,” goes the old Groucho Marx joke. “How he got in my pajamas I don’t know.”

You’ve probably heard that one before, or something similar. For example, while viewing polling data for the 2008 presidential election on Comedy Central, Stephen Colbert deadpanned, “If I’m reading this graph correctly…I’d be very surprised.”

Zingers like these aren’t just good lines. They reveal something unusual about how the mind operates—and they show us how humor works. Simply put, the brain likes to jump the gun. We are always guessing where things are going, and we often get it wrong. But this isn’t necessarily bad. It’s why we laugh.

Humor is a form of exercise—a way of keeping the brain engaged. Mr. Colbert’s line is a fine example of this kind of mental calisthenics. If he had simply observed that polling data are hard to interpret, you would have heard crickets chirping. Instead, he misdirected his listeners, leading them to expect ponderous analysis and then bolting in the other direction to declare his own ignorance. He got a laugh as his audience’s minds caught up with him and enjoyed the experience of being fooled.

We benefit from taxing our brains with the mental exercise of humor, much as we benefit from the physical exercise of a long run or a tough tennis match. Comedy extends our mental stamina and improves our mental flexibility. A 1976 study by Avner Ziv of Tel Aviv University found that those who listened to a comedy album before taking a creativity test performed 20% better than those who weren’t exposed to the routine beforehand. In 1987, researchers at the University of Maryland found that watching comedy more than doubles our ability to solve brain teasers, like the so-called Duncker candle problem, which challenges people to attach a candle to a wall using only a book of matches and a box of thumbtacks. Research published in 1998 by psychologist Heather Belanger of the College of William and Mary even suggests that humor improves our ability to mentally rotate imaginary objects in our heads—a key test of spatial thinking ability.

The benefits of humor don’t stop with increased intelligence and creativity. Consider the “cold pressor test,” in which scientists ask subjects to submerge their hands in water cooled to just above the freezing mark.

This isn’t dangerous, but it does allow researchers to measure pain tolerance—which varies, it turns out, depending on what we’ve been doing before dunking our hands. How long could you hold your hand in 35-degree water after watching 10 minutes of Bill Cosby telling jokes? The answer depends on your own pain tolerance, but I can promise that it is longer than it would be if you had instead watched a nature documentary.

Like exercise, humor helps to prepare the mind for stressful events. A study done in 2000 by Arnold Cann, a psychologist at the University of North Carolina, had subjects watch 16 minutes of stand-up comedy before viewing “Faces of Death”—the notorious 1978 shock film depicting scene after scene of gruesome deaths. Those who watched the comedy routine before the grisly film reported significantly less psychological distress than those who watched a travel show instead. The degree to which humor can inoculate us from stress is quite amazing (though perhaps not as amazing as the fact that Dr. Cann got his experiment approved by his university’s ethical review board).

This doesn’t mean that every sort of humor is helpful. Taking a dark, sardonic attitude toward life can be unhealthy, especially when it relies on constant self-punishment. (Rodney Dangerfield: “My wife and I were happy for 20 years. Then we met.”) According to Nicholas Kuiper of the University of Western Ontario, people who resort to this kind of humor experience higher rates of depression than their peers, along with higher anxiety and lower self-esteem. Enjoying a good laugh is healthy, so long as you yourself aren’t always the target.

Having an active sense of humor helps us to get more from life, both cognitively and emotionally. It allows us to exercise our brains regularly, looking for unexpected and pleasing connections even in the face of difficulties or hardship. The physicist Richard Feynman called this “the kick of the discovery,” claiming that the greatest joy of his life wasn’t winning the Nobel Prize—it was the pleasure of discovering new things.

Read the entire story here.

Image: Duck Soup, promotional movie poster (1933). Courtesy of Wikipedia.


Are You in the 18 Percent? A Cave Beckons

According to a recent survey, 18 percent of U.S. citizens believe that the sun revolves around the earth. Another survey suggests that 30 percent believe in the literal “truth” of the Bible, and 40 percent believe in intelligent design. The surveys, apparently, were of functioning adults.

I have to suspect that a similar number of adults believe in the fat reducing power of soap.

A number of vociferous advocates of creationism-as-science have recently taken to the airwaves to demand equal time — believing their (pseudo)-scientific views should stand on a par with real science.

Neil deGrasse Tyson, astrophysicist and presenter of the remade Cosmos series, recently provided his eloquent take on these scientific naysayers:

“If you don’t know science in the 21st century, just move back to the cave, because that’s where we’re going to leave you as we move forward.”

My hat is off to Mr. Tyson. Rather than engaging in lengthy debate over nonsense, his curt reply is very apt: it is time for believers in the scientific method to just move on, and move ahead.

From Salon:

We Americans pride ourselves on our ideals of free speech. We believe in spirited back-and-forth and the notion that we are all entitled to our opinions. We stack our media coverage of news events with “opposing views.” These ideals are deeply rooted in our cultural character. And they’re making us stupid.

Ever since it debuted earlier this month, Neil deGrasse Tyson’s blockbuster, multi-network reboot of “Cosmos” has been ruffling feathers with its crazy, brazen tactic of putting scientific facts forward as the truth. It’s infuriated religious conservatives by furthering “the Scientific Martyr Myth of Giordano Bruno” within its “glossy multi-million-dollar piece of agitprop for scientific materialism.” And this weekend, creationist astronomer and Answers in Genesis bigwig Danny Faulkner complained about “Cosmos” on “The Janet Mefferd Show” that “Creationists aren’t even on the radar screen; they wouldn’t even consider us plausible at all” and that “Consideration of creation is definitely not up for discussion,” leading Mefferd to suggest equal time for the opposing views. But on “Late Night With Seth Meyers” last week, Neil deGrasse Tyson shrugged off the naysayers, noting, “If you don’t know science in the 21st century, just move back to the cave, because that’s where we’re going to leave you as we move forward.” This is why he’s a treasure — he has proven himself a consistent and elegant beacon of how to respond to extremists and crazy talk – by acknowledging it but not wasting breath arguing it.

We can go round and round in endless circles about social and philosophical issues. We can debate all day about matters of faith and religion, if you’re up for it. But well-established scientific principles don’t lend themselves well to conversations in which I say something based on hard physical evidence and carefully analyzed data, and then you shoot back with a bunch of spurious nonsense.

Read the entire article here.

Image courtesy of La-Mar Laboratories.


eLiquid eQuals ePoison

Many smokers are weaning themselves off tobacco, leaving the perils of carcinogenic tar and ash behind. Some are kicking the smoking habit for good. Others are dashing headlong toward another risk to health — e-cigarettes with tobacco substitutes.

The most prominent new danger comes from a class of substances called eLiquids, particularly liquid nicotine. Just like the tobacco industry in its early days, eLiquid producers are poorly controlled, and their products are not regulated. A teaspoon of concentrated nicotine, even absorbed through the skin, can kill. Caveat emptor!

From NYT:

A dangerous new form of a powerful stimulant is hitting markets nationwide, for sale by the vial, the gallon and even the barrel.

The drug is nicotine, in its potent, liquid form — extracted from tobacco and tinctured with a cocktail of flavorings, colorings and assorted chemicals to feed the fast-growing electronic cigarette industry.

These “e-liquids,” the key ingredients in e-cigarettes, are powerful neurotoxins. Tiny amounts, whether ingested or absorbed through the skin, can cause vomiting and seizures and even be lethal. A teaspoon of even highly diluted e-liquid can kill a small child.

But, like e-cigarettes, e-liquids are not regulated by federal authorities. They are mixed on factory floors and in the back rooms of shops, and sold legally in stores and online in small bottles that are kept casually around the house for regular refilling of e-cigarettes.

Evidence of the potential dangers is already emerging. Toxicologists warn that e-liquids pose a significant risk to public health, particularly to children, who may be drawn to their bright colors and fragrant flavorings like cherry, chocolate and bubble gum.

“It’s not a matter of if a child will be seriously poisoned or killed,” said Lee Cantrell, director of the San Diego division of the California Poison Control System and a professor of pharmacy at the University of California, San Francisco. “It’s a matter of when.”

Reports of accidental poisonings, notably among children, are soaring. Since 2011, there appears to have been one death in the United States, a suicide by an adult who injected nicotine. But less serious cases have led to a surge in calls to poison control centers. Nationwide, the number of cases linked to e-liquids jumped to 1,351 in 2013, a 300 percent increase from 2012, and the number is on pace to double this year, according to information from the National Poison Data System. Of the cases in 2013, 365 were referred to hospitals, triple the previous year’s number.

Examples come from across the country. Last month, a 2-year-old girl in Oklahoma City drank a small bottle of a parent’s nicotine liquid, started vomiting and was rushed to an emergency room.

That case and age group is considered typical. Of the 74 e-cigarette and nicotine poisoning cases called into Minnesota poison control in 2013, 29 involved children age 2 and under. In Oklahoma, all but two of the 25 cases in the first two months of this year involved children age 4 and under.

In terms of the immediate poison risk, e-liquids are far more dangerous than tobacco, because the liquid is absorbed more quickly, even in diluted concentrations.

“This is one of the most potent naturally occurring toxins we have,” Mr. Cantrell said of nicotine. But e-liquids are now available almost everywhere. “It is sold all over the place. It is ubiquitous in society.”

The surge in poisonings reflects not only the growth of e-cigarettes but also a shift in technology. Initially, many e-cigarettes were disposable devices that looked like conventional cigarettes. Increasingly, however, they are larger, reusable gadgets that can be refilled with liquid, generally a combination of nicotine, flavorings and solvents. In Kentucky, where about 40 percent of cases involved adults, one woman was admitted to the hospital with cardiac problems after her e-cigarette broke in her bed, spilling the e-liquid, which was then absorbed through her skin.

The problems with adults, like those with children, owe to carelessness and lack of understanding of the risks. In the cases of exposure in children, “a lot of parents didn’t realize it was toxic until the kid started vomiting,” said Ashley Webb, director of the Kentucky Regional Poison Control Center at Kosair Children’s Hospital.

The increased use of liquid nicotine has, in effect, created a new kind of recreational drug category, and a controversial one. For advocates of e-cigarettes, liquid nicotine represents the fuel of a technology that might prompt people to quit smoking, and there is anecdotal evidence that is happening. But there are no long-term studies about whether e-cigarettes will be better than nicotine gum or patches at helping people quit. Nor are there studies about the long-term effects of inhaling vaporized nicotine.

 Unlike nicotine gums and patches, e-cigarettes and their ingredients are not regulated. The Food and Drug Administration has said it plans to regulate e-cigarettes but has not disclosed how it will approach the issue. Many e-cigarette companies hope there will be limited regulation.

“It’s the wild, wild west right now,” said Chip Paul, chief executive officer of Palm Beach Vapors, a company based in Tulsa, Okla., that operates 13 e-cigarette franchises nationwide and plans to open 50 more this year. “Everybody fears F.D.A. regulation, but honestly, we kind of welcome some kind of rules and regulations around this liquid.”

Mr. Paul estimated that this year in the United States there will be sales of one million to two million liters of liquid used to refill e-cigarettes, and it is widely available on the Internet. Liquid Nicotine Wholesalers, based in Peoria, Ariz., charges $110 for a liter with 10 percent nicotine concentration. The company says on its website that it also offers a 55 gallon size. Vaporworld.biz sells a gallon at 10 percent concentrations for $195.

Read the entire story here.

Image: Nicotine molecule. Courtesy of Wikipedia.


The Angry Letter, Not Sent

Most people over the age of 40 have probably written, and not sent, an angry letter.

The unsent letter may have been intended for a boss or an ex-boss. It may have been for a colleague, a vendor or a business associate. It may have been for your electrician or the plumber who failed to fix the problem. It may have been for a local restaurant that served up an experience far below your expectations; it may have been intended for Microsoft because your Windows XP laptop failed again, and this time you lost all your documents. We’ve all written an angry letter.

The angry letter has, for the most part, been replaced by the angry email — after all, you can still keep an email as a draft and not hit send. Younger generations may not be as fortunate: write an angry Facebook post or fire off a Tweet and it’s sent, shared, gone. Thus, social network users may not realize what they are truly missing by writing an angry letter, or email, and not sending it.

From NYT:

WHENEVER Abraham Lincoln felt the urge to tell someone off, he would compose what he called a “hot letter.” He’d pile all of his anger into a note, “put it aside until his emotions cooled down,” Doris Kearns Goodwin once explained on NPR, “and then write: ‘Never sent. Never signed.’ ” Which meant that Gen. George G. Meade, for one, would never hear from his commander in chief that Lincoln blamed him for letting Robert E. Lee escape after Gettysburg.

Lincoln was hardly unique. Among public figures who need to think twice about their choice of words, the unsent angry letter has a venerable tradition. Its purpose is twofold. It serves as a type of emotional catharsis, a way to let it all out without the repercussions of true engagement. And it acts as a strategic catharsis, an exercise in saying what you really think, which Mark Twain (himself a notable non-sender of correspondence) believed provided “unallowable frankness & freedom.”

Harry S. Truman once almost informed the treasurer of the United States that “I don’t think that the financial advisor of God Himself would be able to understand what the financial position of the Government of the United States is, by reading your statement.” In 1922, Winston Churchill nearly warned Prime Minister David Lloyd George that when it came to Iraq, “we are paying eight millions a year for the privilege of living on an ungrateful volcano out of which we are in no circumstances to get anything worth having.” Mark Twain all but chastised Russians for being too passive when it came to the czar’s abuses, writing, “Apparently none of them can bear to think of losing the present hell entirely, they merely want the temperature cooled down a little.”

But while it may be the unsent mail of politicians and writers that is saved for posterity, that doesn’t mean that they somehow hold a monopoly on the practice. Lovers carry on impassioned correspondence that the beloved never sees; family members vent their mutual frustrations. We rail against the imbecile who elbowed past us on the subway platform.

Personally, when I’m working on an article with an editor, I have a habit of using the “track changes” feature in Microsoft Word for writing retorts to suggested editorial changes. I then cool off and promptly delete the comments — and, usually, make the changes. (As far as I know, the uncensored me hasn’t made it into a final version.)

In some ways, little has changed in the art of the unsent letter since Lincoln thought better of excoriating Meade. We may have switched the format from paper to screen, but the process is largely the same. You feel angry. And you construct a retort — only to find yourself thinking better of taking it any further. Emotions cooled, you proceed in a more reasonable, and reasoned, fashion. It’s the opposite of the glib rejoinder that you think of just a bit too late and never quite get to say.

But it strikes me that in other, perhaps more fundamental, respects, the art of the unsent angry letter has changed beyond recognition in the world of social media. For one thing, the Internet has made the enterprise far more public. Truman, Lincoln and Churchill would file away their unsent correspondence. No one outside their inner circle would read what they had written. Now we have the option of writing what should have been our unsent words for all the world to see. There are threads on reddit and many a website devoted to those notes you’d send if only you were braver, not to mention the habit of sites like Thought Catalog of phrasing entire articles as letters that were never sent.

Want to express your frustration with your ex? Just submit a piece called “An Open Letter to the Girl I Loved and Lost,” and hope that she sees it and recognizes herself. You, of course, have taken none of the risk of sending it to her directly.

A tweet about “that person,” a post about “restaurant employees who should know better”; you put in just enough detail to make the insinuation fairly obvious, but not enough that, if caught, you couldn’t deny the whole thing. It’s public shaming with an escape hatch. Does knowing that we can expect a collective response to our indignation make it more satisfying?

Not really. Though we create a safety net, we may end up tangled all the same. We have more avenues to express immediate displeasure than ever before, and may thus find ourselves more likely to hit send or tweet when we would have done better to hit save or delete. The ease of venting drowns out the possibility of recanting, and the speed of it all prevents a deeper consideration of what exactly we should say and why, precisely, we should say it.

When Lincoln wanted to voice his displeasure, he had to find a secretary or, at the very least, a pen. That process alone was a way of exercising self-control — twice over. It allowed him not only to express his thoughts in private (so as not to express them by mistake in public), but also to determine which was which: the anger that should be voiced versus the anger that should be kept quiet.

Now we need only click a reply button to rattle off our displeasures. And in the heat of the moment, we find the line between an appropriate response and one that needs a cooling-off period blurring. We toss our reflexive anger out there, but we do it publicly, without the private buffer that once would have let us separate what needed to be said from what needed only to be felt. It’s especially true when we see similarly angry commentary coming from others. Our own fury begins to feel more socially appropriate.

We may also find ourselves feeling less satisfied. Because the angry email (or tweet or text or whatnot) takes so much less effort to compose than a pen-and-paper letter, it may in the end offer us a less cathartic experience, in just the same way that pressing the end call button on your cellphone will never be quite the same as slamming down an old-fashioned receiver.

Perhaps that’s why we see so much vitriol online, so many anonymous, bitter comments, so many imprudent tweets and messy posts. Because creating them is less cathartic, you feel the need to do it more often. When your emotions never quite cool, they keep coming out in other ways.

Read the entire article here.

Image courtesy the Guardian.


Ten Greatest Works

I would take issue with Jonathan Jones’ list of the top ten best works of art, ever. Though the choice of artists is perhaps a fair representation of la crème de la crème — Rembrandt, da Vinci, Michelangelo and Velázquez for sure.

One work that clearly does belong in the list is Guernica. Picasso summed up the truth of fascism and war in this masterpiece.

See more of Jones’ top ten here.

Image: Guernica, Pablo Picasso, 1937. Prado Museum, Madrid. Courtesy of Wikipedia.


The Inflaton and the Multiverse

Last week’s announcement that cosmologists had found signals of gravitational waves in the primordial cosmic microwave background of the Big Bang made many headlines, even on cable news. If verified by separate experiments, this will be ground-breaking news indeed — much like the discovery of the Higgs boson in 2012. Should the result stand, it may well pave the way for new physics and greater support for the multiverse theory. So, in addition to the notion that we may not be alone in the vast cosmos, we’ll now have to consider that our universe itself may not be alone.

From the New Scientist:

Wave hello to the multiverse? Ripples in the very fabric of the cosmos, unveiled this week, are allowing us to peer further back in time than anyone thought possible, showing us what was happening in the first slivers of a second after the big bang.

The discovery of these primordial waves could solidify the idea that our young universe went through a rapid growth spurt called inflation. And that theory is linked to the idea that the universe is constantly giving birth to smaller “pocket” universes within an ever-expanding multiverse.

The waves in question are called gravitational waves, and they appear in Einstein’s highly successful theory of general relativity (see “A surfer’s guide to gravitational waves”). On 17 March, scientists working with the BICEP2 telescope in Antarctica announced the first indirect detection of primordial gravitational waves. This version of the ripples was predicted to be visible in maps of the cosmic microwave background (CMB), the earliest light emitted in the universe, roughly 380,000 years after the big bang.

Repulsive gravity

The BICEP2 team had spent three years analysing CMB data, looking for a distinctive curling pattern called B-mode polarisation. These swirls indicate that the light of the CMB has been twisted, or polarised, into specific curling alignments. In two papers published online on the BICEP project website, the team said they have high confidence the B-mode pattern is there, and that they can rule out alternative explanations such as dust in our own galaxy, distortions caused by the gravity of other galaxies and errors introduced by the telescope itself. That suggests the swirls could have been left only by the very first gravitational waves being stretched out by inflation.

“If confirmed, this result would constitute the most important breakthrough in cosmology over the past 15 years. It will open a new window into the beginning of our universe and have fundamental implications for extensions of the standard model of physics,” says Avi Loeb at Harvard University. “If it is real, the signal will likely lead to a Nobel prize.”

And for some theorists, simply proving that inflation happened at all would be a sign of the multiverse.

“If inflation is there, the multiverse is there,” said Andrei Linde of Stanford University in California, who is not on the BICEP2 team and is one of the originators of inflationary theory. “Each observation that brings better credence to inflation brings us closer to establishing that the multiverse is real.” (Watch video of Linde being surprised with the news that primordial gravitational waves have been detected.)

The simplest models of inflation, which the BICEP2 results seem to support, require a particle called an inflaton to push space-time apart at high speed.

“Inflation depends on a kind of material that turns gravity on its head and causes it to be repulsive,” says Alan Guth at the Massachusetts Institute of Technology, another author of inflationary theory. Theory says the inflaton particle decays over time like a radioactive element, so for inflation to work, these hypothetical particles would need to last longer than the period of inflation itself. Afterwards, inflatons would continue to drive inflation in whatever pockets of the universe they inhabit, repeatedly blowing new universes into existence that then rapidly inflate before settling down. This “eternal inflation” produces infinite pocket universes to create a multiverse.

Quantum harmony

For now, physicists don’t know how they might observe the multiverse and confirm that it exists. “But when the idea of inflation was proposed 30 years ago, it was a figment of theoretical imagination,” says Marc Kamionkowski at Johns Hopkins University in Baltimore, Maryland. “What I’m hoping is that with these results, other theorists out there will start to think deeply about the multiverse, so that 20 years from now we can have a press conference saying we’ve found evidence of it.”

In the meantime, studying the properties of the swirls in the CMB might reveal details of what the cosmos was like just after its birth. The power and frequency of the waves seen by BICEP2 show that they were rippling through a particle soup with an energy of about 10^16 gigaelectronvolts, or 10 trillion times the peak energy expected at the Large Hadron Collider. At such high energies, physicists expect that three of the four fundamental forces in physics – the strong, weak and electromagnetic forces – would be merged into one.

The detection is also the first whiff of quantum gravity, one of the thorniest puzzles in modern physics. Right now, theories of quantum mechanics can explain the behaviour of elementary particles and those three fundamental forces, but the equations fall apart when the fourth force, gravity, is added to the mix. Seeing gravitational waves in the CMB means that gravity is probably linked to a particle called the graviton, which in turn is governed by quantum mechanics. Finding these primordial waves won’t tell us how quantum mechanics and gravity are unified, says Kamionkowski. “But it does tell us that gravity obeys quantum laws.”

“For the first time, we’re directly testing an aspect of quantum gravity,” says Frank Wilczek at MIT. “We’re seeing gravitons imprinted on the sky.”

Waiting for Planck

Given the huge potential of these results, scientists will be eagerly anticipating polarisation maps from projects such as the POLARBEAR experiment in Chile or the South Pole Telescope. The next full-sky CMB maps from the Planck space telescope are also expected to include polarisation data. Seeing a similar signal from one or more of these experiments would shore up the BICEP2 findings, make a firm case for inflation, and boost hints of the multiverse and quantum gravity.

One possible wrinkle is that previous temperature maps of the CMB suggested that the signal from primordial gravitational waves should be much weaker than what BICEP2 is seeing. Those results set theorists bickering about whether inflation really happened and whether it could create a multiverse. Several physicists suggested that we scrap the idea entirely for a new model of cosmic birth.

Taken alone, the BICEP2 results give a strong-enough signal to clinch inflation and put the multiverse back in the game. But the tension with previous maps is worrying, says Paul Steinhardt at Princeton University, who helped to develop the original theory of inflation but has since grown sceptical of it.

“If you look at the best-fit models with the new data added, they’re bizarre,” Steinhardt says. “If it remains like that, it requires adding extra fields, extra parameters, and you get really rather ugly-looking models.”

Forthcoming data from Planck should help resolve the issue, and we may not have long to wait. Olivier Doré at the California Institute of Technology is a member of the Planck collaboration. He says that the BICEP2 results are strong and that his group should soon be adding their data to the inflation debate: “Planck in particular will have something to say about it as soon as we publish our polarisation result in October 2014.”

Read the entire article here.

Image: Multiverse illustration. Courtesy of National Geographic.

Send to Kindle

Father of Distributed Computing

Leslie_Lamport

Distributed computing is a foundational element of most modern-day computing. It paved the way for processing to be shared across multiple computers and, nowadays, within the cloud. Most technology companies, including IBM, Google, Amazon, and Facebook, use distributed computing to provide highly scalable and reliable computing power for their systems and services. Yet, Bill Gates did not invent distributed computing, nor did Steve Jobs. It was pioneered in the mid-1970s by an unsung hero of computer science, Leslie Lamport. Now aged 73, Lamport has been recognized with this year’s Turing Award.

From Technology Review:

This year’s winner of the Turing Award—often referred to as the Nobel Prize of computing—was announced today as Leslie Lamport, a computer scientist whose research made possible the development of the large, networked computer systems that power, among other things, today’s cloud and Web services. The Association for Computing Machinery grants the award annually, with an associated prize of $250,000.

Lamport, now 73 and a researcher with Microsoft, was recognized for a series of major breakthroughs that began in the 1970s. He devised algorithms that make it possible for software to function reliably even if it is running on a collection of independent computers or components that suffer from delays in communication or sometimes fail altogether.

That work, within a field now known as distributed computing, remains crucial to the sprawling data centers used by Internet giants, and is also involved in coordinating the multiple cores of modern processors in computers and mobile devices. Lamport talked to MIT Technology Review’s Tom Simonite about why his ideas have lasted.

Why is distributed computing important?

Distribution is not something that you just do, saying “Let’s distribute things.” The question is “How do you get it to behave coherently?”

My Byzantine Generals work [on making software fault-tolerant, in 1980] came about because I went to SRI and had a contract to build a reliable prototype computer for flying airplanes for NASA. That used multiple computers that could fail, and so there you have a distributed system. Today there are computers in Palo Alto and Beijing and other places, and we want to use them together, so we build distributed systems. Computers with multiple processors inside are also distributed systems.

We no longer use computers like those you worked with in the 1970s and ’80s. Why have your distributed-computing algorithms survived?

Some areas have had enormous changes, but the aspect of things I was looking at, the fundamental notions of synchronization, are the same.

Running multiple processes on a single computer is very different from a set of different computers talking over a relatively slow network, for example. [But] when you’re trying to reason mathematically about their correctness, there’s no fundamental difference between the two systems.
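Those “fundamental notions of synchronization” are exactly what Lamport formalized in his 1978 logical-clocks paper, “Time, Clocks, and the Ordering of Events in a Distributed System.” A minimal sketch of the idea (the process names and event sequence below are illustrative, not from the interview):

```python
# Lamport logical clocks: counters that capture the "happens-before"
# ordering of events, without any shared wall-clock time.

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0  # logical clock: counts events, not seconds

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock  # the timestamp travels with the message

    def receive(self, msg_timestamp):
        # On receipt, jump past the sender's timestamp so that
        # "send happens-before receive" is reflected in the clocks.
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process("P"), Process("Q")
p.local_event()   # P's clock -> 1
t = p.send()      # P's clock -> 2, message stamped 2
q.receive(t)      # Q's clock jumps to max(0, 2) + 1 = 3
```

The receive rule is the whole trick: whether the two processes are cores on one chip or machines in Palo Alto and Beijing, the resulting timestamps respect causality.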

I [developed] Paxos [in 1989] because people at DEC [Digital Equipment Corporation] were building a distributed file system. The Paxos algorithm is very widely used now. Look inside of Bing or Google or Amazon—where they’ve got rooms full of computers, they’ll probably be running an instance of Paxos.
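The consensus protocol Lamport mentions can be sketched in a few lines. Below is a toy single-decree Paxos acceptor showing only the two-phase promise/accept rules; real deployments add leaders, replicated logs, and failure handling, and all names here are illustrative:

```python
# A toy Paxos acceptor: the promise (phase 1) and accept (phase 2) rules.

class Acceptor:
    def __init__(self):
        self.promised = 0      # highest proposal number promised so far
        self.accepted = None   # (number, value) of last accepted proposal

    def prepare(self, n):
        # Phase 1: promise to ignore proposals numbered below n,
        # and report any value already accepted.
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted)
        return ("reject", None)

    def accept(self, n, value):
        # Phase 2: accept unless a higher-numbered promise was made.
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return "accepted"
        return "rejected"

a = Acceptor()
a.prepare(1)       # -> ("promise", None)
a.accept(1, "x")   # -> "accepted"
a.prepare(2)       # -> ("promise", (1, "x")): a later proposer learns "x"
```

Because a later proposer is told about any already-accepted value, a quorum of such acceptors can only ever converge on one decision, even when proposers or messages fail.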

More recently, you have worked on ways to improve how software is built. What’s wrong with how it’s done now?

People seem to equate programming with coding, and that’s a problem. Before you code, you should understand what you’re doing. If you don’t write down what you’re doing, you don’t know whether you understand it, and you probably don’t if the first thing you write down is code. If you’re trying to build a bridge or house without a blueprint—what we call a specification—it’s not going to be very pretty or reliable. That’s how most code is written. Every time you’ve cursed your computer, you’re cursing someone who wrote a program without thinking about it in advance.

There’s something about the culture of software that has impeded the use of specification. We have a wonderful way of describing things precisely that’s been developed over the last couple of millennia, called mathematics. I think that’s what we should be using as a way of thinking about what we build.
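Lamport’s “mathematics as specification” point can be made concrete: state the property you want as an invariant, then check it against every reachable state of a model of the system. The toy lock model below is a hypothetical illustration of that style (Lamport’s own TLA+ tooling automates exactly this kind of exhaustive check):

```python
# Specification before code: the spec is a mathematical property
# (mutual exclusion), checked over all reachable states of a tiny model.

# Model: two processes, each "idle", "waiting", or "critical";
# the lock is None or the index of the process holding it.
def next_states(state):
    procs, lock = state
    for i in range(2):
        p = procs[i]
        if p == "idle":
            yield (procs[:i] + ("waiting",) + procs[i+1:], lock)
        elif p == "waiting" and lock is None:
            yield (procs[:i] + ("critical",) + procs[i+1:], i)
        elif p == "critical":
            yield (procs[:i] + ("idle",) + procs[i+1:], None)

def mutual_exclusion(state):
    procs, _ = state
    return procs.count("critical") <= 1  # the specification

# Explore every reachable state, checking the invariant in each.
seen, frontier = set(), [(("idle", "idle"), None)]
while frontier:
    state = frontier.pop()
    if state in seen:
        continue
    seen.add(state)
    assert mutual_exclusion(state), f"spec violated in {state}"
    frontier.extend(next_states(state))
```

If the lock-acquisition rule were buggy, the assertion would pinpoint the violating state before a line of production code existed, which is precisely the blueprint-before-bridge discipline Lamport is arguing for.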

Read the entire story here.

Image: Leslie Lamport, 2005. Courtesy of Wikipedia.

Send to Kindle

Meet the Indestructible Life-form

water-bear

Meet the water bear, or tardigrade. It may not be pretty, but it’s as close to indestructible as any life-form may ever come.

Cool it to a mere 1 degree above absolute zero or -458 F and it lives on. Heat it to 300 F and it lives on. Throw it out into the vacuum of space and it lives on. Irradiate it with hundreds of times the radiation that would kill a human and it lives on. Dehydrate it to 3 percent of its normal water content and it lives on.

From Wired:

In 1933, the owner of a New York City speakeasy and three cronies embarked on a rather unoriginal scheme to make a quick couple grand: Take out three life insurance policies on the bar’s deepest alcoholic, Mike Malloy, then kill him.

First, they pumped him full of ungodly amounts of liquor. When that didn’t work, they poisoned the hooch. Mike didn’t mind. Then came the sandwiches of rotten sardines and broken glass and metal shavings. Mike reportedly loved them. Next they dropped him in the snow and poured cold water on him. It didn’t faze Mike. Then they ran him over with a cab, which only broke his arm. The conspirators finally succeeded when they boozed Mike up, ran a tube down his throat, and pumped him full of carbon monoxide.

They don’t come much tougher than Mike the Durable, as he is remembered. Except in the microscopic world beneath our feet, where there lives what is perhaps the toughest creature on Earth: the tardigrade. Also known as the water bear (because it looks like an adorable little many-legged bear), this exceedingly tiny critter has an incredible resistance to just about everything. Go ahead and boil it, freeze it, irradiate it, and toss it into the vacuum of space — it won’t die. If it were big enough to eat a glass sandwich, it probably could survive that too.

The water bear’s trick is something called cryptobiosis, in which it brings its metabolic processes nearly to a halt. In this state it can dehydrate to 3 percent of its normal water content in what is called desiccation, becoming a husk of its former self. But just add water and the tardigrade roars back to life like Mike the Durable emerging from a bender and continues trudging along, puncturing algae and other organisms with a mouthpart called a stylet and sucking out the nutrients.

“They are probably the most extreme survivors that we know of among animals,” said biologist Bob Goldstein of the University of North Carolina at Chapel Hill. “People talk about cockroaches surviving anything. I think long after the cockroaches would be killed we’d still have dried water bears that could be rehydrated and be alive.”

“Is It Cold in Here?” Asked a Water Bear NEVER

This hibernation of sorts isn’t happening for a single season, like a true bear (tardigrades are invertebrates). As far as scientists can tell, water bears can be dried out for at least a decade and still revivify, only to find their clothes are suddenly out of style.

Mike the Durable did just fine in the freezing cold, but the temperatures the water bear endures in cryptobiosis defy belief. It can survive in a lab environment of just 1 degree kelvin. That’s an astonishing -458 degrees Fahrenheit, where matter goes bizarro, with gases becoming liquids and liquids becoming solids.

At this temperature the movements of the normally frenzied atoms inside the water bear come almost to a standstill, yet the creature endures. And that’s all the more incredible when you consider that the water bear indeed has a brain, a relatively simple one, sure, but a brain that somehow emerges from this unscathed.

Water bears also can tolerate pressures six times that of the deepest oceans. And a few of them once survived an experiment that subjected them to 10 days exposed to the vacuum of space. (While we’re on the topic, humans can survive for a couple minutes, max. One poor fellow at NASA accidentally depressurized his suit in a vacuum chamber in 1965 and lost consciousness after 15 seconds. When he woke up, he said his last memory was feeling the water on his tongue boiling, which I’m guessing felt a bit like Pop Rocks, only somehow even worse for your body.)

Anyway, tardigrades. They can take hundreds of times the radiation that would kill a human. Water bears don’t mind hot water either–like, 300 degrees Fahrenheit hot. So the question is: why? Why evolve to survive the kind of cold that only scientists can create in a lab, and pressures that have never even existed on our planet?

Water bears don’t even necessarily inhabit extreme habitats like, say, boiling springs where certain bacteria proliferate. Therefore the term “extremophile” that has been applied to tardigrades over the years isn’t entirely accurate. Just because they’re capable of surviving these harsh environments doesn’t mean they seek them out.

They actually prefer regular old dirt and sand and moss all over the world. I mean, would you rather stay in a Motel 6 in a lake of boiling acidic water or lounge around on a beach resort and drink algae cocktails? (Why this isn’t a BuzzFeed quiz yet is beyond me. It’s gold. There’s untold billions of water bears on Earth. Page views, BuzzFeed. What’s the sound of a billion water bears clicking? Boom, another quiz.)

But that isn’t to say there aren’t troubles in the tardigrade version of paradise. “If you’re living in dirt,” said Goldstein, “there’s a danger of desiccation all the time.” If, say, the sun starts drying out the surface, one option is to move farther down into the soil. But “if you go too far down, there’s not going to be much food. So they really probably have to live in a fringe where they need to get food, but there’s always danger of drying out.”

A Tiny Superhero That Could One Day Save Your Life

And so it could be that the water bear’s incredible feats of survival may simply stem from a tough life in the dirt. But there’s also the question of how it does this, and it’s a perplexing one at that. Goldstein’s lab is researching this, and he reckons that water bears don’t just have one simple trick, but a range of strategies to be able to endure drying out and eventually reanimating.

“There’s one that we know of, which is some animals that survive drying make a sugar called trehalose,” he said. “And trehalose sort of replaces water as they dry down, so it will make glassy surfaces where normally water would be sitting. That probably helps prevent a lot of the damage that normally occurs when you dry something down or when you rehydrate it.” Not all of the 1,000 or so species of water bears produce this sugar though, he says, so there must be some other trick going on.

Ironically enough, these incredibly hardy creatures are very difficult to grow in the lab, but Goldstein has had great success where many others have failed. And, like so many great things in this world, it all began in a shed in England, where a regular old chap mastered their breeding to sell them to local schools for scientific experiments. He was so good at it, in fact, that he never needed to venture out to recollect specimens. And their descendants now crawl around Goldstein’s lab, totally unaware of how incredibly lucky they are to not be tortured by school children day in and day out.

“Some organisms just can’t be raised in labs,” Goldstein said. “You bring them in and try to mimic what’s going on outside and they just don’t grow up. So we were lucky, actually, people were having a hard time growing water bears in labs continuously. And this guy in England had figured it out.”

Thanks to this breakthrough, Goldstein and other scientists are exploring the possibility of utilizing the water bear as science’s next fruit fly, that ubiquitous test subject that has yielded so many momentous discoveries. The water bear’s small size means you can pack a ton of them into a lab, plus they reproduce quickly and have a relatively compact genome to work with. Also, they’re way cuter than fruit flies and they don’t fall into your sodas and stuff.

Read the entire article here.

Image: A scanning electron micrograph of a water bear.  Courtesy: Bob Goldstein and Vicky Madden / Wired.

Send to Kindle

Building The 1,000 Mile Per Hour Car

BloodhoundSSC_front_dynamic_medium_Feb2014

First, start with a jet engine. Then, perhaps, add a second for auxiliary power. And, while you’re at it, throw in a rocket engine as well for some extra thrust. Add aluminum wheels with no tires. Hire a fighter pilot to “drive” it. Oh, and name it Bloodhound SSC (SuperSonic Car). You’re on your way! Visit the official Bloodhound website here.

From ars technica:

Human beings achieved many ‘firsts’ in the 20th century. We climbed the planet’s highest mountains, dived its deepest undersea trench, flew over it faster than the speed of sound, and even escaped it altogether in order to visit the moon. Beyond visiting Mars, it may feel like there are no more milestones left to reach. Yet people are still trying to push the envelope, even if they have to travel a little farther to get there.

Richard Noble is one such person. He’s spearheading a project called Bloodhound SSC that will visit uncharted territory on its way to a new land speed record on the far side of 1,000mph. The idea of a car capable of 1,000mph might sound ludicrous at first blush, but consider Noble’s credentials. The British businessman is responsible for previous land speed records in 1983 and 1997, the first of which came with him behind the wheel.

Bloodhound’s ancestors

Noble had been captivated by speed as a child after watching John Cobb attempt to break the water speed record on Loch Ness in Scotland. Inspired by the achievements of fellow countrymen Campbell and Cobb, he wanted to reclaim the record for Britain. After building—and then crashing—one of the UK’s first jet-powered cars (Thrust 1), he acquired a surplus engine from an English Electric Lightning. The Lightning was Britain’s late-1950s interceptor, designed to shoot down Soviet bombers over the North Sea. It was built around two powerful Rolls Royce Avon engines that gave it astonishing performance for the time. Just one of these engines was sufficient to convince John Ackroyd to accept Noble’s job offer as Thrust 2’s designer, and work began on the car in 1978, albeit on a shoestring.

Thrust 2, now with a more powerful variant of the Avon engine, went to Bonneville at the end of September 1981. Until then, Noble had only driven the car on runways in the UK, never faster than 260mph. For two weeks the team built up speed at Bonneville before the rain arrived, flooding the lake and ending any record attempts for the year. Thrust 2 had peaked at 500mph, but Gabelich’s record would stand for a while longer. Thrust 2 returned the following September to again find Bonneville’s flats under several inches of water. Once it was clear that Bonneville was no good for anything other than hovercraft, the search was on for a new location.

Noble and Thrust 2 found themselves in the Black Rock desert in Nevada, now best known as the site of the Burning Man festival. Helpfully, the surface of the alkaline playa was much better suited to Thrust 2’s solid metal wheels. (At Bonneville these had cut ruts into the salt, requiring a new track for each run.) 1982 wasn’t to be Thrust 2’s year either; the car averaged 590mph, teaching Noble and his team a lot before the weather came and stopped things. Finally, in 1983, everything went according to plan, and on October 4 Thrust 2 reached a peak speed of 650mph, setting a new world land speed record of 633.5mph.

It’s easy to see how the mindset required to successfully break a land speed record wouldn’t be satisfied just doing it once; it seems everyone comes back for another bite at the cherry. Noble was no exception. He knew that Breedlove was planning on taking back the record and that the American had a pair of General Electric J-79 engines with which to do so. 700mph was the next headline speed, with the speed of sound not much further away. Eager not to lose the record, Noble planned to defend it with Thrust 2’s successor, Thrust SSC (the initials stand for SuperSonic Car).

Thrust 2’s success came despite the lack of any significant aerodynamic design or refinement. Going supersonic meant that aerodynamics couldn’t be ignored any longer though. In 1992, Noble met the man who would design his new car, a retired aerodynamicist called Ron Ayers. Ayers would learn much on Thrust SSC—and another land speed car, 2006’s diesel-powered JCB Dieselmax—that would inform his design for Bloodhound SSC. At first though, he was reluctant to get involved. “The first thing I told him was he’d kill himself,” Ayers told Ars. Yet curiosity got the better of Ayers, and he began to see solutions for the various problems that at first made this look like an impossible challenge. A second chance meeting between Noble and Ayers followed, and before long Ayers was Thrust SSC’s concept designer and aerodynamicist.

Now, Ayers had the problem of working out what shape a supersonic car ought to take. That came from computational fluid dynamics (CFD). No one had attempted to use computer modeling to design a land speed record car until then, and even now no wind tunnel capable of supersonic speeds also features a rolling road, which is necessary to accurately account for the effect of wheels at those speeds. The University of Swansea in Wales created a CFD simulation of a supersonic vehicle, but “the problem was, at that time neither I nor anyone else trusted [CFD],” Ayers explained. His skepticism vanished following tests with scale models fired down a rocket sled track belonging to the UK Defense establishment (located at Pendine Sands, the site of many 1920s land speed records). The CFD data matched that from the rocket sled track to within a few percent, something that astonished both Ayers and the other aerodynamicists with whom he shared his findings.

Thrust SSC would use a pair of Rolls Royce Spey engines, taken from a British F-4 Phantom, mounted quite far forward on either side of the car, with the driver’s cockpit in-between. Together with a long, pointed nose and a T-shaped tail fin and stabilizer, Thrust SSC looked much more like a jet fighter with no wings than a car. Fittingly, the car got a driver to suit its looks. Land speed records aren’t cheap, something Noble (and probably every other record chaser) knew from bitter experience. He managed to scrape together enough funding to make three record attempts with Thrust 2 even though his attention was split between fund-raising and learning how to operate and control the car. For the sequel he wisely decided to leave the driving to someone else, concentrating his efforts on leading the project and raising the money. Thirty people applied for the job, a mix of drag racers and fighter pilots. The successful candidate was one of the latter, RAF Wing Commander Andy Green. Green had plenty of supersonic experience in RAF Phantoms and Tornados; he also had a daredevil streak, evident in his choice of hobbies.

By 1997 the car was ready for Black Rock Desert. So, too, were Breedlove and his Spirit of America, setting the stage for a transatlantic, transonic shoot-out. Spirit of America narrowly escaped disaster the previous year, turning sharply right at ~675mph and rolling onto its side in the process. 1997 was to be no kinder to the Americans. On October 15, a sonic boom announced to the world that Green (backed by Noble) was now the fastest man on earth. Thrust SSC set a two-way average of 763mph, or Mach 1.015, exactly 50 years and a day after the first Mach 1 flight.

Noble, Green, and Ayers set another land speed record in 2006, albeit with a much slower car. JCB Dieselmax set a new world record for a diesel-powered vehicle, reaching just over 350mph. Even though Bloodhound SSC will go much faster, Ayers told me they gathered a lot of useful knowledge then that is being applied to the current project.

Bloodhound SSC

A number of factors appear to be necessary for a land speed record attempt: a car with a sufficiently powerful engine, a suitable location, and someone motivated enough to raise the money to make it happen. A little bit of competition helps with the last of these. Breedlove, Green, and Arfons spurred each other on in the 1960s, and it was the threat of Breedlove going supersonic that sparked Thrust SSC. As you might expect, competition was also the original impetus behind Bloodhound SSC. Noble learned that Steve Fossett was planning a land speed record attempt. The ballooning adventurer bought Spirit of America from Breedlove in 2006, and he set his sights on 800mph. Noble needed a new car that incorporated the lessons learned from Thrusts 2 and SSC.

What makes the car go?

The key to any land speed record car is its engine, and Bloodhound SSC is no exception. Rather than depend on decades-old surplus, Noble and Green approached the UK government to see if they could help. “We thought we’d earned the right to do this properly with the right technology,” Noble told the UK’s Director magazine. The Ministry of Defense agreed on the condition that Bloodhound SSC be exciting enough a project to rekindle the interest in science and technology that Apollo or Concorde created in the 1960s and 1970s. In return for inspiring a new generation of engineers, Bloodhound SSC could have an EJ200 jet engine, a type more often found in the Eurofighter Typhoon.

Thrust SSC needed the combined thrust of two Spey jet engines to break the sound barrier. To go 30 percent faster, Bloodhound SSC will need more power than a single EJ200 can provide—at full reheat just over 20,000lbf (90 kN), roughly as much as one of the two engines on its predecessor (albeit at half the weight). The Bloodhound team decided upon rocket power for the remaining thrust. We asked Ayers why they opted for this approach, and he explained that it had several advantages over a pair of jets. For one thing, it needs only one air intake, meaning a lower-drag design than Thrust SSC’s twin engines. Reaching the kind of performance target Bloodhound SSC is aiming at with a pair of jets would also require designing variable-geometry air intakes. While this sort of engineering solution is used by fighter aircraft, it would add unnecessary cost, complexity, and weight to Bloodhound SSC. What’s more, a rocket can provide much more thrust for its size and weight than a jet. Finally, using rocket power means being able to accelerate much more rapidly, which should help limit the length of track needed.
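As a quick sanity check on the quoted thrust figures, the pounds-force to kilonewtons conversion works out as follows (the conversion factor is standard; the thrust number is the article’s):

```python
# Convert the quoted EJ200 full-reheat thrust from pounds-force to kN.
LBF_TO_N = 4.44822           # newtons per pound-force
thrust_lbf = 20_000          # "just over 20,000 lbf" at full reheat
thrust_kn = thrust_lbf * LBF_TO_N / 1000
print(round(thrust_kn, 1))   # -> 89.0, matching the quoted ~90 kN
```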

Read the entire article here.

Image: Bloodhound SCC. Courtesy of Bloodhound.

 

Send to Kindle

Research Without a Research Lab

Many technology companies have separate research teams, or even divisions, that play with new product ideas and invent new gizmos. The conventional wisdom suggests that businesses like Microsoft or IBM need to keep their innovative, far-sighted people away from those tasked with keeping yesterday’s products functioning and today’s customers happy. Google and a handful of other innovators, on the other hand, follow a different mantra: they invent in hallways and cubes — everywhere.

From Technology Review:

Research vice presidents at some computing giants, such as Microsoft and IBM, rule over divisions housed in dedicated facilities carefully insulated from the rat race of the main businesses. In contrast, Google’s research boss, Alfred Spector, has a small core team and no department or building to call his own. He spends most of his time roaming the open-plan, novelty-strewn offices of Google’s product divisions, where the vast majority of its fundamental research takes place.

Groups working on Android or data centers are tasked with pushing the boundaries of computer science while simultaneously running Google’s day-to-day business operations.

“There doesn’t need to be a protective shell around our researchers where they think great thoughts,” says Spector. “It’s a collaborative activity across the organization; talent is distributed everywhere.” He says this approach allows Google to make fundamental advances quickly—since its researchers are close to piles of data and opportunities to experiment—and then rapidly turn those advances into products.

In 2012, for example, Google’s mobile products saw a 25 percent drop in speech recognition errors after the company pioneered the use of very large neural networks—aka deep learning (see “Google Puts Its Virtual Brain Technology to Work”).

Alan MacCormack, an adjunct professor at Harvard Business School who studies innovation and product development in the technology sector, says Google’s approach to research helps it deal with a conundrum facing many large companies. “Many firms are trying to balance a corporate strategy that defines who they are in five years with trying to discover new stuff that is unpredictable—this model has allowed them to do both.” Embedding people working on fundamental research into the core business also makes it possible for Google to encourage creative contributions from workers who would typically be far removed from any kind of research and development, adds MacCormack.

Spector even claims that his company’s secretive Google X division, home of Google Glass and the company’s self-driving car project (see “Glass, Darkly” and “Google’s Robot Cars Are Safer Drivers Than You or I”), is a product development shop rather than a research lab, saying that every project there is focused on a marketable end result. “They have pursued an approach like the rest of Google, a mixture of engineering and research [and] putting these things together into prototypes and products,” he says.

Cynthia Wagner Weick, a management professor at University of the Pacific, thinks that Google’s approach stems from its cofounders’ determination to avoid the usual corporate approach of keeping fundamental research isolated. “They are interested in solving major problems, and not just in the IT and communications space,” she says. Weick recently published a paper singling out Google, Edwards Lifescience, and Elon Musk’s companies, Tesla Motors and Space X, as examples of how tech companies can meet short-term needs while also thinking about far-off ideas.

Google can also draw on academia to boost its fundamental research. It spends millions each year on more than 100 research grants to universities and a few dozen PhD fellowships. At any given time it also hosts around 30 academics who “embed” at the company for up to 18 months. But it has lured many leading computing thinkers away from academia in recent years, particularly in artificial intelligence (see “Is Google Cornering the Market on Deep Learning?”). Those that make the switch get to keep publishing academic research while also gaining access to resources, tools and data unavailable inside universities.

Spector argues that it’s increasingly difficult for academic thinkers to independently advance a field like computer science without the involvement of corporations. Access to piles of data and working systems like those of Google is now a requirement to develop and test ideas that can move the discipline forward, he says. “Google’s played a larger role than almost any company in bringing that empiricism into the mainstream of the field,” he says. “Because of machine learning and operation at scale you can do things that are vastly different. You don’t want to separate researchers from data.”

It’s hard to say how long Google will be able to count on luring leading researchers, given the flush times for competing Silicon Valley startups. “We’re back to a time when there are a lot of startups out there exploring new ground,” says MacCormack, and if competitors can amass more interesting data, they may be able to leach away Google’s research mojo.

Read the entire story here.

Send to Kindle

Gravity Makes Some Waves

Gravity, the movie, made some “waves” at the recent Academy Awards ceremony in Hollywood. But the real star here is gravity itself, the force that seems to hold all macroscopic things in the cosmos together. And the waves in this case are real gravitational waves. A long-running experiment based at the South Pole has discerned a signal in the Cosmic Microwave Background that points to the existence of gravitational waves. This is a discovery of great significance, if upheld, and confirms the Inflationary Theory of our universe’s exponential expansion just after the Big Bang. The theorists who first proposed this remarkable hypothesis — Alan Guth (1979) and Andrei Linde (1981) — are probably popping some champagne right now.

From the New Statesman:

The announcement yesterday that scientists working on the BICEP2 experiment in Antarctica had detected evidence of “inflation” may not appear incredible, but it is. It appears to confirm longstanding hypotheses about the Big Bang and the earliest moments of our universe, and could open a new path to resolving some of physics’ most difficult mysteries.

Here’s the explainer. BICEP2, near the South Pole (where the sky is clearest of pollution), was scanning the visible universe for cosmic background radiation – that is, the fuzzy warmth left over from the Big Bang. It’s the oldest light in the universe, and as such our maps of it are our oldest glimpses of the young universe. Here’s a map created with data collected by the ESA’s Planck Surveyor probe last year:

ESA-Planck-Surveyor-image

What should be clear from this is that the universe is remarkably flat and regular – that is, there aren’t massive clumps of radiation in some areas and gaps in others. This doesn’t quite make intuitive sense.

If the Big Bang really was a chaotic event, with energy and matter being created and destroyed within tiny fractions of nanoseconds, then we would expect the net result to be a universe that’s similarly chaotic in its structure. Something happened to smooth everything out, and that something is inflation.

Inflation assumes that something must have happened to the rate of expansion of the universe, somewhere between 10⁻³⁵ and 10⁻³² seconds after the Big Bang, to make it massively increase. It would mean that the size of the “lumps” would outpace the rate at which they appear in the cosmos, smoothing them out.

For an analogy, imagine if the Moon was suddenly stretched out to the size of the Sun. You’d see – just before it collapsed in on itself – that its rifts and craters had become, relative to its new size, barely perceptible. Just like a sheet being pulled tightly on a bed, a chaotic structure becomes more uniform.

Inflation, first theorised by Alan Guth in 1979 and refined by Andrei Linde in 1981, became the best hypothesis to explain what we were observing in the universe. It also seemed to offer a way to better understand how dark energy drove the expansion of the Big Bang, and even possibly lead the way towards unifying quantum mechanics with general relativity. That is, if it was correct. And there have been plenty of theories which tied up some loose ends only to come apart with further observation.

The key evidence needed to verify inflation would be in the form of gravitational waves – that is, ripples in spacetime. Such waves were a part of Einstein’s theory of general relativity, and in the 90s scientists observed some for the first time, but until now there’s never been any evidence of them from inside the cosmic background radiation.

BICEP2, though, has found that evidence, giving scientists a crucial means of falsifying other theories about the early universe and potentially opening up entirely new areas of investigation. This is why it’s being compared with the discovery of the Higgs Boson last year: just as that particle was fundamental to our understanding of particle physics, so too is inflation to our understanding of the wider universe.

Read the entire article here.

Video: Physicist Chao-Lin Kuo delivers news of results from his gravitational wave experiment. Professor Andrei Linde reacts to the discovery, March 17, 2014. Courtesy of Stanford University.

Send to Kindle

Big Data Knows What You Do and When

Data scientists are getting to know more about you and your fellow urban dwellers as you move around your neighborhood and your city. As smartphones and cell towers become more ubiquitous and data collection and analysis gathers pace, researchers (and advertisers) will come to know your daily habits and schedule rather intimately. So, questions from a significant other along the lines of, “and, where were you at 11:15 last night?” may soon be consigned to history.

From Technology Review:

Mobile phones have generated enormous insight into the human condition thanks largely to the study of the data they produce. Mobile phone companies record the time of each call, the caller and receiver ids, as well as the locations of the cell towers involved, among other things.

The combined data from millions of people produces some fascinating new insights into the nature of our society.

Anthropologists have crunched it to reveal human reproductive strategies, a universal law of commuting, and even the distribution of wealth in Africa.

Today, computer scientists have gone one step further by using mobile phone data to map the structure of cities and how people use them throughout the day. “These results point towards the possibility of a new, quantitative classification of cities using high resolution spatio-temporal data,” say Thomas Louail at the Institut de Physique Théorique in Paris and a few pals.

They say their work is part of a new science of cities that aims to objectively measure and understand the nature of large population centers.

These guys begin with a database of mobile phone calls made by people in the 31 Spanish cities that have populations larger than 200,000. The data consists of the number of unique individuals using a given cell tower (whether making a call or not) for each hour of the day over almost two months.

Given the area that each tower covers, Louail and co work out the density of individuals in each location and how it varies throughout the day. And using this pattern, they search for “hotspots” in the cities where the density of individuals passes some specially chosen threshold at certain times of the day.
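
The hotspot-finding step described above can be sketched in a few lines. This is an illustrative toy, not the researchers’ actual method: the tower names, the data layout, and the threshold (here, a multiple of the city-wide mean density for each hour) are all assumptions standing in for the paper’s “specially chosen threshold”.

```python
from statistics import mean

def hotspots(counts_by_hour, area_km2, factor=2.0):
    """For each hour, flag towers whose user density exceeds `factor`
    times the city-wide mean density for that hour (a stand-in for the
    paper's own threshold criterion)."""
    hot = {}
    for hour in range(24):
        dens = {t: counts_by_hour[t][hour] / area_km2[t] for t in counts_by_hour}
        cutoff = factor * mean(dens.values())
        hot[hour] = {t for t, d in dens.items() if d > cutoff}
    return hot

# Hypothetical data: a dense centre and two quiet suburbs, per-hour counts
counts = {"centre": [100] * 24, "suburb_a": [10] * 24, "suburb_b": [10] * 24}
areas = {"centre": 1.0, "suburb_a": 1.0, "suburb_b": 1.0}
print(hotspots(counts, areas)[12])  # → {'centre'}
```

With real data the hotspot sets would shift hour by hour, tracing the daily “breathing” pattern the study describes.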

The results reveal some fascinating patterns in city structure. For a start, every city undergoes a kind of respiration in which people converge into the center and then withdraw on a daily basis, almost like breathing. And this happens in all cities. This “suggests the existence of a single ‘urban rhythm’ common to all cities,” say Louail and co.

During the week, the number of phone users peaks at about midday and then again at about 6 p.m. During the weekend the numbers peak a little later: at 1 p.m. and 8 p.m. Interestingly, the second peak starts about an hour later in western cities, such as Sevilla and Cordoba.

The data also reveals that small cities tend to have a single center that becomes busy during the day, such as the cities of Salamanca and Vitoria.

But it also shows that the number of hotspots increases with city size; so-called polycentric cities include Spain’s largest, such as Madrid, Barcelona, and Bilbao.

That could turn out to be useful for automatically classifying cities.

Read the entire article here.

Send to Kindle

Time Traveling Camels

camels_at_giza

Camels have no place in the Middle East of biblical times. Forensic scientists, biologists, archeologists, geneticists and paleontologists all seem to agree that camels could not have been present in the early Jewish stories of Genesis and the Old Testament — camels trotted into the land many hundreds of years later.

From the NYT:

There are too many camels in the Bible, out of time and out of place.

Camels probably had little or no role in the lives of such early Jewish patriarchs as Abraham, Jacob and Joseph, who lived in the first half of the second millennium B.C., and yet stories about them mention these domesticated pack animals more than 20 times. Genesis 24, for example, tells of Abraham’s servant going by camel on a mission to find a wife for Isaac.

These anachronisms are telling evidence that the Bible was written or edited long after the events it narrates and is not always reliable as verifiable history. These camel stories “do not encapsulate memories from the second millennium,” said Noam Mizrahi, an Israeli biblical scholar, “but should be viewed as back-projections from a much later period.”

Dr. Mizrahi likened the practice to a historical account of medieval events that veers off to a description of “how people in the Middle Ages used semitrailers in order to transport goods from one European kingdom to another.”

For two archaeologists at Tel Aviv University, the anachronisms were motivation to dig for camel bones at an ancient copper smelting camp in the Aravah Valley in Israel and in Wadi Finan in Jordan. They sought evidence of when domesticated camels were first introduced into the land of Israel and the surrounding region.

The archaeologists, Erez Ben-Yosef and Lidar Sapir-Hen, used radiocarbon dating to pinpoint the earliest known domesticated camels in Israel to the last third of the 10th century B.C. — centuries after the patriarchs lived and decades after the kingdom of David, according to the Bible. Some bones in deeper sediments, they said, probably belonged to wild camels that people hunted for their meat. Dr. Sapir-Hen could identify a domesticated animal by signs in leg bones that it had carried heavy loads.
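
The radiocarbon dating behind this result rests on simple exponential decay. The sketch below shows the standard formula for a conventional (uncalibrated) radiocarbon age; it is an illustration of the principle, not the team’s actual calibrated analysis, which maps such ages onto calendar years using calibration curves.

```python
import math

LIBBY_MEAN_LIFE = 8033  # years: the conventional 5,568-year Libby half-life / ln 2

def c14_age(ratio):
    """Conventional radiocarbon age (years) from the surviving C-14
    fraction relative to a modern reference sample."""
    return -LIBBY_MEAN_LIFE * math.log(ratio)

# Half the C-14 gone corresponds to one Libby half-life
print(round(c14_age(0.5)))  # → 5568
```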

The findings were published recently in the journal Tel Aviv and in a news release from Tel Aviv University. The archaeologists said that the origin of the domesticated camel was probably in the Arabian Peninsula, which borders the Aravah Valley. Egyptians exploited the copper resources there and probably had a hand in introducing the camels. Earlier, people in the region relied on mules and donkeys as their beasts of burden.

“The introduction of the camel to our region was a very important economic and social development,” Dr. Ben-Yosef said in a telephone interview. “The camel enabled long-distance trade for the first time, all the way to India, and perfume trade with Arabia. It’s unlikely that mules and donkeys could have traversed the distance from one desert oasis to the next.”

Dr. Mizrahi, a professor of Hebrew culture studies at Tel Aviv University who was not directly involved in the research, said that by the seventh century B.C. camels had become widely employed in trade and travel in Israel and through the Middle East, from Africa as far as India. The camel’s influence on biblical research was profound, if confusing, for that happened to be the time that the patriarchal stories were committed to writing and eventually canonized as part of the Hebrew Bible.

“One should be careful not to rush to the conclusion that the new archaeological findings automatically deny any historical value from the biblical stories,” Dr. Mizrahi said in an email. “Rather, they established that these traditions were indeed reformulated in relatively late periods after camels had been integrated into the Near Eastern economic system. But this does not mean that these very traditions cannot capture other details that have an older historical background.”

Read the entire article here.

Image: Camels at the Great Pyramid of Giza, Egypt. Courtesy of Wikipedia.

Send to Kindle

Is Your City Killing You?

The stresses of modern-day living are taking a toll on your mind and body, and more so if you happen to live in a concrete jungle: the effects are most pronounced for those of us in large urban centers. That’s the finding of some fascinating new brain research out of Germany. The researchers’ simple answer to a lower-stress life: move to the countryside.

From The Guardian:

You are lying down with your head in a noisy and tightfitting fMRI brain scanner, which is unnerving in itself. You agreed to take part in this experiment, and at first the psychologists in charge seemed nice.

They set you some rather confusing maths problems to solve against the clock, and you are doing your best, but they aren’t happy. “Can you please concentrate a little better?” they keep saying into your headphones. Or, “You are among the worst performing individuals to have been studied in this laboratory.” Helpful things like that. It is a relief when time runs out.

Few people would enjoy this experience, and indeed the volunteers who underwent it were monitored to make sure they had a stressful time. Their minor suffering, however, provided data for what became a major study, and a global news story. The researchers, led by Dr Andreas Meyer-Lindenberg of the Central Institute of Mental Health in Mannheim, Germany, were trying to find out more about how the brains of different people handle stress. They discovered that city dwellers’ brains, compared with people who live in the countryside, seem not to handle it so well.

To be specific, while Meyer-Lindenberg and his accomplices were stressing out their subjects, they were looking at two brain regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are known to be involved in assessing threats and generating fear, while the pACC in turn helps to regulate the amygdalas. In stressed city dwellers, the amygdalas appeared more active on the scanner; in people who lived in small towns, less so; in people who lived in the countryside, least of all.

And something even more intriguing was happening in the pACC. Here the important relationship was not with where the subjects lived at the time, but where they grew up. Again, those with rural childhoods showed the least active pACCs, those with urban ones the most. In the urban group, moreover, there seemed not to be the same smooth connection between the behaviour of the two brain regions that was observed in the others. An erratic link between the pACC and the amygdalas is often seen in those with schizophrenia too. And schizophrenic people are much more likely to live in cities.

When the results were published in Nature, in 2011, media all over the world hailed the study as proof that cities send us mad. Of course it proved no such thing – but it did suggest it. Even allowing for all the usual caveats about the limitations of fMRI imaging, the small size of the study group and the huge holes that still remained in our understanding, the results offered a tempting glimpse at the kind of urban warping of our minds that some people, at least, have linked to city life since the days of Sodom and Gomorrah.

The year before the Meyer-Lindenberg study was published, the existence of that link had been established still more firmly by a group of Dutch researchers led by Dr Jaap Peen. In their meta-analysis (essentially a pooling together of many other pieces of research) they found that living in a city roughly doubles the risk of schizophrenia – around the same level of danger that is added by smoking a lot of cannabis as a teenager.

At the same time urban living was found to raise the risk of anxiety disorders and mood disorders by 21% and 39% respectively. Interestingly, however, a person’s risk of addiction disorders seemed not to be affected by where they live. At one time it was considered that those at risk of mental illness were just more likely to move to cities, but other research has now more or less ruled that out.

So why is it that the larger the settlement you live in, the more likely you are to become mentally ill? Another German researcher and clinician, Dr Mazda Adli, is a keen advocate of one theory, which implicates that most paradoxical urban mixture: loneliness in crowds. “Obviously our brains are not perfectly shaped for living in urban environments,” Adli says. “In my view, if social density and social isolation come at the same time and hit high-risk individuals … then city-stress related mental illness can be the consequence.”

Read the entire story here.

Send to Kindle

Mining Minecraft

minecraft-example

If you have a child under the age of 13, it’s likely that you’ve heard of, seen or even used Minecraft. More than just a typical online game, Minecraft is a playground for aspiring architects — despite the Creepers. Minecraft began in 2011 with a simple premise — place and remove blocks to fend off unwanted marauders. Now it has become a blank canvas for young minds to design and collaborate on building fantastical structures. My own twin 11-year-olds have designed their dream homes complete with basement stables, glass stairways and a roof-top pool.

From the Guardian:

I couldn’t pinpoint exactly when I became aware of my eight-year-old son’s fixation with Minecraft. I only know that the odd reference to zombies and pickaxes burgeoned until it was an omnipresent force in our household, the dominant topic of conversation and, most bafflingly, a game he found so gripping that he didn’t just want to play it, he wanted to watch YouTube videos of others playing it too.

This was clearly more than any old computer game – for Otis and, judging by discussion at the school gates, his friends too. I felt as if he’d joined a cult, albeit a reasonably benign one, though as someone who last played a computer game when Jet Set Willy was the height of technological wizardry, I hardly felt in a position to judge.

Minecraft, I realised, was something I knew nothing about. It was time to become acquainted. I announced my intention to give myself a crash course in the game to Otis one evening, interrupting his search for Obsidian to build a portal to the Nether dimension. As you do. “Why would you want to play Minecraft?” he asked, as if I’d confided that I was taking up a career in trapeze-artistry.

For anyone as mystified about it as I was, Minecraft is now one of the world’s biggest computer games, a global phenomenon that’s totted up 14,403,011 purchases as I write; 19,270 in the past 24 hours – live statistics they update on their website, as if it were Children in Need night.

Trying to define the objective of the game isn’t easy. When I ask Otis, he shrugs. “I’m not sure there is one. But that’s what’s brilliant. You can do anything you like.”

This doesn’t seem like much of an insight, though to be fair, the developers themselves, Mojang, define it succinctly as, “a game about breaking and placing blocks”. This sounds delightfully simple, an impression echoed by its graphics. In sharp contrast to the rich, more cinematic style of other games, this is unapologetically old school, the sort of computer game of the future that Marty McFly would have played.

In this case, looks are deceptive. “The pixelated style might appear simple but it masks a huge amount of depth and complexity,” explains Alex Wiltshire, former editor of Edge magazine and author of forthcoming Minecraft guide, Block-o-pedia. “Its complex nature doesn’t lie in detailed art assets, but in how each element of the game interrelates.”

It’s this that gives players the potential to produce elaborate constructions on a lavish scale; fans have made everything from 1:1 scale re-creations of the Lord of the Rings’ Mines of Moria, to models of entire cities.

I’m a long way from that. “Don’t worry, Mum – when I first went on it when I was six, I had no idea what I was doing,” Otis reassures, shaking his head at the thought of those naive days, way back when.

Otis’s device of choice is his iPod, ideal for on-the-move sessions, though this once caused him serious grief after being caught on it under his duvet after lights out. I take one look at the lightning speed with which his fingers move and decide to download it on to my MacBook instead. The introduction of an additional version of the game into our household is greeted very much like Walter Raleigh’s return from the New World.

We open up the game and he tells me that I am “Steve”, the default player, and that we get a choice of modes in which to play: creative or survival. He suggests I start with the former on the basis that this is the best place for those who aren’t very good at it.

In creative mode, you are dropped into a newly generated world (an island in our case) and gifted a raft of resources – everything from coal and lapis lazuli to cake and beds.

At the risk of sounding like a dunce, it isn’t at all obvious what I’m supposed to do. So instead of springing into action, I’m left standing, looking around lamely as if I’m on the edge of a dance floor waiting for someone to come and put me out of my misery. Despite knowing that the major skill required in this game is building, before Otis intervenes, the most I can accomplish is to dig a few holes.

“When it first came out everyone was confused as the developer gave little or no guidance,” says Wiltshire. “It didn’t specifically say you had to cut down a tree to get some wood, whereas games that are produced by big companies give instructions – the last thing they want is for people not to understand how to play. With Minecraft, which had an indie developer, the player had to work things out for themselves. It was quite a tonic.”

He believes that this is why a game not specifically designed for children has become so popular with them. “Because you learn so much when you’re young, kids are used to the idea of a world they don’t fully understand, so they’re comfortable with having to find things out for themselves.”

For the moment, I’m happy to take instruction from my son, who begins his demonstration by creating a rollercoaster – an obvious priority when you’ve just landed on a desert island. He quickly installs its tracks, weaving them through trees and into the sea, before sending Steve for a ride. He asks me if I feel ready to have a go. I feel as if I’m on a nursing home word processing course.

Familiarising yourself takes a little time but once you get going – and have worked out the controls – being able to run, fly, swim and build is undeniably absorbing. I also finally manage to construct something, a slightly disappointing shipping container-type affair that explodes Wiltshire’s assertion that it’s “virtually impossible to build something that looks terrible in Minecraft”. Still, I’m enjoying it, I can’t deny it. Aged eight, I’d have loved it every bit as much as my son does.

The more I play it, the more I also start to understand why this game is being championed for its educational possibilities, with some schools in the US using it as a tool to teach maths and science.

Dr Helen O’Connor, who runs UK-based Childchology – which provides children and their families with support for common psychological problems via the internet – said: “Minecraft offers some strong positives for children. It works on a cognitive level in that it involves problem solving, imagination, memory, creativity and logical sequencing. There is a good educational element to the game, and it also requires some number crunching.

“Unlike lots of other games, there is little violence, with the exception of fighting off a few zombies and creepers. This is perhaps one of the reasons why it is fairly gender neutral and girls enjoy playing it as well as boys.”

The next part of Otis’s demonstration involves switching to survival mode. He explains: “You’ve got to find the resources yourself here. You’re not just given them. Oh and there are villains too. Zombie pigmen and that kind of thing.”

It’s clear that life in survival mode is a significantly hairier prospect than in creative, particularly when Otis changes the difficulty setting to its highest notch. He says he doesn’t do this often because, after spending three weeks creating a house from wood and cobblestones, zombies nearly trashed the place. I make a mental note to remind him of this conversation next time he has a sleepover.

One of the things that’s so appealing about Minecraft is that there is no obvious start and end; it’s a game of infinite possibilities, which is presumably why it’s often compared to Lego. Yet the addictive nature of the game is clearly vexing many parents: internet talkboards are awash with people seeking advice on how to prise their children away from it.

Read the entire story here.

Image courtesy of Minecraft.

Send to Kindle

United Kingdom Without the United

new-union-jack

There is increasing noise in the media about Scottish independence. With the referendum a mere six months away — September 18, 2014 to be precise — what would the United Kingdom look like without the anchor nation to the north? An immediate consequence would be the need to redraw the UK’s Union Jack flag.

Avid vexillophiles will know that the Union Jack is a melding of the flags of the nations that make up the union — with one key omission. Wales does not feature on today’s flag. So, perhaps, if Scotland were to leave the UK, the official flag designers could make up for the gross omission and add Wales as they remove Saint Andrew’s cross, which represents Scotland.

Would-be designers have been letting their imaginations run wild with some fascinating and humorous designs — though one must suspect that Her Majesty the Queen, sovereign of this fair isle, is certainly not amused by the possible break-up of her royal domain.

From the Atlantic:

Long after the Empire’s collapse, the Union Jack remains an internationally recognized symbol of Britain. But all that could change soon. Scotland, one of the four countries that make up the United Kingdom (along with England, Northern Ireland, and Wales), will hold a referendum on independence this September. If it succeeds, Britain’s iconic flag may need a makeover.

The Flag Institute, the U.K.’s national flag charity and the largest membership-based vexillological organization in the world, recently polled its members and found that nearly 65 percent of respondents felt the Union Jack should be changed if Scotland becomes independent. And after the poll, the organization found itself flooded with suggested replacements for the flag.

“We are not advocating changing the flag. We are not advising changing the flag. We are not encouraging a change to the flag. We are not discouraging a change to the flag,” Charles Ashburner, the Flag Institute’s chief executive and trustee, told me. “We are simply here to facilitate and inform the debate if there is an appetite for such a thing.”

“As this subject has generated the largest post bag of any single subject in our history, however,” Ashburner noted, “there is clearly such an appetite.”

The Union Jack’s history is closely intertwined with the U.K.’s history. After Elizabeth I died in 1603, her cousin, King James VI of Scotland, ascended to the English throne as James I of England. With Britain united under one king for the first time, James sought to symbolize his joint rule of the two countries with a new flag in 1606. The design placed the traditional English flag, known as the cross of Saint George, over the traditional Scottish flag, known as the cross of Saint Andrew.

England and Scotland remained independent countries with separate parliaments, royal courts, and flags until they fully merged under the Act of Union in 1707. Queen Anne then adopted James I’s symbolic flag as the national banner of Great Britain. When Ireland merged with Britain in 1801 to form the modern United Kingdom, the British flag incorporated Ireland’s cross of Saint Patrick to create the modern Union Jack. The flag’s design did not change after Irish independence in the mid-20th century because Saint Patrick’s cross still represents Northern Ireland, which remained part of the U.K.

The Union Jack doesn’t represent everyone, though. England, Scotland, and Northern Ireland are included, but Wales, the fourth U.K. country, isn’t. Because Wales was considered part of the English crown in 1606 (with the title “Prince of Wales” reserved for that crown’s heir) after its annexation by England centuries earlier, neither James I’s original design nor any subsequent design based on it bears any influence of the culturally distinct, Celtic-influenced territory.

British authorities granted Wales’ red-dragon flag, or Y Ddraig Goch in Welsh, official status in 1959. But attempts to add Welsh symbolism to the Union Jack haven’t succeeded; in 2007, a member of Parliament from Wales proposed adding the Welsh dragon to the flag, to no avail. Iconography could involve more than just the dragon: Like the U.K.’s other three countries, Wales has a patron saint, Saint David, and a black-and-gold flag to represent him.

If Scotland stays in the U.K., incorporating Wales into the British flag could be as simple as adding yellow borders.

Read the entire article here.

Image: A Royal Standard-influenced design to replace the Union Jack, should Scotland secede from the United Kingdom. Courtesy of the UK Flag Institute.

Send to Kindle

Which is Your God?

Is your God the one to be feared from the Old Testament? Or is yours the God who brought forth the angel Moroni? Or are your gods those revered by Hindus, Ancient Greeks, or the Norse? Theists have continuing trouble answering these fundamental questions, much to the consternation (and satisfaction) of atheists.

In a thoughtful interview with Gary Gutting, Louise Antony, a professor of philosophy at the University of Massachusetts, frames these questions in the broader context of morality and social justice.

From the NYT:

Gary Gutting: You’ve taken a strong stand as an atheist, so you obviously don’t think there are any good reasons to believe in God. But I imagine there are philosophers whose rational abilities you respect who are theists. How do you explain their disagreement with you? Are they just not thinking clearly on this topic?

Louise Antony: I’m not sure what you mean by saying that I’ve taken a “strong stand as an atheist.” I don’t consider myself an agnostic; I claim to know that God doesn’t exist, if that’s what you mean.

G.G.: That is what I mean.

L.A.: O.K. So the question is, why do I say that theism is false, rather than just unproven? Because the question has been settled to my satisfaction. I say “there is no God” with the same confidence I say “there are no ghosts” or “there is no magic.” The main issue is supernaturalism — I deny that there are beings or phenomena outside the scope of natural law.

That’s not to say that I think everything is within the scope of human knowledge. Surely there are things not dreamt of in our philosophy, not to mention in our science – but that fact is not a reason to believe in supernatural beings. I think many arguments for the existence of a God depend on the insufficiencies of human cognition. I readily grant that we have cognitive limitations. But when we bump up against them, when we find we cannot explain something — like why the fundamental physical parameters happen to have the values that they have — the right conclusion to draw is that we just can’t explain the thing. That’s the proper place for agnosticism and humility.

But getting back to your question: I’m puzzled why you are puzzled how rational people could disagree about the existence of God. Why not ask about disagreements among theists? Jews and Muslims disagree with Christians about the divinity of Jesus; Protestants disagree with Catholics about the virginity of Mary; Protestants disagree with Protestants about predestination, infant baptism and the inerrancy of the Bible. Hindus think there are many gods while Unitarians think there is at most one. Don’t all these disagreements demand explanation too? Must a Christian Scientist say that Episcopalians are just not thinking clearly? Are you going to ask a Catholic if she thinks there are no good reasons for believing in the angel Moroni?

G.G.: Yes, I do think it’s relevant to ask believers why they prefer their particular brand of theism to other brands. It seems to me that, at some point of specificity, most people don’t have reasons beyond being comfortable with one community rather than another. I think it’s at least sometimes important for believers to have a sense of what that point is. But people with many different specific beliefs share a belief in God — a supreme being who made and rules the world. You’ve taken a strong stand against that fundamental view, which is why I’m asking you about that.

L.A.: Well I’m challenging the idea that there’s one fundamental view here. Even if I could be convinced that supernatural beings exist, there’d be a whole separate issue about how many such beings there are and what those beings are like. Many theists think they’re home free with something like the argument from design: that there is empirical evidence of a purposeful design in nature. But it’s one thing to argue that the universe must be the product of some kind of intelligent agent; it’s quite something else to argue that this designer was all-knowing and omnipotent. Why is that a better hypothesis than that the designer was pretty smart but made a few mistakes? Maybe (I’m just cribbing from Hume here) there was a committee of intelligent creators, who didn’t quite agree on everything. Maybe the creator was a student god, and only got a B- on this project.

In any case though, I don’t see that claiming to know that there is no God requires me to say that no one could have good reasons to believe in God. I don’t think there’s some general answer to the question, “Why do theists believe in God?” I expect that the explanation for theists’ beliefs varies from theist to theist. So I’d have to take things on a case-by-case basis.

I have talked about this with some of my theist friends, and I’ve read some personal accounts by theists, and in those cases, I feel that I have some idea why they believe what they believe. But I can allow there are arguments for theism that I haven’t considered, or objections to my own position that I don’t know about. I don’t think that when two people take opposing stands on any issue that one of them has to be irrational or ignorant.

G.G.: No, they may both be rational. But suppose you and your theist friend are equally adept at reasoning, equally informed about relevant evidence, equally honest and fair-minded — suppose, that is, you are what philosophers call epistemic peers: equally reliable as knowers. Then shouldn’t each of you recognize that you’re no more likely to be right than your peer is, and so both retreat to an agnostic position?

L.A.: Yes, this is an interesting puzzle in the abstract: How could two epistemic peers — two equally rational, equally well-informed thinkers — fail to converge on the same opinions? But it is not a problem in the real world. In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe.

G.G.: So is your point that we always have reason to think that people who disagree are not epistemic peers?

L.A.: It’s worse than that. The whole notion of epistemic peers belongs only to the abstract study of knowledge, and has no role to play in real life. Take the notion of “equal cognitive powers”: speaking in terms of real human minds, we have no idea how to seriously compare the cognitive powers of two people.

Read the entire article here.

Send to Kindle

The Magnificent Seven

Magnificent-seven

Actually, these seven will not save your village from bandits. Nor will they ride triumphant into the sunset on horseback. These seven are more mundane, but they are nonetheless shrouded in a degree of mystery, albeit rather technical. They are the seven holders of the seven keys that control the Internet’s core directory — the Domain Name System. Without it, the Internet’s billions of users would not be able to browse, search, shop, email, or text.

From the Guardian:

In a nondescript industrial estate in El Segundo, a boxy suburb in south-west Los Angeles just a mile or two from LAX international airport, 20 people wait in a windowless canteen for a ceremony to begin. Outside, the sun is shining on an unseasonably warm February day; inside, the only light comes from the glare of halogen bulbs.

There is a strange mix of accents – predominantly American, but smatterings of Swedish, Russian, Spanish and Portuguese can be heard around the room, as men and women (but mostly men) chat over pepperoni pizza and 75-cent vending machine soda. In the corner, an Asteroids arcade machine blares out tinny music and flashing lights.

It might be a fairly typical office scene, were it not for the extraordinary security procedures that everyone in this room has had to complete just to get here, the sort of measures normally reserved for nuclear launch codes or presidential visits. The reason we are all here sounds like the stuff of science fiction, or the plot of a new Tom Cruise franchise: the ceremony we are about to witness sees the coming together of a group of people, from all over the world, who each hold a key to the internet. Together, their keys create a master key, which in turn controls one of the central security measures at the core of the web. Rumours about the power of these keyholders abound: could their key switch off the internet? Or, if someone somehow managed to bring the whole system down, could they turn it on again?

The keyholders have been meeting four times a year, twice on the east coast of the US and twice here on the west, since 2010. Gaining access to their inner sanctum isn’t easy, but last month I was invited along to watch the ceremony and meet some of the keyholders – a select group of security experts from around the world. All have long backgrounds in internet security and work for various international institutions. They were chosen for their geographical spread as well as their experience – no one country is allowed to have too many keyholders. They travel to the ceremony at their own, or their employer’s, expense.

What these men and women control is the system at the heart of the web: the domain name system, or DNS. This is the internet’s version of a telephone directory – a series of registers linking web addresses to a series of numbers, called IP addresses. Without these addresses, you would need to know a long sequence of numbers for every site you wanted to visit. To get to the Guardian, for instance, you’d have to enter “77.91.251.10” instead of theguardian.com.
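
The directory idea described above can be sketched in a few lines of Python. The mapping below is a toy stand-in: real DNS is a distributed, hierarchical database, and the Guardian’s IP address has likely changed since the article ran.

```python
# A toy name-to-address directory, mimicking what a DNS lookup does:
# translate a memorable hostname into the numeric IP address machines use.
directory = {
    "theguardian.com": "77.91.251.10",  # address as quoted in the article
}

def resolve(name: str) -> str:
    """Look up a hostname; an unknown name is DNS's NXDOMAIN error."""
    try:
        return directory[name]
    except KeyError:
        raise LookupError(f"no record for {name}")

print(resolve("theguardian.com"))  # prints 77.91.251.10
```

The real system distributes this table across root, top-level, and authoritative name servers, but the contract is the same: a name goes in, an address comes out.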

The master key is part of a new global effort to make the whole domain name system secure and the internet safer: every time the keyholders meet, they are verifying that each entry in these online “phone books” is authentic. This prevents a proliferation of fake web addresses which could lead people to malicious sites, used to hack computers or steal credit card details.
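
Conceptually, that verification amounts to checking each entry against a signature made with a trusted key. The sketch below uses an HMAC purely for illustration; actual DNSSEC uses public-key signatures (DNSKEY and RRSIG records), and the key value here is invented.

```python
import hmac
import hashlib

TRUSTED_KEY = b"zone-signing-key"  # hypothetical stand-in for a real signing key

def sign(record: str) -> str:
    """Produce a tamper-evident signature for a directory entry."""
    return hmac.new(TRUSTED_KEY, record.encode(), hashlib.sha256).hexdigest()

def is_authentic(record: str, signature: str) -> bool:
    """Accept the entry only if its signature checks out."""
    return hmac.compare_digest(sign(record), signature)

entry = "theguardian.com -> 77.91.251.10"
signature = sign(entry)
print(is_authentic(entry, signature))                          # True
print(is_authentic("theguardian.com -> 6.6.6.6", signature))   # False: forged entry
```

A forged address fails the check even though it is a perfectly well-formed entry, which is exactly what stops a fake “phone book” from redirecting users to a malicious site.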

The east and west coast ceremonies each have seven keyholders, with a further seven people around the world who could access a last-resort measure to reconstruct the system if something calamitous were to happen. Each of the 14 primary keyholders owns a traditional metal key to a safety deposit box, which in turn contains a smartcard, which in turn activates a machine that creates a new master key. The backup keyholders have something a bit different: smartcards that contain a fragment of code needed to build a replacement key-generating machine. Once a year, these shadow holders send the organisation that runs the system – the Internet Corporation for Assigned Names and Numbers (Icann) – a photograph of themselves with that day’s newspaper and their key, to verify that all is well.
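
The fragment scheme described above can be illustrated with the simplest possible key-splitting construction, XOR splitting, in which every fragment is needed to rebuild the secret. This is only a sketch of the idea; Icann’s actual ceremony uses smartcards and hardware security modules, not this code.

```python
import secrets

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split a secret into n fragments; ALL n are required to recover it."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        # XOR each random fragment into the final one, so the fragments
        # cancel out only when every single one is combined again.
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def recover(shares: list[bytes]) -> bytes:
    """XOR every fragment together to reconstruct the original secret."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

fragments = split_secret(b"key-generating material", 7)
assert recover(fragments) == b"key-generating material"
```

Practical threshold schemes such as Shamir’s secret sharing go further, allowing recovery from a subset of fragments (say, any five of seven) so that a lost smartcard does not doom the system.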

The fact that the US-based, not-for-profit organisation Icann – rather than a government or an international body – has one of the biggest jobs in maintaining global internet security has inevitably come in for criticism. Today’s occasionally over-the-top ceremony (streamed live on Icann’s website) is intended to prove how seriously they are taking this responsibility. It’s one part The Matrix (the tech and security stuff) to two parts The Office (pretty much everything else).

For starters: to get to the canteen, you have to walk through a door that requires a pin code, a smartcard and a biometric hand scan. This takes you into a “mantrap”, a small room in which only one door at a time can ever be open. Another sequence of smartcards, handprints and codes opens the exit. Now you’re in the break room.

Already, not everything has gone entirely to plan. Leaning next to the Atari arcade machine, ex-state department official Rick Lamb, smartly suited and wearing black-rimmed glasses (he admits he’s dressed up for the occasion), is telling someone that one of the on-site guards had asked him out loud, “And your security pin is 9925, yes?” “Well, it was…” he says, with an eye-roll. Looking in our direction, he says it’s already been changed.

Lamb is now a senior programme manager for Icann, helping to roll out the new, secure system for verifying the web. This is happening fast, but it is not yet fully in play. If the master key were lost or stolen today, the consequences might not be calamitous: some users would receive security warnings, some networks would have problems, but not much more. But once everyone has moved to the new, more secure system (this is expected in the next three to five years), the effects of losing or damaging the key would be far graver. While every server would still be there, nothing would connect: it would all register as untrustworthy. The whole system, the backbone of the internet, would need to be rebuilt over weeks or months. What would happen if an intelligence agency or hacker – the NSA or Syrian Electronic Army, say – got hold of a copy of the master key? It’s possible they could redirect specific targets to fake websites designed to exploit their computers – although Icann and the keyholders say this is unlikely.

Standing in the break room next to Lamb is Dmitry Burkov, one of the keyholders, a brusque and heavy-set Russian security expert on the boards of several internet NGOs, who has flown in from Moscow for the ceremony. “The key issue with internet governance is always trust,” he says. “No matter what the forum, it always comes down to trust.” Given the tensions between Russia and the US, and Russia’s calls for new organisations to be put in charge of the internet, does he have faith in this current system? He gestures to the room at large: “They’re the best part of Icann.” I take it he means he likes these people, and not the wider organisation, but he won’t be drawn further.

It’s time to move to the ceremony room itself, which has been cleared for the most sensitive classified information. No electrical signals can come in or out. Building security guards are barred, as are cleaners. To make sure the room looks decent for visitors, an east coast keyholder, Anne-Marie Eklund Löwinder of Sweden, has been in the day before to vacuum with a $20 dustbuster.

We’re about to begin a detailed, tightly scripted series of more than 100 actions, all recorded to the minute using the GMT time zone for consistency. These steps are a strange mix of high-security measures lifted straight from a thriller (keycards, safe combinations, secure cages), coupled with more mundane technical details – a bit of trouble setting up a printer – and occasional bouts of farce. In short, much like the internet itself.

Read the entire article here.

Image: The Magnificent Seven, movie poster. Courtesy of Wikia.

Send to Kindle

Unification of Byzantine Fault Tolerance

The title reads rather elegantly. However, I have no idea what it means, and I challenge you to find meaning in it as well. You see, while your friendly editor typed the title, the words themselves came from a non-human author, who goes by the name SCIgen.

SCIgen is an automated scientific paper generator. Accessible via the internet, it generates utterly random nonsense, complete with an abstract, hypothesis, test results, detailed diagrams and charts, and even academic references. At first glance the output seems highly convincing. In fact, unscrupulous individuals have used it to author fake submissions to scientific conferences and to generate bogus research papers for publication in academic journals.
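
SCIgen works by recursively expanding a hand-written context-free grammar until only words remain. The few lines below sketch that idea with a made-up grammar; the rules and vocabulary are my own invention, not SCIgen’s.

```python
import random
import re

# A made-up grammar in the spirit of SCIgen: nonterminals in <angle brackets>
# expand to randomly chosen alternatives until only plain text remains.
GRAMMAR = {
    "<sentence>": ["We <verb> that <noun> can be made <adj>, <adj>, and <adj>."],
    "<verb>": ["confirm", "disprove", "argue"],
    "<noun>": ["the transistor", "IPv4", "write-back caches",
               "Byzantine fault tolerance"],
    "<adj>": ["game-theoretic", "homogeneous", "signed", "introspective"],
}

def expand(text: str, rng: random.Random) -> str:
    """Repeatedly replace the first nonterminal until none are left."""
    pattern = re.compile(r"<\w+>")
    while (match := pattern.search(text)):
        replacement = rng.choice(GRAMMAR[match.group()])
        text = text[:match.start()] + replacement + text[match.end():]
    return text

print(expand("<sentence>", random.Random()))
```

Every run yields a grammatical, confident-sounding, and entirely meaningless sentence, which is precisely why skim-reading reviewers get fooled.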

This says a great deal about the quality of some academic conferences and their peer review processes (or lack thereof).

Access the SCIgen generator here.

Read more about the Unification of Byzantine Fault Tolerance — our very own scientific paper — below.

The Effect of Perfect Modalities on Hardware and Architecture

Bob Widgleton, Jordan LeBouth and Apropos Smythe

Abstract

The implications of pseudorandom archetypes have been far-reaching and pervasive. After years of confusing research into e-commerce, we demonstrate the refinement of rasterization, which embodies the confusing principles of cryptography [21]. We propose new modular communication, which we call Tither.

Table of Contents

1) Introduction
2) Principles
3) Implementation
4) Evaluation
5) Related Work
6) Conclusion

1  Introduction

The transistor must work. Our mission here is to set the record straight. On the other hand, a typical challenge in machine learning is the exploration of simulated annealing. Furthermore, an intuitive quandary in robotics is the confirmed unification of Byzantine fault tolerance and thin clients. Clearly, XML and Moore’s Law [22] interact in order to achieve the visualization of the location-identity split. This at first glance seems unexpected but has ample historical precedence.
We confirm not only that IPv4 can be made game-theoretic, homogeneous, and signed, but that the same is true for write-back caches. In addition, we view operating systems as following a cycle of four phases: location, location, construction, and evaluation. It should be noted that our methodology turns the stable communication sledgehammer into a scalpel. Despite the fact that it might seem unexpected, it always conflicts with the need to provide active networks to experts. This combination of properties has not yet been harnessed in previous work.
Nevertheless, this solution is fraught with difficulty, largely due to perfect information. In the opinions of many, the usual methods for the development of multi-processors do not apply in this area. By comparison, it should be noted that Tither studies event-driven epistemologies. By comparison, the flaw of this type of solution, however, is that red-black trees can be made efficient, linear-time, and replicated. This combination of properties has not yet been harnessed in existing work.
Here we construct the following contributions in detail. We disprove that although the well-known unstable algorithm for the compelling unification of I/O automata and interrupts by Ito et al. is recursively enumerable, the acclaimed collaborative algorithm for the investigation of 802.11b by Davis et al. [4] runs in Ω(n) time. We prove not only that neural networks and kernels are generally incompatible, but that the same is true for DHCP. we verify that while the foremost encrypted algorithm for the exploration of the transistor by D. Nehru [23] runs in Ω(n) time, the location-identity split and the producer-consumer problem are always incompatible.
The rest of this paper is organized as follows. We motivate the need for the partition table. Similarly, to fulfill this intent, we describe a novel approach for the synthesis of context-free grammar (Tither), arguing that IPv6 and write-back caches are continuously incompatible. We argue the construction of multi-processors. This follows from the understanding of the transistor that would allow for further study into robots. Ultimately, we conclude.

2  Principles

In this section, we present a framework for enabling model checking. We show our framework’s authenticated management in Figure 1. We consider a methodology consisting of n spreadsheets. The question is, will Tither satisfy all of these assumptions? Yes, but only in theory.

dia0.png

Figure 1: An application for the visualization of DHTs [24].

Furthermore, we assume that electronic theory can prevent compilers without needing to locate the synthesis of massive multiplayer online role-playing games. This is a compelling property of our framework. We assume that the foremost replicated algorithm for the construction of redundancy by John Kubiatowicz et al. follows a Zipf-like distribution. Along these same lines, we performed a day-long trace confirming that our framework is solidly grounded in reality. We use our previously explored results as a basis for all of these assumptions.

dia1.png

Figure 2: A decision tree showing the relationship between our framework and the simulation of context-free grammar.

Reality aside, we would like to deploy a methodology for how Tither might behave in theory. This seems to hold in most cases. Figure 1 depicts the relationship between Tither and linear-time communication. We postulate that each component of Tither enables active networks, independent of all other components. This is a key property of our heuristic. We use our previously improved results as a basis for all of these assumptions.

3  Implementation

Though many skeptics said it couldn’t be done (most notably Wu et al.), we propose a fully-working version of Tither. It at first glance seems unexpected but is supported by prior work in the field. We have not yet implemented the server daemon, as this is the least private component of Tither. We have not yet implemented the homegrown database, as this is the least appropriate component of Tither. It is entirely a significant aim but is derived from known results.

4  Evaluation

As we will soon see, the goals of this section are manifold. Our overall evaluation method seeks to prove three hypotheses: (1) that the World Wide Web no longer influences performance; (2) that an application’s effective ABI is not as important as median signal-to-noise ratio when minimizing median signal-to-noise ratio; and finally (3) that USB key throughput behaves fundamentally differently on our system. Our logic follows a new model: performance might cause us to lose sleep only as long as usability takes a back seat to simplicity constraints. Furthermore, our logic follows a new model: performance might cause us to lose sleep only as long as scalability constraints take a back seat to performance constraints. Only with the benefit of our system’s legacy code complexity might we optimize for performance at the cost of signal-to-noise ratio. Our evaluation approach will show that increasing the instruction rate of concurrent symmetries is crucial to our results.

4.1  Hardware and Software Configuration

figure0.png

Figure 3: Note that popularity of multi-processors grows as complexity decreases – a phenomenon worth exploring in its own right.

Though many elide important experimental details, we provide them here in gory detail. We instrumented a deployment on our network to prove the work of Italian mad scientist K. Ito. Had we emulated our underwater cluster, as opposed to deploying it in a chaotic spatio-temporal environment, we would have seen weakened results. For starters, we added 3 2GB optical drives to MIT’s decommissioned UNIVACs. This configuration step was time-consuming but worth it in the end. We removed 2MB of RAM from our 10-node testbed [15]. We removed more 2GHz Intel 386s from our underwater cluster. Furthermore, steganographers added 3kB/s of Internet access to MIT’s planetary-scale cluster.

figure1.png

Figure 4: These results were obtained by Noam Chomsky et al. [23]; we reproduce them here for clarity.

Tither runs on autogenerated standard software. We implemented our model checking server in x86 assembly, augmented with collectively wireless, noisy extensions. Our experiments soon proved that automating our Knesis keyboards was more effective than instrumenting them, as previous work suggested. Second, all of these techniques are of interesting historical significance; R. Tarjan and Andrew Yao investigated an orthogonal setup in 1967.

figure2.png

Figure 5: The average distance of our application, compared with the other applications.

4.2  Experiments and Results

figure3.png

Figure 6: The expected instruction rate of our application, as a function of popularity of replication.

figure4.png

Figure 7: Note that hit ratio grows as interrupt rate decreases – a phenomenon worth studying in its own right.

We have taken great pains to describe out evaluation setup; now, the payoff, is to discuss our results. That being said, we ran four novel experiments: (1) we ran von Neumann machines on 15 nodes spread throughout the underwater network, and compared them against semaphores running locally; (2) we measured database and instant messenger performance on our planetary-scale cluster; (3) we ran 87 trials with a simulated DHCP workload, and compared results to our courseware deployment; and (4) we ran 58 trials with a simulated RAID array workload, and compared results to our bioware simulation. All of these experiments completed without LAN congestion or access-link congestion.
Now for the climactic analysis of the second half of our experiments. Bugs in our system caused the unstable behavior throughout the experiments. Continuing with this rationale, bugs in our system caused the unstable behavior throughout the experiments. These expected time since 1935 observations contrast to those seen in earlier work [29], such as Alan Turing’s seminal treatise on RPCs and observed block size.
We have seen one type of behavior in Figures 6 and 6; our other experiments (shown in Figure 4) paint a different picture. Operator error alone cannot account for these results. Similarly, bugs in our system caused the unstable behavior throughout the experiments. Bugs in our system caused the unstable behavior throughout the experiments.
Lastly, we discuss the first two experiments. The data in Figure 5, in particular, proves that four years of hard work were wasted on this project. Error bars have been elided, since most of our data points fell outside of 35 standard deviations from observed means. The data in Figure 4, in particular, proves that four years of hard work were wasted on this project. Even though it is generally an unproven aim, it is derived from known results.

5  Related Work

Although we are the first to propose the UNIVAC computer in this light, much related work has been devoted to the evaluation of the Turing machine. Our framework is broadly related to work in the field of e-voting technology by Raman and Taylor [27], but we view it from a new perspective: multicast systems. A comprehensive survey [3] is available in this space. Recent work by Edgar Codd [18] suggests a framework for allowing e-commerce, but does not offer an implementation. Moore et al. [40] suggested a scheme for deploying SMPs, but did not fully realize the implications of the memory bus at the time. Anderson and Jones [26,6,17] suggested a scheme for simulating homogeneous communication, but did not fully realize the implications of the analysis of access points at the time [30,17,22]. Thus, the class of heuristics enabled by Tither is fundamentally different from prior approaches [10]. Our design avoids this overhead.

5.1  802.11 Mesh Networks

Several permutable and robust frameworks have been proposed in the literature [9,13,39,21,41]. Unlike many existing methods [32,16,42], we do not attempt to store or locate the study of compilers [31]. Obviously, comparisons to this work are unreasonable. Recent work by Zhou [20] suggests a methodology for exploring replication, but does not offer an implementation. Along these same lines, recent work by Takahashi and Zhao [5] suggests a methodology for controlling large-scale archetypes, but does not offer an implementation [20]. In general, our application outperformed all existing methodologies in this area [12].

5.2  Compilers

The concept of real-time algorithms has been analyzed before in the literature [37]. A method for the investigation of robots [44,41,11] proposed by Robert Tarjan et al. fails to address several key issues that our solution does answer. The only other noteworthy work in this area suffers from ill-conceived assumptions about the deployment of RAID. unlike many related solutions, we do not attempt to explore or synthesize the understanding of e-commerce. Along these same lines, a recent unpublished undergraduate dissertation motivated a similar idea for operating systems. Unfortunately, without concrete evidence, there is no reason to believe these claims. Ultimately, the application of Watanabe et al. [14,45] is a practical choice for operating systems [25]. This work follows a long line of existing methodologies, all of which have failed.

5.3  Game-Theoretic Symmetries

A major source of our inspiration is early work by H. Suzuki [34] on efficient theory [35,44,28]. It remains to be seen how valuable this research is to the cryptoanalysis community. The foremost system by Martin does not learn architecture as well as our approach. An analysis of the Internet [36] proposed by Ito et al. fails to address several key issues that Tither does answer [19]. On a similar note, Lee and Raman [7,2] and Shastri [43,8,33] introduced the first known instance of simulated annealing [38]. Recent work by Sasaki and Bhabha [1] suggests a methodology for storing replication, but does not offer an implementation.

6  Conclusion

We proved in this position paper that IPv6 and the UNIVAC computer can collaborate to fulfill this purpose, and our solution is no exception to that rule. Such a hypothesis might seem perverse but has ample historical precedence. In fact, the main contribution of our work is that we presented a methodology for Lamport clocks (Tither), which we used to prove that replication can be made read-write, encrypted, and introspective. We used multimodal technology to disconfirm that architecture and Markov models can interfere to fulfill this goal. we showed that scalability in our method is not a challenge. Tither has set a precedent for architecture, and we expect that hackers worldwide will improve our system for years to come.

References

[1] Anderson, L. Constructing expert systems using symbiotic modalities. In Proceedings of the Symposium on Encrypted Modalities (June 1990).
[2] Bachman, C. The influence of decentralized algorithms on theory. Journal of Homogeneous, Autonomous Theory 70 (Oct. 1999), 52-65.
[3] Bachman, C., and Culler, D. Decoupling DHTs from DHCP in Scheme. Journal of Distributed, Distributed Methodologies 97 (Oct. 1999), 1-15.
[4] Backus, J., and Kaashoek, M. F. The relationship between B-Trees and Smalltalk with Paguma. Journal of Omniscient Technology 6 (June 2003), 70-99.
[5] Cocke, J. Deconstructing link-level acknowledgements using Samlet. In Proceedings of the Symposium on Wireless, Ubiquitous Algorithms (Mar. 2003).
[6] Cocke, J., and Williams, J. Constructing IPv7 using random models. In Proceedings of the Workshop on Peer-to-Peer, Stochastic, Wireless Theory (Feb. 1999).
[7] Dijkstra, E., and Rabin, M. O. Decoupling agents from fiber-optic cables in the transistor. In Proceedings of PODS (June 1993).
[8] Engelbart, D., Lee, T., and Ullman, J. A case for active networks. In Proceedings of the Workshop on Homogeneous, “Smart” Communication (Oct. 1996).
[9] Engelbart, D., Shastri, H., Zhao, S., and Floyd, S. Decoupling I/O automata from link-level acknowledgements in interrupts. Journal of Relational Epistemologies 55 (May 2004), 51-64.
[10] Estrin, D. Compact, extensible archetypes. Tech. Rep. 2937/7774, CMU, Oct. 2001.
[11] Fredrick P. Brooks, J., and Brooks, R. The relationship between replication and forward-error correction. Tech. Rep. 657/1182, UCSD, Nov. 2004.
[12] Garey, M. I/O automata considered harmful. In Proceedings of NDSS (July 1999).
[13] Gupta, P., Newell, A., McCarthy, J., Martinez, N., and Brown, G. On the investigation of fiber-optic cables. In Proceedings of the Symposium on Encrypted Theory (July 2005).
[14] Hartmanis, J. Constant-time, collaborative algorithms. Journal of Metamorphic Archetypes 34 (Oct. 2003), 71-95.
[15] Hennessy, J. A methodology for the exploration of forward-error correction. In Proceedings of SIGMETRICS (Mar. 2002).
[16] Kahan, W., and Ramagopalan, E. Deconstructing 802.11b using FUD. In Proceedings of OOPSLA (Oct. 2005).
[17] LeBout, J., and Anderson, T. a. The relationship between rasterization and robots using Faro. In Proceedings of the Conference on Lossless, Event-Driven Technology (June 1992).
[18] LeBout, J., and Jones, V. O. IPv7 considered harmful. Journal of Heterogeneous, Low-Energy Archetypes 20 (July 2005), 1-11.
[19] Lee, K., Taylor, O. K., Martinez, H. G., Milner, R., and Robinson, N. E. Capstan: Simulation of simulated annealing. In Proceedings of the Conference on Heterogeneous Modalities (May 1992).
[20] Nehru, W. The impact of unstable methodologies on e-voting technology. In Proceedings of NDSS (July 1994).
[21] Reddy, R. Improving fiber-optic cables and reinforcement learning. In Proceedings of the Workshop on Lossless Modalities (Mar. 1999).
[22] Ritchie, D., Ritchie, D., Culler, D., Stearns, R., Bose, X., Leiserson, C., Bhabha, U. R., and Sato, V. Understanding of the Internet. In Proceedings of IPTPS (June 2001).
[23] Sato, Q., and Smith, A. Decoupling Moore’s Law from hierarchical databases in SCSI disks. In Proceedings of IPTPS (Dec. 1997).
[24] Shenker, S., and Thomas, I. Deconstructing cache coherence. In Proceedings of the Workshop on Scalable, Relational Modalities (Feb. 2004).
[25] Simon, H., Tanenbaum, A., Blum, M., and Lakshminarayanan, K. An exploration of RAID using BordelaisMisuser. Tech. Rep. 98/30, IBM Research, May 1998.
[26] Smith, R., Estrin, D., Thompson, K., Brown, X., and Adleman, L. Architecture considered harmful. In Proceedings of the Workshop on Flexible, “Fuzzy” Theory (Apr. 2005).
[27] Sun, G. On the study of telephony. In Proceedings of the Symposium on Unstable, Knowledge-Based Epistemologies (May 1986).
[28] Sutherland, I. Deconstructing systems. In Proceedings of ASPLOS (June 2000).
[29] Suzuki, F. Y., Leary, T., Shastri, C., Lakshminarayanan, K., and Garcia-Molina, H. Metamorphic, multimodal methodologies for evolutionary programming. In Proceedings of the Workshop on Stable, Embedded Algorithms (Aug. 2005).
[30] Takahashi, O., Gupta, W., and Hoare, C. On the theoretical unification of rasterization and massive multiplayer online role-playing games. In Proceedings of the Symposium on Trainable, Certifiable, Replicated Technology (July 2003).
[31] Taylor, H., Morrison, R. T., Harris, Y., Bachman, C., Nygaard, K., Einstein, A., and Gupta, a. Byzantine fault tolerance considered harmful. In Proceedings of ASPLOS (Mar. 2003).
[32] Thomas, X. K. Real-time, cooperative communication for e-business. In Proceedings of POPL (May 2004).
[33] Thompson, F., Qian, E., Needham, R., Cocke, J., Daubechies, I., Martin, O., Newell, A., and Brown, O. Towards the understanding of consistent hashing. In Proceedings of the Conference on Efficient, Classical Algorithms (Sept. 1992).
[34] Thompson, K. Simulating hash tables and DNS. IEEE JSAC 7 (Apr. 2001), 75-82.
[35] Turing, A. Deconstructing IPv6 with ELOPS. In Proceedings of the Workshop on Atomic, Random Technology (Feb. 1995).
[36] Turing, A., Minsky, M., Bhabha, C., and Sun, P. A methodology for the construction of courseware. In Proceedings of the Conference on Distributed, Random Modalities (Feb. 2004).
[37] Ullman, J., and Ritchie, D. Distributed communication. In Proceedings of IPTPS (Nov. 2004).
[38] Welsh, M., Schroedinger, E., Daubechies, I., and Shastri, W. A methodology for the analysis of hash tables. In Proceedings of OSDI (Oct. 2002).
[39] White, V., and White, V. The influence of encrypted configurations on networking. Journal of Semantic, Flexible Theory 4 (July 2004), 154-198.
[40] Wigleton, B., Anderson, G., Wang, Q., Morrison, R. T., and Codd, E. A synthesis of Web services. In Proceedings of IPTPS (Mar. 1999).
[41] Wirth, N., and Hoare, C. A. R. Comparing DNS and checksums. OSR 310 (Jan. 2001), 159-191.
[42] Zhao, B., Smith, A., and Perlis, A. Deploying architecture and Internet QoS. In Proceedings of NOSSDAV (July 2001).
[43] Zhao, H. The effect of “smart” theory on hardware and architecture. In Proceedings of the USENIX Technical Conference (Apr. 2001).
[44] Zheng, N. A methodology for the understanding of superpages. In Proceedings of SOSP (Dec. 2005).
[45] Zheng, R., Smith, J., Chomsky, N., and Chandrasekharan, B. X. Comparing systems and redundancy with CandyUre. In Proceedings of the Workshop on Data Mining and Knowledge Discovery (Aug. 2003).

Apocalypse Now or Later?

Armageddon-poster06

Americans love their apocalypses. So, should demise come at the hands of a natural catastrophe, hastened by human (in)action, or should it come courtesy of an engineered biological or nuclear disaster? You choose. Isn’t this so much fun, thinking about absolute extinction?

Ira Chernus, Professor of Religious Studies at the University of Colorado at Boulder, brings us a much-needed scholarly account of our love affair with all things apocalyptic. But our fascination with Armageddon — often driven by hope — does nothing to resolve the ultimate conundrum: regardless of the type of ending, it is unlikely that Bruce Willis will be featured.

From TomDispatch / Salon:

Wherever we Americans look, the threat of apocalypse stares back at us.

Two clouds of genuine doom still darken our world: nuclear extermination and environmental extinction. If they got the urgent action they deserve, they would be at the top of our political priority list.

But they have a hard time holding our attention, crowded out as they are by a host of new perils also labeled “apocalyptic”: mounting federal debt, the government’s plan to take away our guns, corporate control of the Internet, the Comcast-Time Warner mergerocalypse, Beijing’s pollution airpocalypse, the American snowpocalypse, not to speak of earthquakes and plagues. The list of topics, thrown at us with abandon from the political right, left, and center, just keeps growing.

Then there’s the world of arts and entertainment where selling the apocalypse turns out to be a rewarding enterprise. Check out the website “Romantically Apocalyptic,” Slash’s album “Apocalyptic Love,” or the history-lite documentary “Viking Apocalypse” for starters. These days, mathematicians even have an “apocalyptic number.”

Yes, the A-word is now everywhere, and most of the time it no longer means “the end of everything,” but “the end of anything.” Living a life so saturated with apocalypses undoubtedly takes a toll, though it’s a subject we seldom talk about.

So let’s lift the lid off the A-word, take a peek inside, and examine how it affects our everyday lives. Since it’s not exactly a pretty sight, it’s easy enough to forget that the idea of the apocalypse has been a container for hope as well as fear. Maybe even now we’ll find some hope inside if we look hard enough.

A Brief History of Apocalypse

Apocalyptic stories have been around at least since biblical times, if not earlier. They show up in many religions, always with the same basic plot: the end is at hand; the cosmic struggle between good and evil (or God and the Devil, as the New Testament has it) is about to culminate in catastrophic chaos, mass extermination, and the end of the world as we know it.

That, however, is only Act I, wherein we wipe out the past and leave a blank cosmic slate in preparation for Act II: a new, infinitely better, perhaps even perfect world that will arise from the ashes of our present one. It’s often forgotten that religious apocalypses, for all their scenes of destruction, are ultimately stories of hope; and indeed, they have brought it to millions who had to believe in a better world a-comin’, because they could see nothing hopeful in this world of pain and sorrow.

That traditional religious kind of apocalypse has also been part and parcel of American political life since, in Common Sense, Tom Paine urged the colonies to revolt by promising, “We have it in our power to begin the world over again.”

When World War II (itself now sometimes called an apocalypse) ushered in the nuclear age, it brought a radical transformation to the idea. Just as novelist Kurt Vonnegut lamented that the threat of nuclear war had robbed us of “plain old death” (each of us dying individually, mourned by those who survived us), the theologically educated lamented the fate of religion’s plain old apocalypse.

After this country’s “victory weapon” obliterated two Japanese cities in August 1945, most Americans sighed with relief that World War II was finally over. Few, however, believed that a permanently better world would arise from the radioactive ashes of that war. In the 1950s, even as the good times rolled economically, America’s nuclear fear created something historically new and ominous — a thoroughly secular image of the apocalypse.  That’s the one you’ll get first if you type “define apocalypse” into Google’s search engine: “the complete final destruction of the world.” In other words, one big “whoosh” and then… nothing. Total annihilation. The End.

Apocalypse as utter extinction was a new idea. Surprisingly soon, though, most Americans were (to adapt the famous phrase of filmmaker Stanley Kubrick) learning how to stop worrying and get used to the threat of “the big whoosh.” With the end of the Cold War, concern over a world-ending global nuclear exchange essentially evaporated, even if the nuclear arsenals of that era were left ominously in place.

Meanwhile, another kind of apocalypse was gradually arising: environmental destruction so complete that it, too, would spell the end of all life.

This would prove to be brand new in a different way. It is, as Todd Gitlin has so aptly termed it, history’s first “slow-motion apocalypse.” Climate change, as it came to be called, had been creeping up on us “in fits and starts,” largely unnoticed, for two centuries. Since it was so different from what Gitlin calls a “suddenly surging Genesis-style flood” or the familiar “attack out of the blue,” it presented a baffling challenge. After all, the word apocalypse had been around for a couple of thousand years or more without ever being associated in any meaningful way with the word gradual.

The eminent historian of religions Mircea Eliade once speculated that people could grasp nuclear apocalypse because it resembled Act I in humanity’s huge stock of apocalypse myths, where the end comes in a blinding instant — even if Act II wasn’t going to follow. This mythic heritage, he suggested, remains lodged in everyone’s unconscious, and so feels familiar.

But in a half-century of studying the world’s myths, past and present, he had never found a single one that depicted the end of the world coming slowly. This means we have no unconscious imaginings to pair it with, nor any cultural tropes or traditions that would help us in our struggle to grasp it.

That makes it so much harder for most of us even to imagine an environmentally caused end to life. The very category of “apocalypse” doesn’t seem to apply. Without those apocalyptic images and fears to motivate us, a sense of the urgent action needed to avert such a slowly emerging global catastrophe lessens.

All of that (plus of course the power of the interests arrayed against regulating the fossil fuel industry) might be reason enough to explain the widespread passivity that puts the environmental peril so far down on the American political agenda. But as Dr. Seuss would have said, that is not all! Oh no, that is not all.

Apocalypses Everywhere

When you do that Google search on apocalypse, you’ll also get the most fashionable current meaning of the word: “Any event involving destruction on an awesome scale; [for example] ‘a stock market apocalypse.’” Welcome to the age of apocalypses everywhere.

With so many constantly crying apocalyptic wolf or selling apocalyptic thrills, it’s much harder now to distinguish between genuine threats of extinction and the cheap imitations. The urgency, indeed the very meaning, of apocalypse continues to be watered down in such a way that the word stands in danger of becoming virtually meaningless. As a result, we find ourselves living in an era that constantly reflects premonitions of doom, yet teaches us to look away from the genuine threats of world-ending catastrophe.

Oh, America still worries about the Bomb — but only when it’s in the hands of some “bad” nation. Once that meant Iraq (even if that country, under Saddam Hussein, never had a bomb and in 2003, when the Bush administration invaded, didn’t even have a bomb program). Now, it means Iran — another country without a bomb or any known plan to build one, but with the apocalyptic stare focused on it as if it already had an arsenal of such weapons — and North Korea.

These days, in fact, it’s easy enough to pin the label “apocalyptic peril” on just about any country one loathes, even while ignoring friends, allies, and oneself. We’re used to new apocalyptic threats emerging at a moment’s notice, with little (or no) scrutiny of whether the A-word really applies.

What’s more, the Cold War era fixed a simple equation in American public discourse: bad nation + nuclear weapon = our total destruction. So it’s easy to buy the platitude that Iran must never get a nuclear weapon or it’s curtains. That leaves little pressure on top policymakers and pundits to explain exactly how a few nuclear weapons held by Iran could actually harm Americans.

Meanwhile, there’s little attention paid to the world’s largest nuclear arsenal, right here in the U.S. Indeed, America’s nukes are quite literally impossible to see, hidden as they are underground, under the seas, and under the wraps of “top secret” restrictions. Who’s going to worry about what can’t be seen when so many dangers termed “apocalyptic” seem to be in plain sight?

Environmental perils are among them: melting glaciers and open-water Arctic seas, smog-blinded Chinese cities, increasingly powerful storms, and prolonged droughts. Yet most of the time such perils seem far away and like someone else’s troubles. Even when dangers in nature come close, they generally don’t fit the images in our apocalyptic imagination. Not surprisingly, then, voices proclaiming the inconvenient truth of a slowly emerging apocalypse get lost in the cacophony of apocalypses everywhere. Just one more set of boys crying wolf and so remarkably easy to deny or stir up doubt about.

Death in Life

Why does American culture use the A-word so promiscuously? Perhaps we’ve been living so long under a cloud of doom that every danger now readily takes on the same lethal hue.

Psychiatrist Robert Lifton predicted such a state years ago when he suggested that the nuclear age had put us all in the grips of what he called “psychic numbing” or “death in life.” We can no longer assume that we’ll die Vonnegut’s plain old death and be remembered as part of an endless chain of life. Lifton’s research showed that the link between death and life had become, as he put it, a “broken connection.”

As a result, he speculated, our minds stop trying to find the vitalizing images necessary for any healthy life. Every effort to form new mental images only conjures up more fear that the chain of life itself is coming to a dead end. Ultimately, we are left with nothing but “apathy, withdrawal, depression, despair.”

If that’s the deepest psychic lens through which we see the world, however unconsciously, it’s easy to understand why anything and everything can look like more evidence that The End is at hand. No wonder we have a generation of American youth and young adults who take a world filled with apocalyptic images for granted.

Think of it as, in some grim way, a testament to human resiliency. They are learning how to live with the only reality they’ve ever known (and with all the irony we’re capable of, others are learning how to sell them cultural products based on that reality). Naturally, they assume it’s the only reality possible. It’s no surprise that “The Walking Dead,” a zombie apocalypse series, is their favorite TV show, since it reveals (and revels in?) what one TV critic called the “secret life of the post-apocalyptic American teenager.”

Perhaps the only thing that should genuinely surprise us is how many of those young people still manage to break through psychic numbing in search of some way to make a difference in the world.

Yet even in the political process for change, apocalypses are everywhere. Regardless of the issue, the message is typically some version of “Stop this catastrophe now or we’re doomed!” (An example: Stop the Keystone XL pipeline or it’s “game over”!) A better future is often implied between the lines, but seldom gets much attention because it’s ever harder to imagine such a future, no less believe in it.

No matter how righteous the cause, however, such a single-minded focus on danger and doom subtly reinforces the message of our era of apocalypses everywhere: abandon all hope, ye who live here and now.

Read the entire article here.

Image: Armageddon movie poster. Courtesy of Touchstone Pictures.


The Joy of New Technology

prosthetic-hand

We are makers. We humans love to create and invent. Some of our inventions are hideous, laughable or just plain evil — Twinkies, collateralized debt obligations and subprime mortgages, Agent Orange, hair extensions, spray-on tans, cluster bombs, diet water.

However, for every misguided invention comes something truly great. This time, a prosthetic hand that provides a sense of real feeling, courtesy of the makers of the Veterans Affairs Medical Center in Cleveland, Ohio.

From Technology Review:

Igor Spetic’s hand was in a fist when it was severed by a forging hammer three years ago as he made an aluminum jet part at his job. For months afterward, he felt a phantom limb still clenched and throbbing with pain. “Some days it felt just like it did when it got injured,” he recalls.

He soon got a prosthesis. But for amputees like Spetic, these are more tools than limbs. Because the prosthetics can’t convey sensations, people wearing them can’t feel when they have dropped or crushed something.

Now Spetic, 48, is getting some of his sensation back through electrodes that have been wired to residual nerves in his arm. Spetic is one of two people in an early trial that takes him from his home in Madison, Ohio, to the Cleveland Veterans Affairs Medical Center. In a basement lab, his prosthetic hand is rigged with force sensors that are plugged into 20 wires protruding from his upper right arm. These lead to three surgically implanted interfaces, seven millimeters long, with as many as eight electrodes apiece encased in a polymer, that surround three major nerves in Spetic’s forearm.

On a table, a nondescript white box of custom electronics does a crucial job: translating information from the sensors on Spetic’s prosthesis into a series of electrical pulses that the interfaces can translate into sensations. This technology “is 20 years in the making,” says the trial’s leader, Dustin Tyler, a professor of biomedical engineering at Case Western Reserve University and an expert in neural interfaces.

As of February, the implants had been in place and performing well in tests for more than a year and a half. Tyler’s group, drawing on years of neuroscience research on the signaling mechanisms that underlie sensation, has developed a library of patterns of electrical pulses to send to the arm nerves, varied in strength and timing. Spetic says that these different stimulus patterns produce distinct and realistic feelings in 20 spots on his prosthetic hand and fingers. The sensations include pressing on a ball bearing, pressing on the tip of a pen, brushing against a cotton ball, and touching sandpaper, he says. A surprising side effect: on the first day of tests, Spetic says, his phantom fist felt open, and after several months the phantom pain was “95 percent gone.”

On this day, Spetic faces a simple challenge: seeing whether he can feel a foam block. He dons a blindfold and noise-canceling headphones (to make sure he’s relying only on his sense of touch), and then a postdoc holds the block inside his wide-open prosthetic hand and taps him on the shoulder. Spetic closes his prosthesis—a task made possible by existing commercial interfaces to residual arm muscles—and reports the moment he touches the block: success.

Read the entire article here.

Image: Prosthetic hand. Courtesy of MIT Technology Review / Veterans Affairs Medical Center.


Abraham Lincoln Was a Sham President

 

This is not the opinion of theDiagonal. Rather, it’s the view of the revisionist thinkers over at the so-called “News Leader”, Fox News. Purposefully I avoid commenting on news and political events, but once in a while a story is so jaw-droppingly incredible that your friendly editor cannot keep away from his keyboard. Which brings me to Fox News.

The latest diatribe from the 24/7 conservative think tank is that Lincoln actually caused the Civil War. According to Fox analyst Andrew Napolitano, the Civil War was an unnecessary folly, and could have been avoided by Lincoln had he chosen to pay off the South or let slavery come to a natural end.

This is yet another example of the mindless, ideological drivel dished out on a daily basis by Fox. Next are we likely to see Fox defend Hitler’s “cleansing” of Europe as fine economic policy that the Allies should have let run its course? Ugh! One has to suppose that the present day statistic of 30 million enslaved humans around the world is just as much a figment of the collective imaginarium that is Fox.

The one bright note to ponder about Fox and its finely-tuned propaganda machine comes from looking at its commercials. When the majority of its TV ads are for the over-60s — think Viagra, statins and catheters — you can sense that its aging demographic will soon sublimate to meet its alternate, heavenly reality.

From Salon:

“The Daily Show” had one of its best segments in a while on Monday night, ruthlessly and righteously taking Fox News legal analyst and libertarian Andrew Napolitano to task for using the airwaves to push his clueless and harmful revisionist understanding of the Civil War.

Jon Stewart and “senior black correspondent” Larry Wilmore criticized Napolitano for a Feb. 14 appearance on the Fox Business channel during which he called himself a “contrarian” when it comes to estimating former President Abraham Lincoln’s legacy and argued that the Civil War was unnecessary — and may not have even been about slavery, anyway!

“At the time that [Lincoln] was the president of the United States, slavery was dying a natural death all over the Western world,” Napolitano said. “Instead of allowing it to die, or helping it to die, or even purchasing the slaves and then freeing them — which would have cost a lot less money than the Civil War cost — Lincoln set about on the most murderous war in American history.”

Stewart quickly shredded this argument to pieces, noting that Lincoln spent much of 1862 trying (and failing) to convince border states to accept compensated emancipation, as well as the fact that the South’s relationship with chattel slavery was fundamentally not just an economic but also a social system, one that it would never willingly abandon.

Soon after, Stewart turned to Wilmore, who noted that the Confederacy was “so committed to slavery that Lincoln didn’t die of natural causes.” Wilmore next pointed out that people who “think Lincoln started the Civil War because the North was ready to kill to end slavery” are mistaken. “[T]he truth was,” Wilmore said, “the South was ready to die to keep slavery.”

Stewart and Wilmore next highlighted that Napolitano doesn’t hate all wars, and in fact has a history of praising the Revolutionary War as necessary and just. “So it was heroic to fight for the proposition that all men are created equal, but when there’s a war to enforce that proposition, that’s wack?” Wilmore asked. “You know, there’s something not right when you feel the only black thing worth fighting for is tea.”

As the final dagger, Stewart and Wilmore noted that Napolitano has ranted at length on Fox about how taxation is immoral and unjust, prompting Wilmore to elegantly outline the problems with Napolitano-style libertarianism in a single paragraph. Speaking to Napolitano, Wilmore said:

You think it’s immoral for the government to reach into your pocket, rip your money away from its warm home and claim it as its own property, money that used to enjoy unfettered freedom is now conscripted to do whatever its new owner tells it to. Now, I know this is going to be a leap, but you know that sadness and rage you feel about your money? Well, that’s the way some of us feel about people.

Read the entire story here.

Video courtesy of The Daily Show with Jon Stewart, Comedy Central.

 


FOMO Reshaping You and Your Network

Fear of missing out (FOMO) and other negative feelings are greatly disproportionate to good ones in online social networks. The phenomenon is widespread and well-documented. Compound this with the counterintuitive observation that your online friends will, on average, have more friends and be more successful than you, and you have a recipe for a growing, deep-seated inferiority complex. Add to this other behavioral characteristics that are peculiar to or exaggerated in online social networks and you have a more fundamental recipe — one that threatens the very fabric of the network itself. Just consider how online trolling, status lurking, persona-curation, passive monitoring, stalking and deferred (dis-)liking are re-fashioning our behaviors and the networks themselves.

From ars technica:

I found out my new college e-mail address in 2005 from a letter in the mail. Right after opening the envelope, I went straight to the computer. I was part of a LiveJournal group made of incoming students, and we had all been eagerly awaiting our college e-mail addresses, which had a use above and beyond corresponding with professors or student housing: back then, they were required tokens for entry to the fabled thefacebook.com.

That was nine years ago, and Facebook has now been in existence for 10. But even in those early days, Facebook’s cultural impact can’t be overstated. A search for “Facebook” on Google Scholar alone now produces 1.2 million results from 2006 on; “Physics” only returns 456,000.

But in terms of presence, Facebook is flopping around a bit now. The ever-important “teens” despise it, and it’s not the runaway success, happy addiction, or awe-inspiring source of information it once was. We’ve curated our identities so hard and had enough experiences with unforeseen online conflict that Facebook can now feel more isolating than absorbing. But what we are dissatisfied with is what Facebook has been, not what it is becoming.

Even if the grand sociological experiment that was Facebook is now running a little dry, the company knows this—which is why it’s transforming Facebook into a completely different entity. And the cause of all this built-up disarray that’s pushing change? It’s us. To prove it, let’s consider the social constructs and weirdnesses Facebook gave rise to, how they ultimately undermined the site, and how these ideas are shaping Facebook into the company it is now and will become.

Cue that Randy Newman song

Facebook arrived late to the concept of online friending, long after researchers started wondering about the structure of these social networks. What Facebook did for friending, especially reciprocal friending, was write it so large that it became a common concern. How many friends you had, who did and did not friend you back, and who should friend each other first all became things that normal people worried about.

Once Facebook opened beyond colleges, it became such a one-to-one representation of an actual social network that scientists started to study it. They applied social theories like those of weak ties or identity creation to see how they played out sans, or in supplement to, face-to-face interactions.

In a 2007 study, when Facebook was still largely campus-bound, a group of researchers said that Facebook “appears to play an important role in the process by which students form and maintain social capital.” They were using it to keep in touch with old friends and “to maintain or intensify relationships characterized by some form of offline connection.”

This sounds mundane now, since Facebook is so integrated into much of our lives. Seeing former roommates or childhood friends posting updates to Facebook feels as commonplace as literally seeing them nearly every day back when we were still roommates at 20 or friends at eight.

But the ability to keep tabs on someone without having to be proactive about it—no writing an e-mail, making a phone call, etc.—became the unique selling factor of Facebook. Per the 2007 study above, Facebook became a rich opportunity for “convert[ing] latent ties into weak ties,” connections that are valuable because they are with people who are sufficiently distant socially to bring in new information and opportunities.

Some romantic pixels have been spilled about the way no one is ever lost to anyone anymore; most people, including ex-lovers, estranged family members, or missed connections are only a Wi-Fi signal away.

“Modern technology has made our worlds smaller, but perhaps it also has diminished life’s mysteries, and with them, some sense of romance,” writes David Vecsey in The New York Times. Vecsey cites a time when he tracked down a former lover “across two countries and an ocean,” something he would not have done in the absence of passive social media monitoring. “It was only in her total absence, in a total vacuum away from her, that I was able to appreciate the depth of love I felt.”

The art of the Facebook-stalk

While plenty of studies have been conducted on the productive uses of Facebook—forming or maintaining weak ties, supplementing close relationships, or fostering new, casual ones—there are plenty that also touch on the site as a means for passive monitoring. Whether it was someone we’d never met, a new acquaintance, or an unrequited infatuation, Facebook eventually had enough breadth that you could call up virtually anyone’s profile, if only to see how fat they’ve gotten.

One study referred to this process as “social investigation.” We developed particular behaviors to avoid creating suspicion: do not “like” anything by the object of a stalking session, or if we do like it, don’t “like” too quickly; be careful not to type a name we want to search into the status field by accident; set an object of monitoring as a “close friend,” even if they aren’t, so their updates show up without fail; friend their friends; surreptitiously visit profile pages multiple times a day in case we missed anything.

This passive monitoring is one of the more utilitarian uses of Facebook. It’s also one of the most addictive. The (fictionalized) movie The Social Network closes with Facebook’s founder, Mark Zuckerberg, gazing at the Facebook profile of a high-school crush. Facebook did away with the necessity of keeping tabs on anyone. You simply had all of the tabs, all of the time, with the most recent information whenever you wanted to look at them.

The book Digital Discourse cites a classic example of the Facebook stalk in an IM conversation between two teenagers:

“I just saw what Tanya Eisner wrote on your Facebook wall. Go to her house,” one says.
“Woah, didn’t even see that til right now,” replies the other.
“Haha it looks like I stalk you… which I do,” says the first.
“I stalk u too its ok,” comforts the second.

But even innocent, casual information recon in the form of a Facebook stalk can rub us the wrong way. Any Facebook interaction that ends with an unexpected third party’s involvement can taint the rest of our Facebook behavior, making us feel watched.

Digital Discourse states that “when people feel themselves to be the objects of stalking, creeping, or lurking by third parties, they express annoyance or even moral outrage.” It cites an example of another teenager who gets a wall post from a person she barely knows, and it explains something she wrote about in a status update. “Don’t stalk my status,” she writes in mocking command to another friend, as if talking to the interloper.

You are who you choose to be

“The advent of the Internet has changed the traditional conditions of identity production,” reads a study from 2008 on how people presented themselves on Facebook. People had been curating their presences online for a long time before Facebook, but the fact that Facebook required real names and, for a long time after its inception, association with an educational institution made researchers wonder if it would make people hew a little closer to reality.

But beyond the bounds of being tied to a real name, users still projected an idealized self to others: a type of “possible self,” or many possible selves, depending on their sharing settings. Rather than try to describe themselves to others, users projected a sort of aspirational identity.

People were more likely to associate themselves with cultural touchstones, like movies, books, or music, than really identify themselves. You might not say you like rock music, but you might write Led Zeppelin as one of your favorite bands, and everyone else can infer your taste in music as well as general taste and coolness from there.

These identity proxies also became vectors for seeking approval. “The appeal is as much to the likeability of my crowd, the desirability of my boyfriend, or the magic of my music as it is to the personal qualities of the Facebook users themselves,” said the study. The authors also noted that, for instance, users tended to post photos of themselves mostly in groups in social situations. Even the profile photos, which would ostensibly have a single subject, were socially styled.

As the study concluded, “identity is not an individual characteristic; it is not an expression of something innate in a person, it is rather a social product, the outcome of a given social environment and hence performed differently in varying contexts.” Because Facebook was so susceptible to this “performance,” so easily controlled and curated, it quickly became less about real people and more about highlight reels.

We came to Facebook to see other real people, but everyone, even casual users, saw it could be gamed for personal benefit. Inflicting our groomed identities on each other soon became its own problem.

Fear of missing out

A long-standing problem of social networks is that the bad feelings they can generate are greatly out of proportion to the good ones.

In strict terms of self-motivation, posting something and getting a good reception feels good. But most Facebook use consists of watching other people post about their own accomplishments and good times. For a social network of 300 friends with an even distribution of auspicious life events, you are seeing 300 times as many good things happen to others as happen to you (of course, everyone has the same amount of good luck, but in bulk for the consumer, it doesn’t feel that way). If you were happy before looking at Facebook, or even after posting your own good news, you’re not now.

These feelings of inadequacy nonetheless drove people back to Facebook. Even during our own vacations, celebration dinners, or weddings, we might check Facebook to compare notes and see if we really had the best time possible.

That feeling became known as FOMO, “fear of missing out.” As Jenna Wortham wrote in The New York Times, “When we scroll through pictures and status updates, the worry that tugs at the corners of our minds is set off by the fear of regret… we become afraid that we’ve made the wrong decision about how to spend our time.”

Even if you had your own great stuff to tell Facebook about, someone out there is always doing better. And Facebook won’t let you forget. The brewing feeling of inferiority means users don’t post about stuff that might be too lame. They might start to self-censor, and then the bar for what is worth the “risk” of posting rises higher and higher. As people stop posting, there is less to see, less reason to come back and interact, like, or comment on other people’s material. Ultimately, people, in turn, have less reason to post.

Read the entire article here.


Gephyrophobes Not Welcome


A gephyrophobic person is said to have a fear of crossing bridges. So, we’d strongly recommend avoiding the structures on this list of some of the world’s scariest bridges. For those who suffer no anxiety from either bridges or heights, and who crave endless vistas both horizontally and vertically, this list is for you. Our favorite: the suspension bridge over the Royal Gorge in Colorado.

From the Guardian:

From rickety rope walkways to spectacular feats of engineering, we take a look at some of the world’s scariest bridges.

Until 2001, the Royal Gorge bridge in Colorado was the highest bridge in the world. Built in 1929, the 291m-high structure is now a popular tourist attraction, not least because of the fact that it is situated within a theme park.

Read the entire story and see more images here.

Image: Royal Gorge, Colorado. Courtesy of Wikipedia / Hustvedt.



Influencing and Bullying

We sway our co-workers. We coach teams. We cajole our spouses and we parent our kids. But what distinguishes this behavior from more overt and negative forms of influence, such as bullying? It’s a question well worth exploring, since we are all bullies at some point, far more often than we care to admit. And, not surprisingly, this goes hand-in-hand with deceit.

From the NYT:

WHAT is the chance that you could get someone to lie for you? What about vandalizing public property at your suggestion?

Most of us assume that others would go along with such schemes only if, on some level, they felt comfortable doing so. If not, they’d simply say “no,” right?

Yet research suggests that saying “no” can be more difficult than we believe — and that we have more power over others’ decisions than we think.

Social psychologists have spent decades demonstrating how difficult it can be to say “no” to other people’s propositions, even when they are morally questionable — consider Stanley Milgram’s infamous experiments, in which participants were persuaded to administer what they believed to be dangerous electric shocks to a fellow participant.

Countless studies have subsequently shown that we find it similarly difficult to resist social pressure from peers, friends and colleagues. Our decisions regarding everything from whether to turn the lights off when we leave a room to whether to call in sick to take a day off from work are affected by the actions and opinions of our neighbors and colleagues.

But what about those times when we are the ones trying to get someone to act unethically? Do we realize how much power we wield with a simple request, suggestion or dare? New research by my students and me suggests that we don’t.

We examined this question in a series of studies in which we had participants ask strangers to perform unethical acts. Before making their requests, participants predicted how many people they thought would comply. In one study, 25 college students asked 108 unfamiliar students to vandalize a library book. Targets who complied wrote the word “pickle” in pen on one of the pages.

As in the Milgram studies, many of the targets protested. They asked the instigators to take full responsibility for any repercussions. Yet, despite their hesitation, a large portion still complied.

Most important for our research question, more targets complied than participants had anticipated. Our participants predicted that an average of 28.5 percent would go along. In fact, fully half of those who were approached agreed. Moreover, 87 percent of participants underestimated the number they would be able to persuade to vandalize the book.

In another study, we asked 155 participants to think about a series of ethical dilemmas — for example, calling in sick to work to attend a baseball game. One group was told to think about these misdeeds from the perspective of a person deciding whether to commit them, and to imagine receiving advice from a colleague suggesting they do it or not. Another group took the opposite side, and thought about them from the perspective of someone advising another person about whether or not to do each deed.

Those in the first group were strongly influenced by the advice they received. When they were urged to engage in the misdeed, they said they would be more comfortable doing so than when they were advised not to. Their average reported comfort level fell around the midpoint of a 7-point scale after receiving unethical advice, but fell closer to the low end after receiving ethical advice.

However, participants in the “advisory” role thought that their opinions would hold little sway over the other person’s decision, assuming that participants in the first group would feel equally comfortable regardless of whether they had received unethical or ethical advice.

Taken together, our research, which was recently published in the journal Personality and Social Psychology Bulletin, suggests that we often fail to recognize the power of social pressure when we are the ones doing the pressuring.

Notably, this tendency may be especially pronounced in cultures like the United States’, where independence is so highly valued. American culture idolizes individuals who stand up to peer pressure. But that doesn’t mean that most do; in fact, such idolatry may hide, and thus facilitate, compliance under social pressure, especially when we are the ones putting on the pressure.

Consider the roles in the Milgram experiments: Most people have probably fantasized about being one of the subjects and standing up to the pressure. But in daily life, we play the role of the metaphorical experimenter in those studies as often as we play the participant. We bully. We pressure others to blow off work to come out for a drink or stiff a waitress who is having a bad night. These suggestions are not always wrong or unethical, but they may impact others’ behaviors more than we realize.

Read the entire story here.


Mars Emigres Beware

The planners behind the proposed private Mars One mission are still targeting 2024 for an initial settlement on the Red Planet. That’s now a mere 10 years away. As of this writing, the field of potential settlers has been whittled down to around 2,000 from an initial pool of about 250,000 would-be explorers. While the selection process and planning continues, other objects continue to target Mars as well. Large space rocks seem to be hitting the planet more frequently and more recently than was first thought. So, while such impacts are both beautiful and scientifically valuable, they may be rather unwelcome to the forthcoming human Martians.

From ars technica:

Yesterday [February 5, 2014], the team that runs the HiRISE camera on the Mars Reconnaissance Orbiter released the photo shown above. It’s a new impact crater on Mars, formed sometime early this decade. The crater at the center is about 30 meters in diameter, and the material ejected during its formation extends out as far as 15 kilometers.

The impact was originally spotted by the MRO’s Context Camera, a wide-field imaging system that (wait for it) provides the context—an image of the surrounding terrain—for the high-resolution images taken by HiRISE. The time window on the impact, between July 2010 and May 2012, simply represents the time between two different Context Camera photos of the same location. Once the crater was spotted, it took until November of 2013 for another pass of the region, at which point HiRISE was able to image it.

Read the entire article here.

Image: Impact crater from Mars Reconnaissance Orbiter. Courtesy of NASA / JPL.
