Is Walmart Wiccan? Is Best Buy Baptist? Is McDonald's Methodist?

So much for the Roberts Supreme Court. Conservatives would suggest that the court is intent on protecting the Constitution from assault by progressive liberals and upholding its libertarian conservatism. Yet protections of and for the individual seem to have taken a backseat to recent rulings that promote corporate power — a relatively new invention — none more so than the decisions that declared corporations to be “people”. And the court is not standing still: not content with animating a business with lifeblood, it is now likely to establish whether corporations have a religious spirit as well as individual sentience. It smacks of oxymoronic progressivism.

From the Washington Post:

If you thought this “corporations are people” business was getting out of hand, brace yourself. On Tuesday, the Supreme Court accepted two cases that will determine whether a corporation can deny contraceptive coverage to its female employees because of its religious beliefs.

The cases concern two of the most politically charged issues of recent years: who is exempted from the requirements of the Affordable Care Act, and whether application of the First Amendment’s free speech protections to corporations, established by the court’s 2010 decision in Citizens United, means that the First Amendment’s protections of religious beliefs must also be extended to corporations.

The Affordable Care Act requires employers to offer health insurance that covers contraception for their female employees. Churches and religious institutions are exempt from that mandate. But Hobby Lobby, a privately owned corporation that employs 13,000 people of all faiths — and, presumably, some of no faith — in its 500 craft stores says that requiring it to pay for contraception violates its religious beliefs — that is, the beliefs of its owners, the Green family.

In a brief submitted to a federal court, the Greens said that some forms of contraception — diaphragms, sponges, some versions of the pill — were fine by them, but others that prevented embryos from implanting in the womb were not. The U.S. Court of Appeals for the 10th Circuit upheld the Greens’ position in June in a decision explicitly based on “the First Amendment logic of Citizens United.” Judge Timothy Tymkovich wrote: “We see no reason the Supreme Court would recognize constitutional protection for a corporation’s political expression but not its religious expression.”

Tymkovich’s assessment of how the five right-wing justices on the Supreme Court may rule could prove correct — but what a mess such a ruling would create! For one thing, the Green family’s acceptance of some forms of contraception and rejection of others, while no doubt sincere, suggests that they, like many people of faith, adhere to a somewhat personalized religion. The line they draw is not, for instance, the same line that the Catholic Church draws.

Individual believers and non-believers draw their own lines on all kinds of moral issues every day. That’s human nature. They are free to say that their lines adhere to or are close to specific religious doctrines. But to extend the exemptions that churches receive to secular, for-profit corporations that claim to be following religious doctrine, but may in fact be nipping it here and tucking it there, would open the door to a range of idiosyncratic management practices inflicted on employees. For that matter, some religions have doctrines that, followed faithfully, could result in bizarre and discriminatory management practices.

The Supreme Court has not frequently ruled that religious belief creates an exemption from following the law. On the contrary, in a 1990 majority opinion, Justice Antonin Scalia wrote that Native Americans fired for smoking peyote as part of a religious ceremony had no right to reinstatement. It “would be courting anarchy,” Scalia wrote in Employment Division v. Smith, to allow them to violate the law just because they were “religious objectors” to it. “An individual’s religious beliefs,” he continued, cannot “excuse him from compliance with an otherwise valid law.”

It will be interesting to see whether Scalia still believes that now that he’s being confronted with a case where the religious beliefs in question may be closer to his own.

The other issue all this raises: Where does this corporations-are-people business start and stop? Under the law, corporations and humans have long had different standards of responsibility. If corporations are treated as people, so that they are free to spend money in election campaigns and to invoke their religious beliefs to deny a kind of health coverage to their workers, are they to be treated as people in other regards? Corporations are legal entities whose owners are not personally liable for the company’s debts, whereas actual people are liable for their own. Both people and corporations can discharge their debts through bankruptcy, but there are several kinds of bankruptcy, and the conditions placed on people are generally far more onerous than those placed on corporations. If corporations are people, why aren’t they subject to the same bankruptcy laws that people are? Why aren’t the owners liable for corporate debts as people are for their own?

Read the entire article here.


Bert and Ernie and Friends

The universe is a very strange place, stranger than Washington D.C., stranger than most reality TV shows.

And it keeps getting stranger as astronomers and cosmologists continue to make ever more head-scratching discoveries. The latest: a pair of super-high-energy neutrinos, followed by another 26. It seems that these tiny, almost massless particles are reaching Earth from an unknown source, or sources, of immense power outside of our own galaxy.

The neutrinos were spotted by the IceCube detector, which is buried beneath about a mile and a half of solid ice in an Antarctic glacier.

From io9:

By drilling a 1.5 mile hole deep into an Antarctic glacier, physicists working at the IceCube South Pole Observatory have captured 28 extraterrestrial neutrinos — those mysterious and extremely powerful subatomic particles that can pass straight through solid matter. Welcome to an entirely new age of astronomy.

Back in April of this year, the same team of physicists captured the highest energy neutrinos ever detected. Dubbed Bert and Ernie, the elusive subatomic particles likely originated from beyond our solar system, and possibly even our galaxy.

Neutrinos are extremely tiny and prolific subatomic particles that are born in nuclear reactions, including those that occur inside of stars. And because they’re practically massless (together they contain only a tiny fraction of the mass of a single electron), they can pass through normal matter, which is why they’re dubbed ‘ghost particles.’ Neutrinos are able to do this because they don’t carry an electric charge, so they’re immune to electromagnetic forces that influence charged particles like electrons and protons.

A Billion Times More Powerful

But not all neutrinos are the same. The ones discovered by the IceCube team are about a billion times more energetic than the ones coming out of our sun. A pair of them had energies above an entire petaelectron volt. That’s more than 1,000 times the energy produced by protons smashed at CERN’s Large Hadron Collider.
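That “billion times” figure is easy to sanity-check. Assuming a typical solar-neutrino energy of about 1 MeV (a ballpark figure, not taken from the article):

```python
# Energy scales in electronvolts (eV)
MeV = 1e6   # megaelectronvolt: typical solar-neutrino scale (assumed)
PeV = 1e15  # petaelectronvolt: the scale of Bert and Ernie

solar_neutrino = 1.0 * MeV
icecube_neutrino = 1.0 * PeV

ratio = icecube_neutrino / solar_neutrino
print(f"{ratio:.0e}")  # 1e+09 -- about a billion times more energetic
```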

So whatever created them must have been extremely powerful. Like, mindbogglingly powerful — probably the remnants of supernova explosions. Indeed, as a recent study has shown, these cosmic explosions are more powerful than we could have ever imagined — to the point where they’re defying known physics.

Other candidates for neutrino production include black holes, pulsars, galactic nuclei — or even the cataclysmic merger of two black holes.

That’s why the discovery of these 28 new neutrinos, and the construction of the IceCube facility, is so important. It’s still a mystery, but these new findings, and the new detection technique, will help.

Back in April, the IceCube project looked for neutrinos above one petaelectronvolt, which is how Bert and Ernie were detected. But the team went back and searched through their data and found 26 neutrinos with slightly lower energies, though still above 30 teraelectronvolts, that were detected between May 2010 and May 2012. While it’s possible that some of these less high-energy neutrinos could have been produced by cosmic rays in the Earth’s atmosphere, the researchers say that most of them likely came from space. And in fact, the data was analyzed in such a way as to exclude neutrinos that didn’t come from space and other types of particles that may have tripped off the detector.

The Dawn of a New Field

“This is a landmark discovery — possibly a Nobel Prize in the making,” said Alexander Kusenko, a UCLA astroparticle physicist who was not involved in the IceCube collaboration. Thanks to the remarkable IceCube facility, where neutrinos are captured in holes drilled 1.5 miles down into the Antarctic glacier, astronomers have a completely new way to scope out the cosmos. It’s both literally and figuratively changing the way we see the universe.

“It really is the dawn of a new field,” said Darren Grant, a University of Alberta physicist, and a member of the IceCube team.

Read the entire article here.


What’s Up With Bitcoin?

The digital internet currency Bitcoin seems to be garnering much attention recently from some surprising corners, well beyond the usual speculators and computer geeks. Why?

From the Guardian:

The past weeks have seen a surprising meeting of minds between chairman of the US Federal Reserve Ben Bernanke, the Bank of England, the Olympic-rowing and Zuckerberg-bothering Winklevoss twins, and the US Department of Homeland Security. The connection? All have decided it’s time to take Bitcoin seriously.

Until now, what pundits called in a rolling-eye fashion “the new peer-to-peer cryptocurrency” had been seen just as a digital form of gold, with all the associated speculation, stake-claiming and even “mining”; perfect for the digital wild west of the internet, but no use for real transactions.

Bitcoins are mined by computers solving fiendishly hard mathematical problems. The “coin” doesn’t exist physically: it is a virtual currency that exists only as a computer file. No one computer controls the currency. A network keeps track of all transactions made using Bitcoins but it doesn’t know what they were used for – just the ID of the computer “wallet” they move from and to.
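Those “fiendishly hard mathematical problems” are, at heart, a brute-force search: find a number (a “nonce”) that, hashed together with the block’s data, produces a hash with enough leading zeros. A toy sketch of the proof-of-work idea in Python (real Bitcoin mining uses double SHA-256 over a structured block header, at vastly higher difficulty; the block text here is invented):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so the hash starts with `difficulty` zero hex digits."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("toy block: Alice pays Bob 1 BTC", difficulty=4)
print(nonce)  # finding the nonce takes many tries; checking it is instant
```

The asymmetry is the whole point: the network can verify a miner’s answer with a single hash, but producing it takes enormous work.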

Right now the currency is tricky to use, both in terms of the technological nous required to actually acquire Bitcoins, and finding somewhere to spend them. To get them, you have to first set up a wallet, probably online at a site such as Blockchain.info, and then pay someone hard currency to get them to transfer the coins into that wallet.

A Bitcoin payment address is a short string of random characters, and if used carefully, it’s possible to make transactions anonymously. That’s what made it the currency of choice for sites such as the Silk Road and Black Market Reloaded, which let users buy drugs anonymously over the internet. It also makes it very hard to tax transactions, despite the best efforts of countries such as Germany, which in August declared that Bitcoin was “private money” in which transactions should be taxed as normal.

It doesn’t have all the advantages of cash, though the fact you can’t forge it is a definite plus: Bitcoin is “peer-to-peer” and every coin “spent” is authenticated with the network. Thus you can’t spend the same coin in two different places. (But nor can you spend it without an internet connection.) You don’t have to spend whole Bitcoins: each one can be split into 100m pieces (each known as a satoshi), and spent separately.
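That divisibility is plain integer arithmetic, and it is also how careful software handles Bitcoin amounts: as whole satoshis rather than fractional coins, to sidestep floating-point rounding. A minimal sketch (the amounts are invented):

```python
SATOSHIS_PER_BTC = 100_000_000  # each Bitcoin splits into 100m satoshis

# Keep amounts as integer satoshis; convert to BTC only for display
price = 25_000_000  # a quarter of a Bitcoin
fee = 10_000        # a small miner's fee

total = price + fee
print(total)                      # 25010000 satoshis
print(total / SATOSHIS_PER_BTC)  # 0.2501 BTC
```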

Although most people have now vaguely heard of Bitcoin, you’re unlikely to find someone outside the tech community who really understands it in detail, let alone accepts it as payment. Nobody knows who invented it; its pseudonymous creator, Satoshi Nakamoto, hasn’t come forward. He or she may not even be Japanese but certainly knows a lot about cryptography, economics and computing.

It was first presented in November 2008 in an academic paper shared with a cryptography mailing list. It caught the attention of that community but took years to take off as a niche transaction tool. The first Bitcoin boom and bust came in 2011, and signalled that it had caught the attention of enough people for real money to get involved – but also posed the question of whether it could ever be more than a novelty.

The algorithm for mining Bitcoins means the number in circulation will never exceed 21m and this limit will be reached in around 2140. Already 57% of all Bitcoins have been created; by 2017, 75% will have been. If you tried to create a Bitcoin in 2141, every other computer on the network would reject it as fake because it would not have been made according to the rules of currency.
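The 21m ceiling is not decreed as a single number; it falls out of the mining schedule. The block reward started at 50 BTC and halves every 210,000 blocks (roughly every four years), a geometric series that sums to just under 21m. A sketch of the arithmetic, counted in integer satoshis as the protocol does:

```python
SATOSHIS_PER_BTC = 100_000_000
INITIAL_SUBSIDY = 50 * SATOSHIS_PER_BTC  # 50 BTC per block at launch
BLOCKS_PER_HALVING = 210_000             # roughly every four years

total = 0
subsidy = INITIAL_SUBSIDY
while subsidy > 0:
    total += BLOCKS_PER_HALVING * subsidy
    subsidy //= 2  # the reward halves each era; integer division, as in the protocol

print(total)  # 2099999997690000 satoshis: just shy of 21 million BTC
```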

The number of companies taking Bitcoin payments is increasing from a small base, and a few payment processors such as Atlanta-based Bitpay are making real money from the currency. But it’s difficult to get accurate numbers on conventional transactions, and it still seems that the most popular uses of Bitcoins are buying drugs in the shadier parts of the internet, as people did on the Silk Road website, and buying the currency in the hope that in a few weeks’ time you will be able to sell it at a profit.

This is remarkable because there’s no fundamental reason why Bitcoin should have any value at all. The only reason people are willing to pay money for the currency is because other people are willing to as well. (Try not to think about it too hard.) Now, though, sensible economists are saying that Bitcoin might become part of our future economy. That’s quite a shift from October last year, when the European Central Bank said that Bitcoin was “characteristic of a Ponzi [pyramid] scheme”. This month, the Chicago Federal Reserve commented that the currency was “a remarkable conceptual and technical achievement, which may well be used by existing financial institutions (which could issue their own bitcoins) or even by governments themselves”.

It might not sound thrilling. But for a central banker, that’s like yelling “BITCOIIINNNN!” from the rooftops. And Bernanke, in a carefully dull letter to the US Senate committee on Homeland Security, said that when it came to virtual currencies (read: Bitcoin), the US Federal Reserve had “ongoing initiatives” to “identify additional areas of … concern that require heightened attention by the banking organisations we supervise”.

In other words, Bernanke is ready to make Bitcoin part of US currency regulation – the key step towards legitimacy.

Most reporting about Bitcoin until now has been of its extraordinary price ramp – from a low of $1 in 2011 to more than $900 earlier this month. That massive increase has sparked a classic speculative rush, with more and more people hoping to get a piece of the pie by buying and then selling Bitcoins. Others are investing thousands of pounds in custom “mining rigs”, computers specially built to solve the mathematical problems necessary to confirm a Bitcoin transaction.

But bubbles can burst: in 2011 it went from $33 to $1. The day after hitting that $900 high, Bitcoin’s value halved on MtGox, the biggest exchange. Then it rose again.

Speculative bubbles happen everywhere, though, from stock markets to Beanie Babies. All that’s needed is enough people who think that they are the smart money, and that everyone else is sufficiently stupid to buy from them. But the Bitcoin bubbles tell us as much about the usefulness of the currency itself as the tulip mania of 17th century Holland did about flower-arranging.

History does provide some lessons. While the Dutch were selling single tulip bulbs for 10 times a craftsman’s annual income, the British were panicking about their own economic crisis. The silver coinage that had been the basis of the national economy for centuries was rapidly becoming unfit for purpose: it was constrained in supply and too easy to forge. The economy was taking on the features of a modern capitalist state, and the currency simply couldn’t catch up.

Describing the problem Britain faced then, David Birch, a consultant specialising in electronic transactions, says: “We had a problem in matching the nature of the economy to the nature of the money we used.” Birch has been talking about electronic money for over two decades and is convinced that we find ourselves on the edge of the same shift that occurred 400 years ago.

The cause of that shift is the internet, because even though you might want to, you can’t use cash – untraceable, no-fee-charged cash – online. Existing payment systems such as PayPal and credit cards demand a cut. So for individuals looking for a digital equivalent of cash – no middleman, quick, easy – Bitcoin looks pretty good.

In 1613, as people looked for a replacement for silver, Birch says, “we might have been saying ‘the idea of tulip bulbs as an asset class looks pretty good, but this central bank nonsense will never catch on.’ We knew we needed a change, but we couldn’t tell which made sense.” Back then, the currency crisis was solved with the introduction first of Isaac Newton’s Royal Mint (“official” silver and gold) and later with the creation of the Bank of England (“official” paper money that could in theory be swapped for official silver or gold).

And now? Bitcoin offers unprecedented flexibility compared with what has gone before. “Some people in the mid-90s asked: ‘Why do we need the web when we have AOL and CompuServe?'” says Mike Hearn, who works on the programs that underpin Bitcoin. “And so now people ask the same of Bitcoin. The web came to dominate because it was flexible and open, so anyone could take part, innovate and build interesting applications like YouTube, Facebook or Wikipedia, none of which would have ever happened on the AOL platform. I think the same will be true of Bitcoin.”

For a small (but vocal) group in the US, Bitcoin represents the next best alternative to the gold standard, the 19th-century conception that money ought to be backed by precious metals rather than government printing presses and promises. This love of “hard money” is baked into Bitcoin itself, and is the reason why the owners who set computers to do the maths required to make the currency work are known as “miners”, and is why the total supply of Bitcoin is capped.

And for Tyler and Cameron Winklevoss, the twins who sued Mark Zuckerberg (claiming he stole their idea for Facebook; the case was settled out of court), it’s a handy vehicle for speculation. The two of them are setting up the “Winklevoss Bitcoin Trust”, letting conventional investors gamble on the price of the currency.

Some of the hurdles left between Bitcoin and widespread adoption can be fixed. But until and unless Bitcoin develops a fully fledged banking system, some things that we take for granted with conventional money won’t work.

Others are intrinsic to the currency. At some point in the early 22nd century, the last Bitcoin will be generated. Long before that, the creation of new coins will have dropped to near-zero. And through the next 100 or so years, it will follow an economic path laid out by “Nakamoto” in 2009 – a path that rejects the consensus view of modern economics that management by a central bank is beneficial. For some, that means Bitcoin can never achieve ubiquity. “Economies perform better when they have managed monetary policies,” the Bank of England’s chief cashier, Chris Salmon, said at an event to discuss Bitcoin last week. “As a result, it will never be more than an alternative [to state-backed money].” To macroeconomists, Bitcoin isn’t scary because it enables crime, or eases tax dodging. It’s scary because a world where it’s used for all transactions is one where the ability of a central bank to guide the economy is destroyed, by design.

Read the entire article here.

Image courtesy of Google Search.


Good, Old-Fashioned Spying

The spied-upon — and that’s most of us — must wonder how the spymasters of the NSA eavesdrop on our electronic communications. After all, we are led to believe that the agency, with its voracious appetite for our personal data — phone records, financial transactions, travel reservations, texts and email conversations — gathered it all without permission. And apparently companies such as Google, Yahoo and AT&T, with their vast data centers and sprawling interconnections, did not collude with the government.

So, there is growing speculation that the agency tapped into the physical cables that make up the very backbone of the Internet. It brings a whole new meaning to the phrase World Wide Web.

From the NYT:

The recent revelation that the National Security Agency was able to eavesdrop on the communications of Google and Yahoo users without breaking into either company’s data centers sounded like something pulled from a Robert Ludlum spy thriller.

How on earth, the companies asked, did the N.S.A. get their data without them knowing about it?

The most likely answer is a modern spin on a century-old eavesdropping tradition.

People knowledgeable about Google and Yahoo’s infrastructure say they believe that government spies bypassed the big Internet companies and hit them at a weak spot — the fiber-optic cables that connect data centers around the world that are owned by companies like Verizon Communications, the BT Group, the Vodafone Group and Level 3 Communications. In particular, fingers have been pointed at Level 3, the world’s largest so-called Internet backbone provider, whose cables are used by Google and Yahoo.

The Internet companies’ data centers are locked down with full-time security and state-of-the-art surveillance, including heat sensors and iris scanners. But between the data centers — on Level 3’s fiber-optic cables that connected those massive computer farms — information was unencrypted and an easier target for government intercept efforts, according to three people with knowledge of Google’s and Yahoo’s systems who spoke on the condition of anonymity.

It is impossible to say for certain how the N.S.A. managed to get Google and Yahoo’s data without the companies’ knowledge. But both companies, in response to concerns over those vulnerabilities, recently said they were now encrypting data that runs on the cables between their data centers. Microsoft is considering a similar move.

“Everyone was so focused on the N.S.A. secretly getting access to the front door that there was an assumption they weren’t going behind the companies’ backs and tapping data through the back door, too,” said Kevin Werbach, an associate professor at the Wharton School.

Data transmission lines have a long history of being tapped.

As far back as the days of the telegraph, spy agencies have located their operations in proximity to communications companies. Indeed, before the advent of the Internet, the N.S.A. and its predecessors for decades operated listening posts next to the long-distance lines of phone companies to monitor all international voice traffic.

Beginning in the 1960s, a spy operation code-named Echelon targeted the Soviet Union and its allies’ voice, fax and data traffic via satellite, microwave and fiber-optic cables.

In the 1990s, the emergence of the Internet both complicated the task of the intelligence agencies and presented powerful new spying opportunities based on the ability to process vast amounts of computer data.

In 2002, John M. Poindexter, former national security adviser under President Ronald Reagan, proposed the Total Information Awareness plan, an effort to scan the world’s electronic information — including phone calls, emails and financial and travel records. That effort was scrapped in 2003 after a public outcry over potential privacy violations.

The technologies Mr. Poindexter proposed are similar to what became reality years later in N.S.A. surveillance programs like Prism and Bullrun.

The Internet effectively mingled domestic and international communications, erasing the bright line that had been erected to protect against domestic surveillance. Although the Internet is designed to be a highly decentralized system, in practice a small group of backbone providers carry almost all of the network’s data.

The consequences of the centralization and its value for surveillance were revealed in 2006 by Mark Klein, an AT&T technician who described an N.S.A. listening post inside a room at an AT&T switching facility.

The agency was capturing a copy of all the data passing over the telecommunications links and then filtering it in AT&T facilities that housed systems that were able to filter data packets at high speed.

Documents taken by Edward J. Snowden and reported by The Washington Post indicate that, seven years after Mr. Klein first described the N.S.A.’s surveillance technologies, they have been refined and modernized.

Read the entire article here.

Image: fiber-optic cables. Courtesy of Daily Mail.


Nooooooooooooooooooo!

The Federal Aviation Administration (FAA) recently relaxed rules governing the use of electronics onboard aircraft. We can now use our growing collection of electronic gizmos during take-off and landing, not just during the cruise portion of the flight. But during flight said gizmos still need to be set to “airplane mode”, which shuts off a device’s wireless transceiver.

However, the FCC is considering going a step further, allowing cell phone use during flight. Thus, many flyers will soon have yet another reason to hate airlines and hate flying. We’ll be able to add loud cell phone conversations to the lengthy list of aviation pain inducers: cramped seating, fidgety kids, screaming babies, business bores, snorers, Microsoft PowerPoint, body odor, non-existent or bad food, and, worst of all, travelers who still can’t figure out how to buckle the seat belt.

FCC, please don’t do it!

From WSJ:

If cellphone calling comes to airplanes, it is likely to be the last call for manners.

The prospect is still down the road a bit, and a good percentage of the population can be counted on to be polite. But etiquette experts who already are fuming over the proliferation of digital rudeness aren’t optimistic.

Jodi R.R. Smith, owner of Mannersmith Etiquette Consulting in Massachusetts, says the biggest problem is forced proximity. It is hard to be discreet when just inches separate passengers. And it isn’t possible to escape.

“If I’m on an airplane, and my seatmate starts making a phone call, there’s not a lot of places I can go,” she says.

Should the Federal Communications Commission allow cellphone calls on airplanes above 10,000 feet, and if the airlines get on board, one solution would be to create yakking and non-yakking sections of aircraft, or designate flights for either the chatty or the taciturn, as airlines used to do for smoking.

Barring such plans, there are four things you should consider before placing a phone call on an airplane, Ms. Smith says:

• Will you disturb those around you?

• Will you be ignoring companions you should be paying attention to?

• Will you be discussing confidential topics?

• Is it an emergency?
The answer to the last question needs to be “Yes,” she says, and even then, make the call brief.

“I find that the vast majority of people will get it,” she says. “It’s just the few that don’t who will make life uncomfortable for the rest of us.”

FCC Chairman Tom Wheeler said last week that there is no technical reason to maintain what has been a long-standing ban.

Airlines are approaching the issue cautiously because many customers have expressed strong feelings against cellphone use.

“I believe fistfights at 39,000 feet would become commonplace,” says Alan Smith, a frequent flier from El Dorado Hills, Calif. “I would be terrified that some very large fellow, after a few drinks, would beat up a passenger annoying him by using the phone.”

Minneapolis etiquette consultant Gretchen Ditto says cellphone use likely will become commonplace on planes since our expectations have changed about when people should be reachable.

Passengers will feel obliged to answer calls, she says. “It’s going to become more prevalent for returning phone calls, and it’s going to be more annoying to everybody.”

Electronic devices are taking over our lives, says Arden Clise, an etiquette expert in Seattle. We text during romantic dinners, answer email during meetings and shop online during Thanksgiving. Making a call on a plane is only marginally more rude.

“Are we saying that our tools are more important than the people in front of us?” she asks. Even if you don’t know your in-flight neighbor, ask yourself, “Do I want to be that annoying person,” Ms. Clise says.

If airlines decide to allow calls, punching someone’s lights out clearly wouldn’t be the best way to get some peace, says New Jersey etiquette consultant Mary Harris. But tensions often run high during flights, and fights could happen.

If someone is bothering you with a phone call, Ms. Harris advises asking politely for the person to end the conversation.

If that doesn’t work, you’re stuck.

In-flight cellphone calls have been possible in Europe for several years. But U.K. etiquette expert William Hanson says they haven’t caught on.

If you need to make a call, he advises leaving your seat for the area near the lavatory or door. If it is night and the lights are dimmed, “you should not make a call at your seat,” he says.

Calls used to be possible on U.S. flights using Airfone units installed on the planes, but the technology never became popular. When people made calls, they were usually brief, in part because they cost $2 a minute, says Tony Lent, a telecommunication consultant in Detroit who worked on Airfone products in the 1980s.

The situation might be different today. “People were much more prudent about using their mobile phones,” Mr. Lent says. “Nowadays, those social mores are gone.”

Several years ago, when the government considered lifting its cellphone ban, U.S. Rep. Tom Petri co-sponsored the Halting Airplane Noise to Give Us Peace Act of 2008. The bill would have allowed texting and other data applications but banned voice calls. He was motivated by “a sense of courtesy,” he says. The bill was never brought to a vote.

Mr. Petri says he will try again if the FCC allows calls this time around. What if his bill doesn’t pass? “I suppose you can get earplugs,” he says.

Read the entire article here.

Image: Smartphone user. Courtesy of CNN / Money.


Of Monsters And the Man

Neill Gorton must have one of the best jobs in the world. For nearly a decade he has brought to life monsters and alien beings for the TV series Doctor Who. The iconic British sci-fi show, on air since 1963, is an established part of British popular culture, having influenced — and sometimes haunted the nightmares of — generations of audiences and TV professionals. [Our favorites here at theDiagonal are the perennially clunky but evil Daleks].

From Wired:

The Time Lord, also known as “The Doctor,” has run into a lot of different aliens, monsters and miscellaneous beasties during his five-decade run on the BBC’s Doctor Who. With the show’s 50th anniversary upon us this weekend, WIRED talked to Neill Gorton — director of Millennium FX, which has created prosthetics and makeup for Doctor Who for the last nine years — about what it’s like to make the show’s most memorable monsters (above) appear on-screen.

Although Gorton works with other television series, movies and live events, he said Doctor Who in particular is more than just another job. “There’s no other project we’ve had such a close association with for so long,” he told WIRED. “It can’t help but become part of your life.”

It helps, too, that Gorton was a Who fan long before he started working on the show. “I grew up in Liverpool in the ’70s so I was a long way away from the London-centric film and TV world,” he recalled. “Nearby Blackpool, the Las Vegas of the North, had a permanent Doctor Who exhibition, and on our yearly family day trips to Blackpool I would insist on visiting. I think this was the first time I really started to understand that these things, these creatures and robots and monsters, had to be made by someone. On TV it was magical and far away but here I could see the joins and the seams and paint flaking off. Seeing that they were tangible made them something in my grasp.”

That early love for the show paid off when one of his childhood favorite characters reappeared on the series. “Davros [the cyborg creator of the show’s signature monsters, the Daleks] haunted me as a child,” Gorton said. “I remember seeing him on TV and thinking, ‘Where did they find that creepy old man?’ For years, I thought they found a bald old bloke and painted him brown. I pestered Russell T. [Davies, former Doctor Who showrunner] constantly about when I would get to do Davros.”

When the character did reappear in 2008’s “The Stolen Earth,” Gorton said that his work with actor Julian Bleach was “really personal to me… I sculpted [the prosthetics], molded it, painted and applied the makeup on the shoot every day. It’s the only revival of a classic Doctor Who monster that I’ve not heard a single fan moan about. Everyone just loved it.”

After nine years of working on the show, Gorton said that his team and the show’s producers have “a pretty good understanding” of how to deal with the prosthetic effect demands for the show. “It’s like that scene in Apollo 13 when they dump a box of bits on the table and the NASA guys have to figure out how to make a CO2 scrubber out of odd objects and trash that happens to be aboard,” he joked. “The team is so clever at getting the maximum effect out of the minimum resources, we’d be able to rustle up an engine modification that’d get us a round trip to Mars on top of fixing up that life support… The reality is the scripted vision always outstrips the budget by a huge margin.”

Although the showrunner usually plots out the season’s stories before Gorton’s team becomes involved — meaning there’s little chance to impact storyline decisions — that’s not always the case. “Last [season], I mentioned to producer Marcus Wilson that I had a couple of cool nine-foot robot suits that could add value to an episode. And several months later Chris Chibnall delivers ‘Dinosaurs on a Spaceship’ with two nine-foot robots taking featured roles!” he said. “Since then I’ve been turfing all kinds of oddities out of my store rooms and excitedly saying ‘How about this?’”

Read the entire article and see more Doctor Who monsters here.

Image: Daleks. Courtesy of Wired / BBC.


Two-Thirds From a Mere Ninety

Two-thirds is the proportion of man-made carbon emissions released into the atmosphere since the dawn of the industrial age. Ninety is the number of companies responsible for that two-thirds.

The leader in global fossil fuel emissions is Chevron Texaco, which accounts for a staggering 3.5 percent (since 1750). Other leading emitters include Exxon Mobil, BP, Royal Dutch Shell, Saudi Aramco, and Gazprom. See an interactive graphic of the top polluters — companies and nations — here.

From the Guardian:

The climate crisis of the 21st century has been caused largely by just 90 companies, which between them produced nearly two-thirds of the greenhouse gas emissions generated since the dawning of the industrial age, new research suggests.

The companies range from investor-owned firms – household names such as Chevron, Exxon and BP – to state-owned and government-run firms.

The analysis, which was welcomed by the former vice-president Al Gore as a “crucial step forward”, found that the vast majority of the firms were in the business of producing oil, gas or coal. It has been published in the journal Climatic Change.

“There are thousands of oil, gas and coal producers in the world,” climate researcher and author Richard Heede at the Climate Accountability Institute in Colorado said. “But the decision makers, the CEOs, or the ministers of coal and oil if you narrow it down to just one person, they could all fit on a Greyhound bus or two.”

Half of the estimated emissions were produced just in the past 25 years – well past the date when governments and corporations became aware that rising greenhouse gas emissions from the burning of coal and oil were causing dangerous climate change.

Many of the same companies are also sitting on substantial reserves of fossil fuel which – if they are burned – puts the world at even greater risk of dangerous climate change.

Climate change experts said the data set was the most ambitious effort so far to hold individual carbon producers, rather than governments, to account.

The United Nations climate change panel, the IPCC, warned in September that at current rates the world stood within 30 years of exhausting its “carbon budget” – the amount of carbon dioxide it could emit without going into the danger zone above 2C warming. The former US vice-president and environmental champion, Al Gore, said the new carbon accounting could re-set the debate about allocating blame for the climate crisis.

Leaders meeting in Warsaw for the UN climate talks this week clashed repeatedly over which countries bore the burden for solving the climate crisis – historic emitters such as America or Europe or the rising economies of India and China.

Gore in his comments said the analysis underlined that it should not fall to governments alone to act on climate change.

“This study is a crucial step forward in our understanding of the evolution of the climate crisis. The public and private sectors alike must do what is necessary to stop global warming,” Gore told the Guardian. “Those who are historically responsible for polluting our atmosphere have a clear obligation to be part of the solution.”

Between them, the 90 companies on the list of top emitters produced 63% of the cumulative global emissions of industrial carbon dioxide and methane between 1751 and 2010, amounting to about 914 gigatonnes of carbon dioxide equivalent, according to the research. All but seven of the 90 were energy companies producing oil, gas and coal. The remaining seven were cement manufacturers.
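The quoted figures can be cross-checked with a bit of arithmetic. A minimal sketch (our own illustration; the two inputs come from the article, the derived numbers are simple division):

```python
# Back-of-the-envelope check on the quoted figures. The two constants
# below come from the article; everything derived is simple arithmetic.
TOP90_GT = 914.0     # gigatonnes CO2e attributed to the 90 entities, 1751-2010
TOP90_SHARE = 0.63   # their share of cumulative industrial emissions

implied_global_total = TOP90_GT / TOP90_SHARE  # total since 1751, ~1451 Gt
avg_per_entity = TOP90_GT / 90                 # mean share, ~10.2 Gt each

print(f"implied global total: {implied_global_total:.0f} Gt")
print(f"average per entity:  {avg_per_entity:.1f} Gt")
```

The implied global total of roughly 1,450 gigatonnes also puts the top twenty firms' 30% share (quoted later in the article) at over 400 gigatonnes between them.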

The list of 90 companies included 50 investor-owned firms – mainly oil companies with widely recognised names such as Chevron, Exxon, BP and Royal Dutch Shell, and coal producers such as British Coal Corp, Peabody Energy and BHP Billiton.

Some 31 of the companies that made the list were state-owned companies such as Saudi Arabia’s Saudi Aramco, Russia’s Gazprom and Norway’s Statoil.

Nine were government run industries, producing mainly coal in countries such as China, the former Soviet Union, North Korea and Poland, the host of this week’s talks.

Experts familiar with Heede’s research and the politics of climate change said they hoped the analysis could help break the deadlock in international climate talks.

“It seemed like maybe this could break the logjam,” said Naomi Oreskes, professor of the history of science at Harvard. “There are all kinds of countries that have produced a tremendous amount of historical emissions that we do not normally talk about. We do not normally talk about Mexico or Poland or Venezuela. So then it’s not just rich v poor, it is also producers v consumers, and resource rich v resource poor.”

Michael Mann, the climate scientist, said he hoped the list would bring greater scrutiny to oil and coal companies’ deployment of their remaining reserves. “What I think could be a game changer here is the potential for clearly fingerprinting the sources of those future emissions,” he said. “It increases the accountability for fossil fuel burning. You can’t burn fossil fuels without the rest of the world knowing about it.”

Others were less optimistic that a more comprehensive accounting of the sources of greenhouse gas emissions would make it easier to achieve the emissions reductions needed to avoid catastrophic climate change.

John Ashton, who served as UK’s chief climate change negotiator for six years, suggested that the findings reaffirmed the central role of fossil fuel producing entities in the economy.

“The challenge we face is to move in the space of not much more than a generation from a carbon-intensive energy system to a carbon-neutral energy system. If we don’t do that we stand no chance of keeping climate change within the 2C threshold,” Ashton said.

“By highlighting the way in which a relatively small number of large companies are at the heart of the current carbon-intensive growth model, this report highlights that fundamental challenge.”

Meanwhile, Oreskes, who has written extensively about corporate-funded climate denial, noted that several of the top companies on the list had funded the climate denial movement.

“For me one of the most interesting things to think about was the overlap of large scale producers and the funding of disinformation campaigns, and how that has delayed action,” she said.

The data represents eight years of exhaustive research into carbon emissions over time, as well as the ownership history of the major emitters.

The companies’ operations spanned the globe, with company headquarters in 43 different countries. “These entities extract resources from every oil, natural gas and coal province in the world, and process the fuels into marketable products that are sold to consumers in every nation on Earth,” Heede writes in the paper.

The largest of the investor-owned companies were responsible for an outsized share of emissions. Nearly 30% of emissions were produced just by the top 20 companies, the research found.

Read the entire article here.

Image: Strip coal mine. Courtesy of Wikipedia.


You May Be Just a Line of Code

Some very logical and rational people — scientists and philosophers — argue that we are no more than artificial constructs. They suggest that it is more likely that we are fleeting constructions in a simulated universe rather than organic beings in a real cosmos; that we are, in essence, like the oblivious Neo in the classic sci-fi movie The Matrix. One supposes that the minds proposing this notion are themselves simulations…

From Discovery:

In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.

The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.

But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?

Worlds in Our Grasp

The first serious attempt to find the truth about our universe came in 2001, when an effort to calculate the resources needed for a universe-size simulation made the prospect seem impossible.

Seth Lloyd, a quantum-mechanical engineer at MIT, estimated the number of “computer operations” our universe has performed since the Big Bang — basically, every event that has ever happened. To repeat them, and generate a perfect facsimile of reality down to the last atom, would take more energy than the universe has.

“The computer would have to be bigger than the universe, and time would tick more slowly in the program than in reality,” says Lloyd. “So why even bother building it?”
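Lloyd's estimate can be reproduced to order of magnitude with a quick sketch. This is our own illustration, not Lloyd's actual derivation; it uses the Margolus-Levitin theorem (a system with energy E can perform at most 2E/(πħ) elementary operations per second) together with rough standard values for the mass and age of the observable universe:

```python
import math

HBAR = 1.0546e-34        # reduced Planck constant, J*s
C = 2.998e8              # speed of light, m/s
MASS_UNIVERSE = 1e53     # rough mass of the observable universe, kg
AGE_UNIVERSE = 4.35e17   # ~13.8 billion years, in seconds

# Margolus-Levitin bound: at most 2E/(pi*hbar) operations per second
# for a system whose total energy is E.
energy = MASS_UNIVERSE * C**2
ops_per_second = 2 * energy / (math.pi * HBAR)
ops_total = ops_per_second * AGE_UNIVERSE

print(f"~10^{math.log10(ops_total):.0f} operations since the Big Bang")
```

This lands within an order of magnitude of the roughly 10^120 operations Lloyd published, which is about as close as an estimate built from such round inputs can get.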

But others soon realized that making an imperfect copy of the universe that’s just good enough to fool its inhabitants would take far less computational power. In such a makeshift cosmos, the fine details of the microscopic world and the farthest stars might only be filled in by the programmers on the rare occasions that people study them with scientific equipment. As soon as no one was looking, they’d simply vanish.

In theory, we’d never detect these disappearing features, however, because each time the simulators noticed we were observing them again, they’d sketch them back in.

That realization makes creating virtual universes eerily possible, even for us. Today’s supercomputers already crudely model the early universe, simulating how infant galaxies grew and changed. Given the rapid technological advances we’ve witnessed over past decades — your cell phone has more processing power than NASA’s computers had during the moon landings — it’s not a huge leap to imagine that such simulations will eventually encompass intelligent life.

“We may be able to fit humans into our simulation boxes within a century,” says Silas Beane, a nuclear physicist at the University of Washington in Seattle. Beane develops simulations that re-create how elementary protons and neutrons joined together to form ever larger atoms in our young universe.

Legislation and social mores could soon be all that keeps us from creating a universe of artificial, but still feeling, humans — but our tech-savvy descendants may find the power to play God too tempting to resist.

They could create a plethora of pet universes, vastly outnumbering the real cosmos. This thought led philosopher Nick Bostrom at the University of Oxford to conclude in 2003 that it makes more sense to bet that we’re delusional silicon-based artificial intelligences in one of these many forgeries, rather than carbon-based organisms in the genuine universe. Since there seemed no way to tell the difference between the two possibilities, however, bookmakers did not have to lose sleep working out the precise odds.

Learning the Truth

That changed in 2007 when John D. Barrow, professor of mathematical sciences at Cambridge University, suggested that an imperfect simulation of reality would contain detectable glitches. Just like your computer, the universe’s operating system would need updates to keep working.

As the simulation degrades, Barrow suggested, we might see aspects of nature that are supposed to be static — such as the speed of light or the fine-structure constant that describes the strength of the electromagnetic force — inexplicably drift from their “constant” values.

Last year, Beane and colleagues suggested a more concrete test of the simulation hypothesis. Most physicists assume that space is smooth and extends out infinitely. But physicists modeling the early universe cannot easily re-create a perfectly smooth background to house their atoms, stars and galaxies. Instead, they build up their simulated space from a lattice, or grid, just as television images are made up from multiple pixels.

The team calculated that the motion of particles within their simulation, and thus their energy, is related to the distance between the points of the lattice: the smaller the grid size, the higher the energy particles can have. That means that if our universe is a simulation, we’ll observe a maximum energy amount for the fastest particles. And as it happens, astronomers have noticed that cosmic rays, high-speed particles that originate in far-flung galaxies, always arrive at Earth with a specific maximum energy of about 10^20 electron volts.
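The scaling the team describes can be illustrated with a dimensional sketch (our own back-of-the-envelope calculation, not the lattice computation from Beane's paper): on a grid of spacing a, the largest representable momentum sits at the edge of the Brillouin zone, giving E_max ≈ πħc/a, which can be inverted to ask what spacing a 10^20 eV cutoff would imply:

```python
import math

HBAR_C = 1.97327e-7  # hbar*c in eV*m (i.e., 197.327 MeV*fm)

def lattice_spacing(e_max_ev: float) -> float:
    """Grid spacing a implied by a maximum particle energy,
    via the edge-of-Brillouin-zone relation E_max = pi*hbar*c/a."""
    return math.pi * HBAR_C / e_max_ev

a = lattice_spacing(1e20)  # cosmic-ray energy cutoff of ~1e20 eV
print(f"implied lattice spacing: {a:.1e} m")  # about 6e-27 m
```

A spacing of roughly 10^-27 m is some eleven orders of magnitude smaller than a proton, which is why any such graininess could only show up in the most energetic particles we can observe.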

The simulation’s lattice has another observable effect that astronomers could pick up. If space is continuous, then there is no underlying grid that guides the direction of cosmic rays — they should come in from every direction equally. If we live in a simulation based on a lattice, however, the team has calculated that we wouldn’t see this even distribution. If physicists do see an uneven distribution, it would be a tough result to explain if the cosmos were real.

Astronomers need much more cosmic ray data to answer this one way or another. For Beane, either outcome would be fine. “Learning we live in a simulation would make no more difference to my life than believing that the universe was seeded at the Big Bang,” he says. But that’s because Beane imagines the simulators as driven purely to understand the cosmos, with no desire to interfere with their simulations.

Unfortunately, our almighty simulators may instead have programmed us into a universe-size reality show — and are capable of manipulating the rules of the game, purely for their entertainment. In that case, maybe our best strategy is to lead lives that amuse our audience, in the hope that our simulator-gods will resurrect us in the afterlife of next-generation simulations.

The weird consequences would not end there. Our simulators may be simulations themselves — just one rabbit hole within a linked series, each with different fundamental physical laws. “If we’re indeed a simulation, then that would be a logical possibility, that what we’re measuring aren’t really the laws of nature, they’re some sort of attempt at some sort of artificial law that the simulators have come up with. That’s a depressing thought!” says Beane.

This cosmic ray test may help reveal whether we are just lines of code in an artificial Matrix, where the established rules of physics may be bent, or even broken. But if learning that truth means accepting that you may never know for sure what’s real — including yourself — would you want to know?

There is no turning back, Neo: Do you take the blue pill, or the red pill?

Read the entire article here.

Image: The Matrix, promotional poster for the movie. Courtesy of Silver Pictures / Warner Bros. Entertainment Inc.


Colorizing History

Historical events happened in full color. Yet many of the photographs that captured our most important collective cultural moments were, and remain, black and white. So, is it right to have them colorized? An iconic image of a mushroom cloud over Bikini Atoll from 1946 shows the effect of colorization.

We would argue that while colorization adds a degree of realism and fidelity to an image (after all, the scene did not exist in black and white in nature), it is no more true than the original photograph itself. A color version is merely another rendition of a scene through the subjective eyes of a colorist, however skilled. A black and white image is perhaps truer to its historical period in the sense that it was captured and rendered by the medium of expression of the time. The act of recording an event, including how it is done, cannot be divorced from the event itself.

Original: A nuclear weapon test by the United States military at Bikini Atoll, Marshall Islands, on 25 July 1946. Photograph: Library Of Congress.

Colorized: Colorization of the Bikini Atoll nuclear explosion by Sanna Dullaway.

From the Guardian:

Do historic photographs look better in colour? The colorizers think so. Skilled digital artists such as Sanna Dullaway and Jordan J Lloyd are keen to remind us that the past was as colourful as the present – and their message is spreading though Reddit and Facebook.

See more images and read the entire article here.

Images courtesy of the Library of Congress and respective copyright holders.


Predicting the Future is Highly Overrated

Contrary to what political pundits, stock market talking heads and your local strip mall psychic would have you believe, no one can yet predict the future. Nor is it any more possible for the current generation of tech wunderkinds, Silicon Valley venture fund investors, or their armies of analysts.

From WSJ:

I believe the children aren’t our future. Teach them well, but when it comes to determining the next big thing in tech, let’s not fall victim to the ridiculous idea that they lead the way.

Yes, I’m talking about Snapchat.

Last week my colleagues reported that Facebook recently offered $3 billion to acquire the company behind the hyper-popular messaging app. Stunningly, Evan Spiegel, Snapchat’s 23-year-old co-founder and CEO, rebuffed the offer.

If you’ve never used Snapchat—and I implore you to try it, because Snapchat can be pretty fun if you’re into that sort of thing, which I’m not, because I’m grumpy and old and I have two small kids and no time for fun, which I think will be evident from the rest of this column, and also would you please get off my lawn?—there are a few things you should know about the app.

First, Snapchat’s main selling point is ephemerality. When I send you a photo and caption using the app, I can select how long I want you to be able to view the picture. After you look at it for the specified time—1 to 10 seconds—the photo and all trace of our having chatted disappear from your phone. (Or, at least, they are supposed to. Snapchat’s security measures have frequently been defeated.)

Second, and relatedly, Snapchat is used primarily by teens and people in college. This explains much of Silicon Valley’s obsession with the company.

The app doesn’t make any money—its executives have barely even mentioned any desire to make money—but in the ad-supported tech industry, youth is the next best thing to revenue. For tech execs, youngsters are the canaries in the gold mine.

That logic follows a widely shared cultural belief: We all tend to assume that young people are on the technological vanguard, that they somehow have got an inside scoop on what’s next. If today’s kids are Snapchatting instead of Facebooking, the thinking goes, tomorrow we’ll all be Snapchatting, too, because tech habits, like hairstyles, flow only one way: young to old.

There is only one problem with elevating young people’s tastes this way: Kids are often wrong. There is little evidence to support the idea that the youth have any closer insight on the future than the rest of us do. Sometimes they are first to flock to technologies that turn out to be huge; other times, the young pick products and services that go nowhere. They can even be late adopters, embracing innovations that older people understood first. To butcher another song: The kids could be all wrong.

Here’s a thought exercise. How many of the products and services that you use every day were created or first used primarily by people under 25?

A few will spring to mind, Facebook the biggest of all. Yet the vast majority of your most-used things weren’t initially popular among teens. The iPhone, the iPad, the iPod, the Google search engine, YouTube, Twitter, Gmail, Google Maps, Pinterest, LinkedIn, the Kindle, blogs, the personal computer: none of these were initially targeted to, or primarily used by, high-school or college-age kids. Indeed, many of the most popular tech products and services were burdened by factors that were actively off-putting to kids, such as high prices, an emphasis on productivity and a distinct lack of fun. Yet they succeeded anyway.

Even the exceptions suggest we should be wary of catering to youth. It is true that in 2004, Mark Zuckerberg designed Facebook for his Harvard classmates, and the social network was first made available only to college students. At the time, though, Facebook looked vastly more “grown up” than its competitors. The site prevented you from uglifying your page with your own design elements, something you could do with Myspace, which, incidentally, was the reigning social network among the pubescent set.

Mr. Zuckerberg deliberately avoided catering to this group. He often told his co-founders that he wanted Facebook to be useful, not cool. That is what makes the persistent worry about Facebook’s supposedly declining cachet among teens so bizarre; Facebook has never really been cool, but neither are a lot of other billion-dollar companies. Just ask Myspace how far being cool can get you.

Incidentally, though 20-something tech founders like Mr. Zuckerberg, Steve Jobs and Bill Gates get a lot of ink, they are unusual. A recent study by the VC firm Cowboy Ventures found that among tech startups that have earned a valuation of at least $1 billion since 2003, the average founder’s age was 34. “The twentysomething inexperienced founder is an outlier, not the norm,” wrote Cowboy’s founder Aileen Lee.

If you think about it for a second, the fact that young people aren’t especially reliable predictors of tech trends shouldn’t come as a surprise. Sure, youth is associated with cultural flexibility, a willingness to try new things that isn’t necessarily present in older folk. But there are other, less salutary hallmarks of youth, including capriciousness, immaturity, and a deference to peer pressure even at the cost of common sense. This is why high school is such fertile ground for fads. And it’s why, in other cultural areas, we don’t put much stock in teens’ choices. No one who’s older than 18, for instance, believes One Direction is the future of music.

That brings us back to Snapchat. Is the app just a youthful fad, just another boy band, or is it something more permanent; is it the Beatles?

To figure this out, we would need to know why kids are using it. Are they reaching for Snapchat for reasons that would resonate with older people—because, like the rest of us, they’ve grown wary of the public-sharing culture promoted by Facebook and Twitter? Or are they using it for less universal reasons, because they want to evade parental snooping, send risqué photos, or avoid feeling left out of a fad everyone else has adopted?

Read the entire article here.

Image: Snapchat logo. Courtesy of Snapchat / Wikipedia.


The Anglosphere

Good or bad, the modern world owes much of its current shape and form to two anglophone nations: Britain and the United States. How and why this came to be is the subject of the new book Inventing Freedom: How the English-Speaking Peoples Made the Modern World by Daniel Hannan. His case is summarized in the essay excerpted below.

From the WSJ:

Asked, early in his presidency, whether he believed in American exceptionalism, Barack Obama gave a telling reply. “I believe in American exceptionalism, just as I suspect the Brits believe in British exceptionalism and the Greeks believe in Greek exceptionalism.”

The first part of that answer is fascinating (we’ll come back to the Greeks in a bit). Most Brits do indeed believe in British exceptionalism. But here’s the thing: They define it in almost exactly the same way that Americans do. British exceptionalism, like its American cousin, has traditionally been held to reside in a series of values and institutions: personal liberty, free contract, jury trials, uncensored newspapers, regular elections, habeas corpus, open competition, secure property, religious pluralism.

The conceit of our era is to assume that these ideals are somehow the natural condition of an advanced society—that all nations will get around to them once they become rich enough and educated enough. In fact, these ideals were developed overwhelmingly in the language in which you are reading these words. You don’t have to go back very far to find a time when freedom under the law was more or less confined to the Anglosphere: the community of English-speaking democracies.

In August 1941, when Franklin Delano Roosevelt and Winston Churchill met on the deck of HMS Prince of Wales off Newfoundland, no one believed that there was anything inevitable about the triumph of what the Nazis and Communists both called “decadent Anglo-Saxon capitalism.” They called it “decadent” for a reason. Across the Eurasian landmass, freedom and democracy had retreated before authoritarianism, then thought to be the coming force. Though a small number of European countries had had their parliamentary systems overthrown by invaders, many more had turned to autocracy on their own, without needing to be occupied: Austria, Bulgaria, Estonia, Germany, Greece, Hungary, Italy, Latvia, Lithuania, Poland, Portugal, Romania, Spain.

Churchill, of all people, knew that the affinity between the United States and the rest of the English-speaking world rested on more than a congruence of parliamentary systems, and he was determined to display that cultural affinity to maximum advantage when he met FDR.

It was a Sunday morning, and the British and American crewmen were paraded jointly on the decks of HMS Prince of Wales for a religious service. The prime minister was determined that “every detail be perfect,” and the readings and hymns were meticulously chosen. The sailors listened as a chaplain read from Joshua 1 in the language of the King James Bible, revered in both nations: “As I was with Moses, so I will be with thee: I will not fail thee, nor forsake thee. Be strong and of a good courage.”

The prime minister was delighted. “The same language, the same hymns and, more or less, the same ideals,” he enthused. The same ideals: That was no platitude. The world was in the middle of the second of the three great global confrontations of the 20th century, in which countries that elevated the individual over the state contended for mastery against countries that did the opposite. The list of nations that were on the right side in all three of those conflicts is a short one, but it includes the Anglophone democracies.

We often use the word “Western” as a shorthand for liberal-democratic values, but we’re really being polite. What we mean is countries that have adopted the Anglo-American system of government. The spread of “Western” values was, in truth, a series of military victories by the Anglosphere.

I realize that all this might seem strange to American readers. Am I not diluting the uniqueness of the U.S., the world’s only propositional state, by lumping it in with the rest of the Anglosphere? Wasn’t the republic founded in a violent rejection of the British Empire? Didn’t Paul Revere rouse a nation with his cry of “the British are coming”?

Actually, no. That would have been a remarkably odd thing to yell at a Massachusetts population that had never considered itself anything other than British (what the plucky Boston silversmith actually shouted was “The regulars are coming out!”). The American Founders were arguing not for the rejection but for the assertion of what they took to be their birthright as Englishmen. They were revolutionaries in the 18th-century sense of the word, whereby a revolution was understood to be a complete turn of the wheel: a setting upright of that which had been placed on its head.

Alexis de Tocqueville is widely quoted these days as a witness to American exceptionalism. Quoted, but evidently not so widely read, since at the very beginning of “Democracy in America,” he flags up what is to be his main argument, namely, that the New World allowed the national characteristics of Europe’s nations the freest possible expression. Just as French America exaggerated the autocracy and seigneurialism of Louis XIV’s France, and Spanish America the ramshackle obscurantism of Philip IV’s Spain, so English America (as he called it) exaggerated the localism, the libertarianism and the mercantilism of the mother country: “The American is the Englishman left to himself.”

What made the Anglosphere different? Foreign visitors through the centuries remarked on a number of peculiar characteristics: the profusion of nonstate organizations, clubs, charities and foundations; the cheerful materialism of the population; the strong county institutions, including locally chosen law officers and judges; the easy coexistence of different denominations (religious toleration wasn’t unique to the Anglosphere, but religious equality—that is, freedom for every sect to proselytize—was almost unknown in the rest of the world). They were struck by the weakness, in both law and custom, of the extended family, and by the converse emphasis on individualism. They wondered at the stubborn elevation of private property over raison d’état, of personal freedom over collective need.

Many of them, including Tocqueville and Montesquieu, connected the liberty that English-speakers took for granted to geography. Outside North America, most of the Anglosphere is an extended archipelago: Great Britain, Ireland, Australia, New Zealand, Hong Kong, Singapore, the more democratic Caribbean states. North America, although not literally isolated, was geopolitically more remote than any of them, “kindly separated by nature and a wide ocean,” as Jefferson put it in his 1801 inaugural address, “from the exterminating havoc [of Europe].”

Isolation meant that there was no need for a standing army in peacetime, which in turn meant that the government had no mechanism for internal repression. When rulers wanted something, usually revenue, they had to ask nicely, by summoning people’s representatives in an assembly. It is no coincidence that the world’s oldest parliaments—England, Iceland, the Faroes, the Isle of Man—are on islands.

Above all, liberty was tied up with something that foreign observers could only marvel at: the miracle of the common law. Laws weren’t written down in the abstract and then applied to particular disputes; they built up, like a coral reef, case by case. They came not from the state but from the people. The common law wasn’t a tool of government but an ally of liberty: It placed itself across the path of the Stuarts and George III; it ruled that the bonds of slavery disappeared the moment a man set foot on English soil.

There was a fashion for florid prose in the 18th century, but the second American president, John Adams, wasn’t exaggerating when he identified the Anglosphere’s beautiful, anomalous legal system—which today covers most English-speaking countries plus Israel, almost an honorary member of the club, alongside the Netherlands and the Nordic countries—as the ultimate guarantor of freedom: “The liberty, the unalienable, indefeasible rights of men, the honor and dignity of human nature… and the universal happiness of individuals, were never so skillfully and successfully consulted as in that most excellent monument of human art, the common law of England.”

Freedom under the law is a portable commodity, passed on through intellectual exchange rather than gene flow. Anyone can benefit from constitutional liberty simply by adopting the right institutions and the cultural assumptions that go with them. The Anglosphere is why Bermuda is not Haiti, why Singapore is not Indonesia, why Hong Kong is not China—and, for that matter, not Macau. As the distinguished Indian writer Madhav Das Nalapat, holder of the Unesco Peace Chair, puts it, the Anglosphere is defined not by racial affinity but “by the blood of the mind.”

At a time when most countries defined citizenship by ancestry, Britain was unusual in developing a civil rather than an ethnic nationality. The U.S., as so often, distilled and intensified a tendency that had been present in Great Britain, explicitly defining itself as a creedal polity: Anyone can become American simply by signing up to the values inherent in the Constitution.

There is, of course, a flip-side. If the U.S. abandons its political structures, it will lose its identity more thoroughly than states that define nationality by blood or territory. Power is shifting from the 50 states to Washington, D.C., from elected representatives to federal bureaucrats, from citizens to the government. As the U.S. moves toward European-style health care, day care, college education, carbon taxes, foreign policy and spending levels, so it becomes less prosperous, less confident and less free.

We sometimes talk of the English-speaking nations as having a culture of independence. But culture does not exist, numinously, alongside institutions; it is a product of institutions. People respond to incentives. Make enough people dependent on the state, and it won’t be long before Americans start behaving and voting like…well, like Greeks.

Which brings us back to Mr. Obama’s curiously qualified defense of American exceptionalism. Outside the Anglosphere, people have traditionally expected—indeed, demanded—far more state intervention. They look to the government to solve their problems, and when the government fails, they become petulant.

That is the point that much of Europe has reached now. Greeks, like many Europeans, spent decades increasing their consumption without increasing their production. They voted for politicians who promised to keep the good times going and rejected those who argued for fiscal restraint. Even now, as the calamity overwhelms them, they refuse to take responsibility for their own affairs by leaving the euro and running their own economy. It’s what happens when an electorate is systematically infantilized.

Read the entire article here.

Image: Keep Calm and Carry On. Courtesy of Wikipedia.

Hmm. An Atheist Mega-Church?

A movement begun by two British comedians — Pippa Evans and Sanderson Jones — to assemble like-minded atheists seems to have grown legs. But doesn’t a church for the faithless somehow contravene the principles of atheism? Unperturbed by this obvious contradiction, the two are venturing on a lengthy tour of god-fearing America to raise funds and consciousness. One wonders if they are stopping in the Bible Belt. And, more importantly, will they eventually resort to teleatheism? [ed: your friends at theDiagonal coined this first]

From the Guardian:

It’s not easy being an atheist. In a world that for centuries has been dominated (and divided) by religious affiliations, it’s sort of inevitable that the minority group who can’t get down with the God thing or who don’t subscribe to any particular belief system would find themselves marginalized. As children of no God, it seems that atheists are somehow seen as lesser – less charitable, that is, and more selfish, nihilistic, closed-minded, negative and just generally unworthy. Now, however, a group of atheists are fighting back.

Determined to show that those who believe in nothing are just as good as those who believe in something, the faithless are establishing a church of their own, and a mega-church at that. On the surface it seems like a rather brilliant idea. What’s not to like about beating the faithful at their own game? Apart from the one small caveat that establishing a place of worship for the faithless, even a godless one, rather negates what atheism is supposed to be all about.

The godless church concept is the brainchild of Pippa Evans and Sanderson Jones, two British comedians, who identified a gap in the faith market that so far non-believers are flocking to fill. The first Sunday Assembly (as the gatherings are being called) took place in a dilapidated church in London on a cold morning this past January. It went down a treat, apparently, and the movement has gained enough momentum in Britain that the comic duo have since embarked on a “40 dates, 40 nights” tour of the United States raising money to build US congregations so godless Americans can become churchgoers too.

This past Sunday, the group’s inaugural assembly in Los Angeles attracted some 400 people. Similar gatherings across the states have also drawn big crowds, bursting to do all the good stuff religious people do, just without the God stuff. As one of those non-believing types – the kind who’d be inclined to tick off the “spiritual but not religious” checkbox on a dating profile – I should fall right into the Sunday Assembly movement’s target demographic. If only the central idea of dragging atheists into a church so they can prove they are just as worthy as traditional churchgoers didn’t strike me as a bit of a joke.

I’m sure Evans and Jones mean well. Although they might want to tone down the “shiny happy people” routine they have going on in their promotional video. It’s a little too reminiscent of the bearded, guitar playing priest that used to pay regular visits to the convent school I attended as a child in Ireland, who tried a little too hard to convince us skeptical kids that Catholicism is cool. I don’t mean to downplay the human need to find like-minded communities either or to explore the deeper purpose of our existence. I just can’t quite embrace the notion that atheists should be under any obligation to prove their worthiness to religious types, or that to do so they should mimic the long established religious practices that non-believers have typically eschewed.

I would have thought the message of atheism (if there needs to be one) is that churches and ritualized worship (whatever the focus of that worship might be) are best left to the people who feel the need to have a God figure in their lives. I say this as someone who has done plenty of Elizabeth Gilbert (“Eat, Pray, Love”) style dabbling in various philosophies to find life’s bigger meaning, albeit on a lower budget and so far with less satisfying results – no mega movie deals or hot Brazilian husbands have materialized to date, but the journey continues.

Like a lot of people who don’t subscribe to any particular faith or belief system, I’m all for exploring the many spiritual adventures that are out there, and there are already plenty of inspirational (and godless) paths to choose from. The thing is, rewarding as these ventures into the spiritual realm often are, be they Buddhist retreats, Hindu meditation sessions or just a good old-fashioned yoga class with some “Om” chanting built in, I know that my true self is an atheist one. No philosophy, full on religion or Sunday Assembly – no matter how enticing, inviting or full of wisdom it may be – is going to win me over in the long term. I’m just not in the market for any man-made belief system – and they are all man-made – because I already have the one I am comfortable with: atheism.

Read the entire article here.

Image courtesy of Google Search.

Perovskites

No, these are not a new form of luxury cut glass from Europe, but something much more significant. First discovered in the mid-1800s in Russia’s Ural mountains, perovskite materials could lay the foundation for a significant improvement in the efficiency of solar power systems.

From Technology Review:

A new solar cell material has properties that might lead to solar cells more than twice as efficient as the best on the market today. An article this week in the journal Nature describes the materials—a modified form of a class of compounds called perovskites, which have a particular crystalline structure.

The researchers haven’t yet demonstrated a high efficiency solar cell with the material. But their work adds to a growing body of evidence suggesting perovskite materials could change the face of solar power. Researchers are making new perovskites using combinations of elements and molecules not seen in nature; many researchers see the materials as the next great hope for making solar power cheap enough to compete with fossil fuels.

Perovskite-based solar cells have been improving at a remarkable pace. It took a decade or more for the major solar cell materials used today—silicon and cadmium telluride—to reach efficiency levels that have been demonstrated with perovskites in just four years. The rapid success of the material has impressed even veteran solar researchers who have learned to be cautious about new materials after seeing many promising ones come to nothing (see “A Material that Could Make Solar Power ‘Dirt Cheap’”).

The perovskite material described in Nature has properties that could lead to solar cells that can convert over half of the energy in sunlight directly into electricity, says Andrew Rappe, co-director of Pennergy, a center for energy innovation at the University of Pennsylvania, and one of the new report’s authors. That’s more than twice as efficient as conventional solar cells. Such high efficiency would cut in half the number of solar cells needed to produce a given amount of power. Besides reducing the cost of solar panels, this would greatly reduce installation costs, which now account for most of the cost of a new solar system.
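As an aside, the installation arithmetic here is straightforward: for a fixed power target, panel count scales inversely with cell efficiency. A minimal sketch in Python, with made-up panel and system numbers (none of them from the article):

```python
import math

# Illustrative numbers only (panel area, irradiance, and system size are
# invented): panel count for a fixed power target scales as 1/efficiency.
def panels_needed(target_kw, panel_area_m2=1.6, irradiance_kw_m2=1.0,
                  efficiency=0.20):
    per_panel_kw = panel_area_m2 * irradiance_kw_m2 * efficiency
    return math.ceil(target_kw / per_panel_kw)

conventional = panels_needed(8.0, efficiency=0.20)  # ~20%-efficient cells
hypothetical = panels_needed(8.0, efficiency=0.50)  # >50%, as speculated above
print(conventional, hypothetical)
```

Doubling the efficiency roughly halves the panel count, and with it the racking, wiring and labor that dominate installation costs.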

Unlike conventional solar cell materials, the new material doesn’t require an electric field to produce an electrical current. This reduces the amount of material needed and produces higher voltages, which can help increase power output, Rappe says. While other materials have been shown to produce current without the aid of an electric field, the new material is the first to also respond well to visible light, making it relevant for solar cells, he says.

The researchers also showed that it is relatively easy to modify the material so that it efficiently converts different wavelengths of light into electricity. It could be possible to form a solar cell with different layers, each designed for a specific part of the solar spectrum, something that could greatly improve efficiency compared to conventional solar cells (see “Ultra-Efficient Solar Power” and “Manipulating Light to Double Solar Power Output”).

Other solar cell experts note that while these properties are interesting, Rappe and his colleagues have a long way to go before they can produce viable solar cells. For one thing, the electrical current it produces so far is very low. Ramamoorthy Ramesh, a professor of materials science and engineering at Berkeley, says, “This is nice work, but really early stage. To make a solar cell, a lot of other things are needed.”

Perovskites remain a promising solar material. Michael McGehee, a materials science and engineering professor at Stanford University, recently wrote, “The fact that multiple teams are making such rapid progress suggests that the perovskites have extraordinary potential, and might elevate the solar cell industry to new heights.”

Read the entire article here.

Image: Perovskite mined in Magnet Cove, Arkansas. Courtesy of Wikimedia.

The Global Detective Story of Little Red Riding Hood

Intrepid literary detective work spanning Europe, China, Japan and Africa uncovers the roots of a famous children’s tale.

From the Independent:

Little Red Riding Hood’s closest relative may have been her ill-fated grandmother, but academics have discovered she has long-lost cousins as far away as China and Japan.

Employing scientific analysis commonly used by biologists, anthropologist Jamshid Tehrani has mapped the emergence of the story to an earlier tale from the first century AD – and found it has numerous links to similar stories across the globe.

The Durham University academic traced the roots of Little Red Riding Hood to a folk tale called The Wolf and the Kids, which subsequently “evolved twice”, he claims in his paper, published this week in scientific journal Plos One.

Dr Tehrani, who has previously studied cultural change over generations in areas such as textiles, debunked theories that the tale emerged in China, arriving via the Silk Route. Instead, he traced the origins to European oral traditions, which then spread east.

“The Chinese version is derived from European oral traditions and not vice versa,” he said.

The Chinese took Little Red Riding Hood and The Wolf and the Kids and blended it with local tales, he argued. Often the wolf is replaced with an ogre or a tiger.

The research analysed 58 variants of the tales and looked at 72 plot variables.

The scientific process used was called phylogenetic analysis, used by biologists to group closely-related organisms to map out branches of evolution. Dr Tehrani used maths to model the similarities of the plots and score them on the probability that they have the same origin.

Little Red Riding Hood and The Wolf and the Kids, which concerns a wolf impersonating a goat to trick her kids and eat them, remain distinct stories. Dr Tehrani described it as being “like a biologist showing that humans and other apes share a common ancestor but have evolved into distinct species”.
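To make the method concrete, here is a minimal sketch of the underlying idea: code each tale as a presence/absence vector of plot variables, measure pairwise difference, and repeatedly join the most similar clusters into a tree. The tale names and trait codings below are invented for illustration; the real analysis used probabilistic phylogenetic models, not this simple average-linkage clustering.

```python
from itertools import combinations

# Toy data (NOT the study's real codings): each tale scored on a handful of
# hypothetical plot variables, standing in for the 72 characters scored
# across 58 variants in the actual analysis.
tales = {
    "Red Riding Hood (FR)": [1, 1, 0, 1, 0],
    "Red Riding Hood (DE)": [1, 1, 0, 1, 1],
    "Wolf and the Kids":    [0, 1, 1, 0, 1],
    "Tiger Grandmother":    [1, 0, 1, 1, 0],
}

def hamming(a, b):
    """Fraction of plot variables on which two tales differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def upgma(tales):
    """Greedy average-linkage clustering: repeatedly join the two most
    similar clusters, yielding a nested-tuple 'family tree' of the tales."""
    clusters = {name: [vec] for name, vec in tales.items()}

    def dist(c1, c2):
        pairs = [(a, b) for a in clusters[c1] for b in clusters[c2]]
        return sum(hamming(a, b) for a, b in pairs) / len(pairs)

    while len(clusters) > 1:
        c1, c2 = min(combinations(clusters, 2), key=lambda p: dist(*p))
        clusters[(c1, c2)] = clusters.pop(c1) + clusters.pop(c2)
    return next(iter(clusters))

tree = upgma(tales)
print(tree)
```

The intuition is the same as in biology: tales that share more plot variables are inferred to share a more recent common ancestor.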

The Wolf and the Kids originated in the first century AD, with Little Red Riding Hood branching off 1,000 years later.

The story was immortalised by the Brothers Grimm in the 19th century, based on a tale written by Charles Perrault 200 years earlier. That derived from oral storytelling in France, Austria and northern Italy. Variants of Little Red Riding Hood can be found across Africa and Asia, including The Tiger Grandmother in Japan, China and Korea.

Dr Tehrani said: “My research cracks a long-standing mystery. The African tales turn out to be descended from The Wolf and the Kids but over time, they have evolved to become like Little Red Riding Hood, which is also likely to be descended from The Wolf and the Kids.”

The academic, who is now studying a range of other fairy tales, said: “This exemplifies a process biologists call convergent evolution, in which species independently evolve similar adaptations.”

Read the entire article here.

Image: Old father Wolf eyes up Little Red Riding Hood. Illustration courtesy of Tyler Garrison / Guardian.

Teasie Weasie, Vidal and Bob

A new BBC documentary chronicles the influence of flamboyant sixties hairstylist Raymond Bessone, better known as Raymond “Teasie Weasie”. He was the first stylist to appear on television, and is credited with inventing the modern bouffant and innovating with hair color. He also trained Vidal Sassoon, who later created the bob.

From the Guardian:

From the beehive to the afro and the footballer’s perm, a new BBC documentary celebrates the nation’s love of a flamboyant hairstyle – the bigger the better.

Possibly the most famous haircut of the 1960s, the asymmetric bob created by Vidal Sassoon in 1963 liberated a generation of women from the need for a weekly appointment and a session under the hood-dryer. With these sharp, swinging styles, the blowdry was born. This cut is by Roger Thompson, a stylist at Sassoon’s salon.

See more images of the hairstyles or check out a preview of the documentary here.

Image: Cover of the 1976 paperback book “Raymond – The outrageous autobiography of Teasie-Weasie”. Courtesy of Raymond Bessone/ Wyndham Publications / Wikipedia.

Image: Bob cut. Courtesy of Vic Singh/Rex.

Biological Transporter

Molecular-biology entrepreneur and genome-engineering pioneer Craig Venter is at it again. In his new book, Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, Venter explains his grand ideas and the coming era of discovery.

From ars technica:

J Craig Venter has been a molecular-biology pioneer for two decades. After developing expressed sequence tags in the 90s, he led the private effort to map the human genome, publishing the results in 2001. In 2010, the J Craig Venter Institute manufactured the entire genome of a bacterium, creating the first synthetic organism.

Now Venter, author of Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, explains the coming era of discovery.

Wired: In Life at the Speed of Light, you argue that humankind is entering a new phase of evolution. How so?

J Craig Venter: As the industrial age is drawing to a close, I think that we’re witnessing the dawn of the era of biological design. DNA, as digitized information, is accumulating in computer databases. Thanks to genetic engineering, and now the field of synthetic biology, we can manipulate DNA to an unprecedented extent, just as we can edit software in a computer. We can also transmit it as an electromagnetic wave at or near the speed of light and, via a “biological teleporter,” use it to recreate proteins, viruses, and living cells at another location, changing forever how we view life.

So you view DNA as the software of life?

All the information needed to make a living, self-replicating cell is locked up within the spirals of DNA’s double helix. As we read and interpret that software of life, we should be able to completely understand how cells work, then change and improve them by writing new cellular software.

The software defines the manufacture of proteins that can be viewed as its hardware, the robots and chemical machines that run a cell. The software is vital because the cell’s hardware wears out. Cells will die in minutes to days if they lack their genetic-information system. They will not evolve, they will not replicate, and they will not live.

Of all the experiments you have done over the past two decades involving the reading and manipulation of the software of life, which are the most important?

I do think the synthetic cell is my most important contribution. But if I were to select a single study, paper, or experimental result that has really influenced my understanding of life more than any other, I would choose one that my team published in 2007, in a paper titled “Genome Transplantation in Bacteria: Changing One Species to Another”. The research that led to this paper in the journal Science not only shaped my view of the fundamentals of life but also laid the groundwork to create the first synthetic cell. Genome transplantation not only provided a way to carry out a striking transformation, converting one species into another, but would also help prove that DNA is the software of life.

What has happened since your announcement in 2010 that you created a synthetic cell, JCVI-syn1.0?

At the time, I said that the synthetic cell would give us a better understanding of the fundamentals of biology and how life works, help develop techniques and tools for vaccine and pharmaceutical development, enable development of biofuels and biochemicals, and help to create clean water, sources of food, textiles, bioremediation. Three years on that vision is being borne out.

Your book contains a dramatic account of the slog and setbacks that led to the creation of this first synthetic organism. What was your lowest point?

When we started out creating JCVI-syn1.0 in the lab, we had selected M. genitalium because of its extremely small genome. That decision we would come to really regret: in the laboratory, M. genitalium grows slowly. So whereas E. coli divides into daughter cells every 20 minutes, M. genitalium requires 12 hours to make a copy of itself. With logarithmic growth, it’s the difference between having an experimental result in 24 hours versus several weeks. It felt like we were working really hard to get nowhere at all. I changed the target to the M. mycoides genome. It’s twice as large as that of genitalium, but it grows much faster. In the end, that move made all the difference.
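The arithmetic behind that frustration is simple exponential growth: the number of doublings per day, not the clock time, sets how fast results arrive. A quick sketch using the doubling times quoted above:

```python
# Back-of-the-envelope: colony growth is exponential in the number of
# doublings, so doubling time determines how fast an experiment reads out.
def doublings(total_minutes, doubling_minutes):
    return total_minutes / doubling_minutes

def population(start, total_minutes, doubling_minutes):
    return start * 2 ** (total_minutes / doubling_minutes)

day = 24 * 60
ecoli_doublings = doublings(day, 20)        # E. coli: every 20 minutes
mgen_doublings = doublings(day, 12 * 60)    # M. genitalium: every 12 hours
print(ecoli_doublings, mgen_doublings)
```

At E. coli’s pace a single cell passes through 72 generations in a day; M. genitalium manages just two, which is why switching to the faster-growing M. mycoides made such a difference.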

Some of your peers were blown away by the synthetic cell; others called it a technical tour de force. But there were also those who were underwhelmed because it was not “life from scratch.”

They haven’t thought much about what they are actually trying to say when they talk about “life from scratch.” How about baking a cake “from scratch”? You could buy one and then ice it at home. Or buy a cake mix, to which you add only eggs, water and oil. Or combine the individual ingredients, such as baking powder, sugar, salt, eggs, milk, shortening and so on. But I doubt that anyone would mean formulating his own baking powder by combining sodium, hydrogen, carbon, and oxygen to produce sodium bicarbonate, or producing homemade corn starch. If we apply the same strictures to creating life “from scratch,” it could mean producing all the necessary molecules, proteins, lipids, organelles, DNA, and so forth from basic chemicals or perhaps even from the fundamental elements carbon, hydrogen, oxygen, nitrogen, phosphate, iron, and so on.

There’s a parallel effort to create virtual life, which you go into in the book. How sophisticated are these models of cells in silico?

In the past year we have really seen how virtual cells can help us understand the real things. This work dates back to 1996 when Masaru Tomita and his students at the Laboratory for Bioinformatics at Keio started investigating the molecular biology of Mycoplasma genitalium—which we had sequenced in 1995—and by the end of that year had established the E-Cell Project. The most recent work on Mycoplasma genitalium has been done in America, by the systems biologist Markus W Covert, at Stanford University. His team used our genome data to create a virtual version of the bacterium that came remarkably close to its real-life counterpart.

You’ve discussed the ethics of synthetic organisms for a long time—where is the ethical argument today?

The Janus-like nature of innovation—its responsible use and so on—was evident at the very birth of human ingenuity, when humankind first discovered how to make fire on demand. (Do I use it to burn down a rival’s settlement, or to keep warm?) Every few months, another meeting is held to discuss how powerful technology cuts both ways. It is crucial that we invest in underpinning technologies, science, education, and policy in order to ensure the safe and efficient development of synthetic biology. Opportunities for public debate and discussion on this topic must be sponsored, and the lay public must engage. But it is important not to lose sight of the amazing opportunities that this research presents. Synthetic biology can help address key challenges facing the planet and its population. Research in synthetic biology may lead to new things such as programmed cells that self-assemble at the sites of disease to repair damage.

What worries you more: bioterror or bioerror?

I am probably more concerned about an accidental slip. Synthetic biology increasingly relies on the skills of scientists who have little experience in biology, such as mathematicians and electrical engineers. The democratization of knowledge, the rise of “open-source biology,” and the availability of kitchen-sink versions of key laboratory tools, such as the DNA-copying method PCR, make it easier for anyone—including those outside the usual networks of government, commercial, and university laboratories and the culture of responsible training and biosecurity—to play with the software of life.

Following the precautionary principle, should we abandon synthetic biology?

My greatest fear is not the abuse of technology, but that we will not use it at all, and turn our backs on an amazing opportunity at a time when we are over-populating our planet and changing environments forever.

You’re bullish about where this is headed.

I am—and a lot of that comes from seeing the next generation of synthetic biologists. We can get a view of what the future holds from a series of contests that culminate in a yearly event in Cambridge, Massachusetts—the International Genetically Engineered Machine (iGEM) competition. High-school and college students shuffle a standard set of DNA subroutines into something new. It gives me hope for the future.

You’ve been working to convert DNA into a digital signal that can be transmitted to a unit which then rebuilds an organism.

At Synthetic Genomics, Inc [which Venter founded with his long-term collaborator, the Nobel laureate Ham Smith], we can feed digital DNA code into a program that works out how to re-synthesize the sequence in the lab. This automates the process of designing overlapping pieces of DNA base-pairs, called oligonucleotides, adding watermarks, and then feeding them into the synthesizer. The synthesizer makes the oligonucleotides, which are pooled and assembled using what we call our Gibson-assembly robot (named after my talented colleague Dan Gibson). NASA has funded us to carry out experiments at its test site in the Mojave Desert. We will be using the JCVI mobile lab, which is equipped with soil-sampling, DNA-isolation and DNA sequencing equipment, to test the steps for autonomously isolating microbes from soil, sequencing their DNA and then transmitting the information to the cloud with what we call a “digitized-life-sending unit”. The receiving unit, where the transmitted DNA information can be downloaded and reproduced anew, has a number of names at present, including “digital biological converter,” “biological teleporter,” and—the preference of former US Wired editor-in-chief and CEO of 3D Robotics, Chris Anderson—”life replicator”.
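One step of that pipeline, splitting a digital sequence into overlapping oligonucleotides that assembly can later stitch back together, can be sketched in a few lines. The sequence, chunk length and overlap below are invented; real oligo design also weighs melting temperatures, synthesis error rates and the watermarks Venter mentions.

```python
# Toy sketch: chop a digital DNA string into overlapping oligonucleotides.
# Adjacent oligos share `overlap` bases, which is what lets an assembly
# step (such as the Gibson method named above) join neighboring fragments.
def design_oligos(sequence, oligo_len=20, overlap=5):
    step = oligo_len - overlap
    return [sequence[i:i + oligo_len]
            for i in range(0, len(sequence) - overlap, step)]

seq = "ATGCGTACGTTAGCCGATCGATCGGATCCTAGGCATGCAA"
oligos = design_oligos(seq)

# Stitch the pieces back together using the shared 5-base ends:
reassembled = oligos[0] + "".join(o[5:] for o in oligos[1:])
assert reassembled == seq
print(oligos)
```

Gibson assembly exploits exactly such shared ends to join neighboring fragments into a full-length sequence.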

Read the entire article here.

Image: J Craig Venter. Courtesy of Wikipedia.

The Large Hadron Collider is So Yesterday

CERN’s Large Hadron Collider (LHC) smashed countless particles into one another to reveal the Higgs boson. A great achievement for all concerned. Yet what of the remaining “big questions” of physics, and how will we find the answers?

From Wired:

The current era of particle physics is over. When scientists at CERN announced last July that they had found the Higgs boson — which is responsible for giving all other particles their mass — they uncovered the final missing piece in the framework that accounts for the interactions of all known particles and forces, a theory known as the Standard Model.

And that’s a good thing, right? Maybe not.

The prized Higgs particle, physicists assumed, would help steer them toward better theories, ones that fix the problems known to plague the Standard Model. Instead, it has thrown the field into a confusing situation.

“We’re sitting on a puzzle that is difficult to explain,” said particle physicist Maria Spiropulu of Caltech, who works on one of the LHC’s main Higgs-finding experiments, CMS.

It may sound strange, but physicists were hoping, maybe even expecting, that the Higgs would not turn out to be like they predicted it would be. At the very least, scientists hoped the properties of the Higgs would be different enough from those predicted under the Standard Model that they could show researchers how to build new models. But the Higgs’ mass proved stubbornly normal, almost exactly in the place the Standard Model said it would be.

To make matters worse, scientists had hoped to find evidence for other strange particles. These could have pointed in the direction of theories beyond the Standard Model, such as the current favorite, supersymmetry, which posits the existence of a heavy doppelganger for each of the known subatomic bits like electrons, quarks, and photons.

Instead, they were disappointed by being right. So how do we get out of this mess? More data!

Over the next few years, experimentalists will be churning out new results, which may be able to answer questions about dark matter, the properties of neutrinos, the nature of the Higgs, and perhaps what the next era of physics will look like. Here we take a look at the experiments that you should be paying attention to. These are the ones scientists are the most excited about because they might just form the next cracks in modern physics.

ATLAS and CMS
The Large Hadron Collider isn’t smashing protons right now. Instead, engineers are installing upgrades to help it search at even higher energies. The machine may be closed for business until 2015, but the massive amount of data it has already collected is still wide open. The two main Higgs-searching experiments, ATLAS and CMS, could have plenty of surprises in store.

“We looked for the low-hanging fruit,” said particle physicist David Miller of the University of Chicago, who works on ATLAS. “All that we found was the Higgs, and now we’re going back for the harder stuff.”

What kind of other stuff might be lurking in the data? Nobody knows for sure but the collaborations will spend the next two years combing through the data they collected in 2011 and 2012, when the Higgs was found. Scientists are hoping to see hints of other, more exotic particles, such as those predicted under a theory known as supersymmetry. They will also start to understand the Higgs better.

See, scientists don’t have some sort of red bell that goes “ding” every time their detector finds a Higgs boson. In fact, ATLAS and CMS can’t actually see the Higgs at all. What they look for instead are the different particles that the Higgs decays into. The easiest channels to detect include decays to a quark–antiquark pair or to two photons. What scientists are now trying to find out is exactly what percent of the time it decays to various different particle combinations, which will help them further pin down its properties.

It’s also possible that, with careful analysis, physicists would add up the percentages for each of the different decays and notice that they haven’t quite gotten to 100. There might be just a tiny remainder, indicating that the Higgs is decaying to particles that the detectors can’t see.

“We call that invisible decay,” said particle physicist Maria Spiropulu. The reason that might be exciting is that the Higgs could be turning into something really strange, like a dark matter particle.

We know from cosmological observations that dark matter has mass and, because the Higgs gives rise to mass, it probably has to somehow interact with dark matter. So the LHC data could tell scientists just how strong the connection is between the Higgs and dark matter. If found, these invisible decays could open up a whole new world of exploration.

“It’s fashionable to call it the ‘dark matter portal’ right now,” said Spiropulu.
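
The bookkeeping behind this hunt can be sketched as a toy calculation. The branching fractions below are invented placeholders for illustration, not real ATLAS or CMS measurements:

```python
# Toy bookkeeping: if the measured (visible) Higgs branching fractions
# fall short of 100 percent, the remainder hints at "invisible" decays.
# All numbers here are illustrative placeholders, not real measurements.
visible_branching = {
    "quark/anti-quark pair": 0.57,
    "two photons": 0.002,
    "tau pair": 0.06,
    "W pair": 0.21,
    "Z pair": 0.03,
    "other visible channels": 0.10,
}

# Whatever fraction isn't accounted for by visible channels would be
# the "invisible" remainder described above.
invisible_fraction = 1.0 - sum(visible_branching.values())
print(f"Unaccounted-for (invisible) fraction: {invisible_fraction:.3f}")
```

In practice such a remainder would only be meaningful if it survived careful error analysis, but this is the arithmetic at the heart of the invisible-decay search.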

NOvA and T2K
Neutrinos are oddballs in the Standard Model. They are tiny, nearly massless, and barely like interacting with any other members of the subatomic zoo. Historically, they have been the subject of many surprising results, and the future will probably reveal them to be even stranger. Physicists are currently trying to figure out some of their properties, which remain open questions.

“A very nice feature of these open questions is we know they all have answers that are accessible in the next round of experiments,” said physicist Maury Goodman of Argonne National Laboratory.

The US-based NOvA experiment will hopefully pin down some neutrino characteristics, in particular their masses. There are three types of neutrinos: electron, muon, and tau. We know that they have a very tiny mass — at least 10 billion times smaller than an electron’s — but we don’t know exactly what it is, nor which of the three types is heaviest or lightest.

NOvA will attempt to figure out this mass hierarchy by shooting a beam of neutrinos from Fermilab, near Chicago, to a detector 810 kilometers away in Ash River, Minnesota. A similar experiment in Japan, called T2K, sends neutrinos across 295 kilometers. As they pass through the Earth, neutrinos oscillate between their three different types. By comparing how the neutrinos look when they are first shot out versus how they appear at the distant detector, NOvA and T2K will be able to determine their properties with high precision.
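
The comparison these experiments perform rests on the standard two-flavor oscillation formula, P = sin²(2θ) · sin²(1.27 Δm² L / E). A minimal sketch follows, with parameter values chosen only to be roughly in the ballpark of an NOvA-like baseline — they are assumptions for illustration, not the experiments’ actual numbers:

```python
import math

def oscillation_probability(theta, delta_m2_ev2, length_km, energy_gev):
    """Two-flavor neutrino oscillation probability (standard textbook form).

    P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km, and E in GeV.
    """
    phase = 1.27 * delta_m2_ev2 * length_km / energy_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

# Illustrative inputs: an 810 km baseline, a ~2 GeV beam, and rough
# mass-splitting and mixing-angle values (assumed, not measured here).
p = oscillation_probability(0.15, 2.4e-3, 810, 2.0)
print(f"Appearance probability: {p:.4f}")
```

The key point for the mass hierarchy is that the probability depends on Δm², the splitting between neutrino masses, so measuring oscillations over a long baseline constrains which ordering of masses nature chose.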

T2K has been running for a couple of years, while NOvA is expected to begin taking data in 2014 and will run for six years. Scientists hope that the two experiments will help answer some of the last remaining questions about neutrinos.

Read the entire article here.

Image: A simulation of the decay of a Higgs boson in a linear collider detector. Courtesy of Norman Graf / CERN.


Dangerous Foreign Films

The next time you cringe because your date or significant other wants to go see a foreign movie with you, count your blessings. After all, you don’t live in North Korea.

So, take a deep breath and go see La Dolce Vita, The Discreet Charm of the Bourgeoisie and Rashomon.

From the Telegraph:

South Korea’s JoongAng Ilbo newspaper reported that the co-ordinated public executions took place in seven separate cities earlier this month.

In one case, the local authorities rounded up 10,000 people, including children, and forced them to watch, it reported.

Those put to death were found guilty by the state of minor misdemeanors, including watching videos of South Korean television programmes or possessing a Bible.

Sources told the paper that witnesses saw eight people tied to stakes in the Shinpoong Stadium, in Kangwon Province, before having sacks placed over their heads and being executed by soldiers firing machineguns.

“I heard from the residents that they watched in terror as the corpses were so riddled by machinegun fire that they were hard to identify afterwards,” the source said.

Relatives and friends of the victims were reportedly sent to prison camps, a tactic that North Korea frequently uses to dissuade anyone from breaking the law.

“Reports on public executions across the country would be certain to have a chilling effect on the rest of the people,” Daniel Pinkston, a North Korea analyst with The International Crisis Group in Seoul, said. “All these people want to do is to survive and for their families to survive. The incentives for not breaking the law are very clear now.”

The mass executions could signal a broader crackdown on any hints of discontent among the population – and even rival groups in Pyongyang – against the rule of Kim Jong-un, who came to power after the death of his father in December 2011.

In a new report, the Rand Corporation think tank claims that Kim survived an assassination attempt in 2012 and that his personal security has since been stepped up dramatically. The report concurs with South Korean intelligence sources that stated in March that a faction within the North Korean army had been involved in an attempt on Kim’s life in November of last year.

Read the entire article here.

Image: Kim Jong-un. Supreme leader of North Korea. Courtesy of Time.


Retailing: An Engineering Problem

Traditional retailers look at retailing primarily as a problem of marketing, customer acquisition, and relationship-building. For Amazon, it is more of an engineering and IT problem, with solutions to be found in innovation and optimization.

From Technology Review:

Why do some stores succeed while others fail? Retailers constantly struggle with this question, battling one another in ways that change with each generation. In the late 1800s, architects ruled. Successful merchants like Marshall Field created palaces of commerce that were so gorgeous shoppers rushed to come inside. In the early 1900s, mail order became the “killer app,” with Sears Roebuck leading the way. Toward the end of the 20th century, ultra-efficient suburban discounters like Target and Walmart conquered all.

Now the tussles are fiercest in online retailing, where it’s hard to tell if anyone is winning. Retailers as big as Walmart and as small as Tweezerman.com all maintain their own websites, catering to an explosion of customer demand. Retail e-commerce sales expanded 15 percent in the U.S. in 2012—seven times as fast as traditional retail. But price competition is relentless, and profit margins are thin to nonexistent. It’s easy to regard this $186 billion market as a poisoned prize: too big to ignore, too treacherous to pursue.

Even the most successful online retailer, Amazon.com, has a business model that leaves many people scratching their heads. Amazon is on track to ring up $75 billion in worldwide sales this year. Yet it often operates in the red; last quarter, Amazon posted a $41 million loss. Amazon’s founder and chief executive officer, Jeff Bezos, is indifferent to short-term earnings, having once quipped that when the company achieved profitability for a brief stretch in 1995, “it was probably a mistake.”

Look more closely at Bezos’s company, though, and its strategy becomes clear. Amazon is constantly plowing cash back into its business. Its secretive advanced-research division, Lab 126, works on next-generation Kindles and other mobile devices. More broadly, Amazon spends heavily to create the most advanced warehouses, the smoothest customer-service channels, and other features that help it grab an ever-larger share of the market. As former Amazon manager Eugene Wei wrote in a recent blog post, “Amazon’s core business model does generate a profit with most every transaction … The reason it isn’t showing a profit is because it’s undertaken a massive investment to support an even larger sales base.”

Much of that investment goes straight into technology. To Amazon, retailing looks like a giant engineering problem. Algorithms define everything from the best way to arrange a digital storefront to the optimal way of shipping a package. Other big retailers spend heavily on advertising and hire a few hundred engineers to keep systems running. Amazon prefers a puny ad budget and a payroll packed with thousands of engineering graduates from the likes of MIT, Carnegie Mellon, and Caltech.

Other big merchants are getting the message. Walmart, the world’s largest retailer, two years ago opened an R&D center in Silicon Valley where it develops its own search engines and looks for startups to buy. But competing on Amazon’s terms doesn’t stop with putting up a digital storefront or creating a mobile app. Walmart has gone as far as admitting that it may have to rethink what its stores are for. To equal Amazon’s flawless delivery, this year it even floated the idea of recruiting shoppers out of its aisles to play deliveryman, whisking goods to customers who’ve ordered online.

Amazon is a tech innovator by necessity, too. The company lacks three of conventional retailing’s most basic elements: a showroom where customers can touch the wares; on-the-spot salespeople who can woo shoppers; and the means for customers to take possession of their goods the instant a sale is complete. In one sense, everything that Amazon’s engineers create is meant to make these fundamental deficits vanish from sight.

Amazon’s cunning can be seen in the company’s growing patent portfolio. Since 1994, Amazon.com and a subsidiary, Amazon Technologies, have won 1,263 patents. (By contrast, Walmart has just 53.) Each Amazon invention is meant to make shopping on the site a little easier, a little more seductive, or to trim away costs. Consider U.S. Patent No. 8,261,983, on “generating customized packaging,” which was granted in late 2012.

“We constantly try to drive down the percentage of air that goes into a shipment,” explains Dave Clark, the Amazon vice president who oversees the company’s nearly 100 warehouses, known as fulfillment centers. The idea of shipping goods in a needlessly bulky box (and paying a few extra cents to United Parcel Service or other carriers) makes him shudder. Ship nearly a billion packages a year, and those pennies add up. Amazon over the years has created more than 40 sizes of boxes — but even that isn’t enough. That’s the glory of Amazon’s packaging patent: when a customer’s odd pairing of items creates a one-of-a-kind shipment, Amazon now has systems that will compute the best way to pack that order and create a perfect box for it within 30 minutes.
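
The core of the problem Clark describes — pick the tightest standard box for an arbitrary order, or fall back to custom packaging — can be sketched with a toy heuristic. This is only an illustration of the idea, not Amazon’s actual system: it compares volumes alone, whereas real packing must also fit three-dimensional shapes.

```python
# Toy box-selection heuristic: choose the smallest standard box whose
# volume covers an order, minimizing wasted "air". Illustrative only.

def box_volume(dims):
    w, h, d = dims
    return w * h * d

def pick_box(item_dims_list, box_catalog):
    """Return the smallest catalog box (by volume) that covers the order.

    item_dims_list: list of (w, h, d) item sizes.
    box_catalog: list of (w, h, d) standard box sizes.
    Returns None when no standard box fits — the signal to cut a
    custom box, as in the patent described above.
    """
    needed = sum(box_volume(d) for d in item_dims_list)
    # Sort candidates by volume so the first fit is the tightest.
    for box in sorted(box_catalog, key=box_volume):
        if box_volume(box) >= needed:
            return box
    return None  # no standard box fits: generate custom packaging

catalog = [(20, 15, 10), (30, 20, 15), (45, 35, 20)]
order = [(18, 10, 8), (5, 5, 5)]
print(pick_box(order, catalog))
```

Even this crude version shows why a catalog of 40-plus box sizes still leaves gaps: some orders fall between sizes, and the leftover "air" is money paid to the carrier.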

For thousands of online merchants, it’s easier to live within Amazon’s ecosystem than to compete. So small retailers such as EasyLunchboxes.com have moved their inventory into Amazon’s warehouses, where they pay a commission on each sale for shipping and other services. That is becoming a highly lucrative business for Amazon, says Goldman Sachs analyst Heath Terry. He predicts Amazon will reap $3.5 billion in cash flow from third-party shipping in 2014, creating a very profitable side business that he values at $38 billion—about 20 percent of the company’s overall stock market value.

Jousting directly with Amazon is tougher. Researchers at Internet Retailer calculate that Amazon’s revenue exceeds that of its next 12 competitors combined. In a regulatory filing earlier this year, Target—the third-largest retailer in the U.S.—conceded that its “digital sales represented an immaterial amount of total sales.” For other online entrants, the most prudent strategies generally involve focusing on areas that the big guy hasn’t conquered yet, such as selling services, online “flash sales” that snare impulse buyers who can’t pass up a deal, or particularly challenging categories such as groceries. Yet many, if not most, of these upstarts are losing money.

Read the entire article here.

Image: Amazon fulfillment center, Scotland. Courtesy of Amazon / Wired.


Let the Sunshine In

An ingeniously simple and elegant idea brings sunshine to a small town in Norway.

From the Guardian:

On the market square in Rjukan stands a statue of the town’s founder, a noted Norwegian engineer and industrialist called Sam Eyde, sporting a particularly fine moustache. One hand thrust in trouser pocket, the other grasping a tightly rolled drawing, the great man stares northwards across the square at an almost sheer mountainside in front of him.

Behind him, to the south, rises the equally sheer 1,800-metre peak known as Gaustatoppen. Between the mountains, strung out along the narrow Vestfjord valley, lies the small but once mighty town that Eyde built in the early years of the last century, to house the workers for his factories.

He was plainly a smart guy, Eyde. He harnessed the power of the 100-metre Rjukanfossen waterfall to generate hydro-electricity in what was, at the time, the world’s biggest power plant. He pioneered new technologies – one of which bears his name – to produce saltpetre by oxidising nitrogen from air, and made industrial quantities of hydrogen by water electrolysis.

But there was one thing he couldn’t do: change the elevation of the sun. Deep in its east-west valley, surrounded by high mountains, Rjukan and its 3,400 inhabitants are in shadow for half the year. During the day, from late September to mid-March, the town, three hours’ north-west of Oslo, is not dark (well, it is almost, in December and January, but then so is most of Norway), but it’s certainly not bright either. A bit … flat. A bit subdued, a bit muted, a bit mono.

Since last week, however, Eyde’s statue has gazed out upon a sight that even the eminent engineer might have found startling. High on the mountain opposite, 450 metres above the town, three large, solar-powered, computer-controlled mirrors steadily track the movement of the sun across the sky, reflecting its rays down on to the square and bathing it in bright sunlight. Rjukan – or at least, a small but vital part of Rjukan – is no longer stuck where the sun don’t shine.

“It’s the sun!” grins Ingrid Sparbo, disbelievingly, lifting her face to the light and closing her eyes against the glare. A retired secretary, Sparbo has lived all her life in Rjukan and says people “do sort of get used to the shade. You end up not thinking about it, really. But this … This is so warming. Not just physically, but mentally. It’s mentally warming.”

Two young mothers wheel their children into the square, turn, and briefly bask: a quick hit. On a freezing day, an elderly couple sit wide-eyed on one of the half-dozen newly installed benches, smiling at the warmth on their faces. Children beam. Lots of people take photographs. A shop assistant, Silje Johansen, says it’s “awesome. Just awesome.”

Pushing his child’s buggy, electrical engineer Eivind Toreid is more cautious. “It’s a funny thing,” he says. “Not real sunlight, but very like it. Like a spotlight. I’ll go if I’m free and in town, yes. Especially in autumn and in the weeks before the sun comes back. Those are the worst: you look just a short way up the mountainside and the sun is right there, so close you can almost touch it. But not here.”

Pensioners Valborg and Eigil Lima have driven from Stavanger – five long hours on the road – specially to see it. Heidi Fieldheim, who lives in Oslo now but spent six years in Rjukan with her husband, a local man, says she heard all about it on the radio. “But it’s far more than I expected,” she says. “This will bring much happiness.”

Across the road in the Nyetider cafe, sporting – by happy coincidence – a particularly fine set of mutton chops, sits the man responsible for this unexpected access to happiness. Martin Andersen is a 40-year-old artist and lifeguard at the municipal baths who, after spells in Berlin, Paris, Mali and Oslo, pitched up in Rjukan in the summer of 2001.

The first inkling of an artwork Andersen dubbed the Solspeil, or sun mirror, came to him as the month of September began to fade: “Every day, we would take our young child for a walk in the buggy,” he says, “and every day I realised we were having to go a little further down the valley to find the sun.” By 28 September, Andersen realised, the sun completely disappears from Rjukan’s market square. The occasion of its annual reappearance, lighting up the bridge across the river by the old fire station, is a date indelibly engraved in the minds of all Rjukan residents: 12 March.

And throughout the seemingly endless intervening months, Andersen says: “We’d look up and see blue sky above, and the sun high on the mountain slopes, but the only way we could get to it was to go out of town. The brighter the day, the darker it was down here. And it’s sad, a town that people have to leave in order to feel the sun.”

A hundred years ago, Eyde had already grasped the gravity of the problem. Researching his own plan, Andersen discovered that, as early as 1913, Eyde was considering a suggestion by one of his factory workers for a system of mountain-top mirrors to redirect sunlight into the valley below.

The industrialist eventually abandoned the plan for want of adequate technology, but soon afterwards his company, Norsk Hydro, paid for the construction of a cable car to carry the long-suffering townsfolk, for a modest sum, nearly 500m higher up the mountain and into the sunlight. (Built in 1928, the Krossobanen is still running, incidentally; £10 for the return trip. The view is majestic and the coffee at the top excellent. A brass plaque in the ticket office declares the facility a gift from the company “to the people of Rjukan, because for six months of the year, the sun does not shine in the bottom of the valley”.)

Andersen unearthed a partially covered sports stadium in Arizona that was successfully using small mirrors to keep its grass growing. He learned that in the Middle East and other sun-baked regions of the world, vast banks of hi-tech tracking mirrors called heliostats concentrate sufficient reflected sunlight to heat steam turbines and drive whole power plants. He persuaded the town hall to come up with the cash to allow him to develop his project further. He contacted an expert in the field, Jonny Nersveen, who did the maths and told him it could probably work. He visited Viganella, an Italian village that installed a similar sun mirror in 2006.

And 12 years after he first dreamed of his Solspeil, a German company specialising in so-called CSP – concentrated solar power – helicoptered in the three 17 sq m glass mirrors that now stand high above the market square in Rjukan. “It took,” he says, “a bit longer than we’d imagined.” First, the municipality wasn’t used to dealing with this kind of project: “There’s no rubber stamp for a sun mirror.” But Andersen also wanted to be sure it was right – that Rjukan’s sun mirror would do what it was intended to do.

Viganella’s single polished steel mirror, he says, lights a much larger area, but with a far weaker, more diffuse light. “I wanted a smaller, concentrated patch of sunlight: a special sunlit spot in the middle of town where people could come for a quick five minutes in the sun.” The result, you would have to say, is pretty much exactly that: bordered on one side by the library and town hall, and on the other by the tourist office, the 600 sq m of Rjukan’s market square, to be comprehensively remodelled next year in celebration, now bathes in a focused beam of bright sunlight fully 80-90% as intense as the original.

Their efforts monitored by webcams up on the mountain and down in the square, their movement dictated by computer in a Bavarian town outside Munich, the heliostats generate the solar power they need to gradually tilt and rotate, following the sun on its brief winter dash across the sky.
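
The tracking described above follows from the law of reflection: to bounce sunlight onto a fixed target, a flat mirror’s normal must bisect the direction toward the sun and the direction toward the target. A minimal vector sketch, with made-up directions (real heliostats also correct for drift, wind loading and so on):

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length."""
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

def mirror_normal(to_sun, to_target):
    """Unit normal a flat mirror needs so sunlight reflects toward the target.

    By the law of reflection, the normal is the bisector of the unit
    vectors pointing at the sun and at the target.
    """
    s = normalize(to_sun)
    t = normalize(to_target)
    return normalize(tuple(a + b for a, b in zip(s, t)))

# Illustrative directions: sun low in the sky, target down in the valley.
n = mirror_normal(to_sun=(0.0, -0.7, 0.7), to_target=(0.0, 0.6, -0.8))
print(tuple(round(x, 3) for x in n))
```

As the sun moves, the computed normal changes, which is why the mirrors must tilt and rotate continuously through the day rather than being fixed in place.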

It really works. Even the objectors – and there were, in town, plenty of them; petitions and letter-writing campaigns and a Facebook page organised against what a large number of locals saw initially as a vanity project and, above all, a criminal waste of money – now seem largely won over.

Read the entire article here.

Image: Light reflected by the mirrors of Rjukan, Norway. Courtesy of David Levene / Guardian.


A Female Muslim Superhero

Until recently, all superheroes from the creative minds at Marvel and DC Comics were white, straight men. But over time — albeit very slowly — we have seen the arrival of greater diversity: an Amazonian Wonder Woman, an African-American Green Lantern, a lesbian Batwoman. Now comes Kamala Khan, a shape-shifting Muslim girl from New Jersey (well, nobody’s perfect).

Author Shelina Janmohamed chimes in with some well-timed analysis.

From the Telegraph:

Once, an average comic book superhero was male and wore his pants on the outside of his trousers. We’ve been thrown some heroines along the way: Wonder Woman, Lara Croft and Ms Marvel. The female presence in comics has been growing over the years. But the latest announcement by Marvel Comics that a 16-year-old Pakistani Muslim American girl from New Jersey will be one of their lead characters has been creating a stir, and for all the right reasons. Kamala Khan is the new Ms Marvel.

The series editor at Marvel, Sana Amanat, says the series reflects a “desire to explore the Muslim-American diaspora from an authentic perspective”. Khan can grow and shrink her limbs and her body, and ultimately she’ll be able to shape-shift into other forms.

Like all superheroes she has a back story, and the series will deal with how familial and religious edicts mesh with super-heroics, and perhaps even involve some rule breaking.

I love it.

As a teenager, I wish I could have seen depictions of struggling with identity, religion and adolescence that reflected my own, and in a way that made me believe I could be powerful rather than confused, marginalised and abnormal.

Kamala Khan will create waves not just for teenagers though. Her very existence will enable readers to see past the ‘Muslim’ tag, into a powerful and flawed multifaceted human being. Fantasy, paradoxically, is a potent method to create normalisation of Muslim women in the ordinary mainstream.

Usually, Muslim women in the public eye, including fictional ones, are cast in a long tradition of one-dimensional stereotypes: meek, submissive, oppressed and cloaked females struggling to escape from a violent family, or too brainwashed to know that they need to escape.

Instead, Marvel Comics has created the opportunity to investigate the complexity of a Muslim female character against the backdrop of a different history: the tradition of superheroes. Through a heroine fraught with angst in her daily life, we can now explore Muslim women’s relationship with power (and, in Khan’s case, with giant fists). She is contextualised not through politics but through the world of superheroes.

Comics and cartoons are increasingly giving space to Muslim women to be explored in new contexts, offering the opportunity for better understanding, and ‘normalisation.’ Yes, I’m using the word again, because sometimes that’s all we long for, to be seen as normal ordinary women.

Just yesterday, the hashtag ‘#AsAMuslimWoman’ was trending on Twitter, offering mundane self descriptions from Muslim women such as: “Early mornings irritate me & I enjoy chocolate”, “I hate the District line in the morning. It’s cramped. And it smells funny”, and I’m “running my business, enjoying motherhood and living my Dreams”.

Read the entire article here.

Image: Kamala Khan, Marvel’s new Muslim superhero, on the cover of the new Ms. Marvel comic. Courtesy of the Marvel / Independent.


The Best Place to be a Woman

By most accounts the best place to be a woman is one that offers access to quality education and comprehensive healthcare, provides gender equality with men, and supports a meaningful work-life balance between career and family. So where is this real-world Shangri-La? Some might suggest the land of opportunity — the United States. But that’s not even close. Nor is it Canada or Switzerland or Germany or the UK.

According to a recent Global Gender Gap report, and a number of other surveys, the best place to be born a girl is Iceland. Next on the list come Finland, Norway, and Sweden, with another Scandinavian country, Denmark, not too far behind in seventh place. By way of comparison, the US comes in 23rd — not great, but better than Afghanistan and Yemen.

From the Social Reader:

Icelanders are among the happiest and healthiest people on Earth. They publish more books per capita than any other country, and they have more artists. They boast the most prevalent belief in evolution — and elves, too. Iceland is the world’s most peaceful nation (the cops don’t even carry guns), and the best place for kids. Oh, and they’ve got a lesbian head of state, the world’s first. Granted, the national dish is putrefied shark meat, but you can’t have everything.

Iceland is also the best place to have a uterus, according to the folks at the World Economic Forum. The Global Gender Gap Report ranks countries based on where women have the most equal access to education and healthcare, and where they can participate most fully in the country’s political and economic life.

According to the 2013 report, Icelandic women pretty much have it all. Their sisters in Finland, Norway, and Sweden have it pretty good, too: those countries came in second, third and fourth, respectively. Denmark is not far behind at number seven.

The U.S. comes in at a dismal 23rd, which is a notch down from last year. At least we’re not Yemen, which is dead last out of 136 countries.

So how did a string of countries settled by Vikings become leaders in gender enlightenment? Bloodthirsty raiding parties don’t exactly sound like models of egalitarianism, and the early days weren’t pretty. Medieval Icelandic law prohibited women from bearing arms or even having short hair. Viking women could not be chiefs or judges, and they had to remain silent in assemblies. On the flip side, they could request a divorce and inherit property. But that’s not quite a blueprint for the world’s premier egalitarian society.

The change came with literacy, for one thing. Today almost everybody in Scandinavia can read, a legacy of the Reformation and early Christian missionaries, who were interested in teaching all citizens to read the Bible. Following a long period of turmoil, Nordic states also turned to literacy as a stabilizing force in the late 18th century. By 1842, Sweden had made education compulsory for both boys and girls.

Researchers have found that the more literate the society in general, the more egalitarian it is likely to be, and vice versa. But the literacy rate is very high in the U.S., too, so there must be something else going on in Scandinavia. Turns out that a whole smorgasbord of ingredients makes gender equality a high priority in Nordic countries.

To understand why, let’s take a look at religion. The Scandinavian Lutherans, who turned away from the excesses of the medieval Catholic Church, were concerned about equality — especially the disparity between rich and poor. They thought that individuals had some inherent rights that could not just be bestowed by the powerful, and this may have opened them to the idea of rights for women. Lutheran state churches in Denmark, Sweden, Finland, Norway and Iceland have had female priests since the middle of the 20th century, and today, the Swedish Lutheran Church even has a female archbishop.

Or maybe it’s just that there’s not much religion at all. Scandinavians aren’t big churchgoers. They tend to look at morality from a secular point of view, where there’s not so much obsessive focus on sexual issues and less interest in controlling women’s behavior and activities. Scandinavia’s secularism decoupled sex from sin, and this worked out well for females. They came to be seen as having the right to sexual experience just like men, and reproductive freedom, too. Girls and boys learn about contraception in school (and even the pleasure of orgasms), and most cities have youth clinics where contraceptives are readily available. Women may have an abortion for any reason up to the eighteenth week (they can seek permission from the National Board of Health and Welfare after that), and the issue is not politically controversial.

Scandinavia’s political economy also developed along somewhat different lines than America’s did. Sweden and Norway had some big imperialist adventures, but this behavior declined following the Napoleonic Wars. After that they invested in the military to ward off invaders, but they were less interested in building it up to deal with bloated colonial structures and foreign adventures. Overall Nordic countries devoted fewer resources to the military — the arena where patriarchal values tend to get emphasized and entrenched. Iceland, for example, spends the world’s lowest percentage of GDP on its military.

Industrialization is part of the story, too: it hit the Nordic countries late. In the 19th century, Scandinavia did have a rich and powerful merchant class, but the region never produced the Gilded Age industrial titans and extreme concentration of wealth that happened in America back then, and has returned today. (Income inequality and discrimination of all kinds seem to go hand-in-hand.)

In the 20th century, farmers and workers in the newly populated Nordic cities tended to join together in political coalitions, and they could mount a serious challenge to the business elites, who were relatively weak compared to those in the U.S. Like ordinary people everywhere, Scandinavians wanted a social and economic system where everyone could get a job, expect decent pay, and enjoy a strong social safety net. And that’s what they got — kind of like Roosevelt’s New Deal without all the restrictions added by New York bankers and southern conservatives. Strong trade unions developed, which tend to promote gender equality. The public sector grew, providing women with good job opportunities. Iceland today has the highest rate of union membership out of any OECD country.

Over time, Scandinavian countries became modern social democratic states where wealth is more evenly distributed, education is typically free up through university, and the social safety net allows women to comfortably work and raise a family. Scandinavian moms aren’t agonizing over work-family balance: parents can take a year or more of paid parental leave. Dads are expected to be equal partners in childrearing, and they seem to like it. (Check them out in the adorable photo book, The Swedish Dad.)

The folks up north have just figured out — and it’s not rocket science! — that everybody is better off when men and women share power and influence. They’re not perfect — there’s still some unfinished business about how women are treated in the private sector, and we’ve sensed an undertone of darker forces in pop culture phenoms like The Girl with the Dragon Tattoo. But Scandinavians have decided that investment in women is both good for social relations and a smart economic choice. Unsurprisingly, Nordic countries have strong economies and rank high on things like innovation — Sweden is actually ahead of the U.S. on that metric. (So please, no more nonsense about how inequality makes for innovation.)

The good news is that things are getting better for women in most places in the world. But the World Economic Forum report shows that the situation either remains the same or is deteriorating for women in 20 percent of countries.

In the U.S., we’ve evened the playing field in education, and women have good economic opportunities. But according to the WEF, American women lag behind men in terms of health and survival, and they hold relatively few political offices. Both facts become painfully clear every time a Tea Party politician betrays total ignorance of how the female body works. Instead of getting more women to participate in the political process, we’ve got setbacks like a new voter ID law in Texas, which could disenfranchise one-third of the state’s woman voters. That’s not going to help the U.S. become a world leader in gender equality.

Read the entire article here.

Send to Kindle

Mind the Gap

The gap in question is not the infamous gap between subway platform and train, but the so-called “thigh gap”. Courtesy of the twittersphere, internet trolls and the substance-free 24-hour news media, the thigh gap has become the hot topic du jour.

One wonders when the conversation will move to a more significant gap — the void between the ears of a significant number of image-obsessed humans.

From the Guardian:

She may have modelled for Ralph Lauren and appeared on the cover of Vogue Italia, but when a photo of Robyn Lawley wearing a corset appeared on Facebook the responses were far from complimentary. “Pig”, “hefty” and “too fat” were some of the ways in which commenters described the 24-year-old. Her crime? Her thighs were touching. Lawley had failed to achieve a “thigh gap”.

The model, who has her own swimwear line and has won numerous awards for her work, responded vehemently below the line: “You sit behind a computer screen objectifying my body, judging it and insulting it, without even knowing it.”

She also went on to pen a thoughtful rallying cry for the Daily Beast last week against those who attacked her, saying their words were “just another tool of manipulation that other people are trying to use to keep me from loving my body”.

The response to her article was electric and Lawley was invited to speak about thigh-gap prejudice on America’s NBC Today. In a careful and downbeat tone, she explained: “It’s basically when your upper middle thighs do not touch when you’re standing with your legs together.”

The Urban Dictionary website describes it in no uncertain terms as “the gap between a woman’s thighs directly below the vagina, often diamond shaped when the thighs are together.”

The thigh gap is not a new concept to Lawley, who at 6ft 2in and 12 stone is classified as a “plus-size” model, and who remembers learning about it aged 12. But the growth of Instagram and other social media has allowed the concept of a thigh gap to enter the public consciousness and become an alarming, and exasperating, new trend among girls and women.

A typical example is a Twitter account devoted solely to Cara Delevingne’s thigh gap, which the model initially described as “pretty funny” but also “quite crazy”.

Selfies commonly show one part of a person’s anatomy, a way of compartmentalising body sections to show them in the best light, and the thigh gap is particularly popular. What was once a standard barometer of thinness among models is now apparently sought after by a wider public.

The thigh gap has its own hashtag on Twitter, under which users post pictures of non-touching thighs for inspiration, and numerous dedicated blogs. The images posted mirror the ubiquitous images of young, slim models and pop stars in shorts, often at festivals such as Glastonbury or Coachella, that have flooded the mainstream media in recent years, bringing with them the idea that skinniness, glamour and fun are intertwined.

There is even a “how to” page on the internet, although worshippers of thin may be disappointed to find that the first step is to “understand that a thigh gap is not physically possible for most people”.

Naomi Shimada began modelling at 13, but had to quit the industry when her weight changed. “I was what they call a straight-size model – a size 6 – when I started, which is normal for a very young girl.

“But as I got older my body didn’t stay like that, because, guess what, that doesn’t happen to people! So I took a break and went back in as a size 14 and now work as a plus-size model.”

Shimada is unequivocal about where the obsession with the thigh gap comes from. “It’s not a new trend: it’s been around for years. It comes partly from a fashion industry that won’t acknowledge that there are different ways a woman should look, and it comes from the pro-anorexic community. It’s a path to an eating disorder.”

Caryn Franklin, the former Clothes Show presenter who co-founded the diversity campaign All Walks Beyond the Catwalk, is quite appalled. “We now have a culture that convinces women to see themselves as an exterior only, and evaluating and measuring the component parts of their bodies is one of the symptoms.

“Young women do not have enough female role models showing them action or intellect. In their place are scantily clad celebrities. Sadly, young women are wrongly looking to fashion for some kind of guidance on what it is to be female.”

Franklin, who was fashion editor of style magazine i-D in the 1980s, says it hasn’t always been this way: “I had spent my teen years listening to Germaine Greer and Susie Orbach talking about female intellect.

“When I came out of college I knew I had a contribution to make that wasn’t based on my appearance. I then landed in a fashion culture that was busy celebrating diversity. There was no media saying ‘get the look’ and pointing to celebrities as style leaders because there wasn’t a homogenised fashion look, and there weren’t digital platforms that meant that I was exposed to more images of unachievable beauty.”

Asked whether the fixation on skinny thighs is a way of forcing women’s bodies to look pre-pubescent, Franklin says: “This culture has encouraged women to infantilise themselves. When you are so fixated on approval for what you look like, you are a little girl: you haven’t grown up.”

For many, the emergence of the thigh gap trend is baffling.

“About four hours ago, as far as I was concerned a ‘thigh gap’ was something anyone could have if they stood up and placed their feet wider than hip distance apart,” wrote Vice journalist Bertie Brandes when she discovered the phenomenon.

“A thigh gap is actually the hollow cavity which appears between the tops of your legs when you stand with your feet together. It also means that your body is underweight.”

Other bloggers have responded with a sense of the absurd; feminist blog Smells Like Girl Riot recently posted a diagram of a skeleton to show why the ischium and the pubis cannot be altered through diet alone.

Shimada, now 26, is about to launch her own fanzine, A-Genda, which aims to use a diverse range of models to show young women “something healthy to aspire to”.

“When I was a really young model there were girls who used to talk about the pencil test, which is when you measure the depth of your waist against the length of a pencil, and back dimples, when the lack of fat would create concave areas of skin,” she says. “But I don’t even think this kind of thing is limited to the fashion industry any more. It’s all a big mess. But we all have to play a role in making it better.”

Franklin also wonders: “When did everyone become so narcissistic? What happened to intellect? My sense of myself was not informed by a very shallow patriarchal media that prioritised the objectification of women – it was informed by feminism.”

Lawley signed off her call to arms with a similar acknowledgement of the potential power of women’s bodies.

“I’ve been trying to do just the opposite: I want my thighs to be bigger and stronger. I want to run faster and swim longer. I suppose we all just want different things, but women have enough pressure as it is without the added burden of achieving a ‘thigh gap’.

“The last thing I would want for my future daughter would be to starve herself because she thought a ‘thigh gap’ was necessary to be deemed attractive.”

Read the entire article here.

Image: Model Robyn Lawley. Courtesy of Jon Gorrigan / Observer.

Send to Kindle

Are You An H-less Socialist?

If you’re British and you drop your Hs while speaking then you’re likely to be considered of inferior breeding stock by the snootier classes. Or as the Times newspaper put it at the onset of the 20th century, you would be considered an “h-less socialist”. Of course, a mere fifty years earlier it was generally acceptable to drop aitches, so you would have been correct in pronouncing “hotel” as “otel” or “horse” as “orse”. And, farther back still, in Ancient Rome adding Hs would have earned the scorn of the ruling classes for appearing too Greek. So, who’s right?

If you’re wondering how this all came about and who if anybody is right, check out the new book Alphabetical: How Every Letter Tells A Story by Michael Rosen.

From the Guardian:

The alphabet is something not to be argued with: there are 26 letters in as fixed a sequence as the numbers 1-26; once learned in order and for the “sounds they make”, you have the key to reading and the key to the way the world is classified. Or perhaps not.

Actually, in the course of writing my book about the history of the letters we use, Alphabetical, I discovered that the alphabet is far from neutral. Debates about power and class surround every letter, and H is the most contentious of all. No other letter has had such power to divide people into opposing camps.

In Britain, H owes its name to the Normans, who brought their letter “hache” with them in 1066. Hache is the source of our word “hatchet”: probably because a lower-case H looks a lot like an axe. It has certainly caused a lot of trouble over the years. A century ago people dropping their h’s were described in the Times as “h-less socialists.” In ancient Rome, they were snooty not about people who dropped their Hs but about those who picked up extra ones. Catullus wrote a nasty little poem about Arrius (H’arrius he called him), who littered his sentences with Hs because he wanted to sound more Greek. Almost two thousand years later we are still split, and pronouncing H two ways: “aitch”, which is posh and “right”; and “haitch”, which is not posh and thus “wrong”. The two variants used to mark the religious divide in Northern Ireland – aitch was Protestant, haitch was Catholic, and getting it wrong could be a dangerous business.

Perhaps the letter H was doomed from the start: given that the sound we associate with H is so slight (a little outbreath), there has been debate since at least AD 500 whether it was a true letter or not. In England, the most up-to-date research suggests that some 13th-century dialects were h-dropping, but by the time elocution experts came along in the 18th century, they were pointing out what a crime it is. And then received wisdom shifted, again: by 1858, if I wanted to speak correctly, I should have said “erb”, “ospital” and “umble”.

The world is full of people laying down the law about the “correct” choice: is it “a hotel” or “an otel”; is it “a historian” or “an historian”? But there is no single correct version. You choose. We have no academy to rule on these matters and, even if we did, it would have only marginal effect. When people object to the way others speak, it rarely has any linguistic logic. It is nearly always because of the way that a particular linguistic feature is seen as belonging to a cluster of disliked social features. Writing this book has been a fascinating journey: the story of our alphabet turns out to be a complex tug of war between the people who want to own our language and the people who use it. I know which side I’m on.

Read the (h)entire (h)article ‘ere.

Image: Alphabetical book cover. Courtesy of Michael Rosen.

Send to Kindle

The Coming Energy Crash

By some accounts the financial crash that began in 2008 is a mere economic hiccup compared with the next big economic (and environmental) disaster — a fossil fuel crisis compounded by widespread denial of risk.

From the New Scientist:

FIVE years ago the world was in the grip of a financial crisis that is still reverberating around the globe. Much of the blame for that can be attributed to weaknesses in human psychology: we have a collective tendency to be blind to the kind of risks that can crash economies and imperil civilisations.

Today, our risk blindness is threatening an even bigger crisis. In my book The Energy of Nations, I argue that the energy industry’s leaders are guilty of a risk blindness that, unless action is taken, will lead to a global crash – and not just because of the climate change they fuel.

Let me begin by explaining where I come from. I used to be a creature of the oil and gas industry. As a geologist on the faculty at Imperial College London, I was funded by BP, Shell and others, and worked on oil and gas in shale deposits, among other things. But I became worried about society’s overdependency on fossil fuels, and acted on my concerns.

In 1989, I quit Imperial College to become a climate campaigner. A decade later I set up a solar energy business. In 2000 I co-founded a private equity fund investing in renewables.

In these capacities, I have watched captains of the energy and financial industries at work – frequently close to, often behind closed doors – as the financial crisis has played out and the oil price continued its inexorable rise. I have concluded that too many people across the top levels of business and government have found ways to close their eyes and ears to systemic risk-taking. Denial, I believe, has become institutionalised.

As a result of their complacency we face four great risks. The first and biggest is no surprise: climate change. We have way more unburned conventional fossil fuel than is needed to wreck the climate. Yet much of the energy industry is discovering and developing unconventional deposits – shale gas and tar sands, for example – to pile onto the fire, while simultaneously abandoning solar power just as it begins to look promising. It has been vaguely terrifying to watch how CEOs of the big energy companies square that circle.

Second, we risk creating a carbon bubble in the capital markets. If policymakers are to achieve their goal of limiting global warming to 2 °C, 60 to 80 per cent of proved reserves of fossil fuels will have to remain in the ground unburned. If so, the value of oil and gas companies would crash and a lot of people would lose a lot of money.

I am chairman of Carbon Tracker, a financial think tank that aims to draw attention to that risk. Encouragingly, some financial institutions have begun withdrawing investment in fossil fuels after reading our warnings. The latest report from the Intergovernmental Panel on Climate Change (IPCC) should spread appreciation of how crazy it is to have energy markets that are allowed to account for assets as though climate policymaking doesn’t exist.

Third, we risk being surprised by the boom in shale gas production. That, too, may prove to be a bubble, maybe even a Ponzi scheme. Production from individual shale wells declines rapidly, and large amounts of capital have to be borrowed to drill replacements. This will surprise many people who make judgement calls based on the received wisdom that limits to shale drilling are few. But I am not alone in these concerns.

Even if the US shale gas drilling isn’t a bubble, it remains unprofitable overall and environmental downsides are emerging seemingly by the week. According to the Texas Commission on Environmental Quality, whole towns in Texas are now running out of water, having sold their aquifers for fracking. I doubt that this is a boom that is going to appeal to the rest of the world; many others agree.

Fourth, we court disaster with assumptions about oil depletion. Most of us believe the industry mantra that there will be adequate flows of just-about-affordable oil for decades to come. I am in a minority who don’t. Crude oil production peaked in 2005, and oil fields are depleting at more than 6 per cent per year, according to the International Energy Agency. The much-hyped 2 million barrels a day of new US production capacity from shale needs to be put in context: we live in a world that consumes 90 million barrels a day.
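The context the author asks for is simple arithmetic, which can be made explicit with the figures quoted in the passage (a rough illustration only, using the article's own numbers):

```python
# Figures as quoted in the article: world consumption of ~90 million
# barrels/day, existing fields depleting at more than 6% per year (IEA),
# and ~2 million barrels/day of new US shale capacity.
world_demand = 90e6      # barrels per day consumed worldwide
depletion_rate = 0.06    # annual decline of existing field output
new_us_shale = 2e6       # barrels per day of new US capacity

# Capacity lost to depletion each year, which must be replaced
# just to keep supply flat:
annual_loss = world_demand * depletion_rate  # ~5.4 million barrels/day

# The shale addition covers well under half of a single year's decline:
coverage = new_us_shale / annual_loss
```

On these numbers, the much-hyped shale capacity offsets roughly a third of one year's depletion, which is the author's point about scale.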

It is because of the sheer prevalence of risk blindness, overlain with the pervasiveness of oil dependency in modern economies, that I conclude system collapse is probably inevitable within a few years.

Mine is a minority position, but it would be wise to remember how few whistleblowers there were in the run-up to the financial crash, and how they were vilified in the same way “peakists” – believers in premature peak oil – are today.

Read the entire article here.

Image: power plant. Courtesy of Think Progress.

Send to Kindle

Masters of the Universe: Silicon Valley Edition

As we all (should) know the “real” masters of the universe (MOTU) center on He-Man and his supporting cast of characters from toy maker Mattel. In the 80s, we also find masters of the universe on Wall Street — bright young MBAs leading the charge towards the untold wealth (and eventual destruction) mined by investment banks. Ironically, many of the east coast MOTU have since disappeared from public view following the financial meltdown that many of them helped engineer. Now, we seem to be at risk from another group of arrogant MOTU: this time, a select group of high-tech entrepreneurs from Silicon Valley.

From the WSJ:

At a startup conference in the San Francisco Bay area last month, a brash and brilliant young entrepreneur named Balaji Srinivasan took the stage to lay out a case for Silicon Valley’s independence.

According to Mr. Srinivasan, who co-founded a successful genetics startup and is now a popular lecturer at Stanford University, the tech industry is under siege from Wall Street, Washington and Hollywood, which he says he believes are harboring resentment toward Silicon Valley’s efforts to usurp their cultural and economic power.


On its surface, Mr. Srinivasan’s talk, called “Silicon Valley’s Ultimate Exit,” sounded like a battle cry of the libertarian, anti-regulatory sensibility long espoused by some of the tech industry’s leading thinkers. After arguing that the rest of the country wants to put a stop to the Valley’s rise, Mr. Srinivasan floated a plan for techies to build an “opt-in society, outside the U.S., run by technology.”

His idea seemed a more expansive version of Google Chief Executive Larry Page’s call for setting aside “a piece of the world” to try out controversial new technologies, and investor Peter Thiel’s “Seastead” movement, which aims to launch tech-utopian island nations.

But there was something more significant about Mr. Srinivasan’s talk than simply a rehash of Silicon Valley’s grievances. It was one of several recent episodes in which tech stars have sought to declare the Valley the nation’s leading center of power and to dismiss non-techies as unimportant to the nation’s future.

For instance, on “This Week in Start-Ups,” a popular tech podcast, the venture capitalist Chamath Palihapitiya recently argued that “it’s becoming excruciatingly, obviously clear to everyone else that where value is created is no longer in New York; it’s no longer in Washington; it’s no longer in L.A.; it’s in San Francisco and the Bay Area.”

This is Silicon Valley’s superiority complex, and it sure is an ugly thing to behold. As the tech industry has shaken off the memories of the last dot-com bust, its luminaries have become increasingly confident about their capacity to shape the future. And now they seem to have lost all humility about their place in the world.

Sure, they’re correct that whether you measure success financially or culturally, Silicon Valley now seems to be doing better than just about anywhere else. But there is a suggestion bubbling beneath the surface of every San Francisco networking salon that the industry is unstoppable, and that its very success renders it immune to legitimate criticism.

This is a dangerous idea. For Silicon Valley’s own sake, the triumphalist tone needs to be kept in check. Everyone knows that Silicon Valley aims to take over the world. But if they want to succeed, the Valley’s inhabitants would be wise to at least pretend to be more humble in their approach.

I tried to suggest this to Mr. Srinivasan when I met him at a Palo Alto, Calif., cafe a week after his incendiary talk. We spoke for two hours, and I found him to be disarming and charming.

He has a quick, capacious mind, the sort that flits effortlessly from discussions of genetics to economics to politics to history. (He is the kind of person who will refer to the Treaty of Westphalia in conversation.)

Contrary to press reports, Mr. Srinivasan says he wasn’t advocating Silicon Valley’s “secession.” And, in fact, he hadn’t used that word. Instead he was advocating a “peaceful exit,” something similar to what his father did when he emigrated from India to the U.S. in the past century. But when I asked him what harms techies faced that might prompt such a drastic response, he couldn’t offer much evidence.

He pointed to a few headlines in the national press warning that robots might be taking over people’s jobs. These, he said, were evidence of the rising resentment that technology will foster as it alters conditions across the country and why Silicon Valley needs to keep an escape hatch open.

But I found Mr. Srinivasan’s thesis to be naive. According to the industry’s own hype, technologies like robotics, artificial intelligence, data mining and ubiquitous networking are poised to usher in profound changes in how we all work and live. I believe, as Mr. Srinivasan argues, that many of these changes will eventually improve human welfare.

But in the short run, these technologies could cause enormous economic and social hardships for lots of people. And it is bizarre to expect, as Mr. Srinivasan and other techies seem to, that those who are affected wouldn’t criticize or move to stop the industry pushing them.

Tech leaders have a choice in how to deal with the dislocations their innovations cause. They can empathize and even work with stalwarts of the old economy to reduce the shock of new invention in sectors such as Hollywood, the news and publishing industries, the government, and finance—areas that Mr. Srinivasan collectively labels “the paper belt.”

They can continue to disrupt many of these institutions in the marketplace without making preening claims about the superiority of tech culture. (Apple’s executives rarely shill for the Valley, but still sometimes manage to change the world).

Or, tech leaders can adopt an oppositional tone: If you don’t recognize our superiority and the rightness of our ways, we’ll take our ball and go home.

Read the entire article here.

Image courtesy of Silicon Valley.

Send to Kindle

Pre-Twittersphere Infectious Information

While our 21st-century always-on media and information sharing circus pervades every nook and cranny of our daily lives, it is useful to note that pre-Twittersphere, ideas and information did get shared. Yes, useful news and even trivial memes did go viral back in the 1800s.

From Wired:

The story had everything — exotic locale, breathtaking engineering, Napoleon Bonaparte. No wonder the account of a lamplit flat-bottom boat journey through the Paris sewer went viral after it was published — on May 23, 1860.

At least 15 American newspapers reprinted it, exposing tens of thousands of readers to the dank wonders of the French city’s “splendid system of sewerage.”

Twitter is faster and HuffPo more sophisticated, but the parasitic dynamics of networked media were fully functional in the 19th century. For proof, look no further than the Infectious Texts project, a collaboration of humanities scholars and computer scientists.

The project expects to launch by the end of the month. When it does, researchers and the public will be able to comb through widely reprinted texts identified by mining 41,829 issues of 132 newspapers from the Library of Congress. While this first stage focuses on texts from before the Civil War, the project eventually will include the later 19th century and expand to include magazines and other publications, says Ryan Cordell, an assistant professor of English at Northeastern University and a leader of the project.

Some of the stories were printed in 50 or more newspapers, each with thousands to tens of thousands of subscribers. The most popular of them most likely were read by hundreds of thousands of people, Cordell says. Most have been completely forgotten. “Almost none of those are texts that scholars have studied, or even knew existed,” he said.

The tech may have been less sophisticated, but some barriers to virality were low in the 1800s. Before modern copyright laws there were no legal or even cultural barriers to borrowing content, Cordell says. Newspapers borrowed freely. Large papers often had an “exchange editor” whose job it was to read through other papers and clip out interesting pieces. “They were sort of like BuzzFeed employees,” Cordell said.

Clips got sorted into drawers according to length; when the paper needed, say, a 3-inch piece to fill a gap, they’d pluck out a story of the appropriate length and publish it, often verbatim.

Fast forward a century and a half and many of these newspapers have been scanned and digitized. Northeastern computer scientist David Smith developed an algorithm that mines this vast trove of text for reprinted items by hunting for clusters of five words that appear in the same sequence in multiple publications (Google uses a similar concept for its Ngram viewer).
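The core idea behind that matching — shared five-word sequences as evidence of reprinting — can be sketched in a few lines of Python. This is a toy illustration of the general technique, not the Infectious Texts project's actual code:

```python
def five_grams(text):
    """Return the set of 5-word shingles (consecutive word tuples) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + 5]) for i in range(len(words) - 4)}

def likely_reprint(doc_a, doc_b, threshold=2):
    """Flag two articles as a likely reprint pair if they share at least
    `threshold` distinct 5-word sequences."""
    shared = five_grams(doc_a) & five_grams(doc_b)
    return len(shared) >= threshold

# A lightly edited reprint still shares many 5-grams with the original:
original = "the splendid system of sewerage beneath the streets of Paris"
reprint = "a tour of the splendid system of sewerage beneath the streets"
```

Real systems built on this idea add hashing and indexing so that every pair of 41,829 newspaper issues need not be compared directly.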

The project is sponsored by the NULab for Texts, Maps, and Networks at Northeastern and the Office of Digital Humanities at the National Endowment for the Humanities. Cordell says the main goal is to build a resource for other scholars, but he’s already capitalizing on it for his own research, using modern mapping and network analysis tools to explore how things went viral back then.

Counting page views from two centuries ago is anything but an exact science, but Cordell has used Census records to estimate how many people were living within a certain distance of where a particular piece was published and combined that with newspaper circulation data to estimate what fraction of the population would have seen it (a quarter to a third, for the most infectious texts, he says).
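Cordell's back-of-envelope method — nearby population scaled against circulation — amounts to simple arithmetic. A hypothetical sketch with invented figures (these numbers are illustrative assumptions, not Cordell's data):

```python
def estimated_readers(local_population, circulation, readers_per_copy=4.0):
    """Estimate a paper's readership as circulation times a pass-along
    factor, capped at the population actually within reach of the paper."""
    return min(local_population, circulation * readers_per_copy)

# For a widely reprinted piece, sum the estimate over every paper that ran it.
# (population within reach, circulation) -- hypothetical values:
papers = [(30_000, 2_500), (80_000, 6_000), (12_000, 1_500)]
total = sum(estimated_readers(pop, circ) for pop, circ in papers)
```

With 50 or more reprinting papers, totals in the hundreds of thousands follow quickly from even modest per-paper estimates.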

He’s also interested in mapping how the growth of the transcontinental railroad — and later the telegraph and wire services — changed the way information moved across the country. The animation below shows the spread of a single viral text, a poem by the Scottish poet Charles MacKay, overlaid on the developing railroad system. The one at the very bottom depicts how newspapers grew with the country from the colonial era to modern times, often expanding into a territory before the political boundaries had been drawn.

Read the entire article here.

Image: Courtesy of Ryan Cordell / Infectious texts project. Thicker lines indicate more content-sharing between 19th century newspapers.

Send to Kindle

Millionaires are So Yesterday

Not far from London’s beautiful Hampstead Heath lies The Bishops Avenue. From the 1930s until the mid-1970s this mile-long street was the archetypal symbol of new wealth; nouveau riche millionaires made it the most sought-after — and best-known — address for residential property in the nation (of course “old money” still preferred its stately mansions and castles). But since then, The Bishops Avenue has changed, with many properties now in the hands of billionaires, hedge fund investors and oil rich plutocrats.

From the Telegraph:

You can tell when a property is out of your price bracket if the estate agent’s particulars come not on a sheet of A4 but are presented in a 50-page hardback coffee-table book, with a separate section for the staff quarters.

Other giveaway signs, in case you were in any doubt, are the fact the lift is leather-lined, there are 62 internal CCTV cameras, a private cinema, an indoor swimming pool, sauna, steam room, and a series of dressing rooms – “for both summer and winter”, the estate agent informs me – which are larger than many central London flats.

But then any property on The Bishops Avenue in north London is out of most people’s price bracket – such as number 62, otherwise known as Jersey House, which is on the market for £38 million. I am being shown around by Grant Alexson, from Knight Frank estate agents, both of us in our socks to ensure that we do not grubby the miles of carpets or marble floors in the bathrooms (all of which have televisions set into the walls).

My hopes of picking up a knock-down bargain had been raised after the news this week that one property on The Bishops Avenue, Dryades, had been repossessed. The owners, the family of the former Pakistan privatisation minister Waqar Ahmed Khan, were unable to settle a row with their lender, Deutsche Bank.

It is not the only property in the hands of the receivers on this mile-long stretch. One was tied up in a Lehman Brothers property portfolio and remains boarded up. Meanwhile, the Saudi royal family, which bought 10 properties during the First Gulf War as boltholes in case Saddam Hussein invaded, has offloaded the entire package for a reported £80 million in recent weeks. And the most expensive property on the market, Heath Hall, had £35 million knocked off the asking price (taking it down to a mere £65 million).

This has all thrown the spotlight once again on this strange road, which has been nicknamed “Millionaires’ Row” since the 1930s – when a million meant something. Now, it is called “Billionaires’ Row”. It was designed, from its earliest days, to be home to the very wealthy. One of the first inhabitants was George Sainsbury, son of the supermarket founder; another was William Lyle, who used his sugar fortune to build a vast mansion in the Arts and Crafts style. Stars such as Gracie Fields also lived here.

But between the wars, the road became the butt of Music Hall comedians who joked about it being full of “des-reses” for the nouveaux riches such as Billy Butlin. Evelyn Waugh, the master of social nuance, made sure his swaggering newspaper baron Lord Copper of Scoop resided here. It was the 1970s, however, that saw the road vault from being home to millionaires to a pleasure ground for international plutocrats, who used their shipping or oil wealth to snap up properties, knock them down and build monstrous mansions in “Hollywood Tudor” style. Worse were the pastiches of Classical temples, the most notorious of which was built by the Turkish industrialist Halis Toprak, who decided the bath big enough to fit 20 people was not enough of a statement. So he slapped “Toprak Mansion” on the portico (causing locals to dub it “Top Whack Mansion”). It was sold a couple of years ago to the Kazakhstani billionairess Horelma Peramam, who renamed it Royal Mansion.

Perhaps the most famous of recent inhabitants was Lakshmi Mittal, the steel magnate, and for a long time Britain’s richest man. But he sold Summer Palace, for £38 million in 2011 to move to the much grander Kensington Palace Gardens, in the heart of London. The cast list became even more varied with the arrival of Salman Rushdie who hid behind bullet-proof glass and tycoon Asil Nadir, whose address is now HM Belmarsh Prison.

Of course, you would be hard-pressed to discover who owns these properties or how much anyone paid. These are not run-of-the-mill transactions between families moving home. Official Land Registry records reveal a complex web of deals between offshore companies. Miss Peramam holds Royal Mansion in the name of Hartwood Resources Company, registered in the British Virgin Islands, and the records suggest she paid closer to £40 million than the £50 million reported.

Alexson says the complexity of the deals is not just about avoiding stamp duty (which is now at 7 per cent for properties over £2 million). “Discretion first, tax second,” he argues. “Look, some of the Middle Eastern families own £500 billion. Stamp duty is not an issue for them.” Still, new tax rules this year, which increase stamp duty to 15 per cent if the property is bought through an offshore vehicle, have had an effect, according to Alexson, who says that the last five houses he sold have been bought by an individual, not a company.

But there is little sign of these individuals on the road itself. Walk down the main stretch of the Avenue from beautiful Hampstead Heath to the booming A1, which bisects the road, and you will see that more than 10 of these 39 houses are either boarded up or in a state of severe disrepair. Behind the high gates and walls, moss and weeds climb over the balustrades. Many others are clearly uninhabited, except for a crew of builders and a security guard. (Barnet council defends all the building work it has sanctioned, with Alexson pointing out that the new developments are invariably rectifying the worst atrocities of the 1980s.)

Read the entire article here.

Image: Toprak Mansion (now known as Royal Mansion), The Bishops Avenue. Courtesy of Daily Mail.


Zombie Technologies

Next time Halloween festivities roll around, consider dressing up as a fax machine — one of several technologies that seem unwilling to die.

From Wired:

One of the things we love about technology is how fast it moves. New products and new services are solving our problems all the time, improving our connectivity and user experience on a nigh-daily basis.

But underneath sit the technologies that just keep hanging on. Every flesh wound, every injury, every rupture of their carcass levied by a new device or new method of doing things doesn’t merit even so much as a flinch from them. They keep moving, slowly but surely, eating away at our livelihoods. They are the undead of the technology world, and they’re coming for your brains.

Below, you’ll find some of technology’s more persistent walkers—every time we seem to kill them off, more hordes still clinging to their past relevancy lumber up to distract you. It’s about time we lodged an axe in their skulls.

Oddly specific yet totally unhelpful error codes

It’s a common experience when you’re troubleshooting hardware and software: something, somewhere throws an error code that pairs an incredibly specific alphanumerical string (“0x000000F4”) with a completely generic and unhelpful message like “an unknown error occurred” or “a problem has been detected.”

Back in computing’s early days, the desire to use these codes instead of providing detailed troubleshooting guides made sense—storage space was at a premium, Internet connectivity could not be assumed, and it was a safe bet that the software in question came with some tome-like manual to assist people in the event of problems. Now, with connectivity virtually omnipresent and storage space a non-issue, it’s not clear why codes like these don’t link to more helpful information in some way.

All too often, you’re left to take the law into your own hands. Armed with your error code, you head over to your search engine of choice and punch it in. At this point, one of two things can happen, and I’m not sure which is more infuriating: you either find an expanded, totally helpful explanation of the code and how to fix it on the official support website (could you really not have built that into the software itself?), or, alternatively, you find a bunch of desperate, inconclusive forum posts that offer no additional insight into the problem (though they do offer insight into the absurdity of the human condition). There has to be a better way.
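There really is a better way, and it is not hard to imagine. A minimal sketch of what the author is asking for, an error catalogue that pairs each specific code with an explanation and a support link (the entries, messages, and URL below are all hypothetical):

```python
# Hypothetical error catalogue: each specific code carries its own
# human-readable summary, actionable advice, and a support-page link,
# instead of a generic "an unknown error occurred".

ERROR_CATALOG = {
    "0x000000F4": {
        "summary": "A critical system process terminated unexpectedly.",
        "advice": "Check disk health and recently installed drivers.",
        "more_info": "https://support.example.com/errors/0x000000F4",
    },
}

def describe_error(code: str) -> str:
    """Return a helpful message for a known code, or a generic fallback."""
    entry = ERROR_CATALOG.get(code)
    if entry is None:
        return f"Error {code}: an unknown error occurred."
    return (f"Error {code}: {entry['summary']} "
            f"{entry['advice']} See {entry['more_info']}")
```

The point is the lookup, not the catalogue format: the same table the vendor publishes on its support site could ship with, or be fetched by, the software itself.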

Copper landlines

I’ve been through the Northeast blackout, the 9-11 attacks, and Hurricane Sandy, all of which took out cell service at the same time family and friends were most anxious to get in touch. So I’m a prime candidate for maintaining a landline, which carries enough power to run phones, often provided by a facility with a backup generator. And, in fact, I’ve tried to retain one. But corporate indifference has turned copper wiring into the technology of the living dead.

Verizon really wants you to have two things: cellular service and FiOS. Except it doesn’t actually want to give you FiOS—the company has stopped expanding its fiber footprint, and it’s moving with the speed of a glacier to hook up neighborhoods that are FiOS accessible. That has left Verizon in a position where the company will offer you cell service, but, if you don’t want that, it will stick you with a technology it no longer wants to support: service over copper wires.

This was made explicit in the wake of Sandy when a shore community that had seen its wires washed out was offered cellular service as a replacement. When the community demanded wires, Verizon backed down and gave it FiOS. But the issue shows up in countless other ways. One of our editors recently decided to have DSL service over copper wire activated in his apartment; Verizon took two weeks to actually get the job done.

I stuck with Verizon DSL in the hope that I would be able to transfer directly to FiOS when it finally got activated. But Verizon’s indifference to wired service led to a six-month nightmare. I’d experience erratic DSL, call Verizon for help, and have it fixed through a process that cut off the phone service. Getting the phone service restored would degrade the DSL. On it went until I gave up and switched to cable—which was a good thing, because it took Verizon about two years to finally put fiber in place.

At the moment, AT&T still considers copper wiring central to its services, but it’s not clear how long that position will remain tenable. If AT&T’s position changes, then it’s likely that the company will also treat the copper just as Verizon has: like a technology that’s dead even as it continues to shamble around causing trouble.

The scary text mode insanity lying in wait beneath it all

PRESS DEL TO ENTER SETUP. Oh, BIOS, how I hate thee. Often the very first thing you have to deal with when dragging a new computer out of the box is the text mode BIOS setup screen, where you have to figure out how to turn on support for legacy USB devices, or change the boot order, or disable PXE booting, or force onboard video to work, or any number of other crazy things. It’s like being sucked into a time warp back into 1992.

Though slowly being replaced across the board by UEFI, BIOS setup screens are definitely still a thing even on new hardware—the small dual-Ethernet server I purchased just a month ago to serve as my new firewall required me to spend minutes figuring out which of its onboard USB ports were legacy-enabled and then which key summoned the setup screen (F2? Delete? F10? F1? IT’S NEVER THE SAME ONE!). Once in, I had to figure out how to enable USB device booting so that I could get Smoothwall installed, but the computer inexplicably wouldn’t boot from my carefully prepared USB stick, even though the stick worked great on the other servers in the closet. I ended up having to install from a USB CD-ROM drive instead.

Many motherboard OEMs now provide a way to adjust BIOS options from inside of Windows, which is great, but that won’t necessarily help you on a fresh Windows install (or on a computer you’ve parted together yourself and on which you haven’t installed the OEM’s universally hideous BIOS tweaking application). UEFI as a replacement has been steadily gaining ground for almost three years now, but we’ve likely got many more years of occasionally having to reboot and hold DEL to adjust some esoteric settings. Ugh.

Fax machines, and the general concept of faxing

Faxing has a longer and more venerable history than I would have guessed, based on how abhorrent it is in the modern day. The first commercial telefaxing service was established in France in 1865 via wire transmission, and we started sending faxes over phone lines circa 1964. For a long time, faxing was actually the best and fastest way to get a photographic clone of one piece of paper to an entirely different geographical location.

Then came e-mail. And digital cameras. And electronic signatures. And smartphones with digital cameras. And Passbook. And cloud storage. Yet people continue to ask me to fax them things.

When it comes to signing contracts or verifying or simply passing along information, digital copies, properly backed up with redundant files everywhere, are easier to deal with at literally every step in the process. On the very rare occasion that a physical piece of paper is absolutely necessary, here: e-mail it; I will sign it electronically and e-mail it back to you, and you print it out. You already sent me that piece of paper? I will sign it, take a picture with my phone, e-mail that picture to you, and you print it out. Everyone comes out ahead, no one has to deal with a fax machine.

That businesses have actually cropped up around the concept of allowing people to e-mail documents to a fax number is ludicrous. Get an e-mail address. They are free. Get a printer. It is cheaper than a fax machine. Don’t get a printer that is also a fax machine, because then you are just encouraging this technological concept to live on, when, in fact, it needs to die.

Read the entire article here.

Image courtesy of Mobiledia.


Chromosomal Chronometer

Researchers have found possible evidence of a DNA mechanism that keeps track of age. It is too early to tell whether changes over time in specific elements of our chromosomes cause aging or are a consequence of it. Yet this is a tantalizing discovery that bodes well for a better understanding of the genetic and biological systems that underlie the aging process.

From the Guardian:

A US scientist has discovered an internal body clock based on DNA that measures the biological age of our tissues and organs.

The clock shows that while many healthy tissues age at the same rate as the body as a whole, some of them age much faster or slower. The age of diseased organs varied hugely, with some appearing many tens of years “older” than healthy tissue in the same person, according to the clock.

Researchers say that unravelling the mechanisms behind the clock will help them understand the ageing process and hopefully lead to drugs and other interventions that slow it down.

Therapies that counteract natural ageing are attracting huge interest from scientists because they target the single most important risk factor for scores of incurable diseases that strike in old age.

“Ultimately, it would be very exciting to develop therapy interventions to reset the clock and hopefully keep us young,” said Steve Horvath, professor of genetics and biostatistics at the University of California in Los Angeles.

Horvath looked at the DNA of nearly 8,000 samples of 51 different healthy and cancerous cells and tissues. Specifically, he looked at how methylation, a natural process that chemically modifies DNA, varied with age.

Horvath found that the methylation of 353 DNA markers varied consistently with age and could be used as a biological clock. The clock ticked fastest in the years up to around age 20, then slowed down to a steadier rate. Whether the DNA changes cause ageing or are caused by ageing is an unknown that scientists are now keen to work out.
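The mechanics of such a clock are worth sketching. Horvath’s model is, roughly, a weighted sum of methylation levels at those 353 markers, passed through a nonlinear transform around age 20 that reproduces the fast-early, steady-later ticking described above. The sketch below uses three made-up marker names and coefficients purely for illustration; only the shape of the transform follows the article’s description:

```python
import math

# Illustrative sketch only: the real clock is a regression over 353 CpG
# methylation markers. The marker names, coefficients, and intercept
# below are invented for demonstration.
ADULT_AGE = 20  # the clock ticks fastest up to about this age

COEFFICIENTS = {"cg_marker_a": 1.8, "cg_marker_b": -0.9, "cg_marker_c": 0.4}
INTERCEPT = 0.7

def inverse_transform(t: float) -> float:
    """Map the model's transformed output back to years.

    Exponential below ADULT_AGE (so small changes cover childhood quickly)
    and linear above it (a steadier adult rate).
    """
    if t < 0:
        return (1 + ADULT_AGE) * math.exp(t) - 1
    return (1 + ADULT_AGE) * t + ADULT_AGE

def predict_age(methylation: dict) -> float:
    """Weighted sum of methylation beta values (0..1), then invert."""
    t = INTERCEPT + sum(COEFFICIENTS[m] * methylation[m] for m in COEFFICIENTS)
    return inverse_transform(t)
```

With a model of this shape, “resetting the clock” simply means driving the methylation values at the marker sites back toward their youthful levels, which is what the reprogramming result described below the fold demonstrates.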

“Does this relate to something that keeps track of age, or is a consequence of age? I really don’t know,” Horvath told the Guardian. “The development of grey hair is a marker of ageing, but nobody would say it causes ageing,” he said.

The clock has already revealed some intriguing results. Tests on healthy heart tissue showed that its biological age – how worn out it appears to be – was around nine years younger than expected. Female breast tissue aged faster than the rest of the body, on average appearing two years older.

Diseased tissues also aged at different rates, with cancers speeding up the clock by an average of 36 years. Some brain cancer tissues taken from children had a biological age of more than 80 years.

“Female breast tissue, even healthy tissue, seems to be older than other tissues of the human body. That’s interesting in the light that breast cancer is the most common cancer in women. Also, age is one of the primary risk factors of cancer, so these types of results could explain why cancer of the breast is so common,” Horvath said.

Healthy tissue surrounding a breast tumour was on average 12 years older than the rest of the woman’s body, the scientist’s tests revealed.

Writing in the journal Genome Biology, Horvath showed that the biological clock was reset to zero when cells plucked from an adult were reprogrammed back to a stem-cell-like state. The process for converting adult cells into stem cells, which can grow into any tissue in the body, won Sir John Gurdon of Cambridge University and Shinya Yamanaka of Kyoto University the Nobel prize in 2012.

“It provides a proof of concept that one can reset the clock,” said Horvath. The scientist now wants to run tests to see how neurodegenerative and infectious diseases affect, or are affected by, the biological clock.

Read the entire article here.

Image: Artist rendition of DNA fragment. Courtesy of Zoonar GmbH/Alamy.
