
Launch-On-Warning


Set aside your latest horror novel and forget the terror from the Hollywood blood and gore machine. What follows is a true tale of existential horror.

It’s a story of potential catastrophic human error, aging and obsolete technology, testosterone-fueled brinkmanship, volatile rhetoric and nuclear annihilation.

The piece was written by Eric Schlosser over at the New Yorker. He is the author of "Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety".

I wonder if the command and control infrastructure serving the U.S. nuclear arsenal has since been upgraded so that the full complement of intercontinental ballistic missiles can be launched at a whim via Twitter.

What a great start to the new year.

From the New Yorker:

On June 3, 1980, at about two-thirty in the morning, computers at the National Military Command Center, beneath the Pentagon, at the headquarters of the North American Air Defense Command (NORAD), deep within Cheyenne Mountain, Colorado, and at Site R, the Pentagon’s alternate command post center hidden inside Raven Rock Mountain, Pennsylvania, issued an urgent warning: the Soviet Union had just launched a nuclear attack on the United States.

U.S. Air Force ballistic-missile crews removed their launch keys from the safes, bomber crews ran to their planes, fighter planes took off to search the skies, and the Federal Aviation Administration prepared to order every airborne commercial airliner to land.

President Jimmy Carter’s national-security adviser, Zbigniew Brzezinski, was asleep in Washington, D.C., when the phone rang. His military aide, General William Odom, was calling to inform him that two hundred and twenty missiles launched from Soviet submarines were heading toward the United States. Brzezinski told Odom to get confirmation of the attack. A retaliatory strike would have to be ordered quickly; Washington might be destroyed within minutes. Odom called back and offered a correction: twenty-two hundred Soviet missiles had been launched.

Brzezinski decided not to wake up his wife, preferring that she die in her sleep. As he prepared to call Carter and recommend an American counterattack, the phone rang for a third time. Odom apologized—it was a false alarm. An investigation later found that a defective computer chip in a communications device at NORAD headquarters had generated the erroneous warning. The chip cost forty-six cents.

Read the entire sobering article here.

Image: Minuteman III ICBM test launch from Vandenberg Air Force Base, CA, United States. Courtesy: U.S. Air Force, DOD Defense Visual Information Center. Public Domain.


Wrong Decisions, Bad Statistics

Each of us makes countless decisions daily, and a not-insignificant number of these is probably wrong. In most cases we continue, recover, readjust, move on, and sometimes even correct ourselves and learn. In the majority of instances these wrong decisions lead to inconsequential results.

However, sometimes the results are much more tragic, leading to accidents, injury and death. When those incorrect decisions are made by healthcare professionals, the consequences are starker still. By some estimates, around 50,000 hospital deaths each year in Canada and the U.S. could be prevented if misdiagnoses were caught.

From the New York Times:

Six years ago I was struck down with a mystery illness. My weight dropped by 30 pounds in three months. I experienced searing stomach pain, felt utterly exhausted and no matter how much I ate, I couldn’t gain an ounce.

I went from slim to thin to emaciated. The pain got worse, a white heat in my belly that made me double up unexpectedly in public and in private. Delivering on my academic and professional commitments became increasingly challenging.

It was terrifying. I did not know whether I had an illness that would kill me or stay with me for the rest of my life or whether what was wrong with me was something that could be cured if I could just find out what on earth it was.

Trying to find the answer, I saw doctors in London, New York, Minnesota and Chicago.

I was offered a vast range of potential diagnoses. Cancer was quickly and thankfully ruled out. But many other possibilities remained on the table, from autoimmune diseases to rare viruses to spinal conditions to debilitating neural illnesses.

Treatments suggested ranged from a five-hour, high-risk surgery to remove a portion of my stomach, to lumbar spine injections to numb nerve paths, to a prescription of antidepressants.

Faced with all these confusing and conflicting opinions, I had to work out which expert to trust, whom to believe and whose advice to follow. As an economist specializing in the global economy, international trade and debt, I have spent most of my career helping others make big decisions — prime ministers, presidents and chief executives — and so I’m all too aware of the risks and dangers of poor choices in the public as well as the private sphere. But up until then I hadn’t thought much about the process of decision making. So in between M.R.I.’s, CT scans and spinal taps, I dove into the academic literature on decision making. Not just in my field but also in neuroscience, psychology, sociology, information science, political science and history.

What did I learn?

Physicians do get things wrong, remarkably often. Studies have shown that up to one in five patients are misdiagnosed. In the United States and Canada it is estimated that 50,000 hospital deaths each year could have been prevented if the real cause of illness had been correctly identified.

Yet people are loath to challenge experts. In a 2009 experiment carried out at Emory University, a group of adults was asked to make a decision while contemplating an expert’s claims, in this case, a financial expert. A functional M.R.I. scanner gauged their brain activity as they did so. The results were extraordinary: when confronted with the expert, it was as if the independent decision-making parts of many subjects’ brains pretty much switched off. They simply ceded their power to decide to the expert.

If we are to control our own destinies, we have to switch our brains back on and come to our medical consultations with plenty of research done, able to use the relevant jargon. If we can’t do this ourselves we need to identify someone in our social or family network who can do so on our behalf.

Anxiety, stress and fear — emotions that are part and parcel of serious illness — can distort our choices. Stress makes us prone to tunnel vision, less likely to take in the information we need. Anxiety makes us more risk-averse than we would be regularly and more deferential.

We need to know how we are feeling. Mindfully acknowledging our feelings serves as an “emotional thermostat” that recalibrates our decision making. It’s not that we can’t be anxious, it’s that we need to acknowledge to ourselves that we are.

It is also crucial to ask probing questions not only of the experts but of ourselves. This is because we bring into our decision-making process flaws and errors of our own. All of us show bias when it comes to what information we take in. We typically focus on anything that agrees with the outcome we want.

Read the entire article here.


Financial Apocalypse and Economic Collapse via Excel

It’s long been known that Microsoft PowerPoint fuels corporate mediocrity and causes brain atrophy in creative individuals. Now we discover that another flagship product from the Redmond software maker, this time Excel, is to blame for some significant stresses on the global financial system.

From ars technica:

An economics paper claiming that high levels of national debt led to low or negative economic growth could turn out to be deeply flawed as a result of, among other things, an incorrect formula in an Excel spreadsheet. Microsoft’s PowerPoint has been considered evil thanks to the proliferation of poorly presented data and dull slides that are created with it. Might Excel also deserve such hyperbolic censure?

The paper, Growth in a Time of Debt, was written by economists Carmen Reinhart and Kenneth Rogoff and published in 2010. Since publication, it has been cited abundantly by the world’s press and politicians, including one-time vice-presidential nominee Paul Ryan (R-WI). The link it draws between high levels of debt and negative average economic growth has been used by right-leaning politicians to justify austerity budgets: slashing government expenditure and reducing budget deficits in a bid to curtail the growth of debt.

This link was always controversial, with many economists proposing that the correlation between high debt and low growth was just as likely to have a causal link in the other direction to that proposed by Reinhart and Rogoff: it’s not that high debt causes low growth, but rather that low growth leads to high debt.

However, the underlying numbers and the existence of the correlation were broadly accepted, in part because Reinhart and Rogoff’s paper did not include the source data they used to draw their inferences, making independent verification difficult.

A new paper, however, suggests that the data itself is in error. Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst, tried to reproduce the Reinhart and Rogoff result with their own data, but they couldn’t. So they asked for the original spreadsheets that Reinhart and Rogoff used to better understand what they were doing. Their results, published as “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff,” suggest that the pro-austerity paper was flawed. A comprehensive assessment of the new paper can be found at the Rortybomb economics blog.

It turns out that the Reinhart and Rogoff spreadsheet contained a simple coding error. A formula was supposed to calculate average values across twenty countries in rows 30 to 49, but it actually averaged only the fifteen countries in rows 30 to 44. Instead of the correct formula AVERAGE(L30:L49), the incorrect AVERAGE(L30:L44) was used.
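The failure mode is easy to reproduce outside of Excel. The sketch below uses invented growth figures (not the actual Reinhart-Rogoff data) to show how a truncated averaging range silently drops five countries from the calculation, with no error or warning of any kind:

```python
# Hypothetical per-country average growth figures, one per spreadsheet row.
# The real dataset had 20 countries in rows 30-49; numbers here are invented.
growth_by_row = [
    1.9, 3.0, 2.6, 2.2, 3.1,   # rows 30-34
    2.7, 2.4, 2.0, 2.9, 2.4,   # rows 35-39
    1.0, 0.7, 2.8, 2.6, 2.5,   # rows 40-44
    2.2, 2.1, 2.3, 2.4, 3.0,   # rows 45-49: silently excluded by the bad formula
]

# Equivalent of the correct AVERAGE(L30:L49): all 20 rows.
full_avg = sum(growth_by_row) / len(growth_by_row)

# Equivalent of the erroneous AVERAGE(L30:L44): only the first 15 rows.
truncated_avg = sum(growth_by_row[:15]) / 15

print(f"all 20 countries: {full_avg:.2f}")       # 2.34
print(f"first 15 only:    {truncated_avg:.2f}")  # 2.32
```

With these invented numbers the gap is small, but whenever the excluded rows differ systematically from the rest (as they did in the real spreadsheet), the truncation biases the headline figure, and nothing in the spreadsheet flags the mismatch between the range and the data.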

There was also a pair of important, but arguably more subjective, errors in the way the data was processed. Reinhart and Rogoff excluded data for some countries in the years immediately after World War II. There might be a reason for this; there might not. The original paper doesn’t justify the exclusion.

The original paper also used an unusual scheme for weighting data. The UK’s 19-year stretch of high debt and moderate growth (between 1946 and 1964, the debt-to-GDP ratio was above 90 percent, and growth averaged 2.4 percent) is collapsed into a single data point and treated as equivalent to New Zealand’s single year of debt above 90 percent, during which it experienced growth of -7.6 percent. Some kind of weighting system might be justified, with Herndon, Ash, and Pollin speculating that there is a serial correlation between years.
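The impact of that weighting choice can be made concrete. Using just the two figures quoted above, the sketch below contrasts equal-weighting each country's high-debt episode (one number per country) with equal-weighting each country-year (pooling all the years together):

```python
# The two high-debt episodes quoted in the article.
uk_growth = [2.4] * 19   # 19 years of moderate growth with debt above 90% of GDP
nz_growth = [-7.6]       # a single year above 90% debt

# Episode weighting: collapse each country to one average, then average those.
# New Zealand's single bad year counts as much as the UK's entire 19-year run.
episode_avg = (sum(uk_growth) / len(uk_growth)
               + sum(nz_growth) / len(nz_growth)) / 2

# Country-year weighting: pool all 20 country-years and average.
all_years = uk_growth + nz_growth
year_avg = sum(all_years) / len(all_years)

print(f"episode-weighted:      {episode_avg:.2f}%")  # -2.60
print(f"country-year weighted: {year_avg:.2f}%")     #  1.90
```

The same 20 observations yield sharply negative average growth under one scheme and healthy positive growth under the other, which is why the weighting choice matters as much as the formula error.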

Recalculating the data to remove these three issues turns out to provide much weaker evidence for austerity. Although growth is higher in countries with a debt ratio of less than 30 percent (averaging 4.2 percent), there’s no point at which it falls off a cliff and inevitably turns negative. For countries with a debt ratio between 30 and 60 percent, average growth was 3.1 percent; between 60 and 90 percent it was 3.2 percent; and above 90 percent it was 2.2 percent. That is lower than growth at low debt levels, but far from the -0.1 percent growth the original paper claimed.
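The corrected analysis is essentially a bucketed average: group country-year observations by debt-to-GDP ratio and average growth within each bucket. The toy sketch below uses invented observations (chosen to land on the bucket averages quoted above, not drawn from the actual dataset) to show the shape of the calculation:

```python
# Invented (debt-to-GDP %, growth %) country-year observations for illustration.
observations = [
    (12, 4.5), (25, 3.9),   # low debt
    (48, 3.3), (55, 2.9),   # moderate debt
    (72, 3.4), (88, 3.0),   # high debt
    (95, 2.5), (120, 1.9),  # debt above 90% of GDP
]

# Group growth figures into the same debt buckets used in the critique.
buckets = {"<30": [], "30-60": [], "60-90": [], ">90": []}
for debt, growth in observations:
    if debt < 30:
        buckets["<30"].append(growth)
    elif debt < 60:
        buckets["30-60"].append(growth)
    elif debt < 90:
        buckets["60-90"].append(growth)
    else:
        buckets[">90"].append(growth)

averages = {label: sum(vals) / len(vals) for label, vals in buckets.items()}
for label, avg in averages.items():
    print(f"debt {label:>5}%: average growth {avg:.1f}%")
```

Even in this toy version, growth in the highest-debt bucket (2.2 percent) is lower than in the lowest (4.2 percent) but declines gently rather than collapsing, which is the substance of the Herndon, Ash, and Pollin critique.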

As such, the argument that high levels of debt must be avoided, and with it the justification for austerity budgets, substantially evaporates. Whether politicians actually used this paper to shape their beliefs or merely used its findings to give cover for their own pre-existing beliefs is hard to judge.

Excel, of course, isn’t the only thing to blame here. But it played a role. Excel is used extensively in fields such as economics and finance, because it’s an extremely useful tool that can be deceptively simple to use, making it apparently perfect for ad hoc calculations. However, spreadsheet formulae are notoriously fiddly to work with and debug, and Excel has long-standing deficiencies when it comes to certain kinds of statistical analysis.

It’s unlikely that this is the only occasion on which improper use of Excel has produced a bad result with far-reaching consequences. Bruno Iksil, better known as the “London Whale,” racked up billions of dollars of losses for bank JPMorgan. The post mortem of his trades revealed extensive use of Excel, including manual copying and pasting between workbooks and a number of formula errors that resulted in underestimation of risk.

Read the entire article following the jump.

Image: Default Screen of Microsoft Excel 2013, component of Microsoft Office 2013. Courtesy of Microsoft / Wikipedia.
