Business will likely never eliminate gut instinct from the decision-making process. However, as data, and now big data, increasingly pervades every crevice of every organization, data-driven decisions will become the norm. As this happens, more and more businesses will find themselves employing data scientists to help filter, categorize, mine, and analyze these mountains of data in meaningful ways.
The caveat, of course, is that data, big data, and an even bigger reliance on that data require subject matter expertise and analysts with critical thinking skills and sound judgment; data cannot be used blindly.
From Technology Review:
Throughout history, innovations in instrumentation—the microscope, the telescope, and the cyclotron—have repeatedly revolutionized science by improving scientists’ ability to measure the natural world. Now, with human behavior increasingly reliant on digital platforms like the Web and mobile apps, technology is effectively “instrumenting” the social world as well. The resulting deluge of data has revolutionary implications not only for social science but also for business decision making.
As enthusiasm for “big data” grows, skeptics warn that overreliance on data has pitfalls. Data may be biased and is almost always incomplete. It can lead decision makers to ignore information that is harder to obtain, or make them feel more certain than they should. The risk is that in managing what we have measured, we miss what really matters—as Vietnam-era Secretary of Defense Robert McNamara did in relying too much on his infamous body count, and as bankers did prior to the 2007–2009 financial crisis in relying too much on flawed quantitative models.
The skeptics are right that uncritical reliance on data alone can be problematic. But so is overreliance on intuition or ideology. For every Robert McNamara, there is a Ron Johnson, the CEO whose disastrous tenure as the head of JC Penney was characterized by his dismissing data and evidence in favor of instincts. For every flawed statistical model, there is a flawed ideology whose inflexibility leads to disastrous results.
So if data is unreliable and so is intuition, what is a responsible decision maker supposed to do? While there is no correct answer to this question—the world is too complicated for any one recipe to apply—I believe that leaders across a wide range of contexts could benefit from a scientific mind-set toward decision making.
A scientific mind-set takes as its inspiration the scientific method, which at its core is a recipe for learning about the world in a systematic, replicable way: start with some general question based on your experience; form a hypothesis that would resolve the puzzle and that also generates a testable prediction; gather data to test your prediction; and finally, evaluate your hypothesis relative to competing hypotheses.
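The loop described above (question, hypothesis, prediction, data, evaluation) can be sketched in code. The example below is entirely hypothetical and not from the article: it invents a hypothesis that a redesigned sign-up page converts better, simulates the data a business might gather, and evaluates the prediction against the competing "no difference" hypothesis with a simple permutation test. The page names, conversion rates, and sample sizes are all assumptions for illustration.

```python
import random

random.seed(42)

# Hypothesis (hypothetical): a redesigned sign-up page raises the conversion
# rate. Testable prediction: group B (new page) converts more than group A.
# Here we simulate the gathered data; in practice these would come from logs.
conv_a = [1 if random.random() < 0.10 else 0 for _ in range(2000)]  # old page
conv_b = [1 if random.random() < 0.12 else 0 for _ in range(2000)]  # new page

rate_a = sum(conv_a) / len(conv_a)
rate_b = sum(conv_b) / len(conv_b)
observed = rate_b - rate_a

# Evaluate against the competing hypothesis "no real difference" with a
# permutation test: shuffle the group labels and count how often chance
# alone produces a gap at least as large as the one we observed.
pooled = conv_a + conv_b
n = len(conv_a)
extreme = 0
trials = 500
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[n:]) / n - sum(pooled[:n]) / n
    if diff >= observed:
        extreme += 1
p_value = extreme / trials

print(f"observed lift: {observed:+.3f}, permutation p ~ {p_value:.3f}")
```

A small p-value would favor the new-page hypothesis over chance; a large one would send us back to the hypothesis-forming step, which is the point of the loop.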
The scientific method is largely responsible for the astonishing increase in our understanding of the natural world over the past few centuries. Yet it has been slow to enter the worlds of politics, business, policy, and marketing, where our prodigious intuition for human behavior can always generate explanations for why people do what they do or how to make them do something different. Because these explanations are so plausible, our natural tendency is to want to act on them without further ado. But if we have learned one thing from science, it is that the most plausible explanation is not necessarily correct. Adopting a scientific approach to decision making requires us to test our hypotheses with data.
While data is essential for scientific decision making, theory, intuition, and imagination remain important as well—to generate hypotheses in the first place, to devise creative tests of the hypotheses that we have, and to interpret the data that we collect. Data and theory, in other words, are the yin and yang of the scientific method—theory frames the right questions, while data answers the questions that have been asked. Emphasizing either at the expense of the other can lead to serious mistakes.
Also important is experimentation, which doesn’t mean “trying new things” or “being creative” but quite specifically the use of controlled experiments to tease out causal effects. In business, most of what we observe is correlation—we do X and Y happens—but often what we want to know is whether or not X caused Y. How many additional units of your new product did your advertising campaign cause consumers to buy? Will expanded health insurance coverage cause medical costs to increase or decline? Simply observing the outcome of a particular choice does not answer causal questions like these: we need to observe the difference between choices.
Replicating the conditions of a controlled experiment is often difficult or impossible in business or policy settings, but increasingly it is being done in “field experiments,” where treatments are randomly assigned to different individuals or communities. For example, MIT’s Poverty Action Lab has conducted over 400 field experiments to better understand aid delivery, while economists have used such experiments to measure the impact of online advertising.
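The difference between observing a correlation and running a randomized experiment can be made concrete with a small simulation. The sketch below is entirely hypothetical (not from the article): it assumes an ad campaign with a known true effect on purchases, plus a hidden "enthusiasm" trait that makes some customers both more likely to see the ad and more likely to buy anyway. The naive observational comparison is inflated by that confounder, while coin-flip assignment recovers the causal effect. All names and numbers are invented for illustration.

```python
import random

random.seed(7)

# Hypothetical ground truth: the ad causes 2.0 extra units of purchases.
TRUE_EFFECT = 2.0
N = 20000

def purchases(enthusiasm, saw_ad):
    # Purchases depend on the hidden trait, the ad, and noise.
    return 5.0 * enthusiasm + (TRUE_EFFECT if saw_ad else 0.0) + random.gauss(0, 1)

# Observational data: enthusiastic customers self-select into ad exposure.
obs_treated, obs_control = [], []
for _ in range(N):
    e = random.random()
    saw = random.random() < e          # exposure correlates with enthusiasm
    (obs_treated if saw else obs_control).append(purchases(e, saw))

# Field experiment: exposure assigned by coin flip, independent of enthusiasm.
exp_treated, exp_control = [], []
for _ in range(N):
    e = random.random()
    saw = random.random() < 0.5        # random assignment
    (exp_treated if saw else exp_control).append(purchases(e, saw))

def mean(xs):
    return sum(xs) / len(xs)

print("naive observational estimate:", round(mean(obs_treated) - mean(obs_control), 2))
print("randomized estimate:        ", round(mean(exp_treated) - mean(exp_control), 2))
```

The observational estimate mixes the ad's effect with the enthusiasm gap between the self-selected groups; randomization breaks that link, which is exactly what field experiments buy you.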
Although field experiments are not an invention of the Internet era—randomized trials have been the gold standard of medical research for decades—digital technology has made them far easier to implement. Thus, as companies like Facebook, Google, Microsoft, and Amazon increasingly reap performance benefits from data science and experimentation, scientific decision making will become more pervasive.
Nevertheless, there are limits to how scientific decision makers can be. Unlike scientists, who have the luxury of withholding judgment until sufficient evidence has accumulated, policy makers or business leaders generally have to act in a state of partial ignorance. Strategic calls have to be made, policies implemented, reward or blame assigned. No matter how rigorously one tries to base one’s decisions on evidence, some guesswork will be required.
Exacerbating this problem is that many of the most consequential decisions offer only one opportunity to succeed. One cannot go to war with half of Iraq and not the other just to see which policy works out better. Likewise, one cannot reorganize the company in several different ways and then choose the best. The result is that we may never know which good plans failed and which bad plans worked.
Read the entire article here.
Image: Screenshot of Iris, Ayasdi’s data-visualization tool. Courtesy of Ayasdi / Wired.