The Mismanagement of Covid-19 Part II: The Hazards of Experts
How is it that highly intelligent people can make senseless decisions? This was the question that sparked the groundbreaking work by Daniel Kahneman and Amos Tversky in understanding the psychological dynamics of human decision-making. These dynamics include two radically different thinking modes: the intuitive System 1, which is fast thinking that is both highly confident and prone to error, and the rational System 2, which is slow thinking that, while more grounded in probability, uncertainty and doubt, is likely to inform better decisions.
Kahneman and Tversky found that, while most people generally perceive themselves as predominantly rational System 2 thinkers, this is a delusion. Their work provided clear evidence that most human judgments and decisions—even those by experts—are based on System 1, which means most decisions are informed by unconscious biases. Often these unconscious biases are swayed by how a problem is initially framed. The two psychologists discovered that how we frame a situation heavily influences how we decide between alternative courses of action.
Framing Effects
Kahneman and Tversky applied the label of “framing effects” to what they described as the unjustified influences of formulation on beliefs and preferences. In a series of experiments, they observed that people do not choose between things; they choose between descriptions of things. Thus, by simply changing the framing—the description of a situation—they could cause people to completely reverse their preference for how to respond to the situation.
For example, in an experiment conducted at the Harvard Medical School, Tversky divided physicians into two groups. Each group was given statistics about the five-year survival rates for two treatments for lung cancer: surgery and radiation. While the five-year survival rates were clearly higher for those who received surgery, in the short term surgery was riskier than radiation. The two groups were then given two different descriptions of the short-term outcomes and asked to choose the preferred treatment. The first group was given the survival rate: the one-month survival rate is 90%. The second group was given the corresponding 10% mortality rate. Although these two descriptions are logically equivalent, 84% of physicians in the first group chose surgery, while the second group split 50/50 between the two options.
If the preferences were completely rational, the physicians would make the same choice regardless of how the descriptions were framed. However, System 1 thinking is not rational and can be swayed by emotional words. Thus, while 90% survival sounds promising, 10% mortality is shocking. This experiment showed that physicians were just as vulnerable to the framing effect as hospital patients and business school graduates. As Kahneman observed, “Medical training is, evidently, no defense against the power of framing.”
This observation was reinforced by an example that has, interestingly enough, become known as the “Asian disease problem.” Again, the psychologists presented two groups with the same problem:
Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people.
The first group was asked to choose one of two alternative programs that were framed in terms of how many lives would be saved:
If program A is adopted, 200 people will be saved.
If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.
The second group was asked to choose between two alternative programs that were framed in terms of how many people would die:
If program A is adopted, 400 people will die.
If program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.
Once again, the two framings were logically equivalent and the expected outcomes of each program were exactly the same. Nevertheless, Kahneman and Tversky consistently found that the overwhelming majority of respondents in the first group chose program A, while a substantial majority in the second group chose program B.
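The logical equivalence of the two framings can be checked with simple expected-value arithmetic. A minimal sketch in Python, using only the numbers given in the problem statements above:

```python
# Expected outcomes for the Asian disease problem, out of 600 people at risk.

def expected_saved(p_all_saved, total=600):
    """Expected number of lives saved by the gamble (program B)."""
    return p_all_saved * total

# "Lives saved" framing:
program_a_saved = 200                    # 200 people will be saved for certain
program_b_saved = expected_saved(1 / 3)  # 1/3 chance all 600 saved -> 200 expected

# "Lives lost" framing:
program_a_died = 400                     # 400 die for certain -> 200 saved
program_b_died = (2 / 3) * 600           # 2/3 chance 600 die -> 400 expected deaths

print(program_a_saved, program_b_saved)            # 200 200.0
print(600 - program_a_died, 600 - program_b_died)  # 200 200.0
```

Under either description, both programs save an expected 200 of the 600 lives; only the wording differs.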
We might be tempted to conclude that all this experiment proves is the importance of leaving these types of decisions to the experts. Tversky was able to test this assumption when he was invited to speak at a meeting of public health professionals and ran the experiment with the audience. Once again, half the participants received the “lives-saved” options and the others received the “lives-lost” alternatives. The public health experts proved just as prone to the framing effect as previous respondents.
This troubled Kahneman who noted, “It is somewhat worrying that the officials who make decisions that affect everyone’s health can be swayed by such a superficial manipulation—but we must get used to the idea that even important decisions are influenced, if not governed, by System 1.”
The Default Mode
The prime contribution of Kahneman and Tversky’s lifelong work is to convincingly demonstrate that, when it comes to human decision-making, System 1 is the default mode. Although we may perceive ourselves as thoughtful rational decision-makers, the evidence says otherwise. All of us—even experts who profess to be data driven—are susceptible to forming cognitive illusions when our conclusions are based on limited evidence. These illusions are a product of System 1’s propensity to act as if what you see is all there is. This appears to be the case with the current pandemic, where the number of confirmed cases is consistently reported as the number of cases, even though the number of actual cases is unknown. As Kahneman explains, “System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.”
In their work, Kahneman and Tversky found that people are very comfortable with thinking about two or three things at a time. We easily think associatively, which is the process of linking one thought or idea to another. We think metaphorically when we say that one thing is like another. And we naturally think causally whenever we conclude that one thing causes another thing to happen, sometimes even when there is no empirical evidence to support the connection. System 1 is compatible with and designed for these forms of linear thinking. This may explain why System 1 is the default thinking mode.
We are not as comfortable with System 2, which usually requires us to think statistically about many things at one time. In fact, most people—even highly educated people—have difficulty thinking statistically. Statistical thinking recognizes that what you see is not all there is, understands reality is probabilistic rather than deterministic, and naturally produces holistic solutions that are less likely to result in unintended consequences.
Statistical thinking is often hard work because it requires us to learn what we need to know to formulate a solution rather than to assume, based on our experience, we already know what needs to be done. Thinking statistically involves deliberate attention to understanding the complex web of relationships among the various components of a problem and recognizing that some of those components are beyond our areas of expertise. The solutions to complex problems are always holistic and usually require the balanced input of multiple perspectives. That’s why the first step in finding a holistic solution often begins with reframing the problem.
Reframing the Problem
What might have happened if we had had the opportunity to reframe the coronavirus problem? What if, instead of framing this as a public health crisis, we had framed it as a social system crisis? Would that have helped us to construct a more rational solution?
Framing the pandemic as a social system crisis would have almost certainly given greater voice to advisors who were not public health experts. If the primary advisors had been a diverse group of experts that included intensive care medical professionals, social psychologists, sociologists, economists, mental health professionals, small and large business managers, and legal scholars in addition to public health experts, there would have been a greater opportunity to formulate a holistic solution addressing the many concurrent dimensions of this social system crisis.
While the input of the public health experts would be immensely valuable in crafting a holistic solution, they would not necessarily be the dominant voices in shaping the thinking of the team because successful holistic solutions effectively balance the many dimensions of complex problems. Thus, while minimizing Covid-19 deaths would be a critically important goal, it would not be the only important goal.
It is likely that a diversified group would have focused on balancing four goals. The first goal, of course, would be to minimize the number of deaths from Covid-19. A second goal would be to minimize the number of unintended deaths that might result from actions taken to curb the virus. A third goal would almost certainly be to minimize the number of jobs lost in meeting the other two goals. And, finally, a fourth goal would be doing everything we could to maintain the best possible functioning society.
A diverse group of advisors with equal standing and differing perspectives would likely have insisted upon conducting, as soon as possible, the random-sample survey Louis Kaplow suggested in his New York Times editorial. Relying upon quantitative models based on spurious assumptions that could have draconian impacts on various parts of society would not be good enough. They would have recognized that building models based on confirmed cases to assess the spread of the virus is analogous to assessing the proportion of people who are vegetarians by only visiting vegan restaurants. They would have also likely recognized that the continual updating of confirmed cases and related deaths inevitably fosters a cognitive illusion that the death rate is far higher—perhaps many times higher—than it actually is. A diversified group of experts would have challenged the notion that a solution can be data-driven when the most important piece of data—the actual infection rate—remains unknown.
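The vegan-restaurant analogy can be made concrete with a toy calculation. Every number below (the population size, the true infection count, the fraction of infections that get confirmed) is a purely illustrative assumption, not a real estimate; the point is only how a death rate computed from confirmed cases can dwarf the rate implied by the kind of random sample Kaplow proposed:

```python
import random

random.seed(42)

# Hypothetical numbers for illustration only; none are real estimates.
POPULATION = 1_000_000
TRUE_INFECTIONS = 50_000   # actual infections (unknown to decision-makers)
DEATHS = 500               # deaths among the infected

# Biased view: suppose only the sickest 10% of infections are ever tested
# and confirmed, like surveying for vegetarians only in vegan restaurants.
confirmed = int(TRUE_INFECTIONS * 0.10)
cfr = DEATHS / confirmed   # apparent death rate based on confirmed cases: 10%

# Random-sample view: test a random cross-section of the population to
# estimate the true infection rate, then the true death rate.
infected = [True] * TRUE_INFECTIONS + [False] * (POPULATION - TRUE_INFECTIONS)
sample = random.sample(infected, 10_000)
est_rate = sum(sample) / len(sample)      # estimated infection rate, near 5%
ifr = DEATHS / (est_rate * POPULATION)    # estimated true death rate, near 1%

print(f"death rate from confirmed cases: {cfr:.1%}")
print(f"death rate from random sample:   {ifr:.1%}")
```

With these made-up inputs, the confirmed-case calculation overstates the death rate by roughly a factor of ten, which is exactly the cognitive illusion the paragraph above describes.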
Knowing accurate estimates of how many people were actually infected, as well as the proportions that were asymptomatic, had mild symptoms, or showed severe symptoms, would have informed us of the true dimensions of the pandemic threat. Most importantly, we would have known the true death rate as government leaders contemplated what policies to put in place to ensure both medical and economic safety.
If the true death rate turned out to be a small fraction of a percent, as now appears to be the case, a diverse group of advisory experts would have likely recommended an alternative strategy. For example, rather than having a one-size-fits-all approach, they might have recommended that only the high-risk population needed to stay at home and practice social distancing, while the low-risk population continued to work and maintain the economy, similar to the Great Barrington Declaration proposed by three esteemed health professionals in October 2020.
To minimize the fear and uncertainty that individuals feel about their own personal vulnerability, the advisory experts could have rapidly created an online app with a series of questions to assess the risk of every single person in the country. This could be easily accomplished by adapting tools such as the Blue Zones life expectancy assessment, which contains many questions that would be pertinent to calculating an individual’s health risk from the coronavirus. Having some sense of one’s personal vulnerability would have gone a long way toward reducing the pervasive sense of fear that seems to have gripped our nation.
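To illustrate the idea, here is a deliberately toy sketch of such a self-assessment. Every factor, weight, and threshold below is hypothetical; a real tool would use many more questions and would be calibrated by medical professionals against actual risk data:

```python
# Toy personal-risk questionnaire. All factors, weights, and cutoffs
# are hypothetical placeholders, not medical guidance.

def risk_score(age, has_chronic_condition, is_smoker):
    """Sum a few illustrative risk factors into a single score."""
    score = 0
    if age >= 65:
        score += 3
    elif age >= 50:
        score += 2
    elif age >= 35:
        score += 1
    if has_chronic_condition:
        score += 2
    if is_smoker:
        score += 1
    return score

def risk_category(score):
    """Map a score to a coarse risk band."""
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

# Two hypothetical respondents:
print(risk_category(risk_score(age=72, has_chronic_condition=True, is_smoker=False)))   # high
print(risk_category(risk_score(age=28, has_chronic_condition=False, is_smoker=False)))  # low
```

The value of even a crude instrument like this is behavioral rather than clinical: it gives each person a concrete, personal answer instead of a national fear-inducing aggregate.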
With this alternative strategy, governmental economic stimulus packages could have been targeted at the high-risk populations, making sure that workers at high risk for the disease could continue to be paid their full salaries while they needed to stay at home. This approach would have allowed most small businesses to remain open and operate at some threshold level. Any loss of revenues owing to coronavirus effects could have also been covered by federal stimulus funds. This targeted approach would likely have required less funding than was appropriated under the existing stimulus bills. But more importantly, schools could have been kept open, few businesses would have needed to close, far fewer people would have been put out of work, and almost everyone would have felt a sense of financial security because they would be able to pay their bills—all while still flattening the curve. These things may not be as important to the public health experts, but they do matter to the vast majority of everyday citizens.
This alternative strategy is not presented as a specific recommendation, but rather as an example of what could have happened had we reframed the problem. If we had entrusted the crafting of the strategy to solve what is certainly the greatest social system crisis of our lifetimes to a diversified group of experts, they would probably have designed a far better approach and delivered a comprehensive, safe solution. Instead, we have opened ourselves to the hazards of experts by relying upon the narrow and limited knowledge of one scientific specialty to attempt to solve a very broad and complex problem.