By Michael Lewis
Devised by French economist Maurice Allais, this paradox illustrates the inconsistency in people’s preferences when they weigh risks and gambles. Allais initially framed the paradox as an argument against the “self-certainty of American economists” (257), who in his view oversimplified human choice and overemphasized the notion that mathematical models could generally explain human decision making. In response to the Allais paradox, Danny and Amos further developed their own ideas on how human decision making is shaped by both positive prospects and regret. Lewis summarizes their findings: “When they made decisions, people did not seek to maximize utility. They sought to minimize regret” (261). Studying theories like the Allais paradox helped Danny and Amos better contextualize their own work.
Named after English statistician Thomas Bayes, Bayes’s theorem is a statistical formula that evaluates probability when the mind is given new information. The primary example Lewis provides pertains to a University of Michigan study, in which students picked either red or white poker chips and made decisions based on probabilities about the color of the next chip they pulled from the bag. In The Undoing Project, Bayes’s theorem serves as the foundation for Danny and Amos’s relationship, as Amos explained this study as a guest in one of Danny’s classes. Even after Danny scoffed at the premise of the study, the underlying ideas sparked something in both men: How do people actually make decisions? While the University of Michigan study investigated whether people were by nature “conservative Bayesians” or intuitive statisticians, Danny and Amos expanded its skeletal outline into the primary foundations of their subsequent work.
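The chip-drawing logic described above can be sketched with Bayes’s theorem. This is a minimal illustration, not the study’s actual design: the bag compositions and number of draws below are assumed, since Lewis does not give the experiment’s exact numbers.

```python
# A sketch of Bayes's theorem applied to the poker-chip setup: after
# each draw, the observer updates the probability that the chips are
# coming from the mostly-red bag. All numbers here are assumed for
# illustration.

def bayes_update(prior_a, p_draw_given_a, p_draw_given_b):
    """Posterior probability of bag A after one observed draw."""
    numerator = prior_a * p_draw_given_a
    denominator = numerator + (1 - prior_a) * p_draw_given_b
    return numerator / denominator

# Hypothetical bags: A holds 70% red chips, B holds 30% red chips.
P_RED_A, P_RED_B = 0.7, 0.3

# Start with even odds, then observe three red draws in a row.
posterior = 0.5
for _ in range(3):
    posterior = bayes_update(posterior, P_RED_A, P_RED_B)

print(round(posterior, 3))  # 0.927
```

The Michigan study asked whether people revise their odds this sharply in practice; its finding, as Lewis recounts, was that they revise far too timidly, hence the label “conservative Bayesians.”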
Danny and Amos sought to show that context is an important consideration for anyone considering a risky choice, and they developed the notion that “framing” could lead to important inferences. Lewis writes, “simply by changing the description of a situation, and making a gain seem like a loss, you could cause people to completely flip their attitude toward risk, and turn them from risk avoiding to risk seeking” (276). If a problem or scenario was framed differently, people would choose different outcomes, whether making a decision about a medical diagnosis or financial risk. In short, what Danny and Amos discovered was that “people did not choose between things […] they chose between descriptions of things” (278). The story, or “frame,” that accompanies a problem is therefore crucial to people’s understanding of said problem.
To understand Danny and Amos’s ideas on cognitive bias, their definition of heuristics must also be considered. For Danny and Amos, heuristics are what the mind does when faced with uncertain events, such as “success in a new job, the outcome in an election, or the state of a market” (183). Rather than quickly calculating mathematical odds or defaulting to the laws of chance, the mind resorts to rules of thumb as a framework for decisions. These “rules of thumb” are what “Danny and Amos called ‘heuristics’” (183). Such rules often displace statistically sound, data-driven reasoning in favor of expert intuition, which is itself frequently shaped by hidden or even explicit biases. Furthermore, in their jointly published paper “Judgment under Uncertainty: Heuristics and Biases,” Danny and Amos made the following argument:
“while statistically sophisticated people might avoid the simple mistakes made by less savvy people, even the most sophisticated minds were prone to error, […] their intuitive judgments liable to similar fallacies in more intricate and less transparent problems” (223).
This concept of heuristics could serve as a warning for anyone making high-stakes decisions, such as medical doctors like Donald Redelmeier, who argued that Danny and Amos’s clarification on heuristics “provided a language and a logic for articulating some of the pitfalls people encounter when they think” (223).
This effect, linked to Danny and Amos’s ideas on framing and risk aversion, is defined by the lack of contextualization that often occurs when people face risky choices. Rather than considering a risky choice in context, Danny and Amos argued that people “evaluated it in isolation” (276). Instead of objectively evaluating the surrounding circumstances of a particular problem or scenario, most people simply assess their chances of loss or gain through the lens of their own perceptions. Any decision that involves significant stakes is therefore more susceptible to human subjectivity than to a thorough examination of the factors influencing a particular problem. As Lewis writes, “in exploring what they now called the isolation effect, Amos and Danny had stumbled upon another idea—and its real-world implications were difficult to ignore” (277). In essence, the operative force here was not mathematical calculation but a psychological state.
In the context of the book’s narrative, Lewis also alludes to the “isolation” that existed between Danny and Amos when they were most productive together, working behind a closed door on “a gamble with no context” (289), as life imitated their work and vice versa. As they worked together in close collaboration, they explored a vast array of inquiries without context from the outside world.
This term refers to the notion that people focus on what is similar to a model held in their minds when they make judgments. If what they are judging is similar to the model, they then have a basis for comparison. Lewis explains this concept by offering some potential questions people might ask themselves when making a judgment using their own mental models:
“How much do these clouds resemble my mental model of an approaching storm? How closely does this ulcer resemble my mental model of a malignant cancer? Does Jeremy Lin match my mental picture of a future NBA player? Does that belligerent German political leader resemble my idea of a man capable of orchestrating genocide?” (183)
In their theories about these judgments of similarity, which rely on what they called “representativeness,” Danny and Amos were not arguing that these mental models were altogether dysfunctional or harmful. In fact, these models are often useful in forming interpretations of new events. However, it is when these mental models fail that their workings become visible: in “the systematic errors [these subversions] led people to make, you could glimpse the nature of these rules of thumb” (184). Thus, while often useful, judgments of similarity also lead the human mind to make consistent, cyclical mistakes in judgment.
This theory was the basis for the proposal that eventually won Danny the Nobel Prize in 2002 (Amos had already passed away when this award was given, and as Lewis notes, the Nobel is not awarded posthumously). Originally named “value theory,” prospect theory is the idea that when people are given a choice between two identical options, one framed in terms of what the subject stands to gain and the other in terms of what the subject stands to lose, they tend to opt for the former. This concept bridges two seemingly distinct fields: economics and psychology. When economist Richard Thaler came across Danny and Amos’s paper on prospect theory, he “instantly saw it for what it was, a truck packed with psychology that might be driven into inner sanctums of economics and exploded. The logic in the paper was awesome, overpowering” (284). In articulating his enthusiasm for Danny and Amos’s argument for prospect theory, Thaler said: “That paper was what an economist would call proof of existence. It captured so much of human nature” (285). Danny and Amos’s ideas on prospect theory circulated among economists and quickly gained traction, with many economists even adopting the theory as a worldview, in which it was “a given that the only way to change people’s behavior was to change their financial incentives” (286). Prospect theory was more than a thought experiment; it was a real attempt to understand human nature.
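The asymmetry between gains and losses at the heart of prospect theory can be sketched with the value function Kahneman and Tversky later formalized. The specific parameter values below are the standard published estimates, not figures from Lewis’s book; they are used here only to illustrate the shape of the idea.

```python
# A sketch of the prospect-theory value function: outcomes are valued
# relative to a reference point, sensitivity diminishes as stakes grow,
# and losses loom larger than equivalent gains. Parameters are the
# commonly cited Kahneman-Tversky estimates, assumed for illustration.

ALPHA = 0.88   # diminishing sensitivity to gains
BETA = 0.88    # diminishing sensitivity to losses
LAMBDA = 2.25  # loss aversion: losses weigh about 2.25x gains

def value(x):
    """Subjective value of a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# Losing $100 hurts far more than winning $100 pleases:
print(round(value(100), 1))   # 57.5
print(round(value(-100), 1))  # -129.5
```

This asymmetry is what makes framing so powerful: describing the same outcome as a loss rather than a forgone gain changes its subjective weight.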
As Danny and Amos sought to determine whether human beings are “conservative Bayesians,” they had to understand how subjective probability affects people’s decision making. Rather than relying on mathematical data or computed calculations, subjective probability is derived mainly from opinion and past experience: The subject’s own familiarity with a potential outcome will likely determine their decision-making process. To understand how people form judgments in the first place, Danny and Amos devoted an entire paper to the topic of subjective probability. As Lewis explains, “subjective probability meant: the odds you assign to any given situation when you are more or less guessing” (183). These odds, based on personal experience, often lead to mistakes that can become systematic. Along with personal experiences, people often resort to generalization and stereotype when the odds are difficult or even impossible to ascertain. For example, when “they guessed what a little boy would do for a living when he grew up, they thought in stereotypes. If he matched their mental picture of a scientist, they guessed he’d be a scientist—and neglect the prior odds of any kid becoming a scientist” (188). In essence, statistical probability often gives way to distortions caused by bias.
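The scientist example above is a case of base-rate neglect, and a short Bayes calculation shows why the stereotype misleads. All numbers below are assumed for illustration; the book gives none.

```python
# A sketch of the base-rate point: even if a boy strongly resembles
# the stereotype of a scientist, the low prior odds of any child
# becoming one should dominate the judgment. All figures are invented
# for illustration.

def posterior(prior, p_match_if_true, p_match_if_false):
    """Bayes's rule: probability of the outcome given a stereotype match."""
    hit = prior * p_match_if_true
    miss = (1 - prior) * p_match_if_false
    return hit / (hit + miss)

# Suppose 1% of children become scientists, 90% of future scientists
# "look the part," and 20% of everyone else does too.
p = posterior(0.01, 0.9, 0.2)
print(round(p, 3))  # 0.043
```

Even with a strongly “representative” match, the probability stays under 5 percent, yet intuition, guided by the stereotype alone, guesses far higher.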
Johns Hopkins psychology professor Ward Edwards tested theories of human decision making, and his efforts were a precursor to Danny and Amos’s work. Edwards also anticipated what Danny and Amos eventually accomplished: the merging of economics and psychology. In one of Edwards’s thought experiments, if people were handed a menu and given three hot beverage options, “and they said that at some given moment they preferred coffee to tea, and tea to hot chocolate, they should logically prefer coffee to hot chocolate” (103). If this premise were accurate, people would be considered “transitive.” Transitivity refers to the idea that people will order their preferences in a linear, logically consistent manner. Understanding whether people are transitive carried significant implications for economic markets. The more Edwards tested this idea, the clearer one fact became: People are inherently unpredictable, not transitive but constantly shifting from one point to another. As Lewis writes, “more than a quarter of the students had revealed themselves as irrational, at least from the point of view of economic theory” (104). When Danny and Amos started collaborating, they were less interested in answering the transitivity question and more interested in understanding why human beings are generally not transitive in nature.
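The transitivity test Edwards ran can be sketched as a simple check over pairwise preferences. The hot-beverage framing comes from the book; the particular preference sets below are invented for illustration.

```python
# A sketch of checking transitivity in stated preferences, loosely
# modeled on Edwards's hot-beverage thought experiment. A relation is
# transitive if, whenever a > b and b > c, a > c also holds.

from itertools import permutations

def is_transitive(prefers):
    """True if the set of (preferred, less_preferred) pairs is transitive."""
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

# A "rational" subject: coffee > tea, tea > cocoa, coffee > cocoa.
consistent = {("coffee", "tea"), ("tea", "cocoa"), ("coffee", "cocoa")}

# An "irrational" subject whose preferences cycle.
cycling = {("coffee", "tea"), ("tea", "cocoa"), ("cocoa", "coffee")}

print(is_transitive(consistent))  # True
print(is_transitive(cycling))     # False
```

Subjects like the second one, whose choices cycle, are the “irrational” quarter of students Lewis describes.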
This term, cleverly infused into the book’s title, refers to the idea that people often wish to “undo” unpleasant events by creating alternate realities in their mind. When Danny’s nephew died in a plane crash, people found themselves asking “what if” questions, and every question—each asked in a subconscious effort to “undo” the actual event—created a new alternate reality. The more Danny explored this notion, the more convinced he became that “as they moved through the world, people ran simulations of the future” (300). Thus, the rules of thumb for decision making in the human mind are also affected by another type of heuristic, simulation. In other words, people’s decision making is also affected by running through possible scenarios in their mind, such as, “What if I say what I think instead of pretending to agree? What if they hit it to me and the grounder goes through my legs? What happens if I say no to his proposal instead of yes?” (300). These questions, coupled with a desire to avoid regret, are a central part of human decision making. As Lewis writes, “in undoing some event, the mind tended to remove whatever felt surprising or unexpected—which was different from saying that it was obeying the rules of probability” (304).