Acknowledgments
Thanks to my friend Nassim Nicholas Taleb for inspiring me to write this book, even if his advice was not to publish it under any circumstances. Alas, he encouraged me to write novels, arguing that nonfiction isn’t “sexy.” The hours we have spent together discussing how to live in a world we don’t understand have been my favorite of the week. Thanks to Koni Gebistorf, who masterfully edited the original German texts, and to Nicky Griffin, who translated the book into English (when she was away from her office at Google). I couldn’t have picked better publishers and editors than Hollis Heimbouch from HarperCollins and Drummond Moir from Sceptre, who have given these chapters their final finesse. Thanks to the scientists of the ZURICH.MINDS community for the countless debates about the state of research. Special thanks go to Gerd Gigerenzer, Roy Baumeister, Leda Cosmides, John Tooby, Robert Cialdini, Jonathan Haidt, Ernst Fehr, Bruno Frey, Iris Bohnet, Dan Goldstein, Tomáš Sedláček and the philosopher John Gray for the enlightening conversations. I also thank my literary agent, John Brockman, and his superb crew, for helping me with both the American and British editions of this book. Thanks to Frank Schirrmacher for finding space for my columns in the Frankfurter Allgemeine Zeitung, to Giovanni di Lorenzo and Moritz Mueller-Wirth for their publication in Die Zeit (Germany), and to Martin Spieler, who gave them a good home in Switzerland’s Sonntagszeitung. Without the weekly pressure to forge one’s thoughts into a readable format, my notes would never have been published in book form.
For everything that appears here after the countless stages of editing, I alone bear responsibility. My greatest thanks go to my wife, Sabine Ried, who proves to me every day that the “good life”—as defined by Aristotle—consists of far more than clear thoughts and clever actions.
A Note on Sources
Hundreds of studies have been conducted on the vast majority of cognitive and behavioral errors. In a scholarly work, the complete reference section would easily double the pages of this book. I have focused on the most important quotes, technical references, recommendations for further reading, and comments. The knowledge encompassed in this book is based on the research carried out in the fields of cognitive and social psychology over the past three decades.
Survivorship bias in funds and stock market indices, see: Edwin J. Elton, Martin J. Gruber, and Christopher R. Blake, “Survivorship Bias and Mutual Fund Performance,” The Review of Financial Studies 9, no. 4 (1996): 1097—1120.
Statistically relevant results by coincidence (self-selection), see: John P. A. Ioannidis, “Why Most Published Research Findings Are False,” PLoS Med 2, no. 8 (2005): e124.
SWIMMER’S BODY ILLUSION
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 109—10.
“Ideally, the comparison should be made between people who went to Harvard and people who were admitted to Harvard but chose instead to go to Podunk State. Unfortunately, this is likely to produce samples too small for statistical analysis.” Thomas Sowell, Economic Facts and Fallacies (New York: Basic Books, 2008), 106.
David Lykken and Auke Tellegen, “Happiness Is a Stochastic Phenomenon,” Psychological Science 7, no. 3 (May 1996): 189.
In his book Good to Great, Jim Collins cites the CEO of Pitney Bowes, Dave Nassef: “I used to be in the Marines, and the Marines get a lot of credit for building people’s values. But that’s not the way it really works. The Marine Corps recruits people who share the corps’ values, then provides them with training required to accomplish the organization’s mission.”
The random sequence OXXXOXXXOXXOOOXOOXXOO: Thomas Gilovich, How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life (New York: Free Press, 1993), 16.
Daniel Kahneman and Amos Tversky, “Subjective Probability: A Judgment of Representativeness,” in Daniel Kahneman, Paul Slovic, and Amos Tversky, Judgment under Uncertainty: Heuristics and Biases (New York: Cambridge University Press, 1982), 32—47.
This paper caused controversy because it destroyed many athletes’ and sports commentators’ belief in the “hot hand”—in lucky streaks: Thomas Gilovich, Robert Vallone, and Amos Tversky, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” Cognitive Psychology 17 (1985): 295—314.
The Virgin Mary on toast on BBC: accessed November 1, 2012, http://news.bbc.co.uk/2/hi/4034787.stm.
The clustering illusion has been recognized for centuries. In the eighteenth century, David Hume commented in The Natural History of Religion: “We see faces on the moon and armies in the clouds.”
Other examples from the Wikipedia entry for “Perceptions of Religious Imagery in Natural Phenomena”: The “Nun Bun” was a cinnamon pastry whose markings resembled the nose and jowls of Mother Teresa. It was found in a Nashville coffee shop in 1996 but was stolen on Christmas in 2005. See: “Mother Teresa Is Not Amused,” Seattle Times, May 22, 1997. “Our Lady of the Underpass” was another appearance by the Virgin Mary, this time as a salt stain under Interstate 94 in Chicago in 2005. Other cases include Hot Chocolate Jesus, Jesus on a shrimp tail dinner, Jesus in a dental X-ray, and a Cheeto shaped like Jesus.
A side comment: I don’t understand how people can recognize the face of Jesus—or of the Virgin Mary. Nobody knows what he looked like. No pictures exist from his lifetime.
Recognizing faces in objects is called “pareidolia”—clocks, the front of a car, the moon.
The brain processes different kinds of objects in different regions. As soon as something looks like a face, the brain treats it as a face—which is very different from how it handles other objects.
Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: William Morrow, 1993), 114—65.
Solomon E. Asch, “Effects of Group Pressure upon the Modification and Distortion of Judgment,” in H. Guetzkow (ed.), Groups, Leadership and Men (Pittsburgh: Carnegie Press, 1951), 177—90.
Canned laughter works especially well if it’s in-group laughter. “Participants laughed and smiled more, laughed longer, and rated humorous material more favorably when they heard in-group laughter rather than out-group laughter or no laughter at all.” See: Michael J. Platow et al., “It’s Not Funny If They’re Laughing: Self-Categorization, Social Influence, and Responses to Canned Laughter,” Journal of Experimental Social Psychology 41, no. 5 (2005): 542—50.
The storm of enthusiasm for Goebbels’s speech did not stem from social proof alone. What you do not see in the YouTube video is a banner above the speaker declaring “Total War = Shortest War,” an argument that made sense to many. After the Stalingrad debacle, people were sick of the war. Thus, the population had to be won back with this argument: The more aggressively the war was fought, the quicker it would be over. Thanks to Johannes Grützig (Germany) for this insight. My comment: I don’t think the Hitler regime, even before the speech, had any interest in waging war longer than necessary. In this respect, Goebbels’s argument is not convincing.
Besides the vacation restaurant, there’s another case where social proof is of value: if you have tickets to a football game in a foreign city and don’t know where the stadium is. Here, it makes sense to follow the people who look like football fans.
German philosopher Friedrich Nietzsche warned half a century before the Goebbels frenzy: “Madness is a rare thing in individuals—but in groups, parties, peoples, and ages it is the rule.”
SUNK COST FALLACY
The classic research on the sunk cost fallacy is: H. R. Arkes and C. Blumer, “The Psychology of Sunk Cost,” Organizational Behavior and Human Decision Processes 35 (1985): 124—40. In this research, Arkes and Blumer asked subjects to imagine that they had purchased tickets for a ski trip to Michigan (at a price of $100) and to Wisconsin (at a price of $50)—for the same day. The tickets are nonrefundable. Which ticket are you going to keep, assuming that you prefer the Wisconsin trip? Most subjects picked the less preferred trip to Michigan because of its higher ticket price.
On the Concorde, see: P. J. Weatherhead, “Do Savannah Sparrows Commit the Concorde Fallacy?,” in Behavioral Ecology and Sociobiology (Berlin: Springer-Verlag, 1979), vol. 5, 373—81.
It’s a striking finding that lower animals and young children don’t exhibit the sunk cost fallacy. Only later in life do we begin to display this irrational behavior. Read: Hal R. Arkes and Peter Ayton, “The Sunk Cost and Concorde Effects: Are Humans Less Rational than Lower Animals?,” Psychological Bulletin 125 (1999): 591—600.
Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: HarperCollins, 1993), 17—56.
Robert Trivers published the theory of reciprocal altruism in 1971, which shed light on all kinds of human behavior. Thus, reciprocity is the basis for biological cooperation—besides kinship. See any basic biology textbook since 1980.
For evolutionary psychology’s justification of reciprocity, see: David M. Buss, Evolutionary Psychology: The New Science of the Mind (Boston: Allyn and Bacon, 1999). Also: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005).
CONFIRMATION BIAS (PART 1)
How Darwin handled the confirmation bias, in: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 462.
“What Keynes was reporting is that the human mind works a lot like the human egg. When one sperm gets into a human egg, there’s an automatic shut-off device that bars any other sperm from getting in. The human mind tends strongly toward the same sort of result. And so, people tend to accumulate large mental holdings of fixed conclusions and attitudes that are not often reexamined or changed, even though there is plenty of good evidence that they are wrong.” In: Munger, Poor Charlie’s Almanack, 461.
“What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact.” Warren Buffett at the Berkshire Hathaway annual meeting, 2002, quoted in Peter Bevelin, Seeking Wisdom: From Darwin to Munger (Malmö, Sweden: PCA Publications, 2007), 56.
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 58—59.
For the experiment with the sequence of numbers, see: Peter C. Wason, “On the Failure to Eliminate Hypotheses in a Conceptual Task,” Quarterly Journal of Experimental Psychology 12, no. 3 (1960): 129—40.
“Faced with the choice between changing one’s mind and proving there is no need to do so, almost everyone gets busy on the proof.” John Kenneth Galbraith, The Essential Galbraith (New York: Houghton Mifflin, 2001), 241.
CONFIRMATION BIAS (PART 2)
Stereotyping as a special case of the confirmation bias, see: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 198—200.
Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: HarperCollins, 1993), 208—36.
For the track record of doctors before 1900 and a beautiful exposition on the authority of doctors and their strange theories, see: Noga Arikha, Passions and Tempers: A History of the Humours (New York: Harper Perennial, 2008).
“Iatrogenic” conditions and injuries are those caused by medical treatment, for example, bloodletting.
After the 2008 financial crisis, two unexpected events of global proportions (Black Swans) took place: the Arab uprisings (2011) and the tsunami/nuclear disaster in Japan (2011). Not one of the world’s estimated 100,000 political and security authorities foresaw (or even could foresee) these events. This should be reason enough to distrust them—particularly if they are “experts” in all things social (fashion trends, politics, economics). These people are not stupid. They are simply unfortunate enough to have chosen a career in which they cannot win. Two alternatives are open to them: (a) to admit they don’t know (not the best choice if you have a family to feed) or (b) to spout hot air.
Stanley Milgram, Obedience to Authority: An Experimental View (New York: Harper and Row, 1974). There is also a great DVD entitled Obedience (1969).
“If a CEO is enthused about a particularly foolish acquisition, both his internal staff and his outside advisors will come up with whatever projections are needed to justify his stance. Only in fairy tales are emperors told that they are naked.” In: Warren Buffett, letter to shareholders of Berkshire Hathaway, 1998.
Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: HarperCollins, 1993), 11—16.
Charlie Munger calls the contrast effect the “Contrast-Misreaction Tendency.” See: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 483.
Dan Ariely refers to the effect as the “relativity problem.” See: Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions, rev. and expanded ed. (New York: Harper, 2009), chapter 1.
Contrasting factors may lead you to take the long way around. See: Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision under Risk,” Econometrica 47, no. 2 (1979): 263—92.
The example with the letter “k”: Amos Tversky and Daniel Kahneman, “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 5 (1973): 207—32.
The availability bias distorts the map of risks in our minds. Tornadoes, airplane crashes, and electrocutions are widely reported in the media, which makes them easily available in our minds. On the other hand, deaths resulting from asthma, vaccinations, and glucose intolerance are underestimated because they are usually not reported. Read: Sarah Lichtenstein et al., “Judged Frequency of Lethal Events,” Journal of Experimental Psychology: Human Learning and Memory 4 (1978): 551—78.
Another great quote from Charlie Munger on the availability bias: “You see that again and again—that people have some information they can count well and they have other information much harder to count. So they make the decision based only on what they can count well. And they ignore much more important information because its quality in terms of numeracity is less—even though it’s very important in terms of reaching the right cognitive result. All I can tell you is that around Wesco [Charlie Munger’s investment firm, comment RD] and Berkshire, we try not to be like that. We have Lord Keynes’ attitude, which Warren quotes all the time: ’We’d rather be roughly right than precisely wrong.’ In other words, if something is terribly important, we’ll guess at it rather than just make our judgment based on what happens to be easily countable.” In: Peter Bevelin, Seeking Wisdom: From Darwin to Munger (Malmö, Sweden: PCA Publications, 2007), 176.
Another way of stating the availability bias by Charlie Munger: “An idea or a fact is not worth more merely because it is easily available to you.” In: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 486. Quoted from Wesco Financial annual meeting, 1990, Outstanding Investor Digest, June 28, 1990, 20—21.
The availability bias is the reason why, when it comes to risk management, firms focus primarily on risks in the financial market: There is plenty of data on this. With operational risk, however, there is almost no data. It’s not public. You would have to painstakingly cobble it together from many companies and that’s expensive. For this reason, we create theories using material that is easy to find.
“The medical literature shows that physicians are often prisoners of their first-hand experience: their refusal to accept even conclusive studies is legendary.” Robyn M. Dawes, Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally (New York: Westview Press, 2011), 102.
Confidence in the quality of your own decisions depends solely on the number of decisions (predictions) made, regardless of how accurate or inaccurate they were. This is the chief problem with consultants. They make tons of decisions and predictions but seldom validate them after the fact. They are on to the next projects, the next clients, and if something went wrong, well, it was a faulty implementation of their ideas and strategies. See: Hillel J. Einhorn and Robin M. Hogarth, “Confidence in Judgment: Persistence of the Illusion of Validity,” Psychological Review 85, no. 5 (September 1978): 395—416.
THE IT’LL-GET-WORSE-BEFORE-IT-GETS-BETTER FALLACY
No reference literature. This error in thinking is obvious.
“The king died and then the queen” is a story. “The king died and then the queen died of grief” is a plot. The difference between the two is causality. The English novelist E. M. Forster proposed this distinction in 1927.
Scientists still debate which version of the king/queen story is easier to recall from memory. The results of one study point in the following direction: If it takes a lot of mental effort to link two propositions, recall is poor. If it takes zero mental effort to link them, recall is poor, too. But if it takes an intermediate level of mental work, recall is best. In other words, take these two pairs of sentences: “Joey’s big brother punched him again and again. The next day his body was covered by bruises.” “Joey’s crazy mother became furiously angry with him. The next day his body was covered by bruises.” To understand the second pair, you must make an extra logical inference, and by putting in this extra work you form a richer memory of what you’ve read. The study showed that recognition and recall memory were poorest for the most and least related causes and best for causes of intermediate relatedness. Janice E. Keenan et al., “The Effects of Causal Cohesion on Comprehension and Memory,” Journal of Verbal Learning and Verbal Behavior 23, no. 2 (April 1984): 115—26.
Robyn M. Dawes, Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally (New York: Westview Press, 2001), 111—13.
“Narrative imagining—story—is the fundamental instrument of thought.” Mark Turner, The Literary Mind: The Origins of Thought and Language (New York: Oxford University Press, 1998), 4.
The vignette of the car driving over the bridge, from Nassim Nicholas Taleb, personal communication.
On Reagan’s election: John F. Stacks, “Where the Polls Went Wrong,” Time magazine, December 1, 1980.
One of the classic studies is from Baruch Fischhoff. He asked people to judge the outcome of a war they knew little about (British forces against the Nepalese Gurkhas in Bengal in 1814). Those who knew the outcome judged that outcome as much more probable. See: Baruch Fischhoff, “Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment under Uncertainty,” Journal of Experimental Psychology: Human Perception and Performance 104 (1975): 288—99.
H. Blank, J. Musch, and R. Pohl, “Hindsight Bias: On Being Wise after the Event,” Social Cognition 25, no. 1 (2007): 1—9.
The original research paper on overconfidence: Sarah Lichtenstein and Baruch Fischhoff, “Do Those Who Know More Also Know More about How Much They Know?,” Organizational Behavior and Human Performance 20 (1977): 159—83.
Marc Alpert and Howard Raiffa, “A Progress Report on the Training of Probability Assessors,” in Daniel Kahneman, Paul Slovic, and Amos Tversky, Judgment under Uncertainty: Heuristics and Biases (New York: Cambridge University Press, 1982), 294—305.
Ulrich Hoffrage, “Overconfidence,” in Rüdiger Pohl, Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgment and Memory (Hove, UK: Psychology Press, 2004), 235—54.
Dale Griffin and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 230—49.
Even self-predictions are consistently overconfident: Robert P. Vallone, Dale W. Griffin, Sabrina Lin, and Lee Ross, “Overconfident Predictions of Future Actions and Outcomes by Self and Others,” Journal of Personality and Social Psychology 58, no. 4 (April 1990): 582—92.
See also: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 241—44.
“ . . . for men, overconfidence probably paid off more than underconfidence did.” To learn more about why male overconfidence was important for evolution, see this interesting hypothesis: Roy F. Baumeister, Is There Anything Good about Men? How Cultures Flourish by Exploiting Men (Oxford, UK: Oxford University Press, 2010), 211—13.
Discussion on overconfidence, particularly the hypothesis that an inflated self-image benefits health, see: Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), chapter 19, 217—30.
Extreme confidence or even overconfidence plays a role in the relationship between patient and doctor. “Doctors need to have some level of confidence to be able to interact with patients and everybody else, the nurses . . . In the emergency room, when everything is happening at once and the patient’s in shock, I like to hear a voice that’s steady and calm.” Dr. Keating quoted in Christopher Chabris and Daniel Simons, The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us (New York: Crown, 2010), 104.
“We all encounter hundreds or even thousands of people whom we don’t know well, but whose confidence we can observe—and draw conclusions from. For such casual acquaintances, confidence is a weak signal. But in a smaller-scale, more communal society, such as the sort in which our brains evolved, confidence would be a much more accurate signal of knowledge and abilities.” Ibid., 108.
The story with Max Planck is probably invented: Charlie Munger, University of Southern California School of Law Commencement, May 13, 2007. Printed in Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 399 and 435.
“You have to stick within what I call your circle of competence. You have to know what you understand and what you don’t understand. It’s not terribly important how big the circle is. But it is terribly important that you know where the perimeter is.” In: Peter Bevelin, Seeking Wisdom: From Darwin to Munger (Malmö, Sweden: PCA Publications, 2007), 253.
“Again, that is a very, very powerful idea. Every person is going to have a circle of competence. And it’s going to be very hard to enlarge that circle. If I had to make my living as a musician . . . I can’t even think of a level low enough to describe where I would be sorted out to if music were the measuring standard of the civilization. So you have to figure out what your own aptitudes are. If you play games where other people have their aptitudes and you don’t, you’re going to lose. And that’s as close to certain as any prediction that you can make. You have to figure out where you’ve got an edge. And you’ve got to play within your own circle of competence.” Charlie Munger, “A Lesson on Elementary Worldly Wisdom as It Relates to Investment Management and Business,” University of Southern California, 1994, in Munger, Poor Charlie’s Almanack, 192.
“In the 2005 comedy-drama The Weather Man, the title character (played by Nicolas Cage) is paid well but receives little respect for his job, which consists entirely of acting authoritative while reading forecasts prepared by others.” Christopher Chabris and Daniel Simons, The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us (New York: Crown, 2010), 143.
ILLUSION OF CONTROL
The giraffe example from Max Gunther, The Luck Factor: Why Some People Are Luckier Than Others and How You Can Become One of Them (Petersfield, UK: Harriman House, 1977), chapter 3.
On rolling dice in casinos: J. M. Henslin, “Craps and Magic,” American Journal of Sociology 73 (1967): 316—30.
Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 171.
The original study: Ellen J. Langer and J. Roth, “Heads I Win, Tails It’s Chance: The Illusion of Control as a Function of the Sequence of Outcomes in a Purely Chance Task,” Journal of Personality and Social Psychology 32, no. 6 (December 1975): 951—55.
Psychologist Roy Baumeister has shown that people tolerate more pain if they feel they understand their disease. The chronically ill cope much better when doctors can name the disease and explain what it is and does. It doesn’t even have to be true. The effect works even if there is no proven cure for the disease. See: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 98—103.
People gain control by bringing the environment into line with their wishes (primary control), but also by bringing their wishes into line with the environment (secondary control). The illusion of control belongs to the former strategy. This is the paper on it: Fred Rothbaum, John R. Weisz, and Samuel S. Snyder, “Changing the World and Changing the Self: A Two-Process Model of Perceived Control,” Journal of Personality and Social Psychology 42, no. 1 (1982): 5—37.
The original experiment with two buttons: Herbert M. Jenkins and William C. Ward, “Judgment of Contingency between Responses and Outcomes,” Psychological Monographs 79 (1965): 1—17.
The later experiment with just one button and no obligation to push the button. The subjects still had the illusion of control: Lorraine G. Allan and Herbert M. Jenkins, “The Judgment of Contingency and the Nature of the Response Alternatives,” Canadian Journal of Psychology 34 (1980): 1—11.
The following four references shed light on placebo buttons:
Dan Lockton, “Placebo Buttons, False Affordances and Habit-Forming,” Design with Intent, blog (http://architectures.danlockton.co.uk/2008/10/01/placebo-buttons-false-affordances-and-habit-forming/).
Michael Luo, “For Exercise in New York Futility, Push Button,” New York Times, February 27, 2004.
Nick Paumgarten, “Up and Then Down—The Lives of Elevators,” New Yorker, April 21, 2008.
Jared Sandberg, “Employees Only Think They Control Thermostat,” Wall Street Journal, January 15, 2003.
INCENTIVE SUPER-RESPONSE TENDENCY
For an overview of Charlie Munger’s thoughts on the incentive super-response tendency, read: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 450—57.
Charles T. Munger: “Perhaps the most important rule in management is: ’Get the incentives right.’ ” Ibid., 451.
The story with the fish lures: Ibid., 199.
REGRESSION TO MEAN
Beware: Regression to the mean is not a causal phenomenon; it is purely statistical.
Daniel Kahneman: “I had the most satisfying Eureka experience of my career while attempting to teach flight instructors that praise is more effective than punishment for promoting skill-learning. When I had finished my enthusiastic speech, one of the most seasoned instructors in the audience raised his hand and made his own short speech, which began by conceding that positive reinforcement might be good for the birds, but went on to deny that it was optimal for flight cadets. He said, ’On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver, and in general when they try it again, they do worse. On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. So please don’t tell us that reinforcement works and punishment does not, because the opposite is the case.’ This was a joyous moment, in which I understood an important truth about the world.” Quote: Wikipedia entry, “Regression toward the Mean.”
The story with the monkeys, see: Burton Gordon Malkiel, A Random Walk Down Wall Street: The Time-Tested Strategy for Successful Investing (New York: W.W. Norton, 1973), 26.
Jonathan Baron and John C. Hershey, “Outcome Bias in Decision Evaluation,” Journal of Personality and Social Psychology 54, no. 4 (1988): 569—79.
In case you want to calculate the example with the surgeons on your own, take any textbook on statistics and go to the chapter on urn models and “drawing with replacement.” With no skill involved, the probabilities are as follows: nobody dies: 32.8 percent. One patient dies: 41.0 percent. Two patients die: 20.5 percent. Three patients die: 5.1 percent. Four patients die: 0.6 percent. Five patients die: virtually zero probability.
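The percentages above follow from a simple binomial model; here is a minimal sketch, assuming five patients per surgeon and a 20 percent chance of death per operation (the rate implied by the figures in the note):

```python
from math import comb

# Binomial model: five independent operations, each with an
# assumed 20 percent chance of death (inferred from the note's figures).
n, p = 5, 0.20

for k in range(n + 1):
    prob = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"{k} of {n} patients die: {prob:.1%}")
```

Running this reproduces the note’s distribution (32.8 percent, 41.0 percent, 20.5 percent, and so on), confirming that pure chance, with no skill involved, generates exactly these outcomes.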
See also: Nassim Nicholas Taleb, Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, 2nd updated ed. (New York: Random House, 2004), 154.
For the historian error, see also: David Hackett Fischer, Historians’ Fallacies: Toward a Logic of Historical Thought (New York: Harper, 1970), 209—13.
PARADOX OF CHOICE
The Barry Schwartz video The Paradox of Choice can be found on TED.com.
Barry Schwartz, The Paradox of Choice: Why More Is Less (New York: Harper, 2004).
The problems with the paradox of choice are even more serious than those presented in the text. Tests have confirmed that decision making depletes energy that is later needed to keep emotional impulses in check. See: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 316—25.
People like autonomy but dislike making highly consequential decisions. See: Simona Botti, Kristina Orfali, and Sheena S. Iyengar, “Tragic Choices: Autonomy and Emotional Response to Medical Decisions,” Journal of Consumer Research 36, no. 3 (2009): 337—52.
The more choice we have, the less satisfied we are after having made the choice. See: Sheena S. Iyengar, Rachael E. Wells, and Barry Schwartz, “Doing Better but Feeling Worse: Looking for the ’Best’ Job Undermines Satisfaction,” Psychological Science 17, no. 2 (2006): 143—50.
“Letting people think they have some choice in the matter is a powerful tool for securing compliance.” Baumeister, The Cultural Animal, 323.
Joe Girard, How to Sell Anything to Anybody (New York: Simon & Schuster, 1977).
“We rarely find that people have good sense unless they agree with us.” (La Rochefoucauld)
Robert Cialdini dedicated an entire chapter to the liking bias: Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: HarperCollins, 1993), 167—207.
Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions, expanded ed. (New York: Harper Perennial, 2010), chapter 7, “The High Price of Ownership,” 127—38.
The coffee mugs: Daniel Kahneman, Jack Knetsch, and Richard Thaler, “Experimental Test of the Endowment Effect and the Coase Theorem,” Journal of Political Economy 98, no. 6 (1990): 1325—48.
Transactions don’t happen if the lowest price a seller is ready to accept is higher than the highest price a buyer is willing to pay. Why this often is the case: Ziv Carmon and Dan Ariely, “Focusing on the Forgone: How Value Can Appear So Different to Buyers and Sellers,” Journal of Consumer Research 27 (2000): 360—70.
“ . . . cutting your losses is a good idea, but investors hate to take losses because, tax considerations aside, a loss taken is an acknowledgment of error. Loss-aversion combined with ego leads investors to gamble by clinging to their mistakes in the fond hope that some day the market will vindicate their judgment and make them whole.” Peter L. Bernstein, Against the Gods: The Remarkable Story of Risk (New York: Wiley, 1996), 276.
“A loss has about two and a half times the impact of a gain of the same magnitude.” Niall Ferguson, The Ascent of Money: A Financial History of the World (New York: Penguin Press, 2008), 345.
“Losing ten dollars is perceived as a more extreme outcome than gaining ten dollars. In a sense, you know you will be more unhappy about losing ten dollars than you would be happy about winning the same amount, and so you refuse, even though a statistician or accountant would approve of taking the bet.” Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 319.
The more work you put into something, the more ownership you begin to feel for it (also called the “IKEA effect”). Michael I. Norton, Daniel Mochon, and Dan Ariely, “The ’IKEA Effect’: When Labor Leads to Love” (working paper 11-091, Harvard Business School, March 2011).
The story about the church explosion: Luke Nichols, “Church Explosion 60 Years Ago Not Forgotten,” Beatrice Daily Sun, March 1, 2010.
Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 164.
For a good discussion on miracles, see: Peter Bevelin, Seeking Wisdom: From Darwin to Munger (Malmö, Sweden: PCA Publications, 2007), 167.
Numerous readers have contacted me regarding the story of the exploding church. They point out that the probability of all fifteen members arriving late is infinitesimally small. Let’s assume, for example, that each member has a 5 percent probability of arriving thirty minutes late (meaning every twentieth rehearsal, or around twice a year, someone comes late), and that there is no correlation between individuals’ late arrivals. The probability that all fifteen members arrive late is then 0.05 to the power of 15, which comes to roughly 3 times 10 to the power of minus 20. This calculation is correct, but imagine that the probabilities are correlated, which I believe was the case. How often does it happen that a drama or sports club has such a terrible ambiance that no one races to get to the next practice? At the very beginning of my literary career I had readings for which we’d sold thirty tickets, yet not one person showed up. The weather was miserable and something more exciting was on television. In short (and without evidence for it), I believe the probabilities were highly correlated. It certainly was the case with the married couple whose car didn’t start.
Of course, the probability does not increase exactly by a factor of 100 if you have a hundred other friends. Imagine the probability is 2 percent that a friend calls just as you think about him. This does not become 200 percent if you have a hundred friends. Rather, it is 1 − 0.98^100 = 86.7 percent.
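Both calculations in the two notes above can be verified in a few lines of Python; this is only a sanity check of the arithmetic already given, nothing more:

```python
# Exploding-church note: probability that all 15 choir members
# independently arrive late, each with a 5 percent chance.
p_all_late = 0.05 ** 15

# Calling-friend note: with a 2 percent chance per friend, the
# probability that at least one of 100 friends calls is the
# complement of "none of them calls."
p_at_least_one = 1 - 0.98 ** 100

print(f"{p_all_late:.2e}")      # 3.05e-20, i.e., about 3 x 10^-20
print(f"{p_at_least_one:.1%}")  # 86.7%
```

The independence assumption is exactly what the note goes on to question: once latenesses are correlated, the first number can be many orders of magnitude larger.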
Irving L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes, 2nd ed. (Boston: Houghton Mifflin, 1982).
An opposite case of groupthink is swarm intelligence (James Surowiecki, The Wisdom of Crowds [New York: Doubleday, 2004]). Here is an overview: The large mass of average people (i.e., not a pool of experts) often finds remarkably correct solutions. Francis Galton (1907) demonstrated this in a nice experiment: He attended a cattle fair, which was also running a competition to guess the weight of an ox. Galton reckoned the visitors would not be up to the challenge and decided to statistically evaluate the almost eight hundred guesses. The median of the estimates (1,197 pounds) was astonishingly close to the real weight of the ox (1,207 pounds). Groupthink occurs when participants interact. Swarm intelligence, on the other hand, occurs when players act independently of one another (e.g., when making guesses), which happens less and less. Swarm intelligence is very difficult to replicate scientifically.
NEGLECT OF PROBABILITY
Alan Monat, James R. Averill, and Richard S. Lazarus, “Anticipatory Stress and Coping Reactions under Various Conditions of Uncertainty,” Journal of Personality and Social Psychology 24, no. 2 (November 1972): 237—53.
“Probabilities constitute a major human blind spot and hence a major focus for simplistic thought. Reality (especially social reality) is essentially probabilistic, but human thought prefers to treat it in simple, black-and-white categories.” Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 206.
Since we have no intuitive understanding of probabilities, we also have no intuitive understanding of risk. Thus, stock market crashes must happen again and again to make hidden risks visible. It took an amazingly long time for economists to understand this. See: Peter L. Bernstein, Against the Gods: The Remarkable Story of Risk (New York: Wiley, 1996), 247—48.
However, what many economists and investors have not yet grasped is: Volatility is a poor measure of risk. And yet they use it in their evaluation models. See the following quote from Charlie Munger: “How can professors spread this nonsense that a stock’s volatility is a measure of risk? I’ve been waiting for this craziness to end for decades. It’s been dented, but it’s still out there.” Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 101.
For a full discussion on how we (incorrectly) perceive risk: Paul Slovic, The Perception of Risk (London: Earthscan, 2000).
If the potential outcome of a technology is emotionally powerful, the risk (1 percent or 99 percent) has almost no bearing on the attractiveness or unattractiveness of that technology. Paul Slovic, Melissa Finucane, Ellen Peters, and Donald G. MacGregor, “The Affect Heuristic,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 409.
People are very sensitive to departures from absolute certainty and impossibility. But they are not very sensitive to departures from mid-range probabilities. See: Yuval Rottenstreich and Christopher K. Hsee, “Money, Kisses, and Electric Shocks: On the Affective Psychology of Risk,” Psychological Science 12 (2001): 185—90.
An example is the Delaney Clause of the Food and Drug Act of 1958, which stipulated a total ban on synthetic carcinogenic food additives. The Delaney Clause stated, “No additive shall be deemed safe if it is found to induce cancer when ingested by man or animal.”
Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: HarperCollins, 1993), 237—71.
The cookie experiment, see: Stephen Worchel, Jerry Lee, and Akanbi Adewole, “Effects of Supply and Demand on Ratings of Object Value,” Journal of Personality and Social Psychology 32, no. 5 (November 1975): 906—14.
For the poster story, see: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 102.
The same works with music records instead of posters: Jack W. Brehm, Lloyd K. Stires, John Sensenig, and Janet Shaban, “The Attractiveness of an Eliminated Choice Alternative,” Journal of Experimental Social Psychology 2, no. 3 (1966): 301—13.
Jack W. Brehm and Sharon S. Brehm frame the behavior as “reactance.” Brehm and Brehm, Psychological Reactance: A Theory of Freedom and Control (New York: Academic Press, 1981).
The aphorism “When you hear hoofbeats behind you, don’t expect to see a zebra” was coined in the late 1940s. Since horses are the most commonly encountered hoofed animals and zebras are very rare, you can confidently guess that the animal making the hoofbeats is probably a horse. By 1960, the aphorism was widely known in medical circles. Source: http://en.wikipedia.org/wiki/Zebra_(medicine).
The example with the Mozart fan, see: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 206—7.
The classic study on the base-rate neglect is: Daniel Kahneman and Amos Tversky, “On the Psychology of Prediction,” Psychological Review 80 (1973): 237—51.
The vignette with the wine tasting: Nassim Nicholas Taleb, personal communication and early manuscript of The Black Swan.
See also: Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 115—16.
One of the classic papers is: Iddo Gal and Jonathan Baron, “Understanding Repeated Simple Choices,” Thinking and Reasoning 2, no. 1 (May 1, 1996): 81—98.
The gambler’s fallacy is also called the “Monte Carlo fallacy.” You can find the example from 1913 in the footnote of: Jonah Lehrer, How We Decide (New York: Houghton Mifflin Harcourt, 2009), 66.
The IQ example: Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 113.
See also: Thomas Gilovich, Robert Vallone, and Amos Tversky, “The Hot Hand in Basketball: On the Misperception of Random Sequences,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 601—16.
The example with the loaded dice adapted from: Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 124.
For the social security numbers and wheel of fortune, see: Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions, expanded ed. (New York: Harper Perennial, 2010), chapter 2. See also: Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (September 27, 1974): 1124—31.
The Abraham Lincoln example—albeit in modified form, see: Nicholas Epley and Thomas Gilovich, “Putting Adjustment Back in the Anchoring and Adjustment Heuristic,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 139—49.
Also slightly modified in: Ulrich Frey and Johannes Frey, Fallstricke: Die häufigsten Denkfehler in Alltag und Wissenschaft (Munich: Beck, 2009), 40. There is no English translation of this book.
The Attila anecdote, see: J. Edward Russo and Paul J. H. Schoemaker, Decision Traps: The Ten Barriers to Decision-Making and How to Overcome Them (New York: Simon & Schuster, 1989), 6.
On estimating house prices, see: Gregory B. Northcraft and Margaret A. Neale, “Experts, Amateurs, and Real Estate: An Anchoring-and-Adjustment Perspective on Property Pricing Decisions,” Organizational Behavior and Human Decision Processes 39 (1987): 84—97.
Anchoring in negotiation and sales situations, see: Ilana Ritov, “Anchoring in Simulated Competitive Market Negotiation,” Organizational Behavior and Human Decision Processes 67, no. 1 (July 1996): 16—25.
We all know the extraordinarily high requests for damages in liability lawsuits. One hundred million dollars for burning your fingers on a coffee cup. These requests work—thanks to anchoring. See: Gretchen B. Chapman and Brian H. Bornstein, “The More You Ask For, the More You Get: Anchoring in Personal Injury Verdicts,” Applied Cognitive Psychology 10 (1996): 519—40.
The goose example comes from Nassim Taleb, though he used a Thanksgiving turkey. Taleb borrowed the example from Bertrand Russell (he used a chicken), who, in turn, borrowed it from David Hume. See: Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 40.
Induction is a major topic in epistemology: How can we make statements about the future when the past is all we have? Answer: We cannot. Each case of induction is always fraught with uncertainty. The same goes for causality: We can never know if things are causally linked, even if we have observed them a million times. David Hume covered these issues brilliantly in the eighteenth century. Later it was Karl Popper who warned against our naive belief in induction.
The original research that brought loss aversion to light stems from Daniel Kahneman and Amos Tversky. They called their findings “prospect theory” for lack of a better term. This is the original paper: Daniel Kahneman and Amos Tversky, “Prospect Theory: An Analysis of Decision under Risk,” Econometrica 47, no. 2 (1979): 263—92. This paper generated an avalanche of follow-up research, mostly confirming the original findings.
The example with the breast-cancer awareness campaign, see: Beth E. Meyerowitz and Shelly Chaiken, “The Effect of Message Framing on Breast Self-Examination Attitudes, Intentions, and Behavior,” Journal of Personality and Social Psychology 52, no. 3 (March 1987): 500—510. The emphasis in the quoted text is mine. The study included two more short paragraphs with a gain-frame or loss-frame, respectively.
Recent studies, however, don’t find such clear results. See: Daniel J. O’Keefe and Jakob D. Jensen, “The Relative Persuasiveness of Gain-Framed and Loss-Framed Messages for Encouraging Disease Prevention Behaviors: A Meta-Analytic Review,” Journal of Health Communication 12, no. 7 (2007): 623—44, DOI: 10.1080/10810730701615198.
We react more strongly to negative enticements than to positive ones. See: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 318—21.
This research paper explains that we’re not the only species prone to loss aversion. Monkeys also fall for it, albeit for other reasons: A. Silberberg et al., “On Loss Aversion in Capuchin Monkeys,” Journal of the Experimental Analysis of Behavior 89 (2008): 145—55.
David A. Kravitz and Barbara Martin, “Ringelmann Rediscovered: The Original Article,” Journal of Personality and Social Psychology 50, no. 5 (1986): 936—41.
Bibb Latané, Kipling Williams, and Stephen Harkins, “Many Hands Make Light the Work: The Causes and Consequences of Social Loafing,” Journal of Personality and Social Psychology 37, no. 6 (1979): 822—32.
See also: Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 192—93.
To learn more about risky shift, see: Dean G. Pruitt, “Choice Shifts in Group Discussion: An Introductory Review,” Journal of Personality and Social Psychology 20, no. 3 (1971): 339—60, and Serge Moscovici and Marisa Zavalloni, “The Group as a Polarizer of Attitudes,” Journal of Personality and Social Psychology 12, no. 2 (1969): 125—35.
Where does the number 70 come from? It is the natural logarithm of 2, times 100. That’s 69.3, which is close enough to 70. If you’re interested in the tripling time, use the natural logarithm of 3; for the quintupling time, the natural logarithm of 5.
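The shortcut in this note can be checked against the exact formula. A minimal sketch (the function names here are my own, purely illustrative): the exact doubling time at growth rate r is ln 2 / ln(1 + r), and because ln(1 + r) ≈ r for small rates, dividing 69.3 (rounded to 70) by the percentage rate gives nearly the same answer.

```python
import math

def doubling_time(rate_percent):
    """Exact number of periods to double at the given percentage growth rate."""
    return math.log(2) / math.log(1 + rate_percent / 100)

def rule_of_70(rate_percent):
    """The mental shortcut: divide 70 by the growth rate."""
    return 70 / rate_percent

# At 5 percent annual growth, shortcut and exact formula nearly agree.
print(round(rule_of_70(5), 1))     # 14.0
print(round(doubling_time(5), 1))  # 14.2
```

For tripling or quintupling, replace `math.log(2)` with `math.log(3)` or `math.log(5)`, exactly as the note says.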
For good examples of exponential growth, see: Dietrich Dörner, Die Logik des Misslingens: Strategisches Denken in komplexen Situationen (Reinbek, Germany: Rororo Publisher, 2003), 161—71. There is no English translation of this book.
See also: Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt, Der Hund, der Eier legt: Erkennen von Fehlinformation durch Querdenken (Reinbek, Germany: Rororo Publisher, 2006), 120. There is no English translation of this book.
Exponential population growth was a hot topic during the 1970s when resource scarcity came to the fore. See: Donella H. Meadows, Dennis L. Meadows, Jorgen Randers, and William W. Behrens III, The Limits to Growth (New York: Universe Books, 1972). The “new economy,” which set the stage for the “great moderation” and promoted growth free from inflation and such scarcity, cleared the issue from the table. However, since the raw material shortages of 2007, we know that this continues to be a problem—especially since the global population is still growing exponentially.
The classic source: Richard H. Thaler, “The Winner’s Curse,” Journal of Economic Perspectives 2, no. 1 (Winter 1988): 191—202.
If you need to outdo another person, see: Deepak Malhotra, “The Desire to Win: The Effects of Competitive Arousal on Motivation and Behavior,” Organizational Behavior and Human Decision Processes 111, no. 2 (March 2010): 139—46.
There are numerous examples of the winner’s curse in action. For example, in book publishing. “The problem is, simply, that most of the auctioned books are not earning their advances. In fact, very often such books have turned out to be dismal failures whose value was more perceived than real.” John P. Dessauer, Book Publishing (New York: Bowker, 1981), 33. I sincerely hope that the book you hold in your hands is an exception.
How much would you pay for $100? An example from Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 248—49. Plous describes it with $1 instead of $100. The mechanics are the same.
“The Warren Buffett rule for open-outcry auctions: Don’t go.” Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 494.
Value destroying M&A, in: Werner Rehm, Robert Uhlaner, and Andy West, “Taking a Longer-Term Look at M&A Value Creation,” McKinsey on Finance 42 (Winter 2012): 8.
FUNDAMENTAL ATTRIBUTION ERROR
Stanford psychologist Lee Ross described this for the first time, see: Lee Ross, “The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process,” in L. Berkowitz (ed.), Advances in Experimental Social Psychology, vol. 10 (New York: Academic Press, 1977).
The experiment with the speech, see: Edward E. Jones and Victor A. Harris, “The Attribution of Attitudes,” Journal of Experimental Social Psychology 3 (1967): 1—24. Actually, there are three experiments in that paper, two about Fidel Castro, one about racial segregation in the United States. The point of interest here is the result after the first Fidel Castro experiment: “Perhaps the most striking result of the first experiment was the tendency to attribute correspondence between behavior and private attitude even when the direction of the essay was assigned.” Ibid., 7.
See also: Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 180—81.
Buffett: “A wise friend told me long ago, ’If you want to get a reputation as a good businessman, be sure to get into a good business.’ ” In: Berkshire Hathaway Inc. 2006 Annual Report, 11.
Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt, Der Hund, der Eier legt: Erkennen von Fehlinformation durch Querdenken (Reinbek, Germany: Rororo Publisher, 2006), 175—78. Unfortunately, there is no English translation of this book.
The nice example using the stork: ibid., 181.
Having books at home, see: “To Read or Not to Read: A Question of National Consequence,” National Endowment for the Arts, Research Report #47, November 2007.
The ultimate book about the halo effect in business, including the Cisco example: Phil Rosenzweig, The Halo Effect—and the Eight Other Business Delusions That Deceive Managers (New York: Free Press, 2007).
Thorndike defined the halo effect as “a problem that arises in data collection when there is carry-over from one judgment to another.” Edward L. Thorndike, “A Constant Error in Psychological Ratings,” Journal of Applied Psychology 4 (1920): 25—29.
Richard E. Nisbett and Timothy D. Wilson, “The Halo Effect: Evidence for Unconscious Alteration of Judgments,” Journal of Personality and Social Psychology 35, no. 4 (1977): 250—56.
The Russian roulette example: Nassim Nicholas Taleb, Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, 2nd updated ed. (New York: Random House, 2004), 23.
“It is hard to think of Alexander the Great or Julius Caesar as men who won only in the visible history, but who could have suffered defeat in others. If we have heard of them, it is simply because they took considerable risks, along with thousands of others, and happened to win. They were intelligent, courageous, noble (at times), had the highest possible obtainable culture in their day—but so did thousands of others who live in the musty footnotes of history.” Ibid., 34.
“My argument is that I can find you a security somewhere among the 40,000 available that went up twice that amount every year without fail. Should we put the social security money into it?” Ibid., 146.
The classic book on the forecast illusion is: Philip E. Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005).
For a short summary: Philip E. Tetlock, “How Accurate Are Your Pet Pundits?,” Project Syndicate/Institute for Human Sciences, 2006, accessed October 20, 2012. http://www.project-syndicate.org/commentary/how-accurate-are-your-pet-pundits.
Derek J. Koehler, Lyle Brenner, and Dale Griffin, “The Calibration of Expert Judgment: Heuristics and Biases Beyond the Laboratory,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 686—715.
“The only function of economic forecasting is to make astrology look respectable.” John Kenneth Galbraith quoted in U.S. News & World Report, March 7, 1988, 64.
The forecast anecdote from Tony Blair: Roger Buehler, Dale Griffin, and Michael Ross, “Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions,” in Gilovich, Griffin, and Kahneman (eds.), Heuristics and Biases, 270.
“There have been as many plagues as wars in history, yet always plagues and wars take people equally by surprise.” Albert Camus, The Plague, part 1.
“I don’t read economic forecasts. I don’t read the funny papers.” Warren Buffett quoted in “Buffett Builds Up Stake in UK Blue Chip,” Independent, April 13, 1999, http://www.independent.co.uk/news/business/buffett-builds-up-stake-in-uk-blue-chip—1086992.html.
Harvard Professor Theodore Levitt: “It’s easy to be a prophet. You make twenty-five predictions and the ones that come true are the ones you talk about.” In: Peter Bevelin, Seeking Wisdom: From Darwin to Munger (Malmö, Sweden: PCA Publications, 2007), 167.
“There are 60,000 economists in the U.S., many of them employed full-time trying to forecast recessions and interest rates, and if they could do it successfully twice in a row, they’d all be millionaires by now. They’d have retired to Bimini where they could drink rum and fish for marlin. But as far as I know, most of them are still gainfully employed, which ought to tell us something.” In: Peter Lynch, One Up on Wall Street: How to Use What You Already Know to Make Money in the Market (New York: Simon & Schuster, 2000), 85.
And since it is so pithy, here’s another quote from the same book: “Thousands of experts study overbought indicators, oversold indicators, head-and-shoulder patterns, put-call ratios, the Fed’s policy on money supply, foreign investment, the movement of the constellations through the heavens, and the moss on oak trees, and they can’t predict markets with any useful consistency, any more than the gizzard squeezers could tell the Roman emperors when the Huns would attack.” Ibid.
Stock market analysts are especially good at retrospective forecasting: “The analysts and the brokers. They don’t know anything. Why do they always downgrade stocks after the bad earnings come out? Where’s the guy that downgrades them before the bad earnings come out? That’s the smart guy. But I don’t know any of them. They’re rare, they’re very rare. They’re rarer than Jesse Jackson at a Klan meeting.” Marc Perkins interviewed by Brett D. Fromson, The TSC Streetside Chat, part 2, TheStreet.com, September 8, 2000.
Buffett: “When they make these offerings, investment bankers display their humorous side: They dispense income and balance sheet projections extending five or more years into the future for companies they barely had heard of a few months earlier. If you are shown such schedules, I suggest that you join in the fun: Ask the investment banker for the one-year budgets that his own firm prepared as the last few years began and then compare these with what actually happened.” In: Berkshire Hathaway, Inc., letter to shareholders, 1989.
Warren Buffett: “I have no use whatsoever for projections or forecasts. They create an illusion of apparent precision. The more meticulous they are, the more concerned you should be. We never look at projections, but we care very much about, and look very deeply at, track records.” Berkshire Hathaway annual meeting, 1995, quoted in Andrew Kilpatrick, Of Permanent Value: The Story of Warren Buffett (Birmingham, AL: AKPE, 2010), 1074.
Here is another great study showing experts’ inability to forecast. Gustav Torngren and Henry Montgomery asked participants to select, from a pair of stocks, the one that would outperform each month. The stocks were well-known blue-chip names, and the players were given the prior twelve months’ performance for each. Participants included laypeople (undergraduates in psychology) and professional investors. Both groups performed worse than chance; both would have fared better by tossing a coin. Overall, the laypeople were 59 percent confident in their stock-picking abilities, the experts 65 percent. See: Gustav Torngren and Henry Montgomery, “Worse Than Chance? Performance and Confidence among Professionals and Laypeople in the Stock Market,” Journal of Behavioral Finance 5, no. 3 (2004): 148—53.
The Chris story is a modified version of the so-called Bill story and Linda story by Tversky and Kahneman: Amos Tversky and Daniel Kahneman, “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (October 1983): 293—315. Thus, the conjunction fallacy is often referred to as the “Linda problem.”
The example using oil consumption: Ibid., 308. Another interesting example of the conjunction fallacy can be found in the same paper. What is more probable? (a) “a complete suspension of diplomatic relations between the US and the Soviet Union, sometime in 1983,” or (b) “a Russian invasion of Poland, and a complete suspension of diplomatic relations between the US and the Soviet Union, sometime in 1983.” Many more people opted for the more plausible scenario (b), although it is less likely.
On the two types of thinking—intuitive versus rational, or system 1 versus system 2, see: Daniel Kahneman, “A Perspective on Judgment and Choice: Mapping Bounded Rationality,” American Psychologist 58 (September 2003): 697—720. Or you can read Kahneman’s Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), which is all about system 1 versus system 2.
A much simpler version of the conjunction fallacy is the following question that has been posed to children: “In summer at the beach are there more women or more tanned women?” Most children fell for (the more representative or available) “tanned women.” See: Franca Agnoli, “Development of Judgmental Heuristics and Logical Reasoning: Training Counteracts the Representativeness Heuristic,” Cognitive Development 6, no. 2 (April—June 1991): 195—217.
Tversky and Kahneman asked: What is more likely, that a seven-letter word randomly selected from a novel would end in ing or have the letter “n” as its sixth letter? This highlights both the availability bias and the conjunction fallacy. All seven-letter words ending in ing have the letter “n” as their sixth letter, but not all words with the letter “n” as their sixth letter end in ing. Again, the driving force behind the conjunction fallacy is the availability bias: words ending in ing come to mind more easily. See: Tversky and Kahneman, “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” 295.
The story with the terrorism insurance is adapted from Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 76—77.
Amos Tversky and Daniel Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science 211, no. 4481 (January 30, 1981): 453—58.
The framing effect in medicine, see: Robyn M. Dawes, Everyday Irrationality: How Pseudo-Scientists, Lunatics, and the Rest of Us Systematically Fail to Think Rationally (New York: Westview Press, 2001), 3—8.
R. Shepherd, P. Sparks, S. Bellier, and M. M. Raats, “The Effects of Information on Sensory Ratings and Preferences: The Importance of Attitudes,” Food Quality and Preference 3, no. 3 (1992): 147—55.
Michael Bar-Eli, Ofer H. Azar, Ilana Ritov, Yael Keidar-Levin, and Galit Schein, “Action Bias among Elite Soccer Goalkeepers: The Case of Penalty Kicks,” Journal of Economic Psychology 28, no. 5 (2007): 606—21.
The quote from Charlie Munger: “We’ve got great flexibility and a certain discipline in terms of not doing some foolish thing just to be active—discipline in avoiding just doing any damn thing just because you can’t stand inactivity.” In: Wesco Financial annual meeting, 2000, Outstanding Investor Digest, December 18, 2000, 60.
Warren Buffett successfully avoids the action bias: “We don’t get paid for activity, just for being right. As to how long we’ll wait, we’ll wait indefinitely.” Warren Buffett, 1998 Berkshire Hathaway annual meeting.
“The stock market is a no-called-strike game. You don’t have to swing at everything—you can wait for your pitch. The problem when you’re a money manager is that your fans keep yelling, ’Swing, you bum!’ ” Warren Buffett, 1999 Berkshire Hathaway annual meeting.
“It takes character to sit there with all that cash and do nothing. I didn’t get to where I am by going after mediocre opportunities.” Charlie Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 61.
“Charlie realizes that it is difficult to find something that is really good. So, if you say ’No’ ninety percent of the time, you’re not missing much in the world.” Otis Booth in ibid., 99.
Charlie Munger: “There are huge advantages for an individual to get into a position where you make a few great investments and just sit on your ass: You’re paying less to brokers. You’re listening to less nonsense.” Ibid., 209.
The example with the police officers in: “Action Bias in Decision Making and Problem Solving,” Ambiguity Advantage, blog, February 21, 2008.
Jonathan Baron, Thinking and Deciding (Cambridge, UK: Cambridge University Press, 2000), 407—8 and 514.
To get around the omission bias, put yourself in the shoes of the harmed individual. If you were that baby about to get vaccinated, which would you prefer: a 10/10,000 chance of death from the disease or a 5/10,000 chance of death from the vaccine? And does it matter whether these chances are a matter of commission or omission? Ibid., 407.
D. A. Asch, Jonathan Baron, J. C. Hershey, H. Kunreuther, J. R. Meszaros, Ilana Ritov, and M. Spranca, “Omission Bias and Pertussis Vaccination,” Medical Decision Making 14, no. 2 (April—June 1994): 118—23.
There is some confusion as to whether a behavior is due to the omission bias, the status quo bias, or social norm. Baron and Ritov disentangle these questions in this paper: Jonathan Baron and Ilana Ritov, “Omission Bias, Individual Differences, and Normality,” Organizational Behavior and Human Decision Processes 94 (2004): 74—85.
The following paper deals with the omission bias in legal practice in Switzerland. It is only available in German: Mark Schweizer, “Der Unterlassungseffekt,” chapter from “Kognitive Täuschungen vor Gericht” (PhD dissertation, University of Zurich, 2005), 108—23.
Just as in the “taking out the garbage” example, Ross and Sicoly asked husbands and wives what percentage of activities like cleaning the house, making breakfast, and causing arguments they were responsible for. Each spouse overestimated his or her role; the answers always added up to more than 100 percent. Read: Ross and Sicoly, “Egocentric Biases in Availability and Attribution.”
Barry R. Schlenker and Rowland S. Miller, “Egocentrism in Groups: Self-Serving Biases or Logical Information Processing?,” Journal of Personality and Social Psychology 35, no. 10 (October 1977): 755—64.
The following research modifies the view that we always attribute failure to outside factors: Dale T. Miller and Michael Ross, “Self-Serving Biases in the Attribution of Causality: Fact or Fiction?,” Psychological Bulletin 82 (1975): 213—25.
Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 214—19.
“Of course you also want to get the self-serving bias out of your mental routines. Thinking that what’s good for you is good for the wider civilization, and rationalizing foolish or evil conduct, based on your subconscious tendency to serve yourself, is a terrible way to think.” Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 432.
Joel T. Johnson, Lorraine M. Cain, Toni L. Falke, Jon Hayman, and Edward Perillo, “The ’Barnum Effect’ Revisited: Cognitive and Motivational Factors in the Acceptance of Personality Descriptions,” Journal of Personality and Social Psychology 49, no. 5 (November 1985): 1378—91.
This is an example of a study with school grades: Robert M. Arkin and Geoffrey M. Maruyama, “Attribution, Affect and College Exam Performance,” Journal of Educational Psychology 71, no. 1 (February 1979): 85—93.
See this video on grades on TED.com: Dan Ariely, Why We Think It’s OK to Cheat and Steal (Sometimes).
The self-serving bias is sometimes also called “egocentric bias.” Sometimes, the scientific literature differentiates between the two, especially when it comes to group settings. The self-serving bias claims credit for positive outcomes only. The egocentric bias, however, claims credit even for negative outcomes. It is suggested that the egocentric bias is simply an availability bias in disguise because your own actions and contributions are more available to you (in memory) than the actions and contributions of the other group members. See: Ross and Sicoly, “Egocentric Biases in Availability and Attribution.”
The classic paper on the hedonic treadmill effect: Philip Brickman and D. T. Campbell, “Hedonic Relativism and Planning the Good Society,” in M. H. Appley (ed.), Adaptation-Level Theory: A Symposium (New York: Academic Press, 1971), 278—301. It focuses not just on income, but also on improvements in consumer electronics and gadgets. We quickly adjust to the latest gadgets, and their “happiness effect” soon fades.
Daniel T. Gilbert et al., “Immune Neglect: A Source of Durability Bias in Affective Forecasting,” Journal of Personality and Social Psychology 75, no. 3 (1998): 617—38.
Daniel T. Gilbert and Jane E. Ebert, “Decisions and Revisions: The Affective Forecasting of Changeable Outcomes,” Journal of Personality and Social Psychology 82, no. 4 (2002): 503—14.
Daniel T. Gilbert, Stumbling on Happiness (New York: Alfred A. Knopf, 2006).
Major life dramas have almost no long-term impact on happiness. Daniel T. Gilbert, Why Are We Happy?, video on TED.com (http://www.youtube.com/watch?v=LTO_dZUvbJA).
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 91.
Bruno S. Frey and Alois Stutzer, Happiness and Economics: How the Economy and Institutions Affect Human Well-Being (Princeton, NJ: Princeton University Press, 2002).
Subjective well-being (happiness) seems to be heavily influenced by genetics. In other words, it’s chance! Socioeconomic status, educational attainment, family income, marital status, or religious commitment can account for no more than about 3 percent of the variance in subjective well-being. See: David Lykken and Auke Tellegen, “Happiness Is a Stochastic Phenomenon,” Psychological Science 7, no. 3 (May 1996): 186—89.
Life satisfaction seems to be extremely stable over time, although it can be more volatile in the short term. See: Frank Fujita and Ed Diener, “Life Satisfaction Set Point: Stability and Change,” Journal of Personality and Social Psychology 88, no. 1 (2005): 158—64.
In case you are looking for more research on the topic: the hedonic treadmill is also called “hedonic adaptation.”
On incubation of funds: “A more deliberate form of self selection bias often occurs in measuring the performance of investment managers. Typically, a number of funds are set up that are initially incubated: kept closed to the public until they have a track record. Those that are successful are marketed to the public, while those that are not successful remain in incubation until they are. In addition, persistently unsuccessful funds (whether in an incubator or not) are often closed, creating survivorship bias. This is all the more effective because of the tendency of investors to pick funds from the top of the league tables regardless of the performance of the manager’s other funds.” Quoted from Moneyterms, http://moneyterms.co.uk/self-selection-bias/.
“It is not uncommon for someone watching a tennis game on television to be bombarded by advertisements for funds that did (until that minute) outperform others by some percentage over some period. But, again, why would anybody advertise if he didn’t happen to outperform the market? There is a high probability of the investment coming to you if its success is caused entirely by randomness. This phenomenon is what economists and insurance people call adverse selection.” Nassim Nicholas Taleb, Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, 2nd updated ed. (New York: Random House, 2004), 158.
The story with the gas leak, see: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 280.
Buffett wants to hear the bad news—in plain terms. “Always tell us the bad news promptly. It is only the good news that can wait.” In: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 472.
“Don’t shoot the messenger” appears in Shakespeare’s Henry IV, last act.
In the eighteenth century, many states, including those in New England, employed town criers. Their task was to disseminate news—often bad news, for example, tax increases. To beat the “kill the messenger” syndrome, the states adopted laws (probably read aloud by the town criers) under which injuring or abusing a crier earned the harshest penalty. Today we are no longer as civilized: we try to lock up the loudest criers. One example is Julian Assange, founder of WikiLeaks.
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 109.
Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 22—25.
The classic paper on cognitive dissonance: Leon Festinger and James M. Carlsmith, “Cognitive Consequences of Forced Compliance,” Journal of Abnormal and Social Psychology 58 (1959): 203—10.
There is a French version of the sour-grapes rationalization: The fox wrongly believes the grapes to be green instead of vermillion and sweet. See: Jon Elster, Sour Grapes: Studies in the Subversion of Rationality (Cambridge, UK: Cambridge University Press, 1983), 123—24.
One of investor George Soros’s strengths, according to Taleb, is his complete lack of cognitive dissonance. Soros can change his mind from one second to the next—without the slightest sense of embarrassment. See: Nassim Nicholas Taleb, Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets, 2nd updated ed. (New York: Random House, 2004), 239.
A range of research papers cover this topic. This is the first: Richard H. Thaler, “Some Empirical Evidence on Dynamic Inconsistency,” Economics Letters 8 (1981): 201—7.
For the marshmallow test, see: Yuichi Shoda, Walter Mischel, and Philip K. Peake, “Predicting Adolescent Cognitive and Self-Regulatory Competencies from Preschool Delay of Gratification: Identifying Diagnostic Conditions,” Developmental Psychology 26, no. 6 (1990): 978—86.
“ . . . the ability to delay gratification is very adaptive and rational, but sometimes it fails and people grab for immediate satisfaction. The effect of the immediacy resembles the certainty effect: People prefer the immediate gain just as they prefer the guaranteed gain. And both of these suggest that underneath the sophisticated thinking process of the cultural animal there still lurk the simpler needs and inclinations of the social animal. Sometimes these win out.” Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 321.
What about very long periods of time? Suppose you run a restaurant and a diner makes the following suggestion: Instead of paying his check of $100 today, he will pay you $1,700 in thirty years’ time—that’s a nice interest rate of 10 percent. Would you go for it? Probably not. Who knows what will happen in the next thirty years? So have you just committed a thinking error? No. Unlike hyperbolic discounting, demanding a high interest rate over a very long period is quite reasonable. In Switzerland (before Fukushima), there was debate about a plan to build a nuclear power plant with a payback period of thirty years. An idiotic idea. Who knows what new technologies will come onto the market during those thirty years? A payback period of ten years would be justifiable, but not thirty—and that’s not even mentioning the risks.
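The diner’s arithmetic is easy to check. A minimal sketch (the variable names are mine; the dollar figures are the ones from this note):

```python
# Sanity check of the diner's offer: $100 today vs. $1,700 in thirty years.
principal = 100.0
offer = 1_700.0
years = 30

# Future value of $100 compounded at 10 percent a year.
future_value = principal * 1.10 ** years  # about $1,745

# The annual rate the $1,700 offer actually implies.
implied_rate = (offer / principal) ** (1 / years) - 1  # just under 10 percent

print(round(future_value), round(implied_rate * 100, 1))
```

So the offer does correspond to roughly 10 percent per year; declining it is a judgment about thirty years of uncertainty, not a failure of arithmetic.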
The Xerox experiment by Ellen Langer cited in Robert B. Cialdini, Influence: The Psychology of Persuasion, rev. ed. (New York: HarperCollins, 1993), 4.
The “because” justification works beautifully as long as the stakes are small (making copies). As soon as the stakes are high, people mostly listen attentively to the arguments. Noah Goldstein, Steve Martin, and Robert Cialdini, Yes!—50 Scientifically Proven Ways to Be Persuasive (New York: Free Press, 2008), 150—53.
“The problem of decision fatigue affects everything from the careers of CEOs to the prison sentences of felons appearing before weary judges. It influences the behavior of everyone, executive and nonexecutive, every day.” Roy Baumeister and John Tierney, Willpower: Rediscovering the Greatest Human Strength (New York: Penguin Press, 2011), 90.
The student experiment with the “deciders” and “non-deciders”: Ibid., 91, 92.
The example with the judges: Ibid., 96—99.
The detailed paper on the judges’ decisions: Shai Danziger, Jonathan Levav, and Liora Avnaim-Pesso, “Extraneous Factors in Judicial Decisions,” Proceedings of the National Academy of Sciences 108, no. 17 (April 26, 2011): 6889—92.
Roy Baumeister, “Ego Depletion and Self-Control Failure: An Energy Model of the Self’s Executive Function,” Self and Identity 1, no. 2 (April 1, 2002): 129—36.
Kathleen D. Vohs, Roy F. Baumeister, Jean M. Twenge, Brandon J. Schmeichel, Dianne M. Tice, and Jennifer Crocker, “Decision Fatigue Exhausts Self-Regulatory Resources—But So Does Accommodating to Unchosen Alternatives,” Working paper, 2005.
George Loewenstein, Daniel Read, and Roy Baumeister, Time and Decision: Economic and Psychological Perspectives on Intertemporal Choice (New York: Russell Sage Foundation, 2003), 208.
After the hard slog through the supermarket, consumers suffer decision fatigue. Retailers capitalize on this and place impulse buys, such as gum and candy, right next to cashiers—just before the finishing line of the decision marathon. See: John Tierney, “Do You Suffer from Decision Fatigue?,” New York Times Magazine, August 17, 2011.
When to present it to your CEO? The best time is eight a.m. The CEO will be relaxed after a good night’s sleep, and after breakfast his blood sugar level will be high—all perfect for making courageous decisions.
Contagion bias is also called the “contagion heuristic.”
The one-line summary of the contagion bias: “Once in contact, always in contact.”
Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 212.
See also the Wikipedia entry for the “Peace and Truce of God,” accessed October 21, 2012.
Philip Daileader, The High Middle Ages (Chantilly, VA: The Teaching Company, 2001), course no. 869, lecture 3, beginning at ~26:30.
The example with the arrows aimed at photos of Kennedy and Hitler comes from: Gilovich, Griffin, and Kahneman (eds.), Heuristics and Biases, 205. The authors of the article (Paul Rozin and Carol Nemeroff) speak not of “contagion” but of the “law of similarity.” I have added this example to the contagion heuristic, which in the broader sense deals with our penchant for magic.
Photos of mothers: A control group that did not use photos was better at hitting the targets. Participants behaved as if the photos contained magic powers that might hurt the real subjects. In a similar experiment, photographs of either John F. Kennedy or Hitler were pasted onto the targets. Although all students were trying to shoot as accurately as possible, those who had JFK in their crosshairs fared much worse. (Ibid.)
We do not like to move into recently deceased people’s houses, apartments, or rooms. Conversely, companies love it when their new offices previously housed successful companies. For example, when milo.com moved into 165 University Avenue in Palo Alto, there was a lot of press because Logitech, Google, and PayPal had all been in that building before—as if some “good vibes” would lift the start-ups there. In reality, it surely has more to do with the proximity to Stanford University.
To calculate the number of molecules per breath: The atmosphere consists of approximately 10^44 molecules. The total atmospheric mass is 5.1x10^18 kg. Air density at sea level is about 1.2 kg/m3. According to the Avogadro constant, there are 2.7x10^25 molecules in a cubic meter of air. So, in one liter there are 2.7x10^22 molecules. On average, we breathe about seven liters of air per minute (about one liter per breath) or 3,700 cubic meters per year. Saddam Hussein “consumed” 260,000 cubic meters of air in his life. Assuming he re-inhaled approximately 10 percent of that, we have 230,000 cubic meters of “Saddam-contaminated” air in the atmosphere. Thus 6.2x10^30 molecules passed through Saddam’s lungs and are now scattered in the atmosphere. The concentration of these molecules in the atmosphere equals 6.2x10^-14. That makes 1.7 billion “Saddam-contaminated” molecules per breath.
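The chain of estimates above can be reproduced in a few lines. A minimal sketch using the rounded figures from this note (the variable names are mine):

```python
# Reproduce the molecules-per-breath estimate with the note's rounded figures.
MOLECULES_PER_M3 = 2.7e25      # molecules in one cubic meter of air
MOLECULES_PER_BREATH = 2.7e22  # one breath is about one liter
ATMOSPHERE_MOLECULES = 1e44    # total molecules in the atmosphere

contaminated_m3 = 230_000      # "contaminated" air, re-inhaled share already deducted

tainted_molecules = contaminated_m3 * MOLECULES_PER_M3    # about 6.2 x 10^30
concentration = tainted_molecules / ATMOSPHERE_MOLECULES  # about 6.2 x 10^-14
per_breath = concentration * MOLECULES_PER_BREATH         # roughly 1.7 billion

print(f"{per_breath:.2e}")
```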
See also: Carol Nemeroff and Paul Rozin, “The Makings of the Magical Mind: The Nature of Function of Sympathetic Magic,” in Karl S. Rosengren, Carl N. Johnson, and Paul L. Harris (eds.), Imagining the Impossible: Magical, Scientific, and Religious Thinking in Children (Cambridge, UK: Cambridge University Press, 2000), 1—34.
THE PROBLEM WITH AVERAGES
Don’t cross a river if it is (on average) four feet deep: Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007), 160.
The overall median wealth per family in the United States was $109,500 in 2007. See: Wikipedia Entry on “Wealth in the United States,” accessed October 25, 2012, http://en.wikipedia.org/wiki/Wealth_in_the_United_States. Since I used individuals and not families in the example with the bus, I took 50 percent of that figure. That’s not a correct figure, since individuals who live by themselves also constitute a household in the technical sense. But the exact number doesn’t matter for the example.
Bruno S. Frey, “Die Grenzen ökonomischer Anreize,” Neue Zürcher Zeitung, May 18, 2001. (Translation: “The Limits of Economic Incentives.” Bruno Frey makes the case for studying intrinsic motivation scientifically instead of focusing [mostly] on monetary incentives. There is no English translation of this article.)
This paper provides a good overview: Bruno S. Frey and Reto Jegen, “Motivation Crowding Theory: A Survey of Empirical Evidence,” Journal of Economic Surveys 15, no. 5 (2001): 589—611.
The story with the day care center: Steven D. Levitt and Stephen J. Dubner, Freakonomics: A Rogue Economist Explores the Hidden Side of Everything (New York: William Morrow, 2005), 19.
Ori Brafman and Rom Brafman, Sway: The Irresistible Pull of Irrational Behavior (New York: Doubleday, 2008), 131—35.
It’s not all black and white. In certain settings, pay for performance can also have a positive effect on self-determination and task enjoyment. Robert Eisenberger, Linda Rhoades, and Judy Cameron, “Does Pay for Performance Increase or Decrease Perceived Self-Determination and Intrinsic Motivation?,” Journal of Personality and Social Psychology 77, no. 5 (1999): 1026—40.
There are so many examples of motivation crowding, and the scientific literature is ample. Here is an example: “Every year, on a predetermined day, students go from house to house collecting monetary donations that households make to societies for cancer research, help for disabled children, and the like. Students performing these activities typically receive much social approval from parents, teachers, and other people. This is the very reason why they perform these activities voluntarily. When students were each offered one percent of the money they collected, the amount collected decreased by 36 percent.” Ernst Fehr and Armin Falk, “Psychological Foundations of Incentives,” European Economic Review 46 (May 2002): 687—724.
An example of smoke screen writing: Jürgen Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (Cambridge, MA: MIT Press, 1998), 490.
WILL ROGERS PHENOMENON
Stage migration when diagnosing tumors goes even further than described in the chapter. Because stage 1 now contains so many cases, doctors adjust the boundaries between stages. The worst stage 1 patients are recategorized as stage 2, the worst stage 2 patients as stage 3, and the worst stage 3 patients as stage 4. Each of these migrations raises the average life expectancy of the groups involved. The result: not a single patient lives longer, yet it appears that therapy has helped, when in fact only the diagnostics have improved. A. R. Feinstein, D. M. Sosin, and C. K. Wells, “The Will Rogers Phenomenon—Stage Migration and New Diagnostic Techniques as a Source of Misleading Statistics for Survival in Cancer,” New England Journal of Medicine 312, no. 25 (June 1985): 1604—8.
Further examples can be found in the excellent book: Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt, Der Hund, der Eier legt: Erkennen von Fehlinformation durch Querdenken (Reinbek, Germany: Rororo Publisher, 2006), 34—235. There is no English translation of this book.
“To bankrupt a fool, give him information.” Nassim Nicholas Taleb, The Bed of Procrustes: Philosophical and Practical Aphorisms (New York: Random House, 2010), 4.
The example with the three diseases: Jonathan Baron, Jane Beattie, and John C. Hershey, “Heuristics and Biases in Diagnostic Reasoning: II. Congruence, Information, and Certainty,” Organizational Behavior and Human Decision Processes 42 (1988): 88—110.
For Aronson and Mills the effort justification is nothing but the reduction of cognitive dissonance. Elliot Aronson and Judson Mills, “The Effect of Severity of Initiation on Liking for a Group,” Journal of Abnormal and Social Psychology 59 (1959): 177—81.
Michael I. Norton: Michael I. Norton, Daniel Mochon, and Dan Ariely, “The IKEA Effect: When Labor Leads to Love,” Journal of Consumer Psychology 22, no. 3 (July 2012): 453—60.
THE LAW OF SMALL NUMBERS
Daniel Kahneman uses a good example in his book Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 109—113. My story with the shoplifting rates borrows heavily from this.
In the main text, we did not cover asymmetry. Shares that exceed expectations rise, on average, by 1 percent. Shares that fall below expectations drop, on average, by 3.4 percent. See: Jason Zweig, Your Money and Your Brain (New York: Simon & Schuster, 2007), 181.
Rosenthal effect: Robert Rosenthal and Lenore Jacobson, Pygmalion in the Classroom, expanded ed. (New York: Irvington, 1968).
Robert S. Feldman and Thomas Prohaska, “The Student as Pygmalion: Effect of Student Expectation on the Teacher,” Journal of Educational Psychology 71, no. 4 (1979): 485—93.
The original paper on the CRT: Shane Frederick, “Cognitive Reflection and Decision Making,” Journal of Economic Perspectives 19, no. 4 (Fall 2005): 25—42.
Amitai Shenhav, David G. Rand, and Joshua D. Greene, “Divine Intuition: Cognitive Style Influences Belief in God,” Journal of Experimental Psychology: General 141, no. 3 (August 2012): 423—28.
Bertram R. Forer, “The Fallacy of Personal Validation: A Classroom Demonstration of Gullibility,” Journal of Abnormal and Social Psychology 44, no. 1 (1949): 118—23.
This is also called the “Barnum effect.” Ringmaster Phineas T. Barnum designed his show around the motto: “a little something for everybody.”
Joel T. Johnson, Lorraine M. Cain, Toni L. Falke, Jon Hayman, and Edward Perillo, “The ’Barnum Effect’ Revisited: Cognitive and Motivational Factors in the Acceptance of Personality Descriptions,” Journal of Personality and Social Psychology 49, no. 5 (November 1985): 1378—91.
D. H. Dickson and I. W. Kelly, “The ’Barnum Effect’ in Personality Assessment: A Review of the Literature,” Psychological Reports 57 (1985): 367—82.
The Skeptic’s Dictionary has a good entry on the Forer Effect: http://www.skepdic.com/forer.html.
No topic has drawn more feedback than this one (previously these chapters were newspaper columns). One reader commented that it would be even better to have the birdhouses manufactured in China than to have a local carpenter make them. The reader is right, of course, provided you subtract the environmental damage caused by the shipping. The point is that the volunteer’s folly is nothing more than David Ricardo’s law of comparative advantage.
Trevor M. Knox, “The Volunteer’s Folly and Socio-Economic Man: Some Thoughts on Altruism, Rationality, and Community,” Journal of Socio-Economics 28, no. 4 (1999): 475—92.
Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 139—42.
Priming the affect through smilies or frownies before judging Chinese icons: Sheila T. Murphy, Jennifer L. Monahan, and R. B. Zajonc, “Additivity of Nonconscious Affect: Combined Effects of Priming and Exposure,” Journal of Personality and Social Psychology 69, no. 4 (October 1995): 589—602.
See also: Piotr Winkielman, Robert B. Zajonc, and Norbert Schwarz, “Subliminal Affective Priming Resists Attributional Interventions,” Cognition and Emotion 11, no. 4 (1997): 433—65.
How morning sun affects the stock market: David Hirshleifer and Tyler Shumway, “Good Day Sunshine: Stock Returns and the Weather,” Journal of Finance 58, no. 3 (2003): 1009—32.
Kathryn Schulz, Being Wrong: Adventures in the Margin of Error (New York: Ecco, 2010), 104—10. I’ve adapted Schulz’s green tea story and turned it into a story about a vitamin pill producer.
Much of the introspection illusion comes down to “shallow thinking”: Thomas Gilovich, Nicholas Epley, and Karlene Hanko, “Shallow Thoughts about the Self: The Automatic Components of Self-Assessment,” in Mark D. Alicke, David A. Dunning, and Joachim I. Krueger, The Self in Social Judgment: Studies in Self and Identity (New York: Psychology Press, 2005), 67—81.
Richard E. Nisbett and Timothy D. Wilson, “Telling More Than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84 (1977): 231—59.
INABILITY TO CLOSE DOORS
Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions, rev. and expanded ed. (New York: HarperCollins, 2008), chapter 9, “Keeping Doors Open,” 183—98.
Mark Edmundson describing today’s generation of students: “They want to study, travel, make friends, make more friends, read everything (superfast), take in all the movies, listen to every hot band, keep up with everyone they’ve ever known. And there’s something else, too, that distinguishes them: They live to multiply possibilities. They’re enemies of closure. For as much as they want to do and actually manage to do, they always strive to keep their options open, never to shut possibilities down before they have to.” Mark Edmundson, “Dwelling in Possibilities,” Chronicle of Higher Education, March 14, 2008.
Nassim Nicholas Taleb, Antifragile: Things That Gain from Disorder (New York: Random House, 2012), 322—28.
Carl Hovland carried out his tests using the propaganda movie Why We Fight. The movie is available on YouTube.
See also: Gareth Cook, “TV’s Sleeper Effect: Misinformation on Television Gains Power over Time,” Boston Globe, October 30, 2011.
Beliefs acquired by reading fictional narratives are integrated into real-world knowledge. In: Markus Appel and Tobias Richter, “Persuasive Effects of Fictional Narratives Increase over Time,” Media Psychology 10 (2007): 113—34.
Tarcan G. Kumkale and Dolores Albarracín, “The Sleeper Effect in Persuasion: A Meta-Analytic Review,” Psychological Bulletin 130, no. 1 (January 2004): 143—72.
David Mazursky and Yaacov Schul, “The Effects of Advertisement Encoding on the Failure to Discount Information: Implications for the Sleeper Effect,” Journal of Consumer Research 15, no. 1 (1988): 24—36.
Ruth Ann Weaver Lariscy and Spencer F. Tinkham, “The Sleeper Effect and Negative Political Advertising,” Journal of Advertising 28, no. 4 (Winter 1999): 13—30.
SOCIAL COMPARISON BIAS
Stephen M. Garcia, Hyunjin Song, and Abraham Tesser, “Tainted Recommendations: The Social Comparison Bias,” Organizational Behavior and Human Decision Processes 113, no. 2 (2010): 97—101.
B-players hire C-players, and so on. Watch this excellent video on YouTube: Guy Kawasaki, The Art of the Start.
By the way: some authors succeed at mutually flattering each other, such as Niall Ferguson and Ian Morris. They continually bestow the title of “best historian” upon each other. Clever. It’s a rare, perfected art.
PRIMACY AND RECENCY EFFECTS
Primacy effect: Psychologist Solomon Asch scientifically investigated this in the 1940s. The example using Alan and Ben comes from him. Solomon E. Asch, “Forming Impressions of Personality,” Journal of Abnormal and Social Psychology 41, no. 3 (July 1946): 258—90.
The example from Alan and Ben cited in: Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 82—83.
The final ad before a film starts is the most expensive for another reason: It will reach the most people since everyone will have taken their seats by then.
There is a myriad of research on the primacy and recency effects. Here are two papers: Arthur M. Glenberg et al., “A Two-Process Account of Long-Term Serial Position Effects,” Journal of Experimental Psychology: Human Learning and Memory 6, no. 4 (July 1980): 355—69. And: M. W. Howard and M. Kahana, “Contextual Variability and Serial Position Effects in Free Recall,” Journal of Experimental Psychology: Learning, Memory and Cognition 25, no. 4 (July 1999): 923—41.
Ralph Katz and Thomas J. Allen, “Investigating the Not Invented Here (NIH) Syndrome: A Look at the Performance, Tenure and Communication Patterns of 50 R&D Project Groups,” R&D Management 12, no. 1 (1982): 7—19.
Joel Spolsky wrote an interesting blog entry contesting the NIH syndrome. It’s available online at http://www.joelonsoftware.com under the title In Defense of Not-Invented-Here Syndrome (October 14, 2001). His theory: world-class teams should not be dependent on the developments of other teams or other companies. When developing any in-house product, you should design the central part yourself from top to bottom. This reduces dependencies and guarantees the highest quality.
THE BLACK SWAN
Nassim Nicholas Taleb, The Black Swan: The Impact of the Highly Improbable (New York: Random House, 2007).
“Upon arriving at the hotel in Dubai, the businessman had a porter carry his luggage; I later saw him lifting free weights in the gym.” Nassim Nicholas Taleb, The Bed of Procrustes: Philosophical and Practical Aphorisms (New York: Random House, 2010), 75.
Another brilliant aphorism by Taleb on the subject: “My best example of domain dependence of our minds, from my recent visit to Paris: at lunch in a French restaurant, my friends ate the salmon and threw away the skin; at dinner, at the sushi bar, the very same friends ate the skin and threw away the salmon.” Ibid., 76.
Domestic violence is two to four times more common in police families than in the general population. Read: Peter H. Neidig, Harold E. Russell, and Albert F. Seng, “Interspousal Aggression in Law Enforcement Families: A Preliminary Investigation,” Police Studies 15, no. 1 (1992): 30—38.
L. D. Lott, “Deadly Secrets: Violence in the Police Family,” FBI Law Enforcement Bulletin 64 (November 1995): 12—16.
The Markowitz example: “I should have computed the historical covariance of the asset classes and drawn an efficient frontier. Instead I visualized my grief if the stock market went way up and I wasn’t in it—or if it went way down and I was completely in it. My intention was to minimize my future regret, so I split my [pension scheme] contributions 50/50 between bonds and equities.” Harry Markowitz, quoted in Jason Zweig, “How the Big Brains Invest at TIAA-CREF,” Money 27, no. 1 (January 1998): 114. See also: Jason Zweig, Your Money and Your Brain (New York: Simon & Schuster, 2007), 4.
The Bobbi Bensman example: Zweig, Your Money and Your Brain, 127.
Domain specificity is connected to the modular structure of the brain. If you are skilled with your hands (like pianists), it does not mean that you will have equally reactive legs (like footballers). Though both brain regions are in the “motor cortex,” they are not in the same place—they are not even next to each other.
For the quote from Barry Mazur, see: Barry C. Mazur, in a presentation given at the 1865th Stated Meeting, titled The Problem of Thinking Too Much, December 11, 2002, http://www.amacad.org/publications/bulletin/spring2003/diaconis.pdf.
Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 642.
The sandwich board “Eat at Joe’s” example: Lee Ross, David Greene, and Pamela House, “The ’False Consensus Effect’: An Egocentric Bias in Social Perception and Attribution Processes,” Journal of Experimental Social Psychology 13, no. 3 (May 1977): 279—301.
This effect overlaps with other mental errors. For example, the availability bias can lead into the false consensus effect. Whoever deliberates on a question can easily recall their conclusions (they are available). The person wrongly assumes that these findings will be as readily available to someone else. The self-serving bias also influences the false-consensus effect. Whoever wants to present something in a convincing manner does well to tell themselves that many (maybe even the majority) share their view and that their ideas will not fall on deaf ears. Philosophy deems the false-consensus effect “naive realism”: People are convinced that their positions are well thought out. Whoever fails to share their views will see the light if they reflect and open their minds sufficiently.
The false-consensus effect can be reduced by explaining or showing subjects both sides of the story. Kathleen P. Bauman and Glenn Geher, “We Think You Agree: The Detrimental Impact of the False Consensus Effect on Behavior,” Current Psychology 21, no. 4 (2002): 293—318.
FALSIFICATION OF HISTORY
For more information on Gregory Markus, see: Kathryn Schulz, Being Wrong: Adventures in the Margin of Error (New York: Ecco, 2010), 185.
Gregory Markus, “Stability and Change in Political Attitudes: Observe, Recall and Explain,” Political Behavior 8 (1986): 21—44.
Flashbulb memory: Ibid., 17—73.
In 1902, University of Berlin criminology professor Franz von Liszt (nothing to do with the composer Franz Liszt) showed that the best witnesses in court recall at least a fourth of the facts incorrectly. Ibid., 223.
IN-GROUP OUT-GROUP BIAS
“Life in nature involves competition, and groups can certainly compete better than individuals. The hidden dimension is that individuals cannot usually compete against groups. Therefore, once groups exist anywhere, everyone else has to join a group, if only for self-protection.” Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 377—79.
The classic paper: Henri Tajfel, “Experiments in Intergroup Discrimination,” Scientific American 223 (1970): 96—102.
For agreement surplus in groups, see: Kathryn Schulz, Being Wrong: Adventures in the Margin of Error (New York: Ecco, 2010), 149.
For more about “pseudokinship,” see: Robert Sapolsky, “Anthropology / Humans Can’t Smell Trouble / ‘Pseudokinship’ and Real War,” SFGate, March 2, 2003, http://www.sfgate.com/opinion/article/ANTHROPOLOGY-Humans-Can-t-Smell-Trouble—2666430.php.
Knightian uncertainty is named after University of Chicago economist Frank Knight (1885—1972), who distinguished risk and uncertainty in his work: Frank H. Knight, Risk, Uncertainty, and Profit (Boston: Houghton Mifflin Company, 1921).
The Ellsberg paradox is actually a little more complicated. A detailed explanation is available on Wikipedia (http://en.wikipedia.org/wiki/Ellsberg_paradox).
Yes, we curse uncertainty. But it has its positive sides. Suppose you live in a dictatorship and want to get past the censors. You can resort to ambiguity.
The car insurance policies: Jonathan Baron, Thinking and Deciding (Cambridge, UK: Cambridge University Press, 2000), 299.
Eric J. Johnson and Daniel Goldstein, “Do Defaults Save Lives?,” Science 302, no. 5649 (November 2003): 1338—39.
Cass Sunstein and Richard Thaler, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008).
The difficulties of renegotiating contracts: Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 304—5.
FEAR OF REGRET
The story with Paul and George: Daniel Kahneman and Amos Tversky, “Intuitive Prediction: Biases and Corrective Procedures,” in Daniel Kahneman, Paul Slovic, and Amos Tversky, Judgment under Uncertainty: Heuristics and Biases (New York: Cambridge University Press, 1982), 414—21.
The passenger who should not have been on the plane that crashed: Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 346—48.
For traders’ off-loading, see: Meir Statman and Kenneth L. Fisher, “Hedging Currencies with Hindsight and Regret,” Journal of Investing 14 (2005): 15—19.
Ilana Ritov and Jonathan Baron, “Outcome Knowledge, Regret, and Omission Bias,” Organizational Behavior and Human Decision Processes 64 (1995): 119—27.
Another regret question is the following: On your way to the airport, you are caught in a traffic jam and arrive thirty minutes after your scheduled departure time. Which makes you more upset (causes more regret): (a) learning that your flight left on time, or (b) learning that your flight was delayed and left only five minutes ago? Most people answer (b). The example is again from Kahneman and Tversky; I have shortened it a little. The original wording appears in: Daniel Kahneman and Amos Tversky, “The Psychology of Preferences,” Scientific American 246 (1982): 160–73.
An example of the fear of regret: “‘A Fear of Regret Has Always Been My Inspiration’: Maurizio Cattelan on His Guggenheim Survey,” Blouin ArtInfo, November 2, 2011.
We empathize more with Anne Frank than with a similar girl who was immediately arrested and sent to Auschwitz. Compared to other detentions, Anne Frank’s is an exception. Of course, the availability bias also plays a role. Anne Frank’s story is known worldwide through her diary. Most other detentions are forgotten and therefore not available to us.
Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 211.
Werner F. M. De Bondt and Richard H. Thaler, “Do Analysts Overreact?,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 678—85.
Scott Plous, The Psychology of Judgment and Decision Making (New York: McGraw-Hill, 1993), 125–27. Plous uses “vividness” rather than “salience”; the two concepts are similar.
The salience effect is related to the availability bias. With both effects, information that is more easily accessible enjoys undue explanatory power or leads to above-average motivation.
Cass Sunstein and Richard Thaler, Nudge: Improving Decisions about Health, Wealth, and Happiness (New Haven, CT: Yale University Press, 2008), 54—55.
Peter L. Bernstein, Against the Gods: The Remarkable Story of Risk (New York: Wiley, 1996), 274—75.
You’ve just received: Carrie M. Heilman, Kent Nakamoto, and Ambar G. Rao, “Pleasant Surprises: Consumer Response to Unexpected In-Store Coupons,” Journal of Marketing Research 39, no. 2 (May 2002): 242—52.
Pamela W. Henderson and Robert A. Peterson, “Mental Accounting and Categorization,” Organizational Behavior and Human Decision Processes 51, no. 1 (February 1992): 92—117.
The government can utilize the house-money effect. As part of President Bush’s 2001 tax reform, each American taxpayer received a credit of $600. People who viewed this as a gift from the government spent more than three times as much as those who saw it as their own money. In this way, tax credits can be used to stimulate the economy.
Jason Zweig, Your Money and Your Brain (New York: Simon & Schuster, 2007), 253—54.
On the effectiveness of self-imposed deadlines: Dan Ariely and Klaus Wertenbroch, “Procrastination, Deadlines, and Performance: Self-Control by Precommitment,” Psychological Science 13, no. 3 (May 1, 2002): 219—24.
Envy is one of the Catholic Church’s seven deadly sins. In the book of Genesis, Cain kills his brother Abel out of envy because God prefers his sacrifice. This is the first murder in the Bible.
One of the floweriest accounts of envy is the fairy tale “Snow White and the Seven Dwarfs.” In the story, Snow White’s stepmother envies her beauty. First, she hires an assassin to kill Snow White, but he does not go through with it. Snow White flees into the forest to the seven dwarfs. Since outsourcing didn’t work so well, the stepmother has to take matters into her own hands: she poisons the beautiful Snow White herself.
Munger: “The idea of caring that someone is making money faster than you are is one of the deadly sins. Envy is a really stupid sin because it’s the only one you could never possibly have any fun at. There’s a lot of pain and no fun. Why would you want to get on that trolley?” In: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 138.
Of course, not all envy is spiteful—there are also innocent episodes, such as a grandfather envying his grandchildren’s youth. This is not resentment; the older man would simply like to be young and carefree again.
Deborah A. Small, George Loewenstein, and Paul Slovic, “Sympathy and Callousness: The Impact of Deliberative Thought on Donations to Identifiable and Statistical Victims,” Organizational Behavior and Human Decision Processes 102, no. 2 (2007): 143—53.
“If I look at the mass, I will never act. If I look at the one, I will.” Mother Teresa in ibid.
ILLUSION OF ATTENTION
Christopher Chabris and Daniel Simons, The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us (New York: Crown, 2010), 1—42.
On using your cell phone while driving, see: Donald A. Redelmeier and Robert J. Tibshirani, “Association between Cellular-Telephone Calls and Motor Vehicle Collisions,” New England Journal of Medicine 336 (1997): 453–58.
See also: David L. Strayer, Frank A. Drews, and Dennis J. Crouch, “Comparing the Cell-Phone Driver and the Drunk Driver,” Human Factors 48 (2006): 381—91.
And what if, instead of phoning someone, you chat with whoever is in the passenger seat? Research has found no negative effects. First, face-to-face conversations are much clearer than phone conversations, so your brain does not have to work as hard to decipher the messages. Second, your passenger understands that if the situation gets dangerous, the conversation will be interrupted, so you do not feel compelled to keep talking. Third, your passenger provides an additional pair of eyes and can point out dangers.
Flyvbjerg defines strategic misrepresentation as “lying, with a view to getting projects started.” Bent Flyvbjerg, Megaprojects and Risk: An Anatomy of Ambition (Cambridge, UK: Cambridge University Press, 2003), 16.
L. R. Jones and K. J. Euske, “Strategic Misrepresentation in Budgeting,” Journal of Public Administration Research and Theory 1, no. 4 (October 1991): 437—60.
In online dating, men are more likely to misrepresent personal assets, relationship goals, personal interests, and personal attributes, whereas women are more likely to misrepresent weight: Jeffrey A. Hall et al., “Strategic Misrepresentation in Online Dating,” Journal of Social and Personal Relationships 27, no. 1 (2010): 117—35.
Timothy D. Wilson and Jonathan W. Schooler, “Thinking Too Much: Introspection Can Reduce the Quality of Preferences and Decisions,” Journal of Personality and Social Psychology 60, no. 2 (February 1991): 181—92.
Known to chess players as the Kotov syndrome: A player contemplates too many moves, fails to come to a decision, and, under time pressure, makes a rookie mistake.
Roger Buehler, Dale Griffin, and Michael Ross, “Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions,” in Thomas Gilovich, Dale Griffin, and Daniel Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment (Cambridge, UK: Cambridge University Press, 2002), 250—70.
Gary Klein doesn’t spell out the exact speech as mentioned in this chapter. This is how he prescribes it: “A typical premortem begins after the team has been briefed on the plan. The leader starts the exercise by informing everyone that the project has failed spectacularly. Over the next few minutes those in the room independently write down every reason they can think of for the failure—especially the kinds of things they ordinarily wouldn’t mention as potential problems, for fear of being impolitic.” See: Gary Klein, “Performing a Project Premortem,” Harvard Business Review, http://hbr.org/2007/09/performing-a-project-premortem/ar/1. Accessed December 17, 2012.
Samuel Johnson wrote that people who remarry represent “the triumph of hope over experience”; see James Boswell’s Life of Samuel Johnson (London: Printed by Henry Baldwin for Charles Dilly, in the Poultry, 1791). In making plans, we are all serial brides and grooms.
Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.” Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid, 20th anniversary ed. (New York: Basic Books, 1999), 152.
The planning fallacy is related to the overconfidence effect. With the overconfidence effect, we believe our capabilities are greater than they are, whereas the planning fallacy leads us to underestimate turnaround times and budgets. In both cases, we are convinced that the error rate of our predictions (whether in terms of achieving goals or forecasting timelines) is smaller than it actually is. In other words, we know we make mistakes when estimating durations, but we are confident that they will happen only rarely or not at all.
A great example of a premortem is described in: Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 264.
The Danish planning expert Bent Flyvbjerg has researched mega-projects more than anyone else. His conclusion: “The prevalent tendency to underweight distributional information is perhaps the major source of error in forecasting.” Quoted in ibid., 251.
The planning fallacy in the military: “No battle plan survives contact with the enemy.” The saying is attributed to German military strategist Helmuth von Moltke.
See also: Roy F. Baumeister, The Cultural Animal: Human Nature, Meaning, and Social Life (Oxford, UK: Oxford University Press, 2005), 241—44.
Here’s a great way to avoid the planning fallacy even if you don’t have access to a database of similar projects: “You can ask other people to take a fresh look at your ideas and make their own forecast for the project. Not a forecast of how long it would take them to execute the ideas (since they too will likely underestimate their own time and costs), but of how long it will take you (or your contractors, employees, etc.) to do so.” Quoted from Christopher Chabris and Daniel Simons, The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us (New York: Crown, 2010), 127.
“You’ve got to have models across a wide array of disciplines.” Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 167.
Roy Baumeister and John Tierney, Willpower: Rediscovering the Greatest Human Strength (New York: Penguin Press, 2011), 80—82.
Whether it was a scarf or something else that was left in the restaurant, we do not know. Nor do we know whether it was Bluma Zeigarnik herself who went back. To make the chapter more fluid, I assumed both were the case.
ILLUSION OF SKILL
Daniel Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), 204—21.
Warren Buffett: “My conclusion from my own experiences and from much observation of other businesses is that a good managerial record (measured by economic returns) is far more a function of what business boat you get into than it is of how effectively you row (though intelligence and effort help considerably, of course, in any business, good or bad). Some years ago I wrote: ’When a management with a reputation for brilliance tackles a business with a reputation for poor fundamental economics, it is the reputation of the business that remains intact.’ Nothing has since changed my point of view on that matter.” Warren Buffett, letter to shareholders of Berkshire Hathaway, 1985.
The antismoking campaign: Guangzhi Zhao and Cornelia Pechmann, “Regulatory Focus, Feature Positive Effect, and Message Framing,” Advances in Consumer Research 33, no. 1 (2006): 100.
An overview of the research on the feature-positive effect: Frank R. Kardes, David M. Sanbonmatsu, and Paul M. Herr, “Consumer Expertise and the Feature-Positive Effect: Implications for Judgment and Inference,” Advances in Consumer Research 17 (1990): 351—54.
“The harmful effects of smoking are roughly equivalent to the combined good ones of every medical intervention developed since the war. Those who smoke, in other words, now have the same life expectancy as if they were non-smokers without access to any health care developed in the last half-century. Getting rid of smoking provides more benefit than being able to cure people of every possible type of cancer.” Druin Burch, Taking the Medicine: A Short History of Medicine’s Beautiful Idea and Our Difficulty Swallowing It (London: Chatto & Windus, 2009), 238.
Cherry picking in religion: People take what suits them from the Bible and ignore the other teachings. If we wanted to follow the Bible literally, we would have to stone disobedient sons and unfaithful wives (Deuteronomy 21 and 22) and kill all homosexuals (Leviticus 20:13).
Cherry picking in forecasting: Forecasts that turn out to be correct are announced triumphantly. Wrong prognoses remain “unpicked.” See the chapter on the forecast illusion.
FALLACY OF THE SINGLE CAUSE
Chris Matthews quoted in: Christopher Chabris and Daniel Simons, The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us (New York: Crown, 2010), 172. The emphasis in the quotations was added by the authors.
Leo Tolstoy, War and Peace (New York: Vintage Classics, 2008), 606.
A great essay on the fallacy of the single cause: John Tooby, “Nexus Causality, Moral Warfare, and Misattribution Arbitrage,” in John Brockman, This Will Make You Smarter (New York: Harper, 2012), 34—35.
Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt, Der Hund, der Eier legt: Erkennen von Fehlinformation durch Querdenken (Reinbek, Germany: Rororo Publishers, 2006), 238—39. Unfortunately, no English translation of this excellent book exists.
For a full description of the intention-to-treat error, sometimes also referred to as “intent-to-treat,” read: John M. Lachin, “Statistical Considerations in the Intent-to-Treat Principle,” Controlled Clinical Trials 21, no. 5 (October 2000): 526.
Via Negativa: “Charlie generally focuses first on what to avoid—that is, on what NOT to do—before he considers the affirmative steps he will take in a given situation. ’All I want to know is where I’m going to die, so I’ll never go there’ is one of his favorite quips.” In: Charles T. Munger, Poor Charlie’s Almanack, expanded 3rd ed. (Virginia Beach, VA: The Donning Company Publishers, 2006), 63.
Via Negativa: “Part of (having uncommon sense) is being able to tune out folly, as opposed to recognizing wisdom.” Ibid., 134.
About the Author
ROLF DOBELLI, born in 1966, is a Swiss novelist. This is his first work of nonfiction. He earned his MBA from the University of St. Gallen, Switzerland, and received his PhD for a dissertation in philosophy. He is the founder or cofounder of several companies and communities, including ZURICH.MINDS, a community of leading personalities from science, culture, and business, and getAbstract, the world’s largest resource of compressed business literature. Rolf Dobelli lives in Lucerne, Switzerland.
Visit the author’s website: www.rolfdobelli.com.
Visit www.AuthorTracker.com for exclusive information on your favorite HarperCollins authors.