Decisions About Decisions
The Power of Reflection
Even a popular pilot has to be able to land a plane.
—KATTY KAY AND CLAIRE SHIPMAN, The Confidence Code
In 2013, Mark Lynas underwent a radical change of mind about an issue that he had been passionate about for many years. As an environmental campaigner, he had originally believed that in creating genetically modified (GM) foods, science had crossed a line by conducting large-scale experiments on nature behind closed doors. He was militant in his opposition to GM, using machetes to hack up experimental crops and removing what he referred to as “genetic pollution” from farms and science labs around the country.
But then Lynas stood up at the Oxford Farming Conference and confessed that he had been wrong. The video of the meeting is on YouTube, and it makes for fascinating viewing. Reading from a script, he calmly explains that he had been ignorant about the science, and that he now realizes GM is a critical component of a sustainable farming system—a component that is saving lives in areas of the planet where non-GM crops would otherwise have died out due to disease. In an interview as part of the BBC Radio 4 series Why I Changed My Mind, Lynas comments that his admission felt “like changing sides in a war,” and that he lost several close friends in the process.1
Lynas’s story fascinates us precisely because such changes of heart on emotive issues are usually relatively rare. Instead, once we adopt a position, we often become stubbornly entrenched in our worldview and unwilling to incorporate alternative perspectives. It is not hard to see that when the facts change or new information comes to light this bias against changing our minds can be maladaptive or even downright dangerous.
But the lessons from Part I of this book give us reason for cautious optimism that these biases can be overcome. The kind of thought processes supported by the neural machinery for metacognition and mindreading can help jolt us out of an overly narrow perspective. Every decision we make, from pinpointing the source of a faint sound to choosing a new job, comes with a degree of confidence that we have made the right call. If this confidence is sufficiently low, it provides a cue to change our minds and reverse our decision. By allowing us to realize when we might be wrong, just as Lynas had an initial flicker of realization that he might have got the science backward, metacognition provides the mental foundations for subsequent changes of mind. And by allowing us to infer what other people know, and whether their knowledge is likely to be accurate or inaccurate, a capacity for mindreading ensures we benefit from the perspective and advice of others when working together in teams and groups. In this chapter, we are going to take a closer look at how metacognition enables (or disables) this ability to change our minds.2
To Change or Not to Change
We have already seen how Bayes’s theorem provides us with a powerful framework for knowing when or whether to change our minds about a hypothesis. Consider the trick dice game from Chapter 1. If after multiple rolls of the dice we have seen only low numbers, we can be increasingly confident about a hypothesis that the trick die is showing a 0. In such a situation, one anomalous roll that returns a higher number such as a 10 should not affect our view, as this can still be consistent with the trick die showing a 0 (and the regular dice showing two 5s, or a 4 and a 6). More generally, the more confident a rational Bayesian becomes about a particular hypothesis, the less likely she should be to change her mind.
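To make this concrete, here is a minimal Python sketch of the dice game as recapped above (the function names and the particular sequence of rolls are my own illustration, not taken from the book's experiments): a trick die showing either a 0 or a 3 is rolled together with two ordinary dice, and we only ever see the total.

```python
from fractions import Fraction

def likelihood(total, trick_value):
    """P(observed total | the trick die shows trick_value), with two fair dice."""
    need = total - trick_value  # what the two regular dice must add up to
    ways = sum(1 for d1 in range(1, 7) for d2 in range(1, 7) if d1 + d2 == need)
    return Fraction(ways, 36)

def posterior_trick_is_0(totals, prior=Fraction(1, 2)):
    """Bayesian posterior that the trick die shows 0 rather than 3, given the totals seen."""
    p0, p3 = prior, 1 - prior
    for total in totals:
        p0 *= likelihood(total, 0)
        p3 *= likelihood(total, 3)
    return p0 / (p0 + p3)

print(float(posterior_trick_is_0([5, 6, 5, 6, 5])))      # ≈ 0.998: confident the trick die is a 0
print(float(posterior_trick_is_0([5, 6, 5, 6, 5, 10])))  # ≈ 0.995: one anomalous 10 barely moves it
```

The lone high total nudges the posterior only slightly: a confident Bayesian needs a run of contrary evidence, not a single anomalous roll, before changing her mind.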
We have devised laboratory experiments to test this prediction and examine how the processing of new information is handled at a neural level. In these studies, we have extended our usual measure of metacognition—getting people to make difficult decisions about what they see on a computer screen—by allowing volunteers to see additional information after they have made a decision. In one version of this experiment, people are asked to say which way a cloud of noisy dots is moving. They are then shown another cloud of dots that is always moving in the same direction as the first, but which might be more or less noisy, and are then asked how confident they feel about their original decision.
A Bayesian observer solves this task by summing up the evidence samples obtained before and after a decision (specifically, by adding up the log of the ratio of the probabilities of each hypothesis) and then comparing this total against the decision that was actually made. By running computer simulations of these equations, we were able to make a specific prediction for the pattern of activity we should see in regions of the brain involved in revising our beliefs. If you were correct, the new evidence samples serve to confirm your initial choice, and the probability of being right goes up (we didn’t try to trick our participants in this experiment). But if you were incorrect, then the new evidence samples disconfirm your initial choice, and the probability of being right goes down. The activity of a brain region engaged in updating our beliefs based on new evidence, then, should show opposite relationships with the strength of new evidence, depending on whether we were initially right or wrong.
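A bare-bones sketch of this idea might look as follows (a deliberate simplification with hypothetical numbers, not the model reported in the paper); the evidence seen before and after the decision is summarized as log-likelihood ratios favoring rightward over leftward motion:

```python
import math

def confidence_in_choice(pre_llr, post_llr, choice):
    """Sketch (not the published model): probability that the chosen direction was
    correct, after pooling the evidence seen before and after the decision
    (log-likelihood ratios for 'right' over 'left')."""
    total = pre_llr + post_llr
    p_right = 1.0 / (1.0 + math.exp(-total))
    return p_right if choice == "right" else 1.0 - p_right

# Initially correct: strong new evidence confirms the choice and confidence rises...
print(confidence_in_choice(pre_llr=1.0, post_llr=2.0, choice="right"))   # ≈ 0.95
# ...initially wrong: equally strong new evidence disconfirms it and confidence falls.
print(confidence_in_choice(pre_llr=1.0, post_llr=-2.0, choice="right"))  # ≈ 0.27
```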
By scanning people with fMRI while they are doing this task, we have been able to identify activity patterns in the brain that show precisely this neural signature. We were most interested in the cases in which people got their initial decision about the dots wrong, but then were presented with more information that they could use to reverse their choice. These results show that when the new information comes to light—when the new patch of dots comes on the screen—it triggers activation in the dorsal anterior cingulate cortex (dACC), and this activation pattern shows the signature predicted by the Bayesian model. The dACC is the same region that, as we saw in Chapter 2, is involved in error detection. Our study of changes of mind suggests a more nuanced interpretation of the role of the dACC in metacognition—rather than simply announcing that we have made an error, it instead tracks how much we should update our beliefs based on new evidence.3
These results indicate that, just as tracking uncertainty is important for perceiving the world, it is also important for striking the right balance between being fixed and flexible in our beliefs. We can adapt our cake-batter analogy of Bayes’s theorem to explain how uncertainty shapes whether or not we change our minds. Recall that the cake batter represents the data coming in, and the mold represents our prior belief about the world. Imagine that now the cake batter is of a medium consistency—not too runny and not too solid—and that we have two different molds, one made of thin, flexible rubber and another made of hard plastic. The flexibility of the mold represents how certain we are about our current beliefs about the world—for instance, whether the trick die is a 0 or a 3, or whether the dots are moving to the left or right. Thinner rubber—less confident beliefs—will tend to conform to the weight of the incoming batter, obliterating the original shape of the mold, whereas the hard plastic retains its shape, and you end up with a mold-shaped cake. The hard plastic is the equivalent of a confident belief: it retains its shape regardless of what the data throws at it.
This is all well and good, and a rational Bayesian can gracefully trade off current beliefs against new data. But the snag is that, as we have seen in Part I, our confidence may become unmoored from the accuracy of our models of the world. If we are overconfident, or think we have more reliable information than we do, we run the risk of not changing our minds when we should. In contrast, if we are underconfident, we may remain indecisive even when the way forward is clear. More generally, poor metacognition can leave us stuck with decisions, beliefs, and opinions that we should have reversed or discarded long ago.
The impact of confidence on changes of mind was the subject of a series of experiments led by Max Rollwage, a former PhD student in my group. He devised a variant of our task in which people were shown a cloud of moving dots on a computer screen and asked to decide whether they were moving to the left or right. After their initial choice, they had a chance to see the dots again—and, in some cases, this led people to change their minds. The clever part of the experiment was that by manipulating the properties of the first set of dots, Max could make people feel more confident in their initial choice, even if their performance remained unchanged (this is a version of the “positive evidence” effect we encountered in Part I). We found that these heightened feelings of confidence led to fewer changes of mind, exactly as we would expect if metacognitive feelings are playing a causal role in guiding future decisions about whether to take on board new information. However, now these feelings of confidence had become decoupled from the accuracy of people’s decisions.4
Another prominent influence on how we process new evidence is confirmation bias. This refers to the fact that after making a choice, the processing of new evidence that supports our choice tends to be amplified, whereas evidence that goes against our choice tends to be downweighted. Confirmation bias has been documented in settings ranging from medical diagnoses to investment decisions and opinions on climate change. In one experiment, people were asked to bet on whether the list prices of various houses shown on a real estate website were more or less than a million dollars. They were then shown the opinion of a fictitious partner who either agreed or disagreed with their judgment about the house price and asked whether they would like to change their bet. The data showed that people became substantially more confident in their opinions when the partner agreed with them, but only slightly less confident when the partner disagreed. This asymmetry in the processing of new evidence was reflected in the activity profile of the dACC.5
At first glance, this pattern is not easy to square with Bayes’s theorem, as a good Bayesian should be sensitive to new evidence regardless of whether they agree with it. But there is a further twist in this story. In our studies, Max found that this bias against disconfirmatory evidence is also modulated by how confident we feel in our initial decision. In this experiment, we used a technique known as magnetoencephalography (MEG), which can detect very small changes in the magnetic field around the heads of our volunteers. Because neurons communicate by firing tiny electrical impulses, it is possible to detect the telltale signs of this activity in subtle shifts in the magnetic field. By applying techniques borrowed from machine learning, it is even possible to decode features of people’s thinking and decision-making from the spatial pattern of these changes in the magnetic field. In our experiment, we were able to decode whether people thought that a patch of moving dots was going to the left or to the right. But we found that this decoding differed according to how confident people were in their decision. If they were highly confident, then any evidence that went against their previous decision was virtually un-decodable. It was as if the brain simply did not care about processing new evidence that contradicted a confident belief—a confidence-weighted confirmation bias.6
When we put all this data together, it leads to an intriguing hypothesis: perhaps seemingly maladaptive phenomena such as confirmation bias become beneficial when paired with good metacognition. The logic here is as follows: If holding high confidence tends to promote a bias toward confirmatory information, then this is OK, as long as I also tend to be correct when I am confident. If, on the other hand, I have poor metacognition—if I am sometimes highly confident and wrong—then on these occasions I will tend to ignore information that might refute my (incorrect) belief and have problems in building up a more accurate picture of the world.7
One consequence of this view is that there should be a tight coupling between people’s metacognitive sensitivity and their ability to reconsider and reverse their decisions when they have made an error. We have directly tested this hypothesis in an experiment conducted online on hundreds of volunteers. People started off by completing a simple metacognition assessment: deciding which of two boxes on a computer screen contained more dots and rating their confidence in these decisions. We then gave them a variant of the information-processing task I described above. After making an initial decision, they saw another glimpse of the dots and were asked again to rate their confidence in their choice. In two separate experiments, we have found that people who have good metacognition in the first task tend to be the ones who are more willing to change their minds after making an error in the second task, demonstrating a direct link between self-awareness and more careful, considered decision-making.8
This power of metacognition in promoting a change in worldview can be naturally accommodated within a Bayesian framework. Our opinions can, as we have seen, be held with various levels of confidence or precision. For instance, I might have a very precise belief that the sun will rise tomorrow, but a less precise belief about the science on GM foods. The precision or confidence we attach to a particular model of the world is a metacognitive estimate, and as our cake-batter analogy highlights, how much we change our minds about something is crucially dependent on our current confidence. But the science of self-awareness also tells us that confidence is prone to illusions and biases. When metacognition is poor, being able to know when or whether to change our minds also begins to suffer.
From Perception to Value
Much of the research we have encountered so far in this book has focused on situations in which there is an objectively correct answer: whether a stimulus is tilted to the left or right, or whether a word was in a list we have just learned. But there is another class of decisions that are often closer to those we make in everyday life, and which don’t have an objectively correct answer but are instead based on subjective preferences. Neuroscientists refer to these as value-based decisions, to contrast them against perceptual decisions. A perceptual decision would be whether I think the object on the kitchen table is more likely to be an apple or an orange, whereas a value-based decision would be whether I would prefer to eat that same apple or orange. The former can be correct or incorrect (I might have misperceived the apple as an orange from a distance, for instance) whereas it would be strange to say that I have incorrectly decided to eat the apple. No one can tell me I’m wrong about this decision; they instead should assume that I simply prefer apples over oranges. It would therefore seem strange to tell myself that I’m wrong about what I want. Or would it?
This was the question that I began to tackle with my friend and UCL colleague Benedetto De Martino in 2011, just after I moved to New York to start my postdoc. On trips back to London, and over multiple Skype calls, we debated and argued over whether it was even sensible for the brain to have metacognition about value-based choices. The key issue was the following: if people “knew” that they preferred the orange over the apple, surely they would have chosen it in the first place. There didn’t seem to be much room for metacognition to get into the game.
The problem we studied was one familiar to behavioral economists. Say you are choosing dessert in a restaurant, and you have the opportunity to choose between two similarly priced ice creams, one with two scoops of vanilla and one of chocolate, and another with two scoops of chocolate and one of vanilla. If you choose the predominantly chocolate ice cream, we can infer that your internal ranking of preferences is for chocolate over vanilla; you have revealed your preference through your choice. If we were to then give you choices between all possible pairs and mixtures of ice creams, it would be possible to reconstruct a fairly detailed picture of your internal preferences for ice cream flavors just from observing the pattern of your choices.
We can make this situation more formal by assigning a numerical value to these internal (unobserved) preferences. Say I reveal through my choices that chocolate is worth twice as much to me as vanilla; then I can write that U_chocolate = 2 × U_vanilla, where U stands for an abstract quantity, the “utility” or value of the ice cream I will eventually eat (in these cases, we can think of utility as referring to all the subjective benefits I will accrue from the ice cream—including the taste, calories, and so on—minus the costs, such as worrying about a change in waistline). We can also define a sense of confidence, C, in our choice of flavor. It seems intuitive that when the difference in the utility between the two options becomes greater, then our confidence in making a good decision also increases. If I strongly prefer chocolate, I should be more confident about choosing one scoop of chocolate over one of vanilla, even more confident about choosing two scoops, and most confident of all about being given a voucher for unlimited purchases of chocolate ice cream. As the difference in value increases, the decision gets easier. We can write down this assumption mathematically as follows:
C ∝ |U_A − U_B|
It says that our confidence is proportional to the absolute difference in value between the two options.
The problem is that this equation never allows us to think we have made an error—we can never be more confident about the option we didn’t choose. This intuitive model, then, is inconsistent with people being able to apply metacognition to value-based choices; confidence is perfectly aligned with (and defined by) the values of the items we are deciding about. This seemed odd to us. The more we thought about it, the more we realized that metacognition about value-based choices was not only possible but central to how we live our lives. We began to see this kind of metacognition everywhere.
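To make the limitation concrete, here is a tiny sketch of the intuitive model (the utilities and the function are hypothetical illustrations of mine, not taken from our experiments):

```python
def choice_and_confidence(u_a, u_b):
    """Intuitive model: pick the higher-utility option; confidence scales with
    the absolute difference in utilities, i.e. C is proportional to |U_A - U_B|."""
    choice = "A" if u_a >= u_b else "B"
    confidence = abs(u_a - u_b)
    return choice, confidence

# Chocolate (A) worth twice as much to me as vanilla (B):
print(choice_and_confidence(u_a=2.0, u_b=1.0))  # ('A', 1.0)
# By construction, confidence always attaches to the chosen option, so this model
# can never express "I chose A but suspect I actually preferred B."
```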
Consider choosing whether to take a new job. This is a value-based choice between option A (your existing job) and option B (the new job). After thinking carefully through the pros and cons—your colleagues, promotion opportunities, commute, and so on—you might come up with an overall sense that B is better. You decide on leaving, and carefully draft both resignation and acceptance letters. But then a wave of regret and second-guessing hits. Have you actually made the right choice? Wouldn’t you have been better staying where you are? These are metacognitive thoughts about whether you have made a good decision—that is, metacognition about value-based choices. This kind of self-endorsement of our choices is a key aspect of decision-making, and it can have profound consequences for whether we decide to reverse or undo such decisions.
Together with our colleagues Neil Garrett and Ray Dolan, Benedetto and I set out to investigate people’s self-awareness about their subjective choices in the lab. In order to apply the statistical models of metacognition that we encountered in Chapter 4, we needed to get people to make lots of choices, one after the other, and rate their confidence in choosing the best option—a proxy for whether they in fact wanted what they chose. We collected a set of British snacks, such as chocolate bars and crisps, and presented people with all possible pairs of items to choose between (hundreds of pairs in total). For instance, on some trials you might be asked to choose whether you prefer the Milky Way to the Kettle Chips, and on other trials whether you prefer the Lion Bar to the Twirl, or the Twirl to the Kettle Chips. We made sure that these choices mattered in several ways. First, one of the decisions was chosen at random to be played out for real, and people could eat the item they chose. Second, people were asked to fast for four hours before coming to the lab, so they were hungry. And third, they were asked to stay in the lab for an hour after the study, and the only thing they could eat was one of the snacks they chose in the experiment.
We next explored people’s metacognition by applying statistical models that estimate the link between accuracy—whether they were actually right or wrong—and confidence. The snag in the value-based choice experiments was that it was difficult to define what “accurate” meant. How could we tell whether people really intended to choose the Lion Bar over the Twirl?
As a first attempt, we asked people to state how much money they would be willing to pay for each snack after the experiment (again, to ensure people were incentivized to state their real willingness to pay, we made sure that this had consequences: they had a better chance of getting a snack to eat if they indicated a higher price). For instance, you might bid £0.50 for the Lion Bar, but £1.50 for the Twirl, a clear statement that you prefer Twirls to Lion Bars. We then had both of the components we needed to estimate the equations above: people’s confidence in each decision and a (subjective) value for each item.
The first thing we found in the data was that people tended to be more confident about easier decisions, just as the standard model predicts. What was more surprising, though, was that even when the value difference was the same—when two decisions were subjectively equally difficult—sometimes people’s confidence was high, and sometimes it was low. When we dug into the data, we found that when people were highly confident, they were more likely to have chosen the snack they were willing to pay more for. But when they were less confident, they sometimes chose the snack that was less valuable to them. It seemed that people in our experiment were aware of making subjective errors—cases in which they chose the Twirl, but realized that, actually, they preferred the Lion Bar after all.
We also used fMRI to track the neural basis of people’s decision-making process. We found that, in line with many other studies of subjective decisions, the values of different snacks were tracked by brain activity in the ventromedial PFC. This same region also showed higher activation when people were more confident in their choices. In contrast, the lateral frontopolar cortex—a brain region that, as we saw in Part I, is important for metacognitive sensitivity—tracked the confidence people had in their choices, but was relatively insensitive to their value. In other words, people have a sense of when they are acting in line with their own values, and this self-knowledge may share a similar neural basis to metacognition about other types of decisions.9
These experiments reveal that there is a real sense in which we can “want to want” something. We have an awareness of whether the choices we are making are aligned with our preferences, and this sense of confidence can be harnessed to ensure that, over the long run, we end up both wanting what we choose and choosing what we want. In fact, later on in our experiment, participants encountered the exact same pairs of snacks a second time, allowing us to identify cases in which they switched their choice from one snack to another. When people had low confidence in an initial choice, they were more likely to change their mind the second time around—allowing them to make choices that were, in the end, more in line with their preferences.10
A Delicate Balance
Our research on metacognition and changes of mind suggests that, in fact, being willing to admit to having low confidence in our decisions can often prove adaptive. By being open to change, we become receptive to new information that might contradict our existing view, just as in the case of Mark Lynas. And, as we have seen, this is most useful when our metacognition is accurate. We want to be open to changing our minds when we are likely to be wrong but remain steadfast when we are right. In this way, good metacognition nudges us into adopting a more reflective style of thinking, and protects us against developing inaccurate beliefs about the world.
For instance, consider the following question:
A bat and a ball cost £1.10 in total. The bat costs £1.00 more than the ball. How much does the ball cost?
The intuitive answer—one that is given by a high proportion of research participants—is 10p. But a moment’s thought tells us that this can’t be right: if the bat cost £1.00 more than 10p, it alone would cost £1.10, which means that the bat and the ball together would cost £1.20. By working back through the sum, we find that, actually, the answer is 5p.
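Writing the price of the ball as x (in pounds), the sum runs as follows:

x + (x + 1.00) = 1.10, so 2x = 0.10, and x = 0.05, that is, 5p.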
This question is part of the Cognitive Reflection Test (CRT) developed by the psychologist Shane Frederick. One difficulty with getting the right answers to questions such as this is that they have been designed to maximize metacognitive illusions. They generate a high feeling of confidence in knowing the answer despite accuracy being poor. And if our initial feeling of confidence is high, we might blurt out “10p” without pausing to reconsider or change our minds.11
People’s scores on the CRT reliably predict performance in other endeavors that prize rational, reflective thought, including understanding of scientific topics, rejection of paranormal ideas, and the ability to detect fake news. One interpretation of these associations is that the CRT is tapping into a style of self-aware thinking that promotes the ability to know when we might be wrong and that new evidence is needed. These statistical associations remain even when controlling for general cognitive ability, suggesting that self-reflectiveness as measured by the CRT, like measures of metacognitive sensitivity, may be distinct from raw intelligence.12
Detailed studies of why CRT failures occur suggest that people would do better if they listened to initial feelings of low confidence in their answers and took time to reconsider their decisions. But there is another force at work that cuts in the opposite direction and tends to make us overconfident, irrespective of whether we are right or wrong. This is the fact that projecting confidence and decisiveness holds a subjective allure in the eyes of others. Despite all the benefits of knowing our limits and listening to feelings of low confidence, many of us still prefer to be fast, decisive, and confident in our daily lives—and prefer our leaders and politicians to be the same. What explains this paradox?
One clue comes from an elegant study by the political scientists Dominic Johnson and James Fowler. They set up a computer game in which large numbers of simulated characters were allowed to compete for limited resources. As in standard evolutionary simulations, those who won the competitions tended to acquire more fitness and be more likely to survive. Each of the characters also had an objective strength or ability that made them more or less likely to win the competition for resources. The twist was that, here, the decision about whether or not to compete for a resource was determined by the character’s metacognitive belief about its ability—its confidence—rather than actual ability. And this confidence level could be varied in the computer simulations, allowing the researchers to create and study both underconfident and overconfident agents.
Intriguingly, in most scenarios, overconfident agents tended to do a bit better. This was especially the case when the benefit of gaining a resource was high, and when there was uncertainty about the relative strength of different agents. The idea is that overconfidence is adaptive because it encourages you to engage in fights in situations where you might have otherwise demurred. As the old saying goes, “You have to be in it to win it.” In this way, the benefits of a dose of overconfidence for decision-making are similar to the benefits of self-efficacy for learning. It can become a self-fulfilling prophecy.13
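To see how this can play out, here is a toy re-creation of the idea (not Johnson and Fowler's actual model or code; the resource value, fighting cost, noise level, and function names are assumptions of mine): each agent claims a resource if its biased estimate of its own strength exceeds its noisy estimate of the opponent's.

```python
import random

def average_payoff(my_bias, rival_bias=0.0, resource=3.0, cost=1.0,
                   noise=1.0, rounds=200_000, seed=1):
    """Toy sketch (not the published simulations): average payoff for an agent
    whose self-belief = true strength + my_bias. If only one agent claims the
    resource, it takes it unopposed; if both claim, they fight, the truly
    stronger one wins, and both pay the cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        me, rival = rng.gauss(0, 1), rng.gauss(0, 1)              # true strengths
        i_claim = (me + my_bias) > (rival + rng.gauss(0, noise))  # confidence, not ability, drives the decision
        rival_claims = (rival + rival_bias) > (me + rng.gauss(0, noise))
        if i_claim and rival_claims:
            total += (resource if me > rival else 0.0) - cost     # contested: fight and pay
        elif i_claim:
            total += resource                                     # uncontested: free gain
    return total / rounds

for bias in (-0.5, 0.0, 0.5, 1.0):
    print(f"bias={bias:+.1f}  mean payoff={average_payoff(bias):.3f}")
```

With a resource that is valuable relative to the cost of fighting and plenty of uncertainty about the opponent, the overconfident agents come out ahead in this toy version, echoing the flavor of the published result.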
People who are more confident indeed seem to achieve greater social status and influence. In one experiment in which people had to collaborate to pinpoint US cities on a map, overconfident individuals were perceived as more competent by their partners, and this overconfidence was associated with more respect and admiration. When videotapes of the experiment were rated afterward, more confident individuals spoke more, used a more assertive tone of voice, and exhibited a calm and relaxed demeanor. As the authors of this study wryly concluded, “Overconfident individuals more convincingly displayed competence cues than did individuals who were actually competent.”14
Projecting decisiveness rather than caution is also liked and respected in our leaders and politicians, whereas admitting mistakes is often taken as a sign of weakness. From lingering too long over the menu at a restaurant to abrupt U-turns by politicians, flip-flopping does not have a good reputation. In the autumn of 2007, the incumbent prime minister of the UK, Gordon Brown, was enjoying high popularity ratings. He had just taken over the leadership of the Labour Party from Tony Blair, and he had deftly dealt with a series of national crises including terrorist plots. All indications were that he was going to coast to victory in an election—a poll that was his alone to call. But his very public decision to postpone going to the polls tarred him with a reputation for dithering and indecisiveness, and his authority began to crumble. In the 2004 US presidential election, the Democratic candidate, John Kerry, was similarly plagued by accusations of flip-flopping. In one famous remark, he tried to explain his voting record on funding for the military in the Middle East by saying, “I actually did vote for the $87 billion, before I voted against it.”
There is a delicate balancing act here. The evolutionary simulations confirm a benefit of overconfidence in situations in which different individuals are competing for a limited resource. A subtle boost to confidence can also make us more competitive and likeable in the eyes of others. But these studies do not incorporate the potential downsides of overconfidence for monitoring our own decision-making. With too much overconfidence, as we have seen, we lose the benefits of an internal check on whether we are right or wrong.
So does this mean that we are damned if we do and damned if we don’t? Do we have to choose between being confident but unreflective leaders or meek, introspective followers?
Luckily, there is a middle road, one that takes advantage of the benefits of being aware of our weaknesses while strategically cultivating confidence when needed. The argument goes as follows: If I have the capacity for self-awareness, then I can be ready and willing to acknowledge when I might be wrong. But I can also strategically bluff, upping my confidence when needed. Bluffing is only truly bluffing when we have some awareness of reality. If not, it is just blind overconfidence. (I remember playing poker as a student with a friend who was only just getting the hang of the rules. He went all in on a worthless hand and bluffed everyone into folding. This would have been a stunningly confident and impressive move, had he actually known that he was bluffing!)15
This kind of strategic metacognition requires a split in our mental architecture, a dissociation between the confidence that we feel privately and the confidence that we signal to others. For example, we may deliberately overstate our confidence in order to persuade others, or understate it in order to avoid responsibility for potentially costly errors. In a recent experiment conducted by Dan Bang, a postdoctoral researcher in my group, we have gotten a glimpse of the neural machinery that might coordinate strategic metacognition. Dan set up a scenario in which people needed to collaborate with another fictional “player” to jointly judge the direction of a cloud of randomly moving dots on the computer screen, similar to those we had used in our experiments on changes of mind. The twist was that the players were engineered to have different levels of confidence. Some players tended to be underconfident, and others tended to be overconfident. But the rules of the game said that the most confident judgment would be taken as the group’s decision—similar to the person who shouts loudest in meetings, dominating the proceedings. This meant that when collaborating with less-confident players, it was useful to strategically reduce your confidence (to avoid dominating the decision), whereas when playing with more-confident ones, it was better to shout a bit louder to ensure your voice was heard.
The volunteers in our experiment got this immediately, naturally shifting their confidence to roughly match that of their partners. We then looked at the patterns of brain activity in the prefrontal regions we knew were important for metacognition. In the ventromedial PFC, we found activation that tracked people’s private sense of confidence in their decision. It was affected by how difficult the judgment was, but not by who they were playing with. In contrast, in the lateral frontopolar cortex, we found neural signals that tracked how much people needed to strategically adjust their confidence when playing with different partners. These findings also help us further understand the distinction between implicit and explicit metacognition we encountered in Part I. Implicit signals of confidence and uncertainty seem to be tracked at multiple stages of neural processing. But the ability to strategically use and communicate confidence to others may depend on brain networks centered on the frontopolar cortex—networks that are uniquely expanded in the human brain and take a while to mature in childhood.16
It takes courage to adopt a metacognitive stance. As we have seen, publicly second-guessing ourselves puts us in a vulnerable position. It is perhaps no surprise, then, that some of the world’s most successful leaders put a premium on strategic metacognition and prolonged, reflective decision-making. Effective leaders are those who both are aware of their weaknesses and can strategically project confidence when it is needed. As Ray Dalio recounts in his best-selling book Principles, “This episode taught me the importance of always fearing being wrong, no matter how confident I am that I’m right.”17
In 2017, Amazon’s letter to its shareholders was the usual mix of ambitious goals and recounted milestones that characterizes many global companies. But it, too, stood out for its unusual focus on self-awareness: “You can consider yourself a person of high standards in general and still have debilitating blind spots. There can be whole arenas of endeavor where you may not even know that your standards are low or nonexistent, and certainly not world class. It’s critical to be open to that likelihood.” Amazon CEO Jeff Bezos practices what he preaches, being famous for his unusual executive meetings. Rather than engaging in small talk around the boardroom table, he instead mandates that executives engage in a silent thirty-minute “study session,” reading a memo that one of them has prepared in advance. The idea is to force each person to think about the material, form their own opinion, and reflect on what it means for them and the company. With his shareholder letter and the unusual meeting setup, it is clear that Bezos places high value on individual self-awareness—not only being well-informed, but also knowing what you know and what you do not.18
For Bezos, self-awareness is important for the same reason that it is important to sports coaches. By gaining self-awareness, we can recognize where there is room for improvement. Further on in the 2017 letter, he notes, “The football coach doesn’t need to be able to throw, and a film director doesn’t need to be able to act. But they both do need to recognize high standards.”
This focus on metacognition espoused by many successful individuals and companies is likely to be no accident. It enables them to be agile and adaptive, realizing their mistakes before it’s too late and recognizing when they may need to improve. As we saw in the story of Charmides, the Greeks considered sophrosyne—the living of a balanced, measured life—as being grounded in effective self-knowledge. By knowing what we want (as well as knowing what we know), we can reflectively endorse our good decisions and take steps to reverse or change the bad ones. It’s often useful to be confident and decisive and project an air of reassurance toward others. But when the potential for errors arises, we want leaders with good metacognition, those who are willing to quickly recognize the danger they are in and change course accordingly. Conversely, self-awareness helps us be good citizens in a world that is increasingly suffused with information and polarized in opinion. It helps us realize when we don’t know what we want, or when we might need more information in order to figure this out.
Just as in the cases of Jane, Judith, and James at the start of the book, if I have poor metacognition about my knowledge or skills or abilities, my future job prospects, financial situation, or physical health might suffer. In these isolated cases, failures of self-awareness are unlikely to affect others. But, as we have seen, the impact of metacognition is rarely limited to any one individual. If I am working with others, a lack of self-awareness may result in network effects—effects that are difficult to anticipate at the level of the individual but can have a detrimental impact on teams, groups, or even institutions. In the next chapter, we are going to expand our focus from individuals making decisions alone to people working together in groups. We will see that effective metacognition not only is central to how we reflect on and control our own thinking, but it also allows us to broadcast our mental states to others, becoming a catalyst for human collaboration of all kinds.