Know Thyself: The Science of Self-Awareness - Stephen M Fleming 2021
Collaborating and Sharing
The Power of Reflection
Consciousness is actually nothing but a network of connections between man and man—only as such did it have to develop: a reclusive or predatory man would not have needed it.
—FRIEDRICH NIETZSCHE, The Joyous Science
The increasing specialization of human knowledge means that an ability to work together on a global scale has never been more important. It’s rare for one individual to have all the expertise needed to build a plane, treat a patient, or run a business. Instead, humans have succeeded in all these endeavors largely thanks to an ability to collaborate, sharing information and expertise as and when required. This ability to share and coordinate with others requires us to keep track of who knows what. For instance, when my wife and I go on vacation, I know that she will know where to look for the sunscreen. Conversely, she knows that I will know where to find the beach towels.
We have already seen that many animals have a capacity for implicit metacognition. But it seems likely that only humans can explicitly represent the contents of their own minds and the minds of others. This capacity for self-awareness and the usefulness of language are mutually reinforcing. Our linguistic abilities are clearly central to our ability to collaborate and share ideas. But language is not enough. Human language would be no more than an elaborate version of primitive alarm calls were it not for our ability to broadcast and share our thoughts and feelings. Monkeys use a set of calls and gestures to share information about what is going on in the world, such as the location of food sources and predators. But they do not, as far as we know, use their primitive language to communicate mental states, such as the fact that they are feeling fearful or anxious about a predator. No matter how baroque or complex linguistic competence becomes, without self-awareness we cannot use it to tell each other what we are thinking or feeling.1
The central role of self-awareness in working with others has a downside, though. It means that effective collaboration often depends on effective metacognition, and, as we have seen, there are many reasons why metacognition may fail. Subtle shifts in self-awareness then become significant when magnified at the level of groups and societies. In this chapter, we will see how metacognition plays a pivotal role in our capacity to coordinate and collaborate with others in fields as diverse as sports, law, and science—and see how the science of self-awareness can supercharge the next wave of human ingenuity.
Two Heads Are Better than One
Consider a pair of hunters stalking a deer. They crouch next to each other in the long grass, watching for any sign of movement. One whispers to the other, “I think I saw a movement over to the left.” The other replies, “I didn’t see that—but I definitely saw something over there. Let’s press on.” In this situation, most of us would defer to the person who is more confident in what they saw.
Confidence is a useful currency for communicating strength of belief in these scenarios. On sports fields around the world, professional referees are used to pooling their confidence to come to a joint decision. As a boy growing up in the north of England, like all my friends I was obsessed with football (soccer). I vividly remember the European Championship of 1996, which was notable for the “Three Lions” song masterminded by David Baddiel, Frank Skinner, and the Lightning Seeds. This record—which, unusually for a football song, was actually pretty good—provided the soundtrack to a glorious summer. It contained the memorable line, “Jules Rimet still gleaming / Thirty years of hurt.” Jules Rimet refers to the World Cup trophy, and “thirty years” reminded us that England hadn’t won the World Cup since 1966.
Those lyrics prompted me to find out what happened back in 1966 at the famous final against West Germany at Wembley in London. Going into extra time, the game was tied at 2–2, and the whole country was holding its breath. A home World Cup was poised for a fairytale ending. On YouTube, you can find old TV footage showing Alan Ball crossing the ball to Geoff Hurst, the England striker. Shooting from close range, Hurst hits the underside of the crossbar and the ball bounces down onto the goal line. The England players think Hurst has scored, and they wheel away in celebration.
It was the linesman’s job to decide whether the ball had crossed the line for a goal. The linesman that day was Tofiq Bahramov, from Azerbaijan in the former Soviet Union. Bahramov’s origins provide some extra spice to the story, because West Germany had just knocked the USSR out of the competition in the semifinals. Now Bahramov was about to decide the fate of the game while four hundred million television viewers around the world looked on. Without recourse to technology, he decided the ball was over the line, and England was awarded the crucial goal. Hurst later added another to make it 4–2 in the dying seconds, but by then fans were already flooding the field in celebration.
Today’s professional referees are linked up by radio microphones, and those in some sports such as rugby and cricket can request an immediate review of the TV footage before making their decision. What would have been the outcome had Bahramov communicated a degree of uncertainty about what he saw? It is not hard to imagine that, absent any other influence on the referee’s decision process, a more doubtful linesman might have swung the day in Germany’s favor.
As we have seen, however, sharing our confidence is only useful if it’s a reliable marker of the accuracy of our judgments. We would not want to work with someone who is confident when they are wrong, or hesitant when they are right. The importance of metacognition for making collective decisions has been elegantly demonstrated in the lab in studies by Bahador Bahrami and his colleagues. Pairs of individuals were asked to view briefly flashed stimuli, each on their own computer monitor. Their task was to decide whether the first or second flash contained a slightly brighter target. If they disagreed, they were prompted to discuss their reasons why and arrive at a joint decision. These cases are laboratory versions of the situation in a crowded Wembley: How do a referee and linesman arrive at a joint assessment of whether the ball crossed the line?
The results were both clear-cut and striking. First, the researchers quantified each individual’s sensitivity when detecting targets alone. This provides a baseline against which to assess any change in performance when decisions were made with someone else. Then they examined the joint decisions in cases where participants initially disagreed. Remarkably, in most cases, decisions made jointly were more accurate than equivalent decisions made by the better member of the pair working alone. This is known as the “two heads are better than one” (2HBT1) effect and can be explained as follows: Each individual provides some information about where the target was. A mathematical algorithm for combining these pieces of information weights each piece by its reliability, leading to a joint accuracy that is greater than the sum of its parts. Because people intuitively communicate their confidence in their decisions, pairs can approximate this reliability weighting, and the 2HBT1 effect emerges.2
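To make the arithmetic concrete, here is a minimal simulation of this reliability-weighting idea, written in Python. The sensitivities and the weighting rule are illustrative assumptions for the sketch, not Bahrami’s actual data or analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivities (d') of two observers on the flash task.
d_a, d_b = 1.0, 1.5
n_trials = 100_000

# The target is in the first (-0.5) or second (+0.5) interval; each
# observer sees it through their own Gaussian noise (sd = 1 / d').
signal = rng.choice([-0.5, 0.5], size=n_trials)
x_a = signal + rng.normal(0, 1 / d_a, n_trials)
x_b = signal + rng.normal(0, 1 / d_b, n_trials)

# Ideal combination weights each observer's evidence by its reliability
# (inverse variance) -- the statistical analogue of sharing confidence.
joint = d_a**2 * x_a + d_b**2 * x_b

for label, x in [("A alone", x_a), ("B alone", x_b), ("joint  ", joint)]:
    accuracy = np.mean(np.sign(x) == np.sign(signal))
    print(f"{label}: {accuracy:.3f}")
```

Running this shows joint accuracy beating even the better observer’s accuracy alone, because the pair’s effective sensitivity approaches the square root of the sum of the two squared individual sensitivities.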
Pairs of decision makers in these experiments tend to converge on a common, fine-grained currency for sharing their confidence (using phrases such as “I was sure” or “I was very sure”), and those who show greater convergence also show more collective benefit. There are also many implicit cues to what other people are thinking and feeling; we don’t always need to rely on what they say. People tend to move more quickly and decisively when they are confident, and in several languages confident statements are marked by more emphatic intonation and louder or faster speech. There are even telltale hints of confidence in email and social media. In one experiment simulating online messaging, participants who were more confident in their beliefs tended to message first, and messaging first was predictive of how persuasive the message was. These delicate, reciprocal, and intuitive metacognitive interactions allow groups of individuals to subtly modulate each other’s impression of the world.3
Mistaken Identification
It is perhaps rare that we are asked, explicitly, to state how confident we are in our beliefs. But there is one high-stakes arena in which the accuracy of our metacognitive statements takes on critical importance. In courts of law around the world, witnesses take the stand to declare that they saw, or did not see, a particular crime occur. Often, confident eyewitness reports can be enough to sway a jury. But given everything we have learned about metacognition so far, we should not be surprised if what people say does not always match up with the truth.
In February 1987, eighteen-year-old Donte Booker was arrested for an incident involving a toy gun. Recalling that a similar toy gun was involved in a still-unsolved rape case, a police officer put Booker’s photograph into a photo lineup. The victim confidently identified him as the attacker, which, together with the circumstantial evidence of the toy gun, was enough to send Booker to jail for twenty-five years. He was granted parole in 2002 after serving fifteen years of his sentence, and he started his life over again while battling the stigma of being a convicted sex offender. It wasn’t until January 2005 that DNA testing definitively showed that Booker was not the rapist. The real attacker was an ex-convict whose DNA profile was obtained during a robbery he committed while Booker was already in prison.
Cases such as these are unfortunately common. The Innocence Project, a US public-policy organization dedicated to exonerating people who were wrongfully convicted, estimates that mistaken identification contributed to approximately 70 percent of the more than 375 wrongful convictions in the United States overturned by post-conviction DNA evidence (and these are only the convictions proven wrongful; the real frequency of mistaken identification is likely much higher). Mistaken identifications are often due to failures of metacognition: eyewitnesses believe that their memory of an event is accurate and report unreasonably high confidence in their identifications. Jurors take such eyewitness confidence to heart. In one mock-jury study, researchers manipulated different factors associated with the crime, such as whether the defendant was disguised or how confident the eyewitness was in their recall of the relevant details. Strikingly, the confidence of the eyewitness was the most powerful predictor of a guilty verdict. In similar studies, eyewitness confidence has been found to influence the jury more than the consistency of the testimony or even expert opinion.4
All this goes to show that good metacognition is central to the legal process. If the eyewitness has good metacognition, then they will be able to appropriately separate out occasions when they might be mistaken (communicating lower confidence to the jury) from occasions when they are more likely to be right. It is concerning, therefore, that studies of eyewitness memory in the laboratory have shown that metacognition is surprisingly poor.
In a series of experiments conducted in the 1990s, Thomas Busey, Elizabeth Loftus, and colleagues set out to study eyewitness confidence in the lab. Busey and Loftus asked participants to remember a list of faces and afterward to judge whether a given face had been on the list or was novel. Participants were also asked to indicate their confidence in each decision. So far, this is a standard memory experiment, but the researchers introduced an intriguing twist. Half of the photographs were presented dimly lit on the computer screen, whereas the other half were presented brightly lit. This is analogous to a typical police lineup—often a witness catches only a dim glimpse of an attacker at the crime scene but is then asked to pick him or her out of a brightly lit lineup. The results were clear-cut and unsettling. Increasing the brightness of the face during the “lineup” phase decreased accuracy in identification but increased subjects’ confidence that they had got the answer right. The authors concluded that “subjects apparently believe (slightly) that a brighter test stimulus will help them, when in fact it causes a substantial decrease in accuracy.”5
This influence of light levels on people’s identification confidence is another example of a metacognitive illusion. People feel that they are more likely to remember a face when it is brightly lit, even if this belief is inaccurate. Researchers have found that eyewitnesses also incorporate information from the police and others into their recollection of what happened, skewing their confidence further as time passes.
What, then, can we do about these potentially systemic failures of metacognition in our courtrooms? One route to tackling metacognitive failure is by making judges and juries more aware of the fragility of self-awareness. This is the route taken by the state of New Jersey, which in 2012 introduced jury instructions explaining that “although some research has found that highly confident witnesses are more likely to make accurate identifications, eyewitness confidence is generally an unreliable indicator of accuracy.” A different strategy is to identify the conditions in which people’s metacognition is intact and focus on them. Here, by applying the science of self-awareness, there is some reason for optimism. The key insight is that witness confidence is typically a good predictor of accuracy at the time of an initial lineup or ID parade but becomes poorer later on during the trial, after enough time has elapsed and people become convinced about their own mistaken ID.6
Another approach is to provide information about an individual’s metacognitive fingerprint. For instance, mock jurors find witnesses who are confident about false information less credible. We tend to trust people who have better metacognition, all else being equal. We have learned that when they tell us something with confidence, it is likely to be true. The problem arises in situations where it is not possible to learn about people’s metacognition—when we are interacting with them only a handful of times, or if they are surrounded by the anonymity of the Internet and social media. In those scenarios, confidence is king.
By understanding the factors that affect eyewitnesses’ self-awareness, we can begin to design our institutions to ensure their testimony is a help, rather than a hindrance, to justice. For instance, we could ask the witness to take a metacognition test and report their results to the judge and jury. Or we could ensure that numerical confidence estimates are routinely recorded at the time of the lineup and read back to the jury to ensure that they are reminded about how the witness felt about their initial ID, when metacognition tends to be more accurate. Putting strict rules and regulations in place is easier to do in situations such as courts of law. But what about in the messy back-and-forth of our daily lives? Is there anything we can do to ensure we do not fall foul of self-awareness failures in our interactions with each other?7
The Right Kind of Ignorance
If someone has poor metacognition, then their confidence in their judgments and opinions will often be decoupled from reality. Over time, we might learn to discount their opinions on critical topics. But if we are interacting with someone for the first time, we are likely to give them the benefit of the doubt, listening to their advice and opinions, and assuming that their metacognition is, if not superlative, at least intact. I hope that, by now, you will be more cautious before making this assumption—we have already seen that metacognition is affected by variation in people’s stress and anxiety levels, for instance. Metacognitive illusions and individual variability in self-awareness mean that it is wise to check ourselves before taking someone’s certainty as an indicator of the correctness of their opinions.
This is all the more important when we are working with people for the first (or perhaps only) time. When we have only one or two data points—for instance, a single statement of high confidence in a judgment—it is impossible to assess that person’s metacognitive sensitivity. We cannot know whether they have good metacognition and are expressing high confidence because they are actually likely to be correct, or whether they have poor metacognition and assert high confidence regardless.
The lawyer Robert Rothkopf and I have studied these kinds of mismatches in the context of legal advice. Rob runs a litigation fund that invests in cases such as class actions. His team often needs to decide whether or not to invest capital in a new case, depending on whether the litigation is likely to succeed. Rob noted that lawyers presenting cases for investment often used verbal descriptions of the chances of success: phrases like “reasonable prospects” or a “significant likelihood” of winning. But what do they really mean by this?
To find out, Rob and I issued a survey to 250 lawyers and corporate clients around the world, asking them to assign a percentage to verbal descriptions of confidence such as “near certainty,” “reasonably arguable,” and “fair chance.” The most striking result was the substantial variability in the way people interpreted these phrases. For instance, the phrase “significant likelihood” was associated with probabilities ranging from below 25 percent to near 100 percent.8
The implications of these findings are clear. First, verbal labels of confidence are relatively imprecise and cover a range of probabilities, depending on who is wielding the pen. Second, different sectors of a profession might well be talking past each other if they do not have a shared language to discuss their confidence in their beliefs. And third, without knowing something about the accuracy of someone’s confidence judgments over time, isolated statements need to be taken with considerable caution. To avoid these pitfalls, Rob’s team now requires lawyers advising them to use numerical estimates of probabilities instead of vague phrases. They have also developed a procedure that requires each member of the team to score their own predictions about a case independently before beginning group discussions, to precisely capture individual confidence estimates.9
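As a toy illustration of the shift from phrases to numbers, here is one simple way a team’s private estimates might be pooled once each member has expressed them as probabilities. The log-odds averaging rule is a generic assumption for the sketch, not the fund’s actual procedure:

```python
import math

def pool_probabilities(probs):
    """Pool independent probability estimates by averaging in log-odds
    space -- one common aggregation rule (an illustrative assumption,
    not the procedure Rothkopf's team actually uses)."""
    logits = [math.log(p / (1.0 - p)) for p in probs]
    mean_logit = sum(logits) / len(logits)
    return 1.0 / (1.0 + math.exp(-mean_logit))

# Hypothetical estimates of a case's chance of success, recorded
# privately before any group discussion.
team_estimates = [0.55, 0.70, 0.40, 0.65]
print(f"Pooled chance of success: {pool_probabilities(team_estimates):.2f}")
```

Unlike a round-table discussion, pooling private numbers in this way prevents the first confident voice in the room from anchoring everyone else.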
Scientists are also not immune to collective failures of metacognition. Millions of new scientific papers are published each year, with tens of thousands within my field of psychology alone. It would be impossible to read them all. And it would probably not be a good idea to try. In a wonderful book entitled Ignorance, the Columbia University neuroscientist Stuart Firestein argues persuasively that when facing a deluge of scientific papers, talking about what you don’t know, and cultivating ignorance about what remains to be discovered, is a much more important skill than simply knowing the facts. I try to remind students in my lab that while knowledge is important, in science a critical skill is knowing enough to know what you don’t know!10
In fact, science as a whole may be becoming more and more aware of what it does and does not know. A 2015 study found that out of one hundred findings published in leading psychology journals, only thirty-nine could be successfully reproduced. A more recent replication of twenty-one high-profile studies published in Science and Nature found a slightly better return—62 percent—but one that should still be concerning to social scientists wishing to build on the latest findings. This replication crisis takes on deeper meaning for students just embarking on their PhDs, who are often tasked with repeating a key study from another lab as a jumping-off point for new experiments. When apparently solid findings begin to disintegrate, months and even years can be wasted chasing down results that don’t exist. But it is increasingly recognized that, although some findings in science are flaky and unreliable, many researchers already knew this! In bars outside scientific conferences and meetings, it’s common to overhear someone saying, “Yeah, I saw the latest finding from Bloggs et al., but I don’t think it’s real.”11
In other words, scientists seem to have finely tuned bullshit detectors, but we have allowed their influence to be drowned out by a slavish adherence to the process of publishing traditional papers. The good news is that the system is slowly changing. Julia Rohrer at the Max Planck Institute for Human Development in Berlin has set out to make it easier for people to report a change of mind about their data, spearheading what she calls the Loss-of-Confidence Project. Researchers can now fill out a form explaining why they no longer trust the results of an earlier study that they themselves conducted. The logic is that the authors know their study best, and so are in the best place to critique it. Rohrer hopes the project will “put the self back into self-correction” in science and make things more transparent for other researchers trying to build on the findings.12
In addition, by loosening the constraints of traditional scientific publishing, researchers are becoming more used to sharing their data and code online, allowing others to probe and test their claims as a standard part of the peer-review process. By uploading a time-stamped document outlining the predictions and rationale for an experiment (known as preregistration), scientists can keep themselves honest and avoid making up stories to explain sets of flaky findings. There is also encouraging evidence that when scientists own up to getting things wrong, the research community responds positively, seeing them as more collegiate and open rather than less competent.13
Another line of work aims to create “prediction markets” in which researchers can bet on which findings they think will replicate. The Social Sciences Replication Project team set up a stock exchange in which volunteers could buy or sell shares in each study under scrutiny, based on how reproducible they expected it to be. Each participant in the market started out with $100, and their final earnings were determined by how much they bet on the findings that turned out to replicate. Their choices of which studies to bet on enabled the researchers to quantify how much meta-knowledge the community had about its own work. Impressively, these markets did a very good job of predicting which studies would turn out to be robust and which wouldn’t. The traders were able to tap into features like the weakness of particular statistical results or small sample sizes—features that don’t always preclude publication but raise quiet doubts in the heads of readers.14
This collective metacognition is likely to become more and more important as science moves forward. Einstein had a powerful metaphor for the paradox of scientific progress. If we imagine the sum total of scientific knowledge as a balloon, then as we inflate the balloon, its surface expands and touches more and more of the unknown outside it. The more knowledge we have, the more we realize we do not know, and the more important it is to ask the right questions. Science is acutely reliant on self-awareness—it depends on individuals being able to judge the strength of evidence for a particular position and to communicate it to others. Stuart Firestein refers to this as “high-quality” ignorance, to distinguish it from the low-quality ignorance typified by knowing very little at all. He points out that “if ignorance… is what propels science, then it requires the same degree of care and thought that one accords data.”
Creating a Self-Aware Society
We have seen that effective collaboration in situations ranging from a sports field to a court of law to a science lab depends on effective metacognition. But our social interactions are not restricted to handfuls of individuals in workplaces and institutions. Thanks to social media, each of us now has the power to share information and influence thousands, if not millions, of individuals. If our collective self-awareness is skewed, then the consequences may ripple through society.
Consider the role metacognition may play in the sharing of fake news on social media. If I see a news story that praises my favored political party, I may be both less likely to reflect on whether it is indeed real and more likely to just mindlessly forward it on. If others in my social network view my post, they may infer (incorrectly) that because I am usually a reliable source of information on social media, they don’t need to worry about checking the source’s accuracy—a case of faulty mindreading. Finally, due to the natural variation of trait-level metacognition in the population, just by chance some people will be less likely to second-guess whether they have the right knowledge on a topic and perhaps be more prone to developing extreme or inaccurate beliefs. What started out as a minor metacognitive blind spot may quickly snowball into the mindless sharing of bad information.15
To provide a direct test of the role of metacognition in people’s beliefs about societal issues, we devised a version of our metacognition tasks that people in the United States could do over the Internet, and we also asked volunteers to fill out a series of questions about their political views. From the questionnaire data, we could extract a set of numbers that told us both where people sat on the political spectrum (from liberal to conservative) and how dogmatic or rigid they were in these views. For instance, dogmatic people tended to strongly agree with statements such as “My opinions are right and will stand the test of time.” There was variability in dogmatism across a range of political views; it was possible to be relatively centrist in one’s politics and still dogmatic. However, the most dogmatic individuals were found at both the left and right extremes of the political spectrum.
In two samples of over four hundred people, we found that one of the best predictors of holding dogmatic political views—believing that you are right and everyone else is wrong—was a lack of metacognitive sensitivity about simple perceptual decisions. Being dogmatic did not make you any worse at the task, but it did make you worse at knowing whether you were right or wrong in choosing which of two boxes contained more dots. This lack of metacognition also predicted the extent to which people would ignore new information and be unwilling to change their minds, especially when receiving information that indicated they were wrong about their original judgment. It’s important to point out that this relationship was not specific to one or other political view. Dogmatic Republicans and dogmatic Democrats were equally likely to display poor metacognition. Instead, people who were less self-aware were more likely to have dogmatic opinions about political issues of all kinds.16
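For readers curious how metacognitive sensitivity can be scored, one simple index asks how well a person’s confidence ratings discriminate their correct trials from their errors. The sketch below computes the type-2 ROC area on made-up data; studies like ours typically use related model-based measures such as meta-d′, so treat this as an illustrative stand-in:

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC: the probability that a randomly chosen
    correct trial carries higher confidence than a randomly chosen error
    (0.5 = no metacognitive insight, 1.0 = perfect insight)."""
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    conf_correct = conf[correct]
    conf_error = conf[~correct]
    # Compare every correct trial's confidence with every error's;
    # ties count as half.
    greater = (conf_correct[:, None] > conf_error[None, :]).mean()
    ties = (conf_correct[:, None] == conf_error[None, :]).mean()
    return greater + 0.5 * ties

# Made-up data: 1 = chose the box with more dots, confidence on a 1-6 scale.
accuracy   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
confidence = [5, 6, 2, 4, 3, 5, 3, 2, 6, 4]
print(f"type-2 AUROC: {type2_auroc(accuracy, confidence):.2f}")
```

On an index like this, a dogmatic participant would tend to score closer to 0.5: their confidence carries little information about whether they are right or wrong.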
In a subsequent study, we wanted to go a step further and ask whether subtle distortions in metacognition might also affect people’s decisions to seek out information in the first place. As we saw in the case of Jane studying for her exam, if our metacognition is poor we might think we know something when we don’t and stop studying before we should. We wondered whether a similar process might be at work when people decide whether to seek out new information about topics such as politics and climate change. To investigate this, we adjusted our experiment slightly so that people were now also asked whether they would like to see the stimulus again when they were unsure of the right answer. Doing so meant a small amount was deducted from their earnings in the experiment, but we made sure this cost was outweighed by the number of points on offer for a correct answer. We found that, on average, people chose to see the stimulus again more often when they were less confident, exactly as we would expect from a tight link between metacognition and information seeking. However, people who were more dogmatic about political issues were less likely to seek out new information, and their decisions to do so were less informed by their confidence.17
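The logic linking confidence to information seeking can be captured in a one-line expected-value rule: pay for another look only when the expected gain in reward outweighs the cost. The payoffs below are illustrative, not the actual numbers used in the experiment:

```python
def should_seek(p_now, p_after, reward, cost):
    """Seek more information when the expected gain in reward from
    improved accuracy exceeds the cost of looking again.
    All payoff values here are illustrative assumptions."""
    return (p_after - p_now) * reward > cost

# A well-calibrated observer at 55% confidence should pay a small cost
# to see the stimulus again; at 95%, another look isn't worth it.
for p in (0.55, 0.75, 0.95):
    print(p, should_seek(p_now=p, p_after=0.98, reward=10, cost=1))
```

A participant with poor metacognition breaks this rule in both directions: feeling certain when wrong, they decline information they need, and feeling doubtful when right, they pay for information they don’t.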
These results indicate that poor metacognition may have quite general effects. By collecting confidence ratings on a simple dot-counting task, we could quantify people’s capacity for metacognition in isolation from the kinds of emotional or social influences that often come along with decisions about hot-button issues such as politics. And yet, in all our studies so far, poor metacognition predicted whether people would hold extreme beliefs better than other, more traditional predictors in political science such as gender, education, or age.
This does not mean that specific factors play no role in shaping how people reflect on and evaluate their beliefs. There may be some areas of knowledge that are particularly susceptible to metacognitive blind spots. The climate scientist Helen Fischer has quantified German citizens’ self-awareness of various scientific topics, including climate change. People were given a series of statements—such as, “It is the father’s gene that decides whether the baby is a boy or a girl,” “Antibiotics kill viruses as well as bacteria,” and “Carbon dioxide concentration in the atmosphere has increased more than 30 percent in the past 250 years”—and asked whether or not each was supported by science (the correct answers for these three statements are yes, no, and yes). The volunteers also gave a confidence rating in their answers so that their metacognition could be quantified. People’s metacognition tended to be quite good for general scientific knowledge. Even when they got questions wrong, they tended to know that they were likely to be wrong, giving low confidence in their answers. But for climate change knowledge, metacognition was noticeably poorer, even when controlling for differences in the accuracy of people’s answers. It is not hard to see how such skewed metacognition might help drive the sharing of incorrect information on social media, causing ripples of fake news within a social network.18
Many conflicts in society arise from disagreements about fundamental cultural, political, and religious issues. These conflicts become magnified when people are convinced that they are right and the other side is wrong. In contrast, what psychologists call intellectual humility—recognizing that we might be wrong and being open to corrective information—helps us defuse these conflicts and bridge ideological gaps. Self-awareness is a key enabler of intellectual humility, and, when it is working as it should, it provides a critical check on our worldview.19
Thankfully, as we will see, there are ways of cultivating self-awareness and promoting it in our institutions and workplaces. By understanding the factors that affect metacognitive prowess, we can capitalize on the power of self-awareness and avoid the pitfalls of metacognitive failure. For instance, by engaging in regular social interaction, team members can intuitively apply mindreading and metacognition to adapt to each other’s communication style and avoid metacognitive mismatches when under pressure. For high-stakes decisions, such as whether or not to take on a new legal case or make a major business deal, submitting private, numerical estimates of confidence can increase the accuracy of a group’s predictions. The process of communicating science or interrogating eyewitnesses can be structured to encourage a healthy degree of doubt and skepticism—just as with New Jersey’s instructions to its judges. Those in positions of leadership—from lawyers to professors to football referees—can recognize that confidence is not always an indicator of competence and ensure that all voices, not just the loudest, are heard. Successful businesspeople—from Dalio to Bezos—know this. Innovative lawyers and scientists know this too.20
More broadly, collective self-awareness allows institutions and teams to change and innovate—to have autonomy over their futures, rather than mindlessly continuing on their current path. In the next chapter, we are going to return to the level of the individual to see that the same is also true of ourselves.