
Know Thyself: The Science of Self-Awareness, Stephen M. Fleming (2021)

Billions of Self-Aware Brains
Building Minds That Know Themselves

The biggest danger, that of losing oneself, can pass off in the world as quietly as if it were nothing; every other loss, an arm, a leg, five dollars, a wife, etc., is bound to be noticed.

—SØREN KIERKEGAARD, The Sickness unto Death

On February 12, 2002, then US secretary of defense Donald Rumsfeld was asked a question by NBC correspondent Jim Miklaszewski about evidence that the Iraqi government had weapons of mass destruction. Rumsfeld’s response was to become famous:

As we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

The idea of known and unknown unknowns is usually applied to judgments about the external world (such as weapons or economic risks). Rumsfeld’s argument was influential in persuading the United States to invade Iraq, after both the White House and UK government agreed it was too dangerous to do nothing about one particular unknown unknown: the ultimately illusory weapons of mass destruction. But we can also apply the same categories to judgments about ourselves, providing a tool for quantifying self-awareness.

This is a strange idea the first time you encounter it. A useful analogy is the index of a book. Each of the entries in the index usually points to the page number containing that topic. We can think of the index as representing the book’s knowledge about itself. Usually the book’s metacognition is accurate—the entries in the index match the relevant page number containing that topic. But if the index maker has made a mistake and added an extra, irrelevant entry, the book’s metacognition will be wrong: the index will “think” that it has pages on a topic when it actually doesn’t. Similarly, if the index maker has omitted a relevant topic, the book will have information about something that the index does not “know” about—another form of inaccurate metacognition.

In a similar way to the index of a book, metacognitive mechanisms in the human mind give us a sense of what we do and don’t know. There are some things we know, and we know that we know them (the index matches the book), such as an actor’s belief that he’ll be able to remember his lines. There are other things we know that we don’t know, or won’t be able to know, such as that we’re likely to forget more than a handful of items on a shopping list unless we write them down. And just like the unknown unknowns in Rumsfeld’s taxonomy, there are also plenty of cases in which we don’t know that we don’t know—cases in which our self-awareness breaks down.

The measurement and quantification of self-awareness has a checkered history in psychology, although some of the field’s initial pioneers were fascinated by the topic. In the 1880s, Wilhelm Wundt began collecting systematic data on what people thought about their perceptions and feelings, spending thousands of hours in the lab painstakingly recording people’s judgments. But, partly because the tools to analyze this data had not yet been invented, the resulting research papers on what became known as introspectionism were criticized for being unreliable and not as precise as other branches of science. This led to a schism among the early psychologists. In one camp sat the behaviorists, who argued that self-awareness was irrelevant (think rats in mazes). In the other camp sat the followers of Freud, who believed in the importance of self-awareness but thought it better investigated through a process of psychoanalysis rather than laboratory experiments.1

Both sides were right in some ways and wrong in others. The behaviorists were right that psychology needed rigorous experiments, but they were wrong that people’s self-awareness in these experiments did not matter. The Freudians were right to treat self-awareness as important and something that could be shaped and changed, but wrong to ground their approach in storytelling rather than scientific experiments. Paradoxically, to create a science of self-awareness, we cannot rely only on what people tell us. By definition, if you have poor metacognition, you are unlikely to know about it. Instead, a quantitative approach is needed.2

One of the first such attempts to quantify the accuracy of metacognition was made in the 1960s by a young graduate student at Stanford named Joseph Hart. Hart realized that people often think they know more than they can currently recall, and this discrepancy provides a unique window onto metacognition. For instance, if I ask you, “What is Elton John’s real name?” you might have a strong feeling that you know the answer, even if you can’t remember it. Psychologists refer to these feelings as “tip of the tongue” states, as the answer feels as though it’s just out of reach. Hart found that the strength of these feelings in response to a set of quiz questions predicted whether people would be able to subsequently recognize the correct answers. In other words, people have an accurate sense of knowing that they know, even if they cannot recall the answer.3

Hart’s approach held the key to developing quantitative measures of self-awareness. His work showed it was possible to collect systematic data on people’s judgments about themselves and then compare these judgments to the reality of their cognitive performance. For instance, we can ask people questions such as:

• Will you be able to learn this topic?

• How confident are you about making the right decision?

• Did you really speak to your wife last night or were you dreaming?

All these questions require judging the success of another cognitive process (specifically, learning, decision-making, and memory). In each case, it is then possible to assess if our judgments track our performance: we can ask whether judgments of learning relate to actual learning performance, whether judgments of decision confidence relate to the likelihood of making good decisions, and so on. If we were able to observe your confidence over multiple different occasions and record whether your answers were actually right or wrong, we could build up a detailed statistical picture of the accuracy of your metacognition. We could summarize your responses using the following table:

                        Answer correct    Answer incorrect
    High confidence           A                  C
    Low confidence            B                  D


The relative proportion of judgments that fall into each box in the table acts as a ruler by which we can quantify the accuracy of your metacognition. People with better metacognition will tend to rate higher confidence when they’re correct (box A) and lower confidence when they make errors (box D). In contrast, someone who has poorer metacognition may sometimes feel confident when they’re actually wrong (box C) or not know when they’re likely to be right (box B). The more As and Ds you have, and the fewer Bs and Cs, the better your metacognition—what we refer to as having good metacognitive sensitivity. Metacognitive sensitivity is subtly but importantly different from metacognitive bias, which is the overall tendency to be more or less confident. While on average I might be overconfident, if I am still aware of each time I make an error (the Ds in the table), then I can still achieve a high level of metacognitive sensitivity. We can quantify people’s metacognitive sensitivity by fitting statistical models to their confidence ratings and estimating parameters with names such as meta-d’ and Φ. Ever more sophisticated models are being developed, but they ultimately all boil down to quantifying the extent to which our self-evaluations track whether we are actually right or wrong.4
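For readers who want to see the arithmetic, here is a minimal sketch in Python of how such a score can be computed from raw trial data. The data are simulated rather than taken from any real study, and the measure used, the so-called type 2 AUROC, is a simple nonparametric cousin of model-based measures such as meta-d’: it estimates the probability that a randomly chosen correct trial carries higher confidence than a randomly chosen incorrect one.

```python
# A minimal sketch: quantify metacognitive sensitivity from trial-by-trial
# accuracy and confidence. All data below are simulated for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=1)
n_trials = 400

# Simulate ~71% accuracy (the level an adaptive staircase typically targets)
correct = rng.random(n_trials) < 0.71

# Simulate an observer whose 1-6 confidence ratings tend to be higher on
# correct trials, i.e., someone with decent metacognitive sensitivity.
confidence = np.clip(
    np.round(3.5 + 1.2 * np.where(correct, 1, -1) + rng.normal(0, 1.5, n_trials)),
    1, 6,
)

# Counts for the four boxes in the table (splitting confidence at its median)
high = confidence > np.median(confidence)
A = np.sum(high & correct)     # high confidence, correct
B = np.sum(~high & correct)    # low confidence, correct
C = np.sum(high & ~correct)    # high confidence, incorrect
D = np.sum(~high & ~correct)   # low confidence, incorrect
print(f"A={A}  B={B}  C={C}  D={D}")

# Type 2 AUROC: 0.5 means confidence carries no information about accuracy;
# 1.0 means confidence perfectly separates correct from incorrect trials.
print(f"metacognitive sensitivity (type 2 AUROC): "
      f"{roc_auc_score(correct.astype(int), confidence):.2f}")
```

Note that accuracy itself is held near 71 percent here; the sensitivity score reflects only how well confidence tracks accuracy, which is exactly what separates metacognition from first-order performance.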

What Makes One Person’s Metacognition Better than Another’s?

When I was starting my PhD in cognitive neuroscience at UCL in 2006, brain imaging studies such as those we encountered in the previous chapter were beginning to provide clear hints about the neural basis of self-awareness. What was lacking, however, were the tools needed to precisely quantify metacognition in the lab. In the first part of my PhD, I dabbled in developing these tools as a side project, while spending the majority of my time learning how to run and analyze brain imaging experiments. It wasn’t until my final year that a chance discussion made me realize that we could combine neuroscience with this toolkit for studying self-awareness.

On a sunny July day in 2008, I had lunch in Queen Square with Rimona Weil, a neurologist working on her PhD in Geraint Rees’s group at our Centre. She told me how she and Geraint were interested in what made individuals different from one another, and whether such differences were related to measurable differences in brain structure and function. In return, I mentioned my side project on metacognition—and almost simultaneously we realized we could join forces and ask what it is about the brain that made one person’s metacognition better than another’s. I was unaware then that this one project would shape the next decade of my life.

Rimona and I set out to relate individual differences in people’s metacognition to subtle differences in their brain structure, as a first attempt to zero in on brain networks that might be responsible for metacognitive ability. In our experiment, people came into the laboratory for two different tests. First, they sat in a quiet room and performed a series of difficult judgments about visual images. Over an hour of testing, people made hundreds of decisions as to whether the first or second flashed image on a computer screen contained a slightly brighter patch. After every decision, they indicated their confidence on a six-point scale. If people made a lot of mistakes, the computer automatically made the task a bit easier. If they were doing well, the computer made the task a bit harder. This ensured everyone performed at a similar level and allowed us to focus on measuring their metacognition—how well they could track moment-to-moment changes in their performance.
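The adaptive procedure itself is simple enough to sketch in a few lines of code. The particular rule below, in which two correct responses in a row make the task harder and any error makes it easier, is the classic "two-down/one-up" staircase that converges on roughly 71 percent accuracy; the exact rule and step size used in our experiment are assumptions here.

```python
# A sketch of an adaptive staircase that keeps everyone near the same
# accuracy level by nudging task difficulty after each response.
import random

def update_difficulty(contrast, was_correct, streak, step=0.01):
    """Two-down/one-up rule: converges near 71% correct."""
    if was_correct:
        streak += 1
        if streak == 2:                      # two correct in a row: harder
            return max(contrast - step, step), 0
        return contrast, streak
    return contrast + step, 0                # any error: easier

# Toy demo with a simulated observer who does better at higher contrast.
contrast, streak = 0.20, 0
for trial in range(200):
    was_correct = random.random() < min(0.5 + 4 * contrast, 1.0)
    contrast, streak = update_difficulty(contrast, was_correct, streak)
print(f"contrast after 200 trials: {contrast:.3f}")  # settles near threshold
```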

This gave us two scores for each person: how adept they were at making visual discriminations and how good their metacognition was. Despite people performing equally well on the primary task, we still observed plenty of variation in their metacognitive sensitivity.

In a second part of the experiment, the same people came back to the lab and had their brains scanned using MRI. We collected two types of data: The first scan allowed us to quantify differences in the volume of gray matter (the cell bodies of neurons) in different regions of the brain. The second scan examined differences in the integrity of white matter (the connections between brain regions). Given the findings in other studies of patients with brain damage, we hypothesized that we would find differences in the PFC related to metacognition. But we didn’t have any idea of what these differences might be.

The results were striking. People with better metacognition tended to have more gray matter in the frontal pole (also known as the anterior prefrontal or frontopolar cortex)—a region of the PFC toward the very front of the brain. They also had greater white matter integrity in bundles of fibers projecting to and from this region. Together, these findings suggest that we may have stumbled upon part of a brain circuit playing a role in supporting accurate self-awareness.5

This data is difficult to collect—involving many hours of painstaking psychophysics and brain scanning—but once it is assembled the analysis is surprisingly swift. One of the most widely used (and freely available) software packages for analyzing brain imaging data, called SPM, has been developed at our Centre. No one had previously looked for differences in the healthy brain related to metacognition, and there was a good chance of finding nothing. But after writing hundreds of lines of computer code to process the data, all that was needed was a single mouse click in SPM to see whether we had any results at all. It was a thrilling feeling when statistical maps, rather than blank brains, started to emerge on the screen.

[Figure: The frontal pole, part of the prefrontal cortex]

Our research was exploratory, and voxel-based morphometry, as this technique is known, is a coarse and indirect measure of brain structure. In hindsight, we now know that our sample size was probably too small, or underpowered, for this kind of experiment. Statistical power refers to whether an experiment is likely to find an effect, given that it actually exists. The number of samples you need is governed by the size of the effect you expect to find. For instance, to determine whether men are, on average, taller than women, I would perhaps need to sample fifteen to twenty of each to gain confidence in the difference and iron out any noise in my samples. But to establish that children are smaller than adults of either gender (a bigger effect size), I would need to sample fewer of each category. In the case of brain imaging, it’s now appreciated that effect sizes tend to be small, especially for differences between individuals, and therefore much larger samples are needed than the ones we were studying only a few years ago.6
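The relationship between effect size and sample size is easy to make concrete. The sketch below uses the statsmodels Python package, and the effect sizes (expressed as Cohen’s d) are rough illustrative guesses rather than estimates from any particular study.

```python
# How many participants per group are needed for 80% power at alpha = 0.05?
# The effect sizes below are illustrative assumptions, not published values.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for label, d in [("very large effect (adults vs. children)", 2.0),
                 ("large effect (male vs. female height)", 1.0),
                 ("small effect (typical brain-behavior link)", 0.3)]:
    n = power.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"{label}: ~{n:.0f} per group")
```

Running this shows the steep cost of small effects: a handful of people per group suffices for the adult-versus-child comparison, fifteen to twenty for height, but well over a hundred per group for a typical brain-behavior correlation.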

It was reassuring, then, that other labs were finding convergent evidence for a role of the frontopolar cortex in metacognition. To have confidence in any finding in science, it is important to replicate it using multiple approaches. Coincidentally, Katsuki Nakamura’s lab in Japan published a similar study in the same year as ours, but using measurements of brain function rather than structure. They found that the level of activity in the frontopolar cortex predicted differences in metacognition across individuals. A few years later, my collaborator Hakwan Lau replicated our results in his lab at Columbia University in New York, showing again that gray matter volume in the frontal pole was higher in individuals with better metacognition.7

The frontal pole, at the very front of the PFC, is thought to sit near the top of the kind of processing hierarchies that we encountered in the previous chapter. It is likely no coincidence that the frontopolar cortex is also one of the most expanded brain regions in humans compared to other primates. Anatomical studies of human and macaque monkey brains by researchers at Oxford University have found that many of the same subregions of the PFC can be reliably identified in both species. But they have also isolated unique differences in the lateral frontal pole, where the human brain appears to have acquired a new way station.8

Since we completed these initial studies, my lab has now amassed several data sets quantifying volunteers’ metacognitive sensitivity on a variety of tasks. From this data, we are learning that there are surprisingly large and consistent differences in metacognition between individuals. One person may have limited insight into how well they are doing from one moment to the next, while another may have good awareness of whether they are likely to be right or wrong—even if they are both performing at the same level on the task. Another feature of metacognition is that, in controlled laboratory settings, it is relatively independent of other aspects of task performance. Your metacognition can still be in fine form as long as you recognize that you might be performing badly at the task (by having appropriately low confidence in your incorrect answers). This is the laboratory equivalent of recognizing a shoddy grasp of calculus or becoming painfully aware of disfluency in a new language. Self-awareness is often most useful for recognizing when we have done stupid things.

The general picture that emerges from research using these tools to quantify metacognition is that while people are often overconfident—thinking that they are better than others—they are also reasonably sensitive to fluctuations in their performance. Surveys routinely find that people think they are “better than average” in attributes ranging from driving skill to job performance to intelligence—an overconfident metacognitive bias. (Academics are some of the worst culprits: in one study, 94 percent of university professors in the United States rated their teaching performance as “above average”—a statistical impossibility!) But despite these generally inflated self-evaluations, we can still recognize when we have made a mistake on a test or put the car in the wrong gear.9

We have also found that metacognition is a relatively stable feature of an individual—it is trait-like. In other words, if you have good metacognition when I test you today, then you are also likely to have good metacognition when I test you tomorrow. The Argentinian neuroscientist Mariano Sigman refers to this as your metacognitive “fingerprint.”10 The trait-like nature of metacognition suggests that other features of people’s personality, cognitive abilities, and mental health might play a role in shaping self-awareness. In one of our studies, Tricia Seow and Marion Rouault asked hundreds of individuals to fill in a series of questionnaires about their mood, anxiety, habits, and beliefs. From the pattern of their answers, we could extract a set of numbers for each individual that indexed where they fell on three core dimensions of mental health: their levels of anxiety and depression, their levels of compulsive behavior, and their levels of social withdrawal. Where people fell along these three dimensions predicted their metacognitive fingerprint, measured on a separate task. More anxious people tended to have lower confidence but heightened metacognitive sensitivity, whereas more compulsive people showed the reverse pattern. This result is consistent with the idea that how we think about ourselves may fluctuate in tandem with our mental health.11

As part of this experiment, we also included a short IQ test. We found that while IQ was consistently related to overall task performance, as expected, it was unrelated to metacognitive sensitivity. Because our sample contained almost one thousand individuals, it is likely that if a systematic relationship between IQ and metacognition had existed, then we would have detected it. Another piece of evidence for a difference between intelligence and self-awareness comes from a study I carried out in collaboration with neuropsychologist Karen Blackmon in New York, while I was a postdoctoral researcher at NYU. We found that patients who had recently had surgery to remove tumors from their anterior PFC had similar IQ to a control group but showed substantial impairments in metacognitive sensitivity. It is intriguing to consider that while both self-awareness and intelligence may depend on the PFC, the brain circuits that support flexible thinking may be distinct from those supporting thinking about thinking.12

Constructing Confidence

The abstract nature of metacognition makes sense when we consider that the uppermost levels of the prefrontal hierarchy have access to a broad array of inputs from elsewhere in the brain. They have a wide-angle lens, pooling many different sources of information and allowing us to build up an abstract model of our skills and abilities. This implies that brain circuits involved in creating human self-awareness transcend both perception and action—combining estimates of uncertainty from our sensory systems with information about the success of our actions. The two building blocks of self-awareness we encountered at the start of the book are becoming intertwined.

Emerging evidence from laboratory experiments is consistent with this idea. For instance, subtly perturbing brain circuits involved in action planning using transcranial magnetic stimulation can alter our confidence in a perceptual judgment, even though our ability to make such judgments remains unchanged. Similarly, the simple act of committing to a decision—making an action to say that the stimulus was A or B—is sufficient to improve metacognitive sensitivity, suggesting that our own actions provide an important input to computations supporting self-awareness.13 The results of experiments with typists also show us that detecting our errors relies on keeping track of both the action we made (the key press) and the perceptual consequence of this action (the word that appears on the screen). If the experiment is rigged such that a benevolent bit of computer code kicks in to correct the typists’ errors before they notice (just like the predictive text on a smartphone), they slow down their typing speed—suggesting that the error is logged somewhere in their brain—but do not admit to having made a mistake. In contrast, if extra errors are annoyingly inserted on the screen—for instance, the word “word” might be altered to “worz” on the fly—the typists gullibly accept blame for errors they never made.14

More broadly, we can only recognize our errors and regret our mistakes if we know what we should have done in the first place but didn’t do. In line with this idea, the neuroscientists Lucie Charles and Stanislas Dehaene have found that the neural signatures of error monitoring disappear when the stimulus is flashed so briefly that it is difficult to see. This makes intuitive sense—if we don’t see the stimulus, then we are just guessing, and there is no way for us to become aware of whether we have made an error. We can only consciously evaluate our performance when the inputs (perception) and outputs (action) are clear and unambiguous.15

The wide-angle lens supporting metacognition means that the current state of our bodies also exerts a powerful influence on how confident we feel about our performance. For instance, when images of disgusted faces—which lead to changes in pupil size and heart rate—are flashed very briefly on the computer screen (so briefly that they are effectively invisible), people’s confidence in completing an unrelated task is subtly modulated. Similar cross talk between bodily states and self-awareness is seen when people are asked to name odors: they tend to be more confident about identifying smells they find emotionally laden (such as petrol or cloves) than those they rated as less evocative (such as vanilla or dill). We can think of these effects as metacognitive versions of the visual illusions we encountered in Chapter 1. Because different emotional and bodily states are often associated with feelings of uncertainty in daily life, manipulating emotions in the lab may lead to surprising effects on our metacognitive judgments.16

Together, these laboratory studies of metacognition tell us that self-awareness is subtly buffeted by a range of bodily and brain states. By pooling information arising from different sources, the brain creates a global picture of how confident it is in its model of the world. This global nature of metacognition endows human self-awareness with flexibility: over the course of a single day we may reflect on what we are seeing, remembering, or feeling, and evaluate how we are performing either at work or in a sports team. Evidence for this idea comes from studies that have found that people’s metacognition across two distinct tasks—such as a memory task and a general-knowledge task—is correlated, even if performance on the two tasks is not. Being able to know whether we are right or wrong is an ability that transcends the particular problem we are trying to solve. And if metacognition relies on a general resource, this suggests that when I reflect on whether my memory is accurate, I am using similar neural machinery as when I reflect on whether I am seeing things clearly, even though the sources of information for these two judgments are very different. Data from my lab supports this idea. Using brain imaging data recorded from the PFC, we were able to train a machine learning algorithm to predict people’s levels of confidence on a memory task (Which of two images do you remember?) using data on neural patterns recorded during a perception task (Which of two images is brighter?). The fact that we could do this suggests the brain tracks confidence using a relatively abstract, generic neural code.17
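The logic of that decoding analysis can be sketched schematically. Everything below is simulated; the point is only to illustrate what training a decoder on one task and testing it on another means, not to reproduce the actual analysis pipeline from the study.

```python
# Cross-task confidence decoding, schematically: train a classifier to read
# out high vs. low confidence from (simulated) prefrontal activity patterns
# in a perception task, then test it on patterns from a memory task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=7)
n_trials, n_voxels = 200, 50

# Assume a single "confidence axis" in voxel space shared by both tasks,
# which is what a generic neural code for confidence would imply.
confidence_axis = rng.normal(0, 1, n_voxels)

def simulate_task(n):
    conf = rng.integers(0, 2, n)                       # 0 = low, 1 = high
    patterns = rng.normal(0, 1, (n, n_voxels))         # background noise
    patterns += np.outer(conf - 0.5, confidence_axis)  # confidence signal
    return patterns, conf

X_perc, y_perc = simulate_task(n_trials)   # perception task
X_mem, y_mem = simulate_task(n_trials)     # memory task

decoder = LogisticRegression().fit(X_perc, y_perc)
print(f"cross-task decoding accuracy: {decoder.score(X_mem, y_mem):.2f}")
```

If confidence were coded in completely different ways for perception and memory, transfer accuracy would hover around chance (0.5); above-chance transfer is the signature of a task-general code.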

We can even think of personality features such as self-esteem and self-worth as sitting at the very top of a hierarchy of internal models, continuously informed by lower-level estimates of success in a range of endeavors, whether at work, at home, or on the sports field. In studies led by Marion Rouault, we have found that people’s “local” confidence estimates in completing a simple decision-making task are used to inform their “global” estimates of how well they thought they were performing overall. These expectations also affect how lower-level self-monitoring algorithms respond to various types of error signal. In an impressive demonstration of this effect, Sara Bengtsson has shown that when people are primed to feel clever (by reading sentences that contain words such as “brightest”), brain responses to errors made in an unrelated task are increased, whereas priming people to feel stupid (by reading words such as “moron”) leads to decreased error-related activity. This pattern of results is consistent with the primes affecting higher-level estimates about ourselves. When I feel like I should perform well (the “clever” context) but actually make errors, this is a bigger error in my prediction than when I expect to perform poorly.18

One appealing hypothesis is that the cortical midline structures we encountered in the previous chapter create a “core” or “minimal” representation of confidence in various mental states, which, via interactions with the lateral frontopolar cortex, supports explicit judgments about ourselves. There are various pieces of data consistent with this idea. Using a method that combines EEG with fMRI, one recent study has shown that confidence signals are seen early on in the medial PFC, followed by activation in the lateral frontal pole at the time when an explicit self-evaluation is required. We have also found that the extent to which medial and lateral prefrontal regions interact (via an analysis that examines the level of correlation in their activity profiles) predicts people’s metacognitive sensitivity.19

By carefully mapping out individuals’ confidence and performance on various tasks, then, we are beginning to build up a detailed picture of our volunteers’ self-awareness profile. We are also beginning to understand how self-awareness is supported by the brain. It seems that no two individuals are alike. But what is the origin of these differences? And is there anything we can do to change them?

Design Teams for Metacognition

The way our brains work is the result of a complex interaction between evolution (nature) and development (nurture). This is the case for even basic features of our mental and physical makeup; it’s almost never the case that we can say something is purely “genetic.” But it is useful to consider the relative strength of different influences. For many of the neuronal autopilots we encountered early in the book, genetic influences lead the way. Our visual systems can solve inverse problems at breakneck speed because they have been shaped by natural selection to be able to efficiently perceive and categorize objects in our environment. Over many generations, those systems that were better able to solve the problem of vision tended to succeed, and others died out.

But in other cases, the mechanisms sitting inside our heads are also shaped by our upbringing and education. For instance, our education systems create a cognitive process for reading—one that, if it goes to plan, results in neural systems configured for reading written words. Genetic evolution provides us with a visual system that can detect and recognize fine text, and our culture, parenting, and education do the rest.

Reading is an example of intentional design. We want to get our kids to read better, and so we send them to school and develop state educational programs to achieve this goal. But there are also examples of mental abilities that are shaped by society and culture in ways that are not intentional—that arise from the natural interactions between children and their parents or teachers. The Oxford psychologist Cecilia Heyes suggests that mindreading is one such process: we are (often unintentionally) taught to read minds by members of our social group, similar to the way that we are intentionally taught to read words. Mindreading is an example of a cognitive gadget that we acquire by being brought up in a community that talks about other people’s mental states.20

Evidence for this idea comes from studies comparing mindreading across cultures. In Samoa, for instance, it is considered impolite to talk about mental states (how you are feeling or what you are thinking), and in these communities children pass mindreading tests much later than children in the West, at around eight years of age. Perhaps the most persuasive evidence that mindreading is culturally acquired comes from an investigation of deaf users of Nicaraguan Sign Language. When this language was in its infancy, it only had rudimentary words for mental states, and people who learned it early on had a fairly poor understanding of false beliefs. But those who acquired it ten years later—when it was a mature language with lots of ways to talk about the mind—ended up being more adept at mindreading.21

A role for culture and parenting does not mean that genetics do not factor in, of course, or that there cannot be genetic reasons for failures of cultural learning. For instance, dyslexia has a heritable (genetic) component, which may lead to relatively general problems with integrating visual information. These genetic variations may go unnoticed in most situations, becoming apparent only in an environment that places a premium on visual skills such as reading.

By comparing the similarity of identical twins (who share the same DNA) with the similarity of fraternal twins (who share only portions of DNA, in the same way as ordinary siblings), it is possible to tease out the extra contribution the shared genetic code makes to different aspects of the twins’ mental lives. When this analysis was applied to mindreading (using data from more than one thousand pairs of five-year-old twins), the correlations in performance for identical and nonidentical twins were very similar. This result suggests that the main driver of variation in mindreading skill is not genetics, but the influence of a shared environment (sharing the same parents).22
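The arithmetic behind that inference is captured by Falconer’s classic formulas: identical twins share essentially all of their segregating DNA while fraternal twins share about half, so doubling the gap between the two correlations gives an estimate of heritability. The correlations below are illustrative numbers, not the actual values from the mindreading twin study.

```python
# Falconer's formulas for a twin study. r_mz and r_dz are the correlations
# in a trait between identical and fraternal twin pairs, respectively.
def falconer(r_mz, r_dz):
    h2 = 2 * (r_mz - r_dz)   # heritability: extra similarity from extra shared DNA
    c2 = r_mz - h2           # shared environment (same parents, same home)
    e2 = 1 - r_mz            # unshared environment plus measurement error
    return h2, c2, e2

# Nearly equal twin correlations (illustrative values) imply low heritability
# and a large shared-environment contribution, the pattern described above.
h2, c2, e2 = falconer(r_mz=0.52, r_dz=0.50)
print(f"heritability: {h2:.2f}, shared environment: {c2:.2f}, unshared: {e2:.2f}")
```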

Similarly, in the case of metacognition, a genetic starter kit may enable the implicit self-monitoring that is in place early in life, and then our parents and teachers take over to finish the job. Parents often help their children figure out what they are feeling or thinking by saying things like “Now you see me” and “Now you don’t” in peekaboo, or suggesting that the child might be tired or hungry in response to various cries. As children grow up, they may continue to receive feedback that helps them understand what they are going through: sports coaches might explain to student athletes that feelings of nervousness and excitement mean that they are in the zone, rather than that they are unprepared.23

This extended period of development and learning perhaps explains why metacognition changes throughout childhood, continuing into adolescence. In a study I carried out in collaboration with Sarah-Jayne Blakemore’s research group, we asked twenty-eight teenagers aged between eleven and seventeen to take the same metacognition test that Rimona and I developed: judging which image contained a brighter patch and rating their confidence in their judgment. For each volunteer, we then calculated how reliably their confidence ratings could tell apart their correct and incorrect judgments. We also measured how many correct judgments they made overall. We found that while age did not matter for the ability to choose the brighter patch, it did matter for their metacognition. Older teenagers were better at monitoring how well they were doing, reaching the highest level of self-awareness around the same time they were taking their A levels.24

The extended development of metacognition in adolescence is related to the fact that one of the key hubs for metacognition in the brain, the PFC, takes a long time to mature. A baby’s brain has more connections between cells (known as synapses) than an adult’s brain—around twice as many, in fact. Rather than building up the connections piece by piece, the brain gradually eliminates connections that are no longer required, like a sculpture emerging from a block of marble. Improvements in metacognition in childhood are strongest in individuals who show more pronounced structural changes in parts of the frontal lobe—in particular, the ventromedial PFC, the same region that we saw in the previous chapter is a nexus for both metacognition and mindreading. This protracted development of sensitivity to our own minds—through childhood and into our teenage years—is another piece of evidence that acquiring fully-fledged self-awareness is a long, arduous process, perhaps one that requires learning new tools for thinking.25

Losing Insight

Differences between individuals in the general population are relatively subtle, and large samples are needed in order to detect relationships between the brain and metacognition. But more extreme changes in self-awareness are unfortunately common in disorders of mental health. Neurologists and psychiatrists have different words to describe this phenomenon. Neurologists talk of anosognosia—literally, the absence of knowing. Psychiatrists refer to a patient’s lack of insight. In psychiatry in particular it used to be thought that lack of insight was due to conscious denial or other kinds of strategy to avoid admitting to having a problem. But there is growing evidence that lack of insight may be due to the brain mechanisms responsible for metacognition being themselves affected by brain damage or disorders of mental health.26

A striking example of one such case was published in 2009 in the journal Neuropsychologia. It told the story of patient “LM,” a highly intelligent sixty-seven-year-old lady, recently retired from a job in publishing. She had suddenly collapsed in a heap, unable to move her left leg and arm. Brain scans revealed that a stroke had affected the right side of her brain, leading to paralysis. When Dr. Katerina Fotopoulou, a clinician at UCL, went to examine her in the hospital, LM was able to move her right arm, gesture, and hold a normal conversation. But her left side was frozen in place.

Remarkably, despite her injury confining her to a hospital bed, LM believed she was just fine. Upon questioning, she batted away the challenge that she was paralyzed and breezily asserted that she could move her arm and clap her hands, attempting to demonstrate this by waving her healthy right hand in front of her body as if clapping an imaginary left hand. Despite being otherwise sane, LM had a view of herself markedly at odds with reality, and her family and doctors could not convince her otherwise. Her self-knowledge had been eroded by damage to her brain.27

In the case of neurological problems such as stroke or brain tumors, clinicians often describe the consequence of brain damage as attacking the very foundations of our sense of self. The British neurosurgeon Henry Marsh notes that “the person with frontal-lobe damage rarely has any insight into it—how can the ’I’ know that it is changed? It has nothing to compare itself with.”28 This is sometimes referred to as the frontal lobe paradox. The paradox is that people with frontal lobe damage may have significant difficulties with everyday tasks such as cooking or organizing their finances, but because their metacognition is impaired, they are unable to recognize that they need a helping hand. Without metacognition, we may lose the capacity to understand what we have lost. The connection between our self-impression and the reality of our behavior—the reality seen by our family and friends—becomes weakened.29

Anosognosia is also common in various forms of dementia. Take the following fictional example: Mary is seventy-six years old and lives alone in a small town in Connecticut. Each morning, she walks to the nearby shops to pick up her groceries, a daily routine that until recently she carried off without a hitch. But then she begins to find herself arriving at the shop without knowing what she came for. She becomes frustrated that she didn’t bother to write a list. Her daughter notices other lapses, such as forgetting the names of her grandchildren. While it is clear to her doctors that Mary has Alzheimer’s disease, she, like millions of others with this devastating condition, is not aware of the havoc it is wreaking on her memory. In a cruel double hit, the disease has attacked not only brain regions supporting memory but also regions involved in metacognition. These impairments in metacognition have substantial consequences, such as a reluctance to seek help or take medication, and a failure to avoid situations (such as going out in the car) in which memory failure can be dangerous—all of which increase the risk of harm and the burden on families and caregivers. However, despite its significant clinical importance, metacognition is not typically part of a standard neuropsychological assessment. Aside from a handful of pioneering studies, a lack of metacognition is still considered only an anecdotal feature of dementia, acutely understood by clinicians but not routinely quantified or investigated as a consequence of the disease.30

It is not difficult to see how these gradual alterations in metacognition might ultimately lead to a complete loss of connection with reality. If I am no longer able to track whether my memory or perception is accurate, it may be hard to distinguish things that are real from things that I have just imagined. In fact, this kind of failure to discern reality from imagination is something we all experience. One of my earliest memories from my childhood is of being at the zoo with my grandparents, watching the elephants. This might be a true memory; but, equally, because I have heard over the years that my grandparents took me to the zoo, it could be an imagined memory that I now take as being real. Being able to tell the difference requires me to ask something about the nature of another cognitive process—in this case, whether my memory is likely to be accurate or inaccurate. The same kind of cognitive processes that I use to second-guess myself and question my memories or perceptions—the processes that underpin metacognition—are also likely to contribute to an ability to know what is real and what is not.31

Some of the most debilitating of these cases are encountered in schizophrenia. Schizophrenia is a common disorder (with a lifetime prevalence of around 1 percent) with an onset in early adulthood, and one that can lead to a profound disconnection from reality. Patients often suffer from hallucinations and delusions (such as believing that their thoughts are being controlled by an external source). Psychiatric disorders such as schizophrenia are also associated with changes in the brain. We can’t necessarily pick up these changes on an MRI scan in the same way as we can a large stroke, but we can see more subtle and widespread changes in how the brain is wired up and the chemistry that allows different regions to communicate with each other. If this rewiring affects the long-range connections between the association cortex and other areas of the brain, a loss of self-awareness and contact with reality may ensue.

My former PhD adviser Chris Frith has developed an influential theory of schizophrenia that focuses on the deficit in self-awareness as being a root cause of many symptoms of psychosis. The core idea is that if we are unable to predict what we will do next, then we may reasonably infer that our actions and thoughts are being controlled by some external, alien force (sometimes literally). To the extent to which metacognition and mindreading share a common basis, this distortion in metacognitive modeling may also extend to others—making it more likely that patients with delusions come to believe that other people are intending to communicate with them or mean them harm.32

Psychologists have devised ingenious experiments for testing how we distinguish reality from imagination in the lab. In one experiment, people were given well-known phrases such as “Laurel and Hardy” and asked to read them aloud. Some of the phrases were incomplete—such as “Romeo and ?”—and in these cases they were asked to fill in the second word themselves, saying “Romeo and Juliet.” In another condition, the subjects simply listened to another person reading the sentences. Later in the experiment, they were asked questions about whether they had seen or imagined the second word in the phrase. In general, people are good at making these judgments but not perfect—sometimes they thought they had perceived something when they had actually imagined it, and vice versa. When people answer questions about whether something was perceived or imagined, they activate the frontopolar cortex—the region of the brain that we have found to be consistently involved in metacognition. Variation in the structure of the PFC also predicts reality-monitoring ability, and similar neural markers are altered in people with schizophrenia.33

Understanding failures of self-awareness as being due to physical damage to the brain circuitry for metacognition provides us with a gentler, more humane perspective on cases in which patients begin to lose contact with reality. Rather than blaming the person for being unable to see what has changed in their lives, we can instead view the loss of metacognition as a consequence of the disease. Therapies and treatments targeted at metacognitive function may prove helpful in restoring or modulating self-awareness. Recent attempts at developing metacognitive therapies have focused on gradually sowing the seeds of doubt in overconfident or delusional convictions and encouraging self-reflection. Clinical trials of one such approach have found a small but consistent effect in reducing delusions, and it has become a recommended treatment for schizophrenia in Germany and Australia.34

Cultivating Self-Awareness

We have seen that self-awareness is both grounded in brain function and affected by our social environment. While the initial building blocks of metacognition (the machinery for self-monitoring and error detection) may be present early in life, fully-fledged self-awareness continues to develop long into adolescence, and, just like the ability to understand the mental states of others, is shaped by our culture and upbringing. All these developmental changes are accompanied by extended changes to the structure and function of the prefrontal and parietal network that is critical for adult metacognition.

By the time we reach adulthood, most of us have acquired a reasonably secure capacity for knowing ourselves. We have also seen, though, that metacognition varies widely across individuals. Some people might perform well at a task but have limited insight into how well they are doing. Others might have an acute awareness of whether they are getting things right or wrong, even (or especially) when they are performing poorly. Because our self-awareness acts like an internal signal of whether we think we are making errors, it is easy to see how even subtle distortions in metacognition may lead to persistent under- or overconfidence and contribute to anxiety and stress about performance.

The good news is that metacognition is not set in stone. LM’s case, for instance, had an unexpected and encouraging resolution. In one examination session, Dr. Fotopoulou used a video camera to record her conversations. At a later date, she asked LM to watch the tape. A remarkable change occurred:

As soon as the video stopped, LM immediately and spontaneously commented: “I have not been very realistic.”

EXAMINER (AF): “What do you mean?”

LM: “I have not been realistic about my left side not being able to move at all.”

AF: “What do you think now?”

LM: “I cannot move at all.”

AF: “What made you change your mind?”

LM: “The video. I did not realize I looked like this.”

This exchange took place in the space of only a few minutes, but LM’s restored insight was still intact six months later. Seeing new information about her own body was sufficient to trigger a sudden transformation in self-awareness. This is, of course, only a single case report, and the dynamics of losing and gaining insight are likely to vary widely across different disorders and different people. But it contains an important lesson: metacognition is not fixed. Just as LM was able to recalibrate her view of herself, our own more subtle cases of metacognitive failure can also be shaped and improved. A good place to start is by making sure we are aware of the situations in which self-awareness might fail. Let’s turn to these next.