Know Thyself: The Science of Self-Awareness - Stephen M Fleming 2021

Knowing Me, Knowing You
Building Minds That Know Themselves

Just as our picture of the physical world is a fantasy constrained by sensory signals, so our picture of the mental world, of our own or of others, is a fantasy constrained by sensory signals about what we, and they, are doing and saying.

—CHRIS FRITH, Making Up the Mind

Unfortunately, we cannot go back in time and measure self-awareness in our now-extinct ancestors. But one elegant story about the origins of self-awareness goes like this: At some point in our evolutionary history, humans found it important to keep track of what other people were thinking, feeling, and doing. Psychologists refer to the skills needed to think through problems such as “Sheila knows that John knows that Janet doesn’t know about the extra food” as theory of mind, or mindreading for short. After mindreading was established, the story goes, at some point it gradually dawned on our ancestors that they could apply such tools to think about themselves.

This particular transition in human history may have occurred somewhere between seventy thousand and fifty thousand years ago, when the human mind was undergoing substantial cognitive changes. Discoveries of artifacts show that the wearing of jewelry such as bracelets and beads became more common around this time—suggesting that people began to care about and understand how they were perceived by others. The first cave art can also be dated to a similar era, emerging in concert in locations ranging from Chauvet in France to Sulawesi in Indonesia. These haunting images show stencils of the hands of the artist or lifelike and beautiful drawings of animals such as bison or pigs. It’s impossible to know for sure why they were created, but it is clear that these early human artists had some appreciation of how their paintings influenced other minds—whether the minds of other humans or the minds of the gods.1

The idea that there is a deep link between awareness of ourselves and awareness of others is often associated with the Oxford philosopher Gilbert Ryle. He argued that we self-reflect by applying the tools we use to understand other minds to ourselves: “The sorts of things that I can find out about myself are the same as the sorts of things that I can find out about other people, and the methods of finding them out are much the same.”2 Ryle’s proposal is neatly summarized by an old New Yorker cartoon in which a husband says to his wife, “How should I know what I’m thinking? I’m not a mind reader.”

I first encountered Ryle’s ideas after finishing my PhD at UCL. I had just moved to New York to start my postdoctoral fellowship under the guidance of Nathaniel Daw, an expert on computational models of the brain. I had two main goals for this postdoc: the first was to teach myself some more mathematics (having dropped the subject much too early, at the age of sixteen), and the second was to think about ways of building computational models of self-awareness. I initially intended to visit NYU for a year or two, but New York is a difficult city to leave and I kept extending my stay. This was lucky for two reasons: it gave Nathaniel and me the chance to figure out our model of metacognition (which, like most science worth doing, took much longer than expected), and it gave me a chance to meet my wife, who was working as a diplomat at the United Nations.

The model Nathaniel and I developed is known as the second-order model of metacognition. The idea is that when we self-reflect, we use the same computational machinery as when we think about other people, just with different inputs. If a second-order Rylean view of self-awareness is on the right track, then we should be able to discover things about how self-awareness works from studying social cognition. More specifically—how we think about other minds.3

Thinking About Minds

A defining feature of mindreading is that it is recursive—as in, “Keith believes that Karen thinks he wants her to buy the cinema tickets.” Each step in this recursion may be at odds with reality. For instance, Karen may not be thinking that at all, and even if she is, Keith may not want her to buy the tickets. The success or failure of mindreading often turns on this ability to represent the possibility that another person’s outlook on a situation might be at odds with our own. Sometimes this can be difficult to do, as captured by another New Yorker cartoon caption: “Of course I care about how you imagined I thought you perceived I wanted you to feel.”

Mismatches between what we think others are thinking and what they are actually thinking can be rich sources of comic misunderstanding. When US president Jimmy Carter gave a speech at a college in Japan in 1981, he was curious to know how the interpreter translated his joke, as it sounded much shorter than it should have been and the audience laughed much harder than they usually did back home. After much cajoling, the interpreter finally admitted that he had simply said, “President Carter told a funny story. Everyone must laugh.”

As adults, we usually find mindreading effortless; we don’t have to churn through recursive calculations of who knows what. Instead, in regular conversation, we have a range of shared assumptions about what is going on in each other’s minds—what we know, and what others know. I can text my wife and say, “I’m on my way,” and she will know that by this I mean I’m on my way to collect our son from day care, not on my way home, to the zoo, or to Mars. But this fluency with reading minds is not something we are born with, nor is it something that is guaranteed to emerge in development.

In a classic experiment designed to test the ability for mindreading, children were told stories such as the following: Maxi has put his chocolate in the cupboard. While Maxi is away, his mother moves the chocolate from the cupboard to the drawer. When Maxi comes back, where will he look for the chocolate? Only if children are able to represent the fact that Maxi thinks the chocolate is still in the cupboard (known as a false belief, because it conflicts with reality) can they answer correctly. Until the age of four, children often fail this test, saying that Maxi will look for the chocolate where it actually is, rather than where he thinks it is. Children with autism tend to struggle with this kind of false-belief test, suggesting that they have problems with smoothly tracking the mental states of other people. Problems with mindreading in autism can be quite specific. In one experiment, autistic children were just as good as, if not better than, children of a similar age at ordering pictures that implied a physical sequence (such as a rock rolling down a hill) but were selectively impaired at ordering pictures that required an understanding of changes in mental states (such as a girl being surprised that someone has moved her teddy bear).4

A similar developmental profile is found in tests of children’s self-awareness. In one experiment from Simona Ghetti’s lab, three-, four-, and five-year-old children were first asked to memorize a sequence of drawings of objects, such as a boat, baby carriage, and broom. They were then asked to pick, from pairs of drawings, which one they had seen before. After each pair, the children were asked to indicate their confidence by choosing a picture of another child who most matched how they were feeling: very unsure, a little unsure, or certain they were right. Each age group had a similar level of memory performance—they all tended to forget the same number of pictures. But their metacognition was strikingly different. Three-year-olds’ confidence ratings showed little difference between correct and incorrect decisions. Their ability to know whether they were right or wrong was poor. In contrast, the four- and five-year-olds showed good metacognition, and were also more likely to put high-confidence answers forward to get a prize. Just as adults taking a multiple-choice exam may elect to skip a question when they feel uncertain, by the time children reach four years old they are able to notice when they might be wrong and judiciously put those answers to one side.5
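A simple way to make this comparison concrete is to look at how far a child’s confidence separates their correct answers from their errors. The sketch below (in Python, with invented ratings on a 1 to 3 scale standing in for the picture-based confidence choices) computes that gap; a value near zero, like the three-year-olds’ pattern, means confidence carries little information about accuracy.

```python
import numpy as np

def confidence_accuracy_gap(confidence, correct):
    """Mean confidence on correct trials minus mean confidence on errors.
    A gap near zero indicates poor metacognitive sensitivity."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return confidence[correct].mean() - confidence[~correct].mean()

# Hypothetical ratings: 1 = very unsure, 2 = a little unsure, 3 = certain
# (these numbers are invented for illustration, not data from the study)
younger = confidence_accuracy_gap([3, 2, 3, 3, 2, 3], [1, 0, 1, 0, 1, 0])  # ~0.0
older   = confidence_accuracy_gap([3, 1, 3, 1, 3, 2], [1, 0, 1, 0, 1, 0])  # ~1.7
print(younger, older)
```

Researchers typically use more sophisticated measures that correct for overall confidence and task performance, but the underlying question is the same: does confidence go up when the answer is right and down when it is wrong?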

It is striking, then, that the ability to estimate whether someone else has a different view of the world—mindreading—emerges in children around the same time that they acquire explicit metacognition. Both of these abilities depend on being able to hold reality at arm’s length and recognizing when what we believe might be deviating from reality. In other words, to understand that we ourselves might be incorrect in our judgments about the world, we engage the same machinery used for recognizing other people’s false beliefs. To test this idea, in one study children were first presented with “trick” objects: a rock that turned out to be a sponge, or a box of Smarties that actually contained pencils. When asked what they thought the object was when they first perceived it, three-year-olds said that they knew all along that the rock was a sponge and that the Smarties box was full of pencils. But by the age of five, most children recognized that their first impression of the object was false—they successfully engaged in self-doubt. In another version of this experiment, children sat opposite each other and various boxes were placed on a table in between them. Each box contained a surprise object, such as a coin or a piece of chocolate, and one of the children was told what was in the box. Only half of three-year-olds correctly realized that they didn’t know what was in the box when they had not been told, whereas by the age of five all of the children were aware of their own ignorance.6

It is possible, of course, that all this is just a coincidence—that metacognition and mindreading are two distinct abilities that happen to develop at a similar rate. Or it could be that they are tightly intertwined in a virtuous cycle, with good metacognition facilitating better mindreading and vice versa. One way to test this hypothesis is to ask whether differences in mindreading ability early in childhood statistically predict later self-awareness. In one study, at least, this was found to be the case: mindreading ability at age four predicted later self-awareness, even when controlling for differences in language development. Another way to test this hypothesis is to ask whether the two abilities—metacognition and mindreading—interfere with each other, which would indicate that they rely on a common mental resource. Recent data is consistent with this prediction: thinking about what someone else is feeling disrupts the ability to reflect accurately on our own task performance, while leaving task performance itself and overall confidence unaffected. This is exactly what we would expect if awareness of ourselves and others depends on common neural machinery.7

Self-awareness does not just pop out of nowhere, though. As we saw in the previous chapter, laboratory experiments show that the building blocks of self-monitoring are already in place in infants as young as twelve months old. By monitoring eye movements, it is possible to detect telltale signs that toddlers under the age of three are also sensitive to false beliefs. From around two years of age, children begin to evaluate their behavior against a set of standards or rules set by parents and teachers, and they show self-conscious emotions such as guilt and embarrassment when they fall short and pride when they succeed. This connection between metacognition and self-conscious emotions was anticipated by Darwin, who pointed out that “thinking what others think of us… excites a blush.”8

An understanding of mirrors and fluency with language are also likely to contribute to the emergence of self-awareness in childhood. In the famous mirror test, a mark is surreptitiously placed somewhere the test subject can see only in a mirror, such as on their forehead; if they make movements to try to rub it off, this is evidence that they recognize the person in the mirror as themselves, rather than someone else. Children tend to pass this test by the age of two, suggesting that they have begun to understand the existence of their own bodies. Being able to recognize themselves in a mirror also predicts how often children use personal pronouns (saying things like “I,” “me,” “my,” or “mine”), suggesting that awareness of our bodies is an important precursor to becoming more generally self-aware.9

Linguistic fluency also acts as a booster for recursive thought. The mental acrobatics required for both metacognition and mindreading share the same set of linguistic tools: thinking “I believe x” or “She believes x.” Again, it is likely to be no coincidence that the words we use to talk about mental states—such as “believe,” “think,” “forget,” and “remember”—arise later in childhood than words for bodily states (“hungry!”), and that this developmental shift occurs in English, French, and German around the same time as children acquire an understanding of other minds. Just like language, mindreading is recursive: it involves embedding what you believe to be someone else’s state of mind within your own.10

Watching self-awareness emerge in your own child or grandchild can be a magical experience. My son Finn was born halfway through writing this book, and around the time I was finalizing the proofs, when he was eighteen months old, we moved to a new apartment with a full-length mirror in the hallway. One afternoon, when we were getting ready to go out to the park, I quietly watched as he began testing his reflection in the mirror, gradually moving his head from side to side. He then slowly put one hand into his mouth while watching his reflection (a classic example of “mirror touch”) and a smile broke out across his face as he turned to giggle at me.

There may even be a connection between the first glimmers of self-awareness and the playfulness of childhood. Initial evidence suggests that markers of self-awareness (such as mirror self-recognition and pronoun use) are associated with whether or not children engage in pretend play, such as using a banana as a telephone or creating an elaborate tea party for their teddy bears. It’s possible that the emergence of metacognition allows children to recognize the difference between beliefs and reality and create an imaginary world for themselves. There is a lovely connection here between the emergence of play in children and the broader role of metacognition and mindreading in our appreciation of theater and novels as adults. We never stop pretending—it is just that the focus of our pretense changes.11

Some, but not all, of these precursor states of self-awareness that we find in children can also be identified in other animals. Chimpanzees, dolphins, and an elephant at the Bronx Zoo in New York have been shown to pass the mirror test. Chimpanzees can also track what others see or do not see. For instance, they know that when someone is blindfolded, they cannot see the food. Dogs also have similarly sophisticated perspective-taking abilities: stealing food when a human experimenter is not looking, for instance, or choosing toys that are in their owner’s line of sight when asked to fetch. But only humans seem to be able to understand that another mind may hold a genuinely different view of the world to our own. For instance, if chimpanzee A sees a tasty snack being moved when chimpanzee B’s back is turned (the ape equivalent of the Maxi test), chimpanzee A does not seem to be able to use this information to their advantage to sneakily grab the food. This is indeed a harder computational problem to solve. To recognize that a belief might be false, we must juggle two distinct models of the world. Somehow, the human brain has figured out how to do this—and, in the process, gained an unusual ability to think about itself. In the remainder of this chapter, we are going to explore how the biology of the human brain enables this remarkable conjuring trick.12

Machinery for Self-Reflection

A variety of preserved brains can be seen at the Hunterian Museum in London, near the law courts of Lincoln’s Inn Fields. The museum is home to a marvelous collection of anatomical specimens amassed by John Hunter, a Scottish surgeon and scientist working at the height of the eighteenth-century Enlightenment. I first visited the Hunterian Museum shortly after starting my PhD in neuroscience at UCL. I was, of course, particularly interested in the brains of all kinds—human and animal—carefully preserved in made-to-measure jars and displayed in rooms surrounding an elegant spiral staircase. All these brains had helped their owners take in their surroundings, seek out food, and (if they were lucky) find themselves a mate. Before they were each immortalized in formaldehyde, their intricate networks of neurons fired electrical impulses to ensure their hosts lived to fight another day.

Each time I visited the Hunterian, I had an eerie feeling when looking at the human brains. On one level, I knew that they were just like any of the other animal brains on display: finely tuned information-processing devices. But it was hard to shake an almost religious feeling of reverence as I peered at the jars. Each and every one of them once knew that they were alive. What is it about the human brain that gives us these extra layers of recursion and allows us to begin to know ourselves? What is the magic ingredient? Is there a magic ingredient?

One clue comes from comparing the brains of humans and other animals. It is commonly assumed that humans have particularly large brains for our body size—and this is partly true, but not in the way that you might think. In fact, comparing brain and body size does not tell us much. It would be like concluding that a chip fitted in a laptop computer is more powerful than the same chip fitted into a desktop, just because the laptop has a smaller “body.” Brain-to-body ratios reveal little about whether the brains of different species—our own included—are similar or different.

Instead, the key to properly comparing the brains of different species lies in estimating the number of neurons in what neuroscientist Suzana Herculano-Houzel refers to as “brain soup.” By sticking the (dead!) brains of lots of different species in a blender, it is possible to plot the actual number of cells in a brain against the brain mass, enabling meaningful comparisons to be made.

After several painstaking studies of brains of all shapes and sizes, a fascinating pattern has begun to emerge. The number of neurons in primate brains (which include monkeys, apes such as chimpanzees, and humans) increases linearly with brain mass. If one monkey brain is twice as large as another, we can expect it to have twice as many neurons. But in rodents (such as rats and mice), the number of neurons increases more slowly and then begins to flatten off, in a relationship known as a power law. This means that to get a rodent brain with ten times the number of neurons, you need to make it forty times larger in mass. Rodents are much less efficient than primates at packing neurons into a given brain volume.13
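To see why the shape of these curves matters, it helps to write the two scaling relationships out. The display below is a rough worked version of the numbers quoted above; the rodent exponent of roughly 1.6 is simply inferred from the ten-to-forty figure in the text rather than taken from the original papers.

$$
\begin{aligned}
\text{Primates:}\quad & M_{\text{brain}} \propto N_{\text{neurons}} \;\Rightarrow\; 10\times \text{ the neurons costs } 10\times \text{ the mass},\\[4pt]
\text{Rodents:}\quad & M_{\text{brain}} \propto N_{\text{neurons}}^{\,b},\; b \approx 1.6 \;\Rightarrow\; 10\times \text{ the neurons costs } 10^{1.6} \approx 40\times \text{ the mass}.
\end{aligned}
$$

In other words, a primate pays a roughly fixed price in brain mass per extra neuron, whereas for a rodent each additional neuron becomes progressively more expensive.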

It’s important to put this result in the context of what we know about human evolution. Evolution is a process of branching, rather than a one-way progression from worse to better. We can think of evolution like a tree—we share with other animals a common ancestor toward the roots, but other groups of species branched off the trunk many millions of years ago and then continued to sprout subbranches, and subbranches of subbranches, and so on. This means that humans (Homo sapiens) are not at the “top” of the evolutionary tree—there is no top to speak of—and instead we just occupy one particular branch. It is all the more remarkable, therefore, that the same type of neuronal scaling law seen in rodents is found both in a group that diverged from the primate lineage around 105 million years ago (the afrotherians, which include the African elephant) and in a group that diverged much more recently (the artiodactyls, which include pigs and giraffes). Regardless of their position on the tree, it seems that primates are evolutionary outliers—but, relative to other primates, humans are not.14

What seems to pick primates out from the crowd is that they have unusually efficient ways of cramming more neurons into a given brain volume. In other words, although a cow and a chimpanzee might have brains of similar weight, we can expect the chimpanzee to have around twice the number of neurons. And, as our species is the proud owner of the biggest primate brain by mass, this creates an advantage when it comes to sheer number of neurons. The upshot is that what makes our brains special is that (a) we are primates, and (b) we have big heads!15

We do not yet know what this means. But, very roughly, it is likely that there is simply more processing power devoted to so-called higher-order functions—those that, like self-awareness, go above and beyond the maintenance of critical functions like homeostasis, perception, and action. We now know that there are large swaths of cortex in the human brain that are not easy to define as being sensory or motor, and are instead traditionally labeled as association cortex—a somewhat vague term that refers to the idea that these regions help associate or link up many different inputs and outputs.

Regardless of the terminology we favor, what is clear is that association cortex is particularly well developed in the human brain compared to other primates. For instance, if you examined different parts of the human prefrontal cortex, or PFC (which is part of the association cortex, located toward the front of the brain), under the microscope, you would sometimes find an extra layer of brain cells in the ribbonlike sheet of cortex known as a granular layer. We still don’t fully understand what this additional cell layer is doing, but it provides a useful anatomical landmark with which to compare the brains of different species. The granular portion of the PFC is considerably more folded and enlarged in humans compared to monkeys and does not exist at all in rodents. It is these regions of the association cortex—particularly the PFC—that seem particularly important for human self-awareness.16

Many of the experiments that we run in our laboratory are aimed at understanding how these parts of the human brain support self-awareness. If you were to volunteer at the Wellcome Centre for Human Neuroimaging, we would meet you in our colorful reception, decorated with images of different types of scanners at work, and then we would descend to the basement, where we have an array of large brain scanners, each in its own room. After filling in forms to ensure that you are safe to enter the scanning suite—magnetic resonance imaging (MRI) uses strong magnetic fields, so volunteers must have no metal on them—you would hop onto a scanner bed and see various instructions on a projector screen above your head. While the scanner whirs away, we would ask you a series of questions: Do you remember seeing this word? Which image do you think is brighter? Occasionally, we might also ask you to reflect on your decisions: How confident are you that you got the answer right?

MRI works by using strong magnetic fields and pulses of radio waves to pinpoint the location and type of tissue in the body. We can use one type of scan to create high-resolution three-dimensional pictures of the brains of volunteers in our experiments. By tweaking the settings of the scanner, rapid snapshots can also be taken every few seconds that track changes in blood oxygen levels in different parts of the brain (this is known as functional MRI, or fMRI). Because more vigorous neural firing uses up more oxygen, these changes in blood oxygen levels are useful markers of neural activity. The fMRI signal is very slow compared to the rapid firing of neurons, but, by applying statistical models to the signal, it is possible to reconstruct maps that highlight brain regions as being more or less active when people are doing particular tasks.
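To give a flavor of what such a statistical model looks like in practice, here is a minimal sketch of the standard general linear model (GLM) approach, using only simulated numbers: a record of when the task happened is convolved with a canonical hemodynamic response function and then regressed against a single voxel’s time series. Real analyses use dedicated neuroimaging packages and many additional regressors; everything below (scan timings, noise level, the voxel itself) is invented for illustration.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0        # time between scans, in seconds
n_scans = 150

# Canonical double-gamma hemodynamic response function (SPM-style shape)
hrf_t = np.arange(0, 32, TR)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6
hrf /= hrf.sum()

# Boxcar marking when the volunteer performs the task of interest
task = np.zeros(n_scans)
task[10:15] = task[60:65] = task[110:115] = 1

# Predicted BOLD signal: the slow, blurred response to each task block
regressor = np.convolve(task, hrf)[:n_scans]

# Simulated voxel time series: a scaled response buried in noise
rng = np.random.default_rng(0)
voxel = 2.5 * regressor + rng.normal(0, 0.5, n_scans)

# Fit the GLM: how strongly does this voxel follow the task regressor?
X = np.column_stack([regressor, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(f"Estimated task effect: {beta[0]:.2f}")   # close to the true value of 2.5
```

Repeating this kind of fit for every voxel, and contrasting conditions such as high- versus low-confidence judgments, is what produces the activation maps described next.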

If I put you in an fMRI scanner and asked you to think about yourself, it’s a safe bet that I would observe changes in activation in two key parts of the association cortex: the medial PFC and the medial parietal cortex (also known as the precuneus), which collectively are sometimes referred to as the cortical midline structures. These are shown in the image here, which was created using software that searches the literature for brain activation patterns that are consistent with a particular search term, which in this case was “self-referential.” Robust activation of the medial PFC is seen in experiments where people are asked to judge whether adjectives such as “kind” or “anxious” apply to either themselves or someone famous, such as the British queen. Retrieving memories about ourselves, such as imagining the last time we had a birthday party, also activates the same regions. Remarkably, and consistent with Ryle’s ideas of a common system supporting mindreading and self-awareness, the same brain regions are also engaged when we are thinking about other people. How closely these activity patterns match depends on how similar the other person is to ourselves.17

Brain imaging is a powerful tool, but it relies on correlating what someone is doing or thinking in the scanner with their pattern of neural activity. It cannot tell us whether a particular region or activity pattern is causally involved in a particular cognitive process. Instead, to probe causality, we can use stimulation techniques such as transcranial magnetic stimulation (TMS), which uses strong magnetic pulses to temporarily disrupt normal neural activity in a particular region of cortex. When TMS is applied to the parietal midline, it selectively affects how quickly people can identify an adjective as being relevant to themselves, suggesting that the normal brain processes in this region are important for self-reflection.18

Image

Medial surface activations obtained using the meta-analysis tool NeuroQuery in relation to the term “self-referential.”

(https://neuroquery.org, accessed September 2020.)

Damage to these networks can lead to isolated changes in self-awareness—we may literally lose the ability to know ourselves. The first hints that brain damage could lead to problems with metacognition came in the mid-1980s. Arthur Shimamura, then a postdoctoral researcher at the University of California, San Diego, was following up on the famous discovery of patient “HM,” who had become forever unable to form new memories after brain surgery originally carried out to cure his epilepsy. The surgery removed HM’s medial temporal lobe, an area of the brain that contains the hippocampus and is crucial for memory. Shimamura’s patients, like HM, had damage to the temporal lobe, and therefore it was unsurprising that many of them were also amnesic. What was surprising was that some of his patients were also unaware of having memory problems. In laboratory tests, they showed a striking deficit in metacognition: their confidence ratings failed to track whether their answers were right or wrong.

The subgroup of patients who showed this deficit in metacognition turned out to have Korsakoff’s syndrome, a condition linked to excessive alcohol use. Korsakoff’s patients often have damage not only to structures involved in memory storage, such as the temporal lobe, but also to the frontal lobe, which encompasses the PFC. Shimamura’s study was the first to indicate that the PFC is also important for metacognition.19

However, there was one concern about this striking result. All of Shimamura’s patients were amnesic, so perhaps their metacognitive deficit was somehow secondary to their memory problems. This illustrates a general concern we should keep in mind when interpreting scientific studies of self-awareness. If one group appears to have poorer metacognition than another, this is less interesting if they also show worse perception, memory, or decision-making, for instance. Their loss of metacognition, while real, may be a consequence of changes in other cognitive processes. But if we still find differences in metacognition when other aspects of task performance are well matched between groups or individuals, we can be more confident that we have isolated a change in self-awareness that cannot be explained by other factors.

To control for this potential confounding variable, Shimamura needed to find patients with impaired metacognition but intact memory. In a second paper published in 1989, he and his colleagues reported exactly this result. In a group of patients who had suffered damage to their PFC, memory was relatively intact but metacognition was impaired. It seemed that damage to one set of brain regions (such as the medial temporal lobes) could lead to memory deficits but leave metacognition intact, whereas damage to another set of regions (the frontal lobes) was associated with impaired metacognition but relatively intact memory. This is known as a double dissociation and is a rare finding in neuroscience. It elegantly demonstrates that self-awareness relies on distinct brain processes that may be selectively affected by damage or disease.20

Shimamura’s findings also helped shed light on a puzzling observation made around the same time by Thomas Nelson, one of the pioneers of metacognition research. Nelson was a keen mountaineer and combined his interests in climbing and psychology by testing his friends’ memory as they were ascending Mount Everest. While the extreme altitude did not affect the climbers’ ability to complete basic memory tests, it did affect their metacognition—they were less accurate at predicting whether they would know the answer or not. The amount of oxygen available at the summit of Everest (8,848 meters, or 29,029 feet) is around one-third of that at sea level, owing to the lower air pressure. Because a lack of oxygen is particularly damaging to the functions of the PFC, this may explain why the climbers temporarily showed similar characteristics to Shimamura’s patients.21

A few years later, the advent of functional brain imaging technology allowed this hypothesis to be directly tested by measuring the healthy brain at work. Yun-Ching Kao and her colleagues used fMRI to visualize changes in brain activity while volunteers were asked to remember a series of pictures such as a mountain scene or a room in a house. After viewing each picture, they were asked a simple question to tap into their metacognition: Will you remember this later on? Kao sorted the brain activations according to whether volunteers actually remembered each picture and whether they had predicted they would remember it. Being able to remember the picture was associated with increased activity in the temporal lobe, as expected. But the temporal lobe did not track people’s metacognition. Instead, metacognitive judgments were linked to activation in the medial PFC—activity here was higher when people predicted they would remember something, regardless of whether they actually did. The activation of this region was strongest in people with better metacognition. Imaging of the healthy brain thus supports the conclusion from Shimamura’s patient studies: the PFC is a crucial hub for self-awareness.22

By the time we reach adulthood, most of us are adept at reflecting on what we know and what others know. Recently, Anthony Vaccaro and I surveyed the accumulating literature on mindreading and metacognition and created a brain map that aggregated the patterns of activation reported across multiple papers. In general, metacognition tended to engage regions that were dorsal to (above) and posterior to (behind) the mindreading network. However, clear overlap between brain activations involved in metacognition and mindreading was observed in the ventral and anterior medial PFC. Thoughts about ourselves and others indeed seem to engage similar neural machinery, in line with a Rylean, second-order view of how we become self-aware.23

Image

Brain activations obtained in a meta-analysis of metacognition compared to brain activations related to the term “mentalizing,” from Neurosynth.

(Reproduced with permission from Vaccaro and Fleming, 2018.)

Breakthrough Powers of Recursion

We have already seen that animals share a range of precursors for self-awareness. The science of metacognition deals in shades of gray, rather than black or white. Other animals have the capacity for at least implicit metacognition: they can track confidence in their memories and decisions, and use these estimates to guide future behavior. It makes sense, then, that we can also identify neural correlates of confidence and metacognition in animal brains—for instance, patterns of neural activity in the frontal and parietal cortex of rodents and monkeys.24

Again, then, we have a picture in which self-awareness is a continuum, rather than an all-or-nothing phenomenon. Many of these precursors of self-awareness are seen in human infants. But it is also likely that adult humans have an unusual degree of self-awareness, all thanks to a sheer expansion of neural real estate in the association cortex, which, together with our fluency with language, provides a computational platform for building deep, recursive models of ourselves.25

We already encountered the idea that sensory and motor cortices are organized in a hierarchy, with some parts of the system closer to the input and others further up the processing chain. It is now thought that the association cortex also has a quasi-hierarchical organization to it. For instance, in the PFC, there is a gradient in which increasingly abstract associations are formed as you move forward through the brain. And the cortical midline system involved in self-awareness seems to be one of the most distant in terms of its connections to primary sensory and motor areas. It is likely no coincidence that this network is also reliably activated when people are quietly resting in the scanner. When we are doing nothing, we often turn our thoughts upon ourselves, trawling through our pasts and imagining our potential futures. The psychologist Endel Tulving referred to this aspect of metacognition as “autonoetic”—an awareness of ourselves as existing in memories from our past, in perceptions of the present, and in projections into the future.26

A link between metacognition and mindreading provides hints about the evolutionary driving forces behind humans acquiring a remarkable ability for self-awareness. Of course, much of this is speculation, as it is difficult to know how our mental lives were shaped by the distant past. But we can make some educated guesses. A rapid cortical expansion, thanks to the primate scaling rules, allowed humans to achieve an unprecedented number of cortical neurons. This was put to use in creating an ever more differentiated PFC and machinery that goes beyond the standard perception-action loop. But as Herculano-Houzel has pointed out, the human brain could not have expanded as it did without a radical increase in the calories available to fuel it. That fuel may have come from a virtuous cycle of social cooperation, allowing more sophisticated hunting and cooking, fueling further cortical expansion, which allowed even greater cooperation and even greater caloric gain. This positive feedback loop is likely to have placed a premium on the ability to coordinate and collaborate with others. We have already seen that metacognition provides a unique benefit in social situations, allowing us to share what is currently on our minds and pool our perceptual and cognitive resources. In turn, mindreading becomes important to convert simple, one-way utterances into a joint understanding of what others are thinking and feeling. Many other animals have an ability for self-monitoring. But only humans have the ability (and the need) to explicitly represent the contents of their own minds and the minds of others.27

Let’s recap our journey so far. We have seen how simple systems can estimate uncertainty and engage in self-monitoring. Many of these building blocks for metacognition can operate unconsciously, providing a suite of neural autopilots that are widely shared across the animal kingdom and present early in human development. Self-awareness continues to crystallize in toddlers, becoming established between the ages of three and four. But the emergence of self-awareness around the age of three is only the beginning of a lifetime of reflective thought. In the next chapter, we will see how a multitude of factors continue to buffet and shape the capacity for self-awareness throughout our adult lives. By harnessing these factors, we will also discover tools for deliberately boosting and shaping our powers of reflection.