The Power of Reflection
It is the capacity to self-monitor, to subject the brain’s patterns of reactions to yet another round (or two or three or seven rounds) of pattern discernment, that gives minds their breakthrough powers.
—DANIEL DENNETT, From Bacteria to Bach and Back
When we set out to acquire a new skill such as playing tennis or learning to drive, we are often painfully aware of the minute details of our actions and why they might be going wrong. Metacognition is very useful in these early phases, as it tells us the potential reasons why we might have failed. It helps us diagnose why we hit the ball out of court and how to adjust our swing to make this less likely next time around. Rather than just registering the error and blindly trying something different, if we have a model of our performance, we can explain why we failed and know which part of the process to alter. Once a skill becomes well practiced, this kind of self-awareness is sometimes superfluous and often absent. In comparison to when we were student drivers, thoughts about how to drive are usually the last thing on our minds when we jump in the car to head off to work.1
The psychologist Sian Beilock aimed to quantify this relationship between skill and self-awareness in a landmark study of expert golfers. She recruited forty-eight Michigan State undergraduates, some of whom were intercollegiate golf stars and others golf novices. Each student was asked to make a series of “easy” 1.5-meter putts to a target marked on an indoor putting green. The golf team members showed superior putting ability, as expected, but also gave less detailed descriptions of the steps that they were going through during a particular putt, what Beilock called “expertise-induced amnesia.” It seemed they were putting on autopilot and couldn’t explain what they had just done. But when the experts were asked to use a novelty putter (one with an S-shaped bend and weights attached), they began to attend to their performance and gave just as detailed descriptions of what they were doing as the novices.
Beilock suggested that attention to our performance benefits novices in the earlier stages of learning but becomes counterproductive as movements become more practiced and routine. In support of this idea, she found that when expert golfers were instructed to monitor a series of beeps on a tape recorder while putting instead of attending to their swing, they became more accurate in their performance. Conversely, choking under pressure may be a case of the pilot interfering too much with the autopilot.2
This also suggests an explanation of why the best athletes don’t always make the best coaches. To teach someone else how to swing a golf club, you first need to be able to explain how you swing a golf club. But if you have already become an expert golfer, then this is exactly the kind of knowledge we would expect you to have lost some time ago. Players and fans often want a former champion at the helm, as it’s assumed that, having won titles and matches themselves, they must be able to impart the relevant know-how to others. And yet the history of sport is littered with examples of star athletes who faltered once they transitioned into coaching.
The fates of two different managers associated with my home football team, Manchester United, help illustrate this point. In the 1990s, Gary Neville was part of the famous “class of ’92”—one of the most decorated teams in the club’s history. But his transition to management, at the storied Spanish club Valencia, was less successful. During his short time in charge, Valencia were beaten 7–0 by rivals Barcelona, crashed out of the Champions League, and became embroiled in a relegation battle in their domestic league. In contrast, José Mourinho played fewer than one hundred games in the Portuguese league. Instead, he studied sports science, taught at schools, and worked his way up the coaching ladder before going on to become one of the most successful managers in the world, winning league titles in Portugal, England, Italy, and Spain before a brief (and less successful) stint with United in 2016. As the sports scientists Steven Rynne and Chris Cushion point out, “Coaches who did not have a career as a player were able to develop coaching skills in ways former champions did not have the time to engage with—because they were busy maximising their athletic performances.” When you are focused on becoming a world-class player, the self-awareness needed to teach and coach the same skills in others may suffer.3
Taken to an extreme, if a skill is always learned through extensive practice, rather than explicit teaching, it is possible to get caught in a loop in which teaching is not just inefficient but impossible. One of the strangest examples of the cultural absence of metacognitive knowledge is the world of chicken sexing. To churn out eggs commercially, it is important to identify female chicks as early as possible to avoid resources being diverted to unproductive males. The snag is that differentiating features, such as feather color, only emerge at around five to six weeks of age. It used to be thought that accurate sexing before this point was impossible. This all changed in the 1920s, when Japanese farmers realized that the skill of early sexing could be taught via trial-and-error learning. They opened the Zen-Nippon Chick Sexing School, offering two-year courses on how to sort the chicks by becoming sensitive to almost imperceptible differences in the anatomy of their rear openings (known as vents), turning Japan into a hotbed of chick-sexing expertise. Japanese sexers rapidly became sought after in the United States, and in 1935 one visitor wowed agricultural students by sorting 1,400 chicks in an hour with 98 percent accuracy.
While a novice will start out by guessing, expert sexers are eventually able to “see” the sex by responding to subtle differences in the pattern of the vents. But despite their near-perfect skill levels, chick sexers’ metacognition is generally poor. As the cognitive scientist Richard Horsey explains, “If you ask the expert chicken sexers themselves, they’ll tell you that in many cases they have no idea how they make their decisions.” Apprentice chick sexers must instead learn by watching their masters, gradually picking up signs of how to sort the birds. The best chick sexers in the world are like those star footballers who never made it as coaches; they can perform at the highest level, without being able to tell others how.4
Another example of skill without explainability is found in a strange neurological condition known as blindsight. It was first discovered in 1917 by George Riddoch, a medic in the Royal Army Medical Corps tasked with examining soldiers who had suffered gunshot wounds to the head. Soldiers with damage to the occipital lobe had cortical blindness—their eyes were still working, but regions of the visual system that received input from the eyes were damaged. However, during careful testing, Riddoch found that a few of the patients in his sample were able to detect moving objects in their otherwise “blind” field of view. Some residual processing capacity remained, but awareness was absent.
The Oxford psychologist Lawrence Weiskrantz followed up on Riddoch’s work by intensively studying a patient known as “DB” at the National Hospital for Neurology in central London, an imposing redbrick building opposite our lab at UCL. DB’s occipital cortex on the right side had been removed during surgery on a brain tumor, and he was cortically blind on the left side of space as a result. But when Weiskrantz asked DB to guess whether a visual stimulus was presented at location A or B, he could perform well above chance. Some information was getting into his brain, allowing him to make a series of correct guesses. DB was unable to explain how he was doing so well and was unaware he had access to any visual information. To him, it felt like he was just blind. Recent studies using modern anatomical tracing techniques have shown that blindsight is supported by information from the eyes traveling via a parallel and evolutionarily older pathway in the brain stem. This pathway appears able to make decisions about simple stimuli without the cortex getting involved.5
At first glance, blindsight seems like a disorder of vision. After all, it starts with the visual cortex at the back of the brain being damaged. And it is indeed the case that blindsight patients feel as though they are blind; they don’t report having conscious visual experience of the information coming into their eyes. And yet, one of the hallmarks of blindsight is a lack of explainability, an inability for the normal machinery of self-awareness to gain access to the information being used to make guesses about the visual stimuli. As a result, blindsight patients’ metacognition about their visual decisions is typically low or absent.6
These studies tell us two things. First, they reinforce the idea that self-awareness plays a central role in being able to teach things to others. If I cannot know how I am performing a task, I will make a poor coach. They also highlight how metacognition underpins our ability to explain what we are doing and why. In the remainder of this chapter, we are going to focus on this subtle but foundational role of self-awareness in constructing a narrative about our behavior—and, in turn, providing the bedrock for our societal notions of autonomy and responsibility.
Imagine that it’s a Saturday morning and you are trawling through your local supermarket. At the end of an aisle, a smiling shop assistant is standing behind a jam-tasting stall. You say hello and dip a plastic spoon into each of the two jars. The assistant asks you: Which one do you prefer? You think about it briefly, pointing to the jam on the left that tasted a bit like grapefruit. She gives you another taste of the chosen pot, and asks you to explain why you like it. You describe the balance of fruit and sweetness, and think to yourself that you might even pick up a jar to take home. You’re just about to wander off to continue shopping when you’re stopped. The assistant informs you that this isn’t just another marketing ploy; it’s actually a live psychology experiment, and with your consent your data will be analyzed as part of the study. She explains that, using a sleight of hand familiar to stage magicians, you were actually given the opposite jar to the one you chose when asked to explain your choice. Many other people in the study responded just as you did. They went on to enthusiastically justify liking a jam that directly contradicted their original choice, with only around one-third of the 180 participants detecting the switch.7
This study was carried out by Lars Hall, Petter Johansson, and their colleagues at Lund University in Sweden. While choices about jam may seem trivial, their results have been replicated in other settings, from judging the attractiveness of faces to justifying political beliefs. Even if you indicated a clear preference for environmental policies on a political survey, being told you in fact gave the opposite opinion a few minutes earlier is enough to prompt a robust round of contradictory self-justification. This phenomenon, known as choice blindness, reveals that we often construct a narrative to explain why we chose what we did, even if this narrative is partly or entirely fiction.8
It is even possible to home in on the neural machinery that is involved in constructing narratives about our actions. In cases of severe epilepsy, a rare surgical operation is sometimes performed to separate the two hemispheres of the brain by cutting the large bundle of connections between them, known as the corpus callosum. Surprisingly, despite their brains being sliced in two, for most of these so-called split-brain patients the surgery is a success—the seizures are less likely to spread through the brain—and they do not feel noticeably different upon waking. But with careful laboratory testing, some peculiar aspects of the split-brain condition can be uncovered.
Michael Gazzaniga and Roger Sperry pioneered the study of split-brain syndrome in California in the 1960s. Gazzaniga developed a test that took advantage of how the eyes are wired up to the brain: the left half of our visual field goes into the right hemisphere, whereas the right half is processed by the left hemisphere. In intact brains, whatever information the left hemisphere receives is rapidly transferred to the right via the corpus callosum and vice versa (which is partly why the idea of left- and right-brained processing is a myth). But in split-brain patients, it is possible to send in a stimulus that remains sequestered in one hemisphere. When this is done, something remarkable happens. Because in most people the ability for language depends on neural machinery in the left hemisphere, a stimulus can be flashed on the left side of space (and processed by the right brain) and the patient will deny seeing anything. But because the right hemisphere can control the left hand, it is still possible for the patient to signal what he has seen by drawing a picture or pressing a button. This can lead to some odd phenomena. For instance, when the instruction “walk” was flashed to the right hemisphere, one patient immediately got up and left the room. When asked why, he responded that he felt like getting a drink. It seems that the left hemisphere was tasked with attempting to make sense of what the patient was doing, but without access to the true reason, which remained confined to the right hemisphere. Based on data such as this, Gazzaniga refers to the left hemisphere as the “interpreter.”9
This construction of a self-narrative or running commentary about our behavior has close links with the neural machinery involved in metacognition—specifically, the cortical midline structures involved in self-reflection and autobiographical memory that we encountered in Chapter 3. In fact, the origins of the word narrative are from the Latin narrare (to tell), which comes from the Indo-European root gnarus (to know). Constructing a self-narrative shares many characteristics with the construction of self-knowledge. Patients with frontal lobe damage—the same patients who, as we saw in Chapter 4, have problems with metacognition—also often show strange forms of self-narrative, making up stories about why they are in the hospital. One patient who had an injury to the anterior communicating artery, running through the frontal lobe, confidently claimed that his hospital pajamas were only temporary and that he was soon planning to change into his work clothes. One way of making sense of these confabulations is that they are due to impaired self-monitoring of the products of memory, making it difficult to separate reality and imagination.10
These narrative illusions may even filter down to our sense of control or agency over our actions. In October 2016, Karen Penafiel, executive director of the American trade association National Elevator Industry, Inc., caused a stir by announcing that the “close door” feature in most elevators had not been operational for many years. Legislation in the early 1990s required elevator doors to remain open long enough for anyone with crutches or a wheelchair to get on board, and since then it’s been impossible to make them close faster. This wouldn’t have been much of a news story, aside from the fact that we still think these buttons make things happen. The UK’s Sun newspaper screamed with the headline: “You Know the ‘Close Door’ Button in a Lift? It Turns Out They’re FAKE.” We feel like we have control over the door closing, and yet we don’t. Our sense of agency is at odds with reality. The Harvard psychologist Daniel Wegner explained this phenomenon as follows:
The feeling of consciously willing our actions… is not a direct readout of such scientifically verifiable will power. Rather, it is the result of a mental system whereby each of us estimates moment-to-moment the role that our minds play in our actions. If the empirical will were the measured causal influence of an automobile’s engine on its speed, in other words, the phenomenal will might best be understood as the speedometer reading. And as many of us have tried to explain to at least one police officer, speedometer readings can be wrong.11
A clever experiment by Wegner and Thalia Wheatley set out to find the source of these illusions of agency. Two people sat at a single computer screen on which a series of small objects (such as a toy dinosaur or car) was displayed. They were both asked to place their hands on a flat board that was stuck on top of a regular computer mouse, allowing them to move the cursor together. Through headphones, the participants listened to separate audio tracks while continuing to move the mouse cursor around the screen. If a burst of music was heard, each participant was instructed to stop the cursor (a computerized version of musical chairs). They then rated how much control they felt over making the cursor stop on the screen.
Unbeknownst to the participant, however, there was a trick involved. The other person was a confederate, acting on the instructions of the experimenter. In some of the trials, the confederate heard instructions to move to a particular object on the screen, slow down, and stop. On these very same trials the participant heard irrelevant words through the headphones that highlighted the object they happened to be moving toward (for example, the word “swan” when the confederate was instructed to move toward the swan). Remarkably, on these trials, participants often felt more in control of making the cursor stop, and this sense of responsibility was stronger the more recently the object had been primed. In other words, just thinking about a particular goal can cause us to feel in control of getting there, even when this feeling is an illusion. Other experiments have shown that people feel more agency when their actions are made more quickly and fluently, or when the consequences of their actions are made more predictable (such as a beep always following a button press). This mimics the case in most elevators: because the doors normally do close shortly after we press the “close door” button, we start to feel a sense of control over making this happen.12
As suggested by the fact that the left (rather than the right) hemisphere plays the role of interpreter in Gazzaniga’s experiments, another powerful influence on self-narratives is language. From an early age, children begin to talk to themselves, first out loud, and then in their heads. Language—whether spoken or internalized—provides us with a rich set of tools for thinking that allows the formation of recursive beliefs (for instance: I think that I am getting ill, but it could be that I am stressed about work). This recursive aspect of language supercharges our metacognition, allowing us to create, on the fly, brand new thoughts about ourselves.13
But there is a problem with this arrangement. It can mean that what we think we are doing (our self-narratives) can begin to slowly drift away from what we are actually doing (our behavior). Even without a split brain, I might construct a narrative about my life that is subtly at odds with reality. For instance, I might aspire to get up early to write, head in to the lab to run a groundbreaking experiment, then return home to play with my son and have dinner with my wife, all before taking a long weekend to go sailing. This is an appealing narrative. But the reality is that I sometimes oversleep, spend most of the day answering email, and become grumpy in the evening due to lack of progress on all these fronts, leading me to skip weekends away to write. And so on.
Another way of appreciating this point is that our narratives need to be reasonably accurate to make sense of our lives, even if they may in part be based on confabulation. When the self-narrative drifts too far from reality, it becomes impossible to hold onto. If I had a narrative that I was an Olympic-level sailor, then I would need to construct a more extensive set of (false) beliefs about why I haven’t been picked for the team (perhaps they haven’t spotted me yet) or why I’m not winning races at my local club (perhaps I always pick the slow boat). This is a feature of the delusions associated with schizophrenia. But as long as our narratives broadly cohere with the facts, they become a useful shorthand for our aspirations, hopes, and dreams.14
A higher-level narrative about what we are doing and why can even provide the scaffold for our notions of autonomy and responsibility. The philosopher Harry Frankfurt suggested that humans have at least two levels of desires. Self-knowledge about our preferences is a higher-order desire that either endorses or goes against our first-order motives, similar to the notion of confidence about value-based decisions we encountered in Chapter 7. Frankfurt proposed that when our second-order and first-order desires match up, we experience heightened autonomy and free will. He uses the example of someone addicted to drugs who wants to stop. They want to not want the drugs, so their higher-order desire (wanting to give up) is at odds with their first-order desire (wanting the drugs). We intuitively would sympathize with such a person as struggling with themselves and perhaps think that they are making the choice to take drugs less of their own free will than another person who enthusiastically endorses their own drug taking.15
When second-order and first-order desires roughly line up—when our narratives are accurate—we end up wanting what we choose and choosing what we want. Effective metacognition about our wants and desires allows us to take steps to ensure that such matches are more likely to happen. For instance, while writing this chapter, I have already tried to check Twitter a couple of times, but my browser’s blocking software stops me from being sidetracked by mindless scrolling. My higher-order desire is to get the book written, but I recognize that I have a conflicting first-order desire to check social media. I can therefore anticipate that I might end up opening Twitter and take advance steps (installing blocking software) to ensure that my actions stay true to my higher-order desires—that I end up wanting what I choose.16
The upshot of all this is an intimate link between self-knowledge and autonomy. As the author Al Pittampalli sums up in his book Persuadable, “The lesson here is this: Making the choice that matches your interests and values at the highest level of reflection, regardless of external influence and norms, is the true mark of self-determination. This is what’s known as autonomy.”17 The idea of autonomy as a match between higher-order and first-order desires can seem lofty and philosophical. But it has some profound consequences. First, it suggests that our feeling of being in charge of our lives is a construction—a narrative that is built up at a metacognitive level from multiple sources. Second, it suggests that a capacity for metacognition may be important for determining whether we are to blame for our actions.
We can appreciate this close relationship between self-awareness and autonomy by examining cases in which the idea of responsibility is not only taken seriously but also clearly defined, as in the field of criminal law. A central tenet of the Western legal system is the concept of mens rea, or guilty mind. In the United States, the definition of mens rea has been somewhat standardized with the introduction of the Model Penal Code (MPC), which was developed in the 1960s to help streamline this complex aspect of the law. The MPC helpfully defines four distinct levels of culpability or blame:
• Purposely—one’s conscious object is to engage in conduct of that nature
• Knowingly—one has awareness that one’s conduct will cause such a result
• Recklessly—one has conscious disregard of a substantial and unjustifiable risk
• Negligently—one should be aware of a substantial and unjustifiable risk
Notice the terms related to self-awareness (“conscious,” “awareness,” “aware”), which strikingly appear in every definition. It’s clear that the extent to which we have awareness of our actions is central to legal notions of responsibility and blame. If we are unaware of what we are doing, then we may sometimes be excused even of the most serious of crimes, or at the very least be found only negligent.
A tragic example is provided by a case of night terrors. In the summer of 2008, Brian Thomas was on holiday with his wife in Aberporth, in western Wales, when he had a vivid nightmare. He recalled thinking he was fighting off an intruder in their caravan, perhaps one of the kids who had been disturbing his sleep by revving motorbikes outside. In reality, he was gradually strangling his wife to death. He awoke from his nightmare into a living nightmare, and made a 999 call to tell the operator that he was stunned and horrified by what had happened, and entirely unaware of what he had done.
Crimes committed during sleep are rare, thankfully for both their perpetrators and society at large. Yet they provide a striking example of the potential to carry out complex actions while remaining unaware of what we are doing. In Brian Thomas’s case, expert witnesses agreed that he suffered from a sleep disorder known as pavor nocturnus, or night terrors, which affects around 1 percent of adults and 6 percent of children. The court was persuaded that his sleep disorder amounted to “automatism,” a complete defense under UK law that denies even the lowest degree of mens rea. After a short trial, the prosecution simply withdrew their case.18
This pivotal role played by self-awareness in Western legal systems seems sensible. After all, we have already seen that constructing a narrative about our behavior is central to personal autonomy. But it also presents a conundrum. Metacognition is fragile and prone to illusions, and can be led astray by the influence of others. If our sense of agency is just a construction, created on the fly, then how can we hold people responsible for anything? How can we reconcile the tension between metacognition as an imperfect monitor and explainer of behavior with its critical role in signaling responsibility for our actions?
I think there are two responses to this challenge. The first is that we needn’t worry, because our sense of agency over our actions is accurate enough for most purposes. Think of the last time that you excused a mistake with the comment, “Sorry, I just wasn’t thinking.” I suspect that your friends and family took you at your word, rather than indignantly complaining: “How can you possibly know? Introspection is a mental fiction!” And in most cases, they would be right to trust in your self-knowledge. If you really were “not thinking” when you forgot to meet a friend for lunch, then it is useful for others to know that it was not your intention to miss the appointment and that you can be counted on to do better in future. In many, even most, cases, metacognition works seamlessly, and we do have an accurate sense of what we are doing and why we are doing it. We should be in awe that the brain can do this at all.
The second response is to recognize that self-awareness is only a useful marker of responsibility in a culture that agrees that this is the case. Our notions of autonomy, like our legal systems, are formed out of social exchange. Responsibility, then, is rather like money. Money has value only because we collectively agree that it does. Similarly, because we collectively agree that self-awareness is a useful marker of a particular mode of decision-making, it becomes central to our conception of autonomy. And, just like money, autonomy and responsibility are ultimately creations of the human mind—creations that depend on our ability to build narratives about ourselves and each other. As with money, we can recognize this constructed version of responsibility while at the same time enjoying having it and using it.
A deeper implication of this tight link between self-awareness and responsibility is that if the former suffers, the latter might also become weakened. Such questions are becoming increasingly urgent in an aging population, which may suffer from diseases such as dementia that attack the neural foundations of self-awareness. Western democracies are grappling with how to strike the balance between preserving autonomy and providing compassionate support in these cases. Recently enacted laws, such as the UK’s Mental Capacity Act, codify when the state should take charge of a person’s affairs when they are no longer able to make decisions for themselves due to a psychiatric or neurological disorder. Other proposals, such as that put forward in the United Nations Convention on the Rights of Persons with Disabilities, hold that liberty should be maintained at all costs. At the heart of this battle is a fight for our concept of autonomy: Under what circumstances can and should our friends, family, or government step in to make decisions on our behalf?19
While the law in this area is complex, it is telling that one commonly cited factor in capacity cases is that the patient lacked insight or self-awareness. This implies that systematic changes in our own self-awareness may affect our chances of continuing to create a narrative and hold autonomy over our lives. The potential for these changes might be more widespread than we think, arriving in the form of new technology, drugs, or social structures. In the final two chapters of this book, we are going to explore what the future holds for human self-awareness—from the need to begin to coordinate and collaborate with intelligent machines, to the promise of technology for supercharging our ability to know ourselves.