Avoiding Self-Awareness Failure - Building Minds That Know Themselves

Know Thyself: The Science of Self-Awareness - Stephen M Fleming 2021


There are three Things extreamly hard, Steel, a Diamond, and to know one’s self.

—BENJAMIN FRANKLIN

We have almost reached the end of our tour through the neuroscience of self-awareness and the end of Part I of the book. We have seen that the fundamental features of brains—their sensitivity to uncertainty and their ability to self-monitor—provide a rich set of building blocks of self-awareness. We started with simple systems that take in information from the world and process it to solve the inverse problems inherent in perceiving and acting. A vast array of neural autopilots trigger continual adjustments to keep our actions on track. Much self-monitoring takes place unconsciously, without rising to the level of explicit self-awareness, and is likely to depend on a genetic starter kit that is present early in life. We also share many of these aspects of metacognition with other animals.

These building blocks form a starting point for understanding the emergence of fully-fledged self-awareness in humans. We saw that mutually reinforcing connections between social interaction, language, and an expansion of the capacity for recursive, deep, hierarchical models led the human brain to acquire a unique capacity for conscious metacognition—an awareness of our own minds. This form of metacognition develops slowly in childhood and, even as adults, is sensitive to changes in mental health, stress levels, and social and cultural environment.

In Part II of this book we are going to consider how this remarkable capacity for self-awareness supercharges the human mind. We will encounter its role in learning and education, in making complex decisions, in collaborating with others, and ultimately in taking responsibility for our actions. Before we dive in, though, I want to take a moment to extract three key lessons from the science of self-awareness.

Metacognition Can Be Misleading

First of all, it is important to distinguish between the capacity for metacognition and the accuracy of the self-awareness that results.

The capacity for metacognition becomes available to us after waking each morning. We can turn our thoughts inward and begin to think about ourselves. The accuracy of metacognition, on the other hand, refers to whether our reflective judgments tend to track our actual skills and abilities. We often have metacognitive accuracy in mind when we critique colleagues or friends for lacking self-awareness—as in, “Bill was completely unaware that he was dominating the meeting.” Implicit in this critique is the idea that if we were to actually ask Bill to reflect on whether he dominated the meeting, he would conclude he did not, even if the data said otherwise.

As we have seen, there are many cases in which metacognition may lead us astray and become decoupled from objective reality. We saw that sometimes metacognition can be fooled by devious experimenters inserting or scrubbing away our errors before “we” have noticed. It is tempting to see these as failures of self-awareness. But this is just what we would expect from a system that is trying to create its best guess at how it is performing at any given moment from noisy internal data.

In fact, metacognition is likely to be even more susceptible to illusions and distortions than perception. Our senses usually remain in touch with reality because we receive constant feedback from the environment. If I misperceive my coffee cup as twice as large as it is, then I will likely knock it over when I reach out to pick it up, and such errors will serve to rapidly recalibrate my model of the world. Because my body remains tightly coupled to the environment, there is only so much slack my senses can tolerate. The machinery for self-awareness has a tougher job: it must perform mental alchemy, conjuring up a sense of whether we are right or wrong from a fairly loose and diffuse feedback loop. The consequences of illusions about ourselves are usually less obvious. If we are lacking in self-awareness, we might get quizzical looks at committee meetings, but we won’t tend to knock over coffee cups or fall down flights of stairs. Self-awareness is therefore less moored to reality and more prone to illusions.

One powerful source of metacognitive illusions is known as fluency. Fluency is the psychologist’s catch-all term for the ease with which information is processed. If you are reading these words in a well-lit, quiet room, then they will be processed more fluently than if you were struggling to concentrate in dim light. Such fluency colors how we interpret information. For instance, in a study of the stock market, companies with easy-to-pronounce names (such as Deerbond) were on average valued more highly than companies with disfluent names (such as Jojemnen). And because fluency can also affect metacognition, it may create situations in which we feel like we are performing well when actually we are doing badly. We feel more confident about our decisions when we act more quickly, even if faster responses are not associated with greater accuracy. Similarly, we are more confident about being able to remember words written in a larger font, even if font size does not influence our ability to remember. There are many other cases where these influences of fluency can lead to metacognitive illusions—metacognitive versions of the perceptual illusions we encountered earlier in the book.1 As the Nobel Prize-winning psychologist Daniel Kahneman points out:

Subjective confidence in a judgment is not a reasoned evaluation of the probability that this judgment is correct. Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.2

In fact, the process of making metacognitive judgments can be thought of as the brain solving another kind of inverse problem, working out what to think about itself based on limited data. Just as our sensory systems pool information to mutually constrain and anchor our view of the world, a range of cues constrain and anchor our view of ourselves. Sometimes these inputs are helpful, but other times they can be hijacked. Just as illusions of perception reveal the workings of a powerful generative model that is trying to figure out what is “out there” in the world, metacognitive illusions created by false feedback, devious experimenters, and post-hoc justifications reveal the workings of the constructive process underpinning self-awareness. The fragility of metacognition is both a pitfall and an opportunity. The pitfall is that, as we have seen, self-awareness can be easily distorted or destroyed through brain damage and disorder. But the opportunity is that self-awareness can be molded and shaped by the way we educate our children, interact with others, and organize our lives.

Self-Awareness Is Less Common Than We Think

Another surprising feature of self-awareness is that it is often absent or offline. When a task is well learned—such as skilled driving or concert piano playing—awareness of the details of what we are doing becomes less necessary. Instead, as we saw in our discussions of action monitoring, fine-scale unconscious adjustments are sufficient to keep our actions on course. If lower-level processes are operating as they should—if all is well—then there is no need to propagate such information upward as a higher-level prediction error. Self-awareness operates on a need-to-know basis. This feature has the odd consequence that self-awareness is absent more often than we think.

A useful analogy for the engagement of self-awareness is how problem-solving is typically handled in any big organization. If a problem can be tackled by someone relatively junior, they are usually encouraged to take the initiative and resolve it without bothering their boss (and their boss’s boss). It’s possible that the boss never becomes aware of the problem; the error signal was dealt with lower down the hierarchy and did not propagate up to the corner office. In such cases, there is a real sense in which the organization lacked metacognitive awareness of the problem (which may or may not be problematic, depending on whether it was handled effectively).

For another example, take the case of driving. It is quite common to travel several miles on a familiar route without much awareness of steering or changing gear. Instead, our thoughts might wander away to think about other worries, concerns, or plans. As the psychologist Jonathan Schooler notes, “We are often startled by the discovery that our minds have wandered away from the situation at hand.” Self-awareness depends on neural machinery that may, for various reasons, become uncoupled from whatever task we are currently engaged in.3

Laboratory studies have shown that these metacognitive fade-outs are more common than we like to think—occurring anywhere between 15 and 50 percent of the time, depending on the task. One workhorse experiment is known as the sustained attention to response task (SART). The SART is very simple, and very boring. A rapid series of numbers is presented on the screen, and people are asked to press a button every time they see a number except when they see the number 3, in which case they are supposed to withhold their response. During periods when people report that their minds have wandered, their response times on the SART speed up, and they are more likely to (wrongly) hit the button in response to the number 3.

These data suggest that when we mind-wander, we default to a mindless, repetitive stimulus-response mode of behaving. Perception and action are still churning away—they must be, in order to allow a response to the stimulus—but awareness of ourselves doing the SART has drifted off. This pattern is exacerbated under the influence of alcohol, when people become more likely to mind-wander and less likely to catch themselves doing so. Such mind-wandering episodes are also more likely when the task is well practiced, just as we would expect if self-awareness becomes redundant as skill increases.4

Mind-wandering, then, is a neat example of how awareness of an ongoing task might fade in and out. This does not mean that self-awareness is lost entirely; instead, it may become refocused on whatever it is we are daydreaming about. But this refocusing could mean that we are no longer aware of ourselves as actors in the world. A clever experiment by researchers at the Weizmann Institute of Science in Israel attempted to measure what happens in the brain when awareness begins to fade in this way. They gave people two tasks to do. In one run of the fMRI scanner, subjects simply had to say whether a series of pictures contained animals or not. In another run, they saw the same series of images but were asked to introspect about whether the image elicited an emotional experience. The nice thing about this comparison is that the stimuli being presented (the images of animals) and the actions required (button presses) are identical in the two cases. The lower-level processes are similar; only the engagement of metacognition differs.

When the brain activity patterns elicited by these two types of scanner run were compared, a prefrontal network was more active in the introspection compared to the control condition, as we might expect if the PFC contributes to self-awareness. But the neatest aspect of the experiment was that the researchers also included a third condition, a harder version of the animal categorization task. The stimuli required much quicker responses, and the task was more stressful and engrossing. Strikingly, while this harder task increased the activation level in many brain areas (including parietal, premotor, and visual regions), it decreased activity in the prefrontal network associated with metacognition. The implication is that self-awareness had decreased as the task became more engaging. Similar neural changes may underpin the fading of self-awareness we experience when we are engrossed in a film or video game.5

Another factor that might lead to similar fade-outs in self-awareness, but for different reasons, is stress. Much is now known about the neurobiology of the stress response in both animals and humans; one well-documented action of stress hormones such as glucocorticoids is a weakening of PFC function. One implication is that metacognition may be among the first functions to become compromised under stress. Consistent with this idea, in one study, people who showed greater cortisol release in response to a social stress test were also those who showed the most impaired metacognition. Similarly, in another experiment, simply giving people a small dose of hydrocortisone—which leads to a spike in cortisol over a period of a few hours—was sufficient to decrease metacognitive sensitivity compared to a group that received a placebo.6

A link between stress and lowered metacognition has some disturbing consequences. Arguably, we most need self-awareness at precisely the times when it is likely to be impaired. As the pressure comes on at work or when we are stressed by money or family worries, engaging in metacognition might reap the most benefits, enabling us to recognize errors, recruit outside help, or change strategy. Instead, if metacognition fades out under times of stress, it is more likely that we will ignore our errors, avoid asking for help, and plow on regardless.

Our second lesson from the science of self-awareness, then, is that the machinery for self-awareness might sometimes become disengaged from what we are currently doing, saying, or thinking. By definition, it is particularly hard to recognize when such fade-outs occur, because—as in the frontal lobe paradox—a loss of self-awareness also impairs the very functions we would need to realize it has been lost. Such fade-outs are more common than we think.7

The Causal Power of Metacognition

A final lesson of the science of metacognition is that it has consequences for how we behave. Rather than self-awareness being a mere by-product of our minds at work, it has causal power in guiding our behavior. How do we know this?

One way of going about answering this question is to directly alter people’s metacognition and observe what happens. If how you feel about your performance changes, and these feelings play a causal role in guiding behavior, then your decision about what to do next—whether skipping a question or changing your mind—should also change. But if these feelings are epiphenomenal, with no causal role, then manipulating metacognition should have no effect.

The data so far supports a causal role for metacognition in guiding learning and decision-making. When people are given pairs of words to learn (such as “duck-carrot”), we can generate an illusion of confidence in having remembered the words by simply repeating the pairs during learning. Crucially, when people are made to feel more confident in this way, they are also less likely to choose to study the pairs again. The illusion of confidence is sufficient to make them believe further study is not necessary. Similar illusions of confidence about our decisions can be reliably induced using a trick known as a “positive evidence” manipulation. Imagine deciding which of two stimuli, image A or image B, has just been flashed on the computer screen. The images are presented in noise so it’s difficult to tell A from B. If we now ramp up the brightness of both the image and the noise, the signal strength in favor of A or B increases, but the judgment remains objectively just as difficult (because the noise has also increased). Remarkably, this method is a reliable way of making people feel more confident, even if they are no more accurate in their choices. These heightened feelings of confidence affect how people behave, making them less likely to seek out new information or change their mind. The general lesson is that altering how we feel about our performance is sufficient to change our behavior, even if performance itself remains unchanged.8

Like any powerful tool, metacognition can be both creative and destructive. If our metacognition is accurate, it can supercharge how we go about our daily lives. Seemingly minor decisions made at a metacognitive level can have an outsize impact. For instance, I occasionally have to decide how much time to allocate to preparing a new lecture about my research. If I prepare all week, then I might increase my chances of giving a good talk, but this would be at the expense of actually doing the research that leads to invites to give talks in the first place! I need to have reasonable metacognition to know where to invest my time and energy, based on knowledge of my weaknesses in different areas. But if we act on metacognitive illusions, our performance can suffer. If I thought I was terrible at giving talks, I might spend all week investing unnecessary time practicing, leaving no time for other aspects of my job. Conversely, being overconfident and not preparing at all might result in an embarrassing failure.

Again, aircraft pilots give us a neat metaphor for the power of metacognitive illusions. Typically, when all is well, the different levels of self-monitoring of the aircraft are in alignment. At a lower level, the autopilot and instrument panels might inform the pilots that they are flying level at ten thousand feet, and they have no reason to disbelieve this. But in a particularly devastating situation known as the Coriolis illusion, pilots flying in thick cloud can sometimes feel that they are banked over when in fact their instruments (correctly) inform them that they are flying straight and level. Trying to correct the illusory bank can lead the aircraft into trouble. This factor was thought to play a role in the death of John F. Kennedy Jr., whose light aircraft crashed into the sea near Martha’s Vineyard. Student pilots are now routinely instructed about the possibility of such illusions and told to always trust their instruments unless there is a very good reason not to.

We can protect ourselves from metacognitive illusions by taking a leaf out of the flying instructor’s book. By learning about the situations in which self-awareness may become impaired, we can take steps to ensure we retain a reasonably clear-eyed view of ourselves. More specifically, we should be wary about placing too much trust in our metacognition if other sources (such as feedback from family and friends) tell us we are veering off course.

Like many things in science, when we begin to understand how something works, we can also begin to look for ways to harness it. In Part II, we are going to zoom in on the role that self-awareness plays in how we educate our children, make high-stakes decisions, work in teams, and augment the power of AI. This will not be a self-help guide with straightforward answers. It’s rare that the science of self-awareness tells us we should do x but not y. But I hope that by understanding how self-awareness works—and, particularly, why it might fail—we can learn how to use it better.