Know Thyself: The Science of Self-Awareness - Stephen M Fleming 2021
Learning to Learn
The Power of Reflection
Rather amazingly, we are animals who can think about any aspect of our own thinking and can thus devise cognitive strategies (which may be more or less indirect and baroque) aimed to modify, alter, or control aspects of our own psychology.
—ANDY CLARK, Supersizing the Mind
From the Industrial Revolution onward, a dominant model in education was one of rote learning of facts and memorization—of capital cities, times tables, body parts, and chemical elements. In his 1817 book The Philosophy of Arithmetic, John Leslie argued that children should be encouraged to memorize multiplication tables all the way up to 50 times 50. The schoolteacher Mr. Gradgrind in Charles Dickens’s Hard Times agrees, telling us that “facts alone are wanted in life. Plant nothing else and root out everything else.” The assumption was that the goal of education was to create people who can think faster and squirrel away more knowledge.
This approach may have achieved modest results in Victorian times. But in an increasingly complex and rapidly changing world, knowing how to think and learn is becoming just as important as how much we learn. With people living longer, having multiple jobs and careers, and taking up new hobbies, learning is becoming a lifelong pursuit, rather than something that stops when we leave formal schooling. As The Economist noted in 2017, “The curriculum needs to teach children how to study and think. A focus on ’metacognition’ will make them better at picking up skills later in life.”1
The consequences of being a good or poor learner ricochet throughout our lives. Cast your mind back to Jane, whom we encountered at the beginning of the book. To make smooth progress in her studying, she needs to figure out what she knows and doesn’t know and to make decisions about what to learn next. These decisions may seem trivial, but they can be the difference between success and failure. If Jane’s metacognition is good, then she will be able to effectively guide her learning. If, on the other hand, her metacognition is off, it won’t matter if she is a brilliant engineer—she will be setting herself up for failure.
The lessons from Part I on how metacognition works take on critical importance in the classroom. Because metacognition sets the stage for how we learn, the payoff from avoiding metacognitive failure can be sizeable. In this chapter, we will explore how to apply this knowledge to improve the way we decide how, what, and when to study.
In the learning process, metacognition gets into the game in at least three places. We begin by forming beliefs about how best to learn and what material we think we need to focus on—what psychologists refer to as judgments of learning. Think back to when you were brushing up for a test in high school, perhaps a French exam. You might have set aside an evening to study the material and learn the various vocabulary pairs. Without realizing it, you were also probably making judgments about your own learning: How well do you know the material? Which word pairs might be trickier than others? Do you need to test yourself? Is it time to stop studying and head out with friends?
Once we have learned the material, we then need to use it—for an exam, in a dinner party conversation, or as a contestant on Who Wants to Be a Millionaire? Here, as we have seen in Part I, metacognition creates a fluctuating sense of confidence in our knowledge that may or may not be related to the objective reality of whether we actually know what we are talking about. Pernicious illusions of fluency can lead to dangerous situations in which we feel confident about inaccurate knowledge. Finally, after we’ve put an answer down on paper, additional metacognitive processing kicks into gear, allowing us to reflect on whether we might be wrong and perhaps to change our minds or alter our response. In what follows, we are going to look at each of these facets of self-awareness, and ask how we can avoid metacognitive failures.
The study of metacognition in the classroom has a long history. The pioneers of metacognition research were interested in how children’s self-awareness affects how they learn. As a result, this chapter is able to draw on more “applied” research than other parts of the book—studies that have been carried out in classroom or university settings, and which have clear implications for how we educate our children. But it remains the case that the best route to improving our metacognition is to understand how it works, and I aim to explain the recommendations of these studies through the lens of the model of self-awareness we developed in Part I.
Choosing How to Learn
As an undergraduate studying brain physiology, I was forced to learn long lists of cell types and anatomical labels. My strategy was to sit in the carrels of my college library with a textbook open, writing out the strange vocabulary on sheets of paper: Purkinje, spiny stellate, pyramidal; circle of Willis, cerebellar vermis, extrastriate cortex. I would then go through with a colored highlighter, picking out words I didn’t know. In a final step, I would then (time and willpower permitting) transfer the highlighted items onto index cards, which I would carry around with me before the exam.
This worked for me, most of the time. But looking back, I had only blind faith in this particular approach to exam preparation. Deciding how to learn in the first place is a crucial choice. If we don’t know how to make this decision, we might inadvertently end up holding ourselves back.
A common assumption is that each of us has a preferred learning style, such as being visual, auditory, or kinesthetic learners. But this is likely to be a myth. The educational neuroscientist Paul Howard-Jones points out that while more than 90 percent of teachers believe it is a good idea to tailor teaching to students’ preferred learning styles, there is weak or no scientific evidence that people really do benefit from different styles. In fact, most controlled studies have shown no link between preferred learning style and performance. Yet the advice to tailor learning in this way has been propagated by august institutions such as the BBC and British Council.2
This widespread belief in learning styles may stem from a metacognitive illusion. In one study, fifty-two students were asked to complete a questionnaire measure of whether they prefer to learn through seeing pictures or written words. They were then given a memory test for common objects and animals, presented either as pictures or words. Critically, while the students were studying the items, the researchers checked to see how confident the students felt by recording their judgments of learning. Their preferred learning style was unrelated to their actual performance: students who said they were pictorial learners were no better at learning from pictures, and those who said they were verbal learners were no better at learning from words. But it did affect their metacognition: pictorial learners felt more confident at learning from pictures, and verbal learners more confident about learning from words.3
The learning-styles myth, then, may be due to a metacognitive illusion: we feel more confident learning in our preferred style. The snag is that the factors that make us learn better are often also the things that make us less confident in our learning progress. Put simply, we might think we are learning better when using strategy A, whereas strategy B actually leads to better results.
Similar effects can be found when comparing digital and print reading. In one study, seventy undergraduates from the University of Haifa were asked to read a series of information leaflets about topics such as the advantages of different energy sources or the importance of warming up before exercise. Half of the time the texts were presented on a computer screen, and the other half they were printed out. After studying each text, students were asked how well they thought they would perform on a subsequent multiple-choice quiz. They were more confident about performing well after reading the information on-screen than on paper, despite performing similarly in both cases. This overconfidence had consequences: when they were allowed to study each passage for as long as they wanted, the heightened confidence for on-screen learning led to worse performance (63 percent correct versus 72 percent correct), because students gave up studying earlier.4
Another area in which metacognitive illusions can lead us astray is in deciding whether and how to practice—for instance, when studying material for an upcoming exam or test. A classic finding from cognitive psychology is that so-called spaced practice—reviewing the material once, taking a break for a day or two, and then returning to it a second time—is more effective for retaining information than massed practice, where the same amount of revision is crammed into a single session. Here again, though, metacognitive illusions may guide us in the wrong direction. Across multiple experiments, the psychologist Nate Kornell reported that 90 percent of college students had better performance after spaced, rather than massed, practice, but 72 percent of participants reported that massing, not spacing, was the more effective way of learning! This illusion may arise because cramming creates metacognitive fluency; it feels like it is working, even if it’s not. Kornell likens this effect to going to the gym, only to choose weights that are too light and that don’t have any training effect. It feels easy, but this isn’t because you are doing well; it’s because you have rigged the training session in your favor. In the same way that we want to feel like we’ve worked hard when we leave the gym, if we are approaching learning correctly, it usually feels like a mental workout rather than a gentle stroll.
Along similar lines, many students believe that just rereading their notes is the right way to study—just as I did when sitting in those library carrels, rereading my note cards about brain anatomy. This might feel useful and helpful, and it is probably better than not studying at all. But experiments have repeatedly shown that testing ourselves—forcing ourselves to practice exam questions, or writing out what we know—is more effective than passive rereading. It should no longer come as a surprise that our metacognitive beliefs about the best way to study are sometimes at odds with reality.5
Awareness of Ignorance
Once we have decided how to learn, we then need to make a series of microdecisions about what to learn. For instance, do I need to focus more on learning math or chemistry, or would my time be better spent practicing exam questions? This kind of metacognitive questioning does not stop being important when we leave school. A scientist might wonder whether they should spend more time learning new analysis tools or a new theory, and whether the benefits of doing so outweigh the time they could be spending on research. This kind of dilemma is now even more acute thanks to the rise of online courses providing high-quality material on topics ranging from data science to Descartes.
One influential theory of the role played by metacognition in choosing what to learn is known as the discrepancy reduction theory. It suggests that people begin studying new material by selecting a target level of learning and keep studying until their assessment of how much they know matches their target. One version of the theory is Janet Metcalfe’s region of proximal learning (RPL) model. Metcalfe points out that people don’t just strive to reduce the discrepancy between what they know and what they want to learn; they also prefer material that is not too difficult. The RPL model has a nice analogy with weight lifting. Just as the most gains are made in the gym by choosing weights that are a bit heavier than we’re used to, but not so heavy that we can’t lift them, students learn more quickly by selecting material of an intermediate difficulty.6
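To make the logic of these models a little more concrete, here is a minimal sketch in Python of the discrepancy-reduction idea with Metcalfe’s “proximal” preference layered on top. The item names, numbers, and simple update rule are my own inventions for illustration; neither theory is committed to these specifics.

```python
# A toy illustration of discrepancy reduction with a "proximal learning" twist:
# keep studying until judged knowledge reaches the target level, and at each
# step pick the not-yet-learned item of most intermediate difficulty.
# The numbers and the update rule are invented for illustration only.

def study_session(items, target=0.9, step=0.3):
    """items maps an item name to (judged_knowledge, difficulty), both in 0..1."""
    judged = {name: know for name, (know, _) in items.items()}
    difficulty = {name: diff for name, (_, diff) in items.items()}

    while any(know < target for know in judged.values()):
        # Prefer material of intermediate difficulty (closest to 0.5)
        # among the items still judged to fall short of the target.
        candidates = [name for name in judged if judged[name] < target]
        chosen = min(candidates, key=lambda name: abs(difficulty[name] - 0.5))
        # Studying nudges judged knowledge upward, shrinking the discrepancy.
        judged[chosen] = min(1.0, judged[chosen] + step)
        print(f"studied {chosen}: judged knowledge now {judged[chosen]:.2f}")

study_session({
    "easy vocab":   (0.7, 0.2),
    "medium vocab": (0.4, 0.5),
    "hard vocab":   (0.1, 0.9),
})
```

Run as written, the loop works through the medium-difficulty vocabulary first and keeps cycling back to whatever is judged unlearned until everything clears the target, which is the essence of both accounts.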
Both the discrepancy reduction and RPL models agree that metacognition plays a key role in learning by helping us monitor our progress toward a goal. Consistent with this idea, finely tuned metacognition has clear benefits in the classroom. For instance, one study asked children to “think aloud” while they were preparing for an upcoming history exam. Overall, 31 percent of the children’s thoughts were classified as “metacognitive,” because they referred to whether or not the child knew the material. Better readers and more academically successful students reported engaging in more metacognition.7
It stands to reason, then, that interventions to encourage metacognition might have widespread benefits for educational attainment. Patricia Chen and her colleagues at Stanford University set out to test this idea by dividing students into two groups prior to an upcoming exam. A control group received a reminder that their exam was coming up in a week’s time and that they should start preparing for it. The experimental group received the same reminder, along with a strategic exercise prompting them to reflect on the format of the upcoming exam, which resources would best facilitate their studying (such as textbooks, lecture videos, and so on), and how they were planning to use each resource. Students in the experimental group outperformed those in the control group by around one-third of a letter grade: in a first experiment, the students performed an average of 3.6 percent better than controls, and in a second experiment, 4.2 percent better. Boosting metacognition also led to lower feelings of anxiety and stress about the upcoming exam.8
It may even be useful to cultivate what psychologists refer to as desirable difficulty, as a safeguard against illusions of confidence driven by fluency. For instance, scientists at RMIT University in Melbourne, Australia, have developed a new, freely downloadable computer font memorably entitled Sans Forgetica, which makes it harder than usual to read the words on the page. The idea is that the disfluency produced by the font prompts the students to think that they aren’t learning the material very well, and as a result they concentrate and study for longer.9
Taken together, the current research indicates that metacognition is a crucial but underappreciated component of how we learn and study. What we think about our knowledge guides what we study next, which affects our knowledge, and so on in a virtuous (or sometimes vicious) cycle. This impact of metacognition is sometimes difficult to grasp and is not as easy to see or measure as someone’s raw ability at math or science or music. But the impact of metacognition does not stop when learning is complete. As we saw in the case of Judith Keppel’s game-show dilemma, it also plays a critical role in guiding how we use our newly acquired knowledge. This hidden role of metacognition in guiding our performance may be just as important as raw intelligence for success on tests and exams, if not more so. Let’s turn to this next.
How We Know That We Know
Each year, millions of American high school juniors and seniors take the SAT, previously known as the Scholastic Aptitude Test. The stakes are high: the SAT is closely monitored by top colleges, and even after graduation, several businesses such as Goldman Sachs and McKinsey & Company want to know their prospective candidates’ scores. At first glance, this seems reasonable. The test puts the students through their paces in reading, writing, and arithmetic and sifts them according to ability. Who wouldn’t want the best and the brightest to enroll in their college or graduate program? But while raw ability certainly helps, it is not the only contributor to a good test score. In fact, until 2016, metacognition was every bit as important.
To understand why, we need to delve into the mechanics of how the SAT score was calculated. Most of the questions on the SAT are multiple-choice. Until 2016 there were five possible answers to each question, only one of which was correct. If you were to guess randomly with your eyes closed throughout the test, you would expect to achieve a score of around 20 percent, not 0 percent. To estimate a true ability level, the SAT administrators therefore implemented a correction for guessing in their scoring system. For each correct answer, the student received one point. But for each incorrect answer, one-quarter of a point was deducted. This ensured that the expected score for students who guessed with their eyes closed was indeed 0.
However, this correction had an unintended consequence. Students could now strategically regulate their potential score rather than simply volunteering an answer for every question. If they had low confidence in a particular answer, they could skip the question, avoiding a potential penalty. We have already seen from the studies of animal metacognition in Part I that being able to opt out of a decision when we are not sure of the answer helps us achieve higher performance, and that this ability relies on effective estimation of uncertainty. Students with excellent metacognition would adeptly dodge questions they were likely to get wrong and would lose only a few points. But students with poor metacognition—even those with above-average ability—might rashly ink in several incorrect responses, totting up a series of quarter-point penalties as they did so.10
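The arithmetic behind this trade-off is worth spelling out. The short Python sketch below is only an illustration of the pre-2016 scoring rule described above (plus one point for a correct answer, minus a quarter point for a wrong one, zero for a skip); the break-even confidence of 20 percent follows directly from that rule.

```python
# Expected points per question under the pre-2016 SAT scoring rule:
# +1 for a correct answer, -1/4 for a wrong one, 0 for a skipped question.

def expected_points(p_correct, penalty=0.25):
    """Expected score from answering a question you would get right
    with probability p_correct."""
    return p_correct * 1.0 - (1 - p_correct) * penalty

# Blind guessing among five options gives a one-in-five chance of being right,
# so the quarter-point penalty exactly cancels the gain on average.
print(expected_points(1 / 5))   # 0.0

# It only pays to answer rather than skip (worth 0 points) when confidence
# exceeds the break-even point: p - (1 - p)/4 > 0, i.e. p > 0.2.
for p in (0.10, 0.20, 0.50, 0.90):
    decision = "answer" if expected_points(p) > 0 else "skip"
    print(f"confidence {p:.0%}: expected {expected_points(p):+.3f} points -> {decision}")
```

Seen this way, the test quietly rewards students who can tell when their confidence is above or below that threshold, which is precisely a metacognitive skill.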
The accuracy of metacognition may even make the difference between acing and failing a test. In one laboratory version of the SAT, volunteers were asked to answer a series of general-knowledge questions such as “What was the name of the first emperor of Rome?” If they weren’t sure of the answer, they were asked to guess, and after each response the volunteers rated their confidence on a numerical scale. As expected, answers held with higher confidence were more likely to be correct. Now, armed with confidence scores for every question, the researchers repeated the test under SAT-style conditions: questions could be skipped, and there was a penalty for wrong answers. People tended to omit answers they originally held with lower confidence, and did so to a greater degree as the penalty for wrong responses went up, thus improving their score. But this same strategy had disastrous consequences when their metacognition was weak. In a second experiment, quiz questions were carefully chosen that often led to metacognitive illusions of confidence, such as “Who wrote the Unfinished Symphony?” or “What is the capital of Australia?” (the answers are Schubert, not Beethoven, and Canberra, not Sydney). For questions such as these, people’s misplaced confidence led them to volunteer lots of incorrect answers, and their performance plummeted.11
Beneficial interactions between metacognition and test-taking performance may explain why metacognition plays a role in boosting educational success over time. One recent study looked at how metacognition and intelligence both developed in children aged between seven and twelve years old. Crucially, the same children came back to the lab for a subsequent assessment three years later (when they were aged between ten and fifteen). From this rare longitudinal data, it was possible to ask whether a child’s metacognition at the first assessment predicted their intelligence score three years later, and vice versa. While there were relatively weak links between metacognition and IQ at any one point in time (consistent with other findings of the independence of metacognition and intelligence), having good metacognition earlier in life predicted higher intelligence later on. An elegant explanation for this result is that having good metacognition helps children know what they don’t know, in turn guiding their learning and overall educational progress. In line with this idea, when people are allowed to use metacognitive strategies to solve an IQ test, the extent to which they improve their performance by relying on metacognition is linked to real-life educational achievement.12
What is clear is that the scores from SAT-style tests are not only markers of ability, but also of how good individuals are at knowing their own minds. In using such tests, employers and colleges may be inadvertently selecting for metacognitive abilities as well as raw intelligence. This may not be a bad thing, and some organizations even do it on purpose: the British Civil Service graduate scheme asks potential candidates to rate their own performance during the course of their entrance examinations and takes these ratings into account when deciding whom to hire. The implication is that the Civil Service would prefer to recruit colleagues who are appropriately aware of their skills and limitations.
Believing in Ourselves
There is one final area in which metacognition plays a critical role in guiding our learning: creating beliefs about our skills and abilities. We have already seen in Part I that confidence is a construction and can sometimes be misleading—it does not always track what we are capable of. We might believe we will be unable to succeed in an upcoming exam, in a sports match, or in our career, even if we are more than good enough to do so. The danger is that these metacognitive distortions may become self-fulfilling prophecies. Put simply, if we are unwilling to compete, then we have no way of winning.
One of the pioneers in the study of these kinds of metacognitive illusions was the social psychologist Albert Bandura. In a series of influential books, Bandura outlined how what people believe about their skills and abilities is just as important for their motivation and well-being as their objective abilities, if not more so. He referred to this set of beliefs as “self-efficacy” (our overall confidence in our performance, closely related to metacognitive bias). He summarized his proposal as follows: “People’s beliefs in their efficacy affect almost everything they do: how they think, motivate themselves, feel and behave.” Laboratory experiments have borne out this hypothesis by subtly manipulating how people feel about their abilities on an upcoming task. Illusory boosts in self-efficacy indeed lead people to perform better, and persist for longer, at challenging tasks, whereas drops in self-efficacy lead to the opposite.13
One well-researched aspect of self-efficacy is children’s beliefs about being able to solve math problems. In one set of longitudinal studies, children’s beliefs about their abilities at age nine affected how they performed at age twelve, even when differences in objective ability were controlled for. The implication is that self-efficacy drove attainment, rather than the other way around. Because these beliefs can influence performance, any gender disparity in mathematics self-efficacy is a potential cause of differences between boys and girls in performance in STEM subjects. A recent worldwide survey showed that 35 percent of girls felt helpless when doing math problems, compared to 25 percent of boys. This difference was most prominent in Western countries such as New Zealand and Iceland, and less so in Eastern countries including Malaysia and Vietnam. It is not hard to imagine that systematic differences in self-efficacy may filter through into disparities in performance, leading to a self-reinforcing cycle of mathematics avoidance, even though girls start off no less able than boys.14
These effects of self-efficacy continue to echo into our adult lives. In social settings, such as at work and school, measures of women’s confidence in their abilities are often lower than men’s. (Interestingly, this difference disappears in our laboratory studies, in which we measure people’s metacognition in isolation.) In their book The Confidence Code, Katty Kay and Claire Shipman describe a study conducted on Hewlett-Packard employees. They found that women applied for promotions when they believed they met 100 percent of the criteria, while men applied when they believed they met only 60 percent—that is, men were willing to act on a lower sense of confidence in their abilities. It is easy to see how this difference in confidence can lead to fewer promotions for women over the long run.15
On other occasions, though, lower self-efficacy can be adaptive. If we are aware of our weaknesses, we can benefit more from what psychologists refer to as offloading—using external tools to help us perform at maximum capacity. We often engage in offloading without thinking. Consider how effortlessly you reflect on whether you will be able to remember what you need to buy at the shop or whether you need to write a list. Being aware of the limitations of your memory allows you to realize, “This is going to be tough.” You know when you will no longer be able to keep all the items in mind and need a helping hand.
Allowing people to offload typically improves their performance in laboratory tests, compared to control conditions where the offloading strategy is unavailable. This boost depends on estimates of self-efficacy. To know when to offload, people need to first recognize that their memory or problem-solving abilities might not be up to the job. People who have lower confidence in their memory are more likely to spontaneously set reminders, even after controlling for variation in objective ability. And this ability to use external props when things get difficult can be observed in children as young as four years old, consistent with self-efficacy and confidence guiding how we behave from an early age.16
To zero in on this relationship between metacognition and offloading, my student Xiao Hu and I set up a simple memory test in which people were asked to learn unrelated pairs of words, such as “rocket-garden” or “bucket-silver.” The twist was that our participants were also given the opportunity to store any word pairs they weren’t sure they could remember in a computer file—the lab version of writing a shopping list. When we then tested their memory for the word pairs a little later on, people naturally made use of the saved information to help them get the right answers, even though accessing the stored words carried a small financial cost. Critically, the volunteers in our study tended to use the stored information only when their confidence in their memory was low, demonstrating a direct link between fluctuations in metacognition and the decision to recruit outside help. Later in the book, we will see that this role of self-awareness in helping us know when and how to rely on artificial assistants is becoming more and more important, as our technological assistants morph from simple lists and notes to having minds of their own.17
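To see why low confidence makes offloading worthwhile even when it carries a small cost, consider a toy expected-value sketch of the trade-off. The point values and lookup cost below are invented for illustration; they are not the numbers or the model used in our experiment.

```python
# A toy illustration of the offloading trade-off: pay a small cost to look up
# the stored word pair (guaranteed correct) only when confidence in unaided
# memory is low. All numbers here are invented.

def best_strategy(confidence, reward=1.0, lookup_cost=0.2):
    """Compare answering from memory with retrieving the stored answer."""
    from_memory = confidence * reward   # rewarded only if memory holds up
    from_store = reward - lookup_cost   # always correct, minus the lookup cost
    return "offload" if from_store > from_memory else "trust memory"

for confidence in (0.3, 0.6, 0.9):
    print(f"confidence {confidence:.0%}: {best_strategy(confidence)}")
# With these numbers, offloading wins whenever confidence falls below 80 percent.
```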
Teaching Ourselves to Learn
So far, we have considered the role of metacognition in helping us know when we know and don’t know and in guiding us toward the right things to learn next. But we have seen in Part I that metacognition (thinking about ourselves) and mindreading (thinking about others) are tightly intertwined. The human brain seems to use similar machinery in the two cases, just with different inputs. This suggests that simply thinking about what other people know (and what we think they should know) may feed back and sharpen our own understanding of what we know and don’t know. As Seneca said, “While we teach, we learn.”
The role of mindreading in teaching can already be seen in careful observations of young children. In one study, children aged between three and five were asked to teach a game to some puppets. One puppet played the game perfectly, while others struggled and made mistakes. The older children tailored their teaching precisely, focusing on the needs of the puppets who made errors, whereas the younger children were more indiscriminate. This result is consistent with what we saw in Part I: around the critical age of four, children become increasingly aware of other people’s mental states. This awareness can facilitate teaching by allowing children to understand what others do and do not know.18
Children also intuitively know what needs to be taught and what doesn’t. For instance, when preschool children were asked to think about someone who had grown up on a deserted island, they realized that the island resident could discover on their own that it’s not possible to hold your breath for a whole day or that when a rock is thrown in the air it falls down. But they also appreciated that the same person would need to be taught that the Earth is round and that bodies need vitamins to stay healthy. This distinction between knowledge that can be directly acquired and that which would need to be taught was already present in the youngest children in the study (five years old) and became sharper with age.19
Teaching others encourages us to make our beliefs about what we know and don’t know explicit and forces us to reconsider what we need to do to gain more secure knowledge of a topic. Prompts to consider the perspective of others indeed seem to have side benefits for self-awareness. When undergraduates were asked to study material for a quiz, they performed significantly better if they were also told they would have to teach the same material to another person. And asking students to engage in an eight-minute exercise advising younger students on their studies was sufficient to prompt higher grades during the rest of that school year, compared to a control group who did not provide advice. In their paper reporting this result, University of Pennsylvania psychologist Angela Duckworth and her colleagues suggest that the advice givers may have experienced heightened self-awareness of their own knowledge as a result of the intervention. This result is striking not least because meaningful educational interventions are notoriously difficult to find, and often those that are said to work may not actually do so when subjected to rigorous scientific testing. That a short advice-giving intervention could achieve meaningful effects on school grades is testament to the power of virtuous interactions between teaching, self-awareness, and performance.20
We can understand why teaching and advising others can be so useful for our own learning by considering how it helps us avoid metacognitive illusions. When we are forced to explain things to others, there is less opportunity to be swayed by internal signals of fluency that might create unwarranted feelings of confidence in our knowledge. For instance, the “illusion of explanatory depth” refers to the common experience of thinking we know how things work (from simple gadgets to government policies), only to find, when asked to explain them to others, that we are unable to do so. Being forced to make our knowledge public ensures that misplaced overconfidence is exposed. For similar reasons, it is easier for us to recognize when someone else is talking nonsense than to recognize that same flaw in ourselves. Indeed, when people are asked to explain and justify their reasoning on difficult logic problems, they become more critical of their own arguments when they believe those arguments are someone else’s. Importantly, they also become more discerning: they are more likely to correct an initially wrong answer to a problem when it is presented as someone else’s.21
One implication of these findings is that a simple and powerful way to improve self-awareness is to take a third-person perspective on ourselves. An experiment conducted by the Israeli psychologists and metacognition experts Rakefet Ackerman and Asher Koriat is consistent with this idea. Students were asked to judge both their own learning and the learning progress of others, relayed via a video link. When judging themselves, they fell into the fluency trap: they took having spent less time studying as a sign that the material had been learned well. But when judging others, this relationship was reversed; they (correctly) judged that spending longer on a topic would lead to better learning.22
External props and tools can also provide a new perspective on what we know. Rather than monitoring murky internal processes, many of which remain hidden from our awareness, getting words down on the page or speaking them out loud creates concrete targets for self-reflection. I have experienced this firsthand while writing this book. When I wasn’t sure how to structure a particular section or chapter, I found the best strategy was to start by getting the key points down on paper, and only then was I able to see whether it made sense. We naturally extend our minds onto the page, and these extensions can themselves be targets of metacognition, to be mused about and reflected upon just like regular thoughts and feelings.23
Creating Self-Aware Students
Metacognition is central to how we learn new skills and educate our children. Even subtle distortions in the way students assess their skills, abilities, and knowledge can make the difference between success and failure. If we underestimate ourselves, we may be unwilling to put ourselves forward for an exam or a prize. If we overestimate ourselves, we may be in for a nasty shock on results day. If we cannot track how much we have learned, we will not know what to study next, and if we cannot detect when we might have made an error, we are unlikely to circle back and revise our answers in the high-pressure context of an exam. All of these features of metacognition are susceptible to illusions and distortions.
There is room for optimism, though. When students are encouraged to adopt a third-person perspective on their learning and teach others, they are less likely to fall prey to metacognitive distortions. By learning more about the cognitive science of learning (such as the costs and benefits of note-taking or different approaches to studying), rather than putting faith in sometimes misleading feelings of fluency, we can minimize these metacognitive errors.
Paying attention to the effects of metacognition is likely to have widespread benefits at all levels of our educational system, creating lean and hungry students who leave school having learned how to learn rather than being overstuffed with facts. There have been laudable efforts to improve metacognition in schools. But unfortunately these efforts have not yet been evaluated using the objective metrics of self-awareness we encountered in Part I, so we often do not know whether they are working as intended. A good start will be to simply measure metacognition in our classrooms. Are we cultivating learners who know what they know and what they don’t know? If we are not, we may wish to shift to an Athenian model in which cultivating self-awareness becomes just as prized as cultivating raw ability.
At higher levels of education, a broader implication is that lifelong teaching may in turn facilitate lifelong learning. The classical model in academia is that centers for teaching and learning should be collocated with centers for the discovery of new knowledge. Teaching and research are mutually beneficial. Unfortunately, this symbiosis is increasingly under threat. In the United States, the rise of adjunct teaching faculty without the status and research budgets of regular professors is decoupling teaching duties from research. In the UK, there is an increasing division between the research and teaching tracks, with junior academics on fellowships urged by their funders to protect their time for research. Too many teaching and administration responsibilities can indeed derail high-quality research—there is a balance to be struck. But I believe that all researchers should be encouraged to teach, if only to prompt us to reflect on what we know and don’t know.
After we leave school, we may no longer need to hone our exam technique or figure out how best to study. But we are still faced with numerous scenarios where we are asked to question what we know and whether we might be wrong. In the next chapter, we are going to examine how self-awareness seeps into the choices and decisions we go on to make in our adult lives. We will see that the role of metacognition goes well beyond the classroom, affecting how we make decisions, work together with others, and take on positions of leadership and responsibility.