Subliminal: How Your Unconscious Mind Rules Your Behavior - Leonard Mlodinow, 2013
Senses Plus Mind Equals Reality
The Two-Tiered Brain
The eye that sees is not a mere physical organ but a means of perception conditioned by the tradition in which its possessor has been reared. —RUTH BENEDICT
THE DISTINCTION BETWEEN the conscious and the unconscious has been made in one form or another since the time of the Greeks.1 Among the most influential of the thinkers delving into the psychology of consciousness was the eighteenth-century German philosopher Immanuel Kant. During his time, psychology was not an independent subject but merely a catchall category for what philosophers and physiologists discussed when they speculated about the mind.2 Their laws concerning human thought processes were not scientific laws but philosophical pronouncements. Since these thinkers required little empirical basis for their theorizing, each one was free to favor his own purely speculative theory over his rival’s purely speculative theory. Kant’s theory was that we actively construct a picture of the world rather than merely documenting objective events, that our perceptions are not based just on what exists but, rather, are somehow created—and constrained—by the general features of the mind. That belief was surprisingly near the modern perspective, though today scholars generally take a more expansive view than Kant’s of the mind’s general features, especially with regard to biases arising from our desires, needs, beliefs, and past experiences. Today we believe that when you look at your mother-in-law, the image you see is based not only on her optical qualities but also on what is going on in your head—for example, your thoughts about her bizarre child-rearing practices or whether it was a good idea to agree to live next door.
Kant felt that empirical psychology could not become a science because you cannot weigh or otherwise measure the events that occur in your brain. In the nineteenth century, however, scientists took a stab at it. One of the first practitioners was the physiologist E. H. Weber, who in 1834 performed a simple experiment on the sense of touch: he placed a small reference weight at a spot on his subjects’ skin, then asked them to judge whether a second weight was heavier or lighter than the first.3 The interesting thing Weber discovered was that the smallest difference a person could detect was proportional to the magnitude of the reference weight. For example, if you were just barely able to sense that a six-gram weight was heavier than a reference object that weighed five grams, one gram would be the smallest detectable difference. But if the reference weight were ten times heavier, the smallest difference you’d be able to detect would be ten times as great—in this case, ten grams. This doesn’t sound like an earth-shattering result, but it was crucial to the development of psychology because it made a point: through experimentation one can uncover mathematical and scientific laws of mental processing.
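Weber's relation can be stated in a single line: the just-noticeable difference is a constant fraction of the reference stimulus. Here is a minimal sketch in Python, using the 20 percent fraction implied by the example above (one gram against five); actual Weber fractions vary by sense, by conditions, and by person:

```python
def just_noticeable_difference(reference, weber_fraction=0.2):
    """Weber's law: the smallest detectable change is a fixed
    fraction of the reference stimulus. The 0.2 fraction comes from
    the one-gram-in-five example above; real values vary by sense."""
    return weber_fraction * reference

for reference_grams in (5, 50, 500):
    print(f"{reference_grams} g reference -> "
          f"{just_noticeable_difference(reference_grams):g} g detectable")
# 5 g reference -> 1 g detectable
# 50 g reference -> 10 g detectable
# 500 g reference -> 100 g detectable
```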
In 1879 another German psychologist, Wilhelm Wundt, petitioned the Royal Saxon Ministry of Education for money to start the world’s first psychology laboratory.4 Though his request was denied, he established the laboratory anyway, in a small classroom he had already been using, informally, since 1875. That same year, a Harvard MD and professor named William James, who had taught Comparative Anatomy and Physiology, started teaching a new course called The Relations Between Physiology and Psychology. He also set up an informal psychology laboratory in two basement rooms of Lawrence Hall. In 1891 it attained official status as the Harvard Psychological Laboratory. In recognition of their pathbreaking efforts, a Berlin newspaper referred to Wundt as “the psychological Pope of the Old World” and James as “the psychological Pope of the New World.”5 It was through their experimental work, and that of others inspired by Weber, that psychology was finally put on a scientific footing. The field that emerged was called the “New Psychology.” For a while, it was the hottest field in science.6
The pioneers of the New Psychology each had his own views about the function and importance of the unconscious. The British physiologist and psychologist William Carpenter was one of the most prescient. In his 1874 book Principles of Mental Physiology, he wrote that “two distinct trains of Mental action are carried on simultaneously, one consciously, the other unconsciously,” and that the more thoroughly we examine the mechanisms of the mind, the clearer it becomes “that not only an automatic, but an unconscious action enters largely into all its processes.”7 This was a profound insight, one we continue to build on to this day.
Despite all the provocative ideas brewing in European intellectual circles after the publication of Carpenter’s book, the next big step in understanding the brain along the lines of Carpenter’s two-trains concept came from across the ocean, from the American philosopher and scientist Charles Sanders Peirce—the man who did the studies of the mind’s ability to detect what should have been undetectable differences in weight and brightness. A friend of William James’s at Harvard, Peirce had founded the philosophical doctrine of pragmatism (though it was James who elaborated on the idea and made it famous). The name was inspired by the belief that philosophical ideas or theories should be viewed as instruments, not absolute truths, and their validity judged by their practical consequences in our lives.
Peirce had been a child prodigy.8 He wrote a history of chemistry when he was eleven. He had his own laboratory when he was twelve. At thirteen, he studied formal logic from his older brother’s textbook. He could write with both hands and enjoyed inventing card tricks. He was also, in later life, a regular user of opium, which was prescribed to relieve a painful neurological disorder. Still, he managed to turn out twelve thousand printed pages of published works, on topics ranging from the physical sciences to the social sciences. His demonstration that the unconscious mind has knowledge unknown to the conscious mind—which had its unlikely origin in the incident in which he formed an accurate hunch about the identity of the man who had stolen his gold watch—was the forerunner of many later experiments. The process of arriving seemingly by chance at a correct answer you aren’t aware of knowing is now exploited in what is called a “forced choice” experiment, which has become a standard tool for probing the unconscious mind. Although Freud is the cultural hero associated with popularizing the unconscious, it is really to pioneers like Wundt, Carpenter, Peirce, Jastrow, and William James that we can trace the roots of modern scientific methodology and thought about the unconscious mind.
TODAY WE KNOW that Carpenter’s “two distinct trains of Mental action” are actually more like two entire railway systems. To update Carpenter’s metaphor, we would say that the conscious and unconscious railways each comprise a myriad of densely interconnected lines, and that the two systems are also connected to each other at various points. The human mental system is thus far more complex than Carpenter’s original picture, but we’re making progress in deciphering its map of routes and stations.
What has become abundantly clear is that within this two-tier system, it is the unconscious tier that is the more fundamental. It developed early in our evolution, to deal with the basic necessities of function and survival, sensing and safely responding to the external world. It is the standard infrastructure in all vertebrate brains, while the conscious can be considered an optional feature. In fact, while most nonhuman species of animals can and do survive with little or no capacity for conscious symbolic thought, no animal can exist without an unconscious.
According to a textbook on human physiology, the human sensory system sends the brain about eleven million bits of information each second.9 However, anyone who has ever taken care of a few children all trying to talk at once can testify that the conscious mind cannot process anywhere near that amount. The actual amount of information we can handle has been estimated to be somewhere between sixteen and fifty bits per second. So if your conscious mind were left to process all that incoming information, your brain would freeze like an overtaxed computer. Also, though we don’t realize it, we are making many decisions each second. Should I spit out my mouthful of food because I detect a strange odor? How shall I adjust my muscles so that I remain standing and don’t tip over? What is the meaning of the words that person across the table from me is uttering? And what kind of person is he, anyway?
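Setting those two figures side by side makes the mismatch vivid; a back-of-the-envelope calculation using the estimates just quoted:

```python
sensory_input_bps = 11_000_000  # bits per second, the textbook figure above
conscious_capacity_bps = 50     # upper end of the 16-to-50 estimate above

ratio = sensory_input_bps / conscious_capacity_bps
print(f"The senses deliver about {ratio:,.0f} times more information "
      f"per second than the conscious mind can handle.")
# -> about 220,000 times more
```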
Evolution has provided us with an unconscious mind because our unconscious is what allows us to survive in a world requiring such massive information intake and processing. Our sensory perception, our memory recall, our everyday decisions, judgments, and activities all seem effortless—but that is only because the effort they demand is expended mainly in parts of the brain that function outside awareness.
Take speech. Most people who read the sentence “The cooking teacher said the children made good snacks” instantly understand a certain meaning for the word “made.” But if you read, “The cannibal said the children made good snacks,” you automatically interpret the word “made” in a more alarming sense. Though we think that making these distinctions is easy, the difficulty in making sense of even simple speech is well appreciated by computer scientists who struggle to create machines that can respond to natural language. Their frustration is illustrated by a possibly apocryphal story of the early computer that was given the task of translating the homily “The spirit is willing but the flesh is weak” into Russian and then back into English. According to the story, it came out: “The vodka is strong but the meat is rotten.” Luckily, our unconscious does a far better job, and handles language, sense perception, and a teeming multitude of other tasks with great speed and accuracy, leaving our deliberative conscious mind time to focus on more important things, like complaining to the person who programmed the translation software. Some scientists estimate that we are conscious of only about 5 percent of our cognitive function. The other 95 percent goes on beyond our awareness and exerts a huge influence on our lives—beginning with making our lives possible.
One sign that there is a lot of activity going on in our brains of which we are not aware comes from a simple analysis of energy consumption.10 Imagine yourself sprawled on the couch watching television; you are subject to few demands on your body. Then imagine yourself doing something physically demanding—say, racing down a street. When you run fast, the energy consumption in your muscles is multiplied by a factor of one hundred compared to the energy you use as a couch potato. That’s because, despite what you might tell your significant other, your body is working a lot harder—one hundred times so—when you’re running than when you’re stretched out on the sofa. Let’s contrast this energy multiplier with the one that applies when you compare two forms of mental activity: vegging out, in which your conscious mind is basically idle, and playing chess. Assuming that you are a good player with an excellent knowledge of all the possible moves and strategies and are concentrating deeply, does all that conscious thought tax your brain to the same degree that running taxes your muscles? No. Not remotely. Deep concentration causes the energy consumption in your brain to go up by only about 1 percent. No matter what you are doing with your conscious mind, it is your unconscious that dominates your mental activity—and therefore uses up most of the energy consumed by the brain. Regardless of whether your conscious mind is idle or engaged, your unconscious mind is hard at work doing the mental equivalent of push-ups, squats, and wind sprints.
ONE OF THE most important functions of your unconscious is the processing of data delivered by your eyes. That’s because, whether hunting or gathering, an animal that sees better eats better and avoids danger more effectively, and hence lives longer. As a result, evolution has arranged it so that about a third of your brain is devoted to processing vision: to interpreting color, detecting edges and motion, perceiving depth and distance, deciding the identity of objects, recognizing faces, and many other tasks. Think of it—a third of your brain is busy doing all those things, yet you have little knowledge of or access to the processing. All that hard work proceeds outside your awareness, and then the result is offered to your conscious mind in a neat report, with the data digested and interpreted. As a result, you never have to bother figuring out what it means if these rods or those cones in your retinas absorb this or that number of photons, or to translate optic nerve data into a spatial distribution of light intensities and frequencies, and then into shapes, spatial positions, and meaning. Instead, while your unconscious mind is working feverishly to do all those things, you can relax in bed, recognizing, seemingly without effort, the lighting fixture on the ceiling—or the words in this book. Our visual system is not only one of the most important systems within our brain, it is also among the most studied areas in neuroscience. Understanding its workings can shed a lot of light on the way the two tiers of the human mind function together—and apart.
One of the most fascinating of the studies that neuroscientists have done on the visual system involved a fifty-two-year-old African man referred to in the literature as TN. A tall, strong-looking man, a doctor who, as fate would have it, was destined to become renowned as a patient, TN took the first step on his path to pain and fame one day in 2004 when, while living in Switzerland, he had a stroke that knocked out the left side of a part of his brain called the visual cortex.
The main part of the human brain is divided into two cerebral hemispheres, which are almost mirror images of each other. Each hemisphere is divided into four lobes, a division originally motivated by the bones of the skull that overlie them. The lobes, in turn, are covered by a convoluted outer layer about the thickness of a formal dinner napkin. In humans, this outer covering, the neocortex, forms the largest part of the brain. It consists of six thinner layers, five of which contain nerve cells, and the projections that connect the layers to one another. There are also input and output connections from the neocortex to other parts of the brain and nervous system. Though thin, the neocortex is folded in a manner that allows almost three square feet of neural tissue—about the size of a large pizza—to be packed into your skull.11 Different parts of the neocortex perform different functions. The occipital lobe is located at the very back of your head, and its cortex—the visual cortex—contains the main visual processing center of the brain.
A lot of what we know about the function of the occipital lobe comes from creatures in which that lobe has been damaged. You might look askance at someone who seeks to understand the function of the brakes on a car by driving one that doesn’t have any—but scientists selectively destroy parts of animals’ brains on the theory that one can learn what those parts do by studying animals in which they no longer do it. Since university ethics committees would frown on killing off parts of the brain in human subjects, researchers also comb hospitals seeking unfortunate people whom nature or an accident has rendered suitable for their study. This can be a tedious search because Mother Nature doesn’t care about the scientific usefulness of the injuries she inflicts. TN’s stroke was noteworthy in that it pretty cleanly took out just the visual center of his brain. The only drawback—from the research point of view—was that it affected only the left side, meaning that TN could still see in half his field of vision. Unfortunately for TN, that situation lasted for just thirty-six days. Then a tragic second hemorrhage occurred, freakishly destroying what was almost the mirror image of the first region.
After the second stroke, doctors did tests to see whether it had rendered TN completely blind, for some of the blind have a small measure of residual sight. They can see light and dark, for example, or read a word if it covers the side of a barn. TN, though, could not even see the barn. The doctors who examined him after his second stroke noted that he could not discern shapes or detect movement or colors, or even the presence of an intense source of light. An exam confirmed that the visual areas in his occipital lobe were not functioning. Though the optical part of TN’s visual system was still fully functional, meaning his eyes could gather and record light, his visual cortex lacked the ability to process the information that his retinas were sending it. Because of this state of affairs—an intact optical system, but a completely destroyed visual cortex—TN became a tempting subject for scientific research, and, sure enough, while he was still in the hospital a group of doctors and researchers recruited him.
There are many experiments one can imagine performing on a blind subject like TN. One could test for an enhanced sense of hearing, for example, or memory for past visual experiences. But of all possible questions, one that would probably not make your list would be whether a blind man can sense your mood by staring at your face. Yet that is what these researchers chose to study.12
They began by placing a laptop computer a couple of feet in front of TN and showing him a series of black shapes—either circles or squares—presented on a white background. Then, in the tradition of Charles Sanders Peirce, they presented him with a forced choice: when each shape appeared, they asked him to identify it. Just take a stab at it, the researchers pleaded. TN obliged. He was correct about half the time, just what one would expect if he truly had no idea what he was seeing. Now comes the interesting part. The scientists displayed a new series of images—this time, a series of angry and happy faces. The game was essentially the same: to guess, when prompted, whether the face on the screen was angry or happy. But identifying a facial expression is a far different task from perceiving a geometric shape, because faces are much more important to us than black shapes.
Faces play a special role in human behavior.13 That’s why, despite men’s usual preoccupation, Helen of Troy was said to have “the face that launched a thousand ships,” not “the breasts that launched a thousand ships.” And it’s why, when you tell your dinner guests that the tasty dish they are savoring is cow pancreas, you pay attention to their faces and not their elbows—or their words—to get a quick and accurate report of their attitudes toward organ meat. We look to faces to quickly judge whether someone is happy or sad, content or dissatisfied, friendly or dangerous. And our honest reactions to events are reflected in facial expressions controlled in large part by our unconscious minds. Expressions, as we’ll see in Chapter 5, are a key way we communicate and are difficult to suppress or fake, which is why great actors are hard to find. The importance of faces is reflected in the fact that, no matter how strongly men are drawn to the female form, or women to a man’s physique, we know of no part of the human brain dedicated to analyzing the nuances of bulging biceps or the curves of firm buttocks or breasts. But there is a discrete part of the brain that is used to analyze faces. It is called the fusiform face area. To illustrate the brain’s special treatment of faces, look at the photos of President Barack Obama here.14
The photo on the left of the right-side-up pair looks horribly distorted, while the left member of the upside-down pair does not look very unusual. In reality the bottom pair is identical to the top pair, except that the top photos have been flipped. I know because I flipped them, but if you don’t trust me just rotate this book 180 degrees, and you’ll see that what is now the top pair will appear to have the bad photo, and what is now the bottom pair will look pretty good. Your brain devotes a lot more attention (and neural real estate) to faces than to many other kinds of visual phenomena because faces are more important—but not upside-down faces, since we rarely encounter those, except when performing headstands in a yoga class. That’s why we are far better at detecting the distortion on the face that is right side up than on the one that is flipped over.
[Photos of President Obama, right side up and upside down: www.moillusions.com. Used with permission.]
The researchers studying TN chose faces as their second series of images in the belief that the brain’s special and largely unconscious focus on faces might allow TN to improve his performance, even though he’d have no conscious awareness of seeing anything. Whether he was looking at faces, geometric shapes, or ripe peaches ought to have been a moot point, given that TN was, after all, blind. But on this test TN identified the faces as happy or angry correctly almost two times out of three. Though the part of his brain responsible for the conscious sensation of vision had obviously been destroyed, his fusiform face area was receiving the images. It was influencing the conscious choices he made in the forced-choice experiment, but TN didn’t know it.
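How does one decide that "correct almost two times out of three" reflects genuine unconscious perception rather than a lucky streak? The standard check is a binomial calculation: the probability of doing at least that well by blind guessing. A minimal sketch, with a hypothetical trial count, since the number of trials in TN's test isn't given here:

```python
from math import comb

def p_by_guessing(n_trials, n_correct, p_chance=0.5):
    """Probability of getting n_correct or more answers right by pure
    chance: the upper tail of a Binomial(n_trials, p_chance) distribution."""
    return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

# Hypothetical numbers: 200 forced choices, two-thirds of them correct.
print(p_by_guessing(200, 133))  # on the order of 1e-6: hard to call luck
```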
Having heard about the first experiment involving TN, a few months later another group of researchers asked him if he would participate in a different test. Reading faces may be a special human talent, but not falling on your face is even more special. If you suddenly notice that you are about to trip over a sleeping cat, you don’t consciously ponder strategies for stepping out of the way; you just do it.15 That avoidance is governed by your unconscious, and it is the skill the researchers wanted to test in TN. They proposed to watch as he walked, without his cane, down a cluttered hallway.16
The idea excited all those involved except the person not guaranteed to remain vertical. TN refused to participate.17 He may have had some success in the face test, but what blind man would consent to navigating an obstacle course? The researchers implored him, in effect, to just do it. And they kindly offered to have an escort trail him to make sure he didn’t fall. After some prodding, he changed his mind. Then, to the amazement of everyone, including himself, he zigged and zagged his way perfectly down the corridor, sidestepping a garbage can, a stack of paper, and several boxes. He didn’t stumble once, or even collide with any objects. When asked how he’d accomplished this, TN had no explanation and, one presumes, requested the return of his cane.
The phenomenon exhibited by TN—in which individuals with intact eyes have no conscious sensation of seeing but can nevertheless respond in some way to what their eyes register—is called “blindsight.” This important discovery “elicited disbelief and howls of derision” when first reported and has only recently come to be accepted.18 But in a sense it shouldn’t have been surprising: it makes perfect sense that blindsight would result when the conscious visual system is rendered nonfunctional but a person’s eyes and unconscious system remain intact. Blindsight is a strange syndrome—a particularly dramatic illustration of the two tiers of the brain operating independently of each other.
THE FIRST PHYSICAL indication that vision occurs through multiple pathways came from a British Army doctor named George Riddoch in 1917.19 In the late nineteenth century, scientists had begun to study the importance of the occipital lobe in vision by creating lesions in dogs and monkeys. But data on humans was scarce. Then came World War I. Suddenly the Germans were turning British soldiers into promising research subjects at an alarming pace. This was partly because British helmets tended to dance atop the soldiers’ heads, which might have looked fashionable but didn’t cover them very well, especially in the back. Also, the standard in that conflict was trench warfare. As it was practiced, a soldier’s job was to keep all of his body protected by the solid earth except for his head, which he was instructed to stick up into the line of fire. As a result, 25 percent of all penetrating wounds suffered by British soldiers were head wounds, especially of the lower occipital lobe and its neighbor the cerebellum.
The same path of bullet penetration today would turn a huge swath of the brain into sausage meat and almost certainly kill the victim. But in those days bullets were slower and more discrete in their effects. They tended to bore neat tunnels through the gray matter without disturbing the surrounding tissue very much. This left the victims alive and in better condition than you might imagine given that their heads now had the topology of a doughnut. One Japanese doctor who worked under similar conditions in the Russo-Japanese War saw so many patients injured in that manner that he devised a method for mapping the precise internal brain injury—and the deficits expected—based on the relation of the bullet holes to various external landmarks on the skull. (His official job had been to determine the size of the pension owed the brain-damaged soldiers.)20
Dr. Riddoch’s most interesting patient was a Lieutenant Colonel T., who had a bullet sail through his right occipital lobe while he was leading his men into battle. After taking the hit he bravely brushed himself off and proceeded to continue leading his men. When asked how he felt, he reported being dazed but said he was otherwise just fine. He was wrong. Fifteen minutes later, he collapsed. When he woke up it was eleven days later, and he was in a hospital in India.
Although he was now conscious again, one of the first signs that something was amiss came at dinner, when Lieutenant Colonel T. noted that he had a hard time seeing bits of meat residing on the left side of his plate. In humans, the eyes are wired to the brain in such a way that visual information from the left side of your field of vision is transmitted to the right side of your brain, and vice versa, no matter which eye that information comes from. In other words, if you stare straight ahead, everything to your left is transmitted to the right hemisphere of your brain, which is where Lieutenant Colonel T. took the bullet. After he was transferred to a hospital in England, it was established that Lieutenant Colonel T. was totally blind on the left side of his visual field, with one bizarre exception. He could detect motion there. That is, he couldn’t see in the usual sense—the “moving things” had no shape or color—but he did know if something was moving. It was partial information, and of little use. In fact, it annoyed him, especially during train rides, when he would sense that things were moving past on his left but he couldn’t see anything there.
Since Lieutenant Colonel T. was consciously aware of the motion he detected, his wasn’t a case of true blindsight, as TN’s was, but still, the case was groundbreaking for its suggestion that vision is the cumulative effect of information traveling along multiple pathways, both conscious and unconscious. George Riddoch published a paper on Lieutenant Colonel T. and others like him, but unfortunately another British Army doctor, one far better known, derided Riddoch’s work. With that it virtually disappeared from the literature, not to resurface for many decades.
UNTIL RECENTLY, UNCONSCIOUS vision was difficult to investigate because patients with blindsight are exceedingly rare.21 But in 2005, Antonio Rangel’s Caltech colleague Christof Koch and a coworker came up with a powerful new way to explore unconscious vision in healthy subjects. Koch arrived at this discovery about the unconscious because of his interest in its flip side—the meaning of consciousness. If studying the unconscious was, until recently, not a good career move, Koch says that studying consciousness was, at least until the 1990s, “considered a sign of cognitive decline.” Today, however, scientists study the two subjects hand in hand, and one of the advantages of research on the visual system is that it is in some sense simpler than, say, memory or social perception.
The technique Koch’s group discovered exploits a visual phenomenon called binocular rivalry. Under the right circumstances, if one image is presented to your left eye while a different image is presented to your right eye, you won’t see both of them, somehow superimposed. Instead, you’ll perceive just one of the two images. Then, after a while, you’ll see the other image, and then the first again. The two images will alternate in that manner indefinitely. What Koch’s group found, however, was that if they present a changing image to one eye and a static one to the other, people will see only the changing image, and never the static one.22 In other words, if your right eye were exposed to a film of two monkeys playing Ping-Pong and your left to a photo of a hundred-dollar bill, you’d be unaware of the static photo even though your left eye had recorded the data and transmitted it to your brain. The technique provides a powerful tool for creating, in a sense, artificial blindsight—a new way to study unconscious vision without destroying any part of the brain.
Employing the new technique, another group of scientists performed an experiment on normal people analogous to the one the facial expression researchers performed on patient TN.23 They exposed each subject’s right eye to a colorful and rapidly changing mosaic-like image, and each subject’s left eye to a static photograph that pictured an object. That object was positioned near either the right edge of the photograph or the left, and it was their subjects’ task to guess where the object was, even though they did not consciously perceive the static photo. The researchers expected that, as in the case of TN, the subjects’ unconscious cues would be powerful only if the object pictured was of vital interest to the human brain. This led to an obvious category. And so when the researchers performed this experiment, they selected, for one of the static images, pornography—or, in their scientific jargon, a “highly arousing erotic image.” You can get erotica at almost any newsstand, but where do you get scientifically controlled erotica? It turns out that psychologists have a database for that. It is called the International Affective Picture System, a collection of 480 images ranging from sexually explicit material to mutilated bodies to pleasant images of children and wildlife, each categorized according to the level of arousal it produces.
As the researchers expected, when presented with unprovocative static images and asked whether the object was on the left- or the right-hand side of the photo, the subjects’ answers were correct about half the time, which is what you would expect from completely random, uninformed guesses, a rate comparable to TN’s when he was making guesses about circles versus squares. But when heterosexual male subjects were shown an image of a naked woman, they gained a significant ability to discern on which side of the image she was located, as did females who were shown images of naked men. That didn’t happen when men were shown naked men, or when women were shown naked women—with one exception, of course. When the experiment was repeated on homosexual subjects, the results flipped in the manner you might expect. The results mirrored the subjects’ sexual preferences.
Despite their successes, when asked afterward what they had seen, all the subjects described just the tedious progression of rapidly changing mosaic images the researchers had presented to their right eye. The subjects were clueless that while their conscious minds were looking at a series of snoozers, their unconscious minds were feasting on Girls (or Boys) Gone Wild. This means that while the erotic image was never delivered to consciousness, it registered powerfully enough in the unconscious to give the subjects a subliminal awareness of it. We are reminded again of the lesson Peirce learned: We don’t consciously perceive everything that registers in our brain, so our unconscious mind may notice things that our conscious mind doesn’t. When that happens we may get a funny feeling about a business associate or a hunch about a stranger and, like Peirce, not know the source.
I learned long ago that it is often best to follow those hunches. I was twenty, in Israel just after the Yom Kippur War, and went up to visit the Golan Heights, in Israeli-occupied Syria. While hiking along a deserted road I spotted an interesting bird in a farmer’s field, and being a bird-watcher, I resolved to get a closer look. The field was ringed by a fence, which doesn’t normally deter bird-watchers, but this fence had a curious sign on it. I pondered what the sign might say. It was in Hebrew, and my Hebrew wasn’t quite good enough to decipher it. The usual message would have been “No Trespassing,” but somehow this sign seemed different. Should I stay out? Something told me yes, a something I now imagine was very much like the something that told Peirce who had stolen his watch. But my intellect, my conscious deliberative mind, said, Go ahead. Just be quick. And so I climbed the fence and walked into the field, toward the bird. Soon I heard some yelling in Hebrew, and I turned to see a man down the road on a tractor, gesturing at me in a very animated fashion. I returned to the road. It was hard to understand the man’s loud jabbering, but between my broken Hebrew and his hand gestures, I soon figured out the issue. I turned to the sign, and now realized that I did recognize those Hebrew words. The sign said, “Danger, Minefield!” My unconscious had gotten the message, but I had let my conscious mind overrule it.
It used to be difficult for me to trust my instincts when I couldn’t produce a concrete, logical basis for them, but that experience cured me. We are all a bit like patient TN, blind to certain things, being advised by our unconscious to dodge to the left and right. That advice can often save us, if we are willing to open ourselves to the input.
PHILOSOPHERS HAVE FOR centuries debated the nature of “reality,” and whether the world we experience is real or an illusion. But modern neuroscience teaches us that, in a way, all our perceptions must be considered illusions. That’s because we perceive the world only indirectly, by processing and interpreting the raw data of our senses. That’s what our unconscious processing does for us—it creates a model of the world. Or as Kant said, there is Das Ding an sich, a thing as it is, and there is Das Ding für uns, a thing as we know it. For example, when you look around, you have the feeling that you are looking into three-dimensional space. But you don’t directly sense those three dimensions. Instead, your brain reads a flat, two-dimensional array of data from your retinas and creates the sensation of three dimensions. Your unconscious mind is so good at processing images that if you were fitted with glasses that turn the images in your eyes upside down, after a short while you would see things right side up again. If the glasses were then removed, you would see the world upside down again, but just for a while.24 Because of all that processing, when we say, “I see a chair,” what we really mean is that our brain has created a mental model of a chair.
Our unconscious doesn’t just interpret sensory data, it enhances it. It has to, because the data our senses deliver is of rather poor quality and must be fixed up in order to be useful. For example, one flaw in the data your eyes supply comes from the so-called blind spot, a spot on the back of your eyeball where the wire connecting your retina and your brain is attached. This creates a dead region in each eye’s field of vision. Normally you don’t even notice it because your brain fills in the picture based on the data it gets from the surrounding area. But it is possible to design an artificial situation in which the hole becomes visible. For example, close your right eye, look at the number 1 on the right side of the line below, and move the book toward you (or away from you) until the sad face disappears—it will then be in your blind spot. Keeping your head still, now look at the 2, the 3, and so on, still with your left eye. The sad face will reappear, probably around the number 4.
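The geometry of this demonstration can even be estimated. The blind spot lies roughly fifteen degrees to the side of the fixation point (a commonly cited average; individual eyes vary), so a little trigonometry predicts the viewing distance at which a mark printed a given distance from the number you fixate should vanish. A sketch, with an assumed page layout:

```python
from math import tan, radians

BLIND_SPOT_ANGLE_DEG = 15.0  # rough average angle from fixation;
                             # individual anatomy varies

def vanishing_distance_cm(separation_cm, angle_deg=BLIND_SPOT_ANGLE_DEG):
    """Viewing distance at which a mark printed separation_cm from the
    fixated point falls onto the retinal blind spot."""
    return separation_cm / tan(radians(angle_deg))

# Assume the sad face sits 7.5 cm from the fixated "1":
print(f"about {vanishing_distance_cm(7.5):.0f} cm")  # roughly 28 cm
```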
To help compensate for their imperfections, your eyes change position a tiny bit several times each second. These jiggling motions are called microsaccades, to distinguish them from ordinary saccades, the larger, more rapid jumps your eyes ceaselessly make when you study a scene. Saccades are among the fastest movements executed by the human body, so rapid that the motion itself cannot be tracked without special instruments. For example, as you read this text your eye is making a series of saccades along the line. And if I were talking to you, your gaze would bounce around my face, mostly near my eyes. All told, the six muscles controlling your eyeball move it some 100,000 times each day, about as many times as your heart beats.
If your eyes were a simple video camera, all that motion would make the video unwatchable. But your brain compensates by editing out the period during which your eye is in transit and filling in your perception in a way that you don’t notice. You can illustrate that edit quite dramatically, but you’ll need to enlist as your partner a good friend, or perhaps an acquaintance who has had a few glasses of wine. Here is what you do: Stand facing your partner, with about four inches separating your noses, then ask your partner to fixate midway between your eyes. Next, have your partner look toward your left ear and back. Repeat this a couple of times. Meanwhile, your job is to observe your partner’s eyes and note that you have no difficulty seeing them move back and forth. The question is, If you could stand nose to nose with yourself and repeat the procedure, would you see your own eyes move? If it is true that your brain edits out visual information received during eye movements, you would not. How can you perform this test? Stand facing a mirror, with your nose two inches from the mirror’s surface (this corresponds to four inches from a real person). Look first right between your eyes, then at your left ear, then back. Repeat a couple of times. Miraculously, you get the two views but never see your eye move between them.
Another gap in the raw data delivered by your eyes has to do with your peripheral vision, which is quite poor. In fact, if you hold your arm out and gaze at your thumbnail, the only part of your field of vision with good resolution will be the area within, and perhaps just bordering, your nail. Even if you have twenty-twenty vision, your visual acuity outside that central region will be roughly comparable to that experienced by a person who needs thick glasses and doesn’t have them. You can get a taste of that if you look at this page from a distance of a couple of feet and stare at the central asterisk in the first line below (try not to cheat—it isn’t easy!). The F’s in that line are a thumbnail apart. You’ll probably be able to recognize the A and F just fine, but few of the other letters. Now go down to the second line. Here, the increasing size of the letters gives you some help. But if you’re like me, you won’t be able to read all the letters clearly unless they are as large as they appear in the third line. The size of the magnification required for you to be able to see the letters at the periphery is an indication of the poor quality of your peripheral vision.
The blind spot, saccades, poor peripheral vision—all these issues should cause you severe problems. When you look at your boss, for example, the true retinal image would show a fuzzy, quivering person with a black hole in the middle of his or her face. However emotionally appropriate that may seem, it is not an image you’ll ever perceive, because your brain automatically processes the data, combining the input from both eyes, removing the effects of the jiggling, and filling in gaps on the assumption that the visual properties of neighboring locations are similar. The images below illustrate some of the processing your brain does for you. On the left is the scene as recorded by a camera. On the right is the same image as it would appear if recorded by a human retina with no additional processing. Fortunately for you, that processing gets done in the unconscious, making the images you see as polished and refined as those picked up by the camera.
Our hearing works in an analogous manner. For example, we unconsciously fill in gaps in auditory data. To demonstrate this, in one study experimenters recorded the sentence “The state governors met with their respective legislatures convening in the capital city,” then erased the 120-millisecond portion of the sentence containing the first “s” sound in “legislatures” and replaced it with a cough. They told twenty experimental subjects that they would hear a recording containing a cough and would be given printed text so they could circle the exact position in the text at which the cough occurred. The subjects were also asked if the cough had masked any of the circled sounds. All of the volunteers reported hearing the cough, but nineteen of the twenty said that there was no missing text. The only subject who reported that the cough had obscured any phonemes named the wrong one.25 What’s more, in follow-up work the researchers found that even practiced listeners couldn’t identify the missing sound. Not only could they not pinpoint the exact location of the cough—they couldn’t even come close. The cough didn’t seem to occur at any clear point within the sentence; rather, it seemed to coexist with the speech sounds without affecting their intelligibility.
[Figure: Original image, made by a camera (left); the same image as seen by a retina, right eye, fixation at the X (right). Courtesy of Laurent Itti.]
Even when the entire syllable “gis” in “legislatures” was obliterated by the cough, subjects could not identify the missing sound.26 The effect is called phonemic restoration, and it’s conceptually analogous to the filling in that your brain does when it papers over your retinal blind spot, and enhances the low resolution in your peripheral vision—or fills holes in your knowledge of someone’s character by employing clues based on their appearance, their ethnic group, or the fact that they remind you of your uncle Jerry. (About that, more later.)
Phonemic restoration has a striking property: because it is based on the context in which you hear words, what you think you heard at the beginning of a sentence can be affected by the words that come at the end. For example, letting an asterisk denote the cough, listeners in another famous study reported hearing the word “wheel” in the sentence “It was found that the *eel was on the axle.” But they heard “heel” when they listened to the sentence “It was found that the *eel was on the shoe.” Similarly, when the final word in the sentence was “orange” they heard “peel,” and when it was “table,” they heard “meal.”27 In each case the data provided to each subject’s brain included the same sound, “*eel.” Each brain patiently held the information, awaiting more clues as to the context. Then, after hearing the word “axle,” “shoe,” “orange,” or “table,” the brain filled in the appropriate consonant. Only at that time did it pass to the subject’s conscious mind, leaving the subject unaware of the alteration and quite confident of having accurately heard the word that the cough had partially obscured.
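A toy illustration of the logic (not, of course, a model of the brain): the ambiguous sound is resolved only after the disambiguating word arrives, reproducing the input-output behavior of the experiment just described:

```python
# Toy sketch of phonemic restoration: the ambiguous sound "*eel" is
# held until the final context word arrives, then filled in.
CONTEXT_FILLS = {"axle": "wheel", "shoe": "heel",
                 "orange": "peel", "table": "meal"}

def restore(ambiguous_sound, context_word):
    # Fall back to the raw, unrestored sound if the context is unfamiliar.
    return CONTEXT_FILLS.get(context_word, ambiguous_sound)

for context in ("axle", "shoe", "orange", "table"):
    print(f"It was found that the {restore('*eel', context)} "
          f"was on the {context}.")
```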
———
IN PHYSICS, SCIENTISTS invent models, or theories, to describe and predict the data we observe about the universe. Newton’s theory of gravity is one example; Einstein’s theory of gravity is another. Those theories, though they describe the same phenomenon, constitute very different versions of reality. Newton, for example, imagined that masses affect each other by exerting a force, while in Einstein’s theory the effects occur through a bending of space and time and there is no concept of gravity as a force. Either theory could be employed to describe, with great accuracy, the falling of an apple, but Newton’s would be much easier to use. On the other hand, for the calculations necessary for the satellite-based global positioning system (GPS) that helps you navigate while driving, Newton’s theory would give the wrong answer, and so Einstein’s must be used. Today we know that actually both theories are wrong, in the sense that both are only approximations of what really happens in nature. But they are also both correct, in that they each provide a very accurate and useful description of nature in the realms in which they do apply.
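The GPS claim can be made concrete with standard back-of-the-envelope figures: satellite clocks run fast by roughly 38 microseconds per day once the general relativistic speedup (weaker gravity in orbit) and the special relativistic slowdown (orbital speed) are combined, and GPS converts clock readings into distances at the speed of light. A sketch of the consequence:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458
CLOCK_DRIFT_US_PER_DAY = 38.0  # ~45 us/day gained from weaker gravity,
                               # minus ~7 us/day lost to orbital speed;
                               # standard figures for GPS altitude

drift_s = CLOCK_DRIFT_US_PER_DAY * 1e-6
error_km = SPEED_OF_LIGHT_KM_S * drift_s
print(f"uncorrected ranging error grows by ~{error_km:.0f} km per day")
# -> about 11 km per day: useless for navigation without Einstein
```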
As I said, in a way, every human mind is a scientist, creating a model of the world around us, the everyday world that our brains detect through our senses. Like our theories of gravity, our model of the sensory world is only approximate and is based on concepts invented by our minds. And like our theories of gravity, though our mental models of our surroundings are not perfect, they usually work quite well.
The world we perceive is an artificially constructed environment whose character and properties are as much a result of unconscious mental processing as they are a product of real data. Nature helps us overcome gaps in information by supplying a brain that smooths over the imperfections, at an unconscious level, before we are even aware of any perception. Our brains do all of this without conscious effort, as we sit in a high chair enjoying a jar of strained peas or, later in life, on a couch, sipping a beer. We accept the visions concocted by our unconscious minds without question, and without realizing that they are only an interpretation, one constructed to maximize our overall chances of survival, but not one that is in all cases the most accurate picture possible.
That brings up a question to which we will return again and again, in contexts ranging from vision to memory to the way we judge the people we meet: If a central function of the unconscious is to fill in the blanks when there is incomplete information in order to construct a useful picture of reality, how much of that picture is accurate? For example, suppose you meet someone new. You have a quick conversation, and on the basis of that person’s looks, manner of dress, ethnicity, accent, gestures—and perhaps some wishful thinking on your part—you form an assessment. But how confident can you be that your picture is a true one?
In this chapter I focused on the realm of visual and auditory perception to illustrate the brain’s two-tier system of data processing and the ways in which it supplies information that does not come directly from the raw data in front of it. But sensory perception is just one of many arenas of mental processing in which portions of the brain that operate at the unconscious level perform tricks to fill in missing data. Memory is another, for the unconscious mind is actively involved in shaping your memory. As we are about to see, the unconscious tricks that our brains employ to create memories of events—feats of imagination, really—are as drastic as the alterations they make to the raw data received by our eyes and ears. And the way the tricks conjured up by our imaginations supplement the rudiments of memory can have far-reaching—and not always positive—effects.