Our Senses: An Immersive Experience - Rob DeSalle 2018
No Limits: The Limits to What We Can Sense and the Future of Our Senses
He appears on the TED stage, and without knowing who he is, my first reaction to his attire is, “What was he thinking when he left the house?” His shirt is bright blue, his jacket is pink, his pants are brilliant yellow, and on his feet are black-and-white saddle shoes. I can only guess what color his socks are. His name is Neil Harbisson, an artist and one of the most famous monochromats on the planet. Remember from Chapter 9 that a monochromat can see the world only in shades of black and white. So Harbisson sees the world, as he likes to point out, as if he were watching TV in the 1950s—that is, in tones of black and white and shades of gray. What is remarkable about Harbisson is that he calls himself a cyborg. He wears a cameralike device on his forehead that is connected to the back of his brain and makes sounds when it is pointed at something that has color. A high-pitched squeak is emitted when the camera focuses on a dirty yellow sock, and a lower, mellower sound emerges when Harbisson holds a red handkerchief in front of the camera. He has learned to use the sound vibrations from the device to observe the colors of objects. The cyborg in him has brought color to his otherwise black-and-white visual life.
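To get a feel for the kind of translation a device like this performs, here is a minimal sketch, in Python, of one way a color-to-sound converter could work. It is purely illustrative: the hue-to-frequency mapping, the frequency band, and the function name are assumptions made for the example, not the workings of Harbisson’s actual device.

```python
import colorsys

# Toy sonification: map a color's hue to an audible pitch.
# The frequency band below is an arbitrary choice for illustration,
# not the mapping used by Harbisson's actual device.
LOW_HZ, HIGH_HZ = 200.0, 1600.0

def hue_to_frequency(r, g, b):
    """Convert an RGB color (0-255 per channel) to a tone frequency in hertz."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # Spread the hue circle (0.0-1.0) linearly across the chosen band.
    return LOW_HZ + hue * (HIGH_HZ - LOW_HZ)

# A red handkerchief sits near hue 0 and gets a low, mellow tone;
# a yellow sock sits closer to hue 1/6 and gets a higher-pitched one.
print(f"red    -> {hue_to_frequency(200, 30, 30):6.1f} Hz")
print(f"yellow -> {hue_to_frequency(220, 200, 40):6.1f} Hz")
```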
As we have seen throughout this book, our senses do have physical limits, imposed by the structures that collect and interpret information from the outer world. But our species has not been confined by these constraints. Our capacity to see is limited because the human retina can collect light in only a narrow band of electromagnetic wavelengths. The wavelengths that exist in nature range from about 100,000 kilometers (100,000,000 m) down to 1.0 picometer (0.000000000001 m), a span of some twenty orders of magnitude. Our visible range covers only a few hundred nanometers. In other words, humans’ visible biological range is a paltry few parts in a quadrillion of the entire spectrum. Yet we know that light at these other wavelengths exists, and we have even developed aids that help us visualize the light, or the products of light, at these far-ranging wavelengths.
For sound, we can hear in the range of 20 to 20,000 hertz, a span of three orders of magnitude. All sounds outside this range are undetectable to most humans. Yet again we know these other sounds exist, and we have made instruments to detect them even though our own neurological machinery can’t. As we saw with odors, we have a relatively large number of smell receptor genes, which translate into a capacity to smell a fairly large range of odors. One estimate mentioned earlier in this book puts the number at more than 10¹² (more than a trillion). That is a huge number of odors, but still only a small fraction of the kinds of odorant molecules that exist on our planet. Yet once again we can characterize odors that are odorless to our neurobiology. Taste and olfaction are similar in their neurobiological mechanism in that both are chemosensory, yet taste has far fewer receptors and only five real categories of what the gustatory system can detect. Once again, we know that the range of molecules out there that could interact with our taste buds is much greater than what we actually taste. We simply don’t have receptors for these other molecules, but we know they exist, and we have developed methods to characterize them. With fertile minds such as Charles Spence’s (see Chapter 15), it is possible that we will find a way to characterize the more than a trillion odors we might be able to sense.
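For readers who like to see the arithmetic, here is a short sketch, in Python, that works out the spans just described: the electromagnetic range quoted above versus the sliver our eyes sample, and the span of human hearing. The endpoints of the electromagnetic range are the ones given in the text; the 380-750 nanometer visible band is a standard approximation rather than a figure from this book.

```python
import math

# Electromagnetic spectrum span quoted in the text: ~100,000 km down to 1 picometer.
longest_m = 1.0e8     # 100,000 kilometers, in meters
shortest_m = 1.0e-12  # 1 picometer, in meters
print("EM span:", round(math.log10(longest_m / shortest_m)), "orders of magnitude")

# Visible light is a few hundred nanometers wide (roughly 380-750 nm).
visible_width_m = 750e-9 - 380e-9
fraction = visible_width_m / longest_m
print(f"Visible sliver: about {fraction:.0e} of that span")  # a few parts in a quadrillion

# Human hearing: 20 Hz to 20,000 Hz.
print("Hearing span:", round(math.log10(20_000 / 20)), "orders of magnitude")
```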
Part of the limited range of our senses is the result of how finely tuned our sensory biology is to the tasks it evolved for. But of equal importance to our modern human existence is that when our scientific, cultural, or social needs have exceeded the range of our biological senses, we have figured out ways not to be limited by our biology. X-rays, sonar, magnetic resonance imaging, and microscopy are only a few of the hundreds of innovations humans have made that allow us to expand beyond our evolved biology every day.
My favorite example is DNA sequencing. Up until about sixty years ago, our species hadn’t even identified DNA as the hereditary material and had no idea what it was made of or how it was configured. Through the first half of the twentieth century, researchers in chemistry and physics were discovering spectacular information about the structure of “invisible” compounds such as proteins and carbohydrates (the constituents of living organisms), which was itself a stunning extension of our visual capacity. This knowledge of chemistry advanced because humans were reaching outside the range of their senses. X-rays were being used to work out the three-dimensional structure of molecules, and indeed one of the big steps in deciphering the physical structure of DNA was the use of X-ray diffraction to examine crystallized preparations of the molecule. In 1953, James Watson, Francis Crick, Maurice Wilkins, and Rosalind Franklin were able to pin down the structure of DNA as a double helix about 20 angstroms in diameter. That diameter is several orders of magnitude outside the range of what we can normally see with our eyes. But the proposed structure was important because, as Watson and Crick so wryly pointed out in their 1953 Nature paper, it did not escape their notice “that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” Seeing the double helix and making the leap to how it works as the genetic material required that our species learn to see things in the X-ray range of electromagnetic radiation.

With this information in hand, the next steps were to figure out how DNA did the trick for heredity. I have shown you in several parts of this book what DNA sequences look like and why they are important. But how do scientists really “see” the nucleotides (the Gs, As, Ts, and Cs) that make up those sequences? A nucleotide is far too small (smaller even than the 20-angstrom diameter of the double helix) to be seen, even through the most powerful electron microscopes scientists use. Reading the recipe of life, as our genomes have been called, is a great example of overcoming sensory noise and our limited range of sensing to interpret a set of devised sensory cues that would make no sense whatsoever to any other species on the planet. For that matter, the way we see DNA is so esoteric that perhaps only a few million people on the face of the Earth can take the sensory information that has been corralled into a genome sequence and make sense of it. Basically, scientists have used the chemical nature of nucleotides and DNA to amplify their signals into a visual string of letters that we interpret as the DNA sequence. Instead of light waves producing the sensory input, chemical reactions are used, and these are then rendered as visual output on a computer screen that our eyes can see. It sounds like magic, but it really involves some basic yet ingenious inventions to read these small molecules far outside our visual range.

It is not just the ability to see small objects that we have developed. Most of modern astronomy and astrophysics takes data that have very little to do with our evolved visual capacity and converts them into images that we can see and interpret.
Although an optical telescope merely enhances the ability of the retina to absorb light waves from objects in the night sky, a radio telescope collects radio waves whose wavelengths are many orders of magnitude longer than those our visual system interprets as light, and it converts those data into stunningly informative images of planets and stars that are light-years away from Earth.
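The trick is the same in both cases, whether the raw material is the chemistry coming off a sequencing machine or the radio waves collected by a dish: measurements our senses cannot use directly are remapped onto brightness and color that our eyes can take in at a glance. The sketch below, in Python, is a deliberately simplified illustration of that remapping, with invented numbers; it is not anything a real observatory or sequencing facility would run. It turns a grid of simulated intensity measurements into a false-color picture.

```python
import numpy as np
import matplotlib.pyplot as plt

# Pretend these are intensity measurements at wavelengths (or chemistries)
# far outside anything our senses can register: a faint "source" sitting
# on top of random instrument noise. The numbers are invented for illustration.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
source = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 12.0 ** 2))
measurements = source + 0.1 * rng.standard_normal((128, 128))

# The "translation" step: map the invisible measurements onto colors and
# brightness that human eyes can interpret at a glance.
plt.imshow(measurements, cmap="inferno", origin="lower")
plt.colorbar(label="measured intensity (arbitrary units)")
plt.title("Invisible signal rendered as a visible image")
plt.savefig("false_color_source.png", dpi=150)
```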
As a graduate student in Saint Louis in the late 1970s, I well remember needing to compute a solution to a data set. I had only about 100 data points but needed to evaluate the possible solutions for exactly 10,395 permutations. At that time, the solution to such a problem might have been accomplished with a pencil and pad of paper, much like in the Oscar-nominated movie Hidden Figures: a group of mathematically inclined humans would take the data and undertake the calculations for all 10,000 or so permutations. My thesis project didn’t involve national security, nor was it relevant to NASA, so I did not have the luxury of an army of dedicated pencil pushers to compute the solution. I turned instead to one of the newer ways of computing it: a computer. This was in the early days, when programs and data were punched onto cards and read into a mammoth computer to start the computation. Then came the long wait for the green-and-white printout of your job. Clunky, to say the least. My first graduate student wrote his thesis in the late 1980s on an Apple Macintosh and used it to do most of the same kinds of calculations I had done for my own thesis. His graduate students used iMacs to do their work, and their students in turn used iBooks, all over a period of about fifteen years. Now our current generation of graduate students use MacBook Pros that are tens of thousands of times more powerful than those iMacs and iBooks, and these modern machines can connect to clusters of processors that give them computing power billions of times greater than what my first student could use. This example simply shows that computing in science has expanded over time in a pattern known as Moore’s law.
In the 1960s Gordon Moore astutely observed that the number of components that could be packed onto a chip, and with it computing power, was doubling roughly every year or two. Computing among the consumer population has increased, too, and in essence has become more personalized than most people in the 1960s would ever have dreamed, except for those who pursued the development of the personal computer, such as Steve Jobs and Bill Gates. This personalization has changed the way we live from day to day, but it has also changed the way we humans sense the outer world. And given that Moore’s law appears to be a real phenomenon, we should attempt to anticipate the rise in power of computational approaches and perhaps even anticipate some of the novel changes our senses will be exposed to as a result of the surge in computing.
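To see what that doubling implies, here is the arithmetic in a short sketch; the two-year doubling time is the commonly cited modern form of the law and an assumption of the example, not a figure taken from this chapter.

```python
# Moore's-law-style growth: capability doubles every `doubling_years` years.
# The two-year doubling time and the time spans below are illustrative assumptions.
def growth_factor(years, doubling_years=2.0):
    """How many times more capable a machine becomes after `years` years."""
    return 2 ** (years / doubling_years)

# Fifteen years of doubling (the roughly fifteen-year span mentioned above)
# already gives a ~180-fold jump.
print(f"after 15 years: ~{growth_factor(15):,.0f}x")
# Four decades of doubling gives a factor of about a million.
print(f"after 40 years: ~{growth_factor(40):,.0f}x")
```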
The average adult in the Western world faces a computer or smartphone screen for about ten hours a day, according to a 2016 Nielsen survey. Given that we sleep about seven to eight hours a day, this means that in many cultures more than half the waking day is spent staring at a screen, viewing virtual images the whole time. We are only beginning to understand the impact of this changed sensory realm on the human condition. In a direct comparison of reading comprehension among tenth graders, researchers in Norway assessed the difference between reading on a computer screen and reading old-fashioned hard copy. The surprising result was that the students comprehended the written word much better on paper than on screen. Why this might be so is not well understood, but it does point to a possible dichotomy in the way we learn and comprehend what we read. Reading comprehension is a downstream effect of vision, and some researchers are concerned about the longer-term, more upstream impact that computer and smartphone screens might have on the human visual system itself. Humans did not evolve to peer endlessly at a small, light-emitting rectangle. In fact, our field of vision is much greater than that small smartphone screen you scan every day for hours. How this restriction of the visual field is affecting our eyes, and their potential evolution, is a subject that needs attention. Vision is not the only sense under assault by modern life. As noted in Chapter 11, modern humans experience sounds, sound levels, and ranges of sounds that our ancestors never had to contend with. How we adapt to this changed auditory world is also a subject ripe for study.
There is one computer-enhanced area of modern life that researchers have spent some time examining: gaming. Young people today spend inordinate amounts of time playing computer games. Fatima Jonsson and Harko Verhagen have taken a multisensory look at the impact of gaming events on participants. Their conclusion is that even though the games themselves are intensely visual and auditory, the whole repertoire of the senses is involved in the gaming experience, all the way down to taste and smell. In fact, the auditory side of video gaming doesn’t emanate only from the computer; it also includes the sounds around the screen, such as other players cheering and screaming as play unfolds. And it would not be surprising to note that the olfactory and gustatory impact of gaming is strongly influenced by fast food and soda. These upstream, or basic, sensory effects are fairly easy to characterize, but researchers have also tried to examine the downstream neurological effects of sensory stimulation via video gaming. Psychologist Angelica Ortiz de Gortari has carved out a niche in this regard, studying what she calls game transfer phenomena that result from intense game playing. For some gamers, the sensory experiences are so intense (and their mental states susceptible enough) that they experience pseudohallucinations as a result of the gaming. They also incur visual aftereffects that can cause them to misperceive the real world around them. It gets worse with more prolonged gaming, and it affects not only visual and auditory perception but tactile and perhaps olfactory sensation, too.
Virtual reality (VR) has also become a modern-day reality. The 2016 holiday season doubled the share of British households owning a VR headset, to 20 percent, and other Western countries are following close behind. Vision and hearing are not the only senses that VR targets. Entrepreneurs and engineers such as Adrian David Cheok have suggested that all five of the Aristotelian senses can be incorporated into VR apparatuses. But what will the effect of VR be on our senses and our sensory perception of the world? It turns out that we might be preadapted to a VR world. Andrea Stevenson Won and her colleagues suggest this intriguing possibility, which rests on a phenomenon called homuncular flexibility. The disembodiment that VR induces can be disorienting, with both physiological and psychological effects, but homuncular flexibility can overcome these problems and enhance the VR experience. The flexibility idea is based on early experiments on phantom limbs. People who have lost limbs often feel extreme pain in the area where the lost limb used to be. Neuroscientist V. S. Ramachandran, whom we visited earlier in this book in our discussion of Capgras syndrome (Chapter 12) and the neurobiology of art (Chapter 19), asked people with a missing limb to place their arms into a box with a mirror positioned down the middle, so that the reflection of the intact limb appeared where the missing limb would be. When the person moved the intact limb, he or she saw the illusion of two normally moving limbs. After the person experienced this illusion, the phantom limb pain was often reduced or even eliminated.
Another example of homuncular flexibility is the rubber arm illusion (fig. 20.1). In this illusion, the person seated on the left places one arm below the desk, where it is not visible, and a disembodied rubber arm is placed on the desk. The person on the right then strokes the fingertips of the hidden hand and of the rubber hand simultaneously. The person on the left soon establishes a sense of ownership of the rubber arm. Sense of ownership means that if the person on the right brandishes a hammer and threatens to hit the rubber arm, the person on the left will flinch.
Both the phantom limb and rubber arm illusions demonstrate that people can be led to reconfigure their body image by visually tricking their brains. In other words, our brains are flexible enough to reconfigure our sensory homunculus (discussed in detail in Chapter 3). Virtual reality is like making your entire body a “phantom limb,” or like conditioning your avatar to be like a rubber arm. Our brains are fully capable of this.
Virtual reality, smartphones, the technology of modern cosmology, DNA sequencing, and the other modern extensions of our perception have huge impacts on how we view our world, and they raise questions about how the human mind will deal with this brave new world. The problem of the human mind is a huge one. Thousands of books, millions if not billions of words, and much human thought have gone into trying to understand it. The approach in this book has been to address what neuroscientists call the easy problems of the mind, or of consciousness. These easy problems include localizing where in the brain our perceptions of the outside world arise and working out how those perceptions operate. In some ways we are in a bit of a tough place when discussing these topics. We know, as Francis Crick so eloquently wrote, that “a vast assembly of nerve cells and their associated molecules” is responsible for the mind and the emergence of consciousness, and the easy problems deal with precisely those physical aspects of perception produced by nerve cells and molecules. But the holy grail lies in the realm of what neuroscientists call the hard problem of consciousness. This problem is indeed hard, because any answer to it must link an emergent property of our neurobiology (mind) with physical, molecular, and chemical information. We are caught between a rock and a hard place: we need the easy problems solved to shore up any ideas we have about the hard problem, yet the answers to the easy problems don’t get us all the way there. Here I have tried to take an evolutionary approach to understanding the senses and the easy problems of the mind.
Figure 20.1 The rubber arm illusion. The person on the left can be tricked into thinking that the rubber limb is his or hers.
This approach can be very illuminating. It is straightforward to reconstruct some of the evolutionary history of sensory processing in our species and our close relatives. An example can be seen in a book my colleague Ian Tattersall and I wrote in 2012. Using Antonio Damasio’s ideas about emotions, we reconstructed the evolutionary history of emotion in animals. This reconstruction suggests, not surprisingly, that our species is unique in how we deal with the outside world emotionally. Likewise, an evolutionary view of our senses indicates that we are in many ways unique in how our sensory information is processed. It is also evident from Ian Tattersall’s writing on human consciousness that language and all of the very human things we do around language are essential developments in the emergence of the mind in our species. Although we may still have a long way to travel to unlock the hard problem, many of the easy problems are unraveling before us as a result of modern neurobiology. It is therefore critical that we keep in mind the no-limits aspect of our sensory development and change as a species. It may be the key to understanding the hard problem of the mind.