Perception

Cognitive Psychology: Theory, Process, and Methodology - Dawn M. McBride, J. Cooper Cutting 2019


Perception

Questions to Consider

· What is perception?

· What is the purpose of perception?

· How do our sensory systems affect our perception of the world?

· Do we control our perceptions or can we perceive automatically?

· Why do we sometimes perceive things incorrectly?

· What does it mean for something to be “more than the sum of its parts”?

· How does perception aid in action?

Introduction: Perception in Everyday Tasks

Have you ever walked across your campus at a busy time, say when classes have just gotten out and students are pouring out of the buildings and trying to get to their next class or the food court to eat lunch? Think about what is involved when you work your way through a busy crowd on your way somewhere. You have to look at what is around you to avoid running into other people or objects in your path. You have to listen to what is around you to avoid objects that you may not be able to see (e.g., moving out of the path of a grounds truck driving across the quad behind you). You have to judge distances between people to make sure you can fit between them if you are moving more quickly than they are or if they are moving toward you. You have to identify landmarks to make sure you are taking the correct path to your destination. In addition to these perceptions that are relevant to your task, you are perceiving many other things that are irrelevant to your task: the conversation of the people walking behind you, the smell of the guy who just walked by who has not showered in a while, how cold the temperature feels on your skin, the taste of the candy bar you are eating as you walk.

In this scenario, you may recognize that your five senses are clearly involved in bringing in information from the world around you but that there is more going on in your cognition than just receiving sensory input from the world. You are interpreting the information, deciding what is relevant and irrelevant to your task and relying on your other cognitive abilities to aid your perception and complete your task. For example, you are using memory to remember your path, language abilities to distinguish language from other sounds and to understand the conversations around you, and problem-solving abilities to determine where you can fit through the crowd. In this chapter, we discuss the aspects of perception present in this scenario: how they work together to help you interpret the information around you and how perception is tied to your action goals in moving around in the world. This chapter focuses on visual perception, as this is the sense the majority of research has focused on and the visual nature of this text allows for easier illustration of visual examples, but other types of perception (e.g., auditory, gustatory) are also described.

Sensory Systems: How Sensations Become Perceptions

As you might guess from reading through the previous scenario, perception begins with the sensations we bring in from the outside world. Our sense organs—ears, eyes, nose, tongue, and skin—all begin the process of perception for us, sometimes unintentionally. Often, we are simply sensing the world without intending to hear, see, or feel, but our sense organs work automatically to bring in the sensations from our environment. For example, do you sometimes work or study with background music on? The music continues to play with the sound waves continuously hitting your ears, but you do not always “hear” it if you are not paying attention to it or thinking about it. If you stop reading for a moment and listen or look around you, you will likely see and hear things that you did not notice were there until you paid attention to them (we discuss the role of attention in cognition more in Chapter 4). Yet those stimuli are being sensed by your sense organs, even if you are not currently perceiving them.

The sense organs make up the first part of our sensory systems. A sensory system processes the sensations coming into each sense organ, allowing us to understand and interpret the sensations we receive. If a hot stimulus comes near our skin, we can very quickly perceive that sensation as “too hot” and move away from the heat source before we are burned. Within each sense organ, receptor cells receive the environmental stimuli: sound waves, light waves, pressure on the skin, or chemicals in food or the air. The receptor cells do the job of turning the environmental stimuli into neural signals the brain can receive and interpret. The receptor cells then send this information to the appropriate area of the brain through nerves that connect to the neurons in different brain areas.

Sensory system: a system that receives and processes input from stimuli in the environment

Figure 3.1 illustrates the four parts of a sensory system for the visual sense system: (1) sense organ—the eye, (2) receptor cells—the rods and cones in the retina, (3) nerve conduit to the brain—optic nerve, and (4) brain area where the information is being processed—primary visual cortex (also called V1) in the occipital lobe of the brain (with extensions to other areas to connect with other cognitive processes). Figure 3.2 shows the sensory system for auditory stimuli: (1) the ear, (2) hair cells in the ear, (3) the auditory nerve, and (4) the primary auditory cortex (also called A1) receiving area in the temporal lobe. This sensory system structure is followed in the other sense systems as well with the nose, tongue, and skin as the sense organs. Each of these sense organs contains receptor cells of different sorts that convert the stimulus energy (e.g., air waves and pressure, chemicals in the air and food, temperature and pressure from stimuli) received by the sense organ into neural signals to be sent to the brain for processing. As described in Chapter 2, different brain areas are specialized for different functions. Thus, tactile sensory information is processed in the parietal cortex; gustatory sensory information is processed in the insular cortex at the junction of the frontal, temporal, and parietal lobes; and olfactory sensory information is processed in the olfactory bulb near the temporal lobe and then sent to several connected areas of the brain.

Primary visual cortex (V1): the receiving area of visual information in the cortex of the brain

Primary auditory cortex (A1): the receiving area of auditory information in the cortex of the brain

Figure 3.1 Diagram of the Visual Sensory System Showing the Four Parts of the System

Image

Sources: photo of dog: Janie Airey/Digital Vision/Thinkstock; photo of eye: Christopher Robbins/Photodisc/Thinkstock; photo of brain: Hemera Technologies/PhotoObjects.net/Thinkstock.

Figure 3.2 Diagram of the Auditory Sensory System Showing the Four Parts of the System

Image

Sources: left: Dogboxstudio/Shutterstock; right: Schankz/Shutterstock.

The primary job of the sensory system then is to receive stimulus energy from the environmental stimulus and to recode that stimulus, called the distal stimulus, into something the brain can interpret and process. This is just the start of our perceptual process, however. Once the distal stimulus has been represented in our minds, it becomes a proximal stimulus. This representation process is proposed to occur for all types of sensory information. The brain then processes the proximal stimuli in an attempt to interpret and act on the distal stimuli you are encountering in the world. The rest of this chapter focuses on the cognitive process of perception.

Stop and Think

· 3.1. Describe the four parts of a sensory system.

· 3.2. What is the role of receptor cells in perception?

· 3.3. What are the advantages to having a perceptual system that has automatic input of all environmental stimuli but only consciously processes a small portion of those stimuli?

· 3.4. Can you think of a situation where your perception of your environment did not match the reality of the environment? Why do you think that error occurred?

Approaches to the Study of Perception

Given the different roles of perception in our lives, researchers have approached the study of perception in different ways to better understand how perception operates in each of these roles. Each approach considers a different way that stimuli are processed in the brain. In the computational approach, researchers consider how different cues in the stimuli can be used to interpret the environment. In the Gestalt approach, researchers have considered how organizational principles of the world allow us to interpret the stimuli in our environment. In the perception-action approach, researchers consider the goals of action achieved through more direct perception. Each of these approaches has aided in our understanding of the processes of perception and how they work together to interpret the world around us.

Computational Approaches

Psychologists first used a computational approach to study our perceptions. In fact, some of the first psychologists studied perception through this approach in the field of psychophysics, where the goal was to discover the scope and limits of our perceptual abilities. This approach to perception considers how we use features of objects and scenes to interpret and understand them. The features or cues in the environment help us turn the distal stimulus into the proximal stimulus in our minds. One process that aids in creating a proximal stimulus is bottom-up processing. In bottom-up processing, perception starts with the most basic units or features of a stimulus, and the parts are added together to understand and identify a coherent whole object. For example, consider how bottom-up processing might allow us to identify the words on this page. Figure 3.3 illustrates how bottom-up processing works to perceive the word safe in a feature detection model, where information is passed up through a hierarchical system that identifies more complex forms of written language at each level. The lines and curves of the individual letters are combined to identify each letter, and then the letters are combined to identify words. This bottom-up process can work for spoken language as well: phonemes of the language can be detected and activate words that contain those sounds (see Chapter 9 for more discussion of bottom-up processing in language).
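The hierarchy in a feature detection model can be sketched in a few lines of code. This is only a toy illustration of the idea in Figure 3.3, not the book's actual model: the feature names and the small lexicon below are invented for the example. Features activate the best-matching letter, and the letters are then combined and matched against known words.

```python
# Level 1: each letter is defined by a set of basic visual features
# (these feature labels are illustrative assumptions, not a real inventory).
LETTER_FEATURES = {
    "S": {"curve_top", "curve_bottom"},
    "A": {"diagonal_left", "diagonal_right", "horizontal_bar"},
    "F": {"vertical_bar", "horizontal_top", "horizontal_mid"},
    "E": {"vertical_bar", "horizontal_top", "horizontal_mid", "horizontal_bottom"},
}

# Level 2: a tiny hypothetical lexicon of known words.
LEXICON = {"SAFE", "FATE", "FEAST"}

def detect_letter(observed_features):
    """Return the letter whose feature set best matches the observed features."""
    return max(LETTER_FEATURES,
               key=lambda letter: len(LETTER_FEATURES[letter] & observed_features))

def detect_word(feature_sets):
    """Combine letter identifications bottom-up into a word identification."""
    letters = "".join(detect_letter(fs) for fs in feature_sets)
    return letters if letters in LEXICON else None

# Feature bundles for the four letter positions of the stimulus "safe".
stimulus = [
    {"curve_top", "curve_bottom"},                                              # S
    {"diagonal_left", "diagonal_right", "horizontal_bar"},                      # A
    {"vertical_bar", "horizontal_top", "horizontal_mid"},                       # F
    {"vertical_bar", "horizontal_top", "horizontal_mid", "horizontal_bottom"},  # E
]
print(detect_word(stimulus))  # SAFE
```

Notice that information only flows upward here, from features to letters to words; top-down influences such as word knowledge correcting an ambiguous letter are deliberately absent from this sketch.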

Distal stimulus: stimulus in the environment

Proximal stimulus: stimulus as it is represented in the mind

Bottom-up processing: understanding the environment through basic feature identification and processing

Bottom-up processing as described in feature detection models received early physiological support. Using the single-cell recording technique described in Chapter 2, researchers Hubel and Wiesel (1959) identified neurons in the visual cortex that are selectively activated by features in the environment. They recorded the activity from neurons in the striate cortex (an area in the occipital lobe) of cats as different shapes of light were presented to their retinas. Recordings from the neurons showed that some cells were active when horizontal bars were presented, others when vertical bars were presented, and still others when diagonal bars of one orientation were presented (see Figure 3.4 for activity of a neuron specialized for vertical bars). These results suggest that visual feature detection occurs at the level of individual neurons in the visual cortex. Others (e.g., Bullock, 1961) suggested that feature detection specialization in the brain also exists for other sensory systems, such as the auditory system.

Figure 3.3 An Example of Bottom-Up Processing That Would Allow Perception of the Word Safe

Image

Another example of bottom-up processing from the computational approach is a theory about object recognition based on features of the objects called geons. Geons are the basic three-dimensional pieces of objects, such as cylinders, cones, and blocks (see Figure 3.5 for examples of geons and some objects that can be created from them). Biederman (1987) proposed that we identify objects by first identifying the geons that make up the object. We then match the geons we perceive against representations of objects stored in memory to identify the whole object. He showed that we can easily identify objects from different angles and objects that are occluded based on the geons. This is similar to the feature detection model described in Figure 3.3 for perceiving words; however, in Biederman’s object recognition model, the features are three-dimensional geons instead of the two-dimensional lines and shapes seen in Figure 3.3.

Figure 3.4 Neuron Activity for Lines at Different Orientations in Hubel and Wiesel’s (1959) Study

Image

Source: Figure 3, “Receptive Fields of Single Neurons in the Cat’s Striate Cortex,” by D. H. Hubel and T. N. Wiesel, 1959, Journal of Physiology, 148, pp. 574–591. © 1959 by The Physiology Society. Reprinted with permission from John Wiley & Sons, Inc.

One process the computational approach to perception has focused on is the use of basic cues in the environment as a means of interpreting the stimuli with which we are presented. For example, cues in visual stimuli help us estimate objects’ size and distance. See Photos 3.1 and 3.2 for illustrations of the use of these cues. In Photo 3.1, we can use the linear perspective of the tracks to help determine the distance between the electrical poles on the right. The tracks seem to converge (i.e., get closer together) higher up in the photo. This implies that the tracks go off into the distance in a three-dimensional environment. We can also use the size of the image of an object on our retina to help us determine the object’s distance. In Photo 3.2, we perceive that the woman is closer to us than the buildings, partly because the image of the woman imposed on our retina is larger than the image of the buildings. However, we also need to use some knowledge about the objects to make these judgments. Knowing that the woman is not as tall as the buildings can help us judge the objects’ distance as well. In Photo 3.2, the size is similar for the images of the woman and the tallest building. Thus, we use additional knowledge we have about the objects to determine that the building must be farther away because at the same distance, the building should have a larger retinal image size.

Figure 3.5 Geons

Image

Source: Galotti, K. M. (2014). Cognitive psychology in and out of the laboratory (5th ed.). Thousand Oaks, CA: Sage.

Using knowledge of the objects is an example of top-down processing. When we perceive objects using our knowledge of the world, we use top-down processing. Thus, although we are using basic feature cues in the environment to perceive, we also rely on our knowledge of the world to interpret those cues. In some cases, our interpretation of these cues can be incorrect, creating an incorrect interpretation of an object. In other words, the proximal stimulus in our mind does not provide an accurate representation of the distal stimulus in the environment. This can be seen in some common illusions. In fact, these illusions seem to occur because we are interpreting the cues in a consistent way across stimuli.

Geons: basic three-dimensional pieces of objects

Top-down processing: understanding the environment through global knowledge of the environment and its principles

Photo 3.1 Train tracks showing a linear perspective, which helps determine the distance between the electrical poles.

Image

Marekuliasz/Shutterstock

Photo 3.2 The woman in front of these buildings shows how the distance of objects can be determined from retinal image size and knowledge about the objects.

Image

Maridav/Shutterstock

Consider the Ponzo illusion. In Photo 3.3, two cats are placed on the tracks shown in Photo 3.1. Which cat looks larger? Most people perceive the cat near the top of the photo as larger. This is because in the photo it looks like it is farther away at a point where the tracks are closer together, but, in fact, the images of the two cats are exactly the same size (measure them to see!). Because the images of the two cats in the photo are the same size, they create retinal images that are also the same size. Thus, retinal image is not the only cue we use to determine the size and distance of objects. This type of illusion is interesting to perceptual researchers because it shows how we use the linear perspective cues in the scene to misinterpret the size of the objects on the tracks. However, in many cases, the linear perspective gives us an accurate depiction of the objects’ distance, as it does in Photo 3.1 when we judge the distance of the two signs and the trees in the scene. The illusion shows that we use the linear perspective cues present in the environment to perceive the distance of objects.
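One common way to express this explanation of the Ponzo illusion is the size–distance invariance idea: perceived size scales with retinal size multiplied by perceived distance. The sketch below illustrates that relationship; the specific numbers are made-up illustrative values, not measurements from the photo.

```python
# Size-distance scaling: perceived size ~ retinal size x perceived distance.
# All values here are arbitrary illustrative units, not real measurements.

def perceived_size(retinal_size, perceived_distance):
    """Estimate perceived size from retinal size and assumed distance."""
    return retinal_size * perceived_distance

retinal_size = 2.0  # both cats subtend the same visual angle in the photo

# The converging tracks suggest the upper cat is farther away...
lower_cat = perceived_size(retinal_size, perceived_distance=10)
upper_cat = perceived_size(retinal_size, perceived_distance=15)

# ...so the same retinal size is scaled up to a larger perceived size.
print(upper_cat > lower_cat)  # True
```

The illusion falls out of the arithmetic: when two objects produce identical retinal images but the depth cues assign one of them a greater distance, the scaling rule forces that object to be perceived as larger.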

Photo 3.3 Illustration of the Ponzo illusion: the cat on the bottom looks smaller due to the linear perspective of the train tracks.

Image

Marekuliasz/Shutterstock

Neuropsychologists have recently studied the relationship between brain function and organization and the perception of illusions. For example, Schwarzkopf, Song, and Rees (2011) examined the relationship between the strength of the Ponzo illusion seen in Photo 3.3 and the size of the primary visual cortex area in the occipital lobe known as V1. They found a positive correlation such that the subjects with larger V1 areas also perceived stronger illusions (i.e., the subjects reported a larger size difference between the two objects in the image when they were actually the same size).

Even though we use other cues to help determine size and distance of stimuli, retinal image size is an important cue for our interpretation of objects’ size and distance. Try this yourself: Hold two objects of the same length (e.g., two pencils) in front of you with one object held right in front of your face and the other object held out at arm’s length. You can easily see that you perceive the object held at arm’s length as farther away. Figure 3.6 shows how two pencils held at different distances create different-sized images on the retina. The closer pencil has a larger image size, helping us perceive it as closer to us in the environment. This is how retinal image size serves as a cue in judging objects’ distance and size.
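The geometry behind Figure 3.6 can be stated precisely as the visual angle an object subtends: theta = 2 * atan(size / (2 * distance)). The pencil lengths and viewing distances below are illustrative values chosen for the demonstration, not figures from the text.

```python
import math

def visual_angle_deg(object_size, distance):
    """Visual angle (degrees) subtended by an object of a given size at a
    given distance, using theta = 2 * atan(size / (2 * distance)).
    Size and distance must be in the same units."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

# Two identical 19 cm pencils: one 10 cm from the eye, one at arm's length.
near_pencil = visual_angle_deg(19, 10)  # roughly 87 degrees
far_pencil = visual_angle_deg(19, 60)   # roughly 18 degrees

# The nearer pencil projects a much larger image on the retina,
# even though the two pencils are physically the same length.
print(near_pencil > far_pencil)  # True
```

Because the angle shrinks as distance grows, identical objects at different distances produce very different retinal image sizes, which is exactly the cue the demonstration with the two pencils exploits.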

Figure 3.6 Retinal Image Size

Image

Examining object perception using cues such as linear perspective and retinal image size led to the theory of unconscious inference proposed by one of the first perceptual psychologists, Hermann von Helmholtz. The theory of unconscious inference suggests that we make unconscious inferences about the world when we perceive it. In other words, we use our top-down processing unconsciously to perceive and interpret the environment. Consider the objects in Photo 3.4. How would you describe these objects? Most people would say something like “A cat is lying in a pot.” However, the entire cat is not actually visible in this picture. Thus, it is possible that only a portion of a cat is there and the rest of the cat is missing. But since that is an unlikely scenario, we interpret the scene as a whole cat in a pot with some of the cat hidden from view. This illustrates the likelihood principle that is part of the theory of unconscious inference. We perceive the object that is most likely in the scene when we view it, even if there are other possible interpretations of the scene.

Photo 3.4 This cat lying in a pot illustrates how we make unconscious inferences about objects to perceive the environment.

Image

Anjo Kan/Shutterstock

In summary, the computational approach to perception focuses on cues in the environment as a means of perceiving and interpreting stimuli. Both bottom-up and top-down processing contribute to object and scene interpretations. Cues such as linear perspective and retinal image size help us determine the size and distance of objects in the environment. However, those cues can be incorrectly interpreted and create errors in our perceptions in certain situations. But the errors are simply a by-product of a perceptual system that works by means of processing these cues in consistent ways. We will encounter another example of how our normal cognitive processes can inadvertently create errors in Chapter 7, when we consider memory errors.

Stop and Think

· 3.5. Explain what it means to interpret scenes based on cues present in those scenes.

· 3.6. In what way do illusions illustrate the normal processes of perception?

· 3.7. You see a light approaching on the road at night. According to the likelihood principle, which of the following are you most likely to perceive: (a) a deer crossing the road wearing a headlight, (b) a UFO, or (c) an approaching car? Explain your answer.

· 3.8. In the scene in Photo 3.4, describe some cues you can use to determine that the front of the pot is closer to you than the cat.

· 3.9. People report a “moon illusion” such that the full moon appears larger when it is lower in the sky and close to the horizon than when it is high in the sky and above us. Using what you learned about the use of cues in this section, why do you think the moon illusion occurs?

Gestalt Approaches

Take a look at the scene in Figure 3.7. What do you see there? Most people perceive a triangle with the points overlaid on top of circles. However, consider what is actually in the figure: Are there any triangles or circles in the figure? No, so why do we see these shapes? The Gestalt psychology approach to perception suggests that interpretation of a scene involves applying principles of how the world is organized. In other words, top-down processing is a key component of Gestalt approaches to perception. According to the Gestalt approach, perception occurs through applying a set of organizational principles that follow physical processes of the natural world. In applying these organizational principles, our perception of a scene is “more than the sum of its parts.” Table 3.1 summarizes some of the first organizational principles proposed by Gestaltists (see Wagemans et al., 2012, for a more complete listing of principles), and each of these is described with illustrative examples.

Theory of unconscious inference: the idea that we make unconscious inferences about the world when we perceive it

Gestalt psychology: a perspective in psychology that focuses on how organizational principles allow us to perceive and understand the environment

1. Similarity.

The first organizational principle of perception is similarity. We tend to group objects or features of a scene based on their similarity. Consider Photo 3.5: Describe what you see in this figure. Did you say something like “a number of pencils, a few pens, and scissors in a cup”? If you did, you illustrated the principle of similarity: You organized like objects together and described the figure according to these similarities. This is more natural and common than describing each individual object in the cup on its own or grouping objects that are not similar.

Figure 3.7 A Figure Perceived as a Triangle Overlaid Onto Three Circles Illustrates the Gestalt Approach to Perception

Image

2. Proximity.

Another organizational principle of perception is proximity. We tend to group objects or features of a scene based on their proximity to one another. How would you describe the scene in Photo 3.6? Do you see a couple at a party while another couple works at the grill in the background? This is a common organization described for a scene like this. We tend to group the people close to one another in the scene together as we describe and interpret it. For example, we’re likely to assume the couple in the foreground are having one conversation, while the couple in the background are having their own conversation and working on a different activity. Proximity can also help us distinguish between the objects in a scene and the background of a scene. We discuss further the separation of foreground and background in the environment later in this section.
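The intuition behind proximity grouping can be sketched as a simple clustering rule: points closer together than some threshold belong to the same perceptual group. This is only a minimal single-link grouping sketch; the coordinates and threshold are arbitrary assumptions standing in for people's positions in a scene like Photo 3.6.

```python
def group_by_proximity(points, threshold):
    """Group 2-D points whose pairwise distance falls below the threshold
    (single-link: a point joins a group if it is near ANY of its members)."""
    groups = []
    for p in points:
        # Find every existing group containing a point near p.
        near = [g for g in groups
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 < threshold ** 2
                       for q in g)]
        merged = [p]
        for g in near:          # merge all nearby groups with p
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

# Two "couples": two people standing together, two more off at the grill.
people = [(0, 0), (1, 0), (10, 10), (11, 10)]
print(len(group_by_proximity(people, threshold=3)))  # 2 groups
```

The observer's description of the scene mirrors the output: four individuals are reported as two units because spatial nearness, not any property of the individuals themselves, drives the grouping.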


3. Good continuation.

Good continuation refers to our understanding that objects continue, even if parts of them are occluded. Photo 3.4 with the cat in the pot illustrates this principle. We interpret the scene as an entire cat lying in the pot, even though we can only see a portion of the cat in the photo. Photo 3.7 illustrates good continuation as well. We tend to see this figure as a woman holding two ends of a single rope, rather than holding two separate pieces of rope, even though we cannot see the entire rope. We have the same interpretation for any line that has an object occluding a portion of it.

Photo 3.5 This figure illustrates the principle of similarity; an observer typically describes the scene with similar objects grouped (“some pencils,” “some pens”).

Image

RoJo Images/Shutterstock

Photo 3.6 This scene illustrates the principle of proximity; we organize the scene into sets of people based on their proximity to one another.

Image

George Rudy/Shutterstock

Photo 3.7 This photo illustrates the principle of good continuation; we see the line as a single rope held at both ends instead of as two separate ropes.

Image

Brocreative/Shutterstock

4. Closure.

The principle of closure allows us to view incomplete objects as a whole. For example, we see the object in Figure 3.8 as a circle, even though it is missing a small piece. In fact, closure contributes to perceiving a triangle in Figure 3.7. We perceive the complete triangle with angles on the circles even though the sides of the triangle are not filled in completely.

5. Principle of Pragnanz.

The principle of Pragnanz (also called the law of good figure or law of simplicity) suggests that we perceive scenes as simply as possible. Pragnanz is a German term meaning concise or succinct. Thus, this principle proposes that we view scenes in the most concise way possible, with a simple interpretation (thus, the law of simplicity). The first four principles can be viewed as specifics of the principle of Pragnanz. They each provide a specific way that we organize a scene more simply.

Principle of Pragnanz: an organizational principle that allows for the simplest interpretation of the environment

Consider the scene in Photo 3.8. According to the principle of Pragnanz, we organize the scene according to the simplest interpretation. What other organizational principles help you perceive this scene? Do you perceive a complete boy in the pile of leaves, even though part of him is occluded, due to good continuation? Do you perceive a pile of leaves because you grouped the leaves together as similar objects?

Finally, consider Photo 3.9. What do you see in this photo? Do you see a blue vase or do you see two white faces? It is possible to see both of these in the figure successively, depending on which color you assign to the background, white or blue. If you think of blue as the background, you see two white faces. If you think of white as the background, you see a blue vase. This occurs because of the figure-ground organization (which part you see as figure and which part you see as background) within a scene and is consistent with Gestalt principles. We simplify the scene by assigning a color to the background that allows us to see the objects. Organizing a figure in terms of similarity of color lets us perceive distinct objects. Figure-ground organization becomes difficult, however, when figure and background are very similar: in a photo of brown-and-white horses against a brown-and-white background, we can perceive the horses only by organizing some patches of brown and white as belonging to the background and other patches as belonging to the horses, which is what makes the horses hard to see. In Photo 3.9, the figure and background are much more distinct, allowing us to separate them more easily as we view either the vase or the faces.

Pomerantz and Portillo (2011) describe research that supports use of the organizing principles from the Gestalt approach in perception. Such studies have shown that stimuli whose basic features combine into more complex configurations (that is, more of a “whole”) can be easier to perceive than the basic features shown alone. This is called the configural superiority effect. To illustrate the effect, consider two situations where you are attempting to find a target stimulus that is different from the others in an array of stimuli. Examine the three arrays shown in Figure 3.9. The first array (A) shows lines all slanted in the same direction except one. You may notice the line that is different, but it probably does not “pop out” of the array easily. Now, suppose we add the stimuli in Array B to Array A. This results in the more complex array seen in Array C. How easily can you detect the line slanted in the opposite direction in this more complex array? For most people it pops out at them and they very quickly detect it in the array.

Photo 3.8 This complex scene illustrates several Gestalt principles. How many can you identify?

Image

Photo 3.9 Do you see a blue vase or two white faces? This drawing illustrates the figure-ground organization of scenes.

Image

Figure 3.8 Due to the Principle of Closure, We View This Object as a Circle, Even Though It Is Not Complete

Image

Is there evidence of corresponding brain activity that relates to perceptual processes as described in the Gestalt approach? Recent studies in neuropsychology suggest there is. Some studies using the EEG recording technique (see Chapter 2 for a review of brain activity recording techniques) have shown that when subjects view figures such as the one shown in Figure 3.7, the features of the perceived object, along with features presented in other modalities (e.g., sounds), are bound together in the occipital-temporal cortex of the brain (Fiebelkorn, Foxe, Schwartz, & Molholm, 2010). Other studies using fMRI have found similar evidence of feature binding in the parietal cortex for Gestalt figures (Zaretskaya, Anstis, & Bartels, 2013). Thus, neuroscientists are exploring how the organizational principles proposed in the Gestalt approach correspond to brain activity that connects the features of stimuli from the environment.

Figure 3.9 These Arrays Help Illustrate the Gestalt Idea of “Whole” Stimulus Processing at Work

Image

Source: Adapted from Pomerantz, J. R., & Portillo, M. C. (2011). Grouping and emergent features in vision: Toward a theory of basic Gestalts. Journal of Experimental Psychology: Human Perception and Performance, 37(5), 1331–1349.

The Gestalt approach to perception grew out of ideas that perception is more than just interpreting cues in the environment; it is more than just the sum of the parts of a scene. Instead, we rely more on top-down processing and our knowledge of the world in the form of organizing principles to help us perceive the world. Even in cases where perception is more difficult, these organizational principles can help us view objects in a scene that may be hard to perceive.

Perception/Action Approaches

Where computational and Gestalt approaches focus more on the “what” of perception, perception/action approaches focus more on the “what for” aspect of perception. What are the possible affordances of this environment (i.e., possible behaviors in a given environment)? Can I pass through that space? Can I use this stick to hammer in that nail? If I jump over this gap, will I make it without falling? According to these approaches, perception and action are intricately linked. One must consider them together to understand each one. Because the perception/action approach examines perception according to how it aids in performing behaviors, it is consistent with the embodied cognition approach described in Chapter 1.

Affordances: behaviors that are possible in a given environment

This approach has its roots in ecological psychology, first suggested by James Gibson (1979) as an alternative to representationalist approaches to perception. The computational approach describes perception to some degree as relying on representations of the world, with a proximal stimulus created in our minds to represent the distal stimulus in the environment. Thus, the focus is on how we interpret stimuli in the environment and the processes responsible for those interpretations. With the ecological approach, Gibson suggested that information about the world is available in the detectable patterns in the environment such that we directly perceive without first transforming a distal stimulus into a proximal stimulus. From this approach, the focus in studies of perception should be on how we perform goal-directed behaviors (Fajen, Riley, & Turvey, 2009). For example, how are we able to avoid bumping into objects when we move around in the environment?

Photo 3.10 An illustration of optic flow; less blurry objects are farther away.

Image

Studio 37/Shutterstock

For the past few decades, researchers following the ecological view have focused on this question in perceptual research: How do we perform goal-directed perceptual behaviors? Optic flow was one of the first concepts to be studied in this research. If you drive a car (or ride in one), consider what you experience as you move through the environment. Objects in the environment that are closer to you seem to pass by faster than objects that are farther away, even though you are moving and they are not (see Photo 3.10). This is an example of optic flow. It is the movement pattern generated by objects at different distances as you move past them. Photo 3.10 shows the optic flow that might be experienced from a moving train. The people on the train view closer objects as moving past the window very quickly, but objects in the distance (e.g., the trees) as moving more slowly. Optic flow is an important part of our perception of the environment. According to the perception/action approach, we perceive objects’ distance based on the optic flow, not from first representing the object in our minds based on its retinal image size.
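The geometry behind optic flow can be sketched with a short calculation. For an observer moving at speed v past a stationary object at perpendicular distance d, the object's angular speed at the moment of closest approach is v/d, so nearer objects sweep across the visual field faster. The toy sketch below (the speeds and distances are illustrative, not taken from the text) makes the point:

```python
def angular_speed(observer_speed_mps: float, distance_m: float) -> float:
    """Peak angular speed (radians/second) of a stationary object at
    perpendicular distance `distance_m` as an observer passes it at
    `observer_speed_mps`. At closest approach, omega = v / d."""
    return observer_speed_mps / distance_m

# An observer on a train moving at 30 m/s (illustrative values):
fence_post = angular_speed(30, 5)    # object 5 m from the track
far_tree = angular_speed(30, 500)    # tree 500 m away

# The nearby fence post sweeps past far faster than the distant tree,
# even though neither object is moving -- this gradient of angular
# speeds across the scene is the optic flow pattern.
print(fence_post, far_tree)
```

The same inverse relation explains why the trees in Photo 3.10 barely seem to move while trackside objects blur past the window.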

Stop and Think

· 3.10. How does the Gestalt approach to perception differ from the computational approach to perception?

· 3.11. How is top-down processing involved in the Gestalt approach to perception?

· 3.12. Look around your environment and describe some examples of good continuation in the objects around you.

· 3.13. Consider the moon illusion described in Stop and Think 3.9. Would the Gestalt approach to perception explain this illusion differently than the computational approach? Why or why not?

The perception/action approach is broader than the ecological view of perception. In some perception/action approaches, actions are an important part of the process of perception, but perceiving an object may still involve representations of that object in the mind. Thus, perception/action approaches often blend elements of the ecological view and the representationalist view. For example, the perception of a chair may result from knowing that a chair can be used to sit or stand on because that is what you are currently looking for in your environment, but you can still identify the object as a chair if someone asks you what the object is.

Research with a perception/action approach has considered how perception and action are tied together. Consider the following scenario: You are shown the room setup that appears in Photo 3.11a. Given this room configuration, would you prefer to (1) walk to the left of the table, pick up the bucket with your right hand, and place the bucket on the near stool, or (2) walk to the right side of the table, pick up the bucket with your left hand, and place the bucket on the far stool? How about the room setup in Photo 3.11b or in Photo 3.11c? Would you choose the same path or change your path? These were the scenarios faced by subjects in a study by David Rosenbaum (2012). In this task, the reaction time to choose a path was recorded for different scenarios to determine whether people simulated the paths in their minds one by one (i.e., sequential processing) before choosing the shorter path or considered all the paths at once (i.e., parallel processing) and chose the shorter path more quickly. The reaction time data showed that the time it took to choose a path was a function of the difference in length of the two paths, supporting the suggestion that both paths are considered at once (i.e., parallel processing of path possibilities). Reaction times did not increase with the overall lengths of the paths, contrary to what would be predicted if subjects simulated each path one at a time before choosing the shorter one: in that case, the decision should take as long as mentally traveling the first path and then the second.
Rosenbaum also showed that the paths chosen in this study were consistent with data collected in a previous study (Rosenbaum, Brach, & Semenov, 2011), where subjects chose a path in the actual environment and then performed the requested action (i.e., walk along the side of the table, lift the bucket off the table, and place the bucket on the stool). See Figure 3.10 for a graph of these results. The consistency in path choice across the two studies indicates that the plan to perform the action is the same as when the action is actually performed.
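Rosenbaum's logic can be made concrete with two toy models of decision time. The functions and constants below are hypothetical, invented here for illustration; they are not the models fitted in the study. A sequential simulator predicts reaction times that grow with the total length of the two paths, whereas a parallel race predicts reaction times that depend only on the difference in lengths:

```python
def rt_sequential(len1_m: float, len2_m: float) -> float:
    """Toy sequential model: mentally walk each path in turn, so the
    decision time (ms) grows with the TOTAL length of the two paths."""
    base_ms, ms_per_m = 300, 40  # illustrative constants
    return base_ms + ms_per_m * (len1_m + len2_m)

def rt_parallel(len1_m: float, len2_m: float) -> float:
    """Toy parallel model: the two paths race at once; choosing is
    harder (slower) when the lengths are similar, so the decision time
    (ms) depends on the DIFFERENCE in lengths, not the total."""
    base_ms, k = 300, 800  # illustrative constants
    return base_ms + k / (abs(len1_m - len2_m) + 1)

# Two rooms with the same 2 m length difference but different totals:
short_room = (3, 5)    # paths of 3 m and 5 m
long_room = (13, 15)   # paths of 13 m and 15 m

# The sequential model predicts a much longer decision in the long
# room; the parallel model predicts identical decision times.
print(rt_sequential(*short_room), rt_sequential(*long_room))
print(rt_parallel(*short_room), rt_parallel(*long_room))
```

Rosenbaum's data matched the second pattern: decision time tracked the length difference, not the total length.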

Photos 3.11a, b, & c Room setups shown in the Rosenbaum (2012) study. Which path would you choose?

Image

Source: Rosenbaum, D. A. (2012, figure 1).

In another example of perception/action research, Witt, Linkenauger, and Proffitt (2012) examined the effect of a perceptual illusion on putting performance in golf. These researchers asked subjects to perform golf putts to a hole with projected surrounding circles. This was done in the context of a perceptual illusion: Larger circles around the hole make the hole appear smaller than if the hole is surrounded by smaller circles (see Figure 3.11). This is known as the Ebbinghaus illusion. When subjects saw the hole surrounded by larger circles, as in Figure 3.11a, their putting performance was worse than when they saw the hole surrounded by smaller circles, as in Figure 3.11b. These results are shown in the graph in Figure 3.12. Witt et al.’s (2012) study showed the important connection between sports performance and perception. Another study by two of these researchers (Witt & Proffitt, 2005) also showed that softball players with higher batting averages judged the size of the ball as larger when they were shown images of balls and asked to choose the correct size of the softball, further illustrating the link between perception and action.

Research in this area has also shown that judgments about the environment can be influenced by our current body perspective, even when no action is planned. Malek and Wagman (2008) asked subjects to judge whether they could stand upright on an inclined surface while wearing a weighted backpack either on their back or on their front (see photo in Figure 3.13). Wearing the backpack on the back pulled the subjects’ center of mass backward, whereas wearing it on the front pulled their center of mass forward. When standing on an inclined surface, a center of mass pulled backward makes it harder to stay upright, whereas a center of mass pulled forward makes it easier. Malek and Wagman (2008) asked whether this difference in backpack position would affect perceptual judgments of affordances (i.e., possibilities for standing behavior) even though the subjects did not have to actually stand on the surface. They found results consistent with a perception/action perspective: When wearing the backpack on their front, subjects judged that they could stand on higher-angled surfaces more often than when they wore it on their backs. These results suggest that perception is influenced by possible actions, even when those actions do not actually need to be performed.

Figure 3.10 Comparison of Results From the Rosenbaum et al. (2011) Study and the Rosenbaum (2012) Study

Image

Figure 3.11 The Ebbinghaus Illusion

Image

SOURCE: Witt et al. (2012, figure 1 excerpt).

Figure 3.12 Results From the Witt et al. (2012) Study

Image

Source: Witt et al. (2012, figure 1 excerpt).

Additional applications of the perception/action approach appear in studies where subjects judge possibilities for using objects with only tactile information. Wagman and Hajnal (2014) examined subjects’ judgments of whether they could stand on a ramp after exploring the ramp’s angle with a hand-held stick (the subjects in the study were blindfolded). Figure 3.13 shows how the subjects in this study performed this task. The researchers found that even without seeing the ramp, subjects were accurate in identifying ramps that afforded standing across a range of conditions (exploration with the dominant or non-dominant hand, while sitting or standing, and foot-controlled tool exploration). This study shows that we use more than just our visual sense to judge affordances for actions.

Is there brain activity evidence for a connection between perception and action? The answer is controversial. There is evidence that different brain areas are responsible for recognizing an object and for locating an object (Milner & Goodale, 2008). Because an object’s location is what matters most for acting on it, if these two functions are separate and independent, this might suggest that perception and action are also separate. The “what” brain pathway responsible for recognizing an object begins in the lower occipital lobe and leads to the temporal lobe, where language functions are controlled. This is known as the ventral pathway (or ventral stream) because it runs along the underside of the cortex. The “where” brain pathway responsible for locating an object begins in the upper occipital lobe and leads to the parietal lobe, which lies just behind the motor cortex. This is known as the dorsal pathway (or dorsal stream) because it runs along the top of the cortex (think of a dorsal fin on a shark to help you remember where it is located). See Figure 3.14 for the location of these pathways in the brain. There is evidence that the ventral and dorsal pathways process “what” and “where” information for both visual and auditory stimuli (e.g., Rauschecker & Tian, 2000; Ungerleider & Haxby, 1994). Figure 3.15 shows active brain areas in a study by Maeder et al. (2001) for tasks of locating sounds (dorsal “where” areas at the top of the cortex, shown in red) and identifying sounds (ventral “what” areas at the bottom, shown in green).

Ventral pathway: the pathway in the brain that processes “what” information about the environment

Dorsal pathway: the pathway in the brain that processes “where” information about the environment

Figure 3.13 Setup Faced by Subjects in the Wagman and Hajnal (2014) Study

Image

The controversy here comes from the mixed evidence in studies attempting to dissociate ventral and dorsal pathway functions. For example, Ganel, Tanzer, and Goodale (2008) reported that although subjects showed the Ponzo illusion (see Photo 3.3) in their size judgments of objects, their reaching behaviors were not affected by the illusion. However, as described earlier, Witt et al. (2012) showed that the Ebbinghaus size illusion (see Figure 3.11) affected subjects’ golf performance. Thus, studies have produced data both in support of a dissociation between perception and action (i.e., showing that a variable affects one behavior but not others) and in contradiction to such a dissociation. One possibility is that some actions have stronger links with perception than others. For example, many of the studies showing dissociations between the ventral and dorsal pathway functions involved reaching and/or grasping, behaviors that require real-time location information for objects. In addition, McIntosh and Lashley (2008) showed that expected object size affected reaching behaviors, indicating a link between perception and action, but Borchers, Christensen, Ziegler, and Himmelbach (2010) showed that this effect occurred only when the objects being reached for were familiar to the subjects through long-term use (e.g., objects they used in everyday life). Thus, different types of action behaviors may vary in the strength of their connection to visual perception.

Figure 3.14 Location of Dorsal and Ventral Visual Streams in the Brain

Image

Figure 3.15 The Dorsal “Where” and Ventral “What” Streams of Auditory Processing

Image

A neuropsychological finding that provides stronger support for the link between perception and action is the discovery of mirror neurons (first described in Chapter 2). Mirror neurons were discovered in a study by Rizzolatti, Fadiga, Gallese, and Fogassi (1996). These researchers were recording activity from neurons in an area of the brain known as F5 in the premotor cortex, which contains neurons involved in sensation and movement of the hands. Neuron activity was recorded using the single-cell recording technique (see Chapter 2) in monkey subjects. The monkeys were trained to reach into a box and grasp an object, and neurons in the F5 area were active during this grasping task. However, the researchers also showed that these neurons were active when the monkey simply watched the researchers perform hand movements related to grasping (e.g., grasping an object, placing an object on a surface). These neurons were not active when the researchers performed movements not related to grasping the object (e.g., picking up the object with a tool). Rizzolatti et al. (1996) called these neurons mirror neurons because they were active both when the monkeys performed known actions and when they watched those actions performed by others. In other words, mirror neurons seem to be specialized for the connection between perception and action of known movements.

Stop and Think

· 3.14. How do perception/action approaches to cognition differ from computational approaches?

· 3.15. What is an affordance?

· 3.16. I am looking at the lilac tree in bloom outside my window. I immediately imagine going out and smelling the flowers. Explain how my perception of the lilac flowers fits a perception/action approach.

· 3.17. Would a perception/action researcher be interested in explaining the moon illusion described in Stop and Think 3.9? Why or why not?

More recent studies have shown mirror neuron function in humans. For example, Calvo-Merino, Glaser, Grèzes, Passingham, and Haggard (2005) conducted fMRI scans (see Chapter 2) of the premotor cortex areas where mirror neurons reside. Subjects were experts in classical ballet, experts in capoeira (a Brazilian martial art), or nonexpert controls. During the fMRI scans, subjects viewed similar movements from classical ballet and capoeira. Brain activity in the premotor cortex was greater only when subjects viewed movements in their area of expertise (e.g., ballet experts viewing ballet movements). This study showed that mirror neurons are active in humans when they view movements that they know how to perform, suggesting a link in brain activity between perception and action.

Comparison of Approaches to Perception: Motion Perception

The three approaches to the study of perception have been used by researchers to gain important knowledge about how we perceive the world. However, as you have seen from the discussion so far, researchers ask different questions about perception and conduct different types of studies depending on the approach they take in studying perception. We now consider motion perception to compare what we have learned from the three approaches to perception described in this chapter.

Computational perception researchers have looked at how visual cues help us detect motion and the speed of motion in the environment. Changes occurring in the retinal images over time are one cue. Although retinal images are constantly moving, even for stationary objects, due to the constant movement of our eyes across a scene, retinal images that move more than others over time can indicate movement of the objects creating them. Further, cues in the scene can aid in detecting movement of objects. If an object moves across a background in a scene, the movement can be detected more easily than without a background. Consider the scene in Photos 3.12a and b. The gradient in the background in (a) allows you to track the movement of the man more easily than in (b), where there are no landmarks in the background to use as reference points for his movement. Finally, research in neuropsychology has shown that neurons in the parietal lobe near the occipital lobe make up a “when” pathway separate from the “what” (ventral) and “where” (dorsal) pathways described in the previous section (Battelli, Pascual-Leone, & Cavanagh, 2007). Studies have shown that neurons in this area respond selectively to motion stimuli and are highly active when the direction of an object’s movement is accurately detected (Newsome, Britten, & Movshon, 1989). This fits the idea, described earlier in the chapter, of feature detection by neurons with selective activation for specific stimuli, in keeping with the computational view of perception.

Photos 3.12a & b In the scene in (a), we can use the lines of the fence in the background to perceive the movement of the man more easily than in (b) where the background does not contain these cues.

Image

(a) Srdjan Fot/Shutterstock

(b) Mimigephotography/Shutterstock

Gestalt researchers have also examined motion perception, focusing more on apparent motion as seen in a visual illusion known as the phi phenomenon (Wagemans et al., 2012). Whenever we detect movement on a digital billboard (e.g., a Jumbotron at a football game), we are actually seeing light pixels flashing on and off or changing colors in a specific pattern rather than anything actually moving. This is why the movement is called “apparent”: the lights seem to show objects moving, but nothing is actually moving in space. The phi phenomenon shows that we organize stimuli flashing on and off into the kind of motion we know objects exhibit in a scene.

A classic example of the phi phenomenon is seen at railroad crossings. The next time you are stopped at a track with a crossing train, look at the blinking red lights on the sign. They appear to hop back and forth on the sign, but this is simply caused by two red lights blinking on and off with opposite timing.

In an example of research on apparent motion, Oyama, Simizu, and Tozawa (1999) examined how the principles of proximity and similarity influence apparent motion effects. Proximity and similarity were manipulated in apparent motion and perceptual grouping displays to determine which of these organizational principles is most important in perceiving apparent motion. Their results suggested that similarity was the more important element because they found that changes in similarity of color, size, and other factors influenced both perceptual grouping and apparent motion perception. Thus, research on Gestalt principles is contributing to our understanding of these kinds of motion effects.

The perception/action approach considers movement in terms of goals for our own action. When an outfielder views the movement of a fly ball and adjusts his action behaviors to catch the ball, he is showing the type of behaviors that perception/action researchers are interested in studying. An example of this type of research was conducted by Shaffer, Krauchunas, Eddy, and McBeath (2004). These researchers examined the movements of dogs catching flying Frisbees. Small video cameras were attached to the dogs’ heads while they completed the task of catching the Frisbees. The researchers then analyzed the video data from the dogs. They found that the dogs worked to catch the Frisbees by matching their movements to the speed and trajectory of the Frisbees to keep the Frisbees in sight; as the Frisbees came closer, the dogs were able to close the gap to catch them. Other research has shown that humans use similar control mechanisms in completing tasks such as catching a fly ball or controlling an aircraft (McBeath, Shaffer, & Kaiser, 1995). The study of optic flow described earlier also provides an example of the perception/action approach to motion perception. Beall and Loomis (1997) showed that aircraft pilots use optic flow in the environment to guide their landings.

Stop and Think

· 3.18. Reconsider the scenario presented at the beginning of the chapter where you are walking across your crowded campus. How would each of the three approaches describe perception in this situation?

The perception of motion likely involves a combination of processes. Thus, multiple approaches to the study of motion perception can aid in creating a full understanding of how it is accomplished. The three approaches described in this chapter have each contributed information about how these processes operate in humans and other animals. In this way, these approaches continue to guide perceptual researchers as they explore all areas of perception.

Thinking About Research

As you read the following summary of a research study in psychology, think about the following questions:

1. Which of the three approaches to the study of perception do you think this study most adheres to?

2. What was the primary manipulated variable in this experiment? (Hint: Review the Research Methodologies section in Chapter 1 for help in answering this question.)

3. From this study, is there evidence of bottom-up and/or top-down processing in scene categorization? Explain your answer.

4. Of the results described, which are most informative about the research question in this study? Explain your answer.

Study Reference

Malcolm, G. L., Nuthmann, A., & Schyns, P. G. (2014). Beyond gist: Strategic and incremental information accumulation for scene categorization. Psychological Science, 25, 1087–1097.

Purpose of the study: Researchers investigated how we categorize complex scenes. Previous studies have shown that understanding the gist (i.e., basic meaning) of a scene occurs very quickly. Malcolm et al. (2014) were specifically interested in whether we use detailed information of a scene in quickly interpreting that scene. They showed their subjects common scenes (e.g., restaurant, pool) in a blurred state. However, the subjects could choose to focus on a part of the scene to help them understand it. Subjects were asked to view the scene and choose the category it belonged to. The area of the subjects’ focus in the scene provided the primary test of the research question. If subjects showed no consistency across scenes in their focus, this would indicate that the specifics of the scenes did not aid in categorization of the scene. However, if subjects focused on consistent aspects of the scenes, this would indicate that those details of the scene were important in the categorization process.

Method of the study: Twenty-eight subjects participated in the experiment, each viewing 32 scenes. Each of the scenes belonged to one of the following four basic categories: pool, restaurant, classroom, or road. For each category, the scene belonged to a subcategory (e.g., restaurant: diner, pub, fine-dining establishment, cafeteria). Half of the subjects were asked to categorize the scene according to one of the four basic categories. The other half of the subjects were asked to categorize the scene according to a subordinate of that basic category (e.g., choose one: diner, pub, fine-dining establishment, or cafeteria).

The scenes were filtered such that they were blurred. Subjects viewed the scenes through an eye-tracking apparatus that recorded the location of their eyes’ fixation. When subjects fixated on a location, that location in the scene came into focus so that it could be viewed clearly. The scenes were presented randomly (blocked by basic category for the subjects who completed the task for subordinate categories). Scenes were shown until subjects pressed a button with their category choice or until 15 seconds had passed, whichever came first. Reaction time to respond was also recorded.

Results of the study: Reaction time data showed that subjects were significantly faster at categorizing scenes at the basic level (e.g., restaurant) than at the subordinate level (e.g., pub). Eye fixation data showed that subjects made more fixations in the subordinate condition than in the basic category condition. These results suggest that subordinate category judgments are slower and require more details of the scene.

To examine fixation pattern across the scene by categorization condition, the researchers examined the distance of the focus point from the center of the scene for the first five focusing points of each trial. The results showed that subjects fixated farther from the center of the scene when completing the subordinate categorization (e.g., pub) task than the basic categorization (e.g., restaurant) task. These data for the restaurant scenes are shown in Figure 3.16.

To examine whether specific objects were consistently focused on during the task, the researchers also considered the number of fixations per object relative to the total number of fixations in the scene. In both categorization conditions, specific objects received a significant proportion of the total fixations. Thus, there was evidence of consistent focusing on details within the scenes in both conditions.

Conclusions of the study: From the results of this study, the researchers concluded that details of the scene aid in categorization of natural scenes. This shows that we use more than just gist information to interpret scenes in cases where we categorize the scene at a basic category level and when we categorize the scene at a more detailed subordinate category level.

Figure 3.16 Mean Distance From the Center of the Scene for Focus Data From the Restaurant Scenes in the Malcolm et al. (2014) Study

Image

Chapter Review

Summary

· What is perception?

Perception is the set of cognitive processes through which we interpret the stimuli in the world around us.

· What is the purpose of perception?

The purpose of perception is to interpret the world around us. However, the means by which this occurs is varied and described in different ways by the different approaches researchers take in studying perception.

· How do our sensory systems affect our perception of the world?

Sensory systems do the job of turning sensations into perceptions that help us understand what we are encountering in the world. Sensory systems turn stimulus energy into neural signals that can be processed in the brain.

· Do we control our perceptions or can we perceive automatically?

In some cases perception happens automatically, without our control (e.g., in experiencing perceptual illusions), but there are situations where we control perception (e.g., in perceiving a way to accomplish a behavioral goal).

· Why do we sometimes perceive things incorrectly?

Perceptual illusions occur through the natural processes of perception. In fact, they help illustrate the way that perception typically occurs in cases where illusions do not result.

· What does it mean for something to be more than the sum of its parts?

The Gestalt idea of perceiving the whole is proposed as a contrast to the computational approach where the parts are added together to achieve perception of the whole stimulus (e.g., as in feature detection models and encoding of geons). In the Gestalt approach, perception is viewed as a process that organizes stimuli into a coherent whole based on top-down processing in the form of organizing principles.

· How does perception aid in action?

According to the perception/action approach, perception is conducted as a means to achieve goal-directed behaviors. Thus, perception and action are intricately tied together.

Chapter Quiz

1. Which of the three approaches to perception would describe perception of an object in terms of the geons that make up the object?

1. Gestalt

2. computational

3. perception/action

2. Which of the three approaches to perception would describe perception of a doorway in terms of whether it can be walked through?

1. Gestalt

2. computational

3. perception/action

3. Which of the three approaches to perception would describe perception of a tree as more than the addition of its branches, leaves, roots, and flowers?

1. Gestalt

2. computational

3. perception/action

4. Which of the following parts of a sensory system is responsible for transforming stimulus energy into neural signals?

1. sense organ

2. brain areas

3. receptor cells

4. nerve conduit

5. In which lobe of the brain is visual information first processed?

1. parietal

2. frontal

3. temporal

4. occipital

6. Two objects appear in a scene: an elephant and a mouse. The mouse is much closer than the elephant. Explain how you might know that the mouse is closer from cues in the scene.

7. Regarding question 6, what aspects of the scene would be of interest to a perception/action researcher?

8. According to the perception/action approach, explain how the perception of the gap in my backyard fence would differ between the rabbit in my backyard and me.

9. Look around the room you are in and describe your perception in terms of the Gestalt principles of proximity, similarity, and closure.

10. Explain the difference in processing of visual stimuli that occurs in the ventral and dorsal brain pathways.

11. In what way does the discovery of mirror neurons support the connection between perception and action?

12. How might mirror neurons be useful in social perception?

13. The _____________ visual pathway extends into the parietal lobe near the motor cortex, whereas the ____________ visual pathway extends into the temporal lobe where language is processed.

14. The information in the environment about movement where farther objects appear to be passing by more slowly than closer objects is called _______________.

15. Perception of the taste of food begins in the __________.

Key Terms

· Affordances

· Bottom-up processing

· Distal stimulus

· Dorsal pathway

· Geons

· Gestalt psychology

· Primary auditory cortex (A1)

· Primary visual cortex (V1)

· Principle of Pragnanz

· Proximal stimulus

· Sensory system

· Theory of unconscious inference

· Top-down processing

· Ventral pathway

Stop and Think Answers

· 3.1. Describe the four parts of a sensory system.

The four parts of a sensory system are (1) sense organ (eyes, ears, nose, tongue, skin), (2) receptor cells in each sense organ that receive stimulus energy and convert it to neural signals, (3) nerve conduit that carries the neural signal from the sense organ to the brain, and (4) brain area(s) that processes the neural signals received from the sense organ.

· 3.2. What is the role of receptor cells in perception?

The receptor cells serve the important role of converting stimulus energy (e.g., light, sound waves) to neural signals that can be received and processed by the brain.

· 3.3. What are the advantages to having a perceptual system that has automatic input of all environmental stimuli but only consciously processes a small portion of those stimuli?

Answers will vary, but a primary advantage is that we can focus our attention on (or attention can be captured by) any stimuli in the environment because all are being received. Thus, we have the ability to consciously process any stimulus in our environment.

· 3.4. Can you think of a situation where your perception of your environment did not match the reality of the environment? Why do you think that error occurred?

Answers will vary based on personal experiences. The illusions described in the chapter provide some examples of these errors.

· 3.5. Explain what it means to interpret scenes based on cues present in those scenes.

This describes the computational approach to the study of perception. Cues in the stimuli such as basic features, linear perspective, and retinal size help us interpret the size and distance of objects in the environment and also help us identify those objects.

· 3.6. In what way do illusions illustrate the normal processes of perception?

Because we use cues to interpret stimuli, those cues can sometimes lead to an inaccurate interpretation when they conflict with or are not an accurate representation of the environment.

· 3.7. You see a light approaching on the road at night. According to the likelihood principle, which of the following are you most likely to perceive: (a) a deer crossing the road wearing a headlight, (b) a UFO, or (c) an approaching car? Explain your answer.

In this situation, the most likely object causing this stimulus is (c) an approaching car. The likelihood principle states that we interpret stimuli based on the most likely event.

· 3.8. In the scene in Photo 3.4, describe some cues you can use to determine that the front of the pot is closer to you than the cat.

The retinal image size of the pot is larger than the retinal image of the cat. The cat is also higher in the photo; thus, its height in the visual field may help us determine that it is farther away.

· 3.9. People report a “moon illusion” such that the full moon appears larger when it is lower in the sky and close to the horizon than when it is high in the sky and above us. Using what you learned about the use of cues in this section, why do you think the moon illusion occurs?

One possible explanation of this illusion is that we misinterpret the size of the moon based on a comparison with the retinal images of objects near the horizon (e.g., buildings and trees that can be seen along with the moon when it is low in the sky). When the moon is high in the sky, there are typically no other objects to compare it with. However, the explanation of the moon illusion is still debated within perception research, so there is no one right answer to this question.

· 3.10. How does the Gestalt approach to perception differ from the computational approach to perception?

The Gestalt approach to perception focuses almost entirely on top-down processing in the form of organizational principles of the world that we use to interpret stimuli in the environment. Adding cues or features together, as in the computational approach, is seen as providing an incomplete perception of objects and scenes.

· 3.11. How is top-down processing involved in the Gestalt approach to perception?

Top-down processing is involved in the use of knowledge about how the world is organized. We use this knowledge to mentally organize scenes (e.g., by proximity, similarity).

· 3.12. Look around your environment and describe some examples of good continuation in the objects around you.

Answers will vary.

· 3.13. Consider the moon illusion described in Stop and Think 3.9. Would the Gestalt approach to perception explain this illusion differently than the computational approach? Why or why not?

The Gestalt approach would provide a different explanation of this illusion because it would not consider cues such as retinal image size to explain the illusion.

· 3.14. How do perception/action approaches to cognition differ from computational approaches?

Perception/action approaches consider perception as a means to achieve behavioral action goals.

· 3.15. What is an affordance?

An affordance is a possibility for behaviors in a given environment.

· 3.16. I am looking at the lilac tree in bloom outside my window. I immediately imagine going out and smelling the flowers. Explain how my perception of the lilac flowers fits a perception/action approach.

Answers will vary but should include some description of an action goal (e.g., smelling the flowers).

· 3.17. Would a perception/action researcher be interested in explaining the moon illusion described in Stop and Think 3.9? Why or why not?

A perception/action researcher would only be interested in this illusion in terms of any behaviors it might influence.

· 3.18. Reconsider the scenario presented at the beginning of the chapter where you are walking across your crowded campus. How would each of the three approaches describe perception in this situation?

Answers will vary.

Student Study Site

edge.sagepub.com/mcbridecp2e

SAGE edge offers a robust online environment featuring an impressive array of free tools and resources for review, study, and further exploration, keeping both instructors and students on the cutting edge of teaching and learning.
