
Psychology: an introduction (Oxford Southern Africa) - Leslie Swartz 2011


Attention
Cognitive psychology

Jonathan Ipser

CHAPTER OBJECTIVES

After studying this chapter you should be able to:

•explain Broadbent’s filter theory of attention

•explain Treisman’s attenuation theory of attention

•explain Treisman’s feature-integration theory

•explain Duncan’s theory of selective visual attention

•compare early-selection models of attention with late-selection models of attention

•contrast top-down control of attention with bottom-up control of attention

•discuss what evidence there is for multiple attention systems

•understand which parts of the brain are involved in attention.

CASE STUDY

Melinda had recently received her driver’s licence, and as a reward her parents bought her a small car. The car was supposed to be one of the safest on the road, and included a hands-free cellphone kit. Melinda could hardly wait to take it for a drive and show it off to her friends. In fact, she was so excited that she decided to phone her best friend while still on the road. She had just had time to tell her friend that she was on her way when a 4×4 came out of nowhere, swerved and drove into the front of her car. It turned out that Melinda had driven straight through an intersection without even noticing the stop sign.

She was later told by the police that if the owner of the 4×4 had not been as quick to react by swerving at the last minute, the accident would have been a lot worse. On reflection, Melinda realised just how dangerous the situation had been. Not only was she an inexperienced driver, but she was not at all familiar with driving the car she had been given. She was not able to pay attention to driving the car, watching the road and talking to her friend all at the same time. She decided that in future she would wait until she stopped driving before using her cellphone.

Introduction

How many times in the past have you been told to pay attention? Every day, you are exposed to thousands of sounds, sights and smells, and yet you are frequently expected to be able to select a few for attention while ignoring the others. A simple exercise will reveal how important this ability is. Try focusing on one sound in your immediate environment, such as the ticking of a nearby clock or the singing of a bird. Now, at the same time, try to listen to a second sound. And then a third, and a fourth. Not easy, is it? Now imagine that you are expected to do this for both what you can hear and what you can see, at the same time, and you should begin to appreciate how important the ability to attend selectively to certain stimuli over others is. Finally, imagine trying to do the same exercise while crossing a busy road.

Attention is intimately involved with many aspects of our mental life, including perception, memory and ultimately, consciousness. It is impossible to imagine a world without it. In the words of William James, one of the most influential of the early theorists of attention: ’[M]y experience is what I agree to attend to — without selective interest, experience is an utter chaos’ (1890, p. 402).

Given the broad scope of research into attention, this chapter can only provide a general outline of some of the major theories of attention, and the findings that have motivated changes to these theories. It is divided into two general sections. The first section outlines the different theories that have been proposed regarding the mechanisms underlying attention. The second section explores issues that cut across these different theories.

Mechanisms underlying selective attention

Selective attention refers to those processes involved in the orientation towards, and the selection of, certain stimuli over others. It is often described as involving a bottleneck. A bottleneck is a useful metaphor, as it captures the idea that people are only able to attend to a small proportion of the information available to them. Only some of the information gets through the bottleneck for subsequent processing. Theories of attention that are based on this idea are known as information-processing models of attention (or limited-capacity models of attention). The following section will review the major information-processing theories.

Figure 12.1 Schematic diagram of Broadbent’s filter theory of attention

Broadbent’s filter theory of attention

Philosophers have speculated for centuries on the nature of attention, but it was only in the latter half of the 20th century that formal models of attention were proposed. One of the most influential of these was introduced in 1958 by Donald Broadbent, one of the pioneers of attention research (Eysenck & Keane, 2010). He argued that attention consists of two separate stages. In the first stage, different sources of information or channels are automatically distinguished from one another by their perceptual characteristics, such as tone and volume. In the second stage, a selective filter uses these characteristics to allow one of the channels through for the processing of its meaning, while restricting access to the other channels. This model became known as the filter theory of attention.
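
Broadbent’s two stages can be pictured as a simple processing pipeline. The sketch below is only an illustration of that logic, not part of Broadbent’s own formal statement of the theory: the channel properties, names and the selection rule are all invented for the example.

```python
# Illustrative sketch of Broadbent's filter theory as a two-stage pipeline.
# The channel properties and the selection rule are invented for the example;
# Broadbent's model was a verbal 'box-and-arrow' theory, not a program.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str        # e.g. 'left ear', 'right ear'
    pitch: str       # perceptual characteristic (e.g. 'low', 'high')
    message: str     # content that would need processing for meaning

def stage_one(channels):
    """Stage 1: all channels are registered in parallel, but only by their
    physical characteristics; no analysis of meaning happens here."""
    return {ch.name: ch.pitch for ch in channels}

def selective_filter(channels, wanted_pitch):
    """Stage 2: the filter uses a physical characteristic (here, pitch) to let
    ONE channel through for semantic processing and blocks the rest."""
    for ch in channels:
        if ch.pitch == wanted_pitch:
            return ch.message    # only this message is analysed for meaning
    return None                  # every other channel is held back at the filter

channels = [
    Channel('left ear', 'low', 'the ship will leave first thing'),
    Channel('right ear', 'high', 'he took an aspirin for his headache'),
]
print(stage_one(channels))                             # physical features only
print(selective_filter(channels, wanted_pitch='low'))  # one channel's meaning
```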

The dichotic listening task

Broadbent went on to support his case by making use of a method known as the dichotic listening task (dichotic means ’two ears’). In the traditional dichotic listening task, people are played different recorded messages in each ear. After the task, they are asked to repeat what they heard (Eysenck & Keane, 2010).

A set of three numbers was played to each ear at the same time, for example 4-7-2 in one ear and 9-5-1 in the other. People were then asked to repeat all six numbers, either ear by ear or pair by pair. In the ear-by-ear case, people would repeat 4-7-2 and then 9-5-1; in the pair-by-pair case, they would repeat 4-9, 7-5, 2-1.

Broadbent found that people found it easier to report all of the numbers from one ear and then the other (the ear-by-ear instruction) than to recite each number in the order in which it was presented to the different ears (the pair-by-pair instruction). He interpreted this as evidence that there is a cost involved in switching from one channel to another, with only one channel being attended to at a time. Listening to all the numbers presented to one ear before switching to the other ear reduces the number of times such switching has to take place, and hence increases the ease with which people can perform the task.
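
The two report orders can be made concrete in a short sketch, using the example digit lists above. The two functions are purely illustrative and are not part of Broadbent’s procedure; the point is that the ear-by-ear order requires only one switch between channels, while the pair-by-pair order requires a switch for every pair.

```python
# Illustrative sketch of the two report orders in Broadbent's dichotic task.
left_ear = [4, 7, 2]    # digits presented to the left ear
right_ear = [9, 5, 1]   # digits presented simultaneously to the right ear

def ear_by_ear(left, right):
    # Report everything heard in one ear, then everything in the other:
    # only one switch between channels is needed.
    return left + right                                  # [4, 7, 2, 9, 5, 1]

def pair_by_pair(left, right):
    # Report each simultaneous pair in order of presentation:
    # a switch between channels is needed for every pair.
    return [digit for pair in zip(left, right) for digit in pair]  # [4, 9, 7, 5, 2, 1]

print(ear_by_ear(left_ear, right_ear))
print(pair_by_pair(left_ear, right_ear))
```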

Shadowing

The influence of attention in selecting information for additional processing was further illustrated by adapting the dichotic listening task. People were instructed to repeat aloud what was said in one of their ears (the attended message), while ignoring what was said in the other (the distracter message or unattended message). This procedure, known as shadowing, was meant to ensure that people did not pay attention to the unattended ear. The results of these studies were generally consistent with the predictions of Broadbent’s theory. Participants demonstrated very little awareness of what was played in the unattended ear. Although they were frequently aware of differences in the pitch or volume of the voices, allowing them to identify the gender of the speaker, they were not able to describe what was said in the distracter channel, and even failed to notice when the messages played to the different ears were in different languages. However, participants who were practised at the task did much better, detecting 67 per cent of digits in the non-shadowed message, compared to 8 per cent detected by new participants (Eysenck & Keane, 2010).

Figure 12.2 The first part of Broadbent’s dichotic listening procedure: repeating the numbers ear by ear

Figure 12.3 The second part of Broadbent’s dichotic listening procedure: repeating the numbers pair by pair according to when they were heard

The sensory memory store

If Broadbent’s model of attention is correct, and people can only attend to one channel of information at a time, then how does one explain the finding that people are sometimes able, in the absence of shadowing, to repeat short sentences as well as lists of numbers even when they are presented simultaneously in different ears? Broadbent explained this by arguing that stimuli are temporarily kept in a sensory memory store, so that a person can retrieve information played back to both ears, as long as not too much time has elapsed and the information has not begun degrading. The auditory sensory memory store is known as echoic memory and the visual sensory memory store is known as iconic memory.

How many channels can penetrate the selective filter?

Perhaps you remember an occasion when you were listening to a friend, only to be distracted by what someone else in the same room said in another conversation? This phenomenon is so well known that it has even been given a name: the cocktail party effect (Cherry, 1953). This effect could not happen if you were unable to be aware of what was said outside your focus of attention, so it calls into question Broadbent’s idea of a selective filter that allows only one channel through for processing.

Other studies have also demonstrated that information regarded by Broadbent as unattended can seep through the selective filter. For instance, Lackner and Garrett (1972) found that when people were asked to paraphrase sentences that could be interpreted in one of two ways, they were more likely to do so in a way that was consistent with the meaning of a word in the distracter channel. Corteen and Wood (1972) demonstrated that people who were taught to associate the names of cities with a mild electric shock would sweat more heavily when city names were later presented to the unattended ear than when neutral words were presented. This physical response was observed despite people having no conscious recollection of the words in the unattended channel. Even more interestingly, this response manifested itself even when city names that were not part of the original training set were presented, suggesting that the words were processed for meaning.

Gray and Wedderburn’s revised dichotic listening task

It has been argued that the dichotic listening task used by Broadbent was not well suited to test his theory. The presentation of number sequences is not a very sensitive measure of the extent to which people process distracter items for meaning. Instead of relying solely on number lists, Gray and Wedderburn (1960) devised a task where people were presented with either numbers or words in each of their ears. They were then asked to repeat what they had heard ear-by-ear or category-by-category (in other words, either by grouping the numbers together or by grouping the words together). Gray and Wedderburn found that people were able to categorise items presented to both ears as either numbers or words. In fact, this task was as easy for them as reporting each stimulus ear by ear. Because categorisation requires some analysis of the meaning of the items, this calls into question Broadbent’s theory that we select stimuli for attention based on physical characteristics rather than meaning.

Figure 12.4 Cherry (1953) showed that people in conversation with someone can be distracted by another conversation in the room

Figure 12.5 The first part of Gray and Wedderburn’s dichotic listening procedure: repeating the items ear by ear

Figure 12.6 The second part of Gray and Wedderburn’s dichotic listening procedure: repeating the items category by category

More recent research (Lachter, Forster & Ruthruff, 2004) has led to renewed interest in Broadbent’s approach. Lachter et al. (2004) examined how the supposedly unattended message comes to be processed, arguing that this happens when people briefly shift their attention to its trace in the temporary sensory store.

SUMMARY

•Every day, people are exposed to thousands of sounds, sights and smells; the capacity to select a few for attention while ignoring the others (selective attention) is essential to human functioning. Attention is involved in many aspects of cognition, including perception, memory and consciousness.

•Selective attention is like a bottleneck. Only some of the information gets through the bottleneck for subsequent processing. Theories of attention that are based on this idea are known as information-processing models of attention.

•Broadbent’s filter theory of attention argues that attention consists of two separate stages. In the first stage, different channels of information are distinguished from one another based on their characteristics; in the second stage, a selective filter uses these characteristics to allow one of the channels through for the processing of meaning.

•Broadbent used evidence from the dichotic listening task to show that there is a cost involved in switching from one channel to another, with only one channel being attended to at a time.

•The dichotic listening task was adapted by means of shadowing: people were asked to attend to the message in one ear only and repeat it aloud. This showed that they had very little awareness of what was played in the unattended ear.

•However, Broadbent argued that people have a brief sensory memory store which can retain information for a short period of time. The auditory sensory memory store is known as echoic memory and the visual sensory memory store is known as iconic memory.

•The cocktail party effect calls into question Broadbent’s idea of a selective filter; further research has shown that people can process information for meaning, even if it is unattended.

•Gray and Wedderburn revised Broadbent’s dichotic listening task and found that people were able to categorise items presented to both ears as either numbers or words, thus disputing Broadbent’s theory.

Treisman’s attenuation theory of attention

Despite the weaknesses that have been identified in Broadbent’s original formulation of the filter theory, many subsequent models of attention are based on the same idea that attention consists of an automatic phase followed by a slower selection phase. In the automatic phase, multiple sources of information are processed almost instantaneously (parallel processing). In the slower selection phase, stimuli are processed one at a time (serial processing).

Anne Treisman (1960), for instance, argued in her attenuation theory of attention that perceptual or sensory information from unattended channels makes it through for further processing, but in a much weakened, or attenuated form (Quinlan & Dyson, 2008). Although people may not be conscious of information they have not been focusing on, this information can still rise to awareness in certain circumstances. Whether this will occur depends on the intensity of the information relative to a certain threshold, with a lower threshold set for situationally relevant or familiar information (such as hearing your own name).

Treisman conducted a series of experiments to demonstrate that attenuated information is available for further processing when required. In one such study, sentences in a dichotic listening task were swapped at the midpoint from one ear to the other. Some people would shadow an entire sentence, suggesting that they had switched attention to the other ear at the point where the sentence swapped over. For instance, the words ’the ship will leave first/for his headache’ might be played in the shadowed ear and ’he took an aspirin/thing in the morning’ in the unattended ear. Although the words in the first half of the sentence played to the unattended ear would have been attenuated, the word aspirin would have lowered the threshold for the word headache, thus facilitating the crossover from the attended ear to the unattended ear.
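
Treisman’s idea of attenuated signals and word-specific thresholds can be sketched roughly as follows. The attenuation factor and threshold values are invented purely to illustrate the logic; the theory itself does not specify numbers.

```python
# Illustrative sketch of Treisman's attenuation idea: unattended input is
# weakened (attenuated), and a word reaches awareness only if its signal
# exceeds a threshold; relevant or primed words have lower thresholds.
# All numerical values here are invented for illustration.

ATTENUATION = 0.3    # unattended channels come through at reduced strength

def signal_strength(attended):
    return 1.0 if attended else ATTENUATION

def reaches_awareness(word, attended, thresholds, default_threshold=0.8):
    threshold = thresholds.get(word, default_threshold)
    return signal_strength(attended) >= threshold

# Hearing 'aspirin' in context primes related words such as 'headache',
# i.e. lowers their thresholds; your own name has a permanently low threshold.
thresholds = {'your own name': 0.2, 'headache': 0.25}

print(reaches_awareness('weather', attended=False, thresholds=thresholds))   # False
print(reaches_awareness('headache', attended=False, thresholds=thresholds))  # True
print(reaches_awareness('weather', attended=True, thresholds=thresholds))    # True
```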

Treisman’s feature-integration theory

Since the pioneering work of Broadbent, researchers have increasingly moved from auditory to visual mechanisms of attention. One of the most influential theories in vision research is Treisman’s feature-integration theory (Treisman & Gelade, 1980). Many of the findings supporting this theory are based on the visual search task. An example of such a task is provided in Figure 12.7. Try to see how long it takes you to detect the odd items in each of the rectangles. Which of these seemed to take the longest time?

Figure 12.7 Can you find the odd one out in (a), (b) and (c)?

In exercises similar to those presented in Figure 12.7 (a) to (c), Treisman instructed participants to identify the stimulus that was different from the others (the distracters). She discovered that people were slower at detecting these target stimuli in exercises similar to (c) than in those similar to (a) and (b).

Treisman reported that when the target stimulus and the distracters differ on only one dimension, such as shape in (a) or colour in (b), then the target stimulus seems to leap out immediately, with little effort on the part of the participant. Known as the pop-out effect, this happens irrespective of how many distracter stimuli there are. But when targets and distracters differ on a combination of features, for example, shape and colour, as in (c), the task of identifying the target stimulus takes more time. With every additional distracter that is added, people take longer to identify the target stimulus.

Treisman interpreted the pop-out effect as evidence that perceptual features (such as shape or colour) are processed automatically and in parallel, with little conscious input from the participant. Combining the features is regarded as more resource intensive. According to the feature-integration theory, separate features of a stimulus are associated with its position through the use of a location map. It is only once these features have been combined at the spatial address specified by the location map that a person can be regarded as having identified the stimulus. The binding of features to a target can only take place one item at a time (serially), which helps to explain the sensitivity of the task to additional distracters.
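
The predicted difference between parallel feature search and serial conjunction search can be expressed as a toy model of search time. The millisecond constants below are invented for illustration and are not estimates from Treisman and Gelade’s data.

```python
# Toy model of visual search times under feature-integration theory.
# Feature ('pop-out') search is parallel, so predicted time is roughly flat;
# conjunction search binds features to locations one item at a time (serially),
# so predicted time grows with the number of items on display.
# The millisecond constants are invented for illustration only.

def feature_search_time(n_items, base_ms=400):
    # Parallel search: the odd feature registers regardless of set size.
    return base_ms

def conjunction_search_time(n_items, base_ms=400, per_item_ms=50):
    # Serial search: on average about half the items are checked before
    # the target is found.
    return base_ms + per_item_ms * n_items / 2

for n in (4, 16, 64):
    print(n, feature_search_time(n), conjunction_search_time(n))
# The feature-search column stays constant while the conjunction-search
# column increases with the number of items.
```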

Feature-integration theory has been criticised for not accounting for all of the findings made using visual search tasks, such as evidence that the distinctiveness of the target and distracters can affect processing time. For example, try to see how long it takes you to identify the L amongst the Ts in the three rectangles in Figure 12.8. Duncan and Humphreys (1989) found that the L popped out more quickly when the T-shaped distracters were upright or on their sides, as in (a) and (b), but not when they were oriented in both directions, as in (c). This demonstrates that the nature of the distracters, as well as of the target, needs to be considered in attention tasks.

Figure 12.8 Try to find the L in (a), (b) and (c)

Duncan’s theory of selective visual attention

An alternative to the feature-integration theory of visual attention was proposed by Duncan (1984). According to his theory of selective visual attention, the basic units of analysis in attention are objects, and not features of objects. This theory can explain how people who have been presented with two overlapping shapes are able to selectively attend to a single shape and ignore the other, despite the fact that they occupy the same location in space.

SUMMARY

•Despite the criticism of Broadbent’s theory, later models used the same idea of two phases of processing (a rapid, automatic, parallel processing phase and a slower selection phase).

•Treisman’s attenuation theory of attention argued that sensory information from unattended channels makes it through for further processing, but in a much-weakened form. Awareness will depend on a threshold, which will be lower for situationally relevant or familiar information.

•Following Broadbent’s work, researchers increasingly studied visual rather than auditory mechanisms of attention. One of the most influential of such theories is Treisman’s feature-integration theory.

•Feature-integration theory demonstrated the pop-out effect. When a target stimulus and distracters differ on only one dimension, then participants are easily able to identify the target stimulus. But when targets and distracters differ on a combination of features, identifying the target stimulus takes more time. With every additional distracter that is added, people take longer to identify the target stimulus. This is because these features need to be processed serially (rather than simultaneously).

•Feature-integration theory has been criticised for not accounting for all of the findings made using the visual search tasks.

•Duncan’s theory of selective visual attention argued that the basic units of analysis in attention are objects, rather than features of objects. This object-oriented attention theory explains how people presented with two overlapping shapes are able to attend selectively to a single shape and ignore the other, despite the fact that they occupy the same location in space.

12.1 THE RAPID SERIAL VISUAL PRESENTATION (RSVP) TASK

The effect of cognitive load on attention has been demonstrated using the Rapid Serial Visual Presentation (RSVP) task. In this task, people are presented with a series of items (e.g. 20) that typically follow one another at intervals of approximately 100 milliseconds. They are asked to report whether they are able to detect two targets they are shown before the task (e.g. a white X and a black X) among the other items (e.g. the letter O). While people generally report seeing the first target (T1), the second target (T2) is only detected if presented very soon after the first (< 100 ms) or after a longer delay of over half a second. This gap, in which the second target goes undetected, is known as the attentional blink (AB).

Figure 12.9 Detecting two targets shown before the task (a white X and a black X) among the other items (the letter O)

One of the major interpretations of the existence of the attentional blink is that it represents the initial rapid processing of sensory features of T1 for identification, followed by a prolonged secondary phase of processing in which the sensory characteristics of an object are consolidated with its semantic characteristics. This secondary phase, which is necessary for the reporting of the target, has a limited capacity. As a result, T2 has to remain in the first phase of processing if it is presented while T1 is still undergoing consolidation, and will decay if it has to wait longer than the storage limit for visual memory. This late-selectionist interpretation of the attentional blink is supported by the finding that it does not occur if the interval between T1 and the next item in the sequence is increased. In these circumstances, the processing of T1 is less likely to be delayed, leaving sufficient time for the processing of T2. Interestingly, there is some evidence from a study using a variant of RSVP that pop-out effects are also delayed when substituted for the second target, which suggests that processes regarded as automatic are also affected by cognitive load (Joseph, Chun & Nakayama, 1997).
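
The timing pattern described in this box can be summarised in a short sketch. The 100 ms and 500 ms boundaries simply follow the values quoted above; real attentional-blink data show a graded curve rather than sharp cut-offs.

```python
# Illustrative sketch of the attentional blink: whether the second target (T2)
# is likely to be reported depends on how long after the first target (T1) it
# appears. The 100 ms and 500 ms boundaries follow the values quoted in this
# box; real data show a graded curve rather than sharp cut-offs.

def t2_likely_reported(lag_ms):
    if lag_ms < 100:    # T2 arrives while T1 is still being identified and
        return True     # can 'slip in' with it
    if lag_ms > 500:    # T1 consolidation has finished; capacity is free again
        return True
    return False        # the attentional blink: T2 decays while waiting

for lag in (80, 200, 300, 450, 600):
    print(lag, 'ms:', 'reported' if t2_likely_reported(lag) else 'missed (blink)')
```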

Theorists of object-oriented attention argue that in the initial stages of processing, perceptual stimuli are automatically grouped into objects according to certain organisational principles. These include the grouping of objects on the basis of their similarity and how close they are to one another, as well as whether grouping separate items as a single object results in a pattern that is simpler, more continuous and has greater closure. The same grouping principles have been used to explain how people can increase their search speed in visual search tasks by only searching for a target in a smaller subset of items.

Issues of general importance to attention

Although the theories of attention described so far are often stated in all-or-nothing terms, there is reason to suspect that each contains an element of truth regarding the nature of attentional processes. This will become evident in the following section, in which issues that cut across different theoretical approaches are discussed.

Early-selection versus late-selection models of attention

The filter theory of attention, the attenuation theory of attention, the feature-integration theory and the theory of selective visual attention are all examples of models that propose that we select what to attend to early on in the process of paying attention. According to these early-selection models, channels are selected on the basis of perceptual features prior to semantic analysis. However, late-selection models propose that all channels undergo semantic analysis for meaning, but only one is selected for retrieval. Late-selection theorists, such as Deutsch and Deutsch (1963), argue that all stimuli are processed on a perceptual level, but only selected information is then stored in memory or used in response selection (Eysenck & Keane, 2010).

Much of the evidence presented so far could support either an early perceptual bottleneck or a later semantic bottleneck, so how would you choose between them? In fact, this may not be necessary. Perhaps the participants in studies inspired by Broadbent appear to attend to a single channel while ignoring others because the difficulty of the task means that they are forced to allocate all of their attention to a selected channel. In other words, unfamiliar tasks place a high cognitive load on the participant, with cognitive load defined as the amount of information that is processed relative to processing capacity. In these circumstances, it is no surprise that people have difficulty processing multiple channels for meaning. This was the point that was shown in Lachter et al.’s (2004) research. This also fits with findings that a higher cognitive load seems to result in earlier selection on the basis of perceptual features, while a lower cognitive load shifts selection to a later post-perceptual stage. Such flexible selection models of attention have been proposed by Vogel, Woodman and Luck (2005) and Lavie (1995).
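
A minimal sketch of this flexible-selection idea follows, assuming a notional load cut-off; the 0.7 value is invented for illustration and is not taken from the models cited above.

```python
# Minimal sketch of flexible selection: under high cognitive load selection is
# forced early, on perceptual features; under low load spare capacity allows
# unattended items to be processed for meaning as well. The 0.7 cut-off is an
# invented illustration, not a value from the models cited above.

def selection_stage(cognitive_load):
    """cognitive_load: fraction of processing capacity taken up by the task."""
    return 'early (perceptual)' if cognitive_load > 0.7 else 'late (semantic)'

print(selection_stage(0.9))   # demanding, unfamiliar task -> early selection
print(selection_stage(0.3))   # easy, well-practised task -> late selection
```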

Top-down versus bottom-up control of attention

Imagine that you are attending a lecture on statistics. Suddenly the sound of a car backfiring outside causes you to look out of the window. You then quickly turn your attention back to what the lecturer is saying, as you are determined to do well in this course.

In this scenario, you have demonstrated two forms of attention. When a sensory stimulus attracts your attention (such as in the case of the car backfiring), you are displaying what is known as exogenous or bottom-up control of attention. This is an automatic response to a stimulus that might be important for you to know about. On the other hand, when you decide to listen to what the lecturer is saying, you have shown endogenous, or top-down control of attention. This is a form of attention over which you have strategic control and it is based on your past experience and expectations (Eysenck & Keane, 2010).

12.2 DIVIDED ATTENTION

Divided attention is concerned with the question of whether people can simultaneously pay attention to multiple sources of information. People who are asked to perform two tasks generally find it easier the greater the difference between the inputs used, and between the types of responses required.

For example, in a classic study by Spelke and colleagues (1976), two students, John and Dianne, were each asked to read a short story while writing down words dictated to them by the experimenter. As these tasks both involved language skills, they tended to interfere with one another, with the result that their reading speed and comprehension both suffered. Nevertheless, after six weeks of practising this dual task for five hours a week, their reading performance in the dual task condition was just as good as when they performed the reading task on its own. With additional practice, they were also able to detect relations between words and categorise them, all while maintaining reading speed and comprehension. But, despite evidence that practice improves performance in dual tasks, there does appear to be an upper limit on how much difference practice will make. One interesting example of a job requiring extremely divided attention is that of an air traffic controller, who is required to watch and coordinate the movements of multiple aircraft arriving and departing on a computer screen while at the same time listening and responding to their radio calls. Fortunately, with practice and experience, some aspects of these kinds of tasks can become more automatic, relieving some of the demands on attention (Smith & Kosslyn, 2009).

This aspect of attention has major implications for the present day, as more and more people think they can drive quite competently while using their cellphones. However, in the US it was found that cellphone users were four times more likely to be involved in a collision (Smith & Kosslyn, 2009). In addition, teenagers who used cellphones while driving were more likely to be involved in rear-end collisions (Neyens & Boyle, 2007). Many people do not realise that this impairment in driving performance is observed irrespective of whether a hand-held or hands-free phone kit is used. Levy et al. (2006, in Quinlan & Dyson, 2008) assessed driving performance in a simulator. Participants were told to follow a car and to apply the brakes when it did; the speed of the lead car was varied during the experiment. Levy et al. found that performing a concurrent task seriously impaired driving, regardless of whether that task was verbal or manual (Quinlan & Dyson, 2008).

12.3 ATTENTION TO FEAR-PROVOKING STIMULI

Researchers are increasingly using an evolutionary framework in the study of cognition, which leads them to assess the importance of stimuli with regard to people’s survival and well-being. Researchers in this area have often modified traditional information-processing tasks to include fear-provoking content, on the understanding that stimuli that provoke fear are likely to be important to our survival. For instance, in the emotional Stroop task, differently coloured images of fearful or angry faces are displayed instead of words. People are instructed to name the colour of the images. Research using this task has consistently found a bias towards attending to the fear-provoking images (the fearful faces). This bias is even greater in studies of people who already possess a heightened sense of fear. This is the case in individuals who have been exposed to traumatic events, such as torture, rape and natural disasters, and who subsequently have been diagnosed with post-traumatic stress disorder (PTSD). A bias in attention occurs in PTSD patients even when the images are displayed for such a brief period that people are not aware of having seen them.

Whether your attention is eventually allocated to a stimulus depends both on the external characteristics of the stimulus (e.g. the shape and colour of an object) and on your goals at a particular time. You can override these stimuli if you are motivated to do so. This was the case in the earlier example, in which you turned your attention back to what the lecturer was saying despite the noise outside. This was demonstrated in an even more extreme form in a classic experiment by Simons and Chabris (1999), in which people were asked to watch a video of a basketball game, and instructed to count the number of times members of one of the teams passed the ball to each other. While they were doing so, a person dressed in a gorilla suit walked from one side of the screen to the other. Only 42 per cent of the people who participated in this study detected the person in the gorilla suit, although he was visible for a number of seconds. The failure to detect changes in a scene is referred to as change blindness, and is an example of extreme top-down control of attention.

There is also evidence that attention can be overridden by bottom-up processes. Victims of crime who have been held up at gunpoint are often better able to give a good description of the gun than of the person holding it, despite the fact that the latter information is likely to be more useful when trying to identify the perpetrator at a later time. The cocktail party effect is an example of bottom-up control of attention, where the characteristics of the stimuli are more likely to attract your attention if they are relevant to you.

One attention system versus multiple attention systems

Most of the theories described in this chapter have treated attention as one system. However, there is a substantial body of evidence suggesting that it might be more accurate to consider attention as consisting of multiple independent subsystems. Posner and Boies (1971) identified three components of attention:

•An alerting system (which includes alertness for a forthcoming stimulus and sustained attention)

•An orienting system (which includes overt and covert orientation towards specific stimuli, and the selection of specific stimuli)

•An executive system (which includes top-down control and conflict resolution).

Evidence that these are separate systems comes from three sources:

•There is little relationship between people’s performance on the different sections of the attentional network test (ANT), which was specifically designed to independently assess each of these subsystems.

•The activation of independent brain circuits when these subtasks are carried out.

•Genetic differences underlying performance in the individual subsystems.

SUMMARY

•The filter theory of attention, the attenuation theory of attention, the feature-integration theory and the theory of selective visual attention are all examples of early-selection models of attention. Late-selection models propose that all channels undergo semantic analysis for meaning but only one is selected for retrieval. However, whether selection occurs early or late may depend on the cognitive load imposed by the task.

•In top-down (endogenous) control of attention, people use strategic control to direct their attention; in bottom-up (exogenous) control of attention, a sensory stimulus demands their attention in an automatic response because the stimulus might be important to know about.

•Most of the theories in this chapter treat attention as one system; however, there is evidence to suggest that attention consists of multiple independent subsystems. Posner and Boies identified three components of attention: an alerting system, an orienting system and an executive system.

12.4 STUDYING THE BRAIN TO UNDERSTAND ATTENTION (ALSO SEE CHAPTER 7)

By assessing the performance of people who have damaged particular regions of their brains, psychologists have for a long time tried to identify the parts of the brain necessary for certain cognitive abilities. One region that has been consistently associated with attention using this method is the parietal cortex. Treisman (1998) demonstrated that RM, a patient with damage to this part of the brain, was not able to link colours reliably to words.

In addition to this, unilateral neglect, which is an inability to attend to one field of vision, has been linked to damage to either one of the parietal lobes. Patients with this condition act as if they are not able to see anything in the field of vision on the opposite side to their lesions (the left field of view for right parietal damage, and vice versa). They will only eat the food from the right side of their plates, and when asked to draw or copy pictures of a clock, will only draw the digits on one half of the clock face. There is also evidence that people with this disorder have deficits in their ability to attend to objects. When asked to draw a landscape scene of a house and outlying buildings, patients with unilateral neglect only draw the left or right half of all of the objects in the scene. This is consistent with the object-oriented account of visual attention.

Findings from brain lesion studies can now be supplemented by relatively new brain-imaging techniques. Changes in brain activity can now be measured by technologies such as positron emission tomography (PET), which detects changes in blood flow, and functional magnetic resonance imaging (fMRI), which measures the oxygenation of blood in the brain. These allow researchers to study which parts of the brain are activated when people pay attention to certain classes of stimuli. (Interestingly, objects that are not the central focus of attention still activate the sensory regions of the brain, but to a lesser degree than those that are the central focus, a finding that provides some support for early-selection theories of attention.)

Figure 12.10 Unilateral neglect: patients draw only one half of a figure

When people have purposefully directed their attention to particular items, activity in the prefrontal regions of the brain has been detected. The top-down control of attention required for these tasks is consistent with the involvement of the prefrontal cortex in planning actions and sequences of behaviours. Feedback loops from the prefrontal cortex (associated with strategically relevant stimuli) and the amygdala (associated with emotional stimuli) to the visual regions of the brain are thought to underlie the enhancement of particularly salient stimuli for further processing.

The timing of different components of attention can also be examined by means of electroencephalography (EEG). In EEG, recordings are made of electrical activity in the brain. EEG research has discovered that brain waves that are evoked by sensory stimuli, known as event-related potentials (ERPs), are weaker for unattended than attended stimuli, but are still present. (Therefore these ERP findings also support early-selection models of attention.)

Conclusion

This chapter has attempted to provide a brief summary of the major theoretical models of attention. In tracing the development of these different theories, it has become clear that they serve to highlight complementary aspects of attention, rather than any single model providing an all-encompassing description of it. This is hardly surprising, given the highly complex nature of this vital cognitive ability.

12.5 MULTITASKING

Christine Rosen (2008, p. 105) rather wistfully quotes Lord Chesterfield’s comment in the 1740s that ’attention to one object is a sure mark of a superior genius’ . But, in the 2000s, it seems that the pressure of life has led to many seeing multitasking as a necessity. People have always had the capacity to pay attention to several things at the same time (Wallis, 2006); but, as technology has developed at lightning speed, multitasking using multiple electronic gadgets has become the norm, especially for the young people of Generation M (Wallis, 2006). At first, multitasking was widely admired and media and various experts tried to teach people how to do it better (Rosen, 2008). However, multitasking has some nasty side-effects, ranging from fatal car crashes due to texting to the loss of human connection in families and society as a whole. Wallis (2006) reports on work by UCLA anthropologist Elinor Ochs, which highlights the things that are not happening in families due to absorption with multiple media: children do not greet their parent(s) when they return from work, families do not eat together, and important conversations do not happen. Rosen (2008) notes that multitasking is really about paying attention and that William James believed that paying attention required discipline and will. Nowadays, ’our collective will to pay attention seems fairly weak’ (Rosen, 2008, p. 110) and this has potentially weighty and severe consequences. As Rosen (2008, p. 110) concludes, ’this state of constant intentional self-distraction could well be of profound detriment to individual and cultural well-being’.

KEY CONCEPTS

alerting system: the proposed component of attention that relates to alertness and sustained attention

amygdala: an area of the forebrain that is associated with emotional stimuli

attended message: a message that a person is paying attention to

attentional blink (AB): the ’gap’ in attention, lasting from approximately 100 ms to 500 ms, that manifests after a person has detected a target presented in the RSVP task

attenuation theory of attention: the theory that information from both attended and unattended channels is processed, but that information from unattended channels is processed to a lesser extent

automatic phase: according to Treisman, the phase when multiple sources of information are processed almost instantaneously

bottom-up (exogenous) control of attention: the automatic attraction of a person’s attention by a sensory stimulus

change blindness: the failure to detect changes in a scene

channels: different sources of information that, according to Broadbent, are automatically distinguished from one another by their perceptual characteristics

cognitive load: the amount of information that is processed relative to processing capacity

dichotic listening task: a method used in attention research where people are played different recorded messages in each ear and afterwards asked to repeat what they heard

distracter: either a stimulus that is not intentionally attended to, or an item in a task that serves only as a contrast to the target

divided attention: the state when people can simultaneously pay attention to multiple sources of information

echoic memory: a form of short-term auditory memory

electroencephalography (EEG): a brain-imaging technique that shows the electrical activity in the brain

event-related potentials (ERPs): brain waves that are evoked by sensory stimuli

executive system: the proposed component of attention that relates to top-down control and the resolution of conflicting demands

feature-integration theory: the theory that perceptual characteristics or features of stimuli are processed automatically, but that selective attention is required for the combination of these features in identifying objects

filter theory of attention: the model of information processing that proposes that a person distinguishes between different channels and selects one for semantic processing while restricting others

functional magnetic resonance imaging (fMRI): a brain-imaging technique that measures the oxygenation of blood in the brain

iconic memory: a very short-term form of visual memory

information-processing models of attention (limited-capacity models of attention): theories of attention that are based on the idea that people are only able to attend to a small proportion of the information available to them

object-oriented attention: the concept that attention is directed at objects rather than features

orienting system: the proposed component of attention that relates to the orientation towards, and the selection of, specific stimuli

parietal cortex: region of the brain situated between the frontal and occipital lobes that serves to integrate sensorimotor information, and which has been consistently associated with attention

pop-out effect: the effect whereby a target that differs from the distracters on only one dimension is identified immediately, with little effort on the part of the participant

positron emission tomography (PET): a brain-imaging technique that detects changes in blood flow in the brain

prefrontal cortex: an area at the front of the brain that is primarily involved in executive functioning and planning

Rapid Serial Visual Presentation (RSVP) task: a task where people are first presented with a series of items in quick succession, and then are asked to report which of these items they detected

selection phase: according to Treisman, the phase when sources of information are processed one at a time

selective attention: those processes involved in the orientation towards, and the selection of, certain stimuli over others

selective filter: the cognitive mechanism that, according to Broadbent, selects a channel for processing based on characteristics of its stimuli, while restricting access to the other channels

semantic analysis: analysing a sensory input to determine its meaning

sensory memory store: the cognitive mechanism that briefly keeps sensory information for possible retrieval and processing

shadowing: the process whereby people are instructed to repeat an attended message while ignoring an unattended message

theory of selective visual attention: the theory of attention that proposes that the basic units of analysis are objects

top-down (endogenous) control of attention: the consciously directed and strategic control of attention

unattended (distracter) message: a message in attention tasks to which a person has been instructed not to pay attention

unilateral neglect: a disorder that can result from damage to the left or right parietal lobe, where affected patients act as if they are not able to see anything in the field of vision on the opposite side to their lesions

EXERCISES

Multiple choice questions

1.Which of the following models of attention acknowledge that people are only able to attend to a small proportion of the information available to them?

a)information-processing models of attention

b)top-down models of attention

c)object-oriented models of attention

d)all of the above.

2.In Broadbent’s model of attention, what function does a selective filter have?

a)It initially allows all sources of information through for processing of meaning.

b)It distinguishes sources of information on the basis of their perceptual characteristics.

c)It allows only some sources of information through for semantic processing.

d)Both b and c are correct.

3.In a traditional dichotic listening task, people find it easier to report all of the numbers heard by one ear and then all the numbers heard by the other ear, rather than reciting each number in the order in which it was presented to the ears. Broadbent argued that this was evidence that:

a)each ear is programmed to perceive different types of information

b)a selective filter operates on one ear and not the other

c)switching from one ear to the other decreases people’s efficiency in a listening task.

d)Both a and b are correct.

4.Which of the following effects were produced by shadowing in an adaptation of the dichotic listening task?

a)People were able to identify the gender of the person speaking the distracter message.

b)People were able to identify what language the person speaking the distracter message used.

c)People were able to repeat the distracter message, after repeating the attended message.

d)Both a and b are correct.

5.The attenuation theory of attention was proposed by:

a)Broadbent

b)Cherry

c)Duncan

d)Treisman.

6.Which one of the following is regarded as the most influential theory in vision research?

a)attenuation theory

b)feature-integration theory

c)filter theory

d)object-oriented theory.

7.Early-selection models of attention propose that:

a)all channels are analysed for meaning, but only one is selected for retrieval

b)some channels are processed for meaning, others are processed on the basis of perceptual characteristics

c)before channels are analysed for meaning, they are selected on the basis of perceptual characteristics

d)before channels are analysed for meaning, they are all attended to simultaneously.

8.In an experiment by Simons and Chabris (1999), 42 per cent of people concentrating on a certain aspect of a video did not notice a person dressed in a gorilla suit who walked from one side of the screen to the other. The failure to see the gorilla is an example of:

a)change blindness

b)the attentional blink

c)top-down control of attention.

d)Both a and c are correct.

9.Victims of crime who have been held up at gunpoint are often able to give a better description of the gun than of the person holding it. This phenomenon is evidence of:

a)top-down control of attention

b)bottom-up control of attention

c)change blindness

d)the pop-out effect.

10.Posner and Boies (1971) identified three components of attention. These are:

a)an alerting system, a filter system and an executive system

b)an alerting system, an orienting system and an executive system

c)a filter system, an orienting system and an executive system

d)an alerting system, an orienting system and a processing system.

Short-answer questions

1.Briefly outline the basic tenets of Broadbent’s filter theory of attention.

2.Describe what is meant by divided attention.

3.Compare early-selection models of attention with late-selection models of attention.

4.Contrast top-down control of attention with bottom-up control of attention.