Reasoning and Decision Making

Cognitive Psychology: Theory, Process, and Methodology - Dawn M. McBride, J. Cooper Cutting 2019


Reasoning and Decision Making

Questions to Consider

· How logical are the conclusions you draw?

· Why are some things harder to reason about than others?

· How and when do we make inferences about causal relations?

· What steps do we go through when we make decisions?

· Do we always make the best choices?

Introduction: A Night at the Movies

Suppose it is one of those days when you are running late. You race back to your dorm room because you are supposed to be heading out to a movie with your roommate. You arrive to find that your roommate isn’t there. You call your roommate, but she doesn’t answer. You aren’t sure what to do. Did she leave without you? Should you try to catch up with her at the theater? Or is she running late too? It seems out of character that your roommate would have left without you, but it is also atypical that she would be running late. You reason that if she is not there, then she must have gone to the movie without you. Looking around the room for clues you notice that a movie show time webpage is loaded up on the computer. The showing that you had planned to attend at the Omnimax is listed as sold out. There is a later showing at that same theater, but there is also an earlier showing of the movie at the Palace Theater. Should you wait for your roommate and go see the later showing, or do you leave now and head to the Palace? After some consideration, you reason that your roommate probably saw that the show was sold out and that you were running late, so she decided to head to the other theater to get tickets while they were still available. With this in mind, you grab your coat and head out, hoping to catch up with your roommate at the Palace.

Much of our everyday thinking is made up of reasoning and decision making. Generally we feel that our reasoning processes are logical (we come to the right conclusions) and that our decisions are sound (we make the right choices). However, it turns out that our thinking may not follow the standards of formal logical systems. Under what conditions do we act logically, and when do we deviate? This chapter reviews the theories and research behind how we reason about things like what your roommate probably did and which movie theater you decide to go to.

Our reasoning processes are what allow us to evaluate arguments and reach a conclusion. Cognitive psychologists and philosophers typically distinguish between two broad types of reasoning: deductive and inductive reasoning. Deductive reasoning is often described as making arguments from general information to more specific information. For example, if we know that Vulcans are logical and Spock is a Vulcan, we can conclude that Spock is logical. In contrast, inductive reasoning is argumentation from specific instances to more general relationships. For example, in “A Scandal in Bohemia” Sherlock Holmes reasons that Dr. Watson had recently been caught in a rainstorm based on his observation of Watson’s shoes. Holmes reasoned that several parallel cuts on the leather must have resulted from careless scraping of mud from the sole and that the mud resulted from a recent torrential rainstorm. This chain of reasoning is an example of inductive reasoning (despite the fact that Holmes usually described it as “simple deduction”). Most of us probably think of ourselves as rational and logical. However, the fact that we think of fictional characters such as Star Trek’s Mr. Spock and Sherlock Holmes as extraordinarily logical in their thinking suggests that we are aware that we don’t always follow the rules of logic.

Deductive reasoning: making and evaluating arguments from general information to specific information

Inductive reasoning: making and evaluating arguments from specific information to general information

Deductive Reasoning

Deductive reasoning is the making and evaluation of arguments following a logical set of rules or principles. Generally two types of reasoning have been the focus of philosophical and psychological investigation: syllogistic and conditional reasoning. The following sections briefly describe these two types of reasoning.

Syllogistic Reasoning

Aristotle developed the logical rules of syllogistic reasoning. Syllogistic reasoning is a process by which a conclusion follows necessarily from a series of premises (statements). If the premises are true, then by the rules of deduction, the conclusion must be true as well. This is referred to as the deductive validity of the argument. In logical arguments, syllogisms often take the following form:

· All A’s are B’s. (first premise)

· All B’s are C’s. (second premise)

· All A’s are C’s. (conclusion)

In these statements, all is a quantifier. Other quantifiers include words like no, some, some are not, and many. The A’s, B’s, and C’s are things in the world. Let’s look at a concrete example.

· All ants are insects.

· All insects are animals.

· All ants are animals.

Syllogistic reasoning: a process by which a conclusion follows necessarily from a series of statements

The statement that “all ants are animals” is a valid conclusion that results from applying the rules of logic. My son, who was three years old at the time, used this type of logic to decide he didn’t like butterflies, which he had liked prior to thinking this through. After a scary experience with flying insects (on a ride at Disney World), he decided that “flying insects are scary.” After encountering a butterfly in the backyard, he realized that “butterflies are flying insects. Therefore, butterflies are scary.” And he has disliked them ever since.
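The set logic behind this syllogism can be sketched in a few lines of Python; the particular ants, insects, and animals below are invented for illustration. Universal affirmative premises (“all A’s are B’s”) behave like subset relations, and subset relations are transitive:

```python
# Illustrative sets (the members are made up for this sketch).
ants = {"fire ant", "carpenter ant"}
insects = ants | {"butterfly", "beetle"}    # premise: all ants are insects
animals = insects | {"dog", "sparrow"}      # premise: all insects are animals

# "<=" tests the subset relation; transitivity guarantees the conclusion.
print(ants <= insects)    # True: all ants are insects
print(insects <= animals) # True: all insects are animals
print(ants <= animals)    # True: therefore, all ants are animals
```

The conclusion holds no matter which specific members the sets contain, which is exactly what deductive validity means: if the premises are true, the conclusion cannot be false.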

Photo 12.1 Logical thinkers Spock and Sherlock Holmes.

Image

Photo 12/Alamy Stock Photo

The next question you might ask is how often we reason using these logical rules. Researchers have used straightforward methods to assess this question (e.g., Ford, 1994; Johnson-Laird, 2006; Johnson-Laird & Steedman, 1978; Roberts, Newstead, & Griggs, 2001). Typically, participants are presented with the premises and asked to produce the logical conclusion. Other times they are asked to select the valid conclusions among a set of possible conclusions or given a single conclusion and asked to decide whether it is valid. Sometimes the syllogisms are presented in systematically varied formats (e.g., verbally or visually, with or without figures). In some studies participants are asked to talk aloud about why they came to their conclusions. While each method yields slightly different results, an overall, consistent conclusion can be reached: We are often not very good at following the rules of logic.

Consider the syllogisms in Table 12.1. For each one determine whether the argument is valid. What did you conclude for the first one in (a)? If you decided that the conclusion follows logically from the premises, then you are in agreement with most people. The conclusion that “all dogs are mammals” feels right. It fits with our world knowledge about dogs, and both of the premises were all statements. However, this argument is logically invalid. The conclusion that “all dogs are mammals” does not logically follow from the two premises. The second example in (b) is the same pairing of the premises but with a different conclusion. What did you conclude about this argument? This one is a valid argument. However, when given this pairing of premises, people rarely give this conclusion (or the other valid conclusion “Some dogs are mammals”). The third argument in (c) is valid as well. People typically find the third argument difficult to evaluate. When just given the premises, people often conclude that there is no valid logical argument.

Image

The research suggests several factors impact how likely we are to correctly follow the rules of logic (e.g., Wilkins, 1928). One factor is the phrasing of the premises. Not all premises and conclusions are equally easy to reason about. We often find that arguments that include negations (no and not) are more difficult, as in (c). Arguments with some are typically harder than those with all. Another factor is how people understand the language used in the premises. There is a difference between the logical use of some and how we typically use some in our day-to-day language. Consider the conclusion in example (b). In day-to-day language, “some mammals are dogs” is generally understood as meaning “some mammals are dogs and some are not dogs.” In logical terms, “some mammals are dogs” should be interpreted as “at least one mammal is a dog, but there may or may not be other mammals that are not dogs.” Another factor is the content of the arguments (what the A’s, B’s, and C’s are in the example at the start of this section). For example, many people accept the argument in (a) as valid, when logically it is not. This is in part due to the fact that the conclusion “All dogs are mammals” is consistent with what we know about dogs. However, this knowledge is irrelevant for the logic rules; what the A’s, B’s, and C’s are shouldn’t matter. When we reason, we are influenced by the content of what we are reasoning about. Typically, we are less likely to accept something that goes against our initial assumptions and are more likely to accept something consistent with our beliefs.

Conditional Reasoning

Conditional reasoning (propositional reasoning) has a similar formal structure but includes connective words like if and then in the first premise (other connective words include and, or, and not, but for simplicity these are not discussed here). It is sometimes referred to as propositional reasoning because the connectives join propositional statements. Propositional statements are those that are either true or false (see Chapter 9 for a discussion of propositional representations in language). In logical arguments, they are often stated in a form similar to that used for syllogisms.

· If p, then q. (major premise)

· p. (minor premise)

· Therefore, q. (conclusion)

The major premise consists of the antecedent (p) and the consequent (q). The valid conclusion in this argument is “q.”

Some conditional reasoning was involved in the movie scenario presented at the beginning of the chapter. You thought through some propositional statements in making your decision: If I wait for my roommate and she’s gone to the earlier show at the Palace, I will miss the movie. If I go to the Palace and she’s gone out to eat before heading to the later show, I will miss dinner and go to the wrong theater. Considering these premises helped you think about the consequences of your decision in each case.

Let’s look at the concrete examples in Table 12.2. For each argument, decide whether you think the conclusion is valid. The first argument in (a) is often referred to as modus ponens. It is a valid argument and most people find it fairly straightforward. The second argument in (b) is called modus tollens and is also valid. Typically people find these arguments more difficult. What did you think about the third and fourth arguments in (c) and (d)? It turns out that both are invalid arguments (often called fallacies). The key is in the major premise “If it is sunny outside, then I will walk to class.” This doesn’t say anything about what I will do if it isn’t sunny; I might still choose to walk to class. So in (c), the minor premise tells us that it is not sunny, but we don’t have information about whether we will walk or not. The reverse is true in (d). Given that I might walk to class rain or shine, knowing that I walked to class doesn’t tell me what the weather was like.
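These four argument forms can be checked mechanically. The short Python sketch below (our illustration, not a task used in the research) enumerates every truth assignment for p (“it is sunny”) and q (“I walk to class”) and asks whether the conclusion holds in every world where both premises hold:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

def valid(minor, conclusion):
    """An argument form is valid if the conclusion is true in every world
    where the major premise (if p then q) and the minor premise both hold."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if implies(p, q) and minor(p, q))

# p = "it is sunny", q = "I walk to class"
print(valid(lambda p, q: p,     lambda p, q: q))      # modus ponens: True
print(valid(lambda p, q: not q, lambda p, q: not p))  # modus tollens: True
print(valid(lambda p, q: not p, lambda p, q: not q))  # denying the antecedent: False
print(valid(lambda p, q: q,     lambda p, q: p))      # affirming the consequent: False
```

The two fallacies fail for the same reason the text gives: there is a consistent world (not sunny, yet walking to class) that satisfies the premises but contradicts the conclusion.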

One of the most popular tasks researchers have used to examine this kind of reasoning is the four-card task developed by Wason (1968). Figure 12.1 illustrates this task. Imagine that you are presented with four double-sided cards (each with a letter on one side and a number on the other side) and the following claim: If a card has a vowel on one side, then it has an even number on the other side. You are allowed to turn over two cards to test whether the claim is true. Go ahead and give it a try. Which two cards would you turn over to test the claim? The solution and logical rationale to this problem are shown in Figure 12.2. If we represent the claim as the major premise of a conditional argument, we get the following:

If a card has a vowel (p), then it has an even number on the other side (q).

Conditional reasoning (propositional reasoning): a process by which a conclusion follows from conditional statements (“if, then” statements)

We should select the cards that correspond to the minor premises in the two valid arguments. Try mapping this example onto the examples in Table 12.2. The first one (argument a) is like card A in the Wason task example in Figure 12.2. The second one (argument b) is like card 7 in the Wason task example. In other words, we need to look for evidence in the Wason task that is like that in the first and second arguments in Table 12.2. Don’t worry if you didn’t get the right answer; less than 10 percent of people pick both of these cards (Klauer, Stahl, & Erdfelder, 2007; Oaksford & Chater, 1994; Wason & Johnson-Laird, 1972). Let’s quickly walk through the logic. We want to turn over any cards with vowels (p). If we turn over the A card and it has an odd number, then we know the claim is invalid. Most people do select this card, but it is not enough to fully test the claim. We also need to turn over any card that corresponds to “not q.” That would be any card that doesn’t have an even number—in this case the 7 card. If that card has a vowel on the other side, then we also know the claim is invalid. Turning over the D or 4 card doesn’t help us. Whether the number on the other side of the D card is odd or even says nothing about the claim because D isn’t a vowel. Turning over the 4 doesn’t help because the letter on the other side doesn’t matter (this is the second most common card people select). If it is a vowel, it is consistent with the claim, but if it is a consonant, that’s okay too. The claim doesn’t say anything about consonants, so finding an even number on the card with a consonant doesn’t violate the claim.
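The selection logic just walked through can be summarized in a small sketch (the function name is ours). Only cards showing p (a vowel) or not-q (an odd number) can possibly falsify the conditional claim, so only those are worth turning over:

```python
def informative_cards(cards):
    """Return the cards that can test 'if vowel (p), then even number (q)'.
    A p-card might hide an odd number; a not-q card might hide a vowel.
    The q-cards and not-p cards cannot falsify the claim either way."""
    vowels = set("AEIOU")
    chosen = []
    for face in cards:
        if face in vowels:                      # p: could reveal p & not-q
            chosen.append(face)
        elif face.isdigit() and int(face) % 2:  # not-q: could reveal p on the back
            chosen.append(face)
    return chosen

print(informative_cards(["A", "D", "4", "7"]))  # ['A', '7']
```

Note that the popular but uninformative choice, the 4 card, is never selected: whatever letter it hides, the claim survives.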

Image

Figure 12.1 Wason’s (1968) Four-Card Task

Image

As was the case with the earlier syllogisms, the rules of logic should apply regardless of the context. You probably had trouble with the version of Wason’s four-card task presented in Figure 12.1. However, if we change the context of the argument, it can have an impact on how we reason about it. Wason and Shapiro (1971) gave a version of the task within a traveling context. Imagine that the cards have a location on one side and a method of travel on the other (see Figure 12.3). The claim to be tested is “Every time I go to Chicago I take the train.” Which cards should you turn over to test this claim? Most people find this version of the task easier. You should turn over the Chicago card but not the New York City card (it doesn’t matter how you got to New York). Your second card should be the plane card. If the other side has Chicago on it, then that violates the argument (you got to Chicago via a method other than the train). The train card just tells you where you went by train; it could be Chicago, but it could also be somewhere else. Again, it doesn’t matter what is on the other side of the train card. Griggs and Cox (1982) had participants imagine that they were a police officer assigned to enforce a law requiring those who drink alcohol to be at least twenty-one years of age. The cards represent different patrons in a bar (see Figure 12.4). One side of the card lists what a person is drinking and the other side lists the person’s age. Which patrons should the police officer check? (After you’ve made your guesses, check Figure 12.5 for the answer.) Griggs and Cox’s participants had little trouble recognizing that there was no point in checking the person drinking soda or the person who was over twenty-one. Over 75 percent of their participants made the correct card choices.

Research like this demonstrates that while we often aren’t good at logical reasoning, we do use it in some contexts. The following section briefly reviews some of the theoretical approaches proposed to explain how and when we reason logically, why we find some arguments more difficult than others, and why we make particular errors.

Figure 12.2 Solution to Wason’s (1968) Four-Card Task

Image

Figure 12.3 Travel Version of Wason’s Four-Card Task

Image

Photo sources: Chicago: Digital Vision/Photodisc/Thinkstock; New York: Creatas Images/Creatas/Thinkstock; train: Stockbyte/Stockbyte/Thinkstock; plane: Jupiterimages/Photos.com/Thinkstock.

Figure 12.4 Contextualized Version of Wason’s Four-Card Task

Image

Photo sources: soda: ITStock Free/Polka Dot/Thinkstock; woman: Ralf Nau/Digital Vision/Thinkstock; man: Goodshoot RF/Goodshoot/Thinkstock; beer: Thomas Northcut/Photodisc/Thinkstock.

Deductive-Reasoning Approaches

Generally, deductive reasoning involves understanding and representing the premises, combining these representations, and drawing a conclusion. Many theories have been proposed to explain how we deductively reason. Roberts (2005) classifies these into three general approaches: conclusion interpretation, representation explanations, and surface (or heuristic) approaches.

Conclusion Interpretation Approaches

These approaches propose that errors arise from general biases against making particular conclusions (e.g., Dickstein, 1981; Revlis, 1975; Roberts & Sykes, 2005). For example, people may be reluctant to make “no valid conclusion” responses because they feel that this is an uninformative conclusion. Another error may result because the order of the terms can be reversed (called conversion) in some premises but not in others. For example, “Some ants are insects” and “Some insects are ants” are logically equivalent. However, “All ants are insects” and “All insects are ants” are not.

Representation-Explanation Approaches

Theories of this type focus on how we represent the arguments. The difficulty of an argument and the likelihood of making an error result from either incomplete information or an incorrect representation of the argument. Suppose your idea that your roommate would not want to exclude you is wrong (maybe she was really hungry and decided that getting dinner before the movie mattered more than making sure you had dinner with her). Then your reasoning that she had gone to the earlier movie may be incorrect because you did not accurately represent her current priorities as you thought through which theater to go to in order to meet up with her. Reasoning that requires complex chains of rules places demands on our working memory: the higher the demands, the more difficult the reasoning. Real-world problems, like that of our movie example, typically involve much uncertainty and incomplete information, making it difficult, if not impossible, to think through all possible reasoning steps. One of the major differences across theories is the kind of representation that is assumed.

Mental logic theories (e.g., Braine, 1978; Rips, 1994) propose that our deductive reasoning proceeds by applying a set of rules. Some of these theories propose context-free (so what the argument is about doesn’t matter) rules that operate on propositional representations of the premises. Propositional representations are statements that are either true or false (see Chapters 9 and 10 for more discussion of propositions). Other theories propose that the context of the rules does matter. Cheng and Holyoak (1985) proposed that we reason using sets of rules defined with respect to particular goals (e.g., permissions, obligations, and causation) learned through ordinary day-to-day experiences. In contrast, Cosmides (1989) has argued that we have evolved to reason using rules related to social exchanges. For example, we may be born knowing a rule along the lines of “if I do something for you, then you do something for me.” In other words, our reasoning is based on a kind of benefit-cost rule. However, not all approaches are based on applying mental rules.

Figure 12.5 Solution to the Contextualized Version of Wason’s Four-Card Task

Image

Photo sources: soda: ITStock Free/Polka Dot/Thinkstock; woman: Ralf Nau/Digital Vision/Thinkstock; man: Goodshoot RF/Goodshoot/Thinkstock; beer: Thomas Northcut/Photodisc/Thinkstock.

Stop and Think

· 12.1. Are the arguments discussed in the Conditional Reasoning section representative of the kinds of arguments you face in your day-to-day experience? If not, how are they different?

· 12.2. Consider the following argument: All Introduction to Psychology courses are taught in large sections, and all large-section courses use multiple-choice exams. Therefore my Introduction to Psychology course will use multiple-choice exams. Is this a valid argument? Do you think it follows the “rules of logic”? Would you change your mind if you learned that the section of your course is being taught by a new professor?

· 12.3. Consider the following argument: If I study every term on the review sheet, then I will get an A on the exam. I studied hard, therefore I will get an A. Are these valid arguments? Do you think that they follow the “rules of logic”?

· 12.4. Do you generally consider yourself a “logical thinker”? When you reason about things, do you usually think through all aspects of an argument, or do you usually focus on just a few?

One of the most influential theories of reasoning was proposed by Philip Johnson-Laird and his colleagues (e.g., Johnson-Laird, 2001). This theory proposes that reasoning proceeds through three stages. The first stage is model construction: building mental models of the world described by the premises. A mental model is essentially a simulation of the spatial relations, events, and processes. For example, Figure 12.6 shows the possible worlds described by the four basic premises we discussed earlier. Notice that many of the premises correspond to multiple mental models of the world. For example, the top left corner of Figure 12.6 shows the situations described by the premise “All ants are insects.” It could be that ants are a subset of insects, or it could be that ants make up the entire set of things that are insects. Both are logical possibilities. The second stage of the model is conclusion formulation. In this stage the mental models of premises are integrated such that consistent models are conjoined and inconsistent ones are discarded. Figure 12.7 depicts this for the argument “All ants are insects” and “All insects are animals.” There are two mental models for each of the premises. Combining each of these results in four possible integrated models. The final stage is the conclusion-validation stage. In this stage we look for models that would falsify the conclusion. In our example, the conclusion is about the relationship between ants and animals, so we can simplify the mental models by removing the information about insects. What remains are two mental models about ants and animals. Both are consistent with the conclusion premise, so we can conclude that this is a valid argument. Essentially, we reason that an argument is valid unless we identify a mental model that falsifies it.

Figure 12.6 Mental Model Representations of Possible Premises With the Quantifiers All, No, Some, and Some Are Not

Image

Figure 12.7 Mental Models of the Combination of the Premises “All Ants Are Insects” and “All Insects Are Animals” and the Logical Conclusion “All Ants Are Animals”

Image

The model predicts that working-memory limitations (see Chapter 5 for further discussion of working memory) interact with the reasoning processes. The more mental models required to evaluate the argument, the more difficult and error prone our reasoning is, because considering all of the models at one time exceeds our working-memory limits. For example, the premise “Some ants are insects” has four mental models to consider. If we were to combine it with “Some insects are animals,” which also has four mental models, we would have sixteen integrated models to consider. If we don’t consider all of the models, we may miss the critical one(s) that falsifies the conclusion.
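The combinatorial burden the theory describes is easy to make concrete. In this sketch the model labels are mere placeholders; the point is only that integrating two premises with four mental models each yields sixteen candidate combinations, far more than we can hold in working memory at once:

```python
from itertools import product

# Placeholder labels for the four mental models of each "some" premise.
models_premise_1 = ["p1-model-" + str(i) for i in range(1, 5)]  # "Some ants are insects"
models_premise_2 = ["p2-model-" + str(i) for i in range(1, 5)]  # "Some insects are animals"

# Every pairing of one model from each premise is a candidate integrated model.
combined = list(product(models_premise_1, models_premise_2))
print(len(combined))  # 16
```

Missing even one of these sixteen combinations can mean missing the one model that falsifies a tempting conclusion, which is the theory's explanation for why "some" premises produce so many errors.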

Surface Approaches

Surface approaches propose that reasoning relies primarily on general heuristics focused on the surface properties of the quantifiers in the argument (e.g., Wetherick & Gilhooly, 1990; Woodworth & Sells, 1935) rather than on reasoning analytically. For example, if the argument contains premises about universals (all and no), then the conclusion probably is a universal. Or if the argument contains a negative premise (no or some . . . not), then the conclusion will be negative.

Chater and Oaksford (1999) proposed the probability heuristics model. Their proposal is that everyday reasoning is not based on logic but rather on probability. Rather than treating the premises as statements of truth, the probability heuristics model analyzes the probability of the premises and the strength of the argument. Consider the probability that something is an insect, given that it is an ant, under the premise “All ants are insects.” There is a 100 percent chance that something is an insect, given that it is an ant. Similarly, if the premise is “No ants are insects,” there is a 0 percent chance that something is an insect given that it is an ant. For the other two premises (some and some . . . not) the probability will be somewhere from 0 percent to 100 percent. However, in everyday life we rarely encounter things with 100 percent certainty. Chater and Oaksford use the example “If something is a bird, then it flies” and “Tweety is a bird” so “Tweety must fly.” But what if Tweety is an ostrich? Should you still infer that she flies? The model proposes that instead of treating “If something is a bird, then it flies” with 100 percent certainty, we should instead consider it with something like 90 percent certainty. Furthermore, we can combine that 90 percent certainty with the likelihood that Tweety is a canary rather than an ostrich to estimate the probability that Tweety can fly.
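The Tweety example can be made concrete with a quick calculation. The 90 percent figure comes from the discussion above; the other numbers are assumptions chosen purely for illustration, not values from Chater and Oaksford:

```python
# Treat "if bird, then flies" probabilistically rather than as a logical rule.
p_flies_given_typical = 0.9    # the conditional premise, held with 90% certainty
p_tweety_is_typical = 0.95     # assumed: Tweety is likely a canary, not an ostrich
p_flies_given_atypical = 0.0   # assumed: atypical birds like ostriches don't fly

# Law of total probability: weight each case by how likely it is.
p_tweety_flies = (p_tweety_is_typical * p_flies_given_typical
                  + (1 - p_tweety_is_typical) * p_flies_given_atypical)
print(round(p_tweety_flies, 3))  # 0.855
```

The conclusion "Tweety flies" comes out strong but not certain, which is the model's point: everyday arguments have degrees of strength rather than strict validity.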

Combining These Approaches: Dual-Process Framework Approach

The different versions of Wason’s four-card task demonstrate that sometimes we reason logically, but on other occasions we do not. At times it may feel as if we have two different ways of reasoning. That is essentially what the dual-process framework proposes. Similar theories across a wide range of psychological areas have been developed within this dual-process framework (e.g., controlled and automatic attention processes described in Chapter 4). Evans (2012) reviews the characteristics usually assumed to be shared by dual-process accounts (see Table 12.3). Typically, System 1 processes are assumed to be largely automatic, rapid, and unconscious. In contrast, System 2 processes are typically assumed to be controlled, slow, and often conscious. Evans (1984, 2006) proposed a theory of reasoning within this dual-process framework. He suggests that when we reason we use one system based on heuristic processes (referred to as Type 1) and another based on analytic processes (Type 2). Heuristics are nonlogically based processes used to evaluate information relevant to the problem. They are influenced by the content of the argument, including implicit knowledge of the terms (e.g., what do I know about “ants” and “insects”?) and the language used to state the argument (e.g., what do I mean by “all” and “some”?). Type 2 processes operate with logically based analyses, using the representations activated from Type 1 processes.

Dual-process framework: the idea that cognitive tasks can be performed using two separate and distinct processes

Inductive Reasoning

Deductive reasoning has been the focus of much of the research on how we reason. However, deductive reasoning is about absolute truth, which is rare in our day-to-day lives. On the other hand, inductive reasoning examines the likelihood of a conclusion being true, rather than its absolute truth. This is something we do often in everyday reasoning (Feeney & Heit, 2007). There are many forms of inductive reasoning, some of which are reviewed in this section. What ties them together as a cohesive set is that they involve reasoning from specific data (based on both observation and knowledge) to broader generalizations. One result of these processes is the generation of new information. For example, think back to the story the chapter opened with. You looked for clues, and based on these clues, you reasoned that your roommate must have seen that the show was sold out and decided to go to another theater without you. This is all new information you have generated, not based on formal rules of deductive logic but rather on inductive reasoning processes. The following section discusses several types of inductive reasoning. Two types—analogical reasoning and category induction—are discussed in detail elsewhere in the textbook, so our discussion about these here is relatively brief.

Image

Source: Evans (2012).

Stop and Think

· 12.5. When you make reasoned arguments, what sort of representations do you think you use? Does it feel like you are using something like those proposed by the mental logic or the mental models approach?

· 12.6. Do you think the approach you take when reasoning depends on what you are reasoning about?

Types of Inductive Reasoning

Analogical Reasoning

We use analogies often. In our daily lives they may take a form similar to Forrest Gump’s “Life is like a box of chocolates, you never know what you’re going to get.” You have probably encountered more formal versions of analogical reasoning tasks on standardized tests (Rumelhart & Abrahamson, 1973; Sternberg, 1997). They typically look like the following example:

· A tree is to forest as a soldier is to __________

· (a) general (b) army (c) warfare

Analogical reasoning is the process of using the structure of one conceptual domain to interpret another domain. Reasoning in this example first involves recognizing the part-whole structural relationship between tree and forest. Then that structural relationship is mapped onto the second part of the argument such that soldier fits the “part” and the choice is to identify the best option for the “whole.” In this case it should be (b) army. This process of analogical transfer is discussed in greater detail in Chapter 11.

Category Induction

Being able to organize and recognize a group of things as members of the same category is an important part of our cognitive system (see Chapter 10). If we see something new and can categorize it, then we can infer many properties of that thing. For example, if we are walking through the woods and we see a blue object on a branch making tweeting noises, we will probably categorize it as a bird. When we make this categorization, we will infer that the “bird” has properties common to other birds, like having feathers and the ability to fly. The inference of these properties is a kind of inductive reasoning: birds have feathers and can fly, this blue object is a bird, so the blue object is a bird and has feathers and can fly. Rips (1975) and others have demonstrated that we also reason in a similar way across categories. For example, suppose we are told that sparrows have a disease. Then we are asked how likely it is that robins and squirrels living in the same area might have the same disease. We make predictions about the likelihood that they have the disease using inductive reasoning. This research is discussed in detail in Chapter 10.

Causal Reasoning

One of our fundamental human behaviors is to attempt to understand how the world around us works. We generally believe in a universe where there is cause and effect. In other words, things happen for a reason. Generally, causal reasoning infers cause-and-effect relationships between two events that occur together either in space or time. If we know the cause-and-effect relationships between events, then we can make predictions about, or even control, our environments. For example, suppose that one day you wake up feeling like you are coming down with a cold. You think back and remember that your friend accidentally sneezed on you the previous evening, and you come to the conclusion that his sneeze caused you to get sick (see Photo 12.2). In the future, if your friend is sick, you decide to avoid him until he gets better. Research suggests two factors are important when we draw causal conclusions: identifying the covariation between the two events and believing that there is a mechanism for the causal relationship. In our example, the sneezing event happened in close temporal proximity (“the previous evening”), and it preceded your feeling sick. This corresponds to the covariation aspect of the situation—how often the two events co-occur. Your causal belief may further include beliefs that germs are transmitted from one person to another and that cold symptoms are the body’s reaction to germs (i.e., the cold virus).

Co-occurrence is necessary for one thing to cause another. However, many events covary without one causing the other. We should also consider how often events do not co-occur. For example, are there times when somebody sneezes on us and we don’t get sick? Or times when we get a cold without somebody sneezing on us? Cheng and Novick (1992) proposed a model in which our causal reasoning is based on these probabilities. The essential idea of the model is that the strength of our causal belief is a function of the difference between the probabilities of an event happening (e.g., getting a cold) with and without the causal event (e.g., being sneezed on and not being sneezed on).
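Cheng and Novick’s idea can be captured in a few lines of code. The sketch below computes this probabilistic contrast; the probabilities in the example are made up purely for illustration:

```python
def delta_p(p_effect_given_cause, p_effect_given_no_cause):
    """Probabilistic contrast (Cheng & Novick, 1992): strength of a
    causal belief as the difference between the effect's probability
    with and without the candidate cause."""
    return p_effect_given_cause - p_effect_given_no_cause

# Hypothetical numbers: you catch a cold 60% of the time after being
# sneezed on, but only 10% of the time otherwise.
strength = delta_p(0.60, 0.10)
print(strength)
```

A contrast near 1 supports a strong causal belief, a contrast near 0 suggests the two events merely co-occur by chance, and a negative contrast suggests the candidate cause actually prevents the effect.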

Photo 12.2 Causal reasoning: If you get a cold after your friend sneezes on you, do you blame your friend?

Image

PR Image Factory/Shutterstock

However, as you may remember from Chapter 1, correlation is not the same as causation. For one thing to cause another thing, there must be a mechanism that connects the two processes (germs, in our example). Beliefs about the mechanisms that underlie the causal relationship are also important factors in our causal reasoning processes. This was demonstrated in a study conducted by Fugelsang, Thompson, and Dunbar (2006). They presented participants with brief stories containing an event and a possible cause of the event. Participants were asked to rate the likelihood that the cause was responsible for the effect. Across several experiments the researchers manipulated the degree of covariation between the cause and event, the believability of the causal power linking the cause and event, and the familiarity and imageability of the causal mechanisms. For example, consider their causal story about slippery roads. All versions started with “Imagine that you are a researcher for the ministry of transportation who is trying to determine the cause of slippery roads in townships.” This was then followed by one of four potential causal hypotheses that varied with respect to different factors.

· High belief and high covariation: “You have a hypothesis that the slippery roads may be due to ice storms.”

· Low belief and high covariation: “You have a hypothesis that the slippery roads may be due to slippery sidewalks.”

· High belief and low covariation: “You have a hypothesis that the slippery roads may be due to rainfall.”

· Low belief and low covariation: “You have a hypothesis that the slippery roads may be due to excessive traffic.”

Participants were asked to rate their beliefs about the causal powers in the stories. Their results indicated that all of these factors were strongly associated with the strength of the inferred causal relationship. Furthermore, Cummins (1995) demonstrated that familiarity of alternative causal mechanisms also plays a role. For example, you know that germs can also be transmitted through touch, and yesterday you also used a public drinking fountain. Recalling this potential alternative explanation may lead you to adjust the strength of your belief about your friend’s sneeze as the cause of your cold.

As explained in Chapter 1, one of the best ways to establish causal relationships is by doing experiments. If we systematically manipulate a potential causal variable (independent variable) and observe what changes occur to the following event (dependent variable), then we have good data from which to make conclusions about causal relationships. While systematic manipulation of variables to test for cause and effect is the bread and butter of the scientific method, Sloman (2005) argues that it may also play a role in our day-to-day reasoning. We develop and update our causal models through both observing particular covariations and intervening in the causes and events.

For example, I recently baked a batch of cookies and discovered that they weren’t as good as I had hoped (see Photo 12.3). I wasn’t sure what was wrong, but I suspected that it was the generic brand of butter that I had used. So I tried making the same recipe, except this time I used a more expensive brand-name butter. The cookies turned out better, and I concluded that the generic butter was likely the problem in the first batch. In this situation, I manipulated the variable (butter type) and observed whether changes occurred in my measure of interest (how the cookies tasted). This allowed me to change my causal model about things that affected how the cookies tasted. In particular, I eliminated other potential causes of the bad taste (e.g., the bad taste wasn’t because the milk was old).

Photo 12.3 Which type of butter makes the better-tasting cookies? Experimenting with different brands tests the causal relationship between butter type and cookie taste.

Image

Monkey Business Images/Shutterstock

Hypothesis Testing

Suppose somebody gave you this sequence of numbers: 2, 4, 6. Your task is to give the rule that makes up this sequence.

What do you think the rule is? Without more information it is difficult to say. Let’s further suppose that you cannot ask questions about the rule but may come up with other sequences of three numbers and ask whether they follow the rule as well. This was the task developed by Peter Wason (1960; Wason & Johnson-Laird, 1972), and it has been the subject of extensive study. We can think of the rules you come up with as hypotheses, and the triplets you test as experiments to test the hypotheses. Suppose you think that the rule is even numbers and ask about 8, 10, and 12. Here the answer is “Yes, these follow the rule.” At this point you could guess a rule or propose three more numbers to continue testing your hypothesis about the rule. Maybe you think the rule is to start with a number and add 2 to get the next. So you may try the sequence 3, 5, and 7. Again you get a “Yes, these follow the rule” response. So now you have to decide if you’re going to propose the rule that you hypothesized or collect more evidence by proposing another set of numbers. Suppose that the correct rule in this situation is a very general one: “List numbers smallest to largest.” In this task, only 21 percent of Wason’s participants arrived at the correct rule on their first try, over 50 percent proposed at least one incorrect rule, and 29 percent never found the correct rule.

Wason had his participants talk aloud while reasoning. He identified three general strategies his participants used. Sometimes they generated triplets designed to confirm their hypothesis (verification). On other occasions, they generated triplets inconsistent with their hypothesis (falsification). The third strategy was to consider other variations of their hypothesis. If you stick to the first strategy, looking for evidence that confirms your theory, then you are falling prey to a confirmation bias. We often put far too much weight on evidence that is consistent with our hypotheses and far too little weight on evidence against them. Participants who follow the second strategy (falsification) are much more likely to arrive at the correct rule.

One way to facilitate performance on this task was investigated by Tweney et al. (1980). Instead of classifying the triplets that participants generated as yes or no, the triplets were classified as belonging to one of two rules (e.g., “There are two rules: Med and Dax. The sequence 2, 4, 6 follows the Dax rule”). Performance increased dramatically, with 60 percent of participants correctly stating the rule on their first attempt. Having two rules to consider may have reduced the confirmation bias, encouraging participants to consider multiple hypotheses rather than focusing on verifying a single hypothesis (Mynatt, Doherty, & Dragan, 1993).

Stop and Think

· 12.7. Try to think of an example of causal reasoning you did today. How strong is the covariation between the cause and effect events? Did you consider both how often they did not co-occur as well as how often they did?

· 12.8. Considering the same example that you came up with in Stop and Think 12.7, did you consider alternative hypotheses about the causal effect? Did you engage in counterfactual thinking and ask yourself “What if I had done something else instead?” How does thinking about alternative causes and “what ifs” impact your causal reasoning?

Counterfactual Thinking

Inductive reasoning also includes our ability to reason about things that could have happened but haven’t. These typically take the form of “what if” and “if only” sentences. For example, suppose that you didn’t do as well as you had hoped on your last calculus exam, but you believe you would have done better if only you had studied for an additional hour. Counterfactual thinking is used in conjunction with many other types of reasoning (Byrne, 2002). Examples include searching for counterexamples when evaluating hypotheses or when reasoning about causal relationships (especially after bad outcomes, like doing poorly on a test) and providing the building blocks for creative combinations of categories.

Everyday Reasoning

At this point you may be asking yourself whether the formal reasoning tasks used in the laboratory are the same as the reasoning we do in our daily lives. Galotti (1989, 2002) identified many potential differences between reasoning in the laboratory and reasoning in our day-to-day lives. The main differences are presented in Table 12.4. In many respects the question is similar to the distinction made between well-defined and ill-defined problems described in Chapter 11. Laboratory problems tend to be well-defined, with clear premises supplied, a single correct answer, and arguments that are evaluated because the researchers ask you to evaluate them. In contrast, the arguments we evaluate on a day-to-day basis are typically much less defined. It isn’t always easy to know what the premises are. The arguments are typically personally relevant, perhaps aimed at achieving a particular goal or outcome. And there isn’t always a single, clear solution. In fact, there may be several possible answers that vary in quality. Think back to the reasoning in our opening movie story. Reasoning “If she is not here, then she must have gone to the movie without me” probably feels different from “If Charlie is a basketball player, then he is tall. Charlie is a basketball player.” Because of these differences, everyday reasoning is more subtle and complex than most of the formal reasoning tasks studied in the lab. Research that examines the relationship between the two is still relatively new. One consistent finding is that everyday reasoning is subject to biases, many of which arise because of the use of heuristics. We discuss some of these heuristics in our review of decision making in the following section.

Image

Source: Galotti (1989).

Making Decisions

Making decisions is about assessing and making choices between different options. In our opening story we were trying to decide what to do, whether to wait for our roommate or go to the movie without her. Furthermore, if we do decide to go to the movie, to which theater should we go? Our days are filled with decisions, some big (e.g., “What do I want to do with my life after I graduate?” “Which house should I buy?”) and others small (“Should I order a peanut butter and jelly sandwich for lunch?” “Paper or plastic?”). As was the case with your self-assessment about your reasoning abilities, you probably feel like you make the best logical decisions you can. But, as was the case with reasoning, we make our decisions within our cognitive system and are subject to the limitations of that system. This section reviews theories and research about how we make decisions.

Stop and Think

· 12.9. Think back to how you reasoned through the various versions of the four-card problems. How did the reasoning you used to answer those questions compare to how you reason about things in your day-to-day life?

A General Model of Decision Making

Galotti (2002) describes a general model of decision making made up of five phases. The model closely resembles the general model of problem solving outlined in Chapter 11.

Setting goals.

Goals are mental representations of desired states of affairs: the targets we aim for. Good decisions are those that get us closer to our goals. The recognition of a disparity between the current state of affairs and the goal is often a strong motivator, driving us into action to reduce the difference. As mentioned, goals differ in many ways: Some are big, some are small; some are about things right now, others are things to do later; some are simple, others are complex. Big goals may need to be broken into smaller subgoals. Goals may also change along the way. Sometimes the process of trying to achieve your goals leads to a reassessment, and possibly a revision, of your goals. Once our goals are set, we begin to consider what options are available to us to achieve those goals.

In our movie story, your goal is to see a particular movie with your roommate. This goal may have been derived individually (e.g., you each saw a trailer for the movie and decided independently that you wanted to see it) or jointly (e.g., the desire to see the movie arose from a conversation about common interests). After realizing that your roommate may have already left, you need to weigh your options. Is it more important to find out where your roommate is so that you can see the movie together? Or is your desire to see the film more important than the goal of seeing it with your roommate? Is there enough time for you to get to the Palace, or should you wait a little longer and stick with the original plan to see the movie at the Omnimax? These options are tied up with the goals you set.

Gathering information.

Once you have set your goals, then you need to acquire information needed to make the decision. This information includes your options, the likelihood of the different outcomes, and the criteria you use to make the decision. Consider the movie example again. If you go to the Palace Theater you should be able to see the movie, but your roommate may not be there. The same is true for the showing at the Omnimax. One piece of information you may want to consider is how likely you are to meet your roommate at each theater (and remember that there is also the possibility that she may be running late, too). However, as we will see later in the chapter, we gather more than just information about probabilities and options. The structure and limitations of our cognitive systems have a large impact on the information we use to make decisions.

Structuring the decision.

Once we have our goals and have assembled our information, we need to organize the information in a way that will be useful for making the decision. Consider the common practice of making a list of pros and cons. Suppose that you are trying to decide whether to buy a desktop or laptop computer. Under the pros column for the laptop you list features like portability, great Wi-Fi on campus, and cool looks. Under cons, you list small screen and hard drive and risk of dropping it or having it stolen. The purpose of this exercise is to arrange the information you think is relevant to make the decision easier.

Making a final choice.

After collecting the information and organizing it to make comparisons, it is time to actually choose an option. This isn’t always an easy task. Often there is no one obvious choice. When we make decisions, our selections are usually based on information loaded with uncertainty. Once again, think back to our opening story. You are trying to decide what to do and which theater to go to, not knowing where your roommate is. Without this information, how do you make the decision? The sections to follow briefly describe some of the theories proposed to account for how we make decisions.

Evaluation.

This last stage is often overlooked. Indeed, there is relatively little research that examines this final phase. The general attitude is that if the decision has been made, then it is time to move on. However, remember that we make many decisions every day. An important part of our cognitive processes is that we have a memory system. Our past decisions impact later decisions. So interpreting our choices and evaluating what went right or wrong are important aspects of decision making.

Ideal Decision Making: A Normative Model

We start by describing an idealized model of how we make decisions. Our first step is to break the decision down into all of the independent criteria. Next we need to weigh each criterion according to how important it is to the decision. Then we need to list all of the options and rate each option according to the list of criteria. The option with the highest score wins and that’s the decision we make. Let’s return to our computer-buying example. Figure 12.8 illustrates how the idealized model might work. The first column lists the criteria that you want to consider. The second ranks them in terms of their importance. The next columns list the relevant features of each of the computer options. The numbers in parentheses represent the quantitative value of these features (1 = high/good; 3s and 4s = low/poor). To determine which option is the best choice we can multiply the computers’ scores by the weight of the criteria and then add up the scores. For the first laptop we get (4 × 1) + (3 × 3) + (5 × 2) + (2 × 1) + (7 × 2) + (8 × 1) + (1 × 4) + (6 × 3) = 69. We’ve set things up so that the lowest combined score is our choice. However, as we will see, decision making is rarely as straightforward as this ideal model suggests.
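The weighted-additive calculation in this example can be written out in a few lines. The weights and ratings below are taken from the first laptop in the worked example; everything else is a minimal sketch:

```python
# Criterion weights (1 = most important) and one option's ratings on
# those criteria (1 = high/good, larger numbers = low/poor), matching
# the first laptop in the worked example.
weights = [1, 3, 2, 1, 2, 1, 4, 3]
laptop_ratings = [4, 3, 5, 2, 7, 8, 1, 6]

def weighted_score(ratings, weights):
    """Weighted-additive score: multiply each rating by its criterion
    weight and sum. Because low numbers mean good ratings and important
    criteria, the option with the LOWEST total is the best choice."""
    return sum(r * w for r, w in zip(ratings, weights))

print(weighted_score(laptop_ratings, weights))  # 69
```

Scoring every option the same way and picking the lowest total is exactly the normative procedure the text describes; the heuristics reviewed next are shortcuts around this exhaustive computation.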

Heuristics and Biases

Heuristics are essentially mental shortcuts that we use to reduce the processing burden on our cognitive systems. They are typically faster, require fewer resources, and generally give the right answer. However, heuristics usually work by ignoring some information, which at times may result in making errors or biased conclusions. The list of heuristics we use when collecting and assembling information for decision making is too long to review here, so we limit our review to three heuristics (Kahneman, 2011).

Figure 12.8 Making a Decision About Which Computer to Buy

Image

PHOTO Sources: computers: Ryan McVay/Photodisc/Thinkstock.

Representativeness Bias

Read through the following description of Tom W created by Daniel Kahneman and Amos Tversky (Kahneman & Tversky, 1973, p. 238).

Tom W is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to have little feel and little sympathy for other people, and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense.

Now consider the following professions: computer science, engineering, business administration, law, education, and social science. Rank in order, from 1 to 6, the likelihood that Tom W is a graduate student in one of these fields (1 = most likely, 6 = least likely). If you are like most people, you probably ranked computer science and engineering high on your list and education and social science low. However, computer science and engineering typically have many fewer students than education and social science fields. Given the relative size of the fields, it would be better to predict that Tom was a student in the larger fields. Instead, you probably picked up on some of the characteristics of Tom (e.g., likes sci-fi and corny puns, is neat and tidy, and doesn’t generally interact with others) and identified these characteristics as ones you associate with your stereotype of computer scientists and engineers. This is referred to as the representativeness bias. There may be some truth in stereotypes. The predictions that follow from them may turn out to be correct. However, sometimes they are wrong. In Tom’s case these features say relatively little about what field of study Tom may be in and thus should not be as important a factor in the decision as information like the size of the field.

Stop and Think

· 12.10. Think back to the last time you made a major purchase or decision (e.g., buying a car, renting a particular apartment, deciding what to major in). What factors did you consider when you made that decision? What kind of information did you gather? How did you combine that information to arrive at your decision?

· 12.11. Think back to a relatively minor decision (e.g., what to eat for breakfast, what to wear today). What factors did you consider when you made that decision? What kind of information did you gather? How did you combine that information to arrive at your decision?

· 12.12. How do the decision processes differ between your answers in Stop and Think 12.10 and 12.11?

Availability Bias

Recall that part of the decision-making process is to assemble information relevant to the choice that has to be made. In an ideal world we would have access to all of the necessary information. However, that is typically not the case. Consider our computer buying example. Suppose that when we are doing our research, we find that the computer store website is incomplete and that some of the information for some of the computer options isn’t available. This lack of information will impact your ability to make a choice. Much of the time the information we use to make decisions comes from our own memory. As discussed in detail in Chapter 7, memory retrieval is far from perfect.

Tversky and Kahneman (1973) demonstrated that the ease with which we are able to retrieve information from memory has a large impact on our decision making. This is called the availability bias. For example, which do you think is more common in English: words that start with the letter L or words in which L is the third letter? It turns out that there are many more words in which L is the third letter, but it is much more difficult to think of examples of these words compared to thinking of words that begin with L. This is most likely related to how we organize words in our mental lexicon (see Chapter 9 for more discussion of the mental lexicon). Similar results have been found for other factors that influence how easily something is retrieved from memory (e.g., recent items, primed items, more vivid items). Findings like these suggest that the ease or difficulty of retrieval of information provides a metric for how likely an event is. This, in turn, can impact the decisions and choices we make. My son believes that all dogs will steal his food because his dog often tries to steal his food. He is using the availability bias to draw a conclusion about dogs that is not always accurate. Figure 12.9 shows additional examples of the availability bias.

Representativeness bias: a bias in reasoning where stereotypes are relied on to make judgments and solve problems

Availability bias: bias in reasoning where examples easily brought to mind are relied on to make judgments and solve problems

Framing Bias

Let’s return to our movie theater example and add a little more to the story. When you arrive at the theater box office and get your wallet out to pay for the $10 ticket, you realize that somewhere along the way you lost a $10 bill. You still have enough money to pay for the ticket. What would you do? Would you buy a ticket to see the show? Most people say that they would (Tversky & Kahneman, 1981). Let’s change the story a bit. Instead, imagine that you get to the theater, buy your $10 ticket, and quickly run back to your car to make sure that you had locked it. When you return to the theater and attempt to enter, you suddenly realize that you lost your ticket. You still have enough money to pay for another ticket. What would you do in this case? In this case just over half of people say that they would not pay for another ticket. In both story continuations you are out $10, either for a lost ticket or a lost bill. So why do people usually make different choices in these two contexts?

Figure 12.9 The Availability Bias

Image

Monkey Business Images/Shutterstock

Fiona Ayerst/Shutterstock

Source: Table from website of the National Safety Council, 2017 (shark attack data from older table).

Tversky and Kahneman argue that it is because we frame the problems differently (called the framing bias). In the second situation, we mentally represent the cost of going to the movie as $20, which seems to be excessive. In contrast, even though the total amount of money that has left our wallet that night is $20, we typically only associate $10 of it as related to the cost of seeing the movie. This demonstrates that how we frame the information we use to make a decision also impacts the choices we end up making. Figure 12.10 shows an example of the framing bias.

Descriptive Decision-Making Approaches

The use of heuristics, like those just presented, demonstrates that we are impacted by our cognitive architecture when collecting information relevant to our decision. Our cognitive processes also impact how we structure the decision and make the final choice. Think back to the model of decision making we described for the computer-buying example. That process probably seems somewhat complex, and you may wonder whether you really go through all of that for all of your decisions. Remember that that process is a model of decision making under ideal conditions, without consideration of potentially limiting constraints as to the context or our cognitive processes. Research suggests that we use many different decision-making strategies depending on the situational context. We now consider a few of these.

Framing bias: a bias in reasoning where the context in which a problem is presented influences our judgment

Figure 12.10 The Framing Bias

Image

istock.com/clubfoto

Tversky (1972) described the elimination-by-aspects strategy. When we use this strategy, we dramatically limit the number of criteria we consider by focusing first on only the most important one. If this criterion is sufficient to make our choice, then we do so. If it is not, then we move to the next most important criterion. In our computer-buying example (see Figure 12.8), price is listed as the most important criterion. If we use the elimination-by-aspects strategy, we could focus on the prices of the computers and would probably end up selecting the pink laptop because it is the least expensive.
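A rough sketch of elimination by aspects might look like the following. The option names, prices, and cutoffs are hypothetical, chosen so that one option survives the price criterion first, as in the text:

```python
def eliminate_by_aspects(options, criteria):
    """Sketch of Tversky's (1972) elimination-by-aspects strategy:
    walk the criteria from most to least important, keep only the
    options that pass each cutoff, and stop as soon as one option
    remains. Later (less important) criteria may never be consulted."""
    remaining = dict(options)
    for name_of_criterion, passes in criteria:
        survivors = {name: feats for name, feats in remaining.items()
                     if passes(feats)}
        if survivors:              # never eliminate every option
            remaining = survivors
        if len(remaining) == 1:    # decision made; stop early
            break
    return list(remaining)

computers = {
    "pink laptop":  {"price": 500, "screen": 13},
    "black laptop": {"price": 900, "screen": 15},
    "desktop":      {"price": 800, "screen": 24},
}
criteria = [                        # ordered most to least important
    ("price under $600", lambda f: f["price"] < 600),
    ("screen 15in+",     lambda f: f["screen"] >= 15),
]
print(eliminate_by_aspects(computers, criteria))  # ['pink laptop']
```

Notice that the surviving option never gets checked against the screen-size criterion: the strategy trades completeness for speed, which is exactly why it can miss options that are better overall.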

Another strategy we may use is to focus on the criteria that are easy to evaluate (Hsee, 2000). This can be especially true when you are considering a criterion on its own, without information about the range of possible values or a reference point. For example, it is fairly easy to imagine the difference between different screen sizes. However, for a feature like computational power it may be difficult to know how to interpret the differences: Is a jump from 3.1 to 3.2 GHz important enough to consider? The criteria “work” and “gaming” are potentially even more difficult to evaluate. So the strategy of ignoring difficult criteria in favor of easy ones may result in a misguided decision.

Stop and Think

· 12.13. Suppose that you are trying to decide whether to get renter’s insurance. You recently read a report in the paper that crime rates across the nation are at an all-time low. However, two of your friends recently had their apartment robbed. You go ahead and decide to pay for the insurance. Do you think that your choice may have been biased?

· 12.14. Can you think of any real-life decisions you have made that may have been the result of framing? If you were to be in the same situation again, do you think you would make the same decision?

· 12.15. Terrorist activities are often big topics of world news but are relatively rarely local news stories. Given what you know about the availability bias, how do you think that this impacts our perceptions about the dangers of terrorism to ourselves?

We also consider our past experiences when we make decisions. Suppose that the last computer you owned was made by the same company as one of the desktops you are considering. If you had problems with that earlier computer, that may lead you to avoid buying the same brand again. On the other hand, if you thought your last computer was great, you may have a preference for that brand that goes above and beyond the specific criteria listed. Clearly, the consequences of past decisions may impact how you make your current choice.

Photo 12.4 The decoy effect: If we are given the choice of a small for $3, medium for $5, and large for $6, we will select the large more often than if the medium size was not an option.

Image

Tyler Olson/Shutterstock

Cognitive psychologists are not the only researchers who study decision making. Decision making is so central to our daily lives that understanding how and why we make decisions is of interest and importance in many areas. For example, people who sell us products are quite aware that our decision making is not always logical. Let’s extend our opening movie story one last time. Suppose that we get into the theater and have time to buy some refreshments. We get to the counter and ask for a medium popcorn. Rather than just giving you what you ordered straight away, the clerk makes the following offer: “Would you like to make that a large for just $1 more?” Given that the small is $3 and the medium is $5, getting a large for only $1 more may seem like a bargain that is too good to pass up, even though you don’t really want the large popcorn. This is a version of the “decoy effect.” If we were only given the choice between the small and large popcorns, then we probably wouldn’t have really considered the large one. However, the presence of the medium size, priced close to the large size, dramatically increases how often we select the large popcorn. The medium size is presented primarily as a decoy, looking like a poor choice when compared to the large size and increasing the attractiveness of the large popcorn.

Prospect Theory

Kahneman and Tversky (1979) explained many of the heuristics and biases within a framework they called prospect theory. They noted that the biases in people’s decision making often resulted from the fact that we do not treat gains and losses equally. Generally, we treat losses as more important than gains (loss aversion). In other words, losing $100 is much more impactful than gaining $100. Additionally, the framework assumes diminishing returns; a gain or loss of $100 matters a lot if we have a balance of $1,000 in our account, but it matters very little if we have a balance of $100,000 in our account. Even though the change is $100 in both cases, the reference point impacts that $100. The theory also proposes that people tend to overweight low-probability outcomes and underweight high-probability outcomes (e.g., the odds of getting in an automobile accident are much higher than the odds of being in an airplane crash; however, more people fear flying than driving). This framework has been used to explain a wide variety of apparently irrational decisions. For example, businesses typically take little risk offering money-back guarantees because once people have a product, giving it up (a loss) is considered more aversive than the benefit of getting the money back (a gain).
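The shape of the prospect theory value function can be sketched in a few lines. The parameter values below (curvature around 0.88, loss-aversion coefficient around 2.25) come from Tversky and Kahneman's later estimates; treat the code as an illustration of the function's shape, not a fit to any particular data set:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function over gains and losses relative
    to a reference point. Concave for gains, convex for losses
    (diminishing sensitivity), and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: a $100 loss hurts more than a $100 gain helps.
print(abs(value(-100)) > value(100))  # True

# Diminishing sensitivity: moving from $0 to $100 matters more than
# moving from $1,000 to $1,100, even though both changes are $100.
print(value(100) - value(0) > value(1100) - value(1000))  # True
```

Both printed comparisons correspond to claims in the text: losses loom larger than gains, and the same $100 change matters less the farther it sits from the reference point.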

Dual-Process Framework

The finding that our choices often don’t seem to follow logical, analytical models of decision making doesn’t mean that we can’t make decisions that way. It does, however, suggest that we can make decisions many different ways. The dual-process framework, discussed earlier in this chapter for reasoning purposes, has also been proposed to explain why we may reason differently at different times (e.g., Evans, 2008; Kahneman, 2011).

Wilson and Schooler (1991) selected five varieties of jams that had been independently rated by experts for quality. The jams they selected ranged from the top-ranked jam down to one of the worst. In one condition, they asked college students to taste the jams, think about what they liked and didn’t like about them, and then to rate the jams for taste. The students’ ratings looked very different from those of the experts. In another condition, with a different set of tasters, the researchers asked them to taste the jams, answer some questions about why they selected their college major, and then rate the jams. The key difference between the conditions was that the raters in the second condition didn’t think about why they liked or disliked the jams. Their ratings closely matched those of the experts. Within the dual-process approach, these results are interpreted as reflecting decisions made using two different decision-making systems. When asked to think deliberately and analytically about why they made their judgments about jams, the tasters engaged their System 2 thinking. When asked just to make preference ratings without thinking about the reasoning behind those ratings, the tasters used their System 1 thinking.

This means that how we make decisions in everyday situations can vary depending on what we think about while making those decisions. Dijksterhuis (2004; Dijksterhuis & Nordgren, 2006) has suggested that the more automatic, unconscious System 1 thinking can sometimes result in better decisions. In one of his studies, subjects were asked to consider apartments or roommates from a list. Alternatives were presented with the pros and cons of each one. The alternatives were designed by the researchers to have one best option and one worst option based on the relative number of pros and cons. Subjects then made decisions (directly or by rating the alternatives) immediately after the alternatives were presented, after a few minutes of conscious thought about the decision, or after completing a distractor task that allowed them to consider the alternatives unconsciously (but not consciously). In all of the experiments, subjects in the unconscious consideration condition made the best decisions (i.e., chose the “best” alternatives more often or rated them most highly). These results suggest that everyday decision making may be better when it involves more unconscious processing rather than conscious, deliberate thought.

Future Advances in Theories of Reasoning and Decision Making

Understanding how we reason and make decisions under uncertainty is of interest not only to cognitive psychologists. One of the newly emerging multidisciplinary approaches has brought psychologists, neuroscientists, and economists together to develop the field of neuroeconomics (Loewenstein, Rick, & Cohen, 2008; Rustichini, 2009; Sanfey, Loewenstein, McClure, & Cohen, 2006). Researchers in this field are attempting to develop theories about the neural circuitry that underlies our reasoning and decision-making behaviors. However, these behaviors are extremely complex, involving interactions between systems of memory, knowledge representation, language, attention, and perception. Our understanding of the underlying neural circuitry is still in the very early stages, and many researchers recommend caution in using neuroscience findings to interpret theoretical claims (e.g., Del Pinal & Nathan, 2013; Goel, 2007; Henson, 2005, 2006; Poldrack, 2006; Rick, 2011). There is likely no single, unitary reasoning or decision-making system in the brain but instead distributed systems that dynamically respond to particular task demands and environmental cues (Goel, 2007).

Consider, for example, some of the research examining the neuroscience of heuristics and biases. De Neys, Vartanian, and Goel (2008) created scenarios designed to produce conflicts between our probabilistic and heuristic ways of processing (similar to the Tom W story presented earlier in the chapter). Participants were presented with brief descriptions of studies and information about a person in those studies. Their task was to choose between two possibilities about that person based on the given information. In addition to recording their participants’ behavioral responses, the authors used fMRI to record their brain activity as they performed the task. Table 12.5 presents examples of the four conditions used in the study and the two types of cues given: stereotype cues and base-rate cues consistent with the probability values given in the problem. The story in the incongruent condition pits the base-rate cues (5 engineers and 995 lawyers) against the stereotypical cues (no interest in politics, conservative, likes math puzzles). The other three story types were control conditions. In the congruent control condition, one of the answers was consistent with both the base-rate and stereotype information. In the neutral control, there was no stereotypical information in the story, so it was expected that the base-rate information would cue the answer. In the final control, the base-rate information was the same for both groups (e.g., 500 people in each group), so it was expected that participants would base their responses on the heuristic information.
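To see why the base rates ought to dominate in the incongruent story, consider a quick Bayes’ rule sketch. The 5/995 split comes from the story; the likelihood values (how well the description fits each profession) are invented here for illustration and are not from the study.

```python
# Hypothetical illustration of why base rates should swamp the stereotype
# in the incongruent condition (5 engineers vs. 995 lawyers).
# The likelihood values below are invented for illustration: assume the
# stereotypical description fits 95% of engineers but only 5% of lawyers.

p_engineer = 5 / 1000          # base rate from the story
p_lawyer = 995 / 1000
p_desc_given_engineer = 0.95   # assumed, not from the study
p_desc_given_lawyer = 0.05     # assumed, not from the study

# Bayes' rule: P(engineer | description)
numerator = p_desc_given_engineer * p_engineer
evidence = numerator + p_desc_given_lawyer * p_lawyer
p_engineer_given_desc = numerator / evidence

print(f"P(engineer | description) = {p_engineer_given_desc:.3f}")
# Even with a strongly diagnostic stereotype, the posterior stays low
# (≈ 0.087), so "lawyer" remains the more probable answer.
```

Responding "engineer" on the basis of the description alone is exactly the representativeness bias: the stereotype is treated as if it outweighed a 199-to-1 base rate.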

Image

Figure 12.11 The Behavioral Results of the Decisions Made in the De Neys et al. (2008) Study

Image

The behavioral results are presented in Figure 12.11. As expected, across the three control conditions participants used the base-rate and stereotypic cues from the stories to make the correct decisions most of the time. In the incongruent condition, they sometimes used base-rate information and sometimes used stereotypic information. The authors examined the participants’ fMRI data in the right lateral prefrontal cortex (RLPFC; a region involved in response inhibition) and the anterior cingulate cortex (ACC; a region involved in conflict detection) while they completed the task. The results indicated that when participants in the incongruent condition gave base-rate cued answers, there was increased activation in the RLPFC, reflecting the inhibition of a stereotype-based response. This activation was not present when they made a stereotype-based response. Additionally, the ACC was activated whether participants made stereotypic or base-rate responses, indicating that they detected the conflict regardless of which response they gave. The ACC did not show this activation in the control conditions, where there were no conflicting cues. The authors interpreted these results as indicating that the bias of the representativeness heuristic results not from a failure to recognize conflicting information but from a failure to inhibit stereotypic responses.

While our understanding of the massively complex neural circuits involved in reasoning and decision making is still in its very early stages, the multidisciplinary collaborations that bring cognitive psychologists, economists, and neuroscientists together hold great promise. The integration of the insights, methods, and theories of these diverse disciplines is quickly advancing our understanding of how we reason and make decisions.

Thinking About Research

As you read the following summary of a research study in psychology, think about the following questions:

1. What kind of reasoning is being examined in this study?

2. What are the independent variables in this study?

3. What are the dependent variables in this study?

4. What alternative explanations can you come up with to explain the results of this study?

Study Reference

De Neys, W. (2006). Dual processing in reasoning: Two systems but one reasoner. Psychological Science, 17(5), 428–433.

Purpose of the study: The research was designed to examine the impact of working-memory capacity on syllogistic reasoning. The dual-process description of reasoning was tested using a task where cognitive load was manipulated.

Method of the study: Participants were asked to evaluate syllogistic arguments such as the following:

All fruits can be eaten.

Hamburgers can be eaten.

Therefore, hamburgers are fruit.

They judged whether the conclusion follows or does not follow logically from the premises. In some of the problems, the logically correct response conflicted with common beliefs; in others, logic and belief were consistent. To manipulate cognitive load, participants also performed a dot memory task: dot patterns were briefly presented, and participants were asked to reproduce them. High-load trials used complex four-dot patterns, while low-load trials used simple three-dot patterns. Finally, the participants’ working-memory spans were assessed with a task that required recalling word lists while solving simple math problems.

Figure 12.12 Results of the De Neys (2006) Study. Reasoning Performance for High-, Medium-, and Low-Span Participants as a Function of Cognitive Load and Belief Consistency

Image

Source: De Neys (2006, figure 2).

Results of the study: The results are presented in Figure 12.12. In the belief-consistent conditions, there were no effects of working-memory span or cognitive load. In contrast, when the conclusions conflicted with belief, there was an effect of cognitive load: As load increased, reasoning performance decreased.

Conclusions of the study: The author concluded that the results support a dual-process model of reasoning. In the absence of belief conflict, reasoning is performed through automatic, resource-free processing. However, the presence of belief conflict requires slower, resource-demanding processing.
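The hamburger syllogism above has the invalid categorical form “All A are B; all C are B; therefore all C are A.” A small brute-force search over possible “worlds” (our own illustration, not part of De Neys’s materials) demonstrates the invalidity by finding a counterexample in which both premises hold but the conclusion fails:

```python
# Brute-force counterexample search for the categorical form:
#   All A are B;  all C are B;  therefore all C are A.
# If any assignment of category memberships makes both premises true
# but the conclusion false, the form is invalid.
from itertools import product

def find_counterexample():
    # Each object type is a triple of booleans: (in A, in B, in C).
    # A "world" is any subset of the 8 possible object types.
    objects = list(product([False, True], repeat=3))
    for world_bits in product([False, True], repeat=len(objects)):
        world = [o for o, present in zip(objects, world_bits) if present]
        all_a_are_b = all(b for a, b, c in world if a)
        all_c_are_b = all(b for a, b, c in world if c)
        conclusion = all(a for a, b, c in world if c)
        if all_a_are_b and all_c_are_b and not conclusion:
            return world  # premises true, conclusion false
    return None

counterexample = find_counterexample()
print("invalid form" if counterexample else "valid form")  # prints "invalid form"
```

One such counterexample is a world containing an edible hamburger that is not a fruit: both premises are satisfied, yet the conclusion is false, which is exactly what invalidity means.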

Chapter Review

Summary

· How logical are the conclusions you draw?

Aristotle and other ancient Greeks established most of what we consider the formal rules of logic. We can use these rules to draw logical conclusions and evaluate formal arguments. However, we don’t always follow these formal rules of logic. Instead, our reasoning behavior reflects the cognitive processes we use to reason and is affected by the limitations and biases of these processes.

· Why are some things harder to reason about than others?

The rules of deductive logic are generally independent of the content of arguments. However, the contents of arguments often do impact our reasoning because of our knowledge about and experience with those contents. Sometimes that knowledge and experience facilitate our reasoning, but in other situations they interfere. Everyday reasoning is often more difficult because everyday arguments tend to be less clearly defined than typical formal arguments.

· How and when do we make inferences about causal relations?

When events co-occur in time and/or space, we often infer a causal relationship between the events. However, another important factor is whether or not we can easily infer a mechanism for the causal relationship between the events.

· What steps do we go through when we make decisions?

Decision making involves five phases: setting goals, gathering information, structuring the decision, making a final choice, and evaluation of the process.

· Do we always make the best choices?

Under ideal conditions we consider all of the available options across all of the relevant conditions when we make decisions. However, often the conditions are not ideal. Because we make decisions using our cognitive processes, our decisions are constrained by those processes. We often use heuristic shortcuts to reduce cognitive demands.

Chapter Quiz

1. Using logical rules about the validity of an argument that draws a conclusion based on general information is an example of

1. representational reasoning.

2. deductive reasoning.

3. inductive reasoning.

4. analogical reasoning.

2. Drawing a conclusion about general properties based on specific data information is an example of

1. representational reasoning.

2. deductive reasoning.

3. inductive reasoning.

4. analogical reasoning.

3. Drawing conclusions by using the structure of one conceptual domain to interpret another domain is an example of

1. representational reasoning.

2. deductive reasoning.

3. intuitive reasoning.

4. analogical reasoning.

4. An example of a counterfactual reasoning problem is the following:

1. If all fire trucks are red, and my truck is red, then my truck is a fire truck.

2. If I had decided to become a firefighter instead of professor, then I would be in better physical condition.

3. If there are five firefighters and seven police officers in a room and two people walk out of the room, what is the probability that one is a firefighter and the other is a police officer?

4. Firefighter is to water hose as police officer is to __________?

5. Galotti describes decision making as consisting of which five phases?

1. setting goals, gathering information, structuring the decision, making a final choice, evaluation

2. identifying the purpose, determining the representation, estimating probabilities, adjusting expectations, learning from past mistakes

3. understanding the premise, building mental models, combining mental models, ruling out invalid combinations, evaluating the final model

4. identifying representative choices, retrieving available resources, framing the decision, applying heuristics, evaluation

6. Wilson and Schooler’s jam experiment demonstrated that

1. students and expert tasters never show the same pattern of preferences.

2. students and expert tasters always show the same pattern of preferences.

3. students and expert tasters sometimes show the same pattern of preferences.

4. magazines that report the opinion of experts have no value for our everyday decisions.

7. What are the possible mental models for the statement “Some cars are Fords”?

8. When considering whether two events are related to each other causally, what factors should you consider?

9. Explain the elimination-by-aspects strategy for making decisions.

10. Using stereotypes to make decisions about people is likely to involve the use of what heuristic?

Key Terms

· Availability bias 337

· Conditional reasoning (propositional reasoning) 321

· Deductive reasoning 318

· Dual-process framework 328

· Framing bias 338

· Inductive reasoning 318

· Representativeness bias 337

· Syllogistic reasoning 319

Stop and Think Answers

· 12.1. Are the arguments discussed in the Conditional Reasoning section representative of the kinds of arguments you face in your day-to-day experience? If not, how are they different?

Answers will vary.

· 12.2. Consider the following argument: All Introduction to Psychology courses are taught in large sections, and all large-section courses use multiple-choice exams. Therefore my Introduction to Psychology course will use multiple-choice exams. Is this a valid argument? Do you think it follows the “rules of logic”? Would you change your mind if you learn that the section of your course is being taught by a new professor?

The argument is valid because the conclusion follows from the rules of deductive logic. Answers will vary, but many people change their answer if they think there is a chance that a “new professor” might not use a multiple-choice exam. If they do this, they are challenging the truth of the premises rather than the validity of the argument.

· 12.3. Consider the following argument: If I study every term on the review sheet, then I will get an A on the exam. I studied hard, therefore I will get an A. Is this a valid argument? Do you think that it follows the “rules of logic”?

Notice that the first part included “If I study every term on the review sheet,” but the given information in the second part is “I studied hard.” While these two may be related, they are not equivalent, so this is not a valid argument.

· 12.4. Do you generally consider yourself a “logical thinker”? When you reason about things, do you usually think through all aspects of an argument, or do you usually focus on just a few?

Answers will vary.

· 12.5. When you make reasoned arguments, what sort of representations do you think you use? Does it feel like you are using something like those proposed by the mental logic or the mental models approach?

Answers will vary.

· 12.6. Do you think the approach you take when reasoning depends on what you are reasoning about?

Answers will vary.

· 12.7. Try to think of an example of causal reasoning you did today. How strong is the covariation between the cause and effect events? Did you consider how often they did not co-occur as well as how often they did?

Answers will vary.

· 12.8. Considering the same example that you came up with in Stop and Think 12.7, did you consider alternative hypotheses about the causal effect? Did you engage in counterfactual thinking and ask yourself “What if I had done something else instead?” How does thinking about alternative causes and “what ifs” impact your causal reasoning?

Answers will vary.

· 12.9. Think back to how you reasoned through the various versions of the four-card problems. How did the reasoning you used to answer those questions compare to how you reason about things in your day-to-day life?

Answers will vary.

· 12.10. Think back to the last time you made a major purchase or decision (e.g., buying a car, renting a particular apartment, deciding what to major in). What factors did you consider when you made that decision? What kind of information did you gather? How did you combine that information to arrive at your decision?

Answers will vary.

· 12.11. Think back to a relatively minor decision (e.g., what to eat for breakfast, what to wear today). What factors did you consider when you made that decision? What kind of information did you gather? How did you combine that information to arrive at your decision?

Answers will vary.

· 12.12. How do the decision processes differ between your answers in Stop and Think 12.10 and 12.11?

Answers will vary.

· 12.13. Suppose that you are trying to decide whether to get renter’s insurance. You recently read a report in the paper that crime rates across the nation are at an all-time low. However, two of your friends recently had their apartment robbed. You go ahead and decide to pay for the insurance. Do you think that your choice may have been biased?

Answers will vary, but the availability heuristic is likely to have influenced the decision.

· 12.14. Can you think of any real-life decisions you have made that may have been the result of framing? If you were to be in the same situation again, do you think you would make the same decision?

Answers will vary.

· 12.15. Terrorist activities are often big topics of world news but are relatively rarely local news stories. Given what you know about the availability bias, how do you think that this impacts our perceptions about the dangers of terrorism to ourselves?

Most acts of terrorism are intended to be frightening and memorable. They also tend to get a lot of coverage from news media. At the same time, they are relatively rare events, and people are much more likely to die from other causes (see Figure 12.9). However, because the heavy media coverage makes these events highly memorable and available, people often believe that these events are likely to directly impact their lives even when they are very unlikely to occur.

Student Study Site


edge.sagepub.com/mcbridecp2e

SAGE edge offers a robust online environment featuring an impressive array of free tools and resources for review, study, and further exploration, keeping both instructors and students on the cutting edge of teaching and learning.
