Smarter Groups Are More Cooperative

The ability to recognize the other player from past interactions, and to remember the relevant features of those interactions, is necessary to sustain cooperation.

ROBERT AXELROD, THE EVOLUTION OF COOPERATION1

WE HUMANS ARE A SOCIAL SPECIES: we rely on each other to get things done. Whether it’s building a car, creating a happy marriage, or holding a potluck dinner at church, we usually need to cooperate in order to achieve the big successes in life. But cooperation is hard. Economics professor Paul Seabright makes this point in his excellent book, The Company of Strangers:

Nowhere else in nature do unrelated members of the same species—genetic rivals incited by instinct and history to fight one another—cooperate on projects of such complexity and requiring such a high degree of mutual trust as in the human species.2

Why is cooperation so hard? Because cooperating is often against your own best interest. When you’re going to a potluck dinner, the smart thing to do is to bring a bag of chips while sampling other people’s delicious casseroles. At some point before you arrive, you might think, “If everyone does that, then all we’ll have at the potluck is twenty bags of chips.” And that’s true enough, but you have no influence over whether those other nineteen people bring chips or casseroles, so why not do what’s best for yourself: chips it is.

This is one example of the famous “prisoner’s dilemma,” in which individual greed leads to an awful group outcome. Prisoner’s dilemmas are everywhere, and they’re the precise opposite of Adam Smith’s famous “invisible hand,” in which individual greed leads to a positive group outcome. Invisible hands and prisoner’s dilemmas are both at work in the world: sometimes, as Gordon Gekko said in the movie Wall Street, “Greed is good,” and sometimes greed creates misery. In this and the next chapter, we’ll see how greed can create misery, and we’ll see how higher-IQ groups are just a bit more likely to find a way to cooperate, a bit more likely to avoid the prisoner’s dilemma.

The Real Prisoner’s Dilemma

First off, let’s go back to the source—the classic economic example of when greed is bad. You and your accomplice rob a bank, and a few hours later, you both get picked up by the police and put into separate interrogation rooms. The cops offer you a deal: if you cooperate and rat out your accomplice while your accomplice keeps her mouth shut, you’ll get to walk and she’ll get ten years. But the opposite is also true: If she talks and you’re quiet, you’ll get the ten years and she walks. Now, if you both talk, there’ll be enough evidence that the cops will put you both away for five years. And if you both keep your mouths shut, the cops put you both away for a year on a minor weapons possession charge.

So, what’s the rational thing to do? Well, if you think your accomplice will talk, you’re facing a choice between ten years if you’re quiet and five years if you talk. Five is less than ten, so the thing to do is spill the beans. And if you think your accomplice will be loyal to you, so she’ll keep her mouth shut, then you get to walk free if you sing like a canary and you get one year if you keep quiet. Zero is less than one, so you talk. So regardless of what you think your accomplice will do, the right thing to do, the greedy thing to do, is to talk. Easy choice, right? Yes, it is. But just remember: your accomplice got the same deal. So she’s going to make the same decision: she’s going to talk and you’re going to talk and you’re both going to get five years in prison. A grim outcome.
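
To see the dominance logic at a glance, here is a minimal sketch in Python; the payoffs (years in prison) come straight from the story above, and checking each case confirms that talking is the dominant strategy:

```python
# Payoffs are years in prison (lower is better), indexed by
# (my choice, accomplice's choice); "talk" = defect, "quiet" = cooperate.
YEARS = {
    ("talk", "talk"): 5,
    ("talk", "quiet"): 0,
    ("quiet", "talk"): 10,
    ("quiet", "quiet"): 1,
}

# Whatever the accomplice does, compare my two options.
for accomplice in ("talk", "quiet"):
    best = min(("talk", "quiet"), key=lambda me: YEARS[(me, accomplice)])
    print(f"If my accomplice plays {accomplice!r}, my best reply is {best!r}")
# Both lines print 'talk': talking dominates, even though mutual silence
# (1 year each) beats mutual talking (5 years each).
```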

Note that this prisoner’s dilemma is essentially the same situation as the church potluck: regardless of what the other people are doing, the right thing for me to do is to try to get the best deal I can. And here we see the problem: if everyone just goes ahead and acts in his or her own individual best interest, we get an awful outcome. It’s something students and office workers see on team projects: people just try to coast by on the efforts of others. They know that work is hard, and that any one person’s effort isn’t going to do that much to change the outcome. Any time many people each face a decision in which one person’s sacrifice mostly helps the others, you’re in a prisoner’s dilemma. And doesn’t that sound like a lot of modern life? Consider two more examples from the field of politics. A politician steers government funds to his supporters: What could be more rational? Or a politician spends his time on cable news, building up a national reputation as a radical reformer rather than quietly supporting a practical reform that might actually get enacted. In both cases self-interest drives the outcome. The rationality of pursuing self-interest even when it hurts society is precisely what makes reasonably fair, even-handed political outcomes such a puzzle. When short-run self-interest naturally leads to awful outcomes, it’s a wonder that we ever see good political outcomes.

I’ll save most of the political implications for the next chapter. For now, think about a married couple’s decision to be sexually exclusive, to be faithful. Notice—I just treated it as if it were a group decision—the couple’s decision—but groups can’t make decisions, only individuals can. Let’s be conventional, and think of this as a heterosexual married couple, since it makes it easier to follow the story with pronouns. The man is trying to decide whether to be faithful or cheat. The woman is facing the same decision. For the moment, we’ll think of this as a one-time decision: a decision to choose cheating as a lifestyle. The best outcome for the man is for him to cheat while his wife is faithful. The best outcome for the woman is the reverse: she has the occasional fling while her husband stays at home playing video games every night. Now, what if the husband thinks his wife is cheating? Well, if it was a good idea for him to cheat when she was faithful, it’s an even better idea for him to cheat if she’s gallivanting around town, like the heartbreaker in a country song. So whether she’s faithful or not, the best decision for the husband is to cheat—and we’re back in the prisoner’s dilemma.

If deciding whether to be faithful to your spouse is a once-and-for-all decision, the best strategy is clear: cheat. Yet in real life, people don’t cheat all that often. Most marriages in rich countries are fairly monogamous—perhaps half of married couples cheat over decades of being together. But that means that half of married couples don’t cheat—and even when marriages do involve infidelity, the affairs are largely short term. That’s far from the everyday cheating predicted by our simple story, in which, rationally, everyone should cheat whenever possible.

Part of the reason there’s so little cheating may be that divorce can be expensive, or that lots of spouses try to cheat but can’t find any willing partners. Nevertheless, there’s an enormous amount of cooperation within marriages, there are a lot of people who pick up their neighbors’ mail while the neighbors are on vacation, there are lots of people who bring casseroles to church potlucks—all without anybody really forcing anyone else to be kind. Why does this happen? One classic explanation is that life is what economists call a repeated game. And when the same two people play the prisoner’s dilemma game again and again, whether in a college laboratory or in the halls of Congress, something magical happens: pairs of players often learn to cooperate. Not always and not reliably, but many people do decide to take the ABBA song to heart and “take a chance” on trust, on bringing the casserole this week, on staying faithful this year. And sometimes it works out just fine.

Deciding to trust doesn’t mean acting naive: trust can be deeply shrewd. That’s because if you know you’re going to be playing the same prisoner’s dilemma game with the same partner every week (at the church potluck) or every year (deciding whether to have that fling during the annual sales meeting), then the game looks very different. All of a sudden, both you and your partner have a way to punish each other: if you’re mean to me this time around, I can be mean to you the next time around. As you give, so shall you receive; or as we usually put it, “tit for tat.” So I’ll take out the garbage as long as you do the dishes, but if you stop doing the dishes, I’ll let the garbage pile up. And if you start doing dishes again, I’ll go back to taking out the trash quickly. This kind of tacit cooperation is everywhere in personal relationships, in work relationships, in the neighborhood, and we take it for granted. Often enough, we’re nice because everyone else is making it easy to be nice.

Economists figured this out in the early days of the field known as “game theory”: once you turn a one-shot prisoner’s dilemma into a repeated game, it’s possible for selfish players to rationally cooperate with each other, not out of a sense of generosity but out of pure self-interest. This result—that repetition can turn lemons into lemonade—is known as the “folk theorem.” That’s because it seemed fairly obvious once people started thinking about it, and no single economist was really willing to take credit for the idea.

One researcher—a political scientist, Robert Axelrod—went further than this. He saw repeated prisoner’s dilemmas (RPDs) everywhere in politics and society, and so he concluded that if he could find out how to get people cooperating rather than descending into bitter defection, he could help make the world a more peaceful place. It sounds a bit naive—but it was nothing of the sort. Axelrod’s research, summed up in his excellent book The Evolution of Cooperation, is still used by peace negotiators, labor-management mediators, and nuclear arms reduction experts.3 His is an agenda that has made the world a better, safer place. And it began by just taking the repeated prisoner’s dilemma seriously, so seriously that Axelrod decided to get a lot of social scientists together to play some games.

Axelrod ran a competition—not in real life, but on some 1970s-era computers. He invited social scientists, mathematicians, anyone interested to submit a simple computer program giving instructions to one of the electronic “players” in a two-person repeated prisoner’s dilemma game. The winner of the tournament would be the contestant whose program won the most points when pitted against the other computer programs. The most points, naturally, came when the other player cooperated while you defected; when both cooperated you got a good outcome, but not as good as when you were exploiting the other player.

So dozens of researchers proposed dozens of computer programs for the tournament. As you can imagine, some programs were quite sophisticated, looking for ways to dupe the other computerized player into cooperating so that the entrant could exploit his partner for at least a few rounds of the tournament. But not every program was sophisticated; in fact one program followed the simplest rule possible: “always cooperate.” Which computer program—which strategy—won the entire tournament? It’s known by the phrase I mentioned earlier: tit for tat.

It plays out something like this. In the first round, I’ll cooperate. After that, I just do whatever you did last time. So if you cooperated, then I’ll take a chance on cooperation: I’ll return good for good. If you defected, then I’ll return evil for evil: I’ll defect this time. But if you go back to cooperating, I’ll forget about your unfaithfulness, and I’ll go back to cooperating too. This simple strategy works well for many reasons, but three stand out. First, it opens the door to endless cooperation—and endless periods of decent-but-not-exciting rewards, just like most happy, faithful marriages. But second, it doesn’t leave you open to the possibility of endless exploitation. Unlike some long-suffering spouses you may know, a tit-for-tat player punishes defection swiftly and smartly. And third, the punishment ends as soon as kindness returns: if your partner mends his ways, you forgive and forget. Grudges are for the petty.
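
Here is a short, illustrative simulation of the strategy; the point values (3 each for mutual cooperation, 5 for exploiting a cooperator, 1 each for mutual defection, 0 for being exploited) are standard textbook payoffs, not necessarily the exact numbers from Axelrod’s tournament:

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first; thereafter copy the partner's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

# (my points, their points) for each pair of moves.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): endless cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then punished
```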

Tit for tat combines an open right hand with an armed left hand.4 In a society filled with tit-for-tatters, people would always cooperate, not because those people were doormats or naifs, but because any potential cheater would know that she would be quickly punished. So, tit for tat is a good strategy—something worth keeping in mind the next time you have an argument with the neighbors over who should fix the broken fence. But Axelrod wanted to do more than just pinpoint a good computer program: he tried to distill the essence of what made tit for tat and a few similar strategies work so well in order to convey those lessons to the world. He came up with some principles for encouraging cooperation in repeated prisoner’s dilemma settings. Three of them matter for us: think of them as the Three P’s of the RPD. Players should

1. Be patient: Focus on the long-term benefits of finding a way to cooperate—don’t just focus on the short-run pleasures, whether it’s the pleasure of exploitation or the pleasure of punishment. Axelrod calls this “extending the shadow of the future.”

2. Be pleasant: Start off nice—make sure those bared teeth are part of a smile. And later in the game, take the ABBA approach, and take a chance on cooperating every now and then, even when things have gone south for a while.

3. Be perceptive: Figure out what game you’re playing—know the rules, and know the benefits and costs of cooperation.

I claim that people with higher IQs will be better at all three. That higher-IQ players tend to follow the third piece of advice, “Be perceptive,” is almost obvious: higher-IQ individuals are just more likely to get it, to grok the key ideas, as sci-fi writer Robert Heinlein used to say. Not always, not perfectly, but as we saw in the coda to Chapter 1, on average individuals with high IQ are better at grokking the rules of the social game: they’re more socially intelligent. And as we saw in the last chapter, IQ also tends to predict patient behavior. Those who see the patterns in the Raven’s Progressive Matrices also see the future. That means that in a repeated prisoner’s dilemma, they’ll tend to focus on the rewards of long-term cooperation, not the short-term thrills of punishment or exploitation.

My final claim is that higher-IQ people are nicer than most other people—at least when they’re in settings such as the repeated prisoner’s dilemma. Can that really be the case? You might expect higher-IQ people to be a little meaner in some cases—they might try to exploit people if they figure out a way to do so. That might be important in some settings, but there are three interesting new experiments that show how high IQ predicts generosity.

Economist Aldo Rustichini and his coauthors gave IQ tests to a thousand people enrolled in a truck-driving school, and then they had them play a trust game.5 A typical trust game—first invented by my George Mason colleague Kevin McCabe and his coauthors—works like this. The game has just two players, each making one choice. They can’t see each other, and they never know who they’re actually playing; in most cases, they’re just facing a computer terminal. First, Player 1 starts with $5; he then decides how much of his money (if any!) to send to Player 2 and how much to keep for himself. If some of the money is sent over, the money sent magically triples in value. So if Player 1 sent over $2, Player 2 now has $6. Player 2 now gets to decide how much money to return to Player 1; she can return nothing and keep all $6, she can return all $6 and keep nothing for herself, or she can do something in between. Since McCabe and coauthors invented this experiment, it’s been run numerous times: the typical Player 2 returns just about the amount that Player 1 sent over—in other words, the average person is trustworthy, but no philanthropist.
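
For concreteness, here is a minimal sketch of one round of such a trust game; the $5 stake and the tripling rule come from the description above, while the function names and sample numbers are merely illustrative:

```python
def trust_game(sent, returned_fraction, stake=5, multiplier=3):
    """Player 1 sends part of the stake; it triples; Player 2 returns a share."""
    assert 0 <= sent <= stake
    pot = sent * multiplier              # money sent triples in value
    returned = pot * returned_fraction   # Player 2 decides what to give back
    player1 = stake - sent + returned
    player2 = pot - returned
    return player1, player2

# The text's example: Player 1 sends $2, so Player 2 holds $6.
# If Player 2 returns about what was sent ($2 of the $6), Player 1 breaks even.
print(trust_game(sent=2, returned_fraction=1/3))  # (5.0, 4.0)
```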

Most people are interested in the question of “Who reciprocates? Who is trustworthy?” But here we’re interested not in Player 2 but in Player 1: Who’s the biggest sucker? Who takes the chance on sending money over—without a formal contract, without being able to even see the other person? Wouldn’t we expect players with lower IQ scores to naively send over cash, in the hope that Player 2 will be generous? Wouldn’t we expect a higher-IQ Player 1 to figure out that Player 2 has no incentive to be kind? We might, but in fact, Rustichini found just the opposite: the higher-IQ students in truck-driving school sent over more money than their classmates with lower IQs. So smarter players are more likely to start off by playing nice. This result—that IQ predicts “generous” or “nice” behavior—was backed up by a German study of a team-effort problem: a few players are each given a few Euros, and they each have to decide how much to chip in to the pot.6 If the total amount chipped in is greater than, say, 10€, then the pot doubles, and the amount in the pot is split equally among all the players; if not, the pot evaporates, with nobody getting anything except the money they held out of the pot. In this study, higher-IQ players put more into the pot. They may have done it out of kindness to others, or they may have done it because they shrewdly calculated that they had a decent chance of being the donor who pushed the pot over the 10€ threshold; either way, it’s hard to tell what their motives were. But in any case, smarter players chipped in more, and what they chipped in helped everyone in the group. They were more pleasant.
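
A sketch of that team-effort game under the rules just described (a 10€ threshold and a doubling pot); the endowments and sample contributions are illustrative assumptions:

```python
def threshold_game(contributions, endowment=5, threshold=10):
    """Each player keeps (endowment - contribution) plus an equal share of the pot."""
    total = sum(contributions)
    if total >= threshold:
        share = total * 2 / len(contributions)  # pot doubles, split equally
    else:
        share = 0                               # pot evaporates
    return [endowment - c + share for c in contributions]

print(threshold_game([4, 4, 3]))  # pot = 11 >= 10: everyone comes out ahead
print(threshold_game([3, 3, 3]))  # pot = 9 < 10: contributions are simply lost
```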

As a side note, Rustichini’s truck-driver study also looked at how the Player 2s behaved: higher-IQ Player 2s were more likely to reciprocate. In other words, higher-IQ players were more likely to return good for good, evil for evil. That means that, in this experiment, higher-IQ players were the enforcers. They enforced the norm of reciprocity even when it cost money to do so.

Another study by Brown University economist Louis Putterman and his coauthors found still more evidence that higher-IQ individuals are more likely to start off by playing nice, by being generous team players.7 In this game, known as the public goods game, players individually decide how much of their own money to put in a metaphorical pot, the money doubles or triples, and then it gets divided up among the group. When you give money, you’re directly contributing to the public good. The game was repeated for a few rounds with the same team so players would have a chance to learn from each other, a chance to find a path to cooperation.

As this was run at Brown University, an Ivy League school where one might expect that almost all students were raised in incredibly advantaged environments, it might seem that differences in IQ scores would be irrelevant. But in Putterman’s cooperation experiment, IQ mattered. He and his coauthors found that higher-IQ students at Brown put more money in the pot during the early rounds of the game: the higher-IQ students were more pleasant early on. That’s the smart thing to do, because extra money early on can send a signal of kindness, of cooperativeness, to the other players. And it’s worth noting that in another part of the experiment, when the students could vote on a way to penalize low contributors, higher-IQ students were more likely to vote for a rule that would penalize the non-cooperators: so higher-IQ students were pleasant, but not naive.

Intelligence as a Way to Read the Minds of Others

So people with higher test scores tend to have more of the Three P’s of the RPD. But just how socially perceptive are higher-IQ people? After all, being nice in a lab experiment might not translate into real-world social interactions, and while IQ predicts social intelligence in surveys, it would be good to have a concrete test of social perceptiveness. One test by economist David Cesarini and his coauthors illustrates the ability of higher-IQ individuals to understand the minds of others.8 The Keynesian Beauty Contest, as it is known, is a game in which all the players are asked to pick a number from zero to one hundred. A prize is given to the person whose guess is closest to, say, one-half of the group’s average guess; in the event of a tie, the prize is split among the best guesses. So if almost everyone chose fifty but just one person chose thirty, that lower guess would win. If the players were all perfectly rational, and they knew that everyone else in the game was equally rational, they would realize that the winning answer would be the only number that is exactly one-half of itself: zero.

But people aren’t perfectly rational and—here’s the good part—people who are more rational are more likely to be aware of just how irrational most people are. So while the weaker players would pick numbers close to randomly—guessing on average fifty or a little below—someone better-skilled might realize that the group combines some sharper players with some weaker players, and so submit a guess quite a bit lower than fifty. But isn’t there a chance that higher-IQ players make the mistake of thinking that everyone is as smart as they are? Or might they overthink the situation, foolishly submitting zero as the right answer? In a study of Swedes, Cesarini and coauthors found that players with the highest IQs submitted numbers that were low but not too low; indeed, they gave answers that were strikingly close to the best possible answer. By contrast, players in the bottom of the IQ distribution gave answers that tended to be far too high. IQ predicted not just individual rationality but a better view into the minds of others. A later study came to the same conclusion using another IQ-type test.9
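
One common way to model this kind of reasoning is “level-k” thinking, sketched below; the level labels and the naive starting guess of fifty are modeling assumptions, not details from Cesarini’s study:

```python
def level_k_guess(k, naive_guess=50.0, factor=0.5):
    """Level 0 guesses naively; each higher level best-responds to the one below."""
    guess = naive_guess
    for _ in range(k):
        guess *= factor  # best reply to a crowd averaging `guess`
    return guess

for k in range(5):
    print(f"level {k}: guess {level_k_guess(k):.1f}")
# 50.0, 25.0, 12.5, 6.2, 3.1: converging toward the fully rational answer
# of zero, while real winners usually sit a step or two above zero.
```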

Overall, mental test scores predict the ability to understand the minds of others. And you might wonder: Why is it called the Keynesian Beauty Contest? It’s based on a story the legendary economist John Maynard Keynes once told. A British newspaper published photos of women and had an accompanying contest: the winners would be the contestants who chose the photo that was most often selected by everyone else. So while at first glance you might think it’s a game of “pick the prettiest woman,” the real goal is instead “pick the woman that I think everyone else will be picking.” Keynes thought this was a useful metaphor for the stock market: while it seems prudent at first to invest in companies with strong objective prospects, Keynes thought the real game was in picking the companies that everyone else was soon going to be picking. Keynes believed that reading the crowd drew on a cognitively demanding skill: skill at understanding other humans.

High-SAT Schools and Cooperation

Axelrod pointed out three paths to cooperation—patience, pleasantness, and perceptiveness—and we’ve seen that higher-IQ people tend to follow all three pieces of advice in experimental settings. But wouldn’t it be nice if there were some actual evidence that high-intelligence groups cooperate more often in a genuine repeated prisoner’s dilemma? That’s what I thought to myself in 2004, when I first started thinking about the link between repeated games, IQ, and productivity. I looked through the vast literature on repeated prisoner’s dilemmas, trying to see if someone, somewhere, had run a repeated game and then given the players an IQ test—or even asked for the students’ SAT scores. I found almost nothing.

But I did eventually find one exception: one study had twins play a repeated prisoner’s dilemma game against each other for a hundred rounds, and it found that higher-IQ pairs of twins did cooperate more often than lower-IQ pairs of twins.10 This was indeed one piece of evidence that higher-IQ people cooperate more. But since the subjects were knowingly playing against their own siblings, I was reluctant to generalize to society at large.

Since I wasn’t in a position to run my own experiments at the time, I instead began a program of collecting academic articles on dozens of repeated prisoner’s dilemma experiments run at different U.S. universities.11 The plan was simple: record the average rate of cooperation in each study along with a few other experimental characteristics—whether subjects played for cash, how many rounds the games lasted, whether the school was public or private, and so on. Then, record the average SAT score at that school both in the 1960s and early 1970s (when most of these experiments were run, but few schools reported average SAT scores) and today (when more data were publicly available).

The results? Students at schools with high average SAT scores cooperated more often than students at low-SAT schools. The relationship was somewhere between modest and strong, and it still held, though not quite as strongly, when you took account of the fact that some schools were private (so maybe classes were smaller and students knew each other) and the fact that some experiments used real money rather than fake points. Years later, I revised and expanded the collection of experiments and confirmed the findings: on average, it looked like smarter groups really were more cooperative.

Of course, there could just be something special about being at a high-SAT school that makes people want to cooperate, something other than the cognitive skills of the students. Maybe there’s a stronger campus culture at elite schools. (If so, that might be evidence that groups with higher cognitive skills tend to build more tightly knit cultures—a possibility worth exploring in its own right.) Or maybe professors at lower-scoring schools made the prisoner’s dilemma experiments harder in some way that was difficult to measure. As I noted in the original paper, a study looking at cooperation rates across different universities could only offer a “prima facie” case that group test scores cause group cooperation. It would be good to know what would happen if you really ran an experiment in which you had some college students play a repeated prisoner’s dilemma game and then had them take an IQ test.

That’s just what I later did with the help of my colleagues Omar al-Ubaydli and Jaap Weel.12 They’re both experimental economists—the kind who spend time handing out cash to college students in order to get them to jump through carefully designed hoops. We had students show up in groups of about eight at a time for our experiment. Then we randomly paired students to play a ten-round repeated prisoner’s dilemma game against each other over a computer. They never saw who they were playing against, but they did know they were playing a ten-round game. However, they were never told the name of the game they were playing. In an experiment, you don’t want to tell students, “Option 1 is ‘Cooperate,’ and Option 2 is ‘Cheat,’ which would you prefer?” That’s known as “priming the subject” or “experimenter demand.” Students usually want to show that they’re good, moral people, so they’re more likely to choose the nice response if you label it as the nice response.

So instead, we just showed them the payoffs from different actions on the screen and then gave them a choice of playing, say, “blue” or “green” rather than “cooperate” or “cheat.” And how did they play? Just as you would expect on the basis of the Three P’s of the RPD: pairs with high average IQs were much more likely to cooperate—a fifteen-point rise in the pair’s average IQ predicted an 11 percent rise in the chance that both players cooperated. Smarter pairs just found a way to make it work.

The next question is why they cooperate. How do they make it work? In our ten-round game, we found that Round 2 was quite special: in that round, higher-IQ individuals apparently looked at what their opponent had done in Round 1, and followed something like tit for tat. If their opponent had cooperated, the more intelligent were more likely than the less intelligent to return the cooperation. So in Round 2, higher-IQ players were more likely to follow the norm of reciprocity; they acted like “conditional cooperators.”13 And in the world of cooperation experiments, conditional cooperators, players who are willing to play nice but only if others are playing nice, are a key ingredient in building cooperative groups.14

My finding with Omar and Jaap is similar to the finding from the truck-driver study. In both cases, higher-IQ players tended to be nicer, more generous, to someone who had recently treated them generously. Reciprocity is so important to explaining human behavior that economists Samuel Bowles and Herbert Gintis, who were mentioned in Chapter 1 and who both have a long history of studying human origins, sometimes refer to human beings not as homo sapiens—man the knower—but as homo reciprocans—man the reciprocator.15 And in a variety of settings, it appears that people with higher test scores are more likely to be reciprocators.

And here’s the most exciting result from my experiments with Omar and Jaap: on average, over the course of the entire experiment, the pair’s average IQ mattered about five times more than either player’s own IQ in predicting cooperation. The link between IQ and cooperation was an emergent phenomenon; it arose not from smart individual players but from smart pairs of players.

Recently, Aldo Rustichini and coauthors ran their own repeated prisoner’s dilemma game, and it confirmed our findings.16 Like us, they gave all the players Raven’s IQ tests. Unlike in our study, their game ended randomly, with an electronic flip of a coin, so players could never be sure which round was the last one. This helps keep alive the shadow of the future, the incentive to be kind because your actions today are shaping the reputation you’ll have tomorrow. Rustichini’s findings are right there in the title of the paper: “Higher Intelligence Groups Have Higher Cooperation Rates in the Repeated Prisoner’s Dilemma.” It’s possible that the link between IQ and cooperation won’t seem like any great surprise to you: these experiments are just games, an IQ test is a game, and people who are good at one kind of game are often good at other games. But life is a game as well.

Other similar experiments have looked at the link between IQ and game outcomes both in formal game theory experiments and in loosely structured negotiation games. In negotiation games—popular in business schools—two players pretend to work out the details of a construction deal or a product delivery schedule or some other similarly complicated and loose business deal. I discussed one such experiment in the Introduction. After the experiment is over, the experimenter can check and see: Did individual students with higher SAT or GRE scores get a bigger slice of the pie? Did pairs of students with higher average scores tend to grow a bigger pie altogether? It turns out that the strongest result is that pair test scores predict a bigger pie overall. Smarter pairs leave less money on the table on average: they find more win-win deals. There’s some evidence overall that higher-scoring individual players get a bigger slice of a fixed pie, but the more interesting and more robust evidence is that higher-scoring pairs bake a bigger pie in the first place. There have been enough of these studies—both the formal prisoner’s-dilemma-style games and the informal negotiation games—that one group of authors was able to perform a meta-analysis.17 They checked to see if, taken as a whole, looking across many studies, IQ-type tests were good predictors of cooperative behavior. The answer: yes, higher standardized test scores tend to predict win-win behavior.

Machiavelli and the Mind

Such cooperative tendencies [among early humans] probably evolved in two main ways. First, they are a by-product of the evolution of intelligence. As human intelligence developed, individuals could increasingly calculate that their long-term interest lay in keeping rather than breaking certain kinds of agreement. . . .

Paul Seabright, The Company of Strangers18

The positive link between IQ and cooperation is likely to be strongest in settings that involve an element of time, when there’s room for social feedback. Indeed, the one study of which I’m aware that finds that higher-IQ individuals are more cruel and less cooperative is a study of a one-shot prisoner’s dilemma, something much like the true criminal’s prisoner’s dilemma.19 Two strangers choose exactly once, simultaneously, to either cooperate or defect: this is the only setting I know of in which high scorers are more brutal than low scorers. To my mind, this finding—that higher IQ predicts cruelty in one-shot interactions but cooperation in long-run relationships—fits with the widely discussed concept of “Machiavellian intelligence.” The Machiavellian intelligence hypothesis is the theory that human intelligence exists partly to solve the most complex problem of all: the problem of living with other humans. In a one-shot environment, if it’s either steal or be robbed, and if the players will never see each other again, then I’d expect higher-IQ individuals to figure out what setting they’re in and act shrewdly, act cruelly. But when it’s a repeated game, or even when it just feels like a repeated game, with one player moving first and another player moving second, as in McCabe’s trust game, I’d often expect the higher-scoring individuals to “take a chance” on trust just to see how it works out. In the field of psychology it’s well known that higher IQ predicts greater openness to new experiences, a greater willingness to try new things.20 In addition to being more open to new things, the person with the higher test score is more likely to understand the rules, more likely to figure out when being nice is worth it and when it’s a fool’s errand, and more likely to cut her losses when the investment in kindness isn’t paying off. Assessing the situation: that’s a skill one would expect to be more common among people with higher test scores. If an entire group of higher-IQ individuals is together for a reasonably long period of time, we should expect them to find more win-win outcomes, growing a bigger pie that they can squabble over later.

I can’t tell you how many times I’ve met people from all walks of life who’ve told me that smarter people lack common sense, that they overthink and overstrategize issues to their detriment. If that were the case then smarter groups would likely turn out to be “too big for their britches” and collapse into endless rounds of cheating; failed attempts at exploitation; and continual, costly punishment. Certainly that happens sometimes, but on average, that is not the case. Now that we’ve seen that higher-IQ groups tend to be more cooperative, let’s take that idea from the laboratory into the world of politics.