Friends - Social Cement

The Moral Animal - Why We Are The Way We Are: The New Science of Evolutionary Psychology - Robert Wright 1995

[I]t is not a little remarkable that sympathy with the distresses of others should excite tears more freely than our own distress; and this certainly is the case. Many a man, from whose eyes no suffering of his own could wring a tear, has shed tears at the sufferings of a beloved friend.

— The Expression of the Emotions in Man and Animals (1872)1

Darwin, perhaps sensing the weakness of his main theory of the moral sentiments, threw in a second theory for good measure. During human evolution, he wrote in The Descent of Man, "as the reasoning powers and foresight ... became improved, each man would soon learn from experience that if he aided his fellow-men, he would commonly receive aid in return. From this low motive he might acquire the habit of aiding his fellows; and the habit of performing benevolent actions certainly strengthens the feeling of sympathy, which gives the first impulse to benevolent actions. Habits, moreover, followed during many generations probably tend to be inherited."2

That last sentence, of course, is wrong. We now know that habits are passed from parent to child by instruction or example, not via the genes. In fact, no life experiences (except, say, exposure to radiation) affect the genes handed down to offspring. The very beauty of Darwin's theory of natural selection, in its strict form, was that it didn't require the inheritance of acquired traits, as had previous evolutionary theories, such as Jean-Baptiste de Lamarck's. Darwin saw this beauty, and stressed mainly the pure version of his theory. But he was willing, especially as he grew older, to invoke more dubious mechanisms to solve especially nettlesome issues, such as the origin of the moral sentiments.

In 1966, George Williams suggested a way to make Darwin's musings about the evolutionary value of mutual assistance more useful: take out not only the last sentence, but also the part about "reasoning" and "foresight" and "learning." In Adaptation and Natural Selection, Williams recalled Darwin's reference to the "low motive" of doing favors in hopes of reciprocation and wrote: "I see no reason why a conscious motive need be involved. It is necessary that help provided to others be occasionally reciprocated if it is to be favored by natural selection. It is not necessary that either the giver or the receiver be aware of this." He continued, "Simply stated, an individual who maximizes his friendships and minimizes his antagonisms will have an evolutionary advantage, and selection should favor those characters that promote the optimization of personal relationships."3

Williams's basic point (which Darwin certainly understood, and stressed in other contexts)4 is one we've encountered before. Animals, including people, often execute evolutionary logic not via conscious calculation, but by following their feelings, which were designed as logic executers. In this case, Williams suggested, the feelings might include compassion and gratitude. Gratitude can get people to repay favors without giving much thought to the fact that that's what they're doing. And if compassion is felt more strongly for some kinds of people — people to whom we're grateful, for example — it can lead us, again with scarce consciousness of the fact, to repay kindness.

Williams's terse speculations were transmuted into a full-fledged theory by Robert Trivers. In 1971, exactly one hundred years after Darwin's allusion to reciprocal altruism appeared in The Descent of Man, Trivers published a paper titled "The Evolution of Reciprocal Altruism" in The Quarterly Review of Biology. In the paper's abstract, he wrote that "friendship, dislike, moralistic aggression, gratitude, sympathy, trust, suspicion, trustworthiness, aspects of guilt, and some forms of dishonesty and hypocrisy can be explained as important adaptations to regulate the altruistic system." Today, more than two decades after this nervy pronouncement, there is a diverse and still-growing body of evidence to support it.

GAME THEORY AND RECIPROCAL ALTRUISM

If Darwin were put on trial for not having conceived and developed the theory of reciprocal altruism, one defense would be that he came from an intellectually disadvantaged culture. Victorian England lacked two tools that together form a uniquely potent analytical medium: game theory and the computer.

Game theory was developed during the 1920s and thirties as a way to study decision making.5 It has become popular in economics and other social sciences, but it suffers from a reputation for being a bit too, well, cute. Game theorists cleverly manage to make the study of human behavior neat and clean, but they pay a high price in realism. They sometimes assume that what people pursue in life can be tidily summarized in a single psychological currency — pleasure, or happiness, or "utility"; and they assume, further, that it is pursued with unwavering rationality. Any evolutionary psychologist can tell you that these assumptions are faulty. Humans aren't calculating machines; they're animals, guided somewhat by conscious reason but also by various other forces. And long-term happiness, however appealing they may find it, is not really what they're designed to maximize.

On the other hand, humans are designed by a calculating machine, a highly rational and coolly detached process. And that machine does design them to maximize a single currency — total genetic proliferation, inclusive fitness.6

Of course, the designs don't always work. Individual organisms often fail, for various reasons, to transmit their genes. (Some are bound to fail. That is the reason evolution so assuredly happens.) In the case of human beings, moreover, the design work was done in a social environment quite different from the current environment. We live in cities and suburbs and watch TV and drink beer, all the while being pushed and pulled by feelings designed to propagate our genes in a small hunter-gatherer population. It's no wonder that people often seem not to be pursuing any particular goal — happiness, inclusive fitness, whatever — very successfully.

Game theorists, then, may want to follow a few simple rules when applying their tools to human evolution. First, the object of the game should be to maximize genetic proliferation. Second, the context of the game should mirror reality in the ancestral environment, an environment roughly like a hunter-gatherer society. Third, once the optimal strategy has been found, the experiment isn't over. The final step — the payoff — is to figure out what feelings would lead human beings to pursue that strategy. Those feelings, in theory, should be part of human nature; they should have evolved through generations and generations of the evolutionary game.

Trivers, at the suggestion of William Hamilton, employed a classic game called the prisoner's dilemma. Two partners in crime are being interrogated separately and face a hard decision. The state lacks the evidence to convict them of the grave offense they committed but does have enough evidence to convict both on a lesser charge — with, say, a one-year prison term for each. The prosecutor, wanting a harsher sentence, pressures each man individually to confess and implicate the other. He says to each: If you confess but your partner doesn't, I'll let you off scot-free and use your testimony to put him away for ten years. The flip side of this offer is a threat: If you don't confess but your partner does, you go to prison for ten years. And if you confess and it turns out your partner confesses too, I'll put you both away, but only for three years.7

If you were in the shoes of either prisoner, and weighed your options one by one, you would almost certainly decide to confess — to "cheat" on your partner. Suppose, first of all, that your partner cheats on you. Then you're better off cheating: you get three years in prison, as opposed to the ten you'd get if you stayed mum while he confessed. Now, suppose he doesn't cheat on you. You're still better off cheating: by confessing while he stays mum, you go free, whereas you'd get one year if you too kept your silence. Thus, the logic seems irresistible: betray your partner.
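The arithmetic can be checked mechanically. Here is a minimal sketch in Python (the sentence lengths are the prosecutor's numbers from above; the table itself is just illustration) verifying that confession yields the shorter term no matter what the partner does:

```python
# Prison terms in years, indexed by (your move, partner's move).
# Lower is better; the numbers are the prosecutor's offer described above.
TERMS = {
    ("silent",  "silent"):  1,   # both stay mum: one year each
    ("silent",  "confess"): 10,  # you stay mum, he confesses: ten years
    ("confess", "silent"):  0,   # you confess, he stays mum: you go free
    ("confess", "confess"): 3,   # both confess: three years each
}

# Whatever the partner does, confessing earns the shorter sentence,
# which is why betrayal "dominates" in a single, isolated game.
for partner in ("silent", "confess"):
    assert TERMS[("confess", partner)] < TERMS[("silent", partner)]
```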

Yet if both partners follow this nearly irresistible logic, and cheat on each other, they end up with three years in jail, whereas both could have gotten off with one year had they stayed mutually faithful and kept their mouths shut. If only they were allowed to communicate and reach an agreement — then cooperation could emerge, and both would be better off. But they aren't, so how can cooperation emerge?

The question roughly parallels the question of how dumb animals, which can't make promises of repayment, or, for that matter, grasp the concept of repayment, could evolve to be reciprocally altruistic. Betraying a partner in crime while he stays faithful is like an animal's benefiting from an altruistic act and never returning the favor. Mutual betrayal is like neither animal's extending a favor in the first place: though both might benefit from reciprocal altruism, neither will risk getting burned. Mutual fidelity is like a single successful round of reciprocal altruism — a favor is extended and returned. But again: Why extend the favor if there's no guarantee of return?

The match between model and reality isn't perfect.8 With reciprocal altruism there is a time lag between the altruism and its reciprocation, whereas the players in a prisoner's dilemma commit themselves concurrently. But this is a distinction without much of a difference. Because the prisoners can't communicate about their concurrent decisions, each is in the situation faced by prospectively altruistic animals: unsure whether any friendly overture will be matched. Further, if you keep pitting the same players against one another, game after game after game — an "iterated prisoner's dilemma" — each can refer to the other's past behavior in deciding how to act toward him in the future. Thus each player may reap in the future what he has sown in the past — just as with reciprocal altruism. All in all, the match between model and reality is quite good. The logic that would lead to cooperation in an iterated prisoner's dilemma is fairly precisely the logic that would lead to reciprocal altruism in nature. The essence of that logic, in both cases, is non-zero-sumness.

NON-ZERO-SUMNESS

Suppose you are a chimp that has just killed a young monkey and you give some meat to a fellow chimp that has been short of food lately. Let's say you give him five ounces, and let's call that a five-point loss for you. Now, in an important sense, the other chimp's gain is larger than your loss. He was, after all, in a period of unusual need, so the real value of food to him — in terms of its contribution to his genetic proliferation — was unusually high. Indeed, if he were human, and could think about his plight, and were forced to sign a binding contract, he might rationally agree to repay five ounces of meat with, say, six ounces of meat right after payday next Friday. So he gets six points in this exchange, even though it cost you only five.

This asymmetry is what makes the game non-zero-sum. One player's gain isn't canceled out by the other player's loss. The essential feature of non-zero-sumness is that, through cooperation, or reciprocation, both players can be better off.9 If the other chimp repays you at a time when meat is bountiful for him and scarce for you, then he sacrifices five points and you get six points. Both of you have emerged from the exchange with a net benefit of one point. A series of tennis sets, or of innings, or of golf holes eventually produces only one winner. The prisoner's dilemma, being a non-zero-sum game, is different. Both players can win if they cooperate. If caveman A and caveman B combine to hunt game that one man alone can't kill, both cavemen's families get a big meal; if there's no such cooperation, neither family does.
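The scoring in the meat-sharing example can be tallied directly. A minimal sketch, using the five-point cost and six-point benefit from the example (the function name is invented for illustration):

```python
def share_meat(giver, receiver, cost=5, benefit=6):
    """One act of sharing: the giver is out `cost` points, but because
    the receiver is in unusual need, the meat is worth `benefit` to him."""
    return giver - cost, receiver + benefit

you, friend = 0, 0
you, friend = share_meat(you, friend)   # you share when he is short of food
friend, you = share_meat(friend, you)   # he repays when meat is scarce for you
print(you, friend)                      # 1 1: each party ends up a point ahead
```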

Division of labor is a common source of non-zero-sumness: you become an expert hide-splicer and give me clothes, I carve wood and give you spears. The key here — and in the chimpanzee example above, as well as in much non-zero-sumness — is that one animal's surplus item can be another animal's rare and precious good. It happens all the time. Darwin, recalling an exchange of goods with the Fuegian Indians, wrote of "both parties laughing, wondering, gaping at each other; we pitying them, for giving us good fish and crabs for rags, &c.; they grasping at the chance of finding people so foolish as to exchange such splendid ornaments for a good supper."10

To judge by many hunter-gatherer societies, division of economic labor wasn't dramatic in the ancestral environment. The most common commodity of exchange, almost surely, was information. Knowing where a great stock of food has been found, or where someone encountered a poisonous snake, can be a matter of life or death. And knowing who is sleeping with whom, who is angry at whom, who cheated whom, and so on, can inform social maneuvering for sex and other vital resources. Indeed, the sorts of gossip that people in all cultures have an apparently inherent thirst for — tales of triumph, tragedy, bonanza, misfortune, extraordinary fidelity, wretched betrayal, and so on — match up well with the sorts of information conducive to fitness.11 Trading gossip (the phrase couldn't be more apt) is one of the main things friends do, and it may be one of the main reasons friendship exists.

Unlike food or spears or hides, information is shared without being actually surrendered, a fact that can make the exchange radically non-zero-sum.12 Of course, sometimes information is of value only if hoarded. But often that's not the case. One Darwin biographer has written that, after scientific discussions between Darwin and his friend Joseph Hooker, "each vied with the other in claiming that the benefits he had received ... far outweighed whatever return he might have been able to make."13

Non-zero-sumness is, by itself, not enough to explain the evolution of reciprocal altruism. Even in a non-zero-sum game, cooperation doesn't necessarily make sense. In the food-sharing example, though you gain one point from a single round of reciprocal altruism, you gain six points by cheating — accepting generosity and never returning it. So the lesson seems to be: if you can spend your life exploiting people, by all means do; the value of cooperation pales by comparison. Further, if you can't find people to exploit, cooperation still may not be the best strategy. If you're surrounded by people who are always trying to exploit you, then reciprocal exploitation is the way to cut your losses. Whether non-zero-sumness actually fuels the evolution of reciprocal altruism depends heavily on the prevailing social environment. The prisoner's dilemma will have to do more than simply illustrate non-zero-sumness if it is to be of much use here.

Testing theories, of course, is a general problem for evolutionary biologists. Chemists and physicists test a theory with carefully controlled experiments that either work as predicted, corroborating the theory, or don't. Sometimes evolutionary biologists can do that. As we've seen, researchers have nutritionally deprived pack rat mothers to see if they would, as predicted, then favor female offspring. But biologists can't experiment with human beings the way they do with pack rats, and they can't conduct the ultimate experiment: rewind the tape and replay evolution.

Increasingly, though, biologists can replay approximations of evolution. When Trivers laid out the theory of reciprocal altruism in 1971, computers were still exotic machines used by specialists; the personal computer didn't even exist. Though Trivers put the prisoner's dilemma to good analytical use, he didn't talk about actually animating it — creating, inside a computer, a species whose members regularly confront the dilemma and may live or die by it, and then letting natural selection take its course.

During the late 1970s, Robert Axelrod, an American political scientist, devised such a computer world and then set about populating it. Without mentioning natural selection — which wasn't, initially, his interest — he invited experts in game theory to submit a computer program embodying a strategy for the iterated prisoner's dilemma: a rule by which the program decides whether to cooperate on each encounter with another program. He then flipped the switch and let these programs mingle. The context for the competition nicely mirrored the social context of human, and prehuman, evolution. There was a fairly small society — several dozen regularly interacting individuals. Each program could "remember" whether each other program had cooperated on previous encounters, and adjust its own behavior accordingly.

After every program had had 200 encounters with every other program, Axelrod added up their scores and declared a winner. Then he held a second generation of competition after a systematic culling: each program was represented in proportion to its first-generation success; the fittest had survived. And so the game proceeded, generation after generation. If the theory of reciprocal altruism is correct, you would expect reciprocal altruism to "evolve" inside Axelrod's computer, to gradually dominate the population.

It did. The winning program, designed by the Canadian game theorist Anatol Rapoport (who had once written a book called Prisoner's Dilemma), was named TIT FOR TAT.14 TIT FOR TAT was guided by the simplest of rules — literally: its computer program was five lines long, the shortest submitted. (So if the strategies had been created by random computer mutation, rather than by design, it probably would have been among the first to appear.) TIT FOR TAT was just what its name implied. On the first encounter with any program, it would cooperate. Thereafter, it would do whatever the other program had done on the previous encounter. One good turn deserves another, as does one bad turn.
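In a modern language the rule is about as terse. A sketch in Python (the calling convention, two lists of past moves, is our own framing, not Rapoport's actual code):

```python
def tit_for_tat(my_moves, their_moves):
    # Cooperate on the first encounter; thereafter, mirror whatever
    # the other player did on the previous encounter.
    if not their_moves:
        return "cooperate"
    return their_moves[-1]
```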

The virtues of this strategy are about as simple as the strategy itself. If a program demonstrates a tendency to cooperate, TIT FOR TAT immediately strikes up a friendship, and both enjoy the fruits of cooperation. If a program shows a tendency to cheat, TIT FOR TAT cuts its losses; by withholding cooperation until that program reforms, it avoids the high costs of being a sucker. So TIT FOR TAT never gets repeatedly victimized, as indiscriminately cooperative programs do. Yet TIT FOR TAT also avoids the fate of the indiscriminately uncooperative programs that try to exploit their fellow programs: getting locked into mutually costly chains of mutual betrayal with programs that would be perfectly willing to cooperate if only you did. Of course, TIT FOR TAT generally forgoes the large one-time gains that can be had through exploitation. But strategies geared toward exploitation, whether through relentless cheating or repeated "surprise" cheating, tended to lose out as the game wore on. Programs quit being nice to them, so they were denied both the large gains of exploitation and the more moderate gains of mutual cooperation. More than the steadily mean, more than the steadily nice, and more than various "clever" programs whose elaborate rules made them hard for other programs to read, the straightforwardly conditional TIT FOR TAT was, in the long run, self-serving.
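A toy replay of the whole experiment can convey the flavor. The per-move payoffs below (3 points each for mutual cooperation, 1 each for mutual defection, 5 for exploiting a cooperator, 0 for being exploited) are the values Axelrod used; the three-strategy roster is our own stand-in for the dozens of actual entries:

```python
PAYOFF = {("cooperate", "cooperate"): 3, ("defect", "defect"): 1,
          ("defect", "cooperate"): 5, ("cooperate", "defect"): 0}

def tit_for_tat(mine, theirs):          # as sketched above
    return "cooperate" if not theirs else theirs[-1]

def always_cooperate(mine, theirs):     # an indiscriminate altruist
    return "cooperate"

def always_defect(mine, theirs):        # a steadfast meanie
    return "defect"

def play(a, b, rounds=200):
    """One iterated game of 200 encounters; returns each side's total."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Generations: each strategy's share of the next generation is
# proportional to the average score its copies earn in this one.
shares = {tit_for_tat: 1/3, always_cooperate: 1/3, always_defect: 1/3}
for generation in range(30):
    fitness = {s: sum(shares[t] * play(s, t)[0] for t in shares)
               for s in shares}
    total = sum(shares[s] * fitness[s] for s in shares)
    shares = {s: shares[s] * fitness[s] / total for s in shares}

for s in shares:
    print(s.__name__, round(shares[s], 3))
```

Run it and the exploiters prosper briefly, then fade as their victims dwindle, while TIT FOR TAT ends up with the largest share of the population: the pattern described above, in miniature.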

HOW TIT FOR TAT FEELS

TIT FOR TAT's strategy — do unto others as they've done unto you — gives it much in common with the average human being. Yet it has no human foresight. It doesn't understand the value of reciprocation. It just reciprocates. In that sense it is perhaps more like Australopithecus, our small-brained forebears.

What feelings would natural selection have instilled in an australopithecine to make it employ the clever strategy of reciprocal altruism in spite of its dim-wittedness? The answer goes beyond the simple, indiscriminate "sympathy" that Darwin stressed. True, this kind of sympathy would come in handy at first, prompting TIT FOR TAT's initial overture of goodwill. But thereafter sympathy should be dished out selectively, and supplemented by other feelings. TIT FOR TAT's reliable return of favors might emerge from a sense of gratitude and obligation. The tendency to cut off largesse for mean australopithecines could be realized via anger and dislike. And the tendency to be nice toward erstwhile meanies who have mended their ways would come from a sense of forgiveness — an eraser of suddenly counterproductive hostility. All of these feelings are found in all human cultures.

In real life, cooperation isn't a matter of black and white. You don't run into an acquaintance, try to extract useful information, and either fail or succeed. More often, the two of you swap miscellaneous data, each providing something of possible use to the other, and the contributions don't exactly balance. So the human rules for reciprocal altruism are likely to be a bit less binary than TIT FOR TAT's. If person F has been distinctly nice on several occasions, you might lower your guard and do favors without constantly monitoring F, remaining alert only to gross signs of incipient meanness, and periodically reviewing — consciously or unconsciously — the cumulative account. Similarly, if person E has been mean for months, it's probably best to write him off. The sensations that would encourage you to behave in these time-and-energy-saving fashions are, respectively, affection and trust (which entail the concept of "friend"); and hostility and mistrust (along with the concept of "enemy").
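One crude way to formalize this looser bookkeeping, offered purely as speculation rather than anything from Axelrod's tournament: keep a running account of the partner's conduct and extend goodwill as long as the balance stays above a grudge threshold.

```python
def running_account(my_moves, their_moves, credit=3):
    # Speculative, less binary cousin of TIT FOR TAT: tolerate occasional
    # lapses from a partner whose record is good overall, but stop
    # cooperating once the cumulative account tips too far into the red.
    balance = their_moves.count("cooperate") - their_moves.count("defect")
    return "cooperate" if balance > -credit else "defect"
```

A proven friend who defects once keeps your goodwill; a chronic meanie gets written off, with no need to monitor each exchange.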

Friendship, affection, trust — these are the things that, long before people signed contracts, long before they wrote down laws, held human societies together. Even today, these forces are one reason human societies vastly surpass ant colonies in size and complexity even though the degree of kinship among cooperatively interacting people is usually near zero. As you watch the kind but stern TIT FOR TAT spread through the population, you are seeing how the human species's uniquely subtle social cement could grow out of fortuitous genetic mutations.

More remarkable, perhaps, is that the fortuitous mutations thrive without "group selection." That was Williams's whole point back in 1966: altruism toward nonkin, though a critical ingredient in group cohesion, needn't have been created for the "good of the tribe," much less the "good of the species." It seems to have emerged from simple, day-to-day competition among individuals. Williams wrote in 1966: "There is theoretically no limit to the extent and complexity of group-related behavior that this factor could produce, and the immediate goal of such behavior would always be the well-being of some other individual, often genetically unrelated. Ultimately, however, this would not be an adaptation for group benefit. It would be developed by the differential survival of individuals and would be designed for the perpetuation of the genes of the individual providing the benefit to another."15

One key to this emergence of macroscopic harmony from microscopic selfishness is feedback between macro and micro. As the number of TIT FOR TAT creatures grows — that is, as the amount of social harmony grows — the fortunes of each individual TIT FOR TAT grow. The ideal neighbor for TIT FOR TAT, after all, is another TIT FOR TAT. The two settle quickly and painlessly into an enduringly fruitful relationship. Neither ever gets burned, and neither ever needs to dish out mutually costly punishment. Thus, the more social harmony, the better each TIT FOR TAT fares, and the more social harmony, and so on. Through natural selection, simple cooperation can actually feed on itself.

The person who pioneered the modern study of this sort of self-reinforcing social coherence, and also the evolutionary application of game theory, is John Maynard Smith. We've seen how he used the idea of "frequency-dependent" selection to show how two kinds of bluegill sunfish — drifters and fine, upstanding citizens — could exist in equilibrium: if the number of drifters grows relative to upstanding citizens, the drifters become less genetically prolific, and their number returns to normal. TIT FOR TAT is also subject to frequency-dependent selection, but here the dynamic works in the other direction, with feedback that is positive, not negative; the more TIT FOR TATs there are, the more successful TIT FOR TAT is. If negative feedback sometimes produces an "evolutionarily stable state" — a balance among different strategies — positive feedback can produce an "evolutionarily stable strategy": a strategy that, once it has pervaded a population, is impervious to small-scale invasion. There is no alternative strategy that, if introduced via a single mutant gene, can flourish. Axelrod, after watching TIT FOR TAT triumph and analyzing its success, concluded that it was evolutionarily stable.16

Cooperation can begin to feed on itself early in the game. If even a small chunk of the population employs TIT FOR TAT and all other creatures are steadfastly uncooperative, an expanding circle of cooperation will suffuse the population generation after generation. And the reverse isn't true. Even if several steadfast noncooperators arrive on the scene at once, they still can't subvert a population of TIT FOR TATs. Simple, conditional cooperation is more infectious than unmitigated meanness. Robert Axelrod and William Hamilton, in a jointly authored chapter of Axelrod's 1984 book The Evolution of Cooperation, wrote: "[T]he gear wheels of social evolution have a ratchet."17

Unfortunately, this ratchet doesn't kick in at the very beginning. If only one TIT FOR TAT creature enters a climate of pure meanness, it is doomed to extinction. Steadfast uncooperativeness, apparently, is itself an evolutionarily stable strategy; once it pervades a population, it is immune to invasion by a single mutant employing any other strategy, even though it is vulnerable to a small cluster of conditionally cooperative mutants.
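The asymmetry between the lone mutant and the small cluster can be read straight off the numbers. A sketch in the spirit of Axelrod and Hamilton's calculation, using the 200-round totals implied by the payoffs above (600 for two cooperators, 200 for two defectors, and 199 for TIT FOR TAT against a pure defector):

```python
TFT_VS_TFT   = 600  # 200 rounds of mutual cooperation at 3 points apiece
TFT_VS_ALLD  = 199  # exploited once, then 199 rounds of mutual defection
ALLD_VS_ALLD = 200  # 200 rounds of mutual defection at 1 point apiece

def cluster_can_invade(p):
    """p is the fraction of an invader's games played with fellow invaders.
    Natives are so common that they still meet, essentially, only natives."""
    invader = p * TFT_VS_TFT + (1 - p) * TFT_VS_ALLD
    native = ALLD_VS_ALLD
    return invader > native

print(cluster_can_invade(0.0))    # False: a lone TIT FOR TAT scores 199 < 200
print(cluster_can_invade(0.01))   # True: 1% in-cluster play yields about 203
```

With these numbers the threshold is tiny; invaders need to play only about a quarter of one percent of their games with each other for conditional cooperation to start paying better than pure meanness.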

In that sense, Axelrod's tournament gave TIT FOR TAT a head start. Though the strategy didn't at first enjoy the company of any exact clones, most of its neighbors were designed to cooperate under at least some circumstances, thus raising the value of its own good nature. Had TIT FOR TAT been tossed in with forty-nine steadfast meanies, there would have been a forty-nine-way tie for first place, and only one clear loser. However inexorable TIT FOR TAT's success appears on the computer screen, reciprocal altruism's triumph wasn't so obviously in the cards many millions of years ago, when meanness pervaded our evolutionary lineage.

How did reciprocal altruism get off the ground? If any new gene offering cooperation gets stomped into the dust, how did there ever arise the small population of reciprocal altruists needed to shift the odds in favor of cooperation?

The most appealing answer is one suggested by Hamilton and Axelrod: that kin selection gave reciprocal altruism a subtle boost. As we've seen, kin selection can favor any gene that raises the precision with which altruism flows toward relatives. Thus, a gene counseling apes to love other apes that suckled at their mother's breast — younger siblings, that is — might thrive. But what are the younger siblings supposed to do? They never see their older siblings suckle, so what cues can they go by?

One cue is altruism itself. Once genes directing altruism toward sucklers had taken hold by benefiting younger siblings, genes directing altruism toward altruists would benefit older siblings. These genes — reciprocal-altruism genes — would thus spread, at first via kin selection.

Any such imbalance of information between two relatives about their relatedness is fertile ground for a reciprocal-altruism gene. And such imbalances are quite likely to have existed in our past. Before the advent of language, aunts, uncles, and even fathers often had conspicuous cues about the identities of their younger relatives when the reverse wasn't true; so altruism would have flowed largely from older to younger relatives. That imbalance would itself have been a reliable cue for youngsters to use in steering altruism toward relatives — at least, it probably would have been more reliable than other simple cues, which is all that matters. A gene that repaid kindness with kindness could thus have spread through the extended family, and, by interbreeding, to other families, where it would thrive on the same logic.18 At some point the TIT FOR TAT strategy would be widespread enough to keep flourishing even without the aid of kin selection. The ratchet of social evolution was now forged.

Kin selection probably paved the way for reciprocal altruism genes in a second way too: by placing handy psychological agents at their disposal. Long before our ancestors were reciprocal altruists, they were capable of familial affection and generosity, of trust (in kin) and of guilt (a reminder not to mistreat kin). These and other elements of altruism were part of the ape mind, ready to be wired together in a new way. That almost surely made things easier for natural selection, which often makes thrifty use of the materials at hand.

Given these likely links between kin selection and reciprocal altruism, one can view the two phases in evolution almost as a single creative thrust, in which natural selection crafted an ever-expanding web of affection, obligation, and trust out of ruthless genetic self-interest. The irony alone would make the process worth savoring, even if this web didn't include so many of the experiences that make life worthwhile.

BUT IS IT SCIENCE?

Game theory and computer simulation are neat and fun, but how much do they really add up to? Is the theory of reciprocal altruism genuine science? Does it succeed in explaining what it aims to explain?

One answer is: Compared to what? There isn't exactly a surplus of rival theories. Within biology, the only alternatives are group-selectionist theories, which tend to face the sort of problem Darwin's group-selectionist theory faced. And within the social sciences, this subject is a giant void.

To be sure, social scientists, going back at least to the turn-of-the-century anthropologist Edward Westermarck, have recognized that reciprocal altruism is fundamental to life in all cultures. There is a whole literature on "social exchange theory," in which the everyday swapping of sometimes intangible resources — information, social support — is gauged with care.19 But because so many social scientists have resisted the very idea of an inherent human nature, reciprocation has often been seen as a cultural "norm" that just happens to be universal (presumably because distinct peoples independently discovered its utility). Few have noted that the daily life of every human society rests not just on reciprocity, but on a common foundation of feelings — sympathy, gratitude, affection, obligation, guilt, dislike, and so on. Even fewer have offered an ultimate explanation for this commonality. There must be some explanation. Does anyone have an alternative to the theory of reciprocal altruism?

The theory thus wins by default. But it doesn't win only by default. Since Trivers published his paper in 1971, the theory has been tested and so far has fared well.20

The Axelrod tournament was one test. If uncooperative strategies had prevailed over cooperative ones, or if cooperative strategies had paid off only after they made up much of the population, things would have looked worse for the theory. But conditional niceness was shown to have the upper hand over meanness, and indeed to be a nearly inexorable evolutionary force once it gains even a small foothold.

The theory has also gotten support in the natural world: evidence that reciprocal altruism can evolve without a human's abstract comprehension of its logic, so long as the animals in question are smart enough to recognize individual neighbors and record their past deeds, whether consciously or unconsciously. Williams, in 1966, noted the existence of mutually supportive and long-lasting coalitions of rhesus monkeys. And he suggested that the mutually "solicitous" behavior of porpoises might be reciprocal — a suspicion later confirmed.21

Vampire bats, not mentioned by either Trivers or Williams, also turn out to be reciprocally altruistic. Any given bat has sporadic success in its nightly forays to suck blood from cattle, horses, and other victims. Since blood is highly perishable, and bats don't have refrigerators, scarcity faces individual bats pretty often. And periodic individual scarcity, as we've seen, invites non-zero-sum logic. Sure enough, bats that return to the roost empty-handed are often favored with regurgitated blood from other bats — and they tend to return the favor in the future. Some of the sharing is, not surprisingly, between kin, but much takes place within partnerships — two or more unrelated bats that recognize each other by distinctive "contact calls" and often groom each other.22 Bat buddies.

The most vital zoological support for the evolution of reciprocal altruism in humans has come from our close relatives the chimpanzees. When Williams and Trivers first wrote about reciprocity, the social life of chimpanzees was just coming into clear view. There were few signs of how utterly reciprocal altruism permeates it. Now we know that chimpanzees share food reciprocally and form somewhat durable alliances. Friends groom each other and help each other confront or fend off enemies. They give reassuring caresses and hearty embraces. When one friend betrays another, seemingly heartfelt outrage may ensue.23

The theory of reciprocal altruism also passes a very basic, essentially aesthetic scientific test: the test of elegance, or parsimony. The simpler a theory, and the more varied and numerous the things it explains, the more "parsimonious" it is. It is hard to imagine anyone isolating a single and fairly simple evolutionary force that, like the force Williams and Trivers isolated, could plausibly account for things so diverse as sympathy, dislike, friendship, enmity, gratitude, a gnawing sense of obligation, acute sensitivity to betrayal, and so on.24

Reciprocal altruism has presumably shaped the texture not just of human emotion, but of human cognition. Leda Cosmides has shown that people are good at solving otherwise baffling logical puzzles when the puzzles are cast in the form of social exchange — in particular, when the object of the game is to figure out if someone is cheating. This suggests to Cosmides that a "cheater-detection" module is among the mental organs governing reciprocal altruism.25 No doubt others remain to be discovered.

THE MEANING OF RECIPROCAL ALTRUISM

One common reaction to the theory of reciprocal altruism is discomfort. Some people are troubled by the idea that their noblest impulses spring from their genes' wiliest ploys. This is hardly a necessary response, but for those who choose it, full immersion is probably warranted. If indeed the genetically selfish roots of sympathy and benevolence are grounds for despair, then extreme despair is in order. For, the more you ponder reciprocal altruism's finer points, the more mercenary the genes seem.

Consider again the question of sympathy — in particular, its tendency to grow in proportion to the gravity of a person's plight. Why do we feel sadder for a starving man than for a slightly hungry man? Because the human spirit is a grand thing, devoted to allaying suffering? Guess again.

Trivers addressed this question by asking why gratitude itself varies according to the plight from which the grateful are rescued. Why are you lavishly thankful for a life-saving sandwich after three days in the wilderness and moderately thankful for a free dinner that evening? His answer is simple, credible, and not too startling: gratitude, by reflecting the value of the benefit received, calibrates the repayment that's in order. Gratitude is an I.O.U., so naturally it records what's owed.

For the benefactor, the moral of the story is clear: the more desperate the plight of the beneficiary, the larger the I.O.U. Exquisitely sensitive sympathy is just highly nuanced investment advice. Our deepest compassion is our best bargain hunting. Most of us would look with contempt on an emergency-room doctor who quintupled his hourly fee for patients on the brink of death. We would call him callously exploitive. We would ask, "Don't you have any sympathy?" And if he had read his Trivers, he would say, "Yes, I have lots of it. I'm just being honest about what my sympathy is." This might dampen our moral indignation.

Speaking of moral indignation: it, like sympathy, assumes a new cast in light of reciprocal altruism. Guarding against exploitation, Trivers notes, is important. Even in the simple world of Axelrod's computer, with its discrete, binary interactions, TIT FOR TAT had to punish creatures that abused it. In the real world, where people may, in the guise of friendship, run up sizable debts and then welch on them — or may engage in outright theft — exploitation should be discouraged even more emphatically. Hence, perhaps, the fury of our moral indignation, the visceral certainty that we've been treated unfairly, that the culprit deserves punishment. The intuitively obvious idea of just deserts, the very core of the human sense of justice, is, in this view, a by-product of evolution, a simple genetic stratagem.

What's puzzling at first is the intensity that righteous indignation reaches. It can start feuds that dwarf the alleged offense, sometimes causing the death of the indignant. Why would genes counsel us to take even a slight risk of death for something as intangible as "honor"? Trivers, in reply, noted that "small inequities repeated many times over a lifetime may exact a heavy toll," thus justifying a "strong show of aggression when the cheating tendency is discovered."26

A point he didn't make, but which has since been made, is that indignation is even more valuable when publicly observed. If word of your fierce honor gets around, so that a single, bloody fistfight deters scores of neighbors from cheating you — even slightly and occasionally — then the fight was worth the risk. And in a hunter-gatherer society, where almost all behavior is public, and gossip travels fast, the effective audience for a fistfight is all-encompassing. It is notable that, even in modern industrial societies, when males kill males they know, there is usually an audience.27 This pattern seems perverse — why commit murder in front of witnesses? — except in terms of evolutionary psychology.

Trivers showed how complexly devious the real-life game of prisoner's dilemma could get, as feelings that evolved for one purpose were adapted to others. Thus, righteous indignation could become a pose that cheaters use — whether consciously or unconsciously — to escape suspicion ("How dare you impugn my integrity!"). And guilt, which may originally have had the simple role of prompting payment of overdue debts, could begin to serve a second function: prompting the preemptive confession of cheating that seems on the verge of discovery. (Ever notice how guilt does bear a certain correlation with the likelihood of getting caught?)

One hallmark of an elegant theory is its graceful comprehension of long-standing and otherwise puzzling data. In an experiment conducted in 1966, test subjects who believed they had broken an expensive machine were more inclined to volunteer for a painful experiment, but only if the damage had been discovered.28 If guilt were what idealists assume it to be — a beacon for moral guidance — its intensity wouldn't depend on whether a misdeed had been uncovered. Likewise if guilt were what group selectionists believe it to be — an incentive for reparations that are good for the group. But if guilt is, as Trivers says, just a way of keeping everyone happy with your level of reciprocation, its intensity should depend not on your misdeeds but on who knows or may soon know about them.

The same logic helps explain everyday urban life. When we pass a homeless person, we may feel uncomfortable about failing to help. But what really gets the conscience twinging is making eye contact and still failing to help. We don't seem to mind not giving nearly so much as we mind being seen not giving. (And, as for why we should care about the opinion of someone we'll never encounter again: perhaps in our ancestral environment just about everyone encountered was someone we might well encounter again.)29

The demise of "good of the group" logic shouldn't be exaggerated or misconstrued. Reciprocal altruism is classically analyzed in one-on-one situations, and almost surely arose in that form. But the evolution of sacrifice may have grown more complex with time and fostered a sense of group obligation. Consider (not too literally) a "club-forming" gene. It gives you the capacity to think of two or three other people as parts of a unified team; in their presence, you target your altruism more diffusely, making sacrifices for the club as a whole. You might, for example, take a risk in the joint pursuit of wild game and (consciously or unconsciously) expect each of them to repay you on some future expedition. But rather than expect direct repayment, you expect them to sacrifice for "the group," as you did. The other club members expect this too, and people who fail to meet expectations may have their membership terminated, either gradually and implicitly or abruptly and explicitly.

A genetic infrastructure for clubbishness, being more complex than the infrastructure for one-on-one altruism, may sound less likely. But once the one-on-one variety is entrenched, the additional evolutionary steps aren't all that forbidding. So too for subsequent steps that might permit allegiance to even larger groups. Indeed, the growing success of a growing number of small groups within a hunter-gatherer village would be a Darwinian incentive to join larger ones, and get a leg up on the competition; genetic mutations that fostered such joining could flourish. Eventually, indeed, one can imagine a capacity for loyalty and sacrifice toward a group as large as the tribes that figured in Darwin's group-selectionist theory of the moral sentiments. Yet this scenario doesn't suffer from the complications of his scenario. It doesn't involve sacrifice for anyone who doesn't ultimately reciprocate.30

Actually, reciprocal altruism of the classic one-on-one variety can, by itself, yield seemingly collectivist behavior. In a species with language, one effective and almost effortless way to reward nice people and punish mean ones is to affect their reputations accordingly. Spreading the word that someone cheated you is potent retaliation, since it leads people to withhold altruism from that person for fear of getting burned. This may help explain the evolution of the "grievance" — not just the sense of having been wronged, but the urge to publicly articulate it. People spend lots of time sharing grievances, listening to grievances, deciding whether the grievances are just, and amending their attitudes toward the accused accordingly.

Perhaps Trivers, in explaining "moral indignation" as a fuel for retaliatory aggression, was getting ahead of the game. As Martin Daly and Margo Wilson have noted, if simple aggression is your goal, a sense of moral outrage isn't necessary; sheer hostility will do fine. Presumably it is because humans evolved amid bystanders — bystanders whose opinions mattered — that a moral dimension has emerged, that grievances crystallize.

Exactly why opinions of bystanders matter is another question. Bystanders may, as Daly and Wilson put it, be imposing "collective sanctions" as part of a "social contract" (or, at least, part of a "club contract"). Or they may, as I've just suggested, simply be shunning reputed offenders out of self-interest, creating de facto social sanctions. And they may do some of both. In any event, the airing of grievances can lead to widespread reactions that function as collective sanctions, and this has come to be a vital part of moral systems. Few evolutionary psychologists would quarrel with Daly and Wilson's basic view that "Morality is the device of an animal of exceptional cognitive complexity, pursuing its interests in an exceptionally complex social universe."31

Perhaps the most legitimately dispiriting thing about reciprocal altruism is that it is a misnomer. Whereas with kin selection the "goal" of our genes is to actually help another organism, with reciprocal altruism the goal is that the organism be left under the impression that we've helped; the impression alone is enough to bring the reciprocation. The second goal always entailed the first in Axelrod's computer, and in human society it often does. But when it doesn't — when we can look nice without really being so nice, or can be profitably mean without getting caught — don't be surprised if an ugly part of human nature surfaces. Hence secret betrayals of all gradations, from the everyday to the Shakespearean. And hence the general tendency of people to burnish their moral reputations; reputation is the object of the game for this "moral" animal. And hence hypocrisy; it seems to flow from two natural forces: the tendency toward grievance — to publicize the sins of others — and the tendency to obscure our own sins.

The evolution of George Williams's 1966 musings about reciprocal aid into a compelling body of explanation is one of the great feats of twentieth-century science. It involved ingenious and distinctly modern tools of analysis, and brought momentous results. Though the theory of reciprocal altruism isn't proved in the sense that theories of physics can be proved, it rightly commands much confidence within biology, and that confidence should grow as the connection of genes to the human brain becomes clearer in the coming decades. Though the theory isn't as arcane or as mind-bending as the theories of relativity or quantum mechanics, in the end it may alter the world-view of the human species more deeply and more problematically.