The Evolution of Cooperation

Submitted by Stephen Boydstun on Mon, 2007-08-27 13:56

This review of mine appeared in Nomos magazine in spring 1985. 

The Evolution of Cooperation by Robert Axelrod (Basic Books, 1984)


Cooperation for mutual benefit is sometimes exceedingly difficult. Suppose that two strangers have agreed to an exchange that must be kept secret. They agree on the money price for the good being exchanged, and they agree on separate pickup locations for the money and the good. The purchaser of the good then calculates: If I leave the money and she leaves the good, that is good (R = reward for cooperation). But if I leave the money and she does not leave the good, that is the worst outcome (S = sucker’s loss). If I do not leave the money and she leaves the good, that is the very best (T = temptation). Finally, if I leave no money and she leaves no good (P = punishment), that is worse than trade (R), but better than sucker (S). No matter what she does, I am better off to leave no money; I risk no loss and I gain a lot if she takes the sucker’s loss. Since the seller of the good goes through the same calculations, nothing is exchanged; there is no cooperation.


The preceding situation (T>R>P>S, in terms of each participant’s own utilities) is known in formal game theory as Prisoner’s Dilemma. One way out of the dilemma is to divide the exchange (if the good is divisible) into a sequence of smaller exchanges. Some cooperation can then emerge. But it can be foreseen by the parties that the final exchange would be just the old single-shot Prisoner’s Dilemma with all the stakes scaled down, and this foresight bodes ill for the exchanges preceding the last. More cooperation is possible if the exchange can be iterated by multiplication, that is, if the parties have a present interest in repeating the exchange an indefinite number of times in the future.
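The dominance argument above can be made concrete in a short sketch. The payoff numbers here are illustrative values satisfying T > R > P > S (they match the example figures used later in this review); the function name is hypothetical.

```python
# Illustrative single-shot Prisoner's Dilemma payoffs, T > R > P > S.
T, R, P, S = 5, 3, 1, -2  # temptation, reward, punishment, sucker's loss

def payoff(my_move, her_move):
    """Return my payoff. 'C' = cooperate (leave the money/good), 'D' = defect."""
    if my_move == 'C' and her_move == 'C':
        return R
    if my_move == 'C' and her_move == 'D':
        return S
    if my_move == 'D' and her_move == 'C':
        return T
    return P

# Whatever she does, defecting pays more than cooperating:
assert payoff('D', 'C') > payoff('C', 'C')  # T > R
assert payoff('D', 'D') > payoff('C', 'D')  # P > S
```

Since the same comparison holds for the other party, mutual defection is the only equilibrium of the single-shot game, exactly as the purchaser's reasoning shows.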


The iterated Prisoner’s Dilemma has important applications in personal, business, and political relationships. It underlies the political and military relations between the two superpowers. It would be good to know what the best strategy is for a player in the iterated Prisoner’s Dilemma. Robert Axelrod has investigated this question and has reported his findings in The Evolution of Cooperation.


Axelrod solicited entries for a computer tournament to see how various strategies fare when paired against each other in an iterated Prisoner’s Dilemma. Each program specified whether to cooperate or defect (not cooperate) on the next move, having the complete history of the present match available for use in reaching its decision. The award points reflected the Prisoner’s Dilemma payoffs. For example, with (T = 5) > (R = 3) > (P = 1) > (S = -2), two opponents each choosing to cooperate in the next move would each be awarded three points. The points were totaled for each player for each match and the match scores were averaged over the number of matches to arrive at each player’s tournament score. And the winner was . . .


Actually, there were two tournaments. The second tournament was larger—62 entries, as opposed to 14 in the first—and its strategies were drawn up with full knowledge of the results from the first tournament. A program named TIT FOR TAT won both tournaments. It had the simplest prescription of all: cooperate on the first move and thereafter do whatever the opponent did on the preceding move.
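TIT FOR TAT's one-line prescription is easy to state in code. The following is a minimal sketch (not Axelrod's tournament software) of the rule and of how one iterated match is scored; the payoff values repeat the illustrative numbers above, and the helper names are this reviewer's own.

```python
def tit_for_tat(history_self, history_opp):
    """Cooperate on the first move; thereafter copy the opponent's last move."""
    return 'C' if not history_opp else history_opp[-1]

def always_defect(history_self, history_opp):
    """The totally uncooperative rule."""
    return 'D'

# (my_payoff, opponent_payoff) for each pair of moves, using T=5, R=3, P=1, S=-2.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (-2, 5),
          ('D', 'C'): (5, -2), ('D', 'D'): (1, 1)}

def play_match(strat_a, strat_b, rounds=200):
    """Run one iterated match and return the two total scores."""
    ha, hb, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = strat_a(ha, hb), strat_b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa
        score_b += pb
        ha.append(ma)
        hb.append(mb)
    return score_a, score_b
```

Against ALWAYS DEFECT, TIT FOR TAT is suckered exactly once and then settles into mutual punishment, so it scores slightly less than its opponent; against itself it cooperates on every move.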


TIT FOR TAT accumulated the most tournament points, but in no match did it receive more points than its opponent. It cannot score more than its opponent because it lets the opponent defect first and it will never defect a greater number of times than its opponent. It is one thing to do better than one’s opponent; it may be quite another to do well for oneself. High tournament scores were achieved only by strategies that were nice, that is, they were never the first to defect. The nice rules did well largely because they did well with each other and because there were enough of them to substantially raise each other’s scores.


One not-so-nice strategy, DOWNING, was especially important in determining the rank among the nice rules. DOWNING tries to understand its opponent. If the opponent does not seem responsive to what DOWNING is doing, DOWNING defects; if the opponent does seem responsive, DOWNING cooperates. Nice rules that do not retaliate are pulled down by DOWNING.


One of the lowest-ranking nice rules, FRIEDMAN, employed permanent retaliation. TIT FOR TAT does better by forgiving after only one retaliation. In fact, in the environment of the first tournament, a program that was not entered, TIT FOR TWO TATS (i.e., defect only after the opponent defects twice in a row), would have done even better than TIT FOR TAT. This is because TIT FOR TAT gets locked into unending mutual recriminations when playing against rules that are otherwise like itself but which sneak in an occasional unprovoked defection. This is an important drawback to using TIT FOR TAT in real world applications in which there is some uncertainty about what an opponent did on the previous move. Axelrod suggests using a fraction of a tit for an uncertain tat.
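The echo effect can be seen in a small simulation. The sketch below pits TIT FOR TAT and TIT FOR TWO TATS against a hypothetical rule of my own devising, here called SNEAKY, which plays TIT FOR TAT except for one unprovoked defection on its tenth move. (This is an illustration of the phenomenon the review describes, not one of the tournament entries.)

```python
def tit_for_tat(hs, ho):
    """Cooperate first; then copy the opponent's previous move."""
    return 'C' if not ho else ho[-1]

def tit_for_two_tats(hs, ho):
    """Defect only after the opponent defects twice in a row."""
    return 'D' if ho[-2:] == ['D', 'D'] else 'C'

def sneaky(hs, ho):
    """Plays TIT FOR TAT, but sneaks in one unprovoked defection on move 10."""
    if len(hs) == 9:
        return 'D'
    return 'C' if not ho else ho[-1]

def play(strat_a, strat_b, rounds=20):
    """Return the two move histories of an iterated match."""
    ha, hb = [], []
    for _ in range(rounds):
        ma, mb = strat_a(ha, hb), strat_b(hb, ha)
        ha.append(ma)
        hb.append(mb)
    return ha, hb

ha, hb = play(tit_for_tat, sneaky)
# After SNEAKY's lone defection, the two sides echo D's back and forth forever.
ha2, hb2 = play(tit_for_two_tats, sneaky)
# TIT FOR TWO TATS forgives the single defection; full cooperation resumes.
```

Run it and the first match alternates C and D indefinitely from move 10 on, while the second returns to mutual cooperation after a single defected move, which is the forgiveness advantage the review describes.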


In the second tournament TIT FOR TWO TATS was entered, but it came in twenty-fourth. TIT FOR TWO TATS was badly exploited by TESTER which defects on the first move, cooperates on the second and third, and thereafter defects on every other move until its opponent defects. One tat is far better than two tats when tit tatting with TESTER because, once its opponent defects, TESTER reforms and becomes TIT FOR TAT. TIT FOR TWO (or more) TATS is one of the few strategies which could suffer greatly by advance publication to its opponents.


TIT FOR TAT also benefits in virtue of the fact that it is a very common strategy and is relatively easy to recognize. Opponents know they are likely to encounter it and the only way to do well with it is to cooperate.


If everyone in an environment is using the same strategy and if there is no better strategy for a player to adopt in that environment, then the strategy is said to be collectively stable. Axelrod demonstrates that in order for a nice strategy to be collectively stable, it must be provoked by the very first defection. A nice strategy such as TIT FOR TAT will be collectively stable provided time preferences are sufficiently low, that is, if future cooperation is not valued steeply less than present cooperation. Furthermore, if a nice strategy cannot be destabilized by a single deviant player, it also cannot be destabilized by a cluster of deviants.
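Axelrod states the time-preference condition quantitatively: with w the weight of each later move relative to the current one (w near 1 means future cooperation is valued almost as much as present cooperation), TIT FOR TAT is collectively stable when w is at least max((T−R)/(T−P), (T−R)/(R−S)). The sketch below just evaluates that threshold with the review's illustrative payoffs.

```python
# Illustrative payoff values from the tournament example above.
T, R, P, S = 5, 3, 1, -2

def tft_stable(w):
    """True when TIT FOR TAT is collectively stable: the discount weight w
    is high enough that no lone deviant gains by defecting against a
    population of TIT FOR TAT players."""
    return w >= max((T - R) / (T - P), (T - R) / (R - S))
```

With these numbers the threshold is max(2/4, 2/5) = 0.5: a deviant profits from exploiting TIT FOR TAT only when the future is discounted steeply enough.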


The totally uncooperative strategy ALWAYS DEFECT is also collectively stable. But if deviants arrive in clusters in a world of ALWAYS DEFECT, cooperation among deviants can begin, provided the probability of a deviant interacting with another deviant is sufficiently greater than the probability of a deviant interacting with an ALWAYS DEFECT. The requisite size of a cluster of deviants is smallest for those strategies that will cooperate even if the present opponent has not already cooperated and, once it cooperates, will never again cooperate with ALWAYS DEFECT, but will always cooperate with its own kind. TIT FOR TAT is such a strategy. It can enter a world of meanies very efficiently and entrench itself there quite securely.
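The cluster calculation can be sketched with Axelrod's discounted-payoff setup. A TIT FOR TAT newcomer suckered by ALWAYS DEFECT gets S once and P thereafter; among its own kind it gets R forever; a native among natives gets P forever. The minimum fraction p of newcomer-to-newcomer interactions follows by comparing expected scores. The payoffs below are the review's illustrative numbers, and the discount weight w = 0.9 is an assumed value for the example.

```python
def min_cluster_fraction(T, R, P, S, w):
    """Smallest probability p of a TIT FOR TAT newcomer meeting another
    newcomer such that the newcomers outscore the ALWAYS DEFECT natives."""
    v_tt = R / (1 - w)              # TFT vs TFT: mutual cooperation forever
    v_td = S + w * P / (1 - w)      # TFT vs ALL-D: suckered once, punished after
    v_dd = P / (1 - w)              # native ALL-D among its own kind
    # Require: p * v_tt + (1 - p) * v_td > v_dd, and solve for p.
    return (v_dd - v_td) / (v_tt - v_td)

p = min_cluster_fraction(T=5, R=3, P=1, S=-2, w=0.9)
```

With these numbers p works out to 3/23, about 13 percent: even a small cluster of TIT FOR TAT players, mostly meeting the hostile natives, can gain a foothold in a world of meanies.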


Our world generally offers a little more room for maneuver than did the computer tournaments. As noted by Axelrod, a significant portion of the human brain is devoted to the task of recognizing the individual human face. However, when beginning an interaction with a stranger, immediately observable indications of sex, age, race, wealth, education, and so forth are used as cues to individual character on the expectation that the stranger will behave like others who share the same characteristics. Using his theoretical results, Axelrod is able to explain many of the features of social labels, stereotypes, and status hierarchies. The collective stability of TIT FOR TAT underlies the cruel fact that stereotypes can be stable, even when they are not based on objective fact.


Axelrod’s most detailed historical application is the case of the unauthorized ceasefires that occurred in so many locations in the trench warfare of World War I. The iterated Prisoner’s Dilemma was there. These spontaneous little ceasefires were quite dependable and proceeded purely by tacit agreement. They endured until the top brass finally destroyed the conditions necessary for cooperation. Also cited is the case of a certain folkway that emerged in the early years of the U.S. Senate. It involves helping out a colleague and getting repaid in kind. Some readers will doubt whether this sort of cooperation benefits the public.


Social Conventions

Stephen Boydstun

Social Conventions: From Language to Law

Andrei Marmor (Princeton 2009)

Games and Language

Stephen Boydstun

At the Pacific Division Meeting of the American Philosophical Association, there will be a symposium on Evolutionary Game Theoretic Explanations of the Emergence of Language.

The meeting will be held in Vancouver at the Westin Bayshore. This session will be on April 9th at 9:00 a.m.–Noon.


J. McKenzie Alexander (also)

Jeffrey Barrett

Simon Huttegger

Brian Skyrms (also)

Kevin Zollman


The session of the Ayn Rand Society will be that evening.

Further Related Work for Games

Stephen Boydstun

Evolution of the Social Contract

Brian Skyrms (Cambridge 1996)


The Stag Hunt and the Evolution of Social Structure

Brian Skyrms (Cambridge 2004)


In Philosophy of Science:

“Stability and Explanatory Significance of Some Simple Evolutionary Models”

Brian Skyrms (1996) (67:94–113)

“Evolution and the Explanation of Meaning”

Simon Huttegger (2007) (74:1–27)


See also my “Rights and Game Strategies” A, B, C and the references cited therein.


If one cooperates

Leonid

If one cooperates with a rational person who understands the perils of fraud, he can trust him. If one cooperates with an irrational person... well, one better not. The outcome will always be S, and no computer will help him. "In cooperation of good with evil only evil can win." Ayn Rand.

Human Nature and Game Theory

Stephen Boydstun

Check out for free this super-fine book by Tom Siegfried, the new Editor in Chief of Science News.


A Beautiful Math: John Nash, Game Theory, and the Modern Quest for a Code of Nature


2007 Nobel - Economics

Stephen Boydstun

2007 Nobel Memorial Prize in Economics


Mechanism Design Theory


Leonid Hurwicz


Eric Maskin


Roger Myerson

Prof. Myerson is the author of Game Theory: Analysis of Conflict (1991) and Probability Models for Economic Decisions (2005). He has published numerous articles in Econometrica, Mathematics of Operations Research, and International Journal of Game Theory.

Nozick 1993

Stephen Boydstun

Nozick, Robert. 1993. The Nature of Rationality. Princeton.

See the section “Prisoner’s Dilemma” (pages 50–59) for an original analysis of rational decision in PD situations under various mixes of (i) expected utility principles according to causal decision theory, (ii) expected utility principles according to evidential decision theory, and (iii) a symbolic utility factor of the alternative acts.

Further Developments

Stephen Boydstun

Further Works of Robert Axelrod


"Models of Cooperation Based on the Prisoner's Dilemma and the Snowdrift Game"

Michael Doebeli, University of British Columbia


"Cheating Viruses and Game Theory"

Paul Turner, Yale University


Evolutionary Games and Population Dynamics

Josef Hofbauer and Karl Sigmund (Cambridge University Press, 1998)


Evolutionary Dynamics: Exploring the Equations of Life

Martin A. Nowak (Harvard University Press, 2006)
