by Birgir T. Runolfsson Solvason


    Types of Explanations
    Conflict vs. Coordination
    The Evolution of Cooperation
    Action Interest vs. Constitutional Interest
    Trust-rules vs. Solidarity-rules
    Second-order Clustering
    The State of Nature


    Under what conditions will cooperation emerge in a world of egoists without central authority? This question has intrigued people for a long time and for a good reason. We all know that people are not angels, and that they tend to look after themselves and their own first. Yet we also know that cooperation does occur and that our civilization is based upon it. But, in situations where each individual has an incentive to be selfish, how can cooperation ever develop? (Axelrod 1984:3)

    That cooperation is beneficial and even necessary to a prosperous society is well known. That cooperation, for the most part at least, exists in modern societies is also known. People trade peacefully with each other and even join common causes and contribute to common projects, both within and across societies. Although this may seem natural in most respects, it is at the same time curious.

    It is fully understandable that people can get together and cooperate when there are obvious benefits to all parties. In trade, for example, people give up what they value less for something they value more. When these benefits are less clear, whether to the parties themselves or to outside observers, cooperation becomes harder to understand. In other situations, benefits are such that so long as some contribute (cooperate) it remains profitable for others not to contribute, as the latter may still enjoy the same benefits as those who contributed (the public goods problem).

    This is not to say that cooperation takes place in a vacuum. Rather, there are institutions that encourage or enforce cooperative behaviour, including property rights, law, money, and other market and state institutions. Observing the rise and, sometimes, the decline of such institutions, we could say that human history is the history of how cooperation emerged or failed to emerge, how it became stable or unstable.

    Human cooperation has always attracted scholarly attention. Various theories have been set forth on how social order emerges and collapses, and historical studies have tried to determine the factors that contribute to the formation of an orderly society.

    This chapter will outline and explain the theoretical framework that will be used to explain the emergence of the Icelandic Commonwealth. Before presenting my theory, a brief discussion of theoretical constructions will be offered.


    Types of Explanations

    In explaining how social order comes about, a theorist can distinguish (at least) two types of theories. The first is a theory of created order, in which human beings deliberately set out to construct a social order. The second is a theory of spontaneous order, in which people "accidentally", through their self-motivated behaviour, "come up" with a social order.

    The first theoretical type typically tells a story of a ruler, a king or other sovereign, who decided on his own or in conjunction with others to establish a form of social order based on some form of rules, and in which some organization has the task of enforcing those rules. A good example of such a construction would be the story of the Founding Fathers and their creation of the United States of America.

    The second theoretical type would typically not postulate such a creator, but rather show how through some change in the behaviour pattern of individuals the order arose "accidentally" or unintentionally. This sort of theorizing has been put to its best use in describing the workings of a market economy (see Hayek 1976; Horwitz 1989) and its institutions (see Menger 1981; 1984; 1985; Vanberg 1988).

    The spontaneous order or "invisible-hand" approach is unique in that it starts from an original situation where the phenomenon to be explained does not exist and ends with a situation where it does exist, although no one aimed at this conclusion.1 In other words, the phenomenon was an unintended consequence of the behaviour of the individuals involved. This approach has often been equated with the Scottish Moral Philosophers and with some members of the Austrian School, notably Menger and Hayek (see, for example, Barry 1982; Vanberg 1986; 1988; Vanberg & Buchanan 1988). Menger's most famous use of this approach, or as he called it, the "organic" approach, is in his explanation of the evolution of money (for example Menger 1985). Hayek has used it to explain the evolution of the rules of conduct, or cultural evolution (for example Hayek 1967b; 1967c).

    This type of explanation proceeds in steps:

    Step 1: An 'original situation' is described in which the institution (i.e. the behavioural pattern) that is to be explained does not exist.

    Step 2: The ordinary behaviour is described that, under the stated conditions, individuals will typically exhibit in pursuit of their own interest.

    Step 3: It is shown that adopting a particular kind of behaviour would allow the individuals concerned to better realize their interests.

    Step 4: It is shown to be plausible to assume that, sooner or later, some innovative individual(s) will "discover" this particular behaviour and its advantageous consequences.

    Step 5: It is shown that, once the initial discovery has been made, other individuals are likely to notice the greater success of the 'pioneers' and they will tend to imitate their behaviour.

    Step 6: It is shown that as the behaviour spreads out and becomes common social practice it will result in the institution (that is: the socially uniform pattern of behaviour) that is to be explained.2
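    The six steps can be illustrated with a minimal simulation sketch of Menger's money example. The three goods, their "saleability" figures, and the imitation rule below are invented for illustration; the sketch only shows how a uniform practice can emerge unintended from self-interested imitation.

```python
import random

random.seed(1)

# Invented "saleability" weights for three candidate exchange goods: how
# readily a random trading partner accepts each good.
SALEABILITY = {"grain": 0.3, "cloth": 0.5, "silver": 0.9}

# Steps 1-2: the original situation -- no common money exists; each agent
# happens to hold some intermediate good for trading.
agents = [random.choice(list(SALEABILITY)) for _ in range(200)]

# Steps 3-5: agents who notice that another's good trades more easily
# imitate that choice (the "pioneers" are copied).
for period in range(50):
    for i in range(len(agents)):
        j = random.randrange(len(agents))
        if SALEABILITY[agents[j]] > SALEABILITY[agents[i]]:
            agents[i] = agents[j]

# Step 6: the practice becomes a uniform social pattern -- one good has
# emerged as "money", though no agent aimed at that outcome.
shares = {g: agents.count(g) / len(agents) for g in SALEABILITY}
print(shares)
```

Run repeatedly with different seeds, the population converges on the most saleable good without any agent intending to "create money".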

    Although the "unintended outcome" is a key to this sort of theorizing, this feature by itself is not enough to distinguish the theory from a "constructivist" theory. The latter type of theory can also describe unintended outcomes. Rather the difference is that in an invisible-hand explanation there is no intention that adoption of certain behaviour have a particular overall result, while in the constructivist explanation there is this intention.

    The purpose of the discussion above is to place the theory presented below in the invisible-hand category rather than the constructivist one. I do not claim that this theory is fully consistent with the procedural steps of the invisible-hand explanation as detailed above, but, rather, that my theory is more consistent with the spontaneous order approach than the constructivist one.

    There are basically two reasons to prefer a spontaneous order explanation. First, the spontaneous order theory can usually explain instances of constructivist order, while the constructivist theory cannot explain spontaneous orders. A spontaneous order theory can just as easily and convincingly explain the founding of America as can a constructivist theory. A constructivist theory, on the other hand, cannot as easily (and certainly not as convincingly) explain the workings of the market order and its institutions.

    Secondly, the historical case that I analyze here was not chronicled by contemporary historians. The first histories of the beginning of the Commonwealth were written about 200 years later and are therefore not trustworthy records of all the details of the formation of the order. I will not reject all constructivist elements of this history. My theory will not be fully consistent with a spontaneous order or invisible-hand theory, and I propose to call it a "decentralized order" theory instead. As will become evident below, my theory subscribes basically to the invisible-hand form and yet allows some elements of intention on the part of some members of the population.3


    Conflict vs. Coordination

    In explaining how cooperation arises from a state of nature it is necessary to analyze the types of institutions that are required for cooperation. We typically refer to certain institutions that either encourage or enforce cooperative behaviour. These institutions include property rights, language, money, and law.

    These institutions are not all alike. At first glance, we might try to separate market institutions and state institutions, although there seems to be no clear line between the two. Another distinction would separate institutions that create benefits for a person only if he participates and institutions that generate benefits for a person whether he participates or not. Game-theory clarifies this distinction, contrasting coordination games with conflict games. A coordination game generates the greatest benefits to those who cooperate. Conflict games, such as prisoner's dilemmas, generate the greatest benefits to those who defect, or fail to cooperate.
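    The contrast between the two game types can be made concrete with toy payoff matrices (the particular numbers below are invented for illustration):

```python
# Entry (my_move, other_move) gives *my* payoff; 1 = cooperate/conform,
# 0 = defect/deviate.

# Coordination game (e.g. adopting the same money): conforming when the
# other conforms is the best reply, so there is no gain from deviating.
coordination = {(1, 1): 2, (1, 0): 0, (0, 1): 0, (0, 0): 1}

# Prisoner's dilemma (conflict game): defecting is the best reply to
# cooperation -- the free-rider's temptation.
pd = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

def best_reply(game, other_move):
    """Return the move that maximizes my payoff given the other's move."""
    return max((0, 1), key=lambda my: game[(my, other_move)])

# In the coordination game, cooperation is self-enforcing...
print(best_reply(coordination, 1))  # 1 -- conform when others conform
# ...while in the PD the greatest benefit goes to the defector:
print(best_reply(pd, 1))            # 0 -- defect when others cooperate
```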

    The institution of money, for example, corresponds to a coordination game. Only by using the same commodity as money as others are using can an individual benefit from the institution. Carl Menger put it this way:

    As each economizing individual becomes increasingly more aware of his economic interest, he is led by this interest, without any agreement, without legislative compulsion, and even without regard to the public interest, to give his commodities in exchange for other, more saleable, commodities, even if he does not need them for any immediate consumption purpose. With economic progress, therefore, we can everywhere observe the phenomenon of a certain number of goods, especially those that are most easily saleable at a given time and place, becoming, under the influence of custom, acceptable to everyone in trade, and thus capable of being given in exchange for any other commodity. These goods were called "Geld" by our ancestors, a term derived from "gelten" which means to compensate or pay. Hence the term "Geld" in our language designates the means of payment as such. (Menger 1981:260)

    Using the more marketable commodity that others use expands each individual's choices. Not using the commonly accepted money reduces the number of choices.4

    In contrast, public goods, such as law, correspond to a prisoner's dilemma game (PD-game). With all other people adhering to the rule (cooperating), a single person does best by not adhering to the rule. In some sense, PD-games are like public goods, sharing the latter's free-rider problem. It may seem that state provision of the benefits in question would be the only solution to the problem. Recently, though, there has been a new interest in solving this problem with self-enforcing rules.

    Axelrod (1984) shares this interest (others are Ullman-Margalit 1977; Hardin 1982; Sugden 1986; Taylor 1987).


    The Evolution of Cooperation

    The essence of Axelrod's contribution is the notion of recurrent dealings (with a low discount rate) and reciprocity. If some individuals have recurrent interactions, then by adopting a strategy of reciprocity they can modify each other's behaviour. In game-theory terms this means that cooperation will be rewarded with cooperation, and defection retaliated against by defection. In a one-shot PD-game this of course does not work, because reward and punishment cannot both be given by a player in the same game.

    If A cooperates and B defects in a PD game, A cannot punish B unless they play more than one game. If there are recurrent games between A and B, A could defect to punish B in the next game after B's defection. In a way, A's behaviour could be explained by his learning; A now knows B.
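    A small sketch shows why defection pays in a single game but not against a reciprocator in recurrent games. The payoff numbers (T, R, P, S) follow the standard PD ordering T > R > P > S but are otherwise illustrative:

```python
# Conventional illustrative PD payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0

def payoff(my, other):
    """My payoff in one PD game; 1 = cooperate, 0 = defect."""
    return {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}[(my, other)]

def total_against_reciprocator(my_moves):
    """A's total payoff against B, where B cooperates first and then
    simply repeats A's previous move (reciprocity)."""
    total, b_move = 0, 1
    for my in my_moves:
        total += payoff(my, b_move)
        b_move = my            # B reciprocates in the next game
    return total

# In a single game defection pays (5 > 3)...
print(payoff(0, 1), payoff(1, 1))                    # 5 3
# ...but over five recurrent games with a reciprocator, B's retaliation
# makes steady cooperation the better policy:
print(total_against_reciprocator([1, 1, 1, 1, 1]))   # 15
print(total_against_reciprocator([0, 0, 0, 0, 0]))   # 9
```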

    Axelrod ran a computer tournament in which competition among different strategies was simulated. The strategy that came out on top in the tournament was TIT FOR TAT, a strategy in which the hypothetical player initiates a cooperative move and then reciprocates its opponent's move on the following turn. The results from Axelrod's study suggest that cooperation can evolve without a central enforcement agency.5

    The main results of the Cooperation Theory are encouraging. They show that cooperation can get started by even a small cluster of individuals who are prepared to reciprocate cooperation, even in a world where no one else will cooperate. The analysis also shows that the two key requisites for cooperation to thrive are that the cooperation be based on reciprocity, and that the shadow of the future is important enough to make this reciprocity stable. (Axelrod 1984:173)

    All that is needed is for two people to start cooperating, and cooperation will spread to others.6  However, if defectors randomly interact with cooperators, there will be a limit to how far cooperation spreads. This is the problem of large numbers.7  As long as the group is small, there will be no opportunity for a defector to interact at random with the other members of the group. But after the group has grown to a certain point, the opportunity for defection presents itself (Vanberg and Buchanan 1988). It therefore seems that small groups, or clusters, would predominate instead of a large group. A different possibility for a large group is the creation of controlling institutions such as a central enforcement agency. But my purpose is to see if cooperation can emerge and survive without such institutions. Again, Axelrod's results suggest that reciprocity can serve as a type of enforcement, a possibility we recognize in human interaction.
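    Axelrod's clustering result can be sketched numerically. The payoffs, game length, and cluster proportion below are invented for illustration; the point is only that a small cluster of reciprocators can outscore a surrounding population of defectors:

```python
# Illustrative PD payoffs with the standard ordering T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def score(a, b, rounds=10):
    """Total payoff to strategy a over `rounds` repeated games against b.
    Strategies: 'TFT' (reciprocate, opening cooperatively) or 'ALLD'."""
    pay = {(1, 1): R, (1, 0): S, (0, 1): T, (0, 0): P}
    a_prev = b_prev = 1                   # TFT treats round 0 as cooperation
    total = 0
    for _ in range(rounds):
        am = b_prev if a == 'TFT' else 0  # TFT copies opponent's last move
        bm = a_prev if b == 'TFT' else 0
        total += pay[(am, bm)]
        a_prev, b_prev = am, bm
    return total

def avg_score(strat, p_cluster):
    """Average payoff when a fraction p_cluster of one's interactions are
    with TFT cluster members and the rest with unconditional defectors."""
    return p_cluster * score(strat, 'TFT') + (1 - p_cluster) * score(strat, 'ALLD')

# Even if only 20% of interactions occur within the reciprocating cluster,
# its members outscore the surrounding defectors:
print(avg_score('TFT', 0.2), avg_score('ALLD', 0.2))
```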


    Action Interest vs. Constitutional Interest

    To throw better light on the issue let us look at a recent analysis by Vanberg and Buchanan (1988). They use the notions of "action interest" and "constitutional interest" to refer to what have been called individual interest and group interest, or private and common interest. This terminology is necessary in order to clarify what is at issue. The two interests do not necessarily conflict, but, rather, allow one to differentiate between two levels of choice. The constitutional interest is what an individual considers his best interest as a member of a group in general, while the action interest is what the individual considers his best interest in a particular situation. The constitutional interest determines an individual's choice of a rule or constitution for the whole group. The action interest determines whether an individual will actually adhere to the rule in a particular situation.

    The problem hindering the emergence of cooperation is that the two interests may not converge. In coordination problems they do converge, as there are no incentives to drive them apart. Again, in the case of money, it is only rational to use money if others use it, and if they do, then the best choice in a particular situation is to use it. To refrain from using money would leave the individual worse off. For PD-type problems, however, there is a problem of convergence. An individual may prefer a rule for the whole group, such as a rule intended to provide a public good, but then in a particular situation he may be better off if he consumes the good without paying his share. We have seen that if reciprocity is practised, additional incentives are established to make the two interests converge. As pointed out, though, this reciprocity should only be expected to emerge, or be effective, in small groups or small-number settings, where recurrent dealings are expected.8


    Trust-rules vs. Solidarity-rules

    Vanberg and Buchanan (1988) point out, however, that not only are there two types of game problems (two broad groups of games), the coordination and the PD (or conflict) type, but PD-games actually include two different sets of rules. These two PD-type rules, as the authors distinguish them, are trust-rules, like "respect property," and solidarity-rules, like "do not litter in public places," "respect waiting lines," "do not drive recklessly," and "pay your fair share in joint endeavors." The essential claim supporting this distinction is that the latter are not targeted to particular individuals or groups as are the former. Or as Vanberg and Buchanan put it:

      By his compliance with or transgression of trust-rules a person selectively affects specific other persons. Because compliance with or non-compliance with trust-rules is, in this sense, "targeted" the possibility of forming cooperative clusters exists: Any subset of actors, down to any two individuals, can realize cooperative gains by following these rules in their dealings with each other. Adoption of and compliance with trust-rules offers differential benefits to any group or cluster, independently of the behaviour of other persons in the more inclusive community or population. (1988:18)

      In contrast to trust-rules, compliance with or violation of solidarity rules cannot be selectively targeted at particular other persons, at least not within some "technically" - i.e. by the nature of the case - defined group. There is always a predefined group all members of which are affected by their respective rule related behaviour. (18-19)

      For solidarity rules it is not true, as it is for trust-rules, that any two individuals can start to form a "cooperative cluster" that would allow them to realize differential gains from which their unconstrained fellow-men are excluded. Solidarity-rules require adherence by some inclusively defined persons before providing differential mutual benefits to those who adopt compliance behaviour. (19)

    In other words, compliance with trust-rules provides benefits wholly to the participating actors and only to them. By contrast, compliance with solidarity-rules generates benefits both to participating actors and non-participating ones. The trust-rules therefore become self-enforcing with the additional incentive of reciprocity, but this incentive is not enough to make the solidarity rules self-enforcing.

    It was suggested above that as far as coordination rules are concerned, there is no "large-number" problem. They are totally self-enforcing, and there are no incentives for defection. In contrast, there was a "large-number" problem with PD-type rules. When we separate the trust-rules from the solidarity-rules, we see that large numbers are less of a problem for trust-rules than for solidarity-rules.

    Compliance with trust-rules confers benefits only upon participants, while solidarity-rules confer benefits upon others as well. Therefore, trust-rule groups can grow as large as the notion of reciprocity allows. In other words, an individual only has to discriminate between cooperators and defectors, and he can use his memory of previous interactions to accomplish this. Further, there is an incentive for the individual to cooperate, since others have the capacity to remember his previous behaviour. In trust-rule situations, he would not want to be defected against since that makes him miss out on the benefits. In contrast, for solidarity-rules he does not have this incentive, because these rules are like genuine non-excludable public goods. He benefits whether he cooperates or not, and is better off by defecting if the cooperative choice is costly.
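    The contrast can be sketched with a toy payoff calculation; the benefit, cost, and group-size figures below are invented for the sketch:

```python
# benefit contributed per complier, cost of complying, group size
b, c, n = 3.0, 1.0, 10

def trust_rule_payoff(complies):
    """Trust-rule: benefits stay within the complying cluster, so a
    non-complier is simply excluded and gains nothing."""
    return b - c if complies else 0.0

def solidarity_rule_payoff(complies, k):
    """Solidarity-rule: with k compliers the benefit is a non-excludable
    public good enjoyed by all n members; compliers alone bear the cost."""
    shared = b * k / n
    return shared - c if complies else shared

# Complying with a trust-rule pays no matter what the rest of the group does...
print(trust_rule_payoff(True), trust_rule_payoff(False))   # 2.0 0.0
# ...but given any number of compliers with a solidarity-rule, a defector
# does exactly c better than a complier:
print(solidarity_rule_payoff(True, 6), solidarity_rule_payoff(False, 6))
```

With benefits excludable (trust-rule), reciprocity alone makes compliance self-enforcing; with benefits non-excludable (solidarity-rule), defection dominates whenever compliance is costly.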

    A partial solution to ensure compliance with solidarity-rules (or "norms," as Axelrod calls them) is offered by Axelrod (1986). His suggestion is that a metanorm be adopted to punish not only defectors, but also those who fail to punish defectors. A cooperating individual would himself punish not only those who defect, but also cooperators who do not punish defectors. But this solution requires more knowledge than does the solution for trust-rules. For metanorm enforcement, the individual has to have knowledge not only of defectors but of those who fail to punish defectors. Acquiring the knowledge needed to enforce the metanorm therefore requires a smaller group than trust-rules can sustain. Reciprocity in recurrent interactions thus only allows for small cooperative clusters, which essentially means that in such situations one winds up with many small cooperative groups or clusters. Will cooperation emerge among these groups, and if so, how?
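    A toy calculation, with invented punishment and enforcement costs, illustrates why the metanorm matters:

```python
# Invented costs for the sketch of Axelrod's (1986) metanorm argument.
P_cost = 9.0      # cost of being punished
E_cost = 2.0      # cost of administering a punishment
temptation = 3.0  # gain from violating a solidarity-rule

def defect_payoff(norm_enforced):
    """Net gain from defecting, depending on whether violators get punished."""
    return temptation - (P_cost if norm_enforced else 0.0)

def punish_payoff(metanorm_enforced):
    """(payoff of punishing a defector, payoff of looking the other way).
    Under a metanorm, failing to punish is itself punished."""
    if_punish = -E_cost
    if_shirk = -P_cost if metanorm_enforced else 0.0
    return if_punish, if_shirk

# Without a metanorm, punishing is a cost nobody wants to bear,
# so enforcement lapses and defection pays:
print(punish_payoff(False), defect_payoff(False))   # (-2.0, 0.0) 3.0
# With the metanorm, punishing becomes the lesser cost, enforcement
# holds, and defection no longer pays:
print(punish_payoff(True), defect_payoff(True))     # (-2.0, -9.0) -6.0
```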


    Second-order Clustering

    If two individuals can cooperate and become better off, why not two groups? Vanberg and Buchanan (1988) suggest that such "second-order" clustering provides a solution for problems of intergroup cooperation. If there are recurrent dealings between groups or between individuals from the different groups then it seems that a strategy of reciprocity supplies a solution here, as in the original case. Groups could have not only first-order boundaries but also different second-order boundaries. At the first-order level the group is bounded by the "optimal number" for cooperative clusters, the optimal number being determined by the range of the solidarity-rules. At the second level a different group emerges. This second-order group is different in that it incorporates members from more than one group. Two individuals from two different groups begin cooperating: this second-order cooperation spreads. It can spread, as on the first-order level, through a strategy of joining or imitation. If this secondary clustering works, then nothing prevents third-order clustering also.9   In this way a hierarchy of groups could emerge without any central enforcement agency.

    Another way intergroup cooperation might emerge would be through intergroup sponsorship. A group guarantees the cooperative behaviour of the group members in interactions with members of other groups. Such sponsorship could be imitated by other groups if the original group was successful. Either of these could be described as a self-enforcing federal structure.10


    The State of Nature

    All the theorists commenting on the "cooperation problem" attempt to solve the problem of the state of nature, or, as it is often called, the Hobbesian problem of social order.11

    There are basically two ways to describe a state of nature. First, there is the paradigm of kin groups or tribes. This paradigm generally describes order in primitive societies. The claim is that kin groups and tribal clans can be orderly because the order is based on community, a sense of belonging to the group, or a belief in witchcraft or supernatural sanctions (Taylor 1982; 1984). On this view, there is some limit to group size and a limit to intergroup cooperation. Intergroup cooperation is based mainly on marriages or fostering, and the stability of cooperation is dependent on how a person changing groups views the relevant communities. Will the person view the old group as part of his community or not? If he does, some intergroup stability can be realized.

    The other paradigm is that of a state or central enforcement agency. This is the Hobbesian view. It supposes that individuals will not follow rules voluntarily, so a state must impose and enforce rule-following behaviour. The claim is that individuals in the state of nature maximize their own utility without consideration for others. This view may not deny that kin groups can be orderly, but it denies that anything beyond that will be (Taylor 1987).

    As the cooperation theory that I have put forth above shows, a solution to the problem posed by Hobbes is possible without a central enforcement agency. Cooperation could emerge between kin groups by the same mechanism. It must be remembered, however, that an original state of nature where all are fighting all is unlikely ever to have existed. The theory outlined in this chapter will be tested for its relevance against the historical case that most resembles a state of nature: the settlement and the rise of social order in medieval Iceland.


    1 For a more detailed discussion on this approach see Ullman-Margalit 1978; Nozick 1974; Vanberg 1988.
    2 Quoted from Vanberg (1988:9-10) and shortened somewhat. Vanberg, in his presentation, is showing Menger's invisible-hand explanation and his theory of the evolution of money. The references to Menger and money were omitted from the quote.
    3 My theory therefore does not conform to the invisible-hand explanation as detailed by Ullman-Margalit 1978; Vanberg 1988. On the other hand, it is fully consistent with Nozick's (1974) account of what an "invisible-hand" explanation should be like.
    4 To avoid misunderstanding: it is not the marketability, as such, of the commodity money that is being stressed here, but rather that people are using it and thereby establish it as a convention.
    5 Michael Taylor (1987), through an analysis of two-person PD-games, reaches the same conclusion: "Axelrod comes to the same general conclusions we arrive at here (and which was at the heart of the analysis of the Prisoners' Dilemma supergame in Anarchy and Cooperation), namely that `the two key requisites for cooperation to thrive are that the cooperation be based on reciprocity, and that the shadow of the future is important enough to make this reciprocity stable'." (p.70)
    6 A full list of what assumptions Axelrod (1984) claims he does and does not make can be found in the following: "[L]ittle had to be assumed about the individuals or the social setting to establish these results. The individuals do not have to be rational: the evolutionary process allows the successful strategies to thrive, even if the players do not know why or how. Nor do the players have to exchange messages or commitments: they do not need words, because their deeds speak for them. Likewise, there is no need to assume trust between the players: the use of reciprocity can be enough to make defection unproductive. Altruism is not needed: successful strategies can elicit cooperation even from an egoist. Finally, no central authority is needed: cooperation based on reciprocity can be self-policing. The emergence, growth, and maintenance of cooperation do require some assumptions about the individuals and the social setting. They require an individual to be able to recognize another player who has been dealt with before. They also require that one's prior history of interactions with this player can be remembered, so that a player can be responsive.... For cooperation to prove stable, the future must have a sufficiently large shadow.... It requires that the players have a large enough chance of meeting again and that they do not discount the significance of their next meeting too greatly.... Finally, the evolution of cooperation requires that successful strategies can thrive and that there be a source of variation in the strategies which are being used. These mechanisms can be classical Darwinian survival of the fittest and the mutation, but they can also involve more deliberate processes such as imitation of successful patterns of behavior and intelligently designed new strategies.... In order for cooperation to get started in the first place, one more condition is required. The problem is that in a world of unconditional defection, a single individual who offers cooperation cannot prosper unless others are around who will reciprocate. On the other hand, cooperation can emerge from small clusters of discriminating individuals as long as these individuals have even a small proportion of their interactions with each other. So there must be some clustering of individuals who use strategies with two properties: the strategies will be the first to cooperate, and they will discriminate between those who respond to the cooperation and those who do not." (1984:173-5) The following caution from Rapoport should be kept in mind: "The most instructive lesson to be drawn from the iterated Prisoner's Dilemma strategy contests and from the simulated "ecologies" associated with them concerns not what will happen under given conditions (the usual instruction expected from an experiment), not even what is likely to happen, but only what can logically happen" (1988:400).
    7 Or as Taylor puts it: Axelrod's "analysis hinges on the assumption that an individual will play out the whole of an infinite supergame with one other player, or each player in turn, rather than, say, ranging through the population, or part of it, playing against different players at different times in the supergame (possibly playing each of them a random number of times)." (1987:71) And Taylor continues: "[I]t is pretty clear that Cooperation amongst a relatively large number of players is `less likely' to occur than Cooperation amongst a small number. For a start, the more players there are, the greater is the number of conditions that have to be satisfied - the conditions specifying that the right kinds of conditionally Cooperative strategies are present and those specifying the inequalities that all the Cooperators' discount rates must satisfy. But the main reason for this new `size' effect is that Cooperation can be sustained only if conditional Cooperators are present and conditional Cooperators must be able to monitor the behavior of others. Clearly, such monitoring becomes increasingly difficult as the size of the group increases." (1987:104-5) But, Taylor does state that: "Nevertheless, it has been shown that under certain conditions the Cooperation of some or all of the players could emerge in the supergame no matter how many players there are." (1987:104)
    8 "Reciprocity seems likely to emerge and to be effective as a behavioral pattern only in critically small-number settings, where individuals both identify others in the social interaction and expect to experience further dealings within the same group. The question for us becomes one of identifying conditions under which persons are likely to form small-number groups or `cooperative clusters' that internally secure rule-following through reciprocity. In this regard it is useful to distinguish between two types of rules which we shall call trust rules and solidarity rules." (Vanberg and Buchanan 1988:147)
    9 Whether this clustering will actually be hierarchical or only overlapping on the same level is not of concern here. In the theory, both ways would tend to promote cooperation between groups.
    10 Others have suggested similar solutions: "[R]ussell Hardin has suggested that large groups without any internal authority structure at all may be able to resolve collective action dilemmas by using a federated structure. He argues that despite the absence of a central authority, subunits may be able to regulate themselves via decentralized strategies. Such self-regulation could arise if there were multiple activities going on simultaneously in each chapter" (Bendor and Mookherjee 1987:143). Or, as Hardin himself states: "It is hard to imagine that conventional behavior or strategies of contingent cooperation could resolve Prisoner's Dilemmas if these occurred exclusively in very large groups. Large-group Prisoner's Dilemmas might be resolved as a byproduct of smaller subgroup interactions. But this could be strictly a spontaneous voluntaristic by-product - not the organized by-product of Olson's analysis... (1982:184) ... Overlapping activities are therefore perhaps most important for their relation to reputation, or rather for the dependence of one's reputation on one's behavior in a cluster of activities." (1982:185)
    11 This section is only meant to connect the cooperation-problem contributions to the broader class of problem that all of the contributors claim to be addressing, the Hobbesian problem. All of the contributions are looking for an alternative to the Hobbesian solution.
