
Asymmetric Evolutionary Games

Abstract

Evolutionary game theory is a powerful framework for studying evolution in populations of interacting individuals. A common assumption in evolutionary game theory is that interactions are symmetric, which means that the players are distinguished by only their strategies. In nature, however, the microscopic interactions between players are nearly always asymmetric due to environmental effects, differing baseline characteristics, and other possible sources of heterogeneity. To model these phenomena, we introduce into evolutionary game theory two broad classes of asymmetric interactions: ecological and genotypic. Ecological asymmetry results from variation in the environments of the players, while genotypic asymmetry is a consequence of the players having differing baseline genotypes. We develop a theory of these forms of asymmetry for games in structured populations and use the classical social dilemmas, the Prisoner’s Dilemma and the Snowdrift Game, for illustrations. Interestingly, asymmetric games reveal essential differences between models of genetic evolution based on reproduction and models of cultural evolution based on imitation that are not apparent in symmetric games.

Author Summary

Biological interactions, even between members of the same species, are almost always asymmetric due to differences in size, access to resources, or past interactions. However, classical game-theoretical models of evolution fail to account for sources of asymmetry in a comprehensive manner. Here, we extend the theory of evolutionary games to two general classes of asymmetry arising from environmental variation and individual differences, covering much of the heterogeneity observed in nature. If selection is weak, evolutionary processes based on asymmetric interactions behave macroscopically like symmetric games with payoffs that may depend on the resource distribution in the population or its structure. Asymmetry uncovers differences between genetic and cultural evolution that are not apparent when interactions are symmetric.

Introduction

Evolutionary game theory has been used extensively to study the evolution of cooperation in social dilemmas [1–3]. A social dilemma is typically modeled as a game with two strategies, cooperate (C) and defect (D), whose payoffs for pairwise interactions are defined by a matrix of the form

$$\begin{array}{c|cc} & C & D \\ \hline C & (R,\,R) & (S,\,T) \\ D & (T,\,S) & (P,\,P) \end{array} \qquad (1)$$

[4, 5]. For a focal player using a strategy on the left-hand side of this matrix against an opponent using a strategy on the top of the matrix, the first (resp. second) coordinate of the corresponding entry of this matrix is the payoff to the focal player (resp. opponent). That is, a cooperator receives R when facing another cooperator and S when facing a defector; a defector receives T when facing a cooperator and P when facing another defector. Since the same argument applies to the opponent, the game defined by (Eq 1) is symmetric. If defection pays more than cooperation when the opponent is a cooperator (T > R), but the payoff for mutual cooperation is greater than the payoff for mutual defection (R > P), then a social dilemma [6, 7] arises from this game due to the conflict of interest between the individual and the group (or pair). The nature of this social dilemma depends on the ordering of R, S, T, and P. Biologically, the most important rankings are given by the Prisoner’s Dilemma (T > R > P > S) and the Snowdrift Game (T > R > S > P) [4, 7–10].

Since matrix (Eq 1) defines a symmetric game, any two players using the same strategy are indistinguishable for the purpose of calculating payoffs. In nature, however, asymmetry frequently arises in interspecies interactions such as parasitic or symbiotic relationships [4]. Interactions between subpopulations, such as in Dawkins’ Battle of the Sexes Game [11–14], also give rise to asymmetry that cannot be modeled by the symmetric matrix (Eq 1). Even intraspecies interactions are essentially always asymmetric: (i) phenotypic variations such as size, strength, speed, wealth, or intellectual capabilities; (ii) differences in access to and availability of environmental resources; or (iii) each individual’s history of past interactions, all affect the interacting individuals differently and result in asymmetric payoffs. The winner-loser effect, for example, is a well-studied instance of the effects of previous encounters on future interactions and has been reported across taxa [4, 15], including even mollusks [16, 17]. Asymmetry may also result from the assignment of social roles [18–20], such as the roles of “parent” and “offspring” [21]: cooperation may be tied to individual energy or strength, for example, which is, in turn, determined by a player’s role. In the realm of continuous strategies, adaptive dynamics has been used to study asymmetric competition, which applies to the resource consumption of plants, for instance [22–24]. In social dilemmas containing many cooperators, accumulated benefits may be synergistically enhanced (or discounted) in a way that depends on who or where the players are [7], thereby making larger group interactions asymmetric. To model such interactions using evolutionary game theory, the payoff matrix must reflect the asymmetry.

In the Donation Game, a cooperator pays a cost, c, to deliver a benefit, b, to the opponent, while a defector pays no cost and provides no benefit [25]. In terms of matrix (Eq 1), this game satisfies R = b − c, S = −c, T = b, and P = 0. Provided b and c are positive, mutual defection is the only Nash equilibrium. If b > c, then this game defines a Prisoner’s Dilemma. Perhaps the simplest way to modify this game to account for possible sources of asymmetry is to allow each pair of players to have a distinct payoff matrix; that is, the payoff matrix for player i against player j in the Donation Game is

$$M_{ij} = \begin{array}{c|cc} & C & D \\ \hline C & (b_j - c_i,\; b_i - c_j) & (-c_i,\; b_i) \\ D & (b_j,\; -c_j) & (0,\; 0) \end{array} \qquad (2)$$

for some $b_i$, $b_j$, $c_i$, and $c_j$. If player i cooperates, then this player donates $b_i$ to his or her opponent and incurs a cost of $c_i$ for doing so. As before, defectors provide no benefit and pay no cost. The index i could refer to a baseline trait of the player, the player’s location, his or her history of past interactions, motivation [26], or any other non-strategy characteristic that distinguishes one player from another.
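
To make the payoff bookkeeping concrete, the asymmetric Donation Game of Eq (2) can be tabulated directly. The following minimal Python sketch (our illustration, not code from the paper; the function and argument names are hypothetical) returns, for every strategy profile, the pair of payoffs to player i and to player j.

```python
import numpy as np

def donation_payoffs(b_i, c_i, b_j, c_j):
    """Payoff pairs (to player i, to player j) in the asymmetric Donation Game.

    Strategy index 0 = cooperate (C), 1 = defect (D)."""
    M = np.empty((2, 2, 2))
    M[0, 0] = (b_j - c_i, b_i - c_j)  # both cooperate: each pays own cost, receives the other's donation
    M[0, 1] = (-c_i, b_i)             # i cooperates, j defects
    M[1, 0] = (b_j, -c_j)             # i defects, j cooperates
    M[1, 1] = (0.0, 0.0)              # mutual defection
    return M
```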

Games based on matrices of the form (Eq 2), with payoffs for both players in each entry of the matrix, are sometimes called bimatrix games. Although bimatrix games have appeared in the context of evolutionary dynamics [14, 20, 27], most of the focus on these games has been in the setting of classical game theory and economics [see 28] where “matrix game” generally means “bimatrix game.” Bimatrix games may be used to model classical asymmetric interactions such as those arising from sexual asymmetry in the Battle of the Sexes Game [29]. The asymmetric, four-strategy Hawk-Dove Game of [4] consisting of the strategies Hawk, Dove, Bourgeois, and anti-Bourgeois may also be framed as a (4 × 4) bimatrix game [see 30]. Symmetric matrix games, such as (Eq 1), are special cases of bimatrix games. We explore here the ways in which bimatrix games can be incorporated into evolutionary dynamics and used to model natural asymmetries in biological populations.

We treat two particular forms of asymmetry: ecological and genotypic. Ecological asymmetry is derived from the locations of the players, whereas genotypic asymmetry is based on the players themselves. With ecological asymmetry, Mij is the payoff matrix for a player at location i against a player at location j. Since the payoffs depend on the locations of the players, this form of asymmetry requires a structured population. Ecological asymmetry is a natural consideration in evolutionary dynamics since it ties strategy success to the environment. In the Donation Game, for instance, cooperators might be donating goods or services, but the costs and benefits may depend on the environmental conditions, i.e. the location of the donor.

On the other hand, players might instead differ in ability or strength, and “strong” cooperators might contribute greater benefits (or incur lower costs) than “weak” cooperators. This variation results in genotypic asymmetry, where each player has a baseline genotype (strength) and a strategy (C or D). This form of asymmetry turns out to be subtler than it seems at first glance, however, since genotypes are generally represented by strategies in evolutionary game theory [4, 31]. In particular, it might seem that the genotype and strategy of a player could be combined into a single composite strategy and that the symmetric game based on these composite strategies could replace the original asymmetric game. As it happens, whether genotypic asymmetry can be resolved by a symmetric game depends on the details of the evolutionary process.

Classically, evolutionary games were studied in infinite populations via replicator dynamics [32], and more recently these games have been considered in finite populations [33, 34]. Because every biological population is finite, we focus on finite populations (which, for technical reasons, we assume to be large). Since ecological asymmetry requires distinguishing different locations within the population, we assume that the population is structured and that a network defines the structure. Network-structured populations have received a considerable amount of attention in evolutionary game theory and provide a natural setting in which to study social dilemmas [1, 3, 35–38]. Compared to well-mixed populations, in which each player interacts with every other player, networks can restrict the interactions that occur within the population by specifying which players are “neighbors,” i.e. share a link. We represent the links among the N players in the population using an adjacency matrix, $(w_{ij})_{1 \leqslant i,j \leqslant N}$, which is defined by letting $w_{ij} = 1$ if there is a link from vertex i to vertex j and $w_{ij} = 0$ otherwise (and which satisfies $w_{ij} = w_{ji}$ for each i and j).
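
For concreteness, a population structure of the kind used throughout the paper (a random regular graph together with its adjacency matrix) can be generated with standard tools. The sketch below uses the networkx library, which is simply our choice for illustration; the paper does not prescribe any implementation.

```python
import networkx as nx
import numpy as np

N, k = 500, 3                       # population size and degree used in the figures
G = nx.random_regular_graph(k, N)   # random k-regular graph
w = nx.to_numpy_array(G)            # adjacency matrix with w_ij = w_ji in {0, 1}

# sanity checks: symmetric and k-regular
assert np.allclose(w, w.T) and np.all(w.sum(axis=1) == k)
```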

In an evolutionary game, the state of a population of players is defined by specifying the strategy of each player. Each player interacts with all of his or her neighbors. The total payoff to a player is multiplied by a selection intensity, β ⩾ 0, and then converted into fitness (see Methods). Once each player is assigned a fitness, an update rule is used to determine the state of the population at the next time step [39]. For example, with a birth-death update rule, a player is chosen from the population for reproduction with probability proportional to relative fitness. A neighbor of the reproducing player is then randomly chosen for death, and the offspring, who inherits the strategy of the parent, fills the vacancy. This process is a modification of the Moran process [40], adapted to allow for (i) frequency-dependent fitnesses and (ii) population structures that are not necessarily well mixed. The order of birth and death could also be reversed to get a death-birth update rule [1]. In this rule, death occurs at random and the neighbors of the deceased compete to reproduce in order to fill the vacancy. These two rules result in the update of a single strategy in each time step, but one could consider other rules, such as Wright-Fisher updating, in which all of the strategies are revised in each generation [41]. The rules mentioned to this point define strategy updates via reproduction and inheritance; as such, we refer to them as genetic update rules.
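
The genetic rules can be sketched in code as follows for death-birth updating. This is only an illustration: `payoff_of` is a hypothetical callback that returns a player's total payoff against its neighbors (model-specific), and the exponential payoff-to-fitness map is one common positive choice rather than a prescription of the paper.

```python
import numpy as np

rng = np.random.default_rng()

def death_birth_step(strategy, w, payoff_of, beta=0.01):
    """One death-birth update on a graph with adjacency matrix w.

    A random player dies; its neighbors compete to fill the vacancy with
    probability proportional to fitness, here taken as exp(beta * payoff)."""
    N = len(strategy)
    i = rng.integers(N)                          # uniformly random death
    nbrs = np.flatnonzero(w[i])                  # neighbors compete to reproduce
    fitness = np.exp(beta * np.array([payoff_of(j) for j in nbrs]))
    parent = rng.choice(nbrs, p=fitness / fitness.sum())
    strategy[i] = strategy[parent]               # offspring inherits the parent's strategy
    return strategy
```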

Another popular class of update rules is based on revisions to the existing players’ strategy choices. We refer to rules falling into this class as cultural update rules. Examples include imitation updating, in which a player is selected at random to evaluate his or her strategy and then probabilistically compares this strategy to those of his or her neighbors [1]. A more localized version of this update rule is known as pairwise comparison updating, in which a player chooses a random neighbor for comparison rather than looking at the entire neighborhood [42, 43]. Under best response dynamics, an individual adopts the strategy that performs best given the current strategies of his or her neighbors [44]. In each of these cultural processes, the strategy of a player can change, but the underlying genotype is always the same, which suggests that baseline genotype and strategy need to be treated separately.
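
A cultural counterpart can be sketched in the same style. The snippet below implements a pairwise comparison step with a Fermi-type adoption probability, which is one standard choice consistent with comparing exponential fitnesses; as before, the names and the payoff callback are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def pairwise_comparison_step(strategy, w, payoff_of, beta=0.01):
    """One pairwise comparison update: a random focal player compares itself
    with a random neighbor and adopts that neighbor's strategy with a
    probability that increases with the payoff difference."""
    i = rng.integers(len(strategy))              # focal player
    j = rng.choice(np.flatnonzero(w[i]))         # random neighbor (model player)
    p = 1.0 / (1.0 + np.exp(-beta * (payoff_of(j) - payoff_of(i))))
    if rng.random() < p:
        strategy[i] = strategy[j]                # the strategy changes; any underlying genotype does not
    return strategy
```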

Genotypic asymmetry needs to be handled more carefully if the update rule is genetic since the nature of genotype transmission affects the dynamics of the process. In contrast to cultural processes, the genotype and strategy of a player at a given location may both change if the update rule is genetic: genotype may be inherited but not imitated. We will see that this property results in cultural and genetic processes behaving completely differently in the presence of genotypic asymmetry. Phenotype may have both genetic and environmental components [45, 46], and after treating the genetic (genotypic) and environmental components separately, these two forms of asymmetry may be combined in order to get a model in which the asymmetry is derived from varying baseline phenotypes. Thus, with a theory of both ecological asymmetry and genotypic asymmetry based on inherited genotypes, one can account for more complicated forms of asymmetry appearing in biological populations.

Results

Ecological asymmetry

Here we develop a framework for ecologically asymmetric games in which the payoffs depend on the locations of the players as well as their strategies. We assume that all of the players have the same set of strategies (or “actions”) available to them, {A1, …, An}. The payoff matrix for a player at vertex i against a player at vertex j is

$$M_{ij} = \left(a^{ij}_{rs}\right)_{1 \leqslant r,s \leqslant n} = \begin{pmatrix} a^{ij}_{11} & \cdots & a^{ij}_{1n} \\ \vdots & \ddots & \vdots \\ a^{ij}_{n1} & \cdots & a^{ij}_{nn} \end{pmatrix} \qquad (3)$$

That is, a player at vertex i using strategy Ar against an opponent at vertex j using strategy As realizes a payoff of $a^{ij}_{rs}$, whereas his opponent receives $a^{ji}_{sr}$. Since $a^{ij}_{rs}$ depends on i and j, these payoff matrices capture the asymmetry of the game.

In the simpler setting of symmetric games, the pair approximation method has been used successfully to describe the dynamics of evolutionary processes on networks [1, 36, 4749]. For each r ∈ {1, …, n}, this method approximates the frequency of strategy Ar, which we denote by pr, using the frequencies of strategy pairs in the population. Pair approximation is expected to be accurate on large random regular networks [1, 48], so we assume that the network is regular (of degree k > 2) and that N is sufficiently large. (For k = 2, the network is just a cycle, which we do not treat here.) We also take β ≪ 1, meaning that selection is weak, which results in a separation of timescales: the local configurations equilibrate quickly, while the global strategy frequencies change much more slowly. This separation allows us to get an explicit expression for the expected change, 𝔼 [Δpr], in the frequency of strategy Ar for each r. Incidentally, weak selection happens to be quite reasonable from a biological perspective since each trait is expected to have only a small effect on the overall fitness of a player [5052].

Interestingly, for two genetic and two cultural update rules, weak selection reduces ecological asymmetry to a symmetric game derived from the spatial average of the payoff matrices:

Theorem 1. In the limit of weak selection, the dynamics of the ecologically asymmetric death-birth, birth-death, imitation, and pairwise comparison processes on a large, regular network may be approximated by the dynamics of a symmetric game with the same update rule and payoff matrix $\overline{M} = \left(\overline{a}_{st}\right)_{1 \leqslant s,t \leqslant n}$, i.e.

$$\overline{M} := \frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\, M_{ij}, \qquad (4)$$

where $\overline{a}_{st} = \frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\, a^{ij}_{st}$ for each s and t.

For a proof of Theorem 1, see Methods. In Methods, we derive explicit formulas for 𝔼 [Δpr] for each r (where pr is the frequency of strategy Ar and 𝔼 [Δpr] is the expected change in pr in one step of the process) and show that these expectations depend on $\overline{M}$ in the limit of weak selection. If we choose an appropriate time scale and make the approximation

$$\dot{p}_r \approx \mathbb{E}\left[\Delta p_r\right], \qquad (5)$$

then the dynamics of an ecologically asymmetric process may also be described in terms of the replicator equation (on graphs) of [36]: If $\overline{M} = \left(\overline{a}_{st}\right)_{1 \leqslant s,t \leqslant n}$, then

$$\dot{p}_r = p_r\left[\sum_{s=1}^{n}\left(\overline{a}_{rs} + b_{rs}\right)p_s - \sum_{s,t=1}^{n}\left(\overline{a}_{st} + b_{st}\right)p_s p_t\right], \qquad (6)$$

where $B = \left(b_{st}\right)$ is a function of $\overline{M}$, k, and the update rule. (For each of the four processes, the explicit expression for $B$ is provided in Methods.) The matrix $B$ accounts for local competition resulting from the population structure [see 36]. In particular, the Ohtsuki-Nowak transform,

$$\overline{M} \longmapsto \overline{M} + B, \qquad (7)$$

which transforms the classical replicator equation into the replicator equation on graphs, also applies to evolutionary games with ecological asymmetry.
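
Computationally, the symmetric matrix of Theorem 1 amounts to averaging the location-specific payoff matrices over all ordered neighbor pairs. The sketch below uses the 1/(Nk) normalization of our rendering of Eq (4); any positive rescaling leaves the weak-selection dynamics unchanged.

```python
import numpy as np

def spatial_average(M, w):
    """Spatially averaged payoff matrix.

    M is an (N, N, n, n) array with M[i, j] the (row-player) payoff matrix of a
    player at vertex i against a player at vertex j; w is the adjacency matrix."""
    ordered_edges = w.sum()                       # equals N * k on a k-regular graph
    return np.einsum('ij,ijrs->rs', w, M) / ordered_edges
```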

Even though interactions are now governed by a symmetric game, Theorem 1 states that, in general, the dynamics depend on the particular network configuration, $(w_{ij})_{1 \leqslant i,j \leqslant N}$; that is, the symmetric payoffs defined by $\overline{M}$ still depend on the network structure, or, equivalently, on the distribution of ecological resources within the population. However, somewhat surprisingly, there is a broad class of games for which this dependence vanishes:

Definition 1. If $a^{ij}_{rs} = \phi^{i}_{rs} + \psi^{j}_{rs}$ for each r and s, for some quantities $\phi^{i}_{rs}$ (depending on the strategies and on location i only) and $\psi^{j}_{rs}$ (depending on the strategies and on location j only), then Mij is called a spatially additive payoff matrix. If Mij is spatially additive for each i and j, then the game is said to be spatially additive.

A game is spatially additive if the payoff for an interaction between any two members of the population can be decomposed as a sum of two components, one from each player’s location. Note that spatial additivity is different from the “equal gains from switching” property [53] in that neither implies the other. However, spatial additivity is an analogue of this property in the following sense: if two players at different locations use the same strategy against a common opponent, then the difference in these two players’ payoffs for this interaction is independent of the location of the opponent. Interchanging “location” and “strategy,” one obtains the equal gains from switching property. The importance of spatially additive games is due to the following corollary to Theorem 1:

Corollary 1. If Mij is spatially additive for each i and j, then the expected change in the frequency of strategy Ar, 𝔼 [Δpr], is independent of $(w_{ij})_{1 \leqslant i,j \leqslant N}$ for each r. In particular, the dynamics of the process do not depend on the particular network configuration.

As an example, the asymmetric Donation Game is spatially additive and possesses the equal gains from switching property, which greatly simplifies the analysis of its dynamics:

Example 1. (Donation Game with ecological asymmetry). The asymmetric Donation Game with payoff matrices defined by Eq (2) is spatially additive and satisfies

$$\overline{M} = \begin{pmatrix} \overline{b} - \overline{c} & -\overline{c} \\ \overline{b} & 0 \end{pmatrix}, \qquad (8)$$

where $\overline{b} = \frac{1}{N}\sum_{i=1}^{N} b_i$ and $\overline{c} = \frac{1}{N}\sum_{i=1}^{N} c_i$. Therefore, the dynamics of the asymmetric game are the same as those of its symmetric counterpart with benefit, $\overline{b}$, and cost, $\overline{c}$, regardless of network configuration or resource distribution. Under death-birth (resp. imitation) updating, this result implies that cooperation is expected to increase if and only if $\overline{b}/\overline{c} > k$ (resp. $\overline{b}/\overline{c} > k + 2$), where k is the degree of the (regular) network [1]. Fig 1(A) compares the predicted result obtained from $\overline{M}$ to simulation data for imitation updating when benefit and cost values are distributed according to Gaussian random variables.
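
As a quick numerical illustration of Example 1, one can draw per-vertex benefits and costs as in Fig 1(A) and evaluate the averaged condition for imitation updating; this is only a sketch with the figure’s parameters, not the authors’ simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 500, 3
b = rng.normal(3.5, 1.0, size=N)    # per-vertex benefits (mean 3.5, variance 1.0)
c = rng.normal(0.5, 0.5, size=N)    # per-vertex costs (mean 0.5, variance 0.25)

b_bar, c_bar = b.mean(), c.mean()
# Under imitation updating, cooperation is expected to increase iff b_bar / c_bar > k + 2.
print(b_bar / c_bar > k + 2)
```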

Fig 1. Average change in the frequency of cooperators as a function of the frequency of cooperators, pC, in (A) an asymmetric Donation Game and (B) asymmetric Snowdrift Games.

The update rules are (A) imitation and (B) death-birth, and the selection intensity is β = 0.01 for each process. In both figures, the network is a random regular graph of size N = 500 and degree k = 3. In (A), benefits and costs of cooperation vary across vertices according to a Gaussian distribution with mean 3.5, variance 1.0 for benefits and mean 0.5, variance 0.25 for costs. In (B), the benefit is b = 5.0 for all vertices, and the costs are either low, c1 = 34/13, or high, c2 = 70/13, which actually recovers the payoff ranking of the Prisoner’s Dilemma because c2 > b. The costs are the same for all vertices (c1, blue, and c2, green) or mixed at equal proportions (red). (B) confirms that the average change in cooperators in the mixed Snowdrift Game/Prisoner’s Dilemma (red) may be obtained by averaging these changes for the Snowdrift Game (blue) and the Prisoner’s Dilemma (green). The small, systematic deviations between simulation data and analytical predictions (solid lines) are explained in Methods (where it is also shown that 𝔼[ΔpC] is linear in β for β ≪ 1).

https://doi.org/10.1371/journal.pcbi.1004349.g001

Example 2. (Snowdrift Game with ecological asymmetry). In order to illustrate when Corollary 1 fails, we turn to cooperation in the Snowdrift Game [8, 9]. In this game, two drivers find themselves on either side of a snowdrift. If both cooperate in clearing the snowdrift, they share the cost, c, equally, and both receive the benefit of being able to pass, b. If one player cooperates and the other defects, both players receive b but the cooperator pays the full cost, c. If both players defect, each receives no benefit and pays no cost. In order to incorporate ecological asymmetry, we assume that the benefits are all the same since they are derived from being able to pass in the absence of a snowdrift. On the other hand, the cost a player pays to clear the snowdrift may depend on his or her location: the snowdrift may appear on an incline, for example, in which case one player shovels with the gradient and the other player against it. Moreover, when two cooperators meet, they might clear unequal shares of the snowdrift. Thus, the payoff matrix for a player at location i against a player at location j should be of the form

$$M_{ij} = \begin{array}{c|cc} & C & D \\ \hline C & b - \alpha_{ij}\, c_i & b - c_i \\ D & b & 0 \end{array} \qquad (9)$$

where 0 ⩽ αij ⩽ 1 and αij + αji = 1 [54]. Intuitively, when two cooperators face one another, they each begin to clear the snowdrift and stop once they meet; the quantity αij indicates the fraction of the snowdrift a cooperator at location i clears before meeting the cooperator at location j. A natural choice for αij is

$$\alpha_{ij} = \frac{c_j}{c_i + c_j}, \qquad (10)$$

which is the unique value that gives αij ci = αji cj for each i and j, ensuring that the game is fair, i.e. that the cooperator with the higher cost clears a smaller portion of the snowdrift than the one with the lower cost. Averaging the payoff to one cooperator against another over all possible locations gives

$$\frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\left(b - \alpha_{ij}\, c_i\right) = b - \frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\,\frac{c_i\, c_j}{c_i + c_j}, \qquad (11)$$

which is the upper-left entry of $\overline{M}$. In contrast, the remaining three entries of $\overline{M}$ do not depend on $(w_{ij})_{1 \leqslant i,j \leqslant N}$. Therefore, provided there are at least two locations with distinct cost values, the dynamics of an evolutionary process depend on the particular network configuration (Theorem 1). This network dependence is illustrated in Fig 2.
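
The network-dependent entry in Eq (11) is straightforward to compute for a given cost assignment and adjacency matrix. The sketch below uses the fair split αij = cj/(ci + cj) discussed above; the function and variable names are ours.

```python
import numpy as np

def average_cc_payoff(b, c, w):
    """Average payoff for mutual cooperation in the asymmetric Snowdrift Game.

    b is the common benefit, c an array of per-vertex costs, w the adjacency
    matrix; a cooperator at i facing a cooperator at j earns b - c_i*c_j/(c_i+c_j)."""
    shared = c[:, None] * c[None, :] / (c[:, None] + c[None, :])
    return b - np.sum(w * shared) / w.sum()
```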

Fig 2. Average change in the frequency of cooperators as a function of the frequency of cooperators, pC, for a spatially non-additive Snowdrift Game, Eq (9), with selection intensity β = 0.01.

The blue and green data are obtained using pairwise comparison updating and differ only in the configuration of the underlying network, which in both cases is a random regular graph of size N = 500 and degree k = 3. Every vertex has a benefit value of b = 4.0, and the cost values are split equally, with half of the vertices having c1 = 0.5 and the remaining half having c2 = 5.5. The average payoff for mutual cooperation, Eq (11), is 3.069 (blue) and 2.961 (green), which suggests that the former arrangement is more attractive for cooperation. The analytical predictions (solid lines) are obtained from Eq (48) in Methods (and are linear in β for β ≪ 1).

https://doi.org/10.1371/journal.pcbi.1004349.g002

Suppose now that we set αij ≡ 1/2 to model ecological asymmetry in the Snowdrift Game; that is, if two cooperators meet, they each clear exactly half of the snowdrift. If there are two cost values in the population, c1 and c2, with c1 < b < c2 < 2b, then a player who incurs a cost of c1 finds it beneficial to cooperate against a defector, but a player who incurs a cost of c2 would rather defect in this situation. Thus, based on the social dilemma implied by the ranking of the payoffs, a player who incurs a cost of c1 for cooperating is always playing a Snowdrift Game while a player who incurs a cost of c2 is always playing a Prisoner’s Dilemma. It follows that ecological asymmetry can account for multiple social dilemmas being played within a single population, even if the players all use the same set of strategies (C and D). The payoff matrices of this particular game are spatially additive, so, by Corollary 1, the dynamics do not depend on the network configuration. If q is the fraction of vertices with cost value c1, then $\overline{c} = q\, c_1 + (1 - q)\, c_2$ is the average cost of cooperation for a particular location, and the dynamics are the same as those of the symmetric Snowdrift Game in which the cost of clearing a snowdrift is $\overline{c}$ (see Fig 1(B)). Fig 3 demonstrates that this result does not extend to stronger selection strengths, so Theorem 1 is unique to weak selection.

Fig 3. The Snowdrift Games of Fig 1(B) with the stronger selection strengths β = 0.1 (A) and β = 0.5 (B).

For each of the three games (with benefit b = 5.0 and costs c1, c2, and half c1/half c2, respectively), the simulation results differ from the prediction of pair approximation already for β = 0.1 (A). Moreover, for β = 0.5, (B) makes it clear that Theorem 1 no longer holds since the average change in cooperators in the game with mixed costs (red) differs from the average (grey) of these changes for the games with costs c1 only (blue) and c2 only (green). Thus, Theorem 1 is peculiar to weak selection.

https://doi.org/10.1371/journal.pcbi.1004349.g003

Based on Theorem 1 and the relative rank of payoffs, the social dilemma defined by the asymmetric game (Eq 9) (for general αij) is a Prisoner’s Dilemma if $\overline{c} > b$ and a Snowdrift Game if $\overline{c} < b$ when selection is weak, where $\overline{c} = \frac{1}{N}\sum_{i=1}^{N} c_i$ is the average cost of cooperation in the population. That is, microscopically, there is a mixture of Prisoner’s Dilemmas and Snowdrift Games, but, macroscopically, the process behaves like just one of these social dilemmas. Consequently, although the dynamics of this evolutionary process may depend on the network configuration, the type of social dilemma implied by this game does not.
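
Under the classification just described (with the average-cost threshold as filled in above, which should be treated as our reconstruction), deciding which dilemma governs the macroscopic dynamics reduces to comparing the average cost of cooperation with the benefit:

```python
def macroscopic_dilemma(b, costs):
    """Classify the weak-selection macroscopic game for the asymmetric
    Snowdrift Game: Snowdrift if the average cost is below the benefit,
    Prisoner's Dilemma if it is above (threshold as reconstructed above)."""
    c_bar = sum(costs) / len(costs)
    return "Snowdrift Game" if c_bar < b else "Prisoner's Dilemma"

# Fig 1(B) parameters: half the vertices at cost 34/13, half at 70/13, benefit 5.0
print(macroscopic_dilemma(5.0, [34/13, 70/13]))   # -> "Snowdrift Game"
```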

Genotypic asymmetry

Another form of asymmetry is based on the genotypes of the players rather than their locations. Each player in the population has one of ℓ possible genotypes, and these genotypes are enumerated by the set {1,…,ℓ}. For an n-strategy game, the payoff matrix for a player whose genotype is u against a player whose genotype is v is

$$M_{uv} = \left(a^{uv}_{rs}\right)_{1 \leqslant r,s \leqslant n} = \begin{pmatrix} a^{uv}_{11} & \cdots & a^{uv}_{1n} \\ \vdots & \ddots & \vdots \\ a^{uv}_{n1} & \cdots & a^{uv}_{nn} \end{pmatrix} \qquad (12)$$

We explore genotypic asymmetry for cultural and genetic processes separately:

Cultural updating.

If genotypic asymmetry is incorporated into a cultural process, then the genotypes of the players never change; only the strategies of the players are updated. In a structured population, it follows that each player’s genotype may be associated with his or her location, and this association is an invariant of the process. Thus, if u(i) denotes the genotype of the player at location i, then we may apply Theorem 1 to the matrices defined by Mij = Mu(i)u(j) for each i and j. In this sense, genotypic asymmetry may be “reduced” to ecological asymmetry in evolutionary games with cultural update rules. Note that, unlike ecological asymmetry, genotypic asymmetry does not require a structured population. However, one can always think of a population as structured (even in the well-mixed case), and doing so allows one to make sense of the “locations” of the players and to apply Theorem 1 to cultural processes with genotypic asymmetry.

Example 3. (Donation Game with genotypic asymmetry and cultural updating). In the Donation Game, a cooperator of genotype u donates bu at a cost of cu. Defectors contribute no benefit and pay no cost, irrespective of genotype. Consider imitation updating on a large, regular network of degree k, and let u(i) denote the genotype of the player at location i (henceforth “player i”). Suppose that player i is a cooperator, player j is a defector, and that player i imitates player j and becomes a defector. Despite this strategy change, the genotype of player i is still u(i), and the payoff matrix for player i against player j is still Mu(i)u(j). On the other hand, consider the same process but with the genotypic asymmetry replaced by ecological asymmetry (and with Mij := Mu(i)u(j) as the payoff matrix for the player at location i against the player at location j). Since the genotype of a player at a given location never changes in an imitation process, the process with ecological asymmetry is well-defined; that is, Mij is independent of the dynamics of the process for each i and j. Therefore, we may instead study the evolution of cooperation in the process with ecological asymmetry, and we already know from Example 1 that, in the limit of weak selection, the frequency of cooperators in this Donation Game is expected to increase if and only if $\overline{b}/\overline{c} > k + 2$, where $\overline{b}$ and $\overline{c}$ are the averages of $b_{u(i)}$ and $c_{u(i)}$, respectively, over all locations i.

In contrast, for genetic update rules, the asymmetry present due to differing genotypes can be removed completely if the genotypes of offspring are determined by genetic inheritance:

Genetic updating.

Genetic update rules are defined by the ability of players to propagate their offspring to other locations in the population by means of births and deaths. In other words, there is a reproductive step in which genetic information is passed from parent(s) to child. Both the death-birth and birth-death processes have genetic update rules, but reproduction need not be clonal for the update rule to be genetic. If the genotypes of offspring are determined by genetic inheritance, then the strategy and genotype at each location are updated simultaneously: if the offspring of a player whose genotype is u and whose strategy is Ar replaces a player whose genotype is v and whose strategy is As, then v is updated to u and As is updated to Ar synchronously. Therefore, rather than treating genotypes and strategies separately, we may consider them together in the form of pairs, (u, Ar), linking genotype and strategy. These pairs may be thought of as composite strategies of a larger evolutionary game whose payoff matrix, $\mathcal{M}$, is defined by

$$\mathcal{M}_{(u, A_r),(v, A_s)} := a^{uv}_{rs} \qquad (13)$$

for genotypes, u and v, and strategies, Ar and As. The map

$$\left\{M_{uv}\right\}_{1 \leqslant u,v \leqslant \ell} \longmapsto \mathcal{M} \qquad (14)$$

resolves a collection of n × n asymmetric payoff matrices with a single symmetric payoff matrix, $\mathcal{M}$, of size ℓn × ℓn. This argument holds for any population structure, so evolutionary processes with genotypic asymmetry that are based on genetic update rules can be studied in any setting in which there is a theory of symmetric games. For example, we may use the results from pair approximation on large, regular networks to study the Donation Game with genotypic asymmetry and genetic updating:
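
Eq (14) has a direct computational reading: stack the genotype-indexed matrices into one large symmetric game over composite strategies. A minimal sketch (the array layout and names are our own):

```python
import numpy as np

def composite_game(A):
    """Fold an (l, l, n, n) array A, where A[u, v] is the row-player payoff
    matrix for genotype u against genotype v, into a single symmetric game on
    the l*n composite strategies; composite strategy (u, A_r) gets index u*n + r."""
    l, _, n, _ = A.shape
    return A.transpose(0, 2, 1, 3).reshape(l * n, l * n)
```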

Example 4. (Donation Game with genotypic asymmetry and genetic updating). As in Example 3, a cooperator of genotype u in the Donation Game donates bu at a cost of cu. Defectors contribute no benefit and pay no cost, irrespective of genotype. For the death-birth and birth-death update rules, defectors may be modeled as cooperators whose benefit and costs are both 0. In the larger symmetric game defined by (Eq 14), it follows that there are ℓ + 1 distinct composite strategies: (1, C), (2, C), …, (ℓ, C), and D := (ℓ + 1, C). For death-birth updating on a large, regular network of degree k, cooperators of genotype u ∈ {1,…,ℓ} are expected to increase if and only if

$$b_u - \widetilde{b} > k\left(c_u - \widetilde{c}\right), \qquad (15)$$

where, for each v ∈ {1,…,ℓ}, pv denotes the frequency of cooperators of genotype v (i.e. the frequency of strategy (v, C) in the larger symmetric game). The terms $\widetilde{b} = \sum_{v=1}^{\ell} p_v b_v$ and $\widetilde{c} = \sum_{v=1}^{\ell} p_v c_v$ are the average population benefit and cost values, respectively. Therefore, the condition for the expected increase in cooperators of a particular genotype depends on the average level of cooperation within the population. Eq (15) may be thought of as an analogue of the ‘b/c > k’ rule of [1] with b replaced by the “benefit premium,” $b_u - \widetilde{b}$, and c replaced by the “cost premium,” $c_u - \widetilde{c}$.

In the birth-death process, on the other hand, cooperators of genotype u ∈ {1,…,ℓ} are expected to increase if and only if

$$\sum_{v=1}^{\ell} p_v\left(c_v - c_u\right) > 0, \qquad (16)$$

that is, if and only if $c_u < \widetilde{c}$. Interestingly, this condition is independent of the benefit values and says that cooperators of genotype u ∈ {1,…,ℓ} increase in abundance if they incur, on average, smaller costs for cooperating than the other cooperators.

Eqs (15) and (16) are obtained by noticing that the expected change in the frequency of cooperators of genotype u, 𝔼 [Δpu], is a positive multiple of $p_u\left[\left(b_u - \widetilde{b}\right) - k\left(c_u - \widetilde{c}\right)\right]$ in the death-birth process and of $p_u\left(\widetilde{c} - c_u\right)$ in the birth-death process (see Eqs (33) and (36) in Methods). In the birth-death process, it follows that the expected change in the frequency of cooperators of genotype u is close to 0 if pu is close to 1, hence increases in cooperators who pay nonzero costs are necessarily transient.
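
For completeness, the increase conditions of this example can be evaluated numerically from per-genotype benefits, costs, and cooperator frequencies. The sketch below encodes Eqs (15) and (16) in the form reconstructed above and should be read with that caveat; the function name is hypothetical.

```python
import numpy as np

def favored_cooperator_genotypes(p, b, c, k, rule="death-birth"):
    """Weak-selection increase conditions for cooperators of each genotype.

    p, b, c are arrays of cooperator frequencies, benefits, and costs per genotype;
    returns a boolean array indicating which genotypes are expected to increase."""
    b_avg = np.dot(p, b)                      # population-average benefit
    c_avg = np.dot(p, c)                      # population-average cost
    if rule == "death-birth":
        return (b - b_avg) > k * (c - c_avg)  # Eq (15), as reconstructed
    return c < c_avg                          # Eq (16), birth-death, as reconstructed
```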

Discussion

Asymmetric games naturally separate standard evolutionary update rules into cultural and genetic classes. This distinction is important because it captures biological differences that are not always apparent in models of evolution based on symmetric games. For example, consider a model player whose offspring replaces a focal player and a model player whose strategy is imitated by a focal player. For symmetric games, processes based on these two types of updates are mathematically identical; if asymmetry is present, then the fact that one update is genetic (replacement) and the other is cultural (imitation) becomes important. Thus, asymmetric games can highlight fundamental differences in evolutionary processes that are based on distinct update rules but happen to behave similarly when the underlying game is symmetric.

In order to incorporate into evolutionary games the asymmetries commonly studied in classical game theory, our focus has been on games with asymmetric payoffs. Games with asymmetric payoffs arise naturally from different forms of interaction heterogeneity. Dependence of payoffs on the environment is a reasonable assumption when considering ecological variation [55]. Certain patches may provide resources or have drawbacks that influence a player’s success when using a particular strategy [56]. Asymmetric interactions may also be the result of heterogeneity in the sizes or strengths of players [57, 58]. Whether the source of asymmetry is the environment or the players themselves, our model effectively resolves a collection of microscopically asymmetric interactions with a macroscopically symmetric game in the limit of weak selection. Figs 1 and 2 illustrate this result for three common update rules.

Similar forms of asymmetry have been studied previously in evolutionary game theory: Szolnoki and Szabó [59] consider asymmetry appearing in the update rule that results in “attractive” and “repulsive” players in the pairwise comparison process. For games with population structures defined by two graphs (“interaction” and “dispersal” graphs), Ohtsuki et al. [60, 61] show that the evolution of cooperation can be inhibited by asymmetry arising from differences in these two graphs. On the other hand, Pacheco et al. [62] show that heterogeneous population structures can promote the evolution of cooperation by effectively transforming a collection of microscopic social dilemmas into a global coordination game. This result is reminiscent of our Theorem 1, which relates the microscopic interactions to the global behavior of a process. Such heterogeneous population structures can result in asymmetric interactions even if the underlying game is symmetric [63]. These models, although somewhat different from ours, demonstrate that asymmetry (in its many forms) has a remarkable effect on evolutionary dynamics.

Although genotypic asymmetry can always be reduced to a (larger) symmetric game under genetic update rules, this symmetric game can be of independent interest. For example, Eq (16) shows that if cooperators vary in size or strength, then certain cooperators may increase in the Donation Game even under birth-death updating. In contrast, cooperation never increases in the absence of cooperator variation [1]. Though defectors still eventually outcompete cooperators, the transient increase in cooperators suggests that other evolutionary processes with this form of asymmetry can behave in novel ways.

If both ecological and genotypic asymmetries are present, they can be handled separately: genotypic asymmetry is reduced to either (i) ecological asymmetry (if the update rule is cultural) or (ii) a symmetric game with more strategies (if the update rule is genetic). In either case, an evolutionary game with both ecological and genotypic asymmetries can be reduced to a game with ecological asymmetry only and hence Theorem 1 applies. Our framework handles asymmetry resulting from varying baseline traits due to both environment and genotype, which could be referred to as phenotypic asymmetry.

The presence of ecological or genotypic asymmetry in an evolutionary process does not necessarily depend on the selection strength or update rule; these forms of asymmetry may be incorporated into many evolutionary processes. Theorem 1, which effectively reduces a game with ecological asymmetry to a particular symmetric game, is stated for four common update rules in evolutionary game theory. Fig 3 demonstrates (using the asymmetric Snowdrift Game) that this theorem is specific to weak selection. That selection is weak is often a reasonable assumption when using evolutionary games to study populations of organisms with many traits. However, our study of the asymmetric Snowdrift Game for stronger selection strengths suggests that the behavior of asymmetric games is more complicated if selection is strong. Though more difficult to treat analytically, asymmetric games under strong selection are worthy of further investigation.

Asymmetry is omnipresent in nature, and any framework that is used to model evolution should take into account possible sources of asymmetry. We have formally introduced ecological and genotypic asymmetries into evolutionary game theory and have studied these asymmetries in the limit of weak selection. Asymmetry has a natural place in the Donation Game and the Snowdrift Game, but our results are applicable to any general n-strategy matrix game. Our treatment of asymmetry highlights important differences between models of cultural and genetic evolution that are not apparent in the traditional setting of symmetric games. Ecological and genotypic asymmetries cover a wide variety of background variation observed in biological populations, and, as such, our framework enhances the modeling capacity of evolutionary games.

Methods

For the two genetic processes (death-birth and birth-death) and the two cultural processes (imitation and pairwise comparison) we consider, we treat ecologically asymmetric games on a large, regular network using pair approximation [1, 47]. We assume here that the degree of the network, k, is at least 3. For k = 2, the network is just a cycle, and we do not treat this case here. The detailed steps of each calculation are omitted but we include the main setups to allow for reconstruction of the reported results. We begin by recalling the way in which these four processes are defined (see e.g. Ohtsuki and Nowak [36]):

  1. (DB) In the death-birth process, a player is selected uniformly at random from the population for death. A neighbor of the focal individual is then selected to reproduce with probability proportional to relative fitness, and the resulting offspring replaces the deceased player;
  2. (BD) In the birth-death process, an individual is selected from the population for reproduction with probability proportional to relative fitness, and the offspring replaces a neighbor at random;
  3. (IM) In the imitation process, an individual is chosen uniformly at random to evaluate his or her strategy. This focal individual either adopts a strategy of a neighbor (with probability proportional to that neighbor’s relative fitness) or retains his or her original strategy (with probability proportional to own relative fitness);
  4. (PC) In the pairwise comparison process, a focal individual is selected uniformly at random from the population to evaluate his or her strategy. A model individual is then chosen uniformly at random from the neighbors of the focal individual as a basis for comparison, and the focal player adopts the strategy of the model player with probability proportional to the model player’s relative fitness.

Notation and general remarks

Let 𝒮 = {A1, …, An} be the set of pure strategies available to each player and suppose that there are N players on a regular network of size N (i.e. every node is occupied). A strategy pair (Ar, As) means a choice of a player using strategy Ar who has as a neighbor a player using strategy As. Let

$$p_r := \text{the frequency of players using strategy } A_r, \qquad (17a)$$

$$p_{rs} := \text{the frequency of } (A_r, A_s) \text{ strategy pairs}, \qquad (17b)$$

$$q_{s|r} := \text{the conditional frequency of } A_s \text{ among the neighbors of a player using } A_r. \qquad (17c)$$

We will make repeated use of the following properties of these quantities:

$$\sum_{r=1}^{n} p_r = \sum_{s=1}^{n} q_{s|r} = 1, \qquad (18a)$$

$$p_s\, q_{r|s} = p_{rs} = p_{sr} = p_r\, q_{s|r}. \qquad (18b)$$

Strictly speaking, the equalities ps qr|s = prs = psr = pr qs|r need not hold in general. As a pathological example, one may consider the network with two nodes and a single undirected link between these nodes. If the player on the first node uses Ar, the player on the second node uses As, and r ≠ s, then prs = 1 but ps = 1/2, which gives qr|s = 2. However, for large random regular graphs [48], this identity holds approximately, and we will take it as given in what follows.
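
The bookkeeping in Eq (17) can be made concrete by measuring strategy, pair, and conditional frequencies directly from a network state. The sketch below uses one common convention (ordered neighbor pairs); the names and conventions are ours rather than the paper’s.

```python
import numpy as np

def pair_statistics(strategy, w, n):
    """Empirical p_r, p_rs and q_{s|r} = p_rs / p_r for a given state.

    strategy is an array of strategy indices in {0, ..., n-1}; w is the adjacency matrix."""
    N = len(strategy)
    p = np.bincount(strategy, minlength=n) / N
    p_pair = np.zeros((n, n))
    for i, j in zip(*np.nonzero(w)):              # ordered neighbor pairs
        p_pair[strategy[i], strategy[j]] += 1
    p_pair /= w.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        q = np.where(p[:, None] > 0, p_pair / p[:, None], 0.0)
    return p, p_pair, q
```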

For 𝒳 ∈ {p_r, p_rs, q_s|r}, 1 ⩽ r, s ⩽ n, let 𝔼 [Δ𝒳] denote the expected change in 𝒳 in one step of the process. A pair (Ar, i) denotes a player on vertex i using strategy Ar. Given pairs (Ar, i) and (As, j), we denote by π(As, j)(Ar, i) the expected payoff to a player at vertex j playing strategy As given that they have as a neighbor an individual playing strategy Ar at vertex i. If β ⩾ 0 is a parameter representing the intensity of selection, then payoff, π, is converted to fitness, fβ(π), via

$$f_\beta(\pi) = e^{\beta \pi}. \qquad (19)$$

When defined in this way, fitness is always positive.

The main theorem we prove is the following:

Theorem 1. In the limit of weak selection, the dynamics of the ecologically asymmetric death-birth, birth-death, imitation, and pairwise comparison processes on a large, regular network may be approximated by the dynamics of a symmetric game with the same update rule and payoff matrix $\overline{M} = \left(\overline{a}_{st}\right)_{1 \leqslant s,t \leqslant n}$, i.e.

$$\overline{M} := \frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\, M_{ij}, \qquad (20)$$

where $\overline{a}_{st} = \frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\, a^{ij}_{st}$ for each s and t.

Theorem 1 is established for each of these four update rules separately:

Death-birth updating

If an individual is playing strategy Ar at node i, As at j, and if wij ≠ 0, then (21) Suppose that an (Ar, i) individual is selected for death. The probability that (As, j) replaces this focal individual is proportional to fβ(π(As, j)(Ar, i)). For each i, let (i1, …, ik) be an enumeration of the indices j with wij ≠ 0 (say, in increasing order) and let sℓ be the strategy used by the player at vertex iℓ. If (Ar, i) is chosen for death, then the probability that it is replaced by (As, i) is (22) The Taylor expansion of this term for small β is (23) This expansion will be used frequently in the displays that follow.

Approximation of the expected change in strategy frequencies.

Let δx, y be the Kronecker delta (defined to be 1 if x = y and 0 otherwise). The probability of choosing the player on vertex i for death is 1/N. The chance that this player is using strategy Ah is ph. Suppose that (Asi1, …, Asik) is a k-tuple of strategies. If the focal player at vertex i uses strategy Ah, then the probability that the player on vertex iℓ uses strategy Asiℓ for each ℓ = 1, …, k is qsi1|h ⋯ qsik|h. Thus, (24) for each strategy, Ar. The Taylor expansion to first order yields (25) where (26a) (26b) (26c) (26d)

Approximation of the expected change in pair frequencies.

If r ≠ s, then (27) On the other hand, (28) The zeroth-order Taylor expansion yields (29) if r ≠ s, and (30) Therefore, 𝔼 [Δpr] = O(β) (by Eq (25)) and 𝔼 [Δprs] = O(1) (by Eqs (29) and (30)) for each r and s, which results in a separation of timescales between the strategy frequencies and the pair frequencies. In particular, the pair frequencies will reach their equilibrium much more quickly than the strategy frequencies will, so we can examine the expression for 𝔼 [Δpr] under the assumption that the pair frequencies have reached their equilibrium [1].

Weak-selection dynamics.

Assuming that each update takes place in one unit of time, we can approximate the dynamics by the deterministic systems $\dot{p}_r = \mathbb{E}\left[\Delta p_r\right]$ and $\dot{p}_{rs} = \mathbb{E}\left[\Delta p_{rs}\right]$ for each r and s [1, 36]. Since β is small, we see that the latter system will reach equilibrium much more quickly than the former. When the pair frequencies have reached equilibrium (i.e. 𝔼 [Δprs] = 0), we have (31) Ohtsuki and Nowak [36] show that this equation implies that (32) Assuming the system has reached this local equilibrium, we then have (33) as long as β is small. Therefore, if we choose an appropriate time scale and set (34a) (34b) (34c) then we recover the replicator equation of Ohtsuki and Nowak [36]. It follows that the dynamics depend on $\overline{M}$, proving Theorem 1 for death-birth updating.

Birth-death updating

In the birth-death process, an individual is selected for reproduction with probability proportional to relative fitness. The offspring of the selected player then replaces a random neighbor. Rather than trying to approximate the total fitness of the population, we will simply denote this value by fpop. Since this value is positive, it does not influence the sign of the expectation values and as such we will largely ignore it. We have (35)

The local equilibrium conditions for birth-death updating turn out to be the same as those for death-birth updating (Eq (32)). These local equilibrium conditions do not depend on selection when β is close to 0, so they are essentially those of a neutral process in which at most one strategy is updated at each time step. Therefore, it is perhaps not surprising that these conditions are the same for different processes based on one strategy update in each time step.

In the following expressions, by x ∝ y we mean that x is proportional to y with a positive constant of proportionality. Letting β → 0 and using the local equilibrium conditions (as well as the same separation-of-timescales argument used for death-birth updating), we find that (36) Just as we saw with the death-birth process, after choosing an appropriate time scale and letting (37a) (37b) we again obtain the replicator equation with payoff matrix $\overline{M}$, proving Theorem 1 for birth-death updating.

Imitation updating

In the imitation process, an individual is selected uniformly at random from the population to evaluate his or her strategy. The chosen player then compares his or her fitness with the fitness of each neighbor and either adopts a new strategy or retains his or her current strategy (with probability proportional to relative fitness). Suppose that an individual at vertex i, playing Ar, is selected to evaluate his or her strategy. If s ≠ r, then the probability that he or she adopts strategy As is (38) and the probability that his or her strategy remains unchanged is (39) We let π(As, j)(Ar, i) be the same as it was for death-birth updating. For small β, (40)

Approximation of the expected change in strategy frequencies.

For r ∈ {1, …, n}, (41) The local equilibrium conditions are exactly the same as they were for the death-birth process. Assuming that the system has reached this local equilibrium, the separation-of-timescales argument used for death-birth updating gives (42) With an appropriate choice of time scale and local-competition matrix (analogous to Eqs (34) and (37)), we have (43) which establishes Theorem 1 for imitation updating.

Pairwise comparison updating

In the pairwise comparison process, a focal individual is selected uniformly at random from the population. A model individual is then chosen uniformly at random from the neighbors of the focal individual. If πf and πm denote the payoffs to the focal and model individuals, respectively, then the focal player will adopt the strategy of the model player with probability (44) where β ⩾ 0 is a real parameter representing the intensity of selection. In addition to the expected payoff π(As, j)(Ar, i) (defined in the same way as for death-birth updating), we let (45) if (As, i) has as a neighborhood (Asi1, …, Asik). With this notation in place, we have (46) As β → 0, we have (47) Consequently, in the limit of weak selection, (48) The local equilibrium conditions are exactly the same as they were for the other processes, but in this case they are not needed to arrive at this last expression for 𝔼 [Δpr]. With an appropriate choice of time scale and local-competition matrix (as for the other update rules), we obtain the replicator equation once more. It follows that the dynamics of the pairwise comparison process depend on $\overline{M}$, which completes the proof of Theorem 1.

Finally, we show that the dynamics of each process are independent of the particular network configuration if the asymmetric game is spatially additive:

Definition 1. If $a^{ij}_{rs} = \phi^{i}_{rs} + \psi^{j}_{rs}$ for each r and s, for some quantities $\phi^{i}_{rs}$ (depending on the strategies and on location i only) and $\psi^{j}_{rs}$ (depending on the strategies and on location j only), then Mij is called a spatially additive payoff matrix. If Mij is spatially additive for each i and j, then the game is said to be spatially additive.

Corollary 1. If Mij is spatially additive for each i and j, then the expected change in the frequency of strategy Ar, 𝔼 [Δpr], is independent of $(w_{ij})_{1 \leqslant i,j \leqslant N}$ for each r. In particular, the dynamics of the process do not depend on the particular network configuration.

Proof. If $a^{ij}_{rs} = \phi^{i}_{rs} + \psi^{j}_{rs}$ for each r, s, i, j, then

$$\overline{a}_{rs} = \frac{1}{Nk}\sum_{i,j=1}^{N} w_{ij}\left(\phi^{i}_{rs} + \psi^{j}_{rs}\right) = \frac{1}{N}\sum_{i=1}^{N} \phi^{i}_{rs} + \frac{1}{N}\sum_{j=1}^{N} \psi^{j}_{rs}, \qquad (49)$$

which is independent of $(w_{ij})_{1 \leqslant i,j \leqslant N}$ (since the network is regular of degree k, each vertex appears in exactly k links). The corollary then follows directly from Theorem 1.

Computer simulations

In each simulation, a random k-regular network (with k = 3) of N = 500 vertices is generated. The selection intensity is β = 0.01 for Figs 1 and 2, β = 0.1 for Fig 3(A), and β = 0.5 for Fig 3(B). The figures are generated based on data collected from a number of cycles: In each cycle, the network is given an initial configuration of cooperators by first choosing a density, d, uniformly at random from the interval [0, 1], and then placing a cooperator (resp. defector) at each vertex with probability d (resp. 1 − d). The update rule is applied until either C or D fixates. (The absorption time depends on a number of factors including the game, selection strength, and initial configuration of the population.) Let pC(t) denote the frequency of cooperators at time t; pC(0) is just the initial frequency of cooperators. The frequency pC(t+1) is obtained from pC(t) by adding to it the change in the frequency of cooperators over the next N (= 500) updates. For each t, the quantity pC(t + 1) − pC(t) is associated with pC(t). Once pC ∈ {0,1}, a new initial configuration of cooperators is chosen and the process is repeated. After each possible value of pC has at least 10^5 associated data points (changes in cooperator frequency), these changes are averaged, and the resulting average is paired with the corresponding value of pC. These pairs are then plotted to obtain Figs 1, 2, and 3. The results from pair approximation apply to the expected change over one update, but we can easily get a predicted result over N updates (i.e. one Monte Carlo step) by scaling the expressions for 𝔼[ΔpC] by a factor of N.
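
The cycle described above can be summarized in a Python-like sketch. Here `update_step` stands for whichever update rule is being simulated, and all names are hypothetical; this is a paraphrase of the stated protocol, not the authors’ actual code.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng()

def one_cycle(update_step, N=500, k=3):
    """Run one simulation cycle and return (p_C(t), p_C(t+1) - p_C(t)) pairs.

    update_step(strategy, w) performs a single strategy update in place;
    strategy entries are 0 for cooperators and 1 for defectors."""
    w = nx.to_numpy_array(nx.random_regular_graph(k, N))
    d = rng.random()                                  # initial cooperator density
    strategy = (rng.random(N) > d).astype(int)        # cooperator with probability d
    data = []
    while 0 < np.count_nonzero(strategy == 0) < N:    # until C or D fixates
        p_before = 1.0 - strategy.mean()
        for _ in range(N):                            # one Monte Carlo step = N updates
            update_step(strategy, w)
        data.append((p_before, (1.0 - strategy.mean()) - p_before))
    return data
```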

Small deviations from the expected results are seen in each of the figures, and these deviations are due to the effects of a finite selection parameter (β) and the finiteness of the set of possible values of pC (pC is a multiple of 1/N). As an example of how these properties can give rise to small deviations, consider the Donation Game under imitation updating in Fig 1(A). Eq (42) predicts that 𝔼[ΔpC] is always positive, yet we observe in Fig 1(A) that this change becomes negative as pC → 0,1. If pC = (N − 1)/N and β > 0, then the only defector in the population has a higher payoff than all of the cooperators. Let fj denote the fitness of the player at location j. Thus, with just a single defector (at location i) in a population of cooperators, we have fi ⩾ fj for each j ≠ i, with equality if and only if β = 0. The expected change in the frequency of cooperators in the next time step is (50) The first (resp. second) summation runs over all of the neighbors of i (resp. j). For each j ≠ i, (51a) (51b) both with equality if and only if β = 0. Therefore, we see that (52) with equality if and only if β = 0. The same argument explains the negative average changes as pC → 0. Since pC can only take on finitely many values for a given population size, similar arguments explain the small discrepancies between the actual and expected results for intermediate values of pC (see Fig 1).

Acknowledgments

A. M. thanks Farhan Abedin and György Szabó for helpful discussions.

Author Contributions

Conceived and designed the experiments: AM CH. Performed the experiments: AM. Analyzed the data: AM CH. Contributed reagents/materials/analysis tools: AM. Wrote the paper: AM CH.

References

  1. Ohtsuki H., Hauert C., Lieberman E., and Nowak M. A.. A simple rule for the evolution of cooperation on graphs and social networks. Nature, 441(7092):502–505, May 2006. pmid:16724065
  2. Nowak M. A.. Five rules for the evolution of cooperation. Science, 314(5805):1560–1563, Dec 2006a.
  3. Taylor P. D., Day T., and Wild G.. Evolution of cooperation in a finite homogeneous graph. Nature, 447(7143):469–472, May 2007. pmid:17522682
  4. Maynard Smith J.. Evolution and the Theory of Games. Cambridge University Press, 1982.
  5. Hofbauer J. and Sigmund K.. Evolutionary Games and Population Dynamics. Cambridge University Press, 1998.
  6. Dawes R. M.. Social dilemmas. Annual Review of Psychology, 31(1):169–193, Jan 1980.
  7. Hauert C., Michor F., Nowak M. A., and Doebeli M.. Synergy and discounting of cooperation in social dilemmas. Journal of Theoretical Biology, 239(2):195–202, Mar 2006. pmid:16242728
  8. Hauert C. and Doebeli M.. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature, 428(6983):643–646, Apr 2004. pmid:15074318
  9. Doebeli M. and Hauert C.. Models of cooperation based on the prisoner’s dilemma and the snowdrift game. Ecology Letters, 8(7):748–766, Jul 2005.
  10. Voelkl B.. The ‘hawk-dove’ game and the speed of the evolutionary process in small heterogeneous populations. Games, 1(2):103–116, May 2010.
  11. Dawkins R.. The Selfish Gene. Oxford University Press, 1976.
  12. Schuster P. and Sigmund K.. Coyness, philandering and stable strategies. Animal Behaviour, 29(1):186–192, Feb 1981.
  13. Maynard Smith J. and Hofbauer J.. The “battle of the sexes”: A genetic model with limit cycle behavior. Theoretical Population Biology, 32(1):1–14, Aug 1987.
  14. Hofbauer J.. Evolutionary dynamics for bimatrix games: A Hamiltonian system? Journal of Mathematical Biology, 34(5–6):675–688, May 1996. pmid:8691089
  15. Dugatkin L. A.. Winner and loser effects and the structure of dominance hierarchies. Behavioral Ecology, 8(6):583–587, 1997.
  16. Wright W. G. and Shanks A. L.. Previous experience determines territorial behavior in an archaeogastropod limpet. Journal of Experimental Marine Biology and Ecology, 166(2):217–229, Mar 1993.
  17. Shanks A. L.. Previous agonistic experience determines both foraging behavior and territoriality in the limpet Lottia gigantea (Sowerby). Behavioral Ecology, 13(4):467–471, Jul 2002.
  18. Selten R.. A note on evolutionarily stable strategies in asymmetric animal conflicts. Journal of Theoretical Biology, 84(1):93–101, May 1980. pmid:7412323
  19. Hammerstein P.. The role of asymmetries in animal contests. Animal Behaviour, 29(1):193–205, Feb 1981.
  20. Ohtsuki H.. Stochastic evolutionary dynamics of bimatrix games. Journal of Theoretical Biology, 264(1):136–142, May 2010. pmid:20096289
  21. Marshall J. A. R.. The donation game with roles played between relatives. Journal of Theoretical Biology, 260(3):386–391, Oct 2009. pmid:19616012
  22. Weiner J.. Asymmetric competition in plant populations. Trends in Ecology & Evolution, 5(11):360–364, Nov 1990.
  23. Freckleton R. P. and Watkinson A. R.. Asymmetric competition between plant species. Functional Ecology, 15(5):615–623, Oct 2001.
  24. Doebeli M. and Ispolatov I.. Symmetric competition as a general model for single-species adaptive dynamics. Journal of Mathematical Biology, 67(2):169–184, May 2012. pmid:22610397
  25. Sigmund K.. The Calculus of Selfishness. Princeton University Press, 2010.
  26. Bergman M., Olofsson M., and Wiklund C.. Contest outcome in a territorial butterfly: the role of motivation. Proceedings of the Royal Society B: Biological Sciences, 277(1696):3027–3033, May 2010. pmid:20462910
  27. Hofbauer J. and Sigmund K.. Evolutionary game dynamics. Bulletin of the American Mathematical Society, 40(04):479–520, Jul 2003.
  28. Fudenberg D. and Tirole J.. Game Theory. The MIT Press, 1991.
  29. Magurran A. E. and Nowak M. A.. Another battle of the sexes: The consequences of sexual asymmetry in mating costs and predation risk in the guppy, Poecilia reticulata. Proceedings of the Royal Society B: Biological Sciences, 246(1315):31–38, Oct 1991. pmid:1684666
  30. Mesterton-Gibbons M.. Ecotypic variation in the asymmetric hawk-dove game: When is bourgeois an evolutionarily stable strategy? Evolutionary Ecology, 6(3):198–222, May 1992.
  31. Dugatkin L. A.. Game Theory and Animal Behavior. Oxford University Press, 2000.
  32. Taylor P. D. and Jonker L. B.. Evolutionary stable strategies and game dynamics. Mathematical Biosciences, 40(1–2):145–156, Jul 1978.
  33. Nowak M. A., Sasaki A., Taylor C., and Fudenberg D.. Emergence of cooperation and evolutionary stability in finite populations. Nature, 428(6983):646–650, Apr 2004. pmid:15071593
  34. Taylor C., Fudenberg D., Sasaki A., and Nowak M. A.. Evolutionary game dynamics in finite populations. Bulletin of Mathematical Biology, 66(6):1621–1644, Nov 2004. pmid:15522348
  35. Lieberman E., Hauert C., and Nowak M. A.. Evolutionary dynamics on graphs. Nature, 433(7023):312–316, Jan 2005. pmid:15662424
  36. Ohtsuki H. and Nowak M. A.. The replicator equation on graphs. Journal of Theoretical Biology, 243(1):86–97, Nov 2006. pmid:16860343
  37. Szabó G. and Fáth G.. Evolutionary games on graphs. Physics Reports, 446(4–6):97–216, Jul 2007.
  38. Débarre F., Hauert C., and Doebeli M.. Social evolution in structured populations. Nature Communications, 5, Mar 2014. pmid:24598979
  39. Nowak M. A.. Evolutionary Dynamics: Exploring the Equations of Life. Belknap Press, 2006b.
  40. Moran P. A. P.. Random processes in genetics. Mathematical Proceedings of the Cambridge Philosophical Society, 54(01):60, Jan 1958.
  41. Imhof L. A. and Nowak M. A.. Evolutionary game dynamics in a Wright-Fisher process. Journal of Mathematical Biology, 52(5):667–681, Feb 2006. pmid:16463183
  42. Szabó G. and Tőke C.. Evolutionary prisoner’s dilemma game on a square lattice. Physical Review E, 58(1):69–73, Jul 1998.
  43. Traulsen A., Pacheco J. M., and Nowak M. A.. Pairwise comparison and selection temperature in evolutionary game dynamics. Journal of Theoretical Biology, 246(3):522–529, Jun 2007. pmid:17292423
  44. Ellison G.. Learning, local interaction, and coordination. Econometrica, 61(5):1047, Sep 1993.
  45. Mahner M. and Kary M.. What exactly are genomes, genotypes and phenotypes? And what about phenomes? Journal of Theoretical Biology, 186(1):55–63, May 1997. pmid:9176637
  46. Baye T. M., Abebe T., and Wilke R. A.. Genotype–environment interactions and their translational implications. Personalized Medicine, 8(1):59–70, Jan 2011. pmid:21660115
  47. Matsuda H., Ogita N., Sasaki A., and Sato K.. Statistical mechanics of population: The lattice Lotka-Volterra model. Progress of Theoretical Physics, 88(6):1035–1049, Dec 1992.
  48. Bollobás B.. Random Graphs. Cambridge University Press, 2001.
  49. Vukov J., Szabó G., and Szolnoki A.. Cooperation in the noisy case: Prisoner’s dilemma game on two types of regular random graphs. Physical Review E, 73(6), Jun 2006.
  50. Wu B., Altrock P. M., Wang L., and Traulsen A.. Universality of weak selection. Physical Review E, 82(4), Oct 2010.
  51. Tarnita C. E., Wage N., and Nowak M. A.. Multiple strategies in structured populations. Proceedings of the National Academy of Sciences, 108(6):2334–2337, Jan 2011.
  52. Wu B., García J., Hauert C., and Traulsen A.. Extrapolating weak selection in evolutionary games. PLoS Computational Biology, 9(12):e1003381, Dec 2013. pmid:24339769
  53. Nowak M. and Sigmund K.. The evolution of stochastic strategies in the prisoner’s dilemma. Acta Applicandae Mathematicae, 20(3):247–265, Sep 1990.
  54. Du W.-B., Cao X.-B., Hu M.-B., and Wang W.-X.. Asymmetric cost in snowdrift game on scale-free networks. Europhysics Letters, 87(6):60004, Sep 2009.
  55. Maciejewski W. and Puleo G. J.. Environmental evolutionary graph theory. Journal of Theoretical Biology, 360:117–128, Nov 2014. pmid:25016047
  56. Kun Á. and Dieckmann U.. Resource heterogeneity can facilitate cooperation. Nature Communications, 4, Oct 2013. pmid:24088665
  57. Maynard Smith J. and Parker G. A.. The logic of asymmetric contests. Animal Behaviour, 24(1):159–175, Feb 1976.
  58. Hauser O. P., Traulsen A., and Nowak M. A.. Heterogeneity in background fitness acts as a suppressor of selection. Journal of Theoretical Biology, 343:178–185, Feb 2014. pmid:24211522
  59. Szolnoki A. and Szabó G.. Cooperation enhanced by inhomogeneous activity of teaching for evolutionary prisoner’s dilemma games. Europhysics Letters, 77(3), Jan 2007.
  60. Ohtsuki H., Nowak M. A., and Pacheco J. M.. Breaking the symmetry between interaction and replacement in evolutionary dynamics on graphs. Physical Review Letters, 98(10), Mar 2007a.
  61. Ohtsuki H., Pacheco J. M., and Nowak M. A.. Evolutionary graph theory: Breaking the symmetry between interaction and replacement. Journal of Theoretical Biology, 246(4):681–694, Jun 2007b.
  62. Pacheco J. M., Pinheiro F. L., and Santos F. C.. Population structure induces a symmetry breaking favoring the emergence of cooperation. PLoS Computational Biology, 5(12):e1000596, Dec 2009. pmid:20011116
  63. Maciejewski W., Fu F., and Hauert C.. Evolutionary game dynamics in populations with heterogenous structures. PLoS Computational Biology, 10(4):e1003567, Apr 2014. pmid:24762474