
Evolutionary Game Dynamics in Populations with Heterogenous Structures

  • Wes Maciejewski ,

    wes@math.ubc.ca

    Affiliation Department of Mathematics, The University of British Columbia, Vancouver, British Columbia, Canada

  • Feng Fu,

    Affiliation Theoretical Biology, Institute of Integrative Biology, ETH Zürich, Zürich, Switzerland

  • Christoph Hauert

    Affiliation Department of Mathematics, The University of British Columbia, Vancouver, British Columbia, Canada

Abstract

Evolutionary graph theory is a well established framework for modelling the evolution of social behaviours in structured populations. An emerging consensus in this field is that graphs that exhibit heterogeneity in the number of connections between individuals are more conducive to the spread of cooperative behaviours. In this article we show that such a conclusion largely depends on the individual-level interactions that take place. In particular, averaging payoffs garnered through game interactions rather than accumulating the payoffs can altogether remove the cooperative advantage of heterogeneous graphs, while such a difference does not affect the outcome on homogeneous structures. In addition, the rate at which game interactions occur can alter the evolutionary outcome. Fewer interactions allow heterogeneous graphs to support more cooperation than homogeneous graphs, whereas higher rates of interaction make homogeneous and heterogeneous graphs virtually indistinguishable in their ability to support cooperation. Most importantly, we show that common measures of evolutionary advantage used in homogeneous populations, such as a comparison of the fixation probability of a rare mutant to that of the resident type, are no longer valid in heterogeneous populations. Heterogeneity causes a bias in where mutations occur in the population, which affects the mutant's fixation probability. We derive the appropriate measures for heterogeneous populations that account for this bias.

Author Summary

Understanding the evolution of cooperation is a persistent challenge to evolutionary theorists. A contemporary take on this subject is to model populations with interactions structured as closely as possible to actual social networks. These networks are heterogeneous in the number and type of contacts each member has. Our paper demonstrates that the fate of cooperation in such heterogeneous populations critically depends on the rate at which interactions occur and on how interactions translate into the fitnesses of the strategies. We also develop theory that allows for an evolutionary analysis in heterogeneous populations. This includes deriving appropriate criteria for evolutionary advantage.

Introduction

Population structure has long been known to affect the outcome of an evolutionary process [1]–[4]. Evolutionary graph theory has emerged as a convenient framework for modelling structured populations [4], [5]. Individuals reside on the vertices of a graph and the edges define the interaction neighbourhoods.

A variety of processes have been investigated on a number of graph classes. However, few analytical results exist in general, since an arbitrary graph may not exhibit sufficient symmetry to aid calculations. The most general class of graphs for which analytical results are known is the class of homogeneous (vertex-transitive) graphs. Such a graph has the property that for any two vertices u and v there exists a structure-preserving transformation of the graph that maps u to v. It is worth noting that not all regular graphs are homogeneous; an extreme example is the Frucht graph [6], which is regular of degree three yet has only the trivial symmetry. Intuitively, this class consists of graphs that "look" the same from any vertex. The amount of symmetry in such graphs has allowed for a complete set of analytical results for restricted types of evolutionary processes and weak selection [7]–[9]. Despite the tractability of calculations on homogeneous graphs, natural population structures are seldom homogeneous. Therefore it is important to understand the effects of heterogeneous population structures on evolutionary processes [4], [8], [10] and, in particular, on the evolution of cooperation.

In the simplest case there are two strategic types: cooperators, who provide a benefit b to their interaction partner at some cost c to themselves (b > c > 0), whereas defectors provide neither benefits nor incur costs. This basic setup is known as an instance of the prisoner's dilemma and reflects a conflict of interest: mutual cooperation yields the payoff b − c > 0, and hence both parties prefer this outcome over mutual defection, which yields a payoff of zero. However, at the same time each party is tempted to defect in order to avoid the costs of cooperation. The temptation of increased benefits for unilateral defection thwarts cooperation – to the detriment of all. This conflict of interest characterizes social dilemmas [11], [12].

More general kinds of interactions between two individuals and two strategic types, A and B, can be represented in the form of a payoff matrix as in Table 1. The payoffs garnered from these game interactions affect an individual's expected number of offspring by altering their propensity to produce offspring (their fitness) or their survival. The expected number of offspring is determined by the fitness of the individuals and some population updating process, which will be made precise in the next section. The offspring produced during a population update have the potential to change the strategy composition of the population. An increase in the abundance of one strategy over a sufficiently long time scale indicates that this strategy is favoured by evolution.

Table 1. The payoff matrix for a general two-strategy game.

https://doi.org/10.1371/journal.pcbi.1003567.t001

It can be shown, for replicator dynamics, for example [13], [14], that any two-strategy payoff matrix can be reduced to the matrix in Table 1 without loss of generality because adding a constant term to the payoff matrix does not affect the dynamics and multiplying the payoffs by a positive factor merely rescales time. Therefore we can always shift the payoffs such that B–B encounters return a payoff of zero and scale all other payoffs such that A–A encounters yield a payoff of one. In the Accumulated Versus Averaged Payoffs section, we show that the generality of the matrix in Table 1 extends to other forms of stochastic dynamics in finite populations based on the frequency dependent Moran process [15].

The (additive) prisoner's dilemma introduced before corresponds to a special case of Table 1 in which cooperation plays the role of strategy A and defection that of strategy B. Rescaling the payoff matrix in Table 1 by the factor b − c yields the traditional form, Table 2. More generally, the prisoner's dilemma requires that unilateral defection pays more than mutual cooperation and that unilateral cooperation pays less than mutual defection, resulting in the characteristic conflict of interest outlined above. The special case of the additive prisoner's dilemma, Table 2, effectively reduces the game to a single parameter, the cost-to-benefit ratio c/b (with b > c > 0). Moreover, it has the special property that when an individual changes its strategy, the payoff gain (or loss) is the same regardless of the opponent's strategy – the so-called equal-gains-from-switching property [16].

Table 2. The payoff matrix for an additive prisoner's dilemma game.

https://doi.org/10.1371/journal.pcbi.1003567.t002

In the absence of structure, cooperators dwindle and disappear in the prisoner's dilemma. In contrast, structured populations enable cooperators to form clusters, which ensures that cooperators more frequently interact with other cooperators than they would with random interactions [17], [18]. Such assortment between cooperators is essential for the survival of cooperation [19].

In heterogeneous graphs not all vertices have the same number of connections and hence the fitnesses of individuals may be based on different numbers of interactions. Because of this, some vertices are more advantageous to occupy than others. However, which sites are favourable depends on the type of population dynamics. In particular, for the Moran process in structured populations it is important to distinguish between birth-death and death-birth updating [10], [20], [21], i.e. whether first an individual is randomly selected for reproduction with a probability proportional to its fitness and its clonal offspring then replaces a (uniformly) randomly selected neighbour – or whether first an individual is selected uniformly at random to die and the vacant site is then repopulated with the offspring of a neighbouring individual chosen with a probability proportional to its fitness. Even in homogenous populations the sequence of events is of crucial importance, and its effect becomes even more pronounced in heterogenous structures [10], [20].

In order to illustrate that the population dynamics may bestow an advantage on individuals occupying certain sites in a heterogeneous population, consider neutral evolution, where game payoffs do not affect the evolutionary process and all individuals have the same fitness. For birth-death updating every individual is chosen to reproduce with the same probability but neighbours of individuals with few connections are replaced more frequently. Hence vertices with fewer neighbours are more favourable than those with many connections. Conversely, for death-birth updating every individual has the same expected life time but highly-connected individuals, or hubs, more frequently get a chance to produce offspring, since one of their many neighbours dies, and are thus more favourable than vertices with few neighbours [21]–[23]. A simple example of this is a 3-line graph: one central vertex connected to two end vertices. In the birth-death process, the central vertex is replaced with probability 2/3, while either end vertex is replaced with probability 1/6, whereas in the death-birth process, the central vertex replaces either end vertex with probability 1/3 and either end replaces the centre with probability 1/6 [21]. The upshot is that, even though the fitness of all individuals is the same, the effective number of offspring produced depends on the dynamics as well as on an individual's location in the population.

The intrinsic advantage of some vertices over others can be further enhanced through game interactions leading to differences in fitness that depend on an individual's strategy as well as its position on the graph. For example, a cooperator occupying a favourable vertex can more easily establish a cluster of cooperators, which creates a positive feedback through mutual increases in fitness. Conversely, a favourable vertex also supports the formation of a cluster of defectors but this results in a negative feedback and lowers the fitness of the defector in the favourable vertex. The fact that heterogeneity can promote cooperation was first observed for the prisoner's dilemma and snowdrift games [24], [25] and has subsequently been confirmed for public goods games [26], [27]. However, the detailed effects not only crucially depend on the dynamics but also on how fitnesses are determined. For example, heterogenous population structures favour cooperation if payoffs from game interactions are accumulated but that advantage disappears if payoffs are averaged [28]–[30].

The effects of population structure on the outcome of evolutionary games are sensitive to a number of factors: the population dynamics [10], [20], [31], the translation of payoffs into fitness [28], [30], [32]–[35], the diversity of players [27], [34], [36], and the type of game played – for example, spatial structure tends to support cooperation in the prisoner's dilemma but, conversely, in the snowdrift game spatial structure may be detrimental [37]. Macroscopic features of the evolutionary process on the level of the population, such as the frequency and distribution of cooperators, are determined by microscopic processes on the level of individuals. In the current article, we discuss some of these microscopic processes, such as averaging and accumulating payoffs, and the rate at which interactions take place, and illustrate how they affect the evolutionary outcome. Crucially, we also show that the conditions for evolutionary advantage commonly found in the literature are not applicable to evolution in finite, heterogeneous populations. We modify these conditions and develop a general framework to determine evolutionary advantage in finite, heterogeneous populations.

The manuscript is organized as follows. The sections "Accumulated Versus Averaged Payoffs" and "Criteria for Evolutionary Success" provide a critical synthesis of the existing literature concerning evolution on heterogeneous graphs. In these sections we extend existing results to general games and focus on an imitation process. We also discuss the inapplicability of approaches used in homogeneous populations and present our novel conditions for evolutionary success in heterogeneous populations. Interspersed in these sections are new observations and results that aid in establishing a consistent framework on which we base further novel results presented in the section "Stochastic Interactions and Updates".

Results

Accumulated Versus Averaged Payoffs

In heterogenous population structures individuals naturally engage in different numbers of interactions. This renders comparisons of the performance of individuals more challenging. One natural approach is to simply accumulate the game payoffs. This clearly puts hubs with many neighbours in a strong position, since scoring even a small payoff many times may still exceed a few large payoffs. To avoid this bias in favour of hubs, game payoffs can instead be averaged. Interestingly, these two approaches not only play a decisive role for the evolutionary outcome but also entail important biological implications. In this section we extend previous work on payoff accounting [29] to general games and provide a thorough discussion of why different payoff accounting schemes can result in markedly different evolutionary outcomes.

Consider two different ways to translate the payoffs π_i garnered by an individual i into its fitness f_i: f_i = exp(w π_i) for accumulated payoffs (1a) and f_i = exp(w π_i / n_i) for averaged payoffs (1b), where w denotes the strength of selection and n_i is the number of interactions experienced by i. The limit w → 0 recovers the neutral process, where selection does not act. Note that the payoff matrix in Table 1 can still be used without loss of generality because adding a constant to the payoffs merely changes the (arbitrary) baseline fitness and multiplying the payoffs by a positive factor is identical to simply rescaling the selection strength.

The exponential form of fitness in the above equations is mathematically convenient since it guarantees that the fitness is always positive, irrespective of the strength of selection and the payoff values. It is worth noting that if the strength of selection is weak, that is, w π_i ≪ 1, then f_i ≈ 1 + w π_i (2a) and f_i ≈ 1 + w π_i / n_i (2b), which represents another common form for fitness found in the literature [8].
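For concreteness, the two payoff-to-fitness mappings and their weak-selection approximations can be written in a few lines of Python. This is a minimal sketch with our own function names (the symbol w for the selection strength follows the notation above), not code from the original study.

```python
import math

def fitness_accumulated(payoff_total, w):
    """Exponential payoff-to-fitness mapping based on the accumulated payoff."""
    return math.exp(w * payoff_total)

def fitness_averaged(payoff_total, n_interactions, w):
    """Exponential mapping based on the payoff averaged over the number of interactions."""
    return math.exp(w * payoff_total / n_interactions)

def fitness_weak_selection(payoff, w):
    """First-order (linear) approximation, valid when w * payoff << 1."""
    return 1.0 + w * payoff

if __name__ == "__main__":
    w = 0.01        # selection strength (assumed value)
    total = 3.5     # accumulated payoff from n = 5 interactions (assumed values)
    n = 5
    print(fitness_accumulated(total, w))      # uses the sum of payoffs
    print(fitness_averaged(total, n, w))      # uses the per-interaction average
    print(fitness_weak_selection(total, w))   # linear approximation
```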

Homogenous populations.

In the past, details of the payoff accounting have received limited attention, or the two approaches have been used interchangeably, because they yield essentially the same results for traditional models of spatial games, which focus on lattice populations [4], [38] or, more generally, on homogenous populations [8], [10], [39]. In fact, the difference in payoff accounting reduces to a change in the selection strength because in homogenous populations each individual has the same degree k (number of neighbours) and hence, on average, the same number of interactions per unit time. If each individual interacts with all its neighbours then n_i = k for all i. Thus, the only difference is that the selection strength for accumulated payoffs is k times as strong as for averaged payoffs.

Therefore, in homogenous populations all individuals engage in the same number of interactions per unit time and consequently accumulating or averaging payoffs merely affects the strength of selection. Naturally, the converse question arises – are uniform interaction rates restricted to homogenous graphs? Or, more generally, which class of graphs supports uniform interaction rates?

To answer this question, let us consider an arbitrary graph with adjacency matrix W, where w_ij indicates the weight, or strength, of the (directed) edge from vertex i to vertex j; w_ij > 0 if vertex i is connected to j and w_ij = 0 if it is not. For example, a natural choice for the edge weights on undirected graphs is w_ij = 1/k_i, where k_i is the degree of vertex i. That is, all edges leaving vertex i have the same weight and hence the outgoing weights sum to one for every vertex.

An individual on vertex i is selected to interact with vertex j with a probability proportional to w_ij. In this case we say vertex i has initiated the interaction. Interactions with self are excluded by requiring w_ii = 0. If there are M interactions per unit time, then the average number of interactions that vertex i engages in is given by

n_i = M (∑_j w_ij + ∑_j w_ji) / ∑_{k,l} w_kl,   (3)

where the fraction indicates the probability that vertex i participates in one particular interaction, either by initiating it (first sum in the numerator) or because it was initiated by a neighbour of i (second sum in the numerator). On average each individual engages in 2M/N interactions, where N is the population size; the factor 2 enters because each interaction affects two individuals. Therefore, a graph structure results in uniform interaction rates if and only if

∑_j w_ij + ∑_j w_ji = (2/N) ∑_{k,l} w_kl   (4)

holds for every vertex i or, equivalently, if the sum of outgoing and incoming edge weights is the same positive constant for all vertices.

If the sum of the weights of all edges leaving vertex i is the same for all i, then Eq. (4) requires that the sum of the weights of all incoming edges be the same for all i as well in order to ensure uniform interaction rates. The class of graphs for which the incoming and outgoing weight sums are equal at every vertex are called circulations [5] and, in the special case of row sums equal to one considered above, the adjacency matrix is doubly stochastic such that each row and each column sums to one. A more generic representative of the broad class of circulation graphs is shown in Figure 1, but this class does not include heterogenous graphs such as scale-free networks.
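The circulation condition is easy to check directly from a weighted adjacency matrix: uniform interaction rates require that, for every vertex, the total weight of outgoing edges matches the total weight of incoming edges. The sketch below is our own illustration (function name and examples are assumptions, not part of the original study).

```python
import numpy as np

def is_circulation(W, tol=1e-9):
    """Return True if, for every vertex, the total outgoing edge weight
    equals the total incoming edge weight (the circulation condition)."""
    W = np.asarray(W, dtype=float)
    out_weight = W.sum(axis=1)   # row sums: weights of edges leaving each vertex
    in_weight = W.sum(axis=0)    # column sums: weights of edges entering each vertex
    return np.allclose(out_weight, in_weight, atol=tol)

# A directed 3-cycle is a circulation; a star with all edges pointing to the hub is not.
cycle = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]
star_in = [[0, 0, 0],
           [1, 0, 0],
           [1, 0, 0]]
print(is_circulation(cycle))    # True
print(is_circulation(star_in))  # False
```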

Figure 1. A representative example of the broad class of circulation graphs.

Note that the weights of the edges entering any vertex, as well as those leaving it, all sum to the same value.

https://doi.org/10.1371/journal.pcbi.1003567.g001

In order to illustrate that the number of interactions experienced by an individual depends on the vertex on which it resides, let us consider a mean-field approximation for sufficiently large networks, based on the degree distribution and degree–degree correlations [40]. Specifically, P(k) denotes the probability that a randomly chosen vertex has degree k, and P(k′|k) denotes the conditional probability that a vertex with degree k is connected to a vertex with degree k′. With this notation the connectivity between a vertex of degree k and another vertex of degree k′ can be expressed in terms of P(k′|k); averaging this quantity over all vertices having degree k yields the mean-field weight of the connection between a vertex of degree k and another vertex of degree k′, Eq. (5). This formula omits correlations of higher order than two-point correlations and works for large, sparse networks [41]. In this case, the mean-field weight can be interpreted as the probability that two given vertices are connected.

For random, undirected, and degree-uncorrelated graphs, P(k′|k) does not depend on k and is given by P(k′|k) = k′P(k′)/⟨k⟩, where ⟨k⟩ is the average degree of the network. This applies even if the network is not sparse. Accordingly, the mean-field weight can be simplified, Eq. (6). Inserting this into Eq. (4) yields Eq. (7): the expected number of interactions of a vertex scales linearly with its degree.

Similarly, suppose each vertex initiates the same number of interactions per unit time. A focal vertex then additionally receives interactions initiated by each of its neighbours, which yields Eq. (8) for its expected number of interactions. Again, vertices with a degree greater (smaller) than the average degree are expected to have more (fewer) interactions than average. Interaction rates on various heterogenous networks are shown in Figure 2. As shown in Figures 2a–c, this approximation works well for a variety of networks in which the degrees of adjacent vertices are uncorrelated. However, when the network is strongly degree-correlated, as for two-star graphs [27], [34], the approximation works poorly (see Figure 2d for an example of highly clustered scale-free networks). In this case, we may use Eq. (5) to calculate the interaction rates, provided the degree–degree correlations, P(k′|k), are explicitly known.
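The scaling of interaction numbers with degree is straightforward to verify by simulation. The following sketch is our own illustration (not the authors' simulation code): it counts interactions on an Erdős–Rényi random graph when, at each step, a randomly chosen individual interacts with a randomly selected neighbour; the per-vertex counts increase roughly linearly with degree, as expected for degree-uncorrelated graphs.

```python
import random
from collections import defaultdict

def erdos_renyi(n, p, rng):
    """Generate an undirected Erdős–Rényi graph as an adjacency list."""
    nbrs = defaultdict(list)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def interaction_counts(nbrs, n, steps, rng):
    """At each step a random vertex initiates one interaction with a random neighbour."""
    counts = [0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        if not nbrs[i]:
            continue
        j = rng.choice(nbrs[i])
        counts[i] += 1
        counts[j] += 1
    return counts

if __name__ == "__main__":
    rng = random.Random(1)
    n, p, steps = 500, 0.02, 200_000
    nbrs = erdos_renyi(n, p, rng)
    counts = interaction_counts(nbrs, n, steps, rng)
    # Mean interaction count per vertex, grouped by degree and scaled by n/steps
    # so that the population-wide average is about 2 (each interaction involves two vertices).
    by_degree = defaultdict(list)
    for v in range(n):
        by_degree[len(nbrs[v])].append(counts[v] * n / steps)
    for k in sorted(by_degree):
        print(k, sum(by_degree[k]) / len(by_degree[k]))
```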

Figure 2. Average number of interactions as a function of the degree of the vertex for different types of random heterogenous population structures:

(A) Erdős-Rényi random graphs [53], (B) Newman-Watts small-world networks [54], (C) Barabási-Albert scale-free networks [42], and (D) Klemm-Eguiluz highly-clustered scale-free networks [55]. All graphs have the same size and average degree. At each time step a randomly chosen individual interacts with a randomly selected neighbour. The average number of interactions is shown for simulations (blue dots) and an analytical approximation for graphs where the degrees of adjacent vertices are uncorrelated (red line, see Eq. (8)).

https://doi.org/10.1371/journal.pcbi.1003567.g002

This indicates that on undirected graphs uniform interaction rates can be achieved only on regular graphs, where all vertices have the same number of neighbours.

Heterogenous populations.

In recent years the focus has shifted from homogenous populations to heterogenous structures and, in particular, to small-world or scale-free networks because they capture intriguing features of social networks [42]. On these structures the accounting of payoffs becomes important and, indeed, a crucial determinant of the evolutionary outcome. If payoffs are accumulated, heterogenous structures further promote the evolution of cooperation [24], [25], [27], [36]. In contrast, averaging the game payoffs can remove the ability of scale-free graphs to sustain higher levels of cooperation [28]–[30].

So far our discussion has focused on interactions between individuals and the translation of payoffs into fitness. The next step is to specify how differences in fitness affect the population dynamics. The most common updating rules in evolutionary games on graphs fall into three categories: Moran birth-death, Moran death-birth, and imitation processes. The evolutionary outcome can be highly sensitive to the choice of update rule. For example, under weak selection, cooperation in the prisoner's dilemma may thrive under death-birth but not under birth-death updating [8], [10], [20].

In heterogenous populations the range of payoffs depends on the payoff accounting: if payoffs are averaged, the range is determined by the maximum and minimum values in the payoff matrix, but if payoffs are accumulated the range additionally depends on the size and structure of the population. In particular, this difference may also affect the updating rule: for example, the pairwise comparison process specifies the probability that vertex i adopts the strategy of vertex j based on the difference of their fitnesses, f_i and f_j, respectively [43], [44]. This represents an imitation process in which a sufficiently large normalization constant ensures that the expression indeed remains a probability. Since this constant needs to be at least twice the range of possible fitness values, a generic choice becomes impossible for accumulated payoffs.

Here we focus on a related imitation process in which an individual i is chosen at random to reassess its strategy by comparing its performance to that of a randomly chosen neighbour j. Individual i then imitates the strategy of j with probability f_j/(f_i + f_j) (9), where f_i and f_j are the fitnesses of i and j. This variant is convenient as it includes an appropriate normalization factor and hence works regardless of how the fitnesses are calculated. In particular, for the exponential payoff-to-fitness mapping (see Eq. (1)) the imitation rule, Eq. (9), recovers the Fermi update [45]: the probability of imitation becomes a Fermi function of the difference in accumulated payoffs (10a) or averaged payoffs (10b). For a comparison between averaged and accumulated payoffs in homogenous and heterogenous populations, see Figure 3.
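As a minimal illustration of the imitation rule, the sketch below (our own naming and parameter values) computes the probability f_j/(f_i + f_j) that i imitates j and verifies that, for the exponential payoff-to-fitness mapping, it coincides with the Fermi function of the payoff difference.

```python
import math

def imitation_probability(f_i, f_j):
    """Probability that i imitates j, given their (positive) fitnesses."""
    return f_j / (f_i + f_j)

def fermi_probability(payoff_i, payoff_j, w):
    """Equivalent form for exponential fitness f = exp(w * payoff):
    imitation probability as a Fermi function of the payoff difference."""
    return 1.0 / (1.0 + math.exp(-w * (payoff_j - payoff_i)))

w = 0.1                       # selection strength (assumed value)
pi_i, pi_j = 2.0, 5.0         # payoffs of the focal individual and its neighbour
p1 = imitation_probability(math.exp(w * pi_i), math.exp(w * pi_j))
p2 = fermi_probability(pi_i, pi_j, w)
assert abs(p1 - p2) < 1e-12   # the two forms agree
print(p1)
```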

Figure 3. Average fraction of strategy A for accumulated (top row) versus averaged (bottom row) payoffs in homogenous (left column) and heterogeneous (middle column) populations, as well as the difference between them (right column), as a function of the two game parameters (see Table 1).

In each panel the four quadrants indicate the four basic types of generalized social dilemmas: prisoner's dilemma (upper left), snowdrift or co-existence games (upper right), stag hunt or coordination games (lower left) and harmony games (lower right). Homogenous populations are represented by lattices with von Neumann neighbourhood (degree 4) and heterogenous populations are represented by Barabási-Albert scale-free networks with the same average degree. The population is updated according to the imitation rule, Eq. (9). The colours indicate the equilibrium fraction of strategy A (left and middle columns), ranging from A dominates (blue), through equal proportions (green), to B dominates (red). Increases in the equilibrium fraction of A due to heterogeneity are shown in shades of blue (right column) and decreases in shades of red. The intensity of the colour indicates the strength of the effect. Accumulated payoffs in heterogenous populations shift the equilibrium in support of the more efficient strategy A except for harmony games, where A dominates in any case (bottom right quadrant). Conversely, for averaged payoffs the support of strategy A is much weaker and can even be detrimental in parts of the parameter space. Parameters: the initial configuration is a random distribution of equal proportions of strategies A and B; each simulation run proceeds for a fixed number of updates and the equilibrium frequency of A is averaged over the final updates of the run; results are averaged over independent runs; for scale-free networks the network is regenerated periodically across runs. No mutations occurred during the simulation runs.

https://doi.org/10.1371/journal.pcbi.1003567.g003

On a microscopic level averaging or accumulating payoffs in heterogenous populations turns out to have important biological implications: when averaging payoffs, individuals play different games depending on their location on the graph, whereas for accumulated payoffs everyone plays the same game but at different rates – again based on the individuals' locations. These intriguing differences are illustrated and discussed for the simplest heterogenous structure, the star graph, in a subsequent section. First we develop a framework that aids in analyzing an evolutionary process in heterogeneous, graph-structured populations.

Criteria for Evolutionary Success

In order to determine the evolutionary success of a strategic type in a finite population we consider three fixation probabilities: ρ_A, ρ_B and ρ_0. The first, ρ_A, indicates the probability that a single A type in an otherwise B population goes on to supplant all Bs, while the second, ρ_B, refers to the probability of the converse process, where a single B type takes over a population of A types. These fixation probabilities are important whenever mutations can arise in the population during reproduction or through errors in imitating the strategies of others. The last probability, ρ_0, denotes the fixation probability under the neutral process, which is defined as the dynamics in a population with vanishing selection, w → 0. In such a case the game payoffs do not matter and everyone has the same fitness. Based on these fixation probabilities, two distinct and complementary criteria are traditionally used to measure evolutionary success [15], [20]:

  1. Type A is said to have an evolutionary advantage, or is favoured, if ρ_A > ρ_B (11) holds. If mutations, or errors in imitation, are rare, the mutant has disappeared or taken over the entire population before the next mutation occurs. We can then view the population dynamics as an embedded Markov chain transitioning between two states: all-A and all-B. Denote the proportion of time spent in the state all-A (respectively, all-B) by x_A (resp. x_B). Together, x_A and x_B are known as the stationary distribution of the Markov chain and satisfy the balance equation x_A μ_B ρ_B = x_B μ_A ρ_A (12), where μ_A (μ_B) is the probability that an A (B) mutant appears in the all-B (all-A) population. For homogeneous populations, or if mutations are not tied to reproduction or imitation events, μ_A = μ_B and Eq. (12) reads x_A ρ_B = x_B ρ_A (13). Hence, if ρ_A > ρ_B then x_A > x_B, which captures the notion of A having an advantage over B. If the inequality, Eq. (11), is reversed then type B has the advantage.
  2. Type A is a beneficial mutation if ρ_A > ρ_0 (14a) holds. Similarly, if ρ_B > ρ_0 (14b) holds, then type B is a beneficial mutation. Note that, in general, Eqs. (14a) and (14b) are not mutually exclusive. A and B types may simultaneously be advantageous mutants – in co-existence games, such as the snowdrift game – or both disadvantageous – in coordination games, such as the stag-hunt game. However, for payoff matrices that satisfy equal-gains-from-switching, such as Table 2, one type being beneficial implies that the other is not, and vice versa, in unstructured populations or for weak selection [46]. (A minimal simulation sketch for estimating such fixation probabilities is given below.)
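The fixation probabilities entering these criteria are rarely available in closed form on arbitrary graphs, but they are easy to estimate by direct simulation. The sketch below is our own minimal illustration (graph, trial numbers and function names are assumptions), using the neutral imitation process for brevity; it estimates how often a single mutant starting on a given vertex of a small star takes over the population.

```python
import random

def fixation_probability(nbrs, start_vertex, trials, rng):
    """Monte Carlo estimate of the probability that a single neutral mutant,
    starting on start_vertex, takes over the whole graph under an imitation
    process: a random focal individual copies a random neighbour with prob. 1/2."""
    n = len(nbrs)
    fixed = 0
    for _ in range(trials):
        mutant = [False] * n
        mutant[start_vertex] = True
        count = 1
        while 0 < count < n:
            i = rng.randrange(n)
            j = rng.choice(nbrs[i])
            # With equal fitness the imitation probability is 1/2; copying only
            # matters when the two strategies differ.
            if rng.random() < 0.5 and mutant[i] != mutant[j]:
                count += 1 if mutant[j] else -1
                mutant[i] = mutant[j]
        fixed += count == n
    return fixed / trials

# 5-vertex star: vertex 0 is the hub, vertices 1..4 are leaves.
nbrs = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
rng = random.Random(0)
print("hub :", fixation_probability(nbrs, 0, 20_000, rng))
print("leaf:", fixation_probability(nbrs, 1, 20_000, rng))
```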

The above conditions (11) and (14) are based on the implicit assumption of homogenous populations or averaged payoffs and randomly placed mutants. In the present context of heterogenous populations and with mutants explicitly arising through errors in reproduction or imitation, both conditions require further scrutiny and appropriate adjustments.

The first condition implicitly assumes that an A mutant appears in a monomorphic B population with the same probability as a B mutant in a monomorphic A population. However, in heterogenous populations with accumulated payoffs this is not necessarily the case. Even in monomorphic states hubs may have a higher fitness and hence are more readily imitated, or reproduce more frequently, than low degree vertices. This can result in a bias of the rates at which A and B mutants arise. Thus, the condition for evolutionary advantage, Eq. (11), must read μ_A ρ_A > μ_B ρ_B (15). In general, μ_A and μ_B depend on the population structure as well as on the payoffs and their accounting. The star structure serves as an illustrative example in the next section.

Similarly, the second condition also needs to be made more explicit. In general, to determine whether a mutation is beneficial, its fixation probability should exceed the probability that, in the corresponding monomorphic population, one particular individual eventually establishes itself as the common ancestor of the entire population. We denote by ρ_A^0 (ρ_B^0) the probability that one particular individual in a monomorphic A (B) population eventually becomes the common ancestor. Thus, the second condition, Eq. (14), should be interpreted as ρ_A > ρ_B^0 (16a) and ρ_B > ρ_A^0 (16b), i.e. that the fixation probability of a single A (or B) mutant in a B (A) population exceeds that of one B (A) individual turning into the common ancestor of the entire population.

If mutations occur during an updating event, then in heterogeneous populations mutants arise more frequently on some vertices than on others. For our imitation process, high degree vertices serve as models more often than low degree vertices and hence mutations are likely to occur in the neighbours of high degree vertices. Note that this is different from placing a mutant on a vertex chosen uniformly at random from all vertices [47]. A randomly placed neutral mutant fixates, on average, with a probability equal to the inverse of the population size. This is not necessarily the case if neutral mutants arise through reproductive events or through errors in imitating or adopting other strategies. In fact, the distinction between ρ_0 and the monomorphic fixation probabilities is only required on heterogenous graphs with accumulated payoffs and non-random locations of mutants. In all other situations the (average) monomorphic fixation probabilities are the same and equal to 1/N, where N is the population size.

In summary, due to the fitness differences in a monomorphic A population with accumulated payoffs, the turnover is accelerated, more strategy updates take place and hence more errors occur than in the corresponding monomorphic B population. This means that, on average, mutant Bs more frequently attempt to invade an A population than vice versa. Overall, this leads to new conditions for evolutionary success in heterogeneous populations, summarized as follows. Type A (i) has an evolutionary advantage, or is favoured, if μ_A ρ_A > μ_B ρ_B, where μ_A is the probability that an A mutant arises in an all-B population (and vice versa), and (ii) is beneficial if ρ_A > ρ_B^0, where ρ_B^0 is the probability that a single individual goes on to become the common ancestor in an all-B population. Analogous conditions hold for a B mutant type. We apply these novel conditions to an example found in the literature [47], the star graph.

The star graph.

The star graph represents the simplest highly heterogenous structure. A star graph of size N consists of a central vertex, the hub, which is connected to all N − 1 leaf vertices. On the star graph the range of degrees is maximal – the hub has degree N − 1 and all leaves have degree one.

In order to illustrate the differences arising from accumulating and averaging payoffs, consider a situation where each individual has initiated, on average, one interaction. The hub is then involved in roughly N interactions whereas each leaf is involved in only about one. Suppose some of the vertices are of type A and the rest of type B. For accumulated payoffs the hub simply sums the payoffs of all its interactions, while for averaged payoffs this sum is divided by its number of interactions; for a leaf the two accounting schemes essentially coincide because it engages in a single interaction. Consequently, from an interaction with an A leaf an A hub gains the full A–A payoff if payoffs are accumulated – the same as the leaf gains – but only a small fraction of it if payoffs are averaged, while the leaf still gains the full amount. Thus, A–A interactions are more profitable for vertices with a low degree and the payoff gets discounted for vertices with larger degrees. Admittedly, potential losses of an A hub against B leaves are discounted in the same way when payoffs are averaged, whereas they accumulate in full when payoffs are added up. For A types, interacting with B types is less attractive than interacting with other A types, which is the case in all generalized social dilemmas [12].

Similarly, the payoffs a B type hub garners against A leaves are worth the full amount if accumulated but only a fraction if averaged, whereas for the leaves themselves the two accounting schemes again coincide. In B–B interactions both players get zero, regardless of the aggregation of payoffs, which is a consequence of our particular scaling of the payoff matrix in Table 1; hence for such interactions there is no discrimination between vertices of different degrees. An illustration of the differences arising from payoff accounting for the simpler and more intuitive case of the prisoner's dilemma in terms of costs and benefits (see Table 2) is given in Figure 4.
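To make the two accounting schemes on the star concrete, the following sketch (our own illustration, using the donation-game payoffs of Table 2 with assumed values of b and c) tabulates the accumulated and averaged payoffs of the hub and of a leaf when every hub–leaf edge is played once.

```python
def star_payoffs(n_leaves, b, c, hub_cooperates, leaf_cooperates):
    """Payoffs on a star for the additive prisoner's dilemma (donation game):
    a cooperator pays c per interaction and its partner receives b.
    Returns (hub_accumulated, hub_averaged, leaf_accumulated, leaf_averaged),
    assuming every hub-leaf edge is played exactly once and all leaves share
    the same strategy."""
    hub_acc = n_leaves * ((b if leaf_cooperates else 0) - (c if hub_cooperates else 0))
    hub_avg = hub_acc / n_leaves
    leaf_acc = (b if hub_cooperates else 0) - (c if leaf_cooperates else 0)
    leaf_avg = leaf_acc / 1  # a leaf has a single interaction
    return hub_acc, hub_avg, leaf_acc, leaf_avg

# Cooperating hub surrounded by 9 defecting leaves, b = 3, c = 1 (assumed values).
print(star_payoffs(9, 3.0, 1.0, hub_cooperates=True, leaf_cooperates=False))
# -> (-9.0, -1.0, 3.0, 3.0): accumulating magnifies the hub's total cost, averaging does not.
```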

Figure 4. A star graph has the hub in the centre surrounded by leaf vertices.

Using the matrix in Table 2, a cooperator (blue) on the hub provides a benefit b to each leaf, regardless of whether the payoffs are (a) accumulated or (b) averaged. For each interaction, the costs to the hub amount to c in the accumulated case whereas only c/(N − 1) in the averaged case. Conversely, the costs to a cooperating leaf are always c and it provides a benefit b to the hub if payoffs are accumulated whereas only b/(N − 1) when averaged. Hence for averaged payoffs a cooperating hub provides a benefit b to each leaf at a fraction of the costs, while cooperating leaves provide only a fraction of the benefits to the hub. This means that the leaves and the hub are playing different games. More specifically, the cost-to-benefit ratio for leaves is (N − 1)c/b while it is c/((N − 1)b) for a cooperating hub. For most of the population (the leaves), this ratio is much larger than for accumulated payoffs, where the cost-to-benefit ratio is c/b for everyone. As a consequence, cooperation is much more challenging if payoffs are averaged rather than accumulated.

https://doi.org/10.1371/journal.pcbi.1003567.g004

In particular, on star graphs or, more generally, on scale-free networks, averaged payoffs result in higher and hence less favourable cost-to-benefit ratios for most individuals in the population – those on the lower degree vertices. Naturally, these differences are also reflected in the evolutionary dynamics. We demonstrate this through the fixation probabilities of a single A (B) type in a population of B (A) types.

Let us first consider the fixation probability of a single A type, ρ_A. Because of the heterogenous population structure, ρ_A depends on the location of the initial A – for a star graph, on whether the A originated in the hub or in one of the leaves. We denote the two fixation probabilities by ρ_A^hub and ρ_A^leaf, respectively. With probability (N − 1)/N one of the leaves is chosen to update its strategy and the hub with probability 1/N. For averaged payoffs everyone's fitness is the same in a monomorphic B population and hence the hub is equally likely to adopt the strategy of a leaf, and make a mistake with a small probability, as are the leaves adopting the hub's strategy. Hence the average fixation probability is given by ρ_A = (1/N) ρ_A^hub + ((N − 1)/N) ρ_A^leaf (17). In contrast, for accumulated payoffs even in a monomorphic population the hub does not necessarily have the same payoff as the leaves because of its larger number of interactions. However, for our payoff matrix in Table 1 this does not matter in a monomorphic B population since all B–B interactions yield a payoff of zero. Consequently, Eq. (17) holds for both averaged and accumulated payoffs and, incidentally, this is also the average fixation probability of a randomly placed mutant.

Similarly, we are interested in the average fixation probability, ρ_B, of a single B type in an otherwise monomorphic A population. Again we first need to determine with what probability the mutant arises in a leaf or in the hub. Interestingly, and in contrast to ρ_A, this now depends on the accounting of payoffs. If payoffs are averaged then all individuals have the same payoff and, in analogy to Eq. (17), we obtain ρ_B = (1/N) ρ_B^hub + ((N − 1)/N) ρ_B^leaf (18). However, for accumulated payoffs the hub achieves a much larger payoff than any leaf. In order to determine the average fixation probability of a single B type, we first consider the case where the mutant arises on a leaf. With probability (N − 1)/N a leaf is selected to update its strategy and it adopts the hub's strategy with a probability given by Eq. (10a); if the leaf adopts the strategy it makes an error with a small probability and, instead of copying the A strategy, becomes of type B. Similarly, the hub reassesses its strategy with probability 1/N and switches to the leaf's strategy with a probability given by Eq. (10a), which may then give rise to a B type in the hub with a small probability. Based on these probabilities we can determine the proportions of mutants that arise in the leaves and in the hub, respectively, and the average fixation probability of a single B mutant follows as the correspondingly weighted average of ρ_B^leaf and ρ_B^hub (19).

In the weak selection limit, Eq. (19) takes on the same form as for averaged payoffs, Eq. (18). Conversely, for large populations, mutants almost surely arise in leaves and hence ρ_B ≈ ρ_B^leaf. This is a good approximation because already for moderate population sizes the probability that the mutant arises in the hub becomes negligible.
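Structurally, Eqs. (17)–(19) all have the same form: the average fixation probability is the location-specific fixation probability weighted by the probability that the mutation arises at that location. The trivial helper below (our own, with purely hypothetical numbers) makes this explicit.

```python
def average_fixation(arise_probs, fixation_probs):
    """Average fixation probability when mutants arise at different locations with
    different probabilities: sum of p(arise at location) * rho(location)."""
    assert abs(sum(arise_probs) - 1.0) < 1e-12
    return sum(p * rho for p, rho in zip(arise_probs, fixation_probs))

# Hypothetical star-graph numbers: mutants arise on a leaf far more often than on the hub.
p_leaf, p_hub = 0.95, 0.05
rho_leaf, rho_hub = 0.02, 0.30
print(average_fixation([p_leaf, p_hub], [rho_leaf, rho_hub]))
```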

In order to determine the evolutionary advantage of A and B types we still need to determine the rates at which A and B mutants arise in monomorphic B and A populations, respectively. If payoffs are averaged, all individuals in the population have the same fitness and hence with probability 1/2 the focal individual imitates its neighbour (c.f. Eq. (10a)), and upon imitation an error (or mutation) occurs with a small probability. This holds for monomorphic populations of either type and hence μ_A = μ_B. For accumulated payoffs the same argument holds for monomorphic B populations, where all individuals have zero payoff, and A mutants arise at the same rate. In contrast, in a monomorphic A population the hub has a much higher fitness and leaves will almost surely imitate the hub (whereas the hub almost surely will not imitate a leaf) (20). For large N essentially every update results in one of the leaves imitating the hub, so that B mutants arise at roughly twice the rate of A mutants.

Equations (17) through (19) yield the conditions under which type A or B has an evolutionary advantage. For star graphs, the fixation probabilities ρ_A and ρ_B can be derived from the transition probabilities to increase or decrease the number of mutants by one, and hence the results can easily be applied to any update rule [47]. For the imitation dynamics, A types are favoured under weak selection if and only if conditions (21a) and (21b) hold; in the limit of infinite populations, N → ∞, these reduce to conditions (21c) and (21d). A detailed derivation of the different fixation probabilities is provided in the Materials and Methods section.

In order to determine whether a mutant is favoured or not (see Eq. (16)), we first need to determine the monomorphic fixation probabilities ρ_A^0 and ρ_B^0. Naturally, those fixation probabilities again depend on whether the prospective common ancestor is located in the hub or in one of the leaves. Let us first consider a monomorphic B population. The fixation probability of a B located in the hub, or in one particular leaf, can be derived from the fixation probabilities ρ_B^hub and ρ_B^leaf by an appropriate substitution (see Materials and Methods), which yields 1/2 for the hub (22a) and 1/(2(N − 1)) for a leaf (22b). Intuitively, the hub individual becomes the common ancestor with probability 1/2 because any leaf individual updates its strategy to the hub's with probability 1/2 and the hub keeps its strategy also with probability 1/2, both independent of the size of the population. Conversely, a leaf individual must first be imitated by the hub, which is N − 1 times less likely than the reverse. On average (inserting into Eq. (17)) we then obtain ρ_B^0 = 1/N (23). Note that in a monomorphic B population the payoffs are zero regardless of the selection strength, the location (hub or leaves) or the payoff accounting. Again, this is a consequence of our particular choice of payoff matrix (Table 1), and thus Eq. (23) holds for both averaged and accumulated payoffs and is, in fact, the same as the neutral fixation probability. Note that the fixation probabilities in Eqs. (22a) through (23) corroborate the approximation results of [23] and the analytical results of [21].

Let us now turn to the monomorphic A population and determine ρ_A^0. Without selection everything is the same as in the monomorphic B population above and ρ_A^0 = 1/N. However, for any non-zero selection the situation becomes more interesting. If payoffs are averaged, all individuals have the same (non-zero) payoff and a mutant is equally likely to appear in the hub as in any particular leaf (c.f. Eq. (18)), and hence ρ_A^0 = 1/N still holds. However, if payoffs are accumulated the hub has a higher fitness. The probabilities that an A on the hub or on one of the leaves becomes the common ancestor then differ (see Materials and Methods) and, on average, we obtain Eq. (24). Now we are able to derive the conditions under which an A and/or B mutant is beneficial, c.f. Eq. (16): conditions (25a) and (25b) for averaged payoffs and conditions (25c) and (25d) for accumulated payoffs. The parameter regions of evolutionary success of A and B types are illustrated in Figure 5.

Figure 5. Criteria for evolutionary success on the star graph for accumulated (left column) and averaged (right column) payoffs under weak selection.

The range for which A is advantageous (top row, c.f. Eq. (15)) depends on the population size N and is shown in the limit N → ∞ (solid line) and for a finite population (dashed line). Below the respective lines A is favoured. Similarly, the ranges for which A and B mutants are beneficial (c.f. Eq. (16)) also depend on N; mutants of one type are beneficial above the red lines and of the other type below the blue lines (solid for N → ∞; dashed for finite N). Additive games (equal-gains-from-switching) lie on the dotted line.

https://doi.org/10.1371/journal.pcbi.1003567.g005

We can analyze Eqs. (21a)–(21d) and (25a)–(25d) in terms of the additive prisoner's dilemma game by substituting the payoff entries of Table 2. For simplicity, we restrict attention to the limit of large populations and, since in the additive prisoner's dilemma game a strategy is favoured if and only if it is beneficial, we need only consider Eqs. (21a)–(21d). This yields conditions (26a) and (26b). If we suppose b > c > 0, then Eq. (26a) is never satisfied. That is, averaging rather than accumulating the payoffs altogether removes the ability of the star graph to support cooperation.

Note that for additive, or equal-gains-from-switching, games and for weak selection, a strategy that is favoured is also beneficial (and vice versa), regardless of the accounting of payoffs. This extends results obtained for homogenous populations [8], [10].

Stochastic Interactions and Updates

As we have seen, when payoffs are averaged, members of a heterogeneous population may effectively be playing different games, while if payoffs are accumulated, all individuals play the same game. Therefore, only accumulating payoffs allows for meaningful comparisons of different heterogeneous population structures. A common simplifying assumption is that each individual interacts once with all of its neighbours; see Figure 3. For heterogeneous populations this assumption means that individuals residing on higher-degree vertices interact with their neighbours at a higher rate than those on lower-degree vertices. This leads to a separation of time scales, where interactions occur on a much faster time scale than strategy updates.

Realistically, all social interactions require a finite amount of time and hence the number of interactions per unit time is limited. This constraint already affects the evolutionary process in unstructured populations [48] but becomes particularly important in heterogenous networks where, in scale-free networks for example, some vertices entertain neighbourhood sizes that are orders of magnitude larger than those of other vertices. For those hubs it may not be possible to engage in interactions with all neighbours between subsequent updates of their own strategy or the strategies of their neighbours. In order to investigate this we need to abandon the separation of the time scales for interactions and strategy updates.

A unified time scale on which interactions and strategy updates occur can be introduced as a stochastic process in which a randomly chosen individual initiates an interaction with a random neighbour with some probability and otherwise reassesses its strategy by comparing its payoff to that of a random neighbour according to Eq. (9). Interactions alter the payoffs of both individuals (and hence their fitnesses, see Eq. (1a)) according to the game matrix in Table 1. If individual i adopts the strategy of its neighbour, then its payoff (and interaction count) is reset to zero, regardless of whether the imitation resulted in an actual change of strategy. Simulation results for various ratios of interactions to strategy updates are shown in Figure 6.
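The unified time scale is simple to implement. The sketch below is our own minimal version (the interaction probability is written q here and all parameter values are assumptions): at every elementary step a random individual either initiates one donation-game interaction with a random neighbour or reassesses its strategy via the Fermi rule on accumulated payoffs, resetting its payoff and interaction count whenever it imitates.

```python
import math
import random

def step(nbrs, strategy, payoff, n_int, q, w, b, c, rng):
    """One elementary event: with probability q a random individual initiates a game
    interaction with a random neighbour (donation game: a cooperator pays c and the
    partner receives b); otherwise it reassesses its strategy against a random
    neighbour and, upon imitating, its payoff and interaction count are reset."""
    i = rng.randrange(len(strategy))
    j = rng.choice(nbrs[i])
    if rng.random() < q:                       # game interaction
        if strategy[i]:                        # i cooperates
            payoff[i] -= c
            payoff[j] += b
        if strategy[j]:                        # j cooperates
            payoff[j] -= c
            payoff[i] += b
        n_int[i] += 1
        n_int[j] += 1
    else:                                      # strategy update (imitation)
        x = w * (payoff[j] - payoff[i])        # Fermi rule on accumulated payoffs,
        x = max(-35.0, min(35.0, x))           # clamped to avoid numerical overflow
        if rng.random() < 1.0 / (1.0 + math.exp(-x)):
            strategy[i] = strategy[j]          # imitate (possibly the same strategy)
            payoff[i] = 0.0                    # reset payoff and interaction count
            n_int[i] = 0

if __name__ == "__main__":
    # Toy driver on a ring of 20 vertices (assumed parameters, for illustration only).
    n = 20
    nbrs = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
    rng = random.Random(2)
    strategy = [rng.random() < 0.5 for _ in range(n)]   # True = cooperator
    payoff = [0.0] * n
    n_int = [0] * n
    for _ in range(50_000):
        step(nbrs, strategy, payoff, n_int, q=0.5, w=0.1, b=3.0, c=1.0, rng=rng)
    print("fraction of cooperators:", sum(strategy) / n)
```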

Figure 6. Average fraction of strategy A for different ratios between interactions and strategy updates in homogenous (top row) and heterogeneous (middle row) populations, and the difference between them (bottom row), as a function of the two game parameters (c.f. Figure 3).

Interactions occur with a given probability and strategy updates with the complementary probability, so that the average number of interactions initiated by an individual between strategy updates can be tuned from few to many. If interactions are rare, the effects of heterogenous population structures have little chance to manifest themselves and the results are closer to those for averaged payoffs (c.f. Figure 3). In contrast, if interactions are frequent, heterogeneity plays an important role: for scale-free networks it is guided by the structural heterogeneity, whereas in homogenous populations another form of heterogeneity spontaneously emerges in the number of interactions. Even on lattices, stochastic differences in the number of interactions get amplified by the dynamics because an increased number of interactions reduces the chances that an individual updates its strategy (c.f. Figure 7). As a consequence the results for lattices and scale-free networks become increasingly similar, but scale-free networks keep promoting A types to a greater extent. Parameters and averaging technique are as in the caption of Figure 3.

https://doi.org/10.1371/journal.pcbi.1003567.g006

If the interaction rate is small, few interactions occur between strategy updates, and in the limit of no interactions neutral evolution is recovered. Conversely, in the opposite limit many interactions occur between strategy updates, which allows individuals to garner large payoffs as well as build up large payoff differences. The average number of interactions initiated by an individual between subsequent reassessments of its strategy is given by the relative ratio of the time scales of game interactions versus strategy updates. However, the distribution of the number of interactions is biased: individuals with a large number of interactions tend to score high payoffs and hence are less likely to imitate a neighbour's strategy, which in turn results in a further increase of their interaction count. On heterogenous graphs, and scale-free networks in particular, this bias is built in by the underlying structure because highly connected hubs engage, on average, in a much larger number of interactions than vertices with few neighbours. Moreover, hubs are more likely to serve as models when neighbours reassess their strategies – simply because hubs have many neighbours. Thus, hubs are not only more resilient to change but also have a stronger influence on their neighbourhood. When the ratio of interactions to updates becomes large, interactions dominate strategy updates and the resulting game dynamics on heterogeneous and homogeneous graphs become indistinguishable.

Interestingly, a similar bias in interaction numbers spontaneously emerges on homogenous graphs, lattices in particular. Since all vertices have the same number of neighbours, no vertices are predisposed to achieve more interactions than others, but some inequalities in interaction numbers occur simply through stochastic fluctuations. As above, those vertices that happen to engage in more interactions tend to have higher payoffs and hence are less likely to imitate their neighbours and keep aggregating payoffs. This positive feedback between interaction count and resilience to change spontaneously introduces another form of heterogeneity, which becomes increasingly pronounced as interactions become more frequent relative to updates. In fact, for high interaction rates it rivals the structurally imposed heterogeneity of scale-free networks, see Figure 7.

Figure 7. Distributions of the number of interactions on lattices (black) and scale-free networks (blue) with (a) few interactions between updates and (b) many interactions between updates.

For few interactions between updates, the heterogeneity of scale-free networks results in a pronounced tail at higher numbers of interactions compared to the approximately exponential distribution for lattices. This tail is responsible for the reduction of cooperation in scale-free networks observed in Figure 6: as interactions dominate, some vertices almost never update their strategies. This "static network" emerges in both lattices and scale-free graphs and prevents the complete proliferation of the rare strategy. Nevertheless, most of the individuals in the population experience essentially the same number of interactions. The distributions look different for many interactions between updates, but the main difference remains that scale-free networks produce a more pronounced tail. More importantly, however, for most of the population the distributions – and hence the heterogeneities – are actually very similar. On lattices, the skewed distribution is caused by stochastic variations and the positive feedback between the number of interactions and the resilience to changing strategy.

https://doi.org/10.1371/journal.pcbi.1003567.g007

Regardless of the structure, the positive feedback between payoff aggregation and the diminishing chances to change strategy (and hence reset payoffs) means that a small set of nodes forms an almost static backdrop of the dynamics and hence has a considerable effect on the evolutionary process. This set is a random selection on homogenous structures and consists of the hubs on heterogenous structures. As a consequence, the initial configuration of the population has long lasting effects on the abundance of strategies.

A more detailed view of the effect of the interaction rate on the evolutionary process is provided by restricting attention to the prisoner's dilemma with additive payoffs, c.f. Table 2, which can be parametrized by the cost-to-benefit ratio c/b. The equilibrium levels of cooperation in the plane spanned by the cost-to-benefit ratio and the interaction rate are shown in Figure 8 for lattices and scale-free networks.

Figure 8. Impact of the time scale relation between interactions and strategy updates on the equilibrium fraction of cooperators for additive prisoner's dilemma games: (a) lattices and (b) scale-free networks.

The limit of no interactions recovers the neutral process, whereas in the opposite limit individuals hardly ever update their strategies. Thus, in both limiting cases the fraction of cooperators remains at its initial value of one half. For both types of population structures there exists an intermediate interaction rate that leads to an optimal level of cooperation. On lattices the support for cooperation is strongest if interactions and strategy updates occur at roughly equal rates, but on scale-free networks more frequent updates than interactions are even more beneficial. Parameters and averaging technique are as in the caption of Figure 3.

https://doi.org/10.1371/journal.pcbi.1003567.g008

Altering the relative rates of interactions versus strategy updates has interesting effects on the evolutionary outcome. For lower rates of interaction, scale-free networks outperform lattices in their ability to promote cooperation. As interaction rates increase and strategy updates become rarer, scale-free networks and lattices become virtually indistinguishable in their ability to support cooperation. For both lattices and scale-free networks an optimal ratio between strategy updates and interactions exists: lattices support the greatest number of cooperators when interactions occur at roughly the same rate as strategy updates, whereas scale-free networks provide the strongest support for cooperation if there are roughly three updates per interaction.

Discussion

Evolutionary dynamics in heterogenous populations, scale-free networks in particular, have attracted considerable attention over recent years. Somewhat surprisingly, the underlying microscopic processes and their implications for the macroscopic dynamics and the corresponding biological interpretations have received little attention.

Here we have shown that established criteria to measure success in evolutionary processes make different kinds of implicit assumptions that do not hold in general for heterogenous structures. Instead, for such structures it becomes imperative to reconsider, revise and generalize these criteria, which was done in the Criteria for Evolutionary Success section. If errors arise in imitating the strategic type of other individuals, or mutations occur during reproduction, then mutations are more likely to arise in some locations than in others. For example, on the star graph mutants likely occur in the leaf nodes for birth-death updating and imitation processes but in the hub for death-birth processes. Moreover, in heterogenous populations the fixation probabilities generally depend on the initial location of the mutant and hence even the fixation probability of a neutral mutant may no longer simply be the reciprocal of the population size but rather intricately depend on the population structure.

Another crucial determinant of the evolutionary dynamics in heterogenous populations is the aggregation of payoffs from interactions between individuals. Individuals on vertices with a higher (lower) degree can expect to have more (fewer) interactions than average. Even though the choice between averaging or accumulating payoffs may seem innocuous, it has far reaching consequences. Previous authors [29] have found that averaging payoffs in a prisoner's dilemma game on a scale-free network eliminates such a network's ability to promote cooperation as observed in earlier studies [24], [25], [27], [49]. We have extended this result to general games and provide a detailed rationale for this phenomenon, which is summarized as follows. If payoffs are accumulated, some individuals are capable of accruing more payoffs than others strictly by virtue of having more potential partners. Averaging payoffs removes the ability of hubs to accrue greater payoffs, but simultaneously makes it difficult to compare results for different population structures (e.g. lattices versus scale-free networks) even if their average degrees are the same, because the type of game played depends on the location in the graph. Hence, accumulating payoffs seems the more natural choice for comparing evolutionary outcomes based on different population structures because it ensures that everyone engages in the same game. However, if we assume all interactions are realised, then those individuals with more neighbours interact at a much greater rate than those with fewer.

In order to investigate the effect of this disparity in the number of interactions on the success of strategies on heterogenous graphs, we introduced a time-scale parameter that determines the probability with which an interaction or a strategy update occurs. When the rate of strategy updates is increased, heterogeneous graphs are able to support higher levels of cooperation than lattices. Conversely, increasing the rate of interactions results in small differences between lattices and scale-free networks; both support roughly the same levels of cooperation. For imitation processes, individuals with high payoffs are unlikely to change their strategies and hence are likely to keep accumulating more payoffs. On scale-free networks, hubs are predestined to become such high performing individuals, but on lattices they spontaneously emerge, triggered by stochastic fluctuations in the interaction count and driven by the positive feedback between increasing payoffs and increasing resilience to changing strategies (and hence to resetting payoffs).

For intermediate values of the time-scale parameter an optimal increase in the level of cooperation is found: lattices support cooperation most efficiently if a balance is struck between interactions and strategy updates, whereas scale-free networks work most efficiently if slightly more updates occur. For lattices a related observation was reported for noise in the updating process [50]. If the noise is large, updating is essentially random, but if it is small the game payoffs become decisive. Interestingly, cooperation is most abundant for intermediate levels of noise, which is similar to having some, but not too many, interactions between strategy updates.

Previous work has found that heterogeneous graphs support coordination of strategies, where all individuals are inclined to adopt the same strategy, while homogeneous graphs support coexistence [51], [52]. The time-scale parameter introduced in the Stochastic Interactions and Updates section appears to aid in promoting coexistence on both types of graphs, as suggested by the large green regions in Figures 3, 6, and 8. Exactly how the time-scale parameter promotes coexistence is a topic worthy of further investigation.

Naturally, there is no single correct way of modelling the updating of the population or the aggregation of payoffs but, as so often, the devil is in the details: implicit assumptions originating in traditional, homogeneous models may be misleading or have unexpected consequences in more general, heterogeneous populations.

Materials and Methods

In [47], the authors calculate expressions for the probability that a single mutant fixes on a star graph. These expressions are given in terms of state transition probabilities, where a state is specified by the number of mutants on the leaves and the type of individual occupying the hub. With this notation, the fixation probability of a single mutant placed on a leaf vertex is given by Eq. (27) and that of a single mutant placed on the hub by Eq. (28); both expressions share the common factor defined in Eq. (29).

For the imitation process defined by Eq. (9), the transition probabilities are given by Eqs. (30a)–(30d) for accumulated payoffs and by Eqs. (31a)–(31d) for averaged payoffs. These are incorporated into Eqs. (27) and (28) to yield the fixation probabilities of a single mutant placed on a leaf and on the hub; the remaining fixation probabilities are obtained in a similar way. The average fixation probabilities are then calculated using Eqs. (17), (18), and (19). Finally, a first-order approximation in the strength of selection is found for each of these quantities, Eqs. (32a)–(32d). Employing the appropriate condition for evolutionary advantage then yields Eqs. (25a)–(25d) in the main text.
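
The fixation probabilities entering Eqs. (27) and (28) can also be checked numerically by first-step analysis on the star's small state space. Below is a minimal sketch; star_fixation and its trans argument are illustrative names, and the transition probabilities of Eqs. (30a)–(30d) or (31a)–(31d) would have to be supplied by the reader as trans.

```python
import numpy as np

def star_fixation(L, trans):
    """Numerically solve for fixation probabilities on a star with L leaves.
    States are (i, h): i mutant leaves (0..L) and hub type h (0 resident,
    1 mutant).  trans(i, h) returns a dict {(j, g): probability} of one-step
    transitions; any missing probability mass is treated as staying put.
    Returns phi with phi[i, h] = fixation probability of the mutant type."""
    states = [(i, h) for i in range(L + 1) for h in (0, 1)]
    index = {s: k for k, s in enumerate(states)}
    n = len(states)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for s in states:
        k = index[s]
        if s == (0, 0):          # all residents: absorbing, phi = 0
            A[k, k], b[k] = 1.0, 0.0
        elif s == (L, 1):        # all mutants: absorbing, phi = 1
            A[k, k], b[k] = 1.0, 1.0
        else:
            # phi(s) = sum_t p(s->t) phi(t) + (1 - sum_t p(s->t)) phi(s)
            moves = trans(*s)
            A[k, k] = sum(moves.values())
            for t, p in moves.items():
                A[k, index[t]] -= p
    phi = np.linalg.solve(A, b)
    return phi.reshape(L + 1, 2)
```

Since the state space contains only 2(L + 1) states, this reduces to solving a small linear system, which provides an independent check on the closed-form expressions.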

Author Contributions

Conceived and designed the experiments: WM FF CH. Analyzed the data: WM FF CH. Wrote the paper: WM CH. Designed the software used for the simulations: FF.

References

  1. Wright S (1931) Evolution in Mendelian populations. Genetics 16: 97–159.
  2. Kimura M, Weiss G (1964) The stepping stone model of population structure and the decrease of genetic correlation with distance. Genetics 49: 561–575.
  3. Levins R (1969) Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the Entomological Society of America 15: 237–240.
  4. Nowak MA, May RM (1992) Evolutionary games and spatial chaos. Nature 359: 826–829.
  5. Lieberman E, Hauert C, Nowak MA (2005) Evolutionary dynamics on graphs. Nature 433: 312–316.
  6. Frucht R (1949) Graphs of degree three with a given abstract group. Canadian Journal of Mathematics 1: 365–378.
  7. Ohtsuki H, Nowak MA (2006) Evolutionary games on cycles. Proceedings of the Royal Society B 273: 2249–2256.
  8. Taylor PD, Day T, Wild G (2007) Evolution of cooperation in a finite homogeneous graph. Nature 447: 469–472.
  9. Grafen A, Archetti M (2008) Natural selection of altruism in inelastic viscous homogeneous populations. Journal of Theoretical Biology 252: 694–710.
  10. Ohtsuki H, Hauert C, Lieberman E, Nowak MA (2006) A simple rule for the evolution of cooperation on graphs. Nature 441: 502–505.
  11. Dawes RM (1980) Social dilemmas. Annual Review of Psychology 31: 169–193.
  12. Hauert C, Michor F, Nowak MA, Doebeli M (2006) Synergy and discounting of cooperation in social dilemmas. Journal of Theoretical Biology 239: 195–202.
  13. Taylor PD, Jonker L (1978) Evolutionary stable strategies and game dynamics. Mathematical Biosciences 40: 145–156.
  14. Hofbauer J, Sigmund K (1998) Evolutionary Games and Population Dynamics. Cambridge University Press, Cambridge.
  15. Nowak MA, Sasaki A, Taylor C, Fudenberg D (2004) Emergence of cooperation and evolutionary stability in finite populations. Nature 428: 646–650.
  16. Nowak MA, Sigmund K (1990) The evolution of stochastic strategies in the prisoner's dilemma. Acta Applicandae Mathematicae 20: 247–265.
  17. Van Baalen M, Rand DA (1998) The unit of selection in viscous populations and the evolution of altruism. Journal of Theoretical Biology 193: 631–648.
  18. Hauert C (2001) Fundamental clusters in spatial 2×2 games. Proceedings of the Royal Society B 268: 761–769.
  19. Fletcher JA, Doebeli M (2009) A simple and general explanation for the evolution of altruism. Proceedings of the Royal Society B 276: 13–19.
  20. Zukewich J, Kurella V, Doebeli M, Hauert C (2013) Consolidating birth-death and death-birth processes in structured populations. PLoS One 8: e54639.
  21. Maciejewski W (2014) Reproductive value in graph-structured populations. Journal of Theoretical Biology 340: 285–293.
  22. Broom M, Rychtář J, Stadler B (2011) Evolutionary dynamics on graphs - the effect of graph structure and initial placement on mutant spread. Journal of Statistical Theory and Practice 5: 369–381.
  23. Li C, Zhang B, Cressman R, Tao Y (2013) Evolution of cooperation in a heterogeneous graph: Fixation probabilities under weak selection. PLoS One 8: e66560.
  24. Santos FC, Pacheco JM (2005) Scale-free networks provide a unifying framework for the emergence of cooperation. Physical Review Letters 95: 098104.
  25. Santos FC, Rodrigues JF, Pacheco JM (2006) Graph topology plays a determinant role in the evolution of cooperation. Proceedings of the Royal Society B 273: 51–55.
  26. Santos FC, Pacheco JM, Lenaerts T (2006) Cooperation prevails when individuals adjust their social ties. PLoS Computational Biology 2: 1284–1291.
  27. Santos FC, Santos MD, Pacheco JM (2008) Social diversity promotes the emergence of cooperation in public goods games. Nature 454: 213–216.
  28. Tomassini M, Pestelacci E, Luthi L (2007) Social dilemmas and cooperation in complex networks. International Journal of Modern Physics C 18: 1173–1185.
  29. Szolnoki A, Perc M, Danku Z (2008) Towards effective payoffs in the prisoner's dilemma game on scale-free networks. Physica A 387: 2075–2082.
  30. Antonioni A, Tomassini M (2012) Cooperation on social networks and its robustness. Advances in Complex Systems 15: 1250046.
  31. Huberman BA, Glance NS (1993) Evolutionary games and computer simulations. Proceedings of the National Academy of Sciences USA 90: 7716–7718.
  32. Masuda N (2007) Participation costs dismiss the advantage of heterogeneous networks in evolution of cooperation. Proceedings of the Royal Society B 274: 1815–1821.
  33. Perc M, Szolnoki A (2008) Social diversity and promotion of cooperation in the spatial prisoner's dilemma game. Physical Review E 77: 011904.
  34. Pacheco J, Pinheiro FL, Santos FC (2009) Population structure induces a symmetry breaking favouring the emergence of cooperation. PLoS Computational Biology 5: e1000596.
  35. Grilo C, Correia L (2011) Effects of asynchronism on evolutionary games. Journal of Theoretical Biology 269: 109–122.
  36. Santos FC, Pinheiro FL, Lenaerts T, Pacheco JM (2012) The role of diversity in the evolution of cooperation. Journal of Theoretical Biology 299: 88–96.
  37. Hauert C, Doebeli M (2004) Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature 428: 643–646.
  38. Hauert C (2002) Effects of space in 2×2 games. International Journal of Bifurcation and Chaos 12: 1531–1548.
  39. Szabó G, Fáth G (2007) Evolutionary games on graphs. Physics Reports 446: 97–216.
  40. Boguná M, Pastor-Satorras R (2002) Epidemic spreading in correlated complex networks. Physical Review E 66: 047104.
  41. Bollobás B (1985) Random Graphs. Cambridge Studies in Advanced Mathematics.
  42. Barabási A, Albert R (1999) Emergence of scaling in random networks. Science 286: 509–512.
  43. Traulsen A, Claussen JC, Hauert C (2005) Coevolutionary dynamics: From finite to infinite populations. Physical Review Letters 95: 238701.
  44. Traulsen A, Claussen JC, Hauert C (2012) Stochastic differential equations for evolutionary dynamics with demographic noise and mutations. Physical Review E 85: 041901.
  45. Szabó G, Tőke C (1998) Evolutionary Prisoner's Dilemma game on a square lattice. Physical Review E 58: 69–73.
  46. Taylor PD, Day T, Wild G (2007) From inclusive fitness to fixation probability in homogeneous structured populations. Journal of Theoretical Biology 249: 101–110.
  47. Hadjichrysanthou C, Broom M, Rychtář J (2011) Evolutionary games on star graphs under various updating rules. Dynamic Games and Applications 1: 386–407.
  48. Woelfing B, Traulsen A (2009) Stochastic sampling of interaction partners versus deterministic payoff assignment. Journal of Theoretical Biology 257: 689–695.
  49. Santos FC, Pacheco JM, Lenaerts T (2006) Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proceedings of the National Academy of Sciences USA 103: 3490–3494.
  50. Szabó G, Vukov J, Szolnoki A (2005) Phase diagrams for an evolutionary prisoner's dilemma game on two-dimensional lattices. Physical Review E 72: 047107.
  51. Pinheiro F, Pacheco JM, Santos F (2012) From local to global dilemmas in social networks. PLoS One 7: e32114.
  52. Pinheiro FL, Santos FC, Pacheco JM (2012) How selection pressure changes the nature of social dilemmas in structured populations. New Journal of Physics 14: 073035.
  53. Erdős P, Rényi A (1960) On the evolution of random graphs. Publ Math Inst Hung Acad Sci 5: 17–61.
  54. Newman MEJ, Watts DJ (1999) Scaling and percolation in the small-world network model. Physical Review E 60: 7332.
  55. Klemm K, Eguiluz VM (2002) Highly clustered scale-free networks. Physical Review E 65: 036123.