Evolutionary Dynamics: Exploring the Equations of Life

Evolutionary Dynamics
Exploring the Equations of Life

Martin A. Nowak
Belknap Press of Harvard University Press (2006), 384 pp., $45.00 (hardcover)
ISBN 978-0-674-02338-3
Reviewer: Stephen Schecter
Department of Mathematics
North Carolina State University
Raleigh, North Carolina, U.S.A.

In 2005 I started a game theory course for upper-division undergraduates. It's unusual to teach game theory in a mathematics department. The mathematicians John von Neumann, John Nash, and others were famously involved in the creation of game theory, but mathematicians have since taken much less interest in it than have economists, political scientists, and biologists. Even though game theory, like calculus, is a collection of mathematical ideas that apply to many different areas, we have lost our right to claim it.

In my university, however, there were no courses in game theory in other departments, so I could easily start one. One reason I wanted to teach game theory is that game theory today has an overlap with dynamical systems.

A game is a situation in which several individuals (players) want to choose their own strategies to maximize their own payoffs, but each player's payoff depends in part on the other players' strategies. If you knew the other players' strategies, you could maximize your own payoff over your own strategies. Typically, however, you don't know what the other players will do.

Originally game theory was thought of as the rational analysis of how to play games. (Robert Aumann, a mathematician of Nash's generation who won the Nobel Prize in Economics in 2005, founded Hebrew University's Center for the Study of Rationality.) Dynamics came into the picture in the 1970's when biologists started trying to use game theory.

One motivating problem was to understand how animals fight over food, mates, and territory. Some put on a display of how tough they are, showing their big antlers, or whatever they have, but back down if their opponent doesn't. Others are willing to fight to the end. The display strategy is the way to go if most of your opponents are fighters. The fight strategy is the way to go if most of your opponents are displayers. Classical game theory computes a mixed-strategy Nash equilibrium: use each strategy with a certain probability, and your opponent can do no better than to copy you. Interpreted biologically, this might mean that the population should have the computed fractions of displayers and fighters. But the animals haven't studied game theory. How do they figure out the fractions?

One solution is to add dynamics using the replicator equation. Suppose \(n\) strategies, such as display and fight, are available. For simplicity assume each animal uses a pure strategy. Let \(p_i\) be the fraction of the population that uses the \(i\)th strategy. The population state is then a point \((p_1,\ldots,p_n)\) in the \(n\)-simplex: each \(p_i \ge 0\), \(\sum p_i =1\). An animal using strategy \(i\) meets an animal using strategy \(j\) with probability \(p_j\) and gets the payoff \(\pi_{ij}\). His expected payoff is \(f_i(p_1,\ldots,p_n)=\sum_j \pi_{ij}p_j\). Average expected payoff in the whole population is \(f(p_1,\ldots,p_n)=\sum_i p_i f_i(p_1,\ldots,p_n)\). Assume that animals with higher expected payoffs reproduce more successfully and pass on their strategies to their offspring. One arrives at the replicator equation $$ \dot p_i = ( f_i(p_1,\ldots,p_n) - f(p_1,\ldots,p_n) ) p_i, \quad i=1,\ldots,n. $$ This is a differential equation on the \(n\)-simplex. Strategies that are more successful than average increase in the population; strategies that are less successful than average decrease. Solutions do not necessarily approach equilibria, but if they do, the equilibria are Nash equilibria of classical game theory, with some additional properties.
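To see the replicator equation in action, here is a minimal Python sketch. The payoff matrix below is a hypothetical display/fight (Hawk-Dove-style) matrix of my own choosing, not one taken from the book; with these numbers the trajectory settles at the interior equilibrium \(p_{\text{fight}}=2/3\), which is exactly the mixture a classical mixed-strategy Nash computation would give.

```python
import numpy as np

# Hypothetical display/fight (Hawk-Dove-style) payoff matrix, chosen for
# illustration only: PI[i, j] is the payoff to strategy i against strategy j,
# with row/column 0 = fight and 1 = display.
PI = np.array([[-1.0, 4.0],
               [ 0.0, 2.0]])

def replicator_rhs(p):
    """Right-hand side of the replicator equation dp_i/dt = (f_i - fbar) p_i."""
    f = PI @ p          # f_i = sum_j pi_ij p_j, expected payoff of strategy i
    fbar = p @ f        # average expected payoff in the population
    return (f - fbar) * p

# Crude forward-Euler integration on the simplex, starting near all-display.
p = np.array([0.05, 0.95])
dt = 0.01
for _ in range(20000):
    p = p + dt * replicator_rhs(p)
    p = np.clip(p, 0.0, None)
    p = p / p.sum()     # guard against numerical drift off the simplex

print("long-run state (fight, display):", p)
# With this matrix the interior equilibrium is p_fight = 2/3: the same mixture
# that the classical mixed-strategy Nash equilibrium calculation produces.
```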

The replicator equation is part of "evolutionary game theory": the change in \((p_1,\ldots,p_n)\) over time is evolution. If you're a dynamical systems guy or gal, this sounds pretty interesting! So I created the course, and started keeping up a little with evolutionary game theory, in the sense of reading articles about it in Nature or Science. These appeared with some frequency, were usually coauthored by Martin Nowak, and dealt with the evolution of cooperation.

The usual model for why cooperation is a problem is the Prisoner's Dilemma, but I prefer Snowdrift, a classic that I learned from the book under review. Two drivers get stuck in the snow. Each has two available strategies: get out and shovel (S), or relax in the car and listen to music (R). The benefit to each driver of getting unstuck before the snow plow arrives is \(b>0\). The cost of the shoveling required to get unstuck is \(c>0\). It can be borne by one driver or split. This gives the following matrix of payoffs to driver 1; his payoff depends not only on his own strategy but also on the other driver's:

$$ \begin{array}{c|cc} \text{Driver 1}\backslash\text{Driver 2} & S & R \\ \hline S & b-\frac{c}{2} & b-c \\ R & b & 0 \end{array} $$

If \(b-c < 0\), whichever strategy driver 2 chooses, driver 1 gets a better payoff by choosing \(R\), relax in the car. Of course driver 2 can do the same analysis. So we expect both drivers to choose \(R\) and get a payoff of 0. However, if \(b-\frac{c}{2} > 0\), this conclusion is disconcerting. Had both drivers shoveled, each would have earned a positive payoff.

Let's assume \(\frac{c}{2} < b < c\). The classical game theory response to the problem of cooperation that occurs with these parameters is to view the game as one that is likely to be repeated. In a snowy climate, the Snowdrift situation can occur repeatedly. Suppose the expected number of occurrences is \(m\). Imagine three strategies: \(S\), always shovel; \(R\), always relax; and \(T\), tit-for-tat: shovel the first time, and thereafter do whatever the other guy did last time. Here are the new expected payoffs:


$$ \begin{array}{c|ccc} \text{Driver 1}\backslash\text{Driver 2} & S & R & T \\ \hline S & m\left(b-\frac{c}{2}\right) & m(b-c) & m\left(b-\frac{c}{2}\right) \\ R & mb & 0 & b \\ T & m\left(b-\frac{c}{2}\right) & b-c & m\left(b-\frac{c}{2}\right) \end{array} $$

If \(m(b- \frac{c}{2} )>b\), i.e., if \(m\) is big enough, then a best response to \(T\) is \(T\). It becomes rational for the drivers to cooperate (and to threaten to cease cooperation if the other stops cooperating).
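A few lines of code make this concrete. The sketch below simply rebuilds the expected-payoff table above and checks the best response to \(T\); the numbers \(b=2\), \(c=3\), \(m=5\) are illustrative values of my own satisfying \(\frac{c}{2} < b < c\), not anything from the book.

```python
import numpy as np

def repeated_snowdrift_payoffs(b, c, m):
    """Expected payoffs to driver 1 in the m-fold repeated Snowdrift game for
    the strategies S (always shovel), R (always relax), T (tit-for-tat),
    reproducing the table in the text."""
    return np.array([
        # opponent:    S              R            T
        [m * (b - c/2), m * (b - c),  m * (b - c/2)],   # driver 1 plays S
        [m * b,         0.0,          b            ],   # driver 1 plays R
        [m * (b - c/2), b - c,        m * (b - c/2)],   # driver 1 plays T
    ])

b, c, m = 2.0, 3.0, 5          # illustrative values with c/2 < b < c
A = repeated_snowdrift_payoffs(b, c, m)
names = ["S", "R", "T"]

# Best response to T: compare the T column of the matrix.
col_T = A[:, 2]
print("payoffs against T:", dict(zip(names, col_T)))
print("T is a best response to T:", col_T[2] >= col_T.max())
print("condition m*(b - c/2) > b:", m * (b - c/2) > b)
```

With these numbers \(m(b-\frac{c}{2}) = 2.5 > b = 2\), so \(T\) (tied with \(S\)) is indeed a best response to \(T\).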

Of course, the best response to \(R\) is still \(R\). In a population in which most individuals use the selfish strategy \(R\), someone who tries to use \(T\) will get the payoff \(b-c<0\). In a population model based on the replicator equation, the tit-for-tatters will die out. So how does cooperation get started?

This sort of question is the basis of a currently active research program.

Let's turn to the book. As mathematical books go, it is pretty user-friendly and aimed at a general scientific audience. Evolutionary game theory is certainly a big part of the book, but it is integrated into a general treatment of the mathematics of evolution.

What did I learn? Nowak treats not only the differential equations approach to evolutionary game theory that I was familiar with but a discrete stochastic approach as well. The starting point is the Moran process. Imagine a finite population of size \(N\) consisting of Type \(A\) individuals and Type \(B\) individuals. The population state is \(i\), \(0 \le i \le N\), the number of Type \(A\) individuals. In each time period one individual is randomly chosen to reproduce and one (possibly the same) is randomly chosen to die; the offspring replaces the individual that dies, so the population size stays at \(N\). The population state becomes \(i-1\), \(i\), or \(i+1\) with certain probabilities. This is a Markov process with two absorbing states, \(0\) and \(N\): once the population arrives at one of these states, it stays there. The Moran process is a model for what biologists call neutral evolution. Either Type \(A\) or Type \(B\) eventually dies out, despite the lack of any selective advantage. Much, possibly most, evolution is neutral in this sense.
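The Moran process is easy to simulate, and doing so is a good way to convince yourself that neutral drift really does drive one type to extinction. Here is a minimal sketch of my own (not code from the book); a single neutral mutant should fix with probability \(1/N\).

```python
import random

def neutral_moran(N, i):
    """Neutral Moran process with i type-A individuals out of N.  Each step,
    one random individual reproduces and one random individual (possibly the
    same) dies and is replaced by the offspring.
    Returns True if type A takes over, False if it dies out."""
    while 0 < i < N:
        reproducer_is_A = random.random() < i / N
        dier_is_A = random.random() < i / N
        i += int(reproducer_is_A) - int(dier_is_A)
    return i == N

# A single neutral A mutant in a population of N should fix with probability 1/N.
N, trials = 20, 20000
fixed = sum(neutral_moran(N, 1) for _ in range(trials))
print("estimated fixation probability:", fixed / trials, "  theory 1/N:", 1 / N)
```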

Fitness can be introduced into the Moran process by biasing the selection of the individuals to die and reproduce using a parameter \(r\) that measures the ratio of the fitness of \(A\) to the fitness of \(B\). More generally, the relative fitness of \(A\) and \(B\) can depend on the population state \(i\). This is precisely the evolutionary game theory point of view.
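For the constant-fitness case there is a clean closed form (it appears early in the book, if memory serves): if \(A\) has relative fitness \(r\), a single \(A\) mutant in a population of size \(N\) takes over with probability $$ \rho_A = \frac{1-\frac{1}{r}}{1-\frac{1}{r^N}}, $$ which reduces to the neutral value \(1/N\) as \(r \to 1\).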

For example, one can imagine a population of relaxers (Type \(A\)) and tit-for-tatters (Type \(B\)). During one time period each plays an average of \(m\) games of Snowdrift against random opponents and gets payoffs. The payoffs determine relative fitness. Then one dies and one reproduces. Do a small number of tit-for-tatters have a good chance of taking over?
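One can answer such a question numerically. The sketch below is my own reading of this setup, with two modeling choices of mine that the book may make differently: each random pairing is treated as an \(m\)-round repeated Snowdrift with the expected payoffs from the table above, and payoff is converted to a positive fitness via \(e^{s\cdot\text{payoff}}\) (payoffs here can be negative, so they cannot be used directly as selection weights).

```python
import math
import random

def fixation_prob_of_T(N, b, c, m, s=0.1, trials=5000):
    """Frequency-dependent Moran process for a population of relaxers (R) and
    tit-for-tatters (T).  Each pairwise encounter is treated as an m-round
    repeated Snowdrift, using the expected payoffs from the table above.
    Fitness = exp(s * expected payoff), one common way to keep fitness positive.
    Returns the estimated probability that a single T mutant takes over."""
    fixed = 0
    for _ in range(trials):
        j = 1                                  # number of tit-for-tatters
        while 0 < j < N:
            # expected payoff per encounter, averaged over the other N-1 players
            f_T = ((j - 1) * m * (b - c / 2) + (N - j) * (b - c)) / (N - 1)
            f_R = (j * b + (N - j - 1) * 0.0) / (N - 1)
            w_T, w_R = math.exp(s * f_T), math.exp(s * f_R)
            # reproduce in proportion to fitness, die uniformly at random
            repro_is_T = random.random() < j * w_T / (j * w_T + (N - j) * w_R)
            dier_is_T = random.random() < j / N
            j += int(repro_is_T) - int(dier_is_T)
        fixed += (j == N)
    return fixed / trials

N, b, c, m = 20, 2.0, 3.0, 5
print("P(single tit-for-tatter fixes):", fixation_prob_of_T(N, b, c, m))
print("neutral benchmark 1/N:        ", 1 / N)
```

Because a lone tit-for-tatter earns \(b-c<0\) against a population of relaxers, the estimated fixation probability comes out below the neutral benchmark \(1/N\), which is the quantitative version of the ``how does cooperation get started?'' problem.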

These ideas are extended in a chapter on evolutionary graph theory, where Nowak presents ``first steps into a largely unexplored territory.'' In one version of evolutionary graph theory, the \(N\) individuals in the population occupy the vertices of a graph. The graph models, for example, spatial or social relationships. During a time period, each individual plays a game, for example Snowdrift, against random opponents who are adjacent in the graph. Payoffs, which yield relative fitnesses, are calculated. An individual is randomly chosen to reproduce, and an adjacent individual is randomly chosen to die and be replaced.
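Here is one way such an update rule might look in code. This is my own sketch of the scheme just described, not the book's algorithm: every individual plays one randomly chosen neighbor, payoffs are turned into positive fitnesses with an exponential (again my choice), the reproducer is drawn in proportion to fitness, and a random neighbor of the reproducer is replaced by its offspring.

```python
import math
import random

def birth_death_step(strategies, neighbors, payoff, s=0.1):
    """One birth-death update on a graph: every individual plays one random
    neighbor, payoffs give (positive) fitnesses, one individual is chosen to
    reproduce in proportion to fitness, and a random neighbor of the
    reproducer dies and is replaced by the offspring."""
    fitness = {}
    for v, strat in strategies.items():
        opponent = random.choice(neighbors[v])
        fitness[v] = math.exp(s * payoff[(strat, strategies[opponent])])
    verts = list(fitness)
    reproducer = random.choices(verts, weights=[fitness[v] for v in verts])[0]
    dier = random.choice(neighbors[reproducer])
    strategies[dier] = strategies[reproducer]

# Pairwise payoffs for an m-round repeated Snowdrift between tit-for-tat (T)
# and relax (R), as in the table earlier; b, c, m are illustrative values.
b, c, m = 2.0, 3.0, 5
payoff = {("T", "T"): m * (b - c / 2), ("T", "R"): b - c,
          ("R", "T"): b,               ("R", "R"): 0.0}

N = 30
neighbors = {v: [(v - 1) % N, (v + 1) % N] for v in range(N)}       # a cycle
strategies = {v: ("T" if v < N // 2 else "R") for v in range(N)}    # half and half
for _ in range(3000):
    birth_death_step(strategies, neighbors, payoff)
print("tit-for-tatters remaining:", sum(s == "T" for s in strategies.values()))
```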

There is also a chapter on spatial games, in which a different approach is used. Suppose each position on a spatial grid is occupied by a player who plays Snowdrift with each of his neighbors, always using tit-for-tat or relax. Each player accumulates a total payoff. Then each player observes the payoffs of his neighbors, and, for the next round, adopts the strategy of the neighbor whose payoff is highest. The results are plotted as a pattern of colors. One can see islands of cooperation form, grow, collide, fragment, and disappear. One can also observe beautiful patterns that Nowak calls ``Persian carpets."
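For readers who want to see the carpets for themselves, here is a compact sketch of this kind of update, again my own construction rather than the book's code: strategies 0 (tit-for-tat) and 1 (relax) sit on a periodic grid, each site plays its four neighbors, and each site then copies the highest-scoring strategy in its neighborhood (here taken to include the site itself). Plotting `grid` after each step with your favorite imaging tool should produce patterns of the kind Nowak describes.

```python
import numpy as np

def step(grid, payoff):
    """One round of a spatial game with 'imitate the best' updating: every site
    plays each of its four grid neighbours (periodic boundary), accumulates the
    payoffs, then copies the strategy of the highest-scoring site in its
    neighbourhood, including itself."""
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    score = np.zeros_like(grid, dtype=float)
    for dx, dy in shifts:
        opp = np.roll(grid, (dx, dy), axis=(0, 1))
        score += payoff[grid, opp]          # my strategy's payoff vs this neighbour
    best, best_score = grid.copy(), score.copy()
    for dx, dy in shifts:
        nb_score = np.roll(score, (dx, dy), axis=(0, 1))
        nb_strat = np.roll(grid, (dx, dy), axis=(0, 1))
        better = nb_score > best_score
        best = np.where(better, nb_strat, best)
        best_score = np.where(better, nb_score, best_score)
    return best

# Strategy 0 = tit-for-tat, 1 = relax; payoff[i, j] is i's payoff against j in
# an m-round repeated Snowdrift, as in the table above (illustrative values).
b, c, m = 2.0, 3.0, 5
payoff = np.array([[m * (b - c / 2), b - c],
                   [b,               0.0 ]])
rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(50, 50))
for _ in range(100):
    grid = step(grid, payoff)
print("fraction of tit-for-tatters:", np.mean(grid == 0))
```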

The book ends with four "application" chapters on Nowak's work on HIV infection, virulence in parasites, cancer, and language. Much of this work is pathbreaking and famous. I'll just make two brief comments. First, the view of cancer is interesting: cells, like individuals, both compete and cooperate; cancer is a breakdown of cooperation. Second, the chapter on language can serve as an introduction to linguistics. Nowak views the use of language as a game in which successful communication leads to higher payoffs and hence reproductive success, either biological or cultural.

So the book is good. If you buy it or check it out from your library, you'll probably find it enjoyable and stimulating. One question it raises is, ``Why don't we see more evolutionary game theory at the Snowbird meeting?'' The group that works on evolutionary game theory doesn't seem to have much overlap with our activity group. I don't know why that is, but I think our members would find this work fascinating, and might have something to contribute. Snowbird organizers, please take note!

Editor's note: John A. Adam has a new book, A Mathematical Nature Walk, published by Princeton University Press in 2009. If you are interested in reviewing this book, please contact the Book Reviews editor.
