33. Evolutionary Game Theory: Fighting and Contests

Prof: Okay,
this is the second lecture on behavior.
And today what I want to do is
I want to give you one of the major analytical tools for
dealing with behavior, which is evolutionary game
theory. And before I get into the body
of the lecture, basically I want to tell you at
the beginning of the lecture where it came from–
it came out of Economics–and I want to tell you that I’m going
to give you two examples of particular games;
one is the hawk-dove game, the other is the prisoner’s
dilemma. And it turns out that neither
of these games is really directly testable with good
biological behavioral examples. So instead of actually testing
these ideas with biology, what I’m going to show you is
how biology introduces interesting qualifications to
the assumptions of the games. And so I’ll give you the two
games, and then I’ll give you a series
of biological examples, and then comment on how that
really changes our thinking about the assumptions of the
games. But before I go into all of
that, I want to signal– and I’ll come back to this in
the last slide– that evolutionary game theory
is one of the parts of evolutionary biology and
behavior that connects this field to economics and political
science, and that the prisoner’s dilemma
model, which I will present in the
middle of it, actually is a particular
embodiment of the tragedy of the commons,
which is, of course, affecting the way that we use
all of our natural resources, and methods for solving the
tragedy of the commons are actually central issues in both
economics and in political science.
So this is actually an area in
which there are strong trans-disciplinary connections
of ideas. So the basic idea behind an
evolutionary game is that what you do depends on what everybody
else is doing, and that means it’s going to be
frequency dependent. In other words,
if I decide to be aggressive, in a certain environment,
the success of that will depend upon the frequency with which I
encounter resistance. So game theory is fundamentally
frequency dependent. And the central idea in
evolutionary game theory is that of the evolutionary stable
strategy. So I’m going to show you that
that, in fact, is equivalent to a Nash
equilibrium. So when you’re playing a game
against another member of a population it’s not like playing
a game against the abiotic environment,
because your opponents can evolve.
It’s not like you’re playing a
game against, you know, in a sense of staying
in the game against winter temperatures,
or something like that. You actually have an opponent
that has a strategy, and the strategy can change.
So that makes the whole
analysis of games the analysis of a move and a counter-move,
and that counter-move can either be dyadic,
where you’re playing against one other player,
or it could be that you could conceive of it as playing
against the entire population. So there are some nuances there.
And in that sense evolutionary
game theory is really very fundamentally co-evolutionary;
it’s always about how your strategy co-evolves with the
other strategies that might pop up in the population.
But it’s strategies within a
population. Co-evolutionary game theory is
really not applied so much to one species evolving against
another. It’s usually,
how will my behavior do against the other behaviors that are
present in the population? Okay, where did it come from?
Well here are some of–here’s a
little gallery of heroes. Basically it comes out of von
Neumann and Morgenstern’s book on game theory,
which was published I think in 1944,
as then further developed by people like John Nash and
Reinhardt Selten. So these guys more or less
founded it. And you can actually go into
Maynard Smith’s book on evolutionary game theory and
pull out of the appendix of von Neumann and Morgenstern one of
the payoff matrices that they use;
I mean, you can see that these guys were actually studying that
book, and then developing it in an evolutionary context.
John von Neumann was a
Hungarian genius who managed to show how some of the basic
problems in quantum mechanics could be connected and
explained; he did that back in 1929,
and then he went on more or less to invent the idea of an
operating system for computers. So he contributed very greatly
to the conceptual underpinnings of the information revolution;
and he also invented game theory.
So John von Neumann was a
bright guy. John Nash, of course,
is famous from the book and the movie A Beautiful Mind,
as the fellow who saw at Princeton that the stable
solution to a game that is being played between two parties is
that the stable solution is the one that you play when everybody
else is playing their best possible strategy.
That’s the insight that he had
in the bar at Princeton, as a graduate student,
and then he succumbed to his schizophrenia and didn’t really
recover until he was in his sixties.
And Reinhardt Selten is a
German professor who developed game theory in the context of
economics, and generalized it into all
sorts- using all sorts of alternative assumptions.
And these two guys shared the
Nobel Prize in Economics for it. So that’s where it came from.
And these are some of the key
events that I’ve just gone over. And then these ideas were
developed by George Price, a really remarkable guy,
and John Maynard Smith, and applied in biological
behavior. And John’s book came out in
1982. So it had a lot of impact.
And if John had just been
willing to recognize and acknowledge that in fact an ESS
is a Nash equilibrium, he probably would’ve shared in
that Nobel Prize. But he didn’t.
This is John.
He was a very good dishwasher.
I spent a lot of time with him.
So that’s one of my photos.
And really the key figure that
stimulated John, just as he stimulated Bill
Hamilton, was George Price. And George Price was developing
these ideas in the context of the puzzle of altruism and
cooperation; how did cooperation and
altruism ever come to be in an evolutionary context?
And you’ll see that when we
come to the prisoner’s dilemma and I talk about Axelrod’s
experiment with competing different strategies against
each other on a computer, that Price really actually
contributed twice to the solution,
or to our thinking, about this problem of where did
altruism and cooperation come from?
One, in the context of game
theory, and once in the context of kin selection and
hierarchical selection. Okay, here are the basics.
The basic thing that you ask in
game theory is can any conceivable alternative invade?
And this turns out to be
why–by invade I mean will a mutation come up that modifies
behavior, and if it comes up, will it increase in the
population? If it’s going to increase in
the population, it will do so because it has
greater lifetime reproductive success.
So if that gene affects a
behavioral strategy in such a way that over the course of the
lifetime it increases the reproductive success relative to
other strategies, that will be what we call
invasion. So if alternatives cannot
invade, then that means that the resident strategy,
the one that’s already there, is an evolutionary stable
strategy. So the stability means
stability against invasion; stability against alternatives.
Now you might ask yourself,
how do we know what all the alternatives are?
And the answer is in reality we
don’t. But in theory we can imagine,
if we restrict our attention to a certain scope of possible
behaviors, that the alternatives are all
the possible combinations of behaviors within that restricted
set. Okay?
So that is actually the thing
that’s going on. The theorist is sitting there
and saying, “Should I be more aggressive or less
aggressive?” Well all the possible behaviors
consist of not being aggressive at all, or being very
aggressive, and everything in between.
So those would be the ones that
you tested against. You’ll see how that works when
we go through a couple of examples.
So the ESS is then a strategy
that resists invasion, and it turns out that it’s
exactly the same thing as a Nash equilibrium.
So when John Nash solved this
problem for game theory, back in Princeton in I think
1951, he in fact was also at the same time solving the problem
that Maynard Smith and George Price posed in I think 1973;
just in a different context. So here’s a simple game,
and this is one of the first that Price and Maynard Smith
cooked up, to try to illustrate how you
would apply this thinking to animal behavior.
And they called it the
hawk-dove game. So two animals come together,
and they’re going to fight over a resource,
and that resource has value V, and that means that the fitness
of the winner will be increased by V.
The loser doesn’t have to have
zero fitness, it’s just what–it’s the
increment in fitness which is determined by this particular
encounter that we’re talking about.
So we say, “Well,
they can have one of two strategies;
they can be hawks or doves.”
And the idea is that the hawk
strategy is that you escalate and you continue to fight either
until you’re injured, in which case you have to back
off because you can’t fight anymore,
or until you win and the opponent retreats,
in which case you get the whole thing.
And the dove strategy is that
you go up and you display, and if the opponent escalates,
you back off immediately and run away,
and if the opponent doesn’t escalate you’ll see that you’ll
share the resource. Okay?
If two hawks encounter each
other, then one or both are going to be injured,
and the injury will reduce fitness by a certain cost.
So being a hawk has a benefit
in that you can be aggressive and acquire resources,
but it has a cost in that if you run into another hawk,
you can get beaten up and injured.
So this is sort of the
fundamental intellectual construct of game theory;
it’s a payoff matrix. And the idea is that it lays
out, for the things on the left, what happens to them when they
interact with the things on the right.
So when a hawk interacts with a
hawk, this is its payoff. When it interacts with a dove,
this is its payoff. When a dove interacts with a
hawk, this is its payoff, and when it interacts with a
dove, that’s its payoff. I’m going to take you through
that. So if a hawk encounters a hawk,
it has a fifty percent chance of winning and a fifty percent
chance of being injured. So its payoff is one-half of
the benefit minus the cost. So you just see we’re kind of
averaging that payoff over many such possible encounters.
So the assumption here is that
hawks are total blockheads and they escalate blindly;
they disregard differences in size and condition;
they’re really stupid, they just go in there and they
fight for the resource, and they don’t have any nuance
to them at all. The dove will give up the
resource. If a hawk encounters a dove,
it gets the resource, the dove gets zero.
So it gives it up and the hawk
gets it; and that’s what these entries
in the matrix mean. Okay?
So the hawk is encountering the
dove, the dove is encountering the hawk, the hawk gets V,
the dove gets zero. That doesn’t mean it has zero
fitness, it just means that its fitness doesn’t change because
of the encounter. It doesn’t get anything in
addition, but it also doesn’t lose anything.
So you can think of the dove as
a risk-averse strategy. When a dove meets a dove,
they share it. They sort of shake hands and
say, “Hey, 50/50.”
Now if a strategy is going to
be stable, then it must be the case that
if almost all members of the population adopt it,
then the fitness of the typical member is greater than that of
any possible mutant; otherwise a mutant could
invade, and that would mean the strategy wasn’t stable.
So in this case if we let W of
H be the fitness of the hawk, and W of D be the fitness of
the dove, and E of H,D be the payoff to
an individual adopting a hawk against a dove–
and we have two possible strategies,
I and J; so these are going to be,
in general, what we’ve instanced by hawk
and dove here– I is going to be stable if the
fitness of I is greater than the fitness of J.
And if the mutant J is at very
low- when we assume the mutant is at very low frequency.
So if I is going to be stable,
at very low frequency, then when I encounters I,
it has a higher fitness than when J encounters I.
Or when I encounters I,
it has the same fitness as when J encounters I.
And when I encounters J,
it has a greater fitness than when J encounters J.
So this is just a way of being
very careful and logical about laying out the different
possible relationships of fitness on encounters.
Now what happens?
Well dove is not an ESS.
If a population is 100% doves
and one hawk pops up, it’s going to interact almost
all with doves. It’s not going to run into any
hawks. It’s just going to go around
beating up doves and taking away the spoils.
So it will invade.
Hawk will be an ESS if the
payoff of an encounter is greater than the cost of the
encounter. Okay?
Now even if the population is
100% hawks, and every other individual it
encountered is somebody that fights and beats you up,
that will be stable if V is greater than C.
But what happens if V is less
than C? Well if the cost of injury is
high, relative to the reward of victory, then we expect mixed
strategies. That means the following:
if we–well I’ll ask you to play this.
Just think about this situation
a little bit, and I would like you to take
just a moment to explain what’s going on to each other,
and then I’ll ask one of you to tell me what happens when you
start with 100% doves and a hawk mutant pops up,
and another of you to tell me what happens when you have 100%
hawks and a dove mutant pops up, when this condition is the
case–okay?– when it really hurts a hawk to
encounter another hawk? So take a minute to describe to
your partner what that frequency dependence is like,
and then I will ask one of you to replay each of those cases.
Prof: Okay, let’s go.
Who would like to explain what
happens when you have a population which is 100% hawks
and a dove crops up as a mutation?
What happens?
Student: Mutation by the
hawk doesn’t, because the doves are able to
>. Prof: Okay.
Why did that happen?
Student: Because every
time it repeats it, every time
>, and there’s never a situation
where>. Prof: Well actually it
doesn’t happen Manny; it’s not quite like that.
Remember what the payoff is for
the dove. When the dove encounters a
hawk, its payoff- its fitness doesn’t alter.
Another idea.
Student: It will
increase in its cost because since there’s a dove reaching
out at the hawk as a dove, it gains I guess a lot more
than it gains just because of the hawk.
And you said the fitness of the
dove wouldn’t change. So that one dove will at least
be present. So it will
>. Prof: Yes,
what’s the average fitness of the hawks in that population?
Student: One-half V
minus C. Prof: Yes,
and V is less than C? Student: Yes.
Prof: So it’s a negative
isn’t it? Is 0 bigger than a negative
number? What happens to the doves?
They increase.
They increase because they
actually don’t bear any cost at all when they run into a hawk;
their fitness is not decreased. And so basically they are a
neutral allele that’s introduced into the population,
and if they have perfect heredity, they start
reproducing. Right?
And basically what’s going on
is that the hawks are mutilating each other.
They’re damaging each other so
much that even though the dove’s fitness is zero on this scale,
it’s still greater than the hawk’s,
which when the hawks are mostly encountering hawks is negative,
on this scale. Okay, now let’s turn it around.
What happens when it’s all
doves and a hawk enters the population;
what happens? We have a population that we’ve
just made in our mind. It’s 100% doves and a hawk
comes in. Student: It really
depends if it’s a hawk. Prof: Go further.
Student: So if there’s
one hawk,>.
Prof: Yeah,
it goes like gangbusters. It only meets doves;
it never gets beaten up by another hawk.
Student: So if there’s
>. Prof: Yes,
and so it just keeps going. Right.
So from either side,
from either 100% doves or from 100% hawks, the vector is
towards the middle somewhere; and where it’s going to
stabilize depends on the relationship of V and C.
That’s why this is a mixed
strategy. Neither strategy is an
evolutionarily stable strategy. The only reason that they can
persist, with this relationship of V to C, is that they come to
some kind of intermediate frequency.
If there’s too many hawks the
doves will win out, and if there are too many doves
the hawks will win out. Okay now, so that’s one- that’s
an example of a game that will result in a mixed strategy.
Now let’s look at the
prisoner’s dilemma. Okay?
So this is the payoff matrix
for player one, this is the strategy of player
one, and this is the strategy of player two.
And C stands for cooperate and
D stands for defect. So basically the reason this
game is set up this way is that it’s trying to show you that it
would be better for both players to cooperate,
but both players are actually motivated to defect,
and so if you have short-term selfishness,
which is determining the outcome, defection will win over
cooperation. So that you will not,
in this circumstance, just playing this game one
shot, you will not get the evolution of cooperation and
altruism out of the prisoner’s dilemma.
Instead you will get the
tragedy of the commons. So the entries here.
This is the expected value of
cooperator playing cooperator; cooperator playing defector;
defector playing cooperator; and defector playing defector.
And if we put in some
particular numbers that actually represent an instance of the
general conditions, these particular numbers are
chosen in such a way that defection will in fact be
selected. So cooperation will be an ESS
if the expected value of C playing C is greater than the
expected value of D playing C. Okay?
And that’s not the case.
D will be an ESS if the
expected value of D playing D is greater than the expected value
of C playing D; which is true.
Now look at the payoffs:
3 is greater than 1. If the population were all
cooperators, everybody would get 3.
If the population is all
defectors, everybody only gets 1.
But because of the way the
payoff matrix is set up, with the interactions between
the cooperators and the defectors,
this is the evolutionary stable strategy,
and this one, which is great for the group,
is not stable against invasions by defectors,
because the payoff to a defector, who is playing against
a cooperator, is even greater.
But when a defector plays
against a defector, life gets pretty unpleasant.
So in fact this is the tragedy
of the commons. So the general condition for
this, okay, if we do the algebra
rather than the arithmetic, is that the stable strategy is
always to defect from the social contract,
always not to cooperate, if T is greater than (*>*) R,
R *>* P, P *>* S, and R *>* than the average of S
and T. So that this all has been
analyzed in detail, and this is sort of the
paradigmatic social science game that is used in many contexts.
Now what if you play it again
and again? This was the first idea about
how even in this circumstance, even if you’re playing a
prisoner’s dilemma game, with rewards set up like this,
you could get the evolution of cooperation.
Just do it again and again.
So you’re not just playing
once, you’re playing many times against the same person.
And a very simple strategy
turned out to work. Bill Axelrod,
at the University of Michigan– he’s a political
scientist–said, “I want to hold a computer
tournament, and I want everybody around the
world who’s interested in this issue to send me their computer
program to play against other computer programs,
in an iterated prisoner’s dilemma.”
And it turned out that a very
simple one did extremely well, and that is Tit-for-Tat.
So you cooperate on the first
move–if you’ve run into a defector, you get beat up by
him; if you run into a cooperator
you win, both of you win. And then you do whatever the
guy did last time. So the essential features of
Tit-for-Tat, that make it work, is that it retaliates but it’s
forgiving, it doesn’t hold a grudge.
The other guy defects on you,
you’re going to punish him. If he switches to cooperation,
you say, “Oh fine, I don’t hold a grudge,
I’ll cooperate with you on the next time.”
So after a huge amount of
research, it turns out that there are
some extremely nuanced and complicated strategies that can
do a little bit better than Tit-for-Tat.
But the appeal of Tit-for-Tat
is its simplicity. It doesn’t take very much
cognitive power to implement this behavioral strategy.
It doesn’t take very much
memory to implement it. Okay?
It seems to be something which
is simple and robust and that wins.
Now as soon as you put in
space, you can get a much more complex strategy.
And the take-home from that is
that if you have what’s called population viscosity,
which means that particular individuals tend to encounter
each other more spatially than if they’re just randomly mixed
up in the population, that promotes cooperation.
So Martin Novak,
up at Harvard–actually he was at Princeton when he did this–
he came up with a whole lot of nice,
two-dimensional representations of those games.
And this is one possible
outcome here. Okay?
So here blue is a
cooperator–and these guys are, by the way, playing prisoner’s
dilemma, and they’re playing prisoner’s
dilemma against their neighbors; they’re not playing randomly in
space, they’re actually playing against the neighbors that are
physically sitting right there. So blue represents somebody
that was a cooperator on the previous round and is a
cooperator now; whether they retain in the
population or not depends on whether they’re losing or
winning in the encounters. Green is a cooperator that was
a defector. And here you can see that here
are some cooperators that have won against some defectors,
and they’re forming a little ring right around that little
blue island of cooperation. And red is a defector that was
a defector, and yellow is a defector that was a cooperator.
And what happens in this
particular game is that the percent cooperation goes up,
comes down, stabilizes right at 30%.
So in a situation in which–the
prisoner’s dilemma suggests, if you just consider an
interaction in isolation, it’s going to be 100%
defectors. Just putting in space and
giving individuals a chance to interact repeatedly with other
individuals creates a situation where often cooperators are
actually interacting with cooperators,
and they’re getting a win, and as soon as they build up a
little spatial island of cooperation,
they do great. So they hold their own in a sea
of defectors, just due to the two-dimensional
nature of the interaction. Okay, so thus far in the
lecture I have give you pretty abstract mathematical kinds of
stuff. And what I now want to do is go
into a series of biological examples.
And the biological examples are
not direct tests of evolutionary game theory.
What they are is the
application of game theoretical thinking to biological contexts,
that then inform us about the assumptions that we’re making in
the games. And one of the early
applications was to the bowl-and-doily spider.
So this is a female
bowl-and-doily spider. It doesn’t really show her bowl
and her doily, but basically what she does is
she spins a web that looks like a bowl,
and then it has a layer down below it that looks like a
doily, and she puts up trip lines that
go up above it, so that insects that are flying
along hit one of these trip lines and fall into the bowl,
and she’s sitting on the doily and she comes up and grabs it.
And it’s on the doily that the
mating interactions take place. So this figure should look
pretty familiar to you from the last lecture.
Insemination in spiders works
pretty much like insemination in dung flies.
The probability that a male
will fertilize eggs increases from the start of copulation up
to a certain point, where he’s getting perhaps 90
or 95% of them. So he’s getting diminishing
marginal returns as he sits on the female.
And in this study the contrast
was between a resident male who actually had gone in and had
successfully displayed to the female,
and knew whether or not copulation was actually now
taking place, or was going to take place,
and a new intruder who’s coming in.
So if the resident is somebody
who’s going to have this experience,
and the intruder coming in has no idea what’s been going on–
okay?–we will assume that the intruder knows nothing about the
shape of this curve; it may know that the curve has
started but it knows nothing about the shape.
So it has to simply make an
assumption about the average value of that female–the
intruder is a male. Whereas the resident knows what
this curve is, then the payoff to the resident
gets greater and greater, the longer the copulation goes
on, and then it drops towards the end of the copulation.
It’s already gotten 90% of the
eggs. Okay?
And so actually this happens
pretty quickly, which is kind of nice;
I mean, if you’re doing a behavioral study in the field,
it’s nice to have it over quickly so you can get your data
quickly. Only seven minutes after the
beginning of insemination that female doesn’t have very much
more added value to that male and he would be better going
off– as you know from the marginal
value theorem, that line, at the tangent,
is going to be crossing probably somewhere up in here–
it’s good for him to jump off, try to go find another female.
Whereas that intruder coming
in, not having so much information about the system,
at least on a simple assumption, just thinks,
oh, the female has a certain kind of average value,
and I’m not- I haven’t been able to copulate with her yet,
so this is what I expect. Okay?
Well if you look at the actual
behavior of spiders. This is the observed,
this is the predicted. So the predicted is that the
percentage of fights that would be won by the resident would go
up; he would fight really hard if
he were interrupted, after a certain point in
copulation, assuming he could start over
again, and that after he had been
copulating for seven minutes, he wouldn’t care anymore.
So the prediction is intensity
of fighting would peak and then drop;
and the observed values seem to follow that pretty well.
So here’s the twist on game
theory. The cost-benefit ratio in the
payoff matrix is being altered by the behavior of copulation,
and one of the participants knows and the other one doesn’t.
So the thing that this example
introduces into evolutionary game theory is the whole issue
of who has information on the potential payoffs of the game,
and it shows that that makes a big difference.
And that’s not in the
assumptions of hawk versus dove; it’s not in the assumptions of
the prisoner’s dilemma. This is some important aspect
of biology that alters that analysis.
So this just runs through what
happens. You put both males in at the
start, the bigger male will win. Okay?
So if neither of them has any
information, any more information than the other,
the big one wins. If they’re the same size,
the fights are settled by what’s the difference in reward?
So the resident will fight
longer and will be more likely to win at the end of the
pre-insemination phase, but intruders are more likely
to win after seven minutes of insemination.
If a resident is smaller than
the intruder, they persist longer,
when the reward was greater. So you will find weenie little
runts fighting great big bullies if they know something about the
reward they’re going to get. And if the costs and benefits
are nearly identical, they’ll fight until one or both
are seriously injured or in fact dead.
So this is another way,
of course, of underlining that the payoff in evolution is
number of offspring, not personal survival.
So they’re willing to risk a
lot, if there’s a lot on the line.
So that’s one biological
example, and that’s the bowl-and-doily spider.
This next example has to do
with Harris sparrows. And again it has to do with
information, but now it also has something to do with honest
signaling and perception. So there is a sense here in
which what you’re going to see is simple-minded sparrows
getting really ticked off at being deceived.
So this is a study done by
Seivert Rohwer, who’s in the museum at the
University of Washington in Seattle,
and what he noticed was that if you just go out in Nature,
you see a lot of variation in how dark the heads of the males
are, and that these guys with the
dark heads are dominant and they win most of the fights.
And by the way,
you see a lot of this in birds, that they have a signal that
they can give that is a signal of their condition and of the
likelihood that they might be able to win a fight if they got
into it. So here are some of the
experiments; and so I put up Appearance and
Reality. Okay?
I don’t know if Harris’s
sparrows analyze the problem philosophically in terms of
appearance and reality, but they certainly react to
appearance and reality. So what Seivert did was he
experimentally treated subordinates,
either by painting them black, or by injecting testosterone,
or by painting them black and injecting them with
testosterone. So if you paint them black,
they look dominant; they behave- they do not behave
dominant because they don’t know they’ve been painted black,
and they don’t have the testosterone in their system.
Do they rise in status?
If you inject them with
testosterone, they behave like they’re
dominant, but they don’t have the signal that they’re dominant
and they get beaten up, they do not rise in status.
Because basically what they’re
doing is they’re behaving in a very–according to bird
lore–they’re behaving in a very deceptive fashion.
But if you do both things,
you paint them black and you inject them with testosterone,
then you do to them essentially what evolution and their
development has already done to them,
which is that the black is actually naturally expressed in
male individuals that have higher testosterone levels:
they look dominant, they behave dominant and they
rise in status. So this is a very interesting
observation right here. Okay?
Now that’s one twist on
evolutionary game theory. It says that your perception of
your opponent, and your understanding of
whether he’s trying to deceive you or not, is an important
thing. However, there is another
issue, and that is that it always pays to assess before
escalating. When I was in grad school we
had a Great Pyrenees, a great big dog,
Aikane. His head stood about this high;
he weighed about 130 pounds. And we were living in a suburb
of Vancouver, British Columbia.
And I was out for a walk one
day with Aikane, and a great big aggressive male
German Shepherd came around the corner,
about fifty feet away, and each dog went on alert,
ears went up, hair went up on the back.
They started barker ferociously
at each other. They rushed,
at high speed, at each other.
I was thinking,
“Oh my God, I’m going to have to pull a
fight apart.” They went by each other,
like ships in the night, went about fifty feet down the
road, and they both urinated on a post and trotted proudly away.
They had managed to avoid
serious damage. Well that’s what’s going on
with Red Deer. If Red Deer get into a fight
where they are really about equally matched,
they can end up locking antlers in such a way that they can’t
extricate themselves and they will actually starve to death.
Also, if they are swinging
those nice pointed antlers around in a fight,
they can rip out the eye of an opponent,
they can get a wound that will be infected,
and they’ll get bacterial sepsis and die from an
infection. So fights are dangerous.
But fights are the only way
they can get babies. So what they do is they first
do a lot of assessing. They approach each other and
they first roar. So if you’re around moose in
the fall, or deer in the fall, you will hear roaring,
and that’s what they’re doing. Basically the ability of a male
deer to make sound is pretty directly proportional to how
good- what kind of shape he’s in.
So if that sound is equally
impressive, then they get into a thing where they do a parallel
walk; they actually walk next to each
other, kind of sizing each other up.
And it turns out that if
they’re very closely matched in size, these parallel walks can
go on for four or five hours. They’ll just be wandering all
over the landscape, trying to see who’s going to
give up first. Okay?
So that’s the parallel walk.
And in a certain number of cases one stag will finally say, “Well, it looks like I’m going to lose this one; it’s not worth fighting.”
And then finally after doing
that parallel walk thing, if it’s not resolved by then,
they will fight, and one will win and one will
withdraw. So the point of this is that
actual fights among animals are much more nuanced than the
simple Hawk-Dove game would ever have you believe.
And I think it’s probably true,
throughout all sorts of tradeoffs in evolutionary
biology, that every time there is a
significant cost, there will be some modification
of behavior, or some way of tweaking that
cost, that will arise, that will reduce the cost.
So this is all cost reduction.
They need to get their mating,
but they’re going to do it in a way that’s not going to kill
them, if at all possible. And that’s not in the simple
assumptions of any of the evolutionary games that I showed
you. So how solid are the
assumptions of this whole way of looking at the world?
Well it turns out that the assumption being made is that you’ve got a big, randomly mixed
population. If you put in kin selection,
so that the opponents can be related to each other,
so that a brother might be fighting with a brother,
the analysis gets complicated, but the result’s simple:
if you’re related to the other player you’re nicer.
That’s not surprising.
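The “relatedness makes you nicer” result can be sketched numerically. Below is a minimal replicator-dynamics simulation of the Hawk-Dove game with a Hamilton-style inclusive-fitness adjustment (each player weights the opponent’s payoff by the relatedness r). The payoff values V and C, and the function name, are illustrative assumptions, not from the lecture.

```python
# Sketch: relatedness r lowers the stable frequency of aggressive "hawks".
# Inclusive-fitness payoff = own payoff + r * opponent's payoff.
# V (resource value) and C (injury cost) are invented for illustration.
V, C = 2.0, 4.0

def ess_hawk_freq(r, steps=20000, lr=0.01):
    """Find the stable hawk frequency by iterating replicator dynamics."""
    p = 0.5  # starting hawk frequency
    for _ in range(steps):
        # expected inclusive-fitness payoffs to hawk and dove
        w_h = p * (1 + r) * (V - C) / 2 + (1 - p) * V
        w_d = p * r * V + (1 - p) * (1 + r) * V / 2
        p += lr * p * (1 - p) * (w_h - w_d)   # replicator update
        p = min(max(p, 1e-9), 1 - 1e-9)       # keep frequency in bounds
    return p

for r in (0.0, 0.25, 0.5):
    print(r, round(ess_hawk_freq(r), 3))   # hawk frequency falls as r rises
```

With unrelated opponents (r = 0) the population settles at the familiar mixed ESS p* = V/C; raising r pushes the equilibrium toward fewer hawks, which is the “you’re nicer to relatives” result in numerical form.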
If you have repeated contests
and there’s an opportunity to learn, the results will change.
If there’s no learning,
then having the series of contests really doesn’t make any
difference. So it is the ability to learn
and to remember that turns the repeated prisoner’s dilemma into
a situation in which cooperation can evolve.
So you have to have some
cognitive capacity to do that. If the population is very
small, mutants might not be rare, and the basic model has to
be altered. It turns out that asexual reproduction doesn’t matter too much. With a sexual system, we usually get to the ESS if the genetic system will allow it; the more genes affecting a trait, the more likely it is that the population will hit the ESS. If you have asymmetry in the contest, that will change the outcome, as we’ve seen with the bowl-and-doily spider, with body size, and with badge size in sparrows and in deer.
If you analyze pair-wise contests versus playing against the whole population, it turns out that in general a mutant really is playing against the whole population. There you usually have to do it on a computer; it’s hard to analyze analytically. But it doesn’t make a huge difference. Okay?
So the take-home point that I want you to get from evolutionary game theory is that this is an abstract tool, and it is probably the tool of choice anytime you’re looking at frequency-dependent evolution of phenotypes.
It is very often good for your
mental health, as an evolutionary biologist or
behaviorist, to test some property against
the invasion of all possible mutants.
That’s a very useful criterion.
So, for example,
if you are thinking about those red grouse in Scotland who are
out in the fall in a big assembly,
and somebody says, “Oh, the reason that they do that is
so that next spring they won’t reproduce so much.”
And you ask yourself,
“What if a mutant crops up in that population that doesn’t
think like that and it’s just going to reproduce like
gangbusters, no matter how dense the
population is?” That little thought process
tells you the explanation that was being given doesn’t work,
because that selfish mutant will invade.
So it’s a very useful criterion.
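That invasion thought experiment can be made concrete in a few lines: give a hypothetical “reproductive restraint” resident a lower fitness than a selfish mutant and track the mutant’s frequency. The fitness numbers here are invented purely for illustration.

```python
# Sketch of the invasion test: can a rare selfish mutant spread in a
# population of residents that restrain their reproduction?
# Fitness values are invented for illustration only.
def invades(w_mutant, w_resident, q=1e-3, generations=200):
    """Track a rare mutant's frequency under fitness-proportional growth."""
    for _ in range(generations):
        mean = q * w_mutant + (1 - q) * w_resident   # mean population fitness
        q = q * w_mutant / mean                      # mutant's new frequency
    return q > 0.5

# Restrained residents leave 2 offspring; the selfish mutant leaves 3.
print(invades(w_mutant=3.0, w_resident=2.0))   # True: restraint is not an ESS
```

The mutant starts at one in a thousand and still takes over, which is exactly why the “they assemble so they won’t reproduce so much next spring” explanation fails the invasion criterion.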
And I’d like to recommend Ben
Polak’s course. Ben is a very good teacher.
Ben’s gotten teaching awards.
He teaches an Econ course on
game theory, and it will lead you through this stuff.
And Ben is very good at
actually having you do homework assignments in which you solve
games; which is more than this course
has time for. So if you want to get your head
around this, I recommend Ben’s course.
And next time we’re going to do
mating systems and parental care.