PENULTIMATE DRAFT, FINAL VERSION FORTHCOMING IN THE EVOLUTION OF
COOPERATION VOL.2: SIGNALLING, COMMITMENT & EMOTION (MIT PRESS)
False advertising in
biological markets:
Partner choice & the problem of reliability
Ben Fraser
9/25/2011
This is the penultimate draft of my chapter in a forthcoming book on the evolution of
cooperation, edited by Kim Sterelny, Richard Joyce, Brett Calcott and myself.
The partner choice approach to understanding the evolution of cooperation builds on
approaches that focus on partner control by considering processes that occur prior to
pair or group formation. Proponents of the partner choice approach rightly note that
competition to be chosen as a partner can help solve the puzzle of cooperation (Noe
2006; Miller 2007; Nesse 2007). I aim to build on the partner choice approach by
considering the role of signalling in partner choice. Partnership formation often
requires reliable information. Signalling is thus important in the context of partner
choice. However, the issue of signal reliability has been understudied in the partner
choice literature. The issue deserves attention because – despite what proponents of
the partner choice approach sometimes claim – that approach does face a cheater
problem, which we might call the problem of false advertising in biological markets.
Both theoretical and empirical work is needed to address this problem. I will draw on
signalling theory (Maynard-Smith & Harper 2003; Searcy & Nowicki 2005) to
provide a theoretical framework within which to organise the scattered discussions of
the false advertising problem extant in the partner choice literature. I will end by
discussing some empirical work on cooperation, partner choice, and punishment
among humans (Barclay 2006; Nelissen 2008; Horita 2010).
The Problem of Cooperation
Numerous definitions of ‘cooperation’ have been offered in the biological literature.
Some researchers use ‘cooperation’ very generally to cover all acts by one individual
that benefit one or more other individuals (Sachs et al. 2004, 137). Others consider
such usage too liberal, since it counts as cooperative behaviours that benefit others
only incidentally, and restrict ‘cooperation’ to behaviours that have been selected
because they benefit others (West et al. 2007, 416). Thus, while some researchers
would count an elephant that defecates and thereby feeds a dung beetle as cooperating
with the beetle, others would not. Minimally, though, the various definitions on offer
agree that cooperation involves one organism A benefiting another organism B, where
that means A’s behaviour increases B’s fitness. There is also general agreement about
when such behaviour is, from an evolutionary perspective, prima facie puzzling: it is
when the increase in B’s fitness comes at an apparent cost to the fitness of A. This is
the problem of cooperation upon which I wish to focus.
One way to solve the puzzle of cooperation is to show how, appearances aside, A is
not actually sacrificing its own fitness to benefit B.1 Hamilton (1964) put together a
good deal of the puzzle when he conceived of the pieces not as individual organisms
but as far smaller units: genes. Hamilton’s theory of kin selection showed how A
helping B to survive and/or reproduce could increase A's “inclusive” fitness (even if
doing so decreased A's “personal” fitness) so long as A and B were sufficiently close
relatives. Even after Hamilton’s elegant insight, however, much of the puzzle of
cooperation remained fragmentary.
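Hamilton's insight is standardly summarised by his well-known rule: a gene for helping can spread when the relatedness-weighted benefit to the recipient exceeds the cost to the helper.

```latex
% Hamilton's rule: helping behaviour is favoured by kin selection when
%   r b > c
% where r is the coefficient of relatedness between A and B,
%       b is the fitness benefit conferred on B, and
%       c is the fitness cost paid by A.
rb > c
```

When A and B are insufficiently related (small $r$), even large benefits to B cannot compensate A's sacrifice, which is why kin selection leaves cooperation among non-relatives unexplained.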
Partner Control Models of Cooperation
Cooperation among unrelated individuals has predominantly been viewed through the
lens of reciprocity (Trivers 1971). The key question here is how an organism A should
behave toward a given partner B over a series of encounters in order to maximise its
total pay-off over that series. Should A cooperate with B, or refrain from cooperating,
or sometimes do the one and sometimes the other? The important insight grounding
the reciprocity-based approach is that, under certain conditions, the immediate cost A
pays to benefit B can be recouped (and more) over time if B repays A’s help (whether
the repayment is in kind, or in a different currency). For example, the Tit-for-Tat
strategy in Axelrod's (1984) iterated Prisoner’s Dilemma tournament enjoyed success
because it conditionalised its own cooperative behaviour on the cooperativeness of the
strategies with which it found itself paired.
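The conditional logic of Tit-for-Tat can be sketched in a few lines. The payoff values below (T=5, R=3, P=1, S=0) are the illustrative values commonly used for the Prisoner's Dilemma; the strategy names and function signatures are mine, not Axelrod's.

```python
# A minimal sketch of Tit-for-Tat in an iterated Prisoner's Dilemma.
# Payoff values are the commonly used illustrative ones (T=5, R=3, P=1, S=0).

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move; thereafter copy the partner's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """An unconditional defector, for contrast."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play an iterated game; each strategy sees only its opponent's past moves."""
    hist_a, hist_b = [], []   # opponent's past moves, from each player's view
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)
        move_b = strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b
```

Paired with a copy of itself, Tit-for-Tat settles into mutual cooperation and both players earn the reward payoff every round; paired with an unconditional defector, it is exploited only once before withholding cooperation, which illustrates how conditionalising on a partner's behaviour limits the cost of meeting cheaters.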
Studying cooperation from the perspective of reciprocity focuses attention on what
has been called “partner fidelity” (Bull & Rice 1991), “partner verification” (Noe &
Hammerstein 1994), and “partner control” (Noe 2001). I will use the term ‘partner
control approach’ here to mean approaches to understanding the evolution of
cooperation that focus on how an individual should manage its interactions with a
given partner, and in particular on how individuals engineer the incentive structure of
a given interaction partner for their own benefit.
1
I set aside here group selectionist explanations (see Sober & Wilson 1998), which show how, under
certain conditions, genuinely fitness-sacrificing behaviour can be maintained in populations under
selection.
One reason for dissatisfaction with the partner control approach is that it is
surprisingly under-supported by empirical data. Uncontroversial examples of
reciprocity-based cooperation among non-human organisms have proven elusive.2 A
more important reason – and the focus here – is that the partner control approach
neglects important aspects of cooperation. It models only one aspect of a complex
process. As noted above, the partner control approach focuses on how an individual
manages its interactions with a given partner. However, interactions between
organisms can be imagined as consisting of three stages: a pair/group formation stage,
a decision stage (e.g. cooperate or not?) and, finally, a division stage, in which the
yield of the interaction is apportioned among those involved (Noe 1990, 79; Dugatkin
1995, 4).3 The partner control approach considers only the second of these stages.
Importantly for current purposes, that approach typically assumes that individual
agents have no control over partnership or group formation, whereas in nature it is
likely that organisms often exert some degree of partner choice
(Dugatkin 1995). The partner control approach is thus limited in scope and makes
unrealistic assumptions.
2
One reason that has been suggested for this is that the cognitive demands of managing reciprocal
interactions are rarely met (Stevens et al. 2004; Stevens & Hauser 2005). But, the cognitive demands of
reciprocity need not be high – even if most reciprocating organisms are in fact quite cognitively
sophisticated – since even plants and paramecia can trade costs and benefits in a way that counts as
reciprocity as defined above. Another reason for the dearth of clear empirical examples of reciprocity is
that excluding alternative hypotheses is difficult. The classic case of (supposed) reciprocity-based
cooperation is blood-sharing in vampire bats (Wilkinson 1984). Subsequent work has challenged this as
an example of reciprocity, since the data fail to rule out the possibility that blood-sharing results from
kin selection plus the occasional misidentification of kin, for instance, or is a form of tolerated theft
(Clutton-Brock 2009). Even demonstrating Tit-for-Tat reciprocity in action in the laboratory is
problematic, given the difficulty of designing experiments that faithfully replicate an iterated Prisoner’s
Dilemma (Noe 2006, 11).
3
There is yet another stage that must also be considered, namely, the generation of benefit stage, at
which the cooperating individuals actually produce whatever good is then distributed. Both the partner
control and the partner choice model have tended to neglect this stage, and I must do so here simply for
reasons of space. For detailed discussion, see Calcott (2008; this volume).
Partner Choice Models of Cooperation
The shortcomings of the partner control approach to the problem of cooperation have
left some researchers inclined to explore an alternative: the partner choice approach.
The emphasis here is not on how best to deal with a given partner but on “the option
of choosing and switching partners” (Noe 2006, 5). It is important to note that partner
control and partner choice models are not competing, mutually exclusive alternatives.
Rather, the partner choice approach is meant to complement the partner control
approach. I will discuss in a later section precisely how the two approaches relate to
each other. For now, I am concerned to define some key terms, before critically
discussing the partner choice approach. (The following material is drawn from a pair
of influential early papers by Noe & Hammerstein 1994; 1995.)
A ‘biological market’ exists whenever organisms engage in mutually beneficial
exchanges of resources and, crucially, when at least one of the trading individuals can
exercise choice in selecting a trading partner. Put negatively, this second requirement
says that the market metaphor does not apply when desired resources can simply be
taken (i.e. theft) or when individuals can force others to partner with them (i.e.
coercion).
In many biological markets, organisms can be divided into ‘trader classes,’
according to the kind of resource they offer. For example, numerous ant species can
be classed together as ‘protection’ traders, and this class forms a market with various
species of aphids, butterflies, and plants, which together comprise a (rather
heterogeneous) trader class dealing in nutrients. Notice that in some biological markets
there may be only one commodity on offer. For example, in some forms of hunting
there may be a marketplace for hunting skill, in which would-be hunters compete to
be chosen as participants by hunt-leaders. Even if a hunting group comprises several
specialised roles, each requiring a distinct skill-set, there may nevertheless be
competition to fill each role. For example, consider turtle hunting among the Meriam
Islanders of the Torres Strait (see Smith & Bliege-Bird 2000): hunt leaders assemble
groups comprising a boat driver, a harpooner, and several ‘jumpers’, and competition
to fill each role may result in ‘sub-markets’ for specific hunting skills. So, for a
biological market to exist it is not necessary that traders fall into exactly two classes,
although this is certainly where the emphasis falls in the partner choice literature.
Given the existence of a biological market, competition among individuals to be
chosen as a trading partner and pressure on individuals to make good partner choices
are both to be expected. There will accordingly be ‘market selection,’ defined as
“selection of traits that maximize fitness in biological markets, such as the ability to
compete with members of the same trading class, the ability to attract trading partners,
and the ability to sample alternative partners efficiently” (Noe & Hammerstein 1995,
336).
The partner choice approach to understanding cooperation, then, attempts to
explain cooperation as the result of competition to be picked as a partner in profitable
exchanges. An early statement of this approach was given by Bull & Rice (1991, 68),
who defined partner choice models as those in which cooperation is evolutionarily
stable because “an individual of species A is paired with several members of species B
for a single interaction, but A chooses to reward only the most co-operative members
of B.” Recently, Nesse (2007, 151) has discussed the approach under the heading of
“the social selection perspective,” which “shifts the focus of attention away from
decisions to cooperate or defect and abilities to detect cheating, and toward the quite
different tasks of selecting carefully among a variety of potential partners, trying to
discern what they want, and trying to provide it, so one is more likely to be chosen
and kept as a partner.” According to the partner choice approach, puzzling
cooperation – A benefiting B at some cost to itself – becomes intelligible when it
becomes clear that benefiting B allows A to either establish or maintain a mutually
beneficial interaction with B: it is an entry fee.
Although the partner choice approach is presented as complementary to, rather
than in competition with, the partner control approach, proponents of the former do
claim that it enjoys several advantages over the latter. For one thing, the partner
choice approach does not rely on unrealistic assumptions about the nature of
cooperative interactions (such as that the interacting individuals have no control over
partnership formation). For another, empirical support for partner choice models is
comparatively abundant (for example, see work by Bshary and colleagues on cleaner
fish, and a review in Sachs et al. 2004). Most importantly for current purposes, it is
claimed that the partner choice approach, unlike the partner control approach, does
not face the so-called ‘cheater problem’ (or at least faces that problem to a far lesser
extent).
The Relationship Between Partner Choice and Partner Control Models
As noted above, the partner choice approach is typically presented as complementary
to the partner control approach, rather than in competition with it. Both are supposed
to be important to fully understanding cooperation. However, the precise way in
which the approaches complement each other requires clarification. There are two
possibilities here. The two approaches may (1) model different stages of one complex
interaction, or (2) model different kinds of interaction.4 I discuss these possibilities in
turn.
One way in which the partner choice approach may complement the partner
control approach is by modelling different stages of the complex process that is
cooperation. Partner choice focuses on the “formation” stage of the cooperative
process, at which partnerships or groups come together. Partner control approaches
assume that partnership or group formation is, if not random, at least not under the
control of the interacting individuals. Partner choice models supplement partner
control models by considering the formation stage of the cooperative process in more
realistic detail.
Noe seems to see the relationship between the partner control and partner choice
approaches as complementary in the sense above. There is, though, another way to
understand ‘complementary’. Partner choice and partner control models may be
relevant – not to different stages of the one extended cooperative interaction – but to
importantly different kinds of cooperative interactions.
In some cooperative interactions, the benefits of cooperation are distributed to the
partners in sequence, serially rather than simultaneously. This is the case in so-called
‘reciprocal altruism’, such as blood sharing among vampire bats (the classic study
being Wilkinson 1984). In such cases, control mechanisms for preventing or
punishing defection are important. (Interestingly, one such mechanism may be the
threat of partner switching in response to defection, and here, the distinction between
mechanisms of partner control and those of partner choice begins to blur). So, we may
see partner control models as particularly relevant when cooperative interactions
involve the sequential distribution of benefits.
4
Thanks to one of the editors – Kim Sterelny – for pointing this out.
In other cooperative interactions, by contrast, the benefits of cooperation are not
generated and distributed sequentially. Rather, partners reap the rewards of
cooperation with each other simultaneously. Crucially, in some (perhaps many) such
cooperative interactions, the magnitude of the benefit generated by cooperation may
depend on the nature of the partner(s) involved in the interaction. When this is so –
when the size of one’s reward depends on the quality of one’s partners – then partner
choice mechanisms will be especially vital, provided of course that there is scope for
choice in the first place.
To sum up, one way to understand the claim that partner choice and partner control
models of cooperation are complementary is to see each as modelling a different stage
of the same extended interaction. In some cases, this may well be the correct way to
understand the claim. But, there is another way in which the two approaches can
complement each other, namely, by each illuminating a qualitatively different kind of
cooperative interaction. In this case, whether a partner choice or a partner control
model is likely to be most illuminating will depend (among other things, of course) on
the way in which benefits are produced and distributed. We should expect partner
choice, rather than partner control, models to be particularly helpful in understanding
cooperative interactions that generate benefits simultaneously for the cooperators,
where the extent of those benefits depends on the quality of the interacting agents.
Cooperation and the Cheater Problem
In the context of the partner control approach, the problem of cheating is as follows. If
B receives a benefit from a cooperative individual A but does not repay that benefit, or
repays less than was received, then B-type individuals may eventually replace
cooperators like A in the population of interacting individuals, since B-type
individuals enjoy the benefits of cooperation without paying any of the associated
costs. Showing how such cheating (defection, exploitation, free-riding) can be
prevented from undermining cooperation is a central concern for the partner control
approach.
Cheating is supposedly not a problem – or is at least less of a problem – for the
partner choice approach (see e.g. Noe & Hammerstein 1994, 2; Nesse 2007, 145). To
assess this claim, it is necessary to specify just what the advocates of partner choice
mean by ‘cheating’.
Noe & Hammerstein identify one kind of cheating in the context of partner choice
as reneging on a proposed trade. They write: “cheating, that is changing the value of
the commodity offered after the pair has been formed…” (1994, 6 [emphasis added]).
The claim is then that such post-choice changes of offer are often impossible, and
hence that cheating is often not a problem for the partner choice approach to
cooperation. Noe & Hammerstein write:
To our minds the cheating option can safely be ignored in the large number of cases
in which either the commodity cannot be withdrawn or changed in quality or quantity
once it is offered on the market (1994, 2)
For example, Noe & Hammerstein observe that the “food bodies of myrmecophilous
plants are examples of such irretrievable offers [since] once the plant has ‘decided’ to
provide a quantity x of food bodies, these remain available to the ants” (1994, 3). In
this case, the plant traders have a certain amount of the commodity of interest to the
ant traders, but cannot withdraw that commodity once it is offered (or at least cannot
easily do so).
One problem here is that cheating (in this sense) does not seem precluded for the
ants. Moreover, there is some evidence that plants in such partnerships can and in fact
do withdraw the commodities on offer. For example, Edwards et al. (2006) studied a
particular ant-plant mutualism and found that ‘ant shelters’ (domatia) on stems that
lose leaves – a sign that protector ants may not be patrolling enough – tend to wither
away. So, it may be that Noe & Hammerstein chose their example poorly when
making the case that cheating is precluded in biological market interactions.
In any case, even if post-choice changes of offer are impossible in many biological
marketplaces, it has become apparent that the term ‘cheating’ is used to pick out
different things in the contexts of partner control and partner choice. The partner
control approach faces something aptly called a “cheater problem” – free-riding –
while there is another thing – reneging – which is also aptly described as a cheater
problem and which the partner choice approach avoids. Cheater problems come in
many varieties.
Even if reneging is impossible in many biological marketplaces, there is a different
kind of cheater problem that can arise in the context of partner choice. To appreciate
this problem, it will help to first note the importance that proponents of the partner
choice approach assign to signalling in the context of partner choice.
Signalling and the Problem of Reliability
Noe & Hammerstein write that “trading may take place on the basis of an honest
signal that is correlated with access to a commodity, instead of being based on the
commodity itself” (1995, 336). Noe also notes that “choosing partners implies a
number of mechanisms [including] judging the partner’s quality, a memory for the
partner’s quality and location, searching strategies, judging the honesty of signals and
so on.” (2006, 5 [emphasis added]). Indeed, Noe thinks it is important to distinguish
between markets “in which commodity values are measured directly and those in
which signals play an intermediating role” (2001, 108). Bull & Rice (1991, 69) and
Sachs et al. (2004, 141) both point out that explanations for cooperation in terms of
partner choice depend on there being some way in which individuals can assess and
discriminate between potential partners. Partner assessment need not involve signals
(as will be discussed below), but signalling is one way that it can be done.
Once it has been acknowledged that signals play an important role in at least some
partner choice scenarios, the problem of reliability becomes unavoidable. In brief, the
problem of reliability is as follows. Organisms are often interested in unobservable
qualities of other organisms: a sexual rival’s fighting ability, an offspring’s hunger
level, the evasive ability of potential prey, or the quality of a potential mate. Making
adaptive decisions depends crucially on estimating these unobvious qualities. As it
turns out, potential partners, predators and prey often provide the relevant
information: they roar, beg, stot, sing or dance, for example, or signal in some other
way. The problem of reliability arises when we ask why, given the often strong
incentive to mislead signal receivers, signal senders do not do so more often. Why do
signallers not exaggerate the relevant quality to their own advantage? Why do
signalling systems not collapse as a result of receivers eventually ignoring a
cacophony of dishonest signals?
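One standard answer from signalling theory turns on differential costs: a display can remain reliable when producing it is cheap for senders that genuinely possess the advertised quality but prohibitively expensive for those that do not (the ‘handicap’ logic discussed by Maynard-Smith & Harper 2003 and Searcy & Nowicki 2005). The figures below are toy illustrative assumptions, not values drawn from any study:

```python
# A toy sketch of differential-cost (handicap-style) signal stabilisation.
# Numbers are illustrative assumptions chosen only to make the logic visible.

def signalling_pays(benefit, cost):
    """A sender profits from signalling when the benefit of being chosen
    as a partner exceeds the cost of producing the display."""
    return benefit - cost > 0

BENEFIT = 4.0            # value of being chosen as a trading partner
COST_HIGH_QUALITY = 2.0  # the display is cheap for genuinely good partners
COST_LOW_QUALITY = 6.0   # the same display is costly to fake

assert signalling_pays(BENEFIT, COST_HIGH_QUALITY)      # honest advertising pays
assert not signalling_pays(BENEFIT, COST_LOW_QUALITY)   # false advertising does not
```

When the cost of the display does not differ with sender quality in this way (or when other stabilising mechanisms are absent), low-quality senders profit from exaggeration, and it is exactly this situation that generates the false advertising problem discussed below.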
‘False Advertising’ in Biological Markets
It is now possible to state a cheater problem that faces the partner choice approach,
one that is (I claim) currently underappreciated and insufficiently addressed. The
problem is that of false advertising: a trader of one class may present itself as a better
partner than it actually is, in order to increase its chances of being chosen as a partner
by members of the other trading class. The problem of false advertising in biological
markets is a specific case of the more general problem of reliability in biological
signalling systems. The problem of false advertising differs from reneging as
described above. False advertising is not a matter of a trader in a biological market
genuinely having the relevant commodity or property but not delivering it, but rather
of the trader lacking that commodity or property while convincing others otherwise.
There is some recognition of the false advertising problem in the extant literature
on partner choice. For example, Noe (2001, 94) noted that in biological markets the
“commodities on offer can be advertised [and] as in commercial advertisements there
is a potential for false information”. Noe also mentions the possibility in some
markets for “subtle cheating” where “signals associated with the future transfer of
resources are occasionally dishonest” (1995, 338; see also Noe 2001, 94). Thus, I do
not take myself to be pointing out something overlooked by proponents of the partner
choice approach. Rather, I am suggesting that this kind of cheater problem is more
pressing than has yet been acknowledged and, further, that the treatments of the
problem offered to date are unsatisfactory.
Current Treatments of ‘False Advertising’
Attempts to address the false advertising problem for the partner choice approach
have been too sanguine in dismissing the problem, or have made misdirected efforts
to address it, or have been disunified and in need of clarification.
Dismissing the Problem
Nesse (2007, 145) in his discussion of partner choice and cooperation claims that
social selection “will select for displays of resources and selective altruism that reflect
an individual’s potential value as a partner.” Nesse is too quick to assume selection
will favour honest signalling of partner quality. One should wonder why displays that
reflect partner value would be selectively favoured, rather than those that flatteringly
exaggerate it to the displayer’s advantage. Speaking specifically about cooperation
among humans, Nesse says that “deception and cheating have been major themes in
reciprocity research, and they apply in social selection models, but their effects are
limited by inexpensive gossip about reputations and by the difficulty of faking
expensive resource displays” (2007, 145).
Nesse’s claims about the limited scope for deception in human partner choice are
questionable. For one thing, not all signals of partner value require the expenditure of
large quantities of resources. Displaying qualities like kindness, honesty, and patience
– all plausibly valuable qualities in many kinds of cooperative partnerships – may be
quite cheap in terms of energy, risk, and material resources. In addition, gossip may
not be so cheap. The risk of making enemies is a hard-to-quantify but nevertheless
real cost of gossiping, which ought not to be ignored. These brief remarks do not
settle matters any more than do Nesse’s own, but they serve to show that Nesse is
too quick in dismissing the problem of false advertising in biological markets as
minor.
Missing the Point
Sachs et al. (2004) note that partner choice models of cooperation must incorporate
effective partner assessment systems. They write: “the [partner] assessment system is
the biological arena in which one or more potential partners are observed for their
cooperative tendencies, such that their level of cooperation in further interactions can
be predicted… [It] allows an individual to gain information about which partners are
cooperative and how cooperative they are” (2004, 141-142). Sachs et al. identify
“parcelling” and “distribution” as two ways in which partner assessment may be
conducted. Parcelling and distribution both involve splitting up a resource to be
invested. In the former case, the resource is divided temporally, while in the latter
case, the resource is divided spatially.
There are two problems with taking parceling and distribution to be partner
assessment systems that allow effective partner choice. One problem is conceptual,
the other empirical. I will discuss them in turn.
The conceptual problem is that parceling and distribution occur after members of
two trading classes have partnered up. Two impala grooming each other in brief
bursts have already formed a grooming pair. A yucca plant selectively aborting those
of its many flowers that have been over-exploited by selfish yucca moths is already
interacting with its many partners. This is not to say that parceling and distribution are
unimportant in the context of the partner choice approach. On the contrary, they are
good ways of deciding when to do some partner “switching” (Noe 1995, 337). There
is a difference, though, between partner choice, which occurs prior to the formation of
a trading pair or group, and partner switching, which is a matter of strategically
leaving one’s current partner for greener fields elsewhere. The difference is one that
proponents of the partner choice approach are themselves at pains to mark. Treating
parceling and distribution as ways of making effective partner choices is thus
conceptually confused and potentially misleading. These are clearly ways of
engineering the incentives of partners – and are important as such – but they are not
mechanisms of partner choice.
The empirical problem with parceling and distribution as ways of making good
partner choices is that neither will be an option in biological markets where
indivisible resources are at stake. For example, in mating markets where one trader
class consists of monandrous females (those who mate with only one male), the
commodity on offer is exclusive reproductive access, which cannot be parceled or
distributed. In such cases, the timing of partner assessment matters crucially. Partner
switching after dabbling a toe, so to speak, will not be an option. Traders offering
indivisible resources must identify who is genuinely a high-value partner and who is
not prior to committing to a trade.
This problem is not limited to cases in which the resource being traded is
indivisible, either. The problem may also arise in cases where the costs of partner
switching are high. For instance, if searching for a new partner is very costly in terms
of time, energy and/or risk, then a choice once made may be effectively fixed. Here
again, traders must be able to identify who is genuinely a high-value partner and who
is not prior to committing to a trade.
A third means of partner assessment mentioned by Sachs et al. is “image scoring”
(2004, 142; see also Nowak & Sigmund 1998). For example, potential clients of
cleaner fish, while waiting for service, observe the cleaner’s interaction with its
current client and are much more likely to interact with a cleaner if its current
interaction ends peacefully instead of in conflict. This way of making partner
assessments can be used prior to pair formation, and is hence potentially a mechanism
for genuine partner choice. Waiting clients that see the current cleaning interaction
end in conflict can simply swim away.
It is worth drawing a distinction at this point between cues and signals. Signals are
behavioural or morphological traits that alter the behaviour of other organisms, have
evolved because of that effect, and are effective because the response of receivers has
also evolved (Maynard-Smith & Harper 2003, 15). A cue, by contrast, is any “feature
of the world, animate or inanimate, that can be used by an animal as a guide to future
action” (Maynard-Smith & Harper 2003, 15). Showing that a behavioural or
morphological trait is a signal is far more demanding than showing that trait to be a
cue. In the former case, much must be established about the evolutionary history of
the trait. To show something is a cue, though, we need only show that other organisms
attend to it when deciding how to act.
Returning now to the case of the cleaner-client fish interaction, it seems that image
scoring is better described in terms of cues than of signals. The relevant observation
made by potential clients is not of any specific behavioural display by the cleaner that
is designed to entice clients. It is rather the observation of a certain state of affairs: an
amicable end to the cleaner’s current interaction. This seems more akin to predators
choosing prey via cues than it is to, say, peahens choosing mates based on signals like
the peacock’s extravagant tail. A predator choosing which of a herd of prey animals to
chase wants to pick one it is likely to catch, and watching to see which ones limp is a
good way to find out which ones will be most easily caught: limping here is a cue. A
client fish wants to interact with cooperative cleaners, and watching to see whether a
cleaner’s current interaction ends peacefully – rather than in a cheating-precipitated
chase – seems like a good way to obtain at least some information about the
cooperativeness of the cleaner.
The fact that partner assessment is sometimes done using cues is not in itself any
kind of problem for the partner choice approach. Indeed, one might think that
assessing partners using cues is less problematic than relying on signals from potential
partners. Cues can be more or less accurate predictors, but at least they don’t provide
scope for false advertising. However, such deception in the context of partner
assessment is possible even when assessments are cue-based, as becomes evident when
we pay closer attention to the cleaner-client fish case.
Cleaner fish prefer to eat clients’ mucus rather than parasites (Bshary & Grutter
2003). Large fish have more defection-tempting mucus than do smaller ones. Non-predatory clients cannot eat a cheating cleaner. Mobile clients – those whose home
range encompasses more than one cleaning station – tend not to bother with punitive
chases, instead simply swimming away from cheating cleaners. These facts together
make large, non-predatory, mobile clients the perfect ‘marks’ for cheating cleaners. It
turns out that the image scoring system in the cleaner-client market is exploited by a
certain class of cleaners, dubbed “biting cleaners” (Bshary 2002, 2088).
A biting cleaner servicing a small client while being observed by a large, non-predatory, mobile client will often rub its pelvic fins around the small client’s dorsal
fin area (Bshary & Wurth 2001, 1495). This behaviour has been termed “host
stabilization” and apparently renders the current client quiescent, ensuring that the
cleaning interaction ends peacefully (Potts 1973, 274).5 The biting cleaner thus sets up
the score by ensuring that its mark observes the reassuring cue and approaches for
service. The cleaner then defects, ignoring the large client’s parasites and plundering
its abundant mucus. It is unclear whether a dorsal rub provides any benefit to the
small client. If it does not – if it merely wastes time and thus inflicts a net loss – then
biting cleaners manage an impressive deception indeed, subtly cheating one client
while appearing cooperative to another. Biting cleaners should perhaps be dubbed
‘Machiavellian masseurs.’
It is important to note that biting cleaners do not somehow fake the relevant cue:
their interaction with the small client really does end peacefully. Their deception
consists in exploiting the cue-based image scoring system of partner assessment
operative in this particular biological market. They make sure they are perceived as
cooperative under precisely those conditions when being so perceived will open up
the most profitable defection opportunities. There is an interesting question to be
asked here regarding the classification of the Machiavellian masseur’s behaviour:
should such strategic massaging be counted as a cue, or instead, does it qualify as a
signal insofar as it has evolved in part in order to influence others’ behaviour? Even if
the behaviour is counted as a signal rather than a cue, it should be stressed that the
signal is parasitic upon the cue-based system of partner assessment. (There is perhaps
a parallel of sorts to be drawn here with cases of Batesian mimicry.) Thus, false
advertising is an issue even in biological markets where partner assessment and
choice is conducted on the basis of cues.

5. Host stabilization is typically used to soothe clients after a conflict, or to induce waiting fish to remain in the area when the cleaning station is crowded and busy.
Piecemeal Solutions
Noe’s discussions of partner choice emphasise the importance of signalling in
biological markets and acknowledge the problem of reliability (what I am calling
‘false advertising’). The problem with Noe’s treatment of the issue is certainly not a
lack of ideas. It is rather a lack of unity and detail. Image scoring (as just discussed) is
one of many proposals Noe offers about how the problem might be solved (Bshary &
Noe 2003). There are several others.
Noe has sometimes appealed to costly signalling theory in his discussions of
partner choice. He claims that:
the handicap principle predicts that in the context of mate choice, agonistic
competition or predation, receivers of signals only pay attention to those signals that
are costly to produce… because only individuals that are fit enough to back-up the
signal will produce it at high intensity (2001, 109).
Noe is here claiming that advertisements in biological marketplaces where the
interests of different trader classes conflict must be costly if they are to be believable.
Elsewhere, though, he has said things in conflict with this. For example, Noe has
suggested that partner choice is facilitated when traders of one class can signal their
inability to pursue courses of action detrimental to the interests of traders of the other
class. As an example, Noe describes cooperative breeding among purple martins (Noe
& Hammerstein 1994, 7). A dominant male will allow ‘tenant’ couples to breed on his
territory, in exchange for sexual access to the female tenants. The deal goes sour for
the ‘landlord’ if his male tenants sneakily mate with many females on his territory.
Noe writes:
[W]e expect the ‘choosing’ class, i.e. the dominants, to prefer partners with an
‘honest signal’ of inferiority: an easily perceptible character that cannot change
overnight, and that constrains its bearer to keep to its role (Noe & Hammerstein
1994, 6).
As it turns out, landlords prefer male tenants with juvenile plumage. Males bearing
juvenile plumage are ‘sexually handicapped’ when it comes to attracting females. By
foreclosing his option of mating with numerous tenant females, then, a male with
juvenile plumage makes himself a non-threatening and thus appealing tenant. It is
worth pointing out, though, that displaying juvenile plumage is not costly in the way
Noe envisions when talking about the handicap principle.
In yet other places, Noe appeals to yet another kind of barrier to false advertising
(Noe & Voelkl, this volume). He notes that during “outbidding competition” – in
which members of one trader class vie to be chosen as a partner by a member of the
other class – the competitors:
may be forced to produce their commodity at the maximum possible level. Thus,
while their output from this competition cannot be taken as a proxy for how much
they will invest later on, it provides – at least – reliable information about their
potential.
The field of signalling theory has identified several mechanisms that can ensure the
honesty of signals (even when sender and receiver interests conflict). Noe’s
discussions of signalling in partner choice mention many of these, but in a haphazard
way, often in the context of specific empirical examples, and without an eye to the
bigger picture. The partner choice approach would benefit from having in place a
unified theoretical framework for thinking about kinds of solutions to the problem of
false advertising in biological markets. Such a framework would help guide
investigation of specific market interactions.
A Theoretical Framework for Addressing the False Advertising Problem
In this section, I will draw on work in signalling theory to provide a framework within
which to organise and clarify Noe’s various discussions of the issue of signalling and
reliability in biological markets. The mechanisms that can underwrite the reliability of
a signal can be divided into three broad classes: costs, constraints, and commitments.
Below, I will discuss each class of mechanism, and will show how the scattered
discussions of signal reliability in the partner choice literature can be fitted into the
framework this three-way distinction provides. In each case, I suggest research
questions that can usefully inform future work on partner choice and cooperation.
Cost and Honest Advertisement
Handicaps are signals kept honest by costs. Amotz Zahavi’s (1975; 1977) solution to
the problem of reliability in signalling was to point out that signals can be relied upon
to be honest if it is prohibitively expensive to send a dishonest signal, that is, if the
costs of sending such a signal outweigh whatever benefits might be gained by doing
so. Zahavi called his solution to the problem of reliability the “handicap principle”.
Noe rightly latches on to the handicap principle as a means of solving the false
advertising problem. Many cases of signalling to potential partners in mating markets
are amenable to this kind of explanation; for example, the peacock’s tail (see e.g.
Petrie & Halliday 1994). However, Noe gives a rather simplistic presentation of the
costly signalling idea that is based on Zahavi’s initial formulation of the handicap
principle.
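The logic of the handicap condition can be made concrete with a toy numerical sketch (my illustration, not from the partner choice literature; all payoff and cost values are invented). A signal is stable when low-quality senders cannot profit from faking it, while high-quality senders still profit from sending it:

```python
# Toy model of Zahavi's handicap principle (illustrative numbers only).
# A signal stays honest when faking it is a net loss for low-quality senders.

def net_payoff(benefit, cost):
    """Net payoff of sending the signal: benefit gained minus cost paid."""
    return benefit - cost

BENEFIT = 10.0           # assumed value of being chosen as a partner or mate
COST = {"high": 4.0,     # signalling is cheap for high-quality individuals...
        "low": 14.0}     # ...and prohibitively expensive for low-quality ones

high_gain = net_payoff(BENEFIT, COST["high"])  # positive: signalling pays
low_gain = net_payoff(BENEFIT, COST["low"])    # negative: faking does not pay

honest = high_gain > 0 and low_gain < 0
print(f"high-quality net: {high_gain}, low-quality net: {low_gain}")
print("handicap condition holds:", honest)
```

The sketch makes the condition explicit: honesty depends not on cost per se, but on the cost of a dishonest signal exceeding its benefit for would-be fakers.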
Zahavi’s idea has gone through several incarnations since it was first suggested.
He initially claimed that high signal costs impose a test on signallers – a test that only
high-quality individuals can pass – and that signalling and surviving is thus an
effective way to advertise one’s quality (1975). Zahavi later suggested that “the
phenotypic manifestation of the handicap is adjusted to correlate to the phenotypic
quality of the individual” (1977, 603). In the initial formulation of the handicap
principle, both high quality and low quality individuals were assumed to pay the costs
of signalling. In this later version, high signalling costs are paid only by those who
can afford those costs (i.e. the genuinely high quality individuals), while those who
cannot afford high signalling costs either do not signal at all, or, signal at a lower
intensity that is affordable given their quality. Reflecting such variations on the
costly signalling idea, Searcy & Nowicki (2005, 10) distinguish between “Zahavi”
handicaps, “conditional” handicaps, and “revealing” handicaps.
Future work on signalling to potential partners in biological markets should take
account of the advances in discussions of costly signalling theory. In particular, the
issue of signal costs should not be treated in too cavalier a fashion. Careful accounting
of the costs involved in a behavioural or morphological display is needed to
substantiate the claim that the display is a costly signal. A detailed discussion of the
challenges posed by such accounting is given in Kotiaho (2001), and a schema for
classifying signal costs is given by Searcy & Nowicki (2005). Too narrow a focus on
Zahavi’s initial – and relatively primitive – statement of the costly signalling idea can
only handicap attempts to use this idea to understand signalling in the context of
partner choice and cooperation.
Constraints and Honest Advertisement
An index is a signal that is kept honest by constraints against faking. Whereas faking
a handicap is possible but unprofitable, faking an index is simply not possible. Indices
are signals “whose intensity is causally related to the quality being signalled, and
which cannot be faked” (Maynard-Smith & Harper 2003, 15).
This is the most likely place to fit Noe’s example of members of a trading class
engaging in outbidding competition by signalling at maximum output. Actual
empirical work investigating this possibility is rather thin, though. For a specific
example, consider the production of nectar by caterpillars in order to attract protector
ants. Noe mentions studies of this interaction that report that a lone caterpillar’s nectar
production initially increases with increasing numbers of attending ants, but soon hits
a ‘ceiling’ (additional ants don’t prompt greater nectar production). This ‘ceiling’
effect may well show that it is not possible for a caterpillar to produce nectar above
some particular level, that is, nectar production is constrained. But this is not yet to
show that nectar production is an index used in outbidding competition.
The crucial experiment for determining whether nectar-producing caterpillars
really are engaging in outbidding competitions for the services of protector ants has
not yet been done. That experiment would keep the number of ants fixed, but vary the
number of caterpillars. If nectar production is a form of out-bidding competition, then
an increase in caterpillar numbers (i.e. more competing bidders) should generate an
increase in each individual caterpillar’s level of nectar production (up to some ceiling
level that will doubtless vary across individuals).
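The predicted signature of the proposed experiment can be sketched as a toy model (my illustration; the parameters are invented, not drawn from field data): each caterpillar’s nectar output should rise with the number of competing bidders, but saturate at an individual ceiling.

```python
# Toy model of outbidding competition with a production ceiling
# (base, step, and ceiling values are invented for illustration).

def nectar_output(n_competitors, base=1.0, step=0.5, ceiling=3.0):
    """Per-caterpillar nectar output: rises with competition, capped at a ceiling."""
    return min(base + step * n_competitors, ceiling)

# Prediction: with ants held fixed, adding caterpillars raises each
# individual's output -- until that individual's ceiling is reached.
for n in range(6):
    print(n, nectar_output(n))
```

If nectar production is an index used in outbidding, observed outputs should track the rising-then-flat pattern this model generates; a flat response to added competitors throughout would count against the outbidding interpretation.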
Outbidding competition conducted via indices is an intriguing theoretical
possibility, but is currently empirically under-supported. Future empirical work
investigating this possibility must take into account at least two issues. Obviously, one
is the relation between the signal and the quality signalled: establishing that the
possession of the quality constrains the production of the signal is needed in order to
show that the signal is an index. The other is the relation between the signal and the
context in which it is sent. In cases of outbidding competition, signal intensity should
rise as marketplaces become more crowded, that is, as more bidders enter the
competition.
Commitment Devices and Cooperation
A commitment device provides reliable information about one’s likely future actions
in virtue of restricting the space of actions one is able to take, or strongly biasing one
toward certain of the available options. The work of economist Robert Frank (1988;
cf. Fessler & Quintelier, this volume) provides a good example of this kind of
approach.
Frank’s starting point is the idea of a “commitment problem” (1988, 4). A
commitment problem arises whenever an agent can best serve his own interests only
by credibly committing himself to act in a way that may later be contrary to his self-interest. An agent might need to make a credible promise of honesty in order to
reassure and secure would-be partners in cooperative endeavours where cheating
would be profitable and undetectable. An agent might need to make credible threats of
revenge in order to deter would-be exploiters in situations where avenging a wrong
would be more costly than not doing so. Commitment problems are common and
solving them is important.
Frank suggests that evolution has endowed humans with the means to solve
commitment problems, namely, “moral sentiments”: anger, contempt, disgust, envy,
greed, shame and guilt (1988, 46, 53). Moral sentiments help us solve commitment
problems because “being known to experience certain emotions enables us to make
commitments that would otherwise not be credible” (1988, 5). The promises of an
agent known to be prone to guilt will be for that reason more trusted, Frank suggests,
and threats from agents known to be prone to anger will be for that reason taken more
seriously.
Moral sentiments alone may suffice for solving personal commitment problems,
where the goal is for an agent to act in their own longer-term interest despite shorter-term temptation. For interpersonal commitment problems, though, more is needed (as
Frank recognised). If the commitment device is internal to the agent (“subjective”, as
it is put by Fessler & Quintelier, this volume), then there must be some way for other
agents to tell – and tell reliably – that one is committed to being honest, or punitive, or
cooperative, as the case may be. For Frank, it is unfakeable expressions of emotions
associated with moral sentiments that allow other agents to tell this (here, the index
and commitment accounts intersect).
Of course, commitment devices need not be internal to agents and signalled in
some way to influence the partner choices of others. Commitment devices themselves
may be discernible to others. Noe’s example of purple martin landlords preferring
male tenants who are ‘sexually handicapped’ by juvenile plumage might fit here.
Commitments bind agents; they foreclose some future option(s). Assuming that a
male displaying juvenile plumage at the start of a breeding season cannot change his
appearance rapidly enough to seduce tenant females that same season, the male has
bound himself (at least in the short term) to being a relatively sexually non-threatening tenant for the dominant landlord male. Then again, we may want to
reserve the term ‘commitment device’ for factors that foreclose options indefinitely.
And in any case, it is unclear whether males displaying juvenile plumage really are
juveniles (making plumage a cue), or whether they are mature birds that have retained
juvenile plumage as a breeding strategy (making plumage, potentially at least, a
signal).
A clear case of commitment playing a role in partner choice-mediated cooperation
comes from ritual scarification, tattooing, and other forms of highly visible body
modification (Fessler & Quintelier, this volume). Such modifications can serve
to mark the modified individual as a member of a particular group. Depending on the
wider social dynamics, such marking may strongly prejudice an individual’s partner
choices, even to the extent of precluding some choices, such as the choice to defect to
a different group. If so, then marked individuals may well be more attractive than
unmarked ones as partners in group-beneficial cooperative endeavours, precisely
because such individuals’ fates are tied to the fate of the group.
To sum up, there is a plurality of ways in which signalling might work in the
context of partner choice. Nothing in this section has been ground-breakingly new.
Even so, it is worthwhile to organise the scattered discussions of false advertising
extant in the partner choice literature, and to explicitly bring together work in
signalling theory with the market perspective on cooperation.
Punishment and Partner Choice
I want to turn now to cooperation and partner choice in humans, and consider the role
of punishment. I think punishment is a neglected option in the partner choice
literature. Noe, for instance, is sceptical about the possibility that punishment of false
advertising might serve to maintain signal reliability in the context of partner choice
(see e.g. Noe & Hammerstein, 1995, 337). This is partly because he underestimates
the ways in which punishment can be efficacious. He says that:
Punishment as revenge for past behaviour without future fitness advantages cannot be
‘evolutionarily stable’. ‘Punishment’ can only work in long-lasting relationships in
which the aggression of the punisher moulds the behaviour of the punished individual
in a manner beneficial to the punisher. (2001, 101, 105)
Noe assumes here that the only way that punishment could benefit the punisher is if it
rehabilitates the punishee. While that is one way in which punishment might benefit
the punisher, it is not the only way. Importantly (and importantly by Noe’s own lights
as a proponent of the partner choice approach), punishment might benefit the punisher
by influencing the partner choices of observers in the punisher’s favour.
Experimental economists have studied the effect of costly punishment on partner
choice in humans. Rob Nelissen investigated “how the costs invested in an altruistic
act influence its interpersonal consequences” (2008, 243). By ‘altruism,’ Nelissen
meant moralistic punishment, specifically, the paying of a cost to punish unfairness.
He predicted that “people [would] confer social benefits (both in terms of enhanced
preference and financial rewards) on altruistic punishers proportionally to the cost
they incurred in punishing” (2008, 243-244).
Nelissen’s subjects played a “trust game”, which worked as follows: each player was
given a sum of money and the option of sending some or all of that money to another
player, the ‘trustee’. Any money sent
would be tripled, and the trustee would then have the option of returning some, all or
none of that amount to the sender. Subjects had to choose a partner for the trust game
from among the participants of a previous experiment. Subjects were told that their
three potential partners – labelled A, B, and C – had observed a “dictator game” in
which the dictator split $10 unevenly with the receiver, keeping $8 and giving only
$2. Subjects were also told that A, B, and C had had the opportunity to spend some of
their own money to take money away from the dictator: giving up $1 would reduce
the dictator’s total by $2. Finally, subjects were told that A chose to spend $0 out of
$5 on punishment, B chose to spend $1.50 out of $5 on punishment, and C chose to
spend $1.50 out of $10 on punishment. In one condition, subjects were randomly
matched with A, B, or C and were then asked how much they would entrust to that
partner. In the other condition, subjects were asked which of A, B, and C they wanted
to play the trust game with and were then asked how much they would entrust to that
partner.
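The trust game’s payoff structure, as described above, can be written out in a few lines (my sketch; the example amounts are illustrative, not Nelissen’s stakes):

```python
# Trust game payoffs: the amount sent is tripled; the trustee
# then returns some portion of the tripled amount to the sender.

def trust_game(endowment, sent, returned):
    """Return (sender_payoff, trustee_payoff) for one round."""
    assert 0 <= sent <= endowment          # sender can send at most the endowment
    tripled = 3 * sent
    assert 0 <= returned <= tripled        # trustee can return at most the tripled sum
    sender_payoff = endowment - sent + returned
    trustee_payoff = tripled - returned
    return sender_payoff, trustee_payoff

# e.g. the sender entrusts 4 of 10; the trustee returns half of the tripled 12
print(trust_game(10, 4, 6))   # (12, 6): both end up better off than with no trust
```

The amount sent is thus a natural behavioural measure of how much the sender trusts the chosen partner, which is what Nelissen’s dependent variable exploits.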
Nelissen found that subjects chose B over A and C (2008, 244). He also found that,
when pairing was random, subjects paired with B sent the most money in the trust
game (2008, 246). As Nelissen interprets the findings:
[T]he costs incurred in altruistic punishment were perceived as signalling the extent
to which punishers value fairness… [P]eople prefer punishers more [as trust-game
partners] if they invest more to punish unfairness but only if the invested amount can
be perceived as a reliable index of fairness concerns (2008, 244, 246).
The difference between B and C lies in the relative cost each paid in order to punish
unfairness. Punishment was, relatively speaking, twice as costly for B as for C.
Subjects thus seem to take the cost of punishment into account when deciding whom to
interact with or how to behave toward partners that are forced upon them. While this
does not show that more costly acts of punishment are more reliable signals that the
punisher values fairness, it at least suggests that observers judge them to be such.6
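The relative-cost comparison between B and C is simple arithmetic, sketched here for concreteness (figures are those reported in the design above):

```python
# Relative cost of punishment in Nelissen's (2008) design:
# B and C each spent $1.50 on punishment, but from different endowments.
endowment = {"A": 5.0, "B": 5.0, "C": 10.0}
spent = {"A": 0.0, "B": 1.50, "C": 1.50}

relative_cost = {p: spent[p] / endowment[p] for p in endowment}
print(relative_cost)  # B's share of endowment spent is double C's

# Punishment consumed twice as large a share of B's endowment as of C's,
# which is what subjects' preference for B over C appears to track.
assert relative_cost["B"] == 2 * relative_cost["C"]
```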
The evidence for a signalling role for punishment in the context of human
cooperative partner choice is admittedly rather thin at this stage. The influence of
costly moralistic punishment on partner choice is yet to be fully described. Very
recent work by Horita (2010) indicates that being a punisher influences others’ partner
choice to one’s own advantage in some cases, but works to one’s disadvantage in
others. Specifically, punishing unfairness is good for one’s prospects of being chosen
as a partner when one will play the role of provider of resources (i.e. when the
punisher will have control over the distribution of resources in the later interaction).
However, punishers seem to be chosen less frequently than non-punishers when, in
the coming interaction, the chooser has control over the distribution of resources.

6. Nelissen’s work on moralistic punishment dovetails with Barclay’s (2006) work on the topic. Barclay ran an experiment in which accepting a cost to punish free-riding during a Public Goods Game benefitted other players. He found that individuals who paid to punish were subsequently rated as more trustworthy and more worthy of respect than non-punishers, and were chosen over non-punishers as partners in subsequent trust games (2006, 330).
At least in the case of humans, punishment may play an important role in partner
choice and cooperation, both by imposing costs on non-cooperation and by
influencing decisions about whom to pair with for mutually beneficial interactions.
Proponents of the partner choice approach should therefore not be too quick to
dismiss punishment as a potential means of solving the false advertising problem, at
least in the context of human cooperation and partner choice.
Conclusion
There is a cheater problem for the partner choice approach: the problem of false
advertising. That problem is currently under-studied. Addressing it will require
importing some signalling theory into the partner choice approach. I have started that
process here, but much remains to be done. Hopefully, my contribution will help
inform future work on partner choice and cooperation.
Works Cited
Axelrod, R. 1984. The Evolution of Cooperation. New York: Basic Books.
Barclay, P. 2006. “Reputational benefits for altruistic punishment.” Evolution and Human Behavior
27: 325-344.
Bshary, R. 2002. “Biting cleaner fish use altruism to deceive image-scoring client reef fish.”
Proceedings of the Royal Society of London B 269: 2087-2093.
Bshary, R. & Grutter, A. 2003. “Cleaner wrasse prefer client mucus.” Proceedings of the Royal Society
of London B 270: S242-S244.
Bshary, R. & Noe, R. 2003. “Biological markets: the ubiquitous influence of partner choice on the
dynamics of cleaner fish – client reef fish interactions.” In Genetic and Cultural Evolution of
Cooperation, ed. P. Hammerstein, 167-184. Cambridge, MA: MIT Press.
Bshary, R. & Wurth, M. 2001. “Cleaner fish Labroides dimidiatus manipulate client reef fish by
providing tactile stimulation.” Proceedings of the Royal Society of London B 268: 1495-1501.
Bull, J. & Rice, W. 1991. “Distinguishing mechanisms for the evolution of cooperation.” Journal of
Theoretical Biology 149: 63-74.
Calcott, B. 2008. “The other cooperation problem: generating benefit.” Biology and Philosophy 23:
179-203.
Calcott, B. This volume. “The evolution of complex cooperation.”
Clutton-Brock, T. 2009. “Cooperation between non-kin in animal societies.” Nature 462: 51-57.
Dugatkin, L. 1995. “Partner choice, game theory, and social behaviour.” Journal of Quantitative
Anthropology 5: 3-14.
Dugatkin, L. 1997. Cooperation Among Animals. Oxford: Oxford University Press.
Edwards, D., Hassal, M., Sutherland, W. & Yu, D. 2006. “Selection for protection in an ant-plant
mutualism: host sanctions, host modularity, and the principal-agent game.” Proceedings of the Royal
Society of London B 273: 595-602.
Fessler, D. & Quintelier, K. This volume. “Suicide bombers, weddings, and prison tattoos.”
Frank, R. 1988. Passions Within Reason. New York: W. W. Norton & Co.
Hamilton, W. 1964. “The genetical evolution of social behavior (I and II).” Journal of Theoretical
Biology 7: 1-52.
Horita, Y. 2010. “Punishers may be chosen as providers but not as recipients.” Letters on Evolutionary
Behavioural Science 1: 6-9.
Kotiaho, J. 2001. “Costs of sexual traits.” Biological Reviews 76: 365-376.
Maynard-Smith, J. & Harper, D. 2003. Animal Signals. Oxford: Oxford University Press.
Miller, G. 2007. “Sexual selection for moral virtues.” The Quarterly Review of Biology 82: 97-121.
Nelissen, R. 2008. “The price you pay: cost-dependent reputation effects of altruistic punishment.”
Evolution and Human Behavior 29: 242-248.
Nesse, R. 2007. “Runaway social selection for displays of partner value and altruism.” Biological
Theory 2: 143-155.
Noe, R. & Hammerstein, P. 1994. “Biological markets: supply and demand determine the effect of
partner choice in cooperation, mutualism and mating.” Behavioural Ecology and Sociobiology 35:
1-11.
Noe, R. & Hammerstein, P. 1995. “Biological markets.” Trends in Ecology and Evolution 10: 336-339.
Noe, R. & Voelkl, B. This volume. “Cooperation and biological markets: the power of partner choice.”
Noe, R. 1990. “A Veto game played by baboons: a challenge to the use of the Prisoner’s Dilemma as a
paradigm for reciprocity and cooperation.” Animal Behaviour 39: 78-90.
Noe, R. 2001. “Biological markets: partner choice as the driving force behind the evolution of
mutualisms.” In Economics in Nature: Social Dilemmas, Mate Choice, and Biological Markets, eds.
R. Noe, J. van Hoof & P. Hammerstein, 93-118. Cambridge: Cambridge University Press.
Noe, R. 2006. “Cooperation experiments: coordination through communication versus acting apart
together.” Animal Behaviour 71: 1-18.
Nowak, M. & Sigmund, K. 1998. “Evolution of indirect reciprocity by image scoring.” Nature 393:
673-677.
Petrie, M. & Halliday, T. 1994. “Experimental and natural changes in the peacock’s (Pavo cristatus)
train can affect mating success.” Behavioral Ecology and Sociobiology 35: 213-217.
Potts, G. 1973. “The ethology of Labroides dimidiatus on Aldabra.” Animal Behaviour 21: 250-291.
Sachs, J., Mueller, U., Wilcox, T. & Bull, J. 2004. “The evolution of cooperation.” The Quarterly
Review of Biology 79: 135-160.
Searcy, W. & Nowicki, S. 2005. The Evolution of Animal Communication. Princeton, NJ: Princeton
University Press.
Smith, E. & Bliege-Bird, R. 2000. “Turtle hunting and tombstone opening.” Evolution and Human
Behavior 21: 245-261.
Sober, E. & Wilson, D. 1998. Unto Others. Cambridge, MA: Harvard University Press.
Stevens, J. & Hauser, M. 2004. “Why be nice? Psychological constraints on the evolution of
cooperation.” Trends in Cognitive Sciences 8: 60-65.
Stevens, J., Cushman, F. & Hauser, M. 2005. “Evolving the psychological mechanisms for
cooperation.” Annual Review of Ecology, Evolution and Systematics 36: 499-518.
Trivers, R. 1971. “The evolution of reciprocal altruism.” Quarterly Review of Biology 46: 35-57.
West, S., Griffin, A. & Gardner, A. 2007. “Social semantics.” Journal of Evolutionary Biology 20: 415-432.
Wilkinson, G. 1984. “Reciprocal food sharing in the vampire bat.” Nature 308: 181-184.
Zahavi, A. 1975. “Mate selection: a selection for handicap.” Journal of Theoretical Biology 53: 205-214.
Zahavi, A. 1977. “The cost of honesty: further remarks on the handicap principle.” Journal of
Theoretical Biology 67: 603-605.