MINDREADING IN THE ANIMAL KINGDOM?
José Luis Bermúdez
In R. Lurz (Ed.), Animal Minds (Cambridge University Press, 2009), pages 145-164
Introduction
Can non-human animals think and reason about what other creatures are thinking, reasoning, or
experiencing? Experimentalists, ethologists, and theorists have answered this deceptively simple
question in many different ways. Some researchers have made very strong claims about the so-called mindreading abilities in animals (Byrne and Whiten 1988, 1990, 1991; Premack and Woodruff 1978; Hare, Call, and Tomasello 2001; Tomasello and Call 2006; Hare et al. 2002; Dally, Emery, and Clayton 2006; Tschudin 2001). Others have been critical of such claims (Heyes 1998; Povinelli and Vonk 2006; Penn and Povinelli 2007). Even a cursory look at the
extensive literature on mindreading in animals reveals considerable variation both in what
mindreading abilities are taken to be, and in what is taken as evidence for them. The first aim of
this paper is to tackle some important framework questions about how exactly the mindreading
hypothesis is to be stated. In sections 1 and 2, three importantly different versions of the
mindreading hypothesis are distinguished. The first (which I call minimal mindreading) occurs
when a creature’s behavior covaries with the psychological states of other participants in social
exchanges. The second (which I call substantive mindreading) involves attributions of mental
states. In section 2, substantive mindreading is further divided into propositional attitude
mindreading and perceptual mindreading. In section 3, I present reasons for thinking that the
role of propositional attitude psychology in human social life is very much over-stated and show
that this very much weakens the analogical case for identifying propositional attitude
mindreading in nonlinguistic creatures. And in section 4, I present a revised version of an
argument I have given elsewhere (Bermúdez 2003) to show that the most sophisticated form of
substantive mindreading (the type of mindreading that exploits the concepts of propositional
attitude psychology) is only available to language-using creatures.
1
Minimal and substantive mindreading
My starting-point is that many types of animal are genuine thinkers. I have discussed this at
length elsewhere (Bermúdez 2003) and will not rehearse the arguments again here. The evidence
from comparative psychology and cognitive ethology overwhelmingly supports taking some
forms of animal behavior to be genuinely psychological, generated by primitive forms of belief
and desire via processes that have significant commonalities with the forms of reasoning
engaged in by language-using creatures.
Animals are capable of sophisticated social behaviors. Many of these social behaviors do not
have any sort of psychological dimension. Schooling and flocking behaviors are obvious
examples. And there are behaviors with a psychological dimension that do not involve any social
coordination, as in cases of emotional contagion. But there do appear, at least at first sight, to be
forms of social coordination in the animal kingdom that have a psychological dimension and that
involve a sensitivity to the psychological states of other participants in the interaction.
Here is an example of how non-linguistic creatures can exploit social cues. A well-known set of
experiments by Brian Hare and collaborators has revealed that domestic dogs are strikingly
successful on object choice tasks with social cues (Hare et al. 2002). One reason these results are
striking is that most primates, which are generally thought to have quite sophisticated social-cognitive skills, seem unable to perform above chance on object choice tasks. In a standard
object choice task, an experimenter hides a food reward in one of two opaque containers. The
subject, which did not see the food being hidden, has to choose between the two containers.
Before the animal is presented with the choice, the experimenter “signals” which container the
food is in by using one of a range of communicative cues (such as pointing to, marking, or
looking at the correct container). Hare et al. found that domestic dogs master object choice tasks
very quickly, often without any learning.
The success of domestic dogs in picking up and exploiting social cues to solve object choice
tasks is a paradigm example of social coordination with a psychological dimension – as opposed
to coordinated group behaviors, such as schooling or flocking, where (to simplify somewhat) an
individual’s behavior depends simply upon changes in the behavior of other participants in the
coordinated group behavior. That there is social coordination is obvious. What makes it social
coordination involving sensitivity to psychology is that the dogs behave in ways that depend
upon changes in the psychological states of the other participant in the interaction. The dogs in
the object choice tasks are able to respond to different visual cues. These cues all have something
in common. They all have a common cause in the psychological profile of the experimenter –
namely, the experimenter’s intention to signal to the animal the location of the reward. This
allows us to extrapolate to predictions about how the dogs will behave in future tests – namely,
that they will respond to visual cues that have the same cause and origin. In essence, we assume
that the dogs are responding to cues in the abstract, rather than to the physical gestures by which
those cues are made – a “multi-track” sensitivity, as opposed to a set of contingencies between
particular responses and particular stimuli.
This account of social coordination involving sensitivity to psychology is quasi-operational.
While it goes beyond observed behavior in making reference to the psychological states of the
experimenter, it does not go beyond the observed behavior of the experimental subject. The
experimental subject is characterized in purely behavioral ways. Saying that a non-linguistic
creature displays psychological sensitivity in this quasi-operational sense does not attribute to it
any psychological states. Displaying sensitivity to psychology only requires behaving in ways
that depend upon the psychological states of another participant in the interaction. It is neither
implied nor required that an animal displaying sensitivity to psychology should represent (or
even be capable of representing) the psychological states of the other participant(s) in the
exchange.
I will use the expression “minimal mindreading” to describe what is going on in instances of social
coordination involving sensitivity to psychology of this type. Here is the official definition:
A creature engages in minimal mindreading when its behavior is systematically
dependent upon changes in the psychological states of other participants in the
interaction.
Characterizations of minimal mindreading are descriptive rather than explanatory. To say that an
animal is engaged in minimal mindreading is simply to assert that certain contingencies hold
between its behavior and the psychological states of the creatures with which it is interacting. It is
not in any sense to say why those contingencies hold.
The expression “substantive mindreading”, in contrast, is intended to be explanatory. Attributions
of substantive mindreading are made in order to explain how and why an animal’s behavior
depends systematically upon the psychological states of other participants in the interaction. What
explains the dependence, it is typically claimed by those who identify substantive mindreading in
the animal kingdom, is the fact that the animal engaged in a social interaction is mentally
representing the psychological states of other participants in the interaction.
Here is the matching official definition.
A creature engages in substantive mindreading when its behavior is systematically
dependent on its representations of the psychological states of other participants in the
interaction.1
Although the notion of systematic dependence features in the definition both of minimal and of
substantive mindreading, it is doing different work in each. In the definition of minimal
mindreading, the systematic dependence is not intended to be causal. Minimal mindreading is
covariation pure and simple. In contrast, claims of substantive mindreading are intended to be
causal. The animal’s behavior is caused by (and hence can be explained by appeal to) how it
represents the mental states of others.
2
Two types of substantive mindreading
Substantive mindreading occurs when a creature behaves in ways that depend systematically upon
how it represents the psychological states of other participants in the interaction. But there are
different types of psychological state and, correspondingly, different types of substantive
mindreading. In this section I articulate what I take to be the most important distinction in this
area, both from a theoretical and from an experimental perspective.
We can start from the obvious fact that psychological states do not typically generate behavior on
their own. Behavior is the product of complexes of psychological states – of what might be
thought of as psychological profiles. We can only think about the behavioral implications of
individual psychological states through the prism of the subject’s psychological profile. This is
part of what makes studying the psychology of nonlinguistic creatures so challenging. A well-designed experiment tries to find behavioral criteria for the presence or absence of a particular
form of psychological state. No interesting psychological states have unambiguous and
unequivocal implications for behavior, however. So assumptions have to be made about the
subject’s more general psychological profile and, of necessity, these assumptions are not
themselves under investigation and scrutiny. This means that there is always an element of bootstrapping going on when we explore the psychological lives of non-linguistic creatures.
Fortunately, psychological states lie on a continuum in terms of the directness of their
implications for behavior. At one end of the continuum are psychological states with more or less
immediate implications for behavior. At the other end lie the psychological states that feed into
action only very indirectly. We can think about where an individual psychological state lies on the
continuum in terms of the complexity and particularity of the background psychological profile
required for it to issue in action. In some cases the background psychological profile is very
simple, given by a relatively fixed set of drives and goals that may well be constant across
individuals within a given community, or even across a given species. In other cases the
background psychological profile is highly complex and highly individual.
This basic fact about the relation between psychology and action has important implications for
thinking about substantive mindreading. A creature engages in substantive mindreading to the
extent that its behavior depends systematically upon how it represents the psychological states of
others. A mindreading creature behaves in ways that reflect its predictions about how other
creatures are going to behave – predictions derived from representations of their psychological
states. It is clear that certain conditions have to be met for these predictions to be successful. It is
not enough simply that the mindreading creature represent the psychological states of other
participants. Or even that it represent those psychological states accurately. The success of
behavioral predictions stands or falls with their being in conformity with the background
psychological profile of the creature whose behavior is being predicted. I say that a prediction is in conformity with a background psychological profile when the predicted behavior is the behavior that would result from the combination of the (accurately represented) psychological state and the background psychological profile.

1. Behavior can be understood in a thin sense here. Preferential looking counts as a behavior, for example.
What does it take to secure conformity with the background psychological profile? One way to
secure conformity would be through explicitly (and accurately) representing the background
psychological profile in order to apply some set of principles that connect psychology with
behavior. These principles may be proto-theoretical, as proposed by adherents of the “theory of
mind” approach to mindreading. In this case the principles themselves are explicitly represented.
Or they may be principles governing the subject’s own decision-making processes, as suggested
by supporters of the simulationist approach.2 On the simulationist view the principles are not
explicitly represented. The particular principles, and how they are applied, are less important than
the “raw materials” on which they operate. In this case the raw materials are representations of
psychological profiles.
But conformity can also be achieved without explicit representation. In cases where the relevant
elements of the background psychological profile are generic and widely held it is possible simply
to trade on them. So, for example, if I put a $100 bill in plain view on the sidewalk and I correctly
identify someone as catching sight of it, I am fairly safe in predicting that they will bend over to
pick it up. The background psychology required to generate this behavior is nothing more than a
desire for free money, which we can safely assume to be constant across the human
population – so constant in fact that there is no need explicitly to represent it, and certainly no
need to delve any deeper into the particularities of the individual’s psychology. In this case a reasoner can move directly (and with justification) from the observation that a person has seen the $100 bill to a prediction that the person will bend over to pick it up. In
almost every case this prediction will be accurate.
Again, we have two end-points on a continuum. The more complex and variable the relevant
elements of the background psychological profile, the more necessary explicit representation
becomes, and the more extensive it has to be. I am taking complexity and variability here to be
distinct phenomena. Complexity itself does not mandate explicit representation. Predictors can
trade on constant elements of the background psychological profiles when those elements are
generic and widely held, no matter how complex they are. The real problem is created by
variability. If there are many different ways in which an agent’s psychology might be configured
relative to the behavior being predicted, then there are all sorts of ways in which a prediction
might go wrong, even if the prediction is based on a completely accurate psychological
attribution.
Making predictions that involve explicitly representing a background psychological profile can be
a substantial intellectual achievement. It typically involves, for example, representing the range of
different motivational states that an agent has, together with the information they currently
possess about the environment and a range of more general beliefs. But it is not enough, of
course, simply to represent the states. The representer must also represent how they fit together
and how they might jointly determine a particular action. To put it another way, the representer
must reason about how the agent might reason their way to a particular action. This reasoning is
typically conscious – and even when it proceeds below the threshold of consciousness it is
consciously accessible. The activity of explanation and prediction is a personal-level activity (in
the sense of Dennett 1969 – see Ch. 1 of Bermúdez 2005 for further discussion).
2. For the debates between simulationist and theory-theory approaches to mindreading see the papers in Davies and Stone 1995 and Carruthers and Smith 1996.
These general observations about the different ways in which identifying another’s psychological
state can generate predictions of behavior have important implications for how we think about
substantive mindreading in nonlinguistic creatures. Both experimentalists (Tomasello and Call
1997 Pt 2, 2006) and philosophers (Bermúdez 2003) have noted that substantive mindreading is
not a unitary phenomenon. There are different types of mindreading, varying according to the
type of psychological state that they involve. Predictions based on representations of perceptual
states, for example, reflect a different type of mindreading from predictions based on
representations of beliefs and desires. Standardly these types of mindreading are distinguished
simply as a function of differences between the relevant represented psychological states, on the
tacit assumption that if psychological states are different in type then representing them requires
distinct abilities. The current observations give us a principled way of categorizing mindreading
abilities that lines up in important respects with the standard distinctions.
Consider, for example, the psychological states that philosophers standardly term propositional
attitudes. These include beliefs, desires, hopes, fears, and so on. What they have in common,
from a philosophical point of view, is that they can all be analyzed as attitudes that thinkers and
reasoners have towards propositions. We will look in more detail later on at what propositions
might be and why this is important for thinking about the mindreading abilities of nonlinguistic
creatures. For the moment the important point is that the propositional attitudes are collectively
located at one end of our continuum. They typically do not have direct implications for action.
There is no single way that a particular belief or desire will feed into action. Individual beliefs and
desires feed into action only indirectly, as a result of an agent’s specific psychological profile.
This holistic character of the propositional attitudes places a very specific burden on mindreading
that involves attribution of propositional attitudes, because it brings into play the distinctiveness
of the agent’s background psychological profile. Any creature that exploits propositional attitude
attributions in order to predict the behavior of another agent will need also to represent explicitly
that agent’s background psychological profile. The success of the prediction will ultimately
depend upon the accuracy both of the propositional attitude attribution and of the representation
of the background psychological profile. Let us call this complex form of mindreading
propositional attitude mindreading.
Perceptual mindreading is typically far less complex. In many contexts what a creature perceives
has obvious and immediate implications for action – seeing a predator, or a food source, for
example. The only elements of the background psychological profile that need to be brought into
play are generic and universal. In these cases, therefore, there is correspondingly little or no need
for the creature explicitly to represent the agent’s background psychological profile. Of course,
not all cases of perceptual mindreading are straightforward. Sometimes predicting how an agent
will respond to something in plain view requires delving deep into the agent’s psychology. But
experiments exploring the mindreading abilities of nonlinguistic creatures tend to lie at the
straightforward end of the spectrum. They typically exploit the fact that seeing a food item has
immediate implications for action, as in the much-discussed food competition paradigm
developed in Hare and Tomasello 2001.
For these reasons propositional attitude mindreading is a more complex and sophisticated
intellectual activity than perceptual mindreading. It involves explicitly representing elements of
an agent’s background psychological profile, and then reasoning about how the agent will act in
the light of what is known of their psychology. The complexity of the explicit representation and
the scope of the reasoning will vary depending on the particular propositional attitude. But there
will always have to be some explicit representation and some reasoning on the part of the
propositional attitude mindreader. Perceptual mindreading is not like this. It is perfectly possible
for a creature to be a perceptual mind reader without any capacity for explicitly representing an
agent’s background psychological profile, and without any capacity to reason about how
psychology issues in action. This is because perceptual mindreaders can often exploit and trade on
direct connections between perception and action. The holistic character of the propositional
attitudes means that there are no comparable direct connections between propositional attitudes
and action.
3
The double analogy
Something like the following pattern of reasoning is implicit in many discussions of animal
minds.
(1) Certain species of non-human animals solve many problems of social interaction and coordination that are analogous to problems solved by humans.
(2) Humans solve these problems through mindreading strategies.
(3) Hence non-human animals also have to be mindreaders.
There is a double analogy here. The first analogy is between the types of social situations
confronted by human and non-human animals. The second analogy is between the strategies that
human and non-human animals employ to navigate those situations. There are many important questions
that might be raised about whether and how arguments from analogy should be used in these
contexts. I will prescind from these questions here. What I want to focus on is the basis on which
the second analogy is made. Even if one thinks that it is acceptable to reason analogically from
the mindreading strategies of human animals to those of nonhuman animals, it is important to
start from an accurate picture of how humans solve problems of social interaction and social
coordination.
Many philosophers assume without argument that some version of what is standardly called folk
psychology or commonsense psychology is the principal tool that we employ to navigate the social
world. This is standardly understood to involve attributing propositional attitudes. In previous
work I have expressed skepticism about this assumption. There are many forms of social
understanding and social coordination that proceed without attributions of propositional attitudes.
One example that I have discussed (Bermúdez 2003, 2005) is that of highly stereotypical interactions
that can be modeled using frames and routines. This is particularly relevant for thinking about
mindreading in nonlinguistic creatures.
Many of the social interactions that we engage in are highly stereotypical. We negotiate them
successfully because we are able to predict what other participants will do. But those predictions
need not, and in fact rarely do, involve forms of propositional attitude mindreading. When one
goes into a shop or a restaurant, for example, it is obvious that the situation can only be
effectively negotiated because one has certain beliefs about why people are doing what they are
doing and about how they will continue to behave. I cannot effectively order dinner without
interpreting the behavior of the person who approaches me with a pad in his hand, or buy some
meat for dinner without interpreting the person standing behind the counter. But these beliefs
about what people are doing do not involve second-order beliefs about their psychological states.
Ordering meals in restaurants and buying meat in butcher's shops are such routine situations that
one need only identify the person approaching the table as a waiter, or the person standing
behind the counter as a butcher. Simply identifying social roles provides enough leverage on the
situation to allow one to predict the behavior of other participants and to understand why they are
behaving as they are.
Social understanding and social coordination exploiting shared knowledge of social routines and
stereotypes is a form of reasoning. However, this reasoning is similarity-based and analogy-based. Social understanding becomes a matter of matching perceived social situations to
prototypical social situations and working by analogy from partial similarities. We do not store
general principles about how social situations work, but rather have a general template for
particular types of situation with parameters that can be adjusted to allow for differences in detail
across the members of a particular social category.
Research in computer science and artificial intelligence provides one way of modeling this type
social reasoning. Motivated in part by the intractability of rule- and logic-based solutions to these
everyday social interactions, computer scientists have proposed what are known as frame-based
systems (Nebel 1999). Here is Minsky's original articulation of the notion of a frame:
Here is the essence of the theory: when one encounters a new situation (or makes
a substantial change in one's view of the present problem) one selects from
memory a structure called a frame. This is a remembered framework to be
adapted to fit reality by changing details as necessary.
A frame is a data structure for representing a stereotyped situation, like being in a
certain kind of living room, or going to a child's birthday party. Attached to each
frame are several kinds of information. Some of this information is about how to
use the frame. Some is about what one can expect to happen next. Some is about
what to do if those expectations are not confirmed.
We can think of a frame as a network of nodes and relations. The top levels of a
frame are fixed, and represent things that are always true about the supposed
situation. The lower levels have many terminals – slots that must be filled by
specific instances or data. Each terminal can specify conditions its assignments
must meet. (The assignments themselves are usually smaller sub-frames.) Simple
conditions are specified by markers that might require a terminal assignment to be
a person, an object of sufficient value, or a pointer to a sub-frame of a certain
type. More complex conditions can specify relations among the things assigned to
several terminals. (Minsky 1974, pp. 111-112)
The frame-based approach gives a concrete example of the form that a routine-based approach
to social understanding and social coordination might take. The key point is that, as stressed
earlier, the parameters for the frame (what Minsky calls the terminals) need not be propositional
attitude attributions. Social interactions that are sufficiently stereotypical to be modeled in terms
of frames can proceed without propositional attitude mindreading. Where they do involve
mindreading this can simply be perceptual mindreading.
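A frame of the kind Minsky describes can be given a minimal computational sketch. The following is an illustrative Python rendering of the restaurant routine discussed earlier, not an implementation from Minsky 1974: the class name `Frame`, the methods `fill` and `expect`, and the specific slots are all hypothetical. The point it illustrates is that behavioral prediction can be driven entirely by filling a terminal with a social role ("waiter"), with no attribution of beliefs or desires anywhere in the model.

```python
# A minimal sketch of a Minsky-style frame. All names (Frame, fill, expect,
# the "approacher" terminal) are illustrative, not drawn from Minsky's paper.

from dataclasses import dataclass, field

@dataclass
class Frame:
    """A stereotyped situation: fixed top-level facts plus terminals (slots)."""
    name: str
    fixed: dict                                      # always true of the situation
    terminals: dict = field(default_factory=dict)    # slot -> condition on fillers
    fillers: dict = field(default_factory=dict)      # slot -> specific instance

    def fill(self, slot, value):
        """Assign a specific instance to a terminal, checking its condition."""
        condition = self.terminals[slot]
        if not condition(value):
            raise ValueError(f"{value!r} does not satisfy the condition on {slot!r}")
        self.fillers[slot] = value

    def expect(self):
        """Predict what happens next from the fixed structure and current fillers."""
        if self.fillers.get("approacher") == "waiter":
            return "the waiter will take the order"
        return "wait for a recognizable social role to appear"

# The restaurant routine: prediction from social role alone,
# with no attribution of beliefs or desires.
restaurant = Frame(
    name="restaurant",
    fixed={"setting": "tables, menus, staff"},
    terminals={"approacher": lambda v: v in {"waiter", "host", "other diner"}},
)
restaurant.fill("approacher", "waiter")
print(restaurant.expect())   # the waiter will take the order
```

The design choice worth noting is that the condition attached to each terminal does exactly the work the text assigns to parameter-setting: it constrains what may fill the slot, and everything downstream of the slot is fixed by the frame itself.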
Frames and routines provide a framework for interpreting some of the observational and
ethological evidence often cited for propositional attitude mindreading in primates. An
important part of the case (as reviewed, for example, in Pt II of Tomasello and Call 1997)
comes from observation of behaviors in the wild that seem to involve tactical deception (Byrne
and Whiten 1990) and/or communication. Leaving aside the methodological issues raised by the
analysis of what has seemed to some observers to be rather anecdotal observations, one issue to
explore is whether the observed behaviors could not be viewed as stereotypical and patterned
interactions where the parameter-setting does not involve one creature forming beliefs about the
desires and beliefs of another creature.
Consider communicative behaviors, such as the much-discussed alarm calls of vervet monkeys
(Cheney and Seyfarth 1990). One of the key features of vervet monkey alarm calls is that they
use different types of alarm call in response to the presence of different types of predator – and
that monkeys hearing the alarm call respond in different ways to each type of call. This seems to
fit very closely the frame model just outlined, with relatively fixed responses (the calls and the
behaviors to which they give rise) triggered by different perceptual experiences (playing the role
of the terminals in Minsky’s frames). This interpretation of the vervet alarm calls is consistent
with many of the claims that have been made about the degree of cognitive sophistication that
they reflect. So, for example, it is consistent with the vervet alarm calls carrying information
about events in the environment (as opposed to being expressions of the monkey’s state of
arousal). And yet it does not involve bringing into play the machinery of propositional attitude
mindreading. The routine does not involve one monkey intending to bring it about that the other
monkeys believe that a predator is nearby – or that the other monkeys recognize the first
monkey’s intention to bring it about that they form this belief.
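The fixed-contingency reading of the vervet alarm calls can be sketched as two lookup tables, one for the caller and one for the hearer. The call and response labels below are loose glosses on Cheney and Seyfarth's descriptions and should be treated as illustrative; the function names are hypothetical. What the sketch shows is that the exchange carries information about predator type while containing nothing that represents another monkey's beliefs or intentions.

```python
# The vervet alarm-call routine as fixed contingencies. Call and response
# labels are illustrative glosses, not exact ethological terminology.

CALL_FOR = {          # caller side: perceptual trigger -> call type
    "leopard": "bark",
    "eagle": "cough",
    "snake": "chutter",
}

RESPONSE_TO = {       # hearer side: call type -> escape behavior
    "bark": "run up a tree",
    "cough": "look up and dive into cover",
    "chutter": "stand tall and scan the ground",
}

def caller(perceived_predator):
    """The caller's fixed response: a perceptual terminal fills the call slot."""
    return CALL_FOR[perceived_predator]

def hearer(call):
    """The hearer's fixed response: the call fills the behavior slot."""
    return RESPONSE_TO[call]

# The whole exchange runs on two lookups; no mental-state attribution anywhere.
print(hearer(caller("eagle")))   # look up and dive into cover
```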
Tactical deception has been less systematically and longitudinally studied than vervet monkey
alarm calls. It is certainly possible that what have been interpreted as episodes of primates
intentionally manipulating (or attempting to manipulate) the propositional attitudes of
conspecifics will turn out to be complex and sophisticated routines of the type analyzed by
Minsky. It is plausible that the parameter-setting will involve a degree of mind-reading, but the
default assumption (particularly in the light of the considerations that will emerge in the next
section) should be that this will be perceptual mindreading. Certainly, many of the reported
instances of tactical deception do seem to be interpretable in terms of intentions to manipulate a
conspecific’s visual perspective (rather than their propositional attitudes). Consider the
following well-known description of an instance of tactical deception in a troop of baboons in
Ethiopia.
An adult female spent 20 min in gradually shifting in a seated position over a
distance of about 2m to a place behind a rock about 50 cm high where she began to
groom the subadult male follower of the group – an interaction not tolerated by the
adult male. As I was observing from a cliff slightly above [the animals] I could judge
that the adult male leader could, from his resting position, see the tail, back and
crown of the female’s head, but not her front, arms and face: the subadult male sat in
a bent position while being groomed, and was also invisible to the leader. The leader
could thus see that she was present, but probably not that she groomed. (Report by
Hans Kummer quoted in Byrne 1995 p. 106)
We can understand what is going on here in terms of a Minsky-style routine. The female
baboon is engaging in a complicated pattern of quasi-stereotypical behavior in which the
terminals are filled by instances of perceptual mindreading. One terminal is filled by her
calculation of the alpha male’s line of sight. Another by her perception of the rock between the
alpha male and the subadult male. There is no need to appeal to the female baboon’s intention to
manipulate the beliefs of the alpha male.
4
The limits of nonlinguistic mindreading
In the previous section I tried to weaken the temptation to think that the complex forms of social
interaction and social coordination that we see in the animal kingdom demand explanations in
terms of propositional attitude mindreading. In this section I take a more direct tack. I present a
version of an argument I first proposed in Bermúdez 2003. The argument aims to show that
propositional attitude mindreading is not available to creatures that lack a public language. In
order to make the structure of the argument more perspicuous I make each step explicit, adding
comments where applicable.
(A) Unlike perceptual mindreading, propositional attitude mindreading involves representing another creature’s attitude to a proposition.

A representation of another creature as, say, believing that the food is hidden behind the tree is tripartite in nature. It involves representing

(i) a particular individual as
(ii) bearing a particular propositional attitude to
(iii) a particular proposition.
Perceptual mindreading is also tripartite in nature. It involves representing
(i) a particular individual as
(ii) perceiving
(iii) a particular object or state of affairs.
Despite this similarity in structure, however, the representations required for propositional
attitude mindreading are far more complex than those required for perceptual mindreading.
Consider a perceptual mindreader M representing another agent α as perceiving a state of affairs
S. The perceptual mindreader is already perceiving S. In order to represent α as perceiving S, M
needs simply to add to its representation of S a representation of a relation between α and S. As
we saw in the previous section, in many cases of perceptual mindreading this additional
representation can be very straightforward. It can be simply a matter of representing S as lying
in α’s line of sight. The representational skills required are basic geometric skills, on a par with
those involved in working out possible trajectories of objects (including the agent’s own body)
through the environment. As we saw in section 2, moreover, the fact that S is in α’s line of sight
can often have very immediate implications for action. This means that moving from a
representation of α as perceiving S to a prediction of how α will behave is often completely
straightforward.
Now consider a propositional attitude mindreader M* representing another agent β as believing
a proposition P. Here it is not typically the case that P corresponds to a state of affairs in the
distal environment that M* is already perceiving. In fact (as we saw in the last section), in many
of the cases in which propositional attitude mindreading is identified in the animal kingdom P is
actually false. The aim in tactical deception as standardly interpreted, for example, is to generate
false beliefs in another agent: the deceiver must intend to bring it about that the agent believes
that p, where p is false. So the question arises: what is it to represent a proposition (particularly
one that one does not oneself believe)?
One answer here is that propositions just are states of affairs, and so representing a proposition
is no more and no less complicated than representing states of affairs. On this interpretation
propositional attitude mindreading does not come out as fundamentally different in kind from
perceptual mindreading. It is true that propositional attitude mindreading can involve
representing states of affairs that do not exist (as in tactical deception cases), but it is widely
accepted that many types of non-human animals can represent non-existent states of affairs.
After all, we can only explain the behavior of non-linguistic creatures in psychological terms if
we attribute to them desires, and having a desire often involves representing a non-existent state
of affairs.
The obvious problem with this view is that states of affairs lack some of the fundamental
characteristics of propositions. In particular, propositions are true or false, while states of affairs
are not the sort of things that can be either true or false. On many standard ways of thinking
about propositions and states of affairs, states of affairs are the things that make propositions
true or false – precisely because propositions are representations of states of affairs that can be
true or false.3
3 Things are not quite as simple as this, since propositions can be logically complex and it is not clear that
there are logically complex states of affairs. Even if one thinks that the proposition that the table is red is
made true by the state of affairs of the table being red, it is far from obvious that the proposition that the
table is not red is made true by the state of affairs of the table not being red. Many philosophers would
deny that negative states of affairs exist. Nonetheless, the fact remains that when the proposition that the
table is not red is true, its truth consists in the holding of some state of affairs (the table being black, for
example).
The “truth-aptness” of propositions is absolutely fundamental to the whole enterprise of
propositional attitude mindreading. Propositional attitude mindreading allows us to explain and
predict the behavior of other subjects in terms of the representational states that generated it. It
is such a powerful tool because it works both when other subjects represent the world correctly
and when they misrepresent it. Moreover (as will become very important in what follows), the
truth-aptness of propositions is what explains the inferential connections between propositional
attitudes that are not belief-like. Desires, for example, are not true or false in the way that
beliefs are. But desires have contents (propositions) that stand in logical relations to the contents
of beliefs and other desires. These logical relations are constantly exploited in practical
reasoning.
In sum, my claim is that propositional attitude mindreading involves representing another
agent's representation of a state of affairs. I will use the term “proposition” to abbreviate
“representation of a state of affairs”. Anyone who thinks about propositions in a different way is
invited to use the unabbreviated expression.
(B) Propositional attitude mindreaders must represent propositions in a way that allows
them to work out how the relevant propositional attitudes will feed into action.
This should not be controversial in the light of the discussion earlier in the paper. Propositional
attitude mindreading is a way of explaining and predicting behavior. Obviously it is not enough
simply to represent the propositional attitudes of another agent. Those propositional attitudes
must be represented in a format (what is sometimes called a vehicle) that can be exploited in
reasoning that leads to explanation and prediction.
(C) Working out how a particular set of propositional attitudes will feed into action depends
upon the inferential relations between those propositional attitudes and the agent's background
psychological profile.
This is an immediate consequence of the earlier discussion in section 2. Propositional attitudes
do not feed directly into action in the way that many perceptual states do. Exploiting
propositional attitude attributions to explain and predict behavior requires working out the
different possible inferential connections between those propositional attitudes and the agent's
background psychological profile, as well as the information that the agent has through
perception about the distal environment. A propositional attitude mindreader must be able to
represent propositions in a way that makes clear the logical and inferential relations between the
attributed propositional attitudes, the assumed background psychological profile, and the
anticipated action.
(D) The representations exploited in propositional attitude mindreading are consciously
accessible constituents of a creature's psychological life.
When a propositional attitude mindreader forms beliefs about the mental states of another agent,
those beliefs are integrated with the rest of the mindreader's propositional attitudes: with their
beliefs about the distal environment and with their short-term and long-term goals, for example.
This integration is required for the results of propositional attitude mindreading to feature in a
creature's practical decision-making. And, since practical decision-making takes place at the
conscious level, beliefs about the mental states of other agents must also be consciously
accessible.
(E) The representational format for propositional attitude mindreading must be either
language-like or image-like.
The distinction here can be put in a number of different ways: as the distinction between
digital representations and analog representations, for example. It is a fundamental distinction for
cognitive science, as reflected in what has come to be known as the imagery debate.4
The central idea is straightforward, although there are many different ways of working out the
details. A language-like representational format allows complex representations to be built up in
a rule-governed way from basic representational units. The representational units and complex
representations built up from them function as symbols, with no intrinsic connection to what they
represent. An image-like representational format, in contrast, functions as a picture. It is not built
up from basic representational units and it represents through similarity relations between the
structure of the representation and the structure of what is being represented.
As far as propositional attitude mindreading is concerned, the most obvious candidates for an
image-like representational format are mental models theory in the psychology of
reasoning (originally proposed in Craik 1943 but most comprehensively developed in Johnson-Laird
1983) and the conception of mental maps put forward by Braddon-Mitchell and Jackson
(1996). The idea of structural isomorphism is key to both approaches. Mental maps and models
are isomorphic to what they represent, in the sense that the relations holding between elements
in the map/model can be mapped onto relations holding between elements in what is represented.
(F) The representational format for propositional attitude mindreading must exemplify the
structure of the represented propositions.
This follows from (B) and (C) above. Propositional attitude mindreading allows mindreaders to
navigate the social world by making sense of and predicting the behavior of other agents.
Attributions of propositional attitudes do not lead immediately to predictions and explanations,
precisely because propositional attitudes do not themselves feed directly into action.
So, a propositional attitude mindreader needs to be able to work out the implications of the
attributed propositional attitudes for the agent's behavior, in the light of what is known (or
conjectured) about the agent's background psychological profile. This is a matter of reasoning
about the logical and inferential relations between propositional attitudes. Accurate predictions
depend upon the predictor being able, in some sense, to track the reasoning that the agent might
themselves engage in.
This means that propositions (and hence, of course, propositional attitudes) must be represented
in a way that allows the mindreader to reason about the logical and inferential connections
between propositions. In particular, they must be represented in a way that makes clear the
structure of the relevant propositions. This is because many of the most basic logical and
inferential relations between propositions hold in virtue of their structure.
Consider, for example, the most fundamental form of logical thinking – that codified in the
propositional calculus and involving the basic logical connectives, such as disjunction,
conjunction, and the material conditional (if... then...). It is hard to see how a mindreader could
reason about the propositional attitudes and background psychological profile of another agent
without attributing to them conditional beliefs. These conditional beliefs might, for example,
dictate possible behaviors contingent upon particular environmental factors (e.g. if the prey goes
into the woods then I will follow it). Or they might record regularities and contingencies in the
environment (e.g. if it rains then there will be more insects on the leaves). In order to reason
about how an individual with these conditional beliefs might behave they must be represented in
a way that reflects their structure – in a way that reflects the fact that they are attitudes to a
complex proposition that relates two other propositions. Unless the conditional belief is
represented in this way it will be impossible to reason, for example, that since the agent has seen
the prey going into the woods it will follow it.
4 For classic discussions see Anderson 1978 and Pylyshyn 1981. Kosslyn et al. 2006 is a more recent
contribution.
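The simplest case brings this point out. Suppose the mindreader attributes the conditional belief just mentioned and knows that the agent has seen its antecedent satisfied. The prediction rests on tracking a modus ponens inference, which is defined over the if-then structure of the attributed content. (The schematic letters below are my own gloss on the example, not part of any attribution the mindreader makes.)

```latex
% Schematic gloss of the attributed contents:
%   P = "the prey goes into the woods"
%   Q = "I (the agent) will follow it"
% The mindreader's prediction tracks an inference over the
% canonical structure of the conditional:
\[
\frac{P \rightarrow Q \qquad P}{Q}
\qquad \text{(modus ponens)}
\]
```

The inference is licensed only because the conditional is represented as having a distinguishable antecedent and consequent standing in the if-then relation; nothing in a map or picture marks out those units as such.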
(G) Representing propositions in a pictorial or image-like format does not reveal their
canonical structure.
It is certainly true that images have a type of structure. So do mental models and maps. We can
identify distinct parts in analog representations and indeed identify them across different
representations. Without this it would be impossible for analog representations to represent in
virtue of their isomorphic structural resemblance to the state of affairs that they represent. But
this is not the right sort of structure for them to represent propositions in the way that (F)
requires.
Braddon-Mitchell and Jackson themselves bring out both how imagistic representations can be
structured, and why that is not enough to represent propositions in a manner that will allow them
to feature in inferences.
There is no natural way of dividing a map at its truth-assessable representational
joints. Each part of a map contributes to the representational content of the whole
map, in the sense that had that part of the map been different, the representational
content of the whole would have been different. Change the bit of the map of the
United States between New York and Boston, and you change systematically what
the map says. This is part of what makes it true that the map is structured. However,
there is no preferred way of dividing the map into basic representational units. There
are many jigsaw puzzles you might make out of the map, but no single one would
have a claim to have pieces that were all and only the most basic units. (Braddon-Mitchell
and Jackson 1996, p. 171)
As this very helpful formulation brings out, analog representations do not have a canonical
structure. Their structure can be analyzed in many different ways (corresponding to many
different jigsaw puzzles that one can construct from it), but none of these can properly be
described as giving the structure of the map.
This is why maps (and other analog representations) are not well suited to represent propositions.
Propositions have what might be termed a canonical structure, and in order to understand
the inferential connections in which a proposition can stand a thinker needs to grasp that
canonical structure. The canonical structure of a proposition corresponds to what Braddon-Mitchell
and Jackson describe as the “preferred way of dividing” the proposition into basic
representational units. The canonical structure of a conditional proposition, for example, is 'If A
then B', where 'A' and 'B' are the basic representational units (in this case the basic
representational units are themselves propositions). But it is clear that this canonical structure
cannot be captured in any sort of analog representation. We have no idea what a conditional map
might look like, for example.
(H) The canonical structure of a proposition is only revealed when propositions are
represented in a linguistic format.
According to (E), propositions must be represented in either an image-like or a language-like
representational format. From (G) we have that an image-like representation of a proposition
cannot reveal its canonical structure. So, (H) will follow provided that language-like
representations of propositions can reveal their canonical structure. But it is obvious that the
canonical structure of a proposition is revealed when it is represented linguistically. Viewed in the
abstract, language is a mechanism for creating complex representations through the combination
of basic representational units according to independently identifiable combinatorial rules.
Language contains markers (such as the logical connectives) corresponding to the basic inferential
connections between propositions. Indeed, for many philosophers it is almost a tautology that
sentences express propositions.
(I) The linguistic representations required for propositional attitude mindreading must
involve natural language sentences.
(H) tells us that propositions must be represented in a linguistic format for propositional
attitude mindreading. But it does not say anything about that linguistic format, beyond that it
must be capable of revealing the structure of a proposition. There are two candidate formats. On
the one hand, the vehicles for propositional attitude mindreading might be the sentences of a
public language. On the other, they might be the sentences of what is sometimes termed the
language of thought. The language of thought is proposed as a representational format for certain
types of cognitive information-processing (Fodor 1975). It is a key element in what is sometimes
called the computational or representational model of the mind. There is no need, however, to
explore the details of the arguments for and against the language of thought hypothesis and the
computational model of the mind (for further discussion see Bermúdez 2005 and Bermúdez
forthcoming). The important point is that the language of thought hypothesis is an explanatory
hypothesis in cognitive science. It is a hypothesis about the machinery of subpersonal information-processing.
Information-processing that exploits sentences in the language of thought takes place
below the threshold of consciousness. This means that sentences in the language of thought
cannot be consciously accessible constituents of a creature's psychological life. But we saw in (D)
that the representations exploited in propositional attitude mindreading must be consciously
accessible constituents of the mindreader's psychological life: they must be capable of featuring
in a creature's conscious practical decision-making. The only remaining candidates are the
sentences of a public language, and so propositional attitude mindreading is unavailable to
creatures that lack such a language.
5 Conclusion
This paper has introduced two fundamental distinctions to be kept clearly in view when thinking
about mindreading in the animal kingdom. The first is between minimal mindreading and
substantive mindreading. Minimal mindreading occurs in a social interaction when a creature’s
behavior depends systematically upon changes in the psychological states of other participants in
the interaction. In contrast, a creature engages in substantive mindreading when its behavior is
systematically dependent upon how it represents the psychological states of other participants.
Substantive mindreading is not a unitary phenomenon. There is a principled distinction between
perceptual mindreading and propositional attitude mindreading, stemming from the holistic
character of the propositional attitudes. Using attributions of propositional attitudes to predict and
explain behavior involves complex forms of reasoning that exploit information about the agent’s
background psychological profile.
The principal aim of this paper has been to argue that propositional attitude mindreading does not and
cannot exist in the absence of language. The temptation to identify propositional attitude
mindreading in the animal kingdom often rests on the tacit assumption that most if not all of the
complexities of human social understanding and social coordination depend upon propositional
attitude mindreading. We looked at an important type of interaction that can be modeled without
making this assumption, and considered how that model might be applied to behaviors in the
animal kingdom that are sometimes taken as evidence for propositional attitude mindreading.
Finally, I developed an argument to the effect that propositional attitude mindreading cannot exist
in the absence of language. If this argument is sound, it shows that perceptual mindreading is the
only form of substantive mindreading that can exist in the (non-human) animal kingdom.
References
Allen, C., & Bekoff, M. (1997). Species of Mind: The Philosophy and Biology of Cognitive
Ethology. Cambridge, MA: MIT Press.
Anderson, J. R. (1978). Arguments concerning representations for mental imagery.
Psychological Review, 85, 249-277.
Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.
Bermúdez, J. L. (2003). Thinking without Words. New York: Oxford University Press.
Bermúdez, J. L. (2005). Philosophy of Psychology: A Contemporary Introduction. London:
Routledge.
Bermúdez, J. L. (Forthcoming). Cognitive Science: An Introduction to the Science of the Mind.
Braddon-Mitchell, D., & Jackson, F. (1996). Philosophy of Mind and Cognition. Oxford:
Blackwell.
Byrne, R. W., & Whiten, A. (Eds.). (1988). Machiavellian Intelligence. Oxford: Oxford University
Press.
Byrne, R. W., & Whiten, A. (1990). Tactical deception in primates: The 1990 database. Primate
Report, 27, 1-101.
Byrne, R. W., & Whiten, A. (1991). Computation and mindreading in primate tactical deception.
In A. Whiten (Ed.), Natural Theories of Mind: Evolution, Development, and Simulation
of Everyday Mindreading. Oxford: Blackwell.
Carruthers, P., & Smith, P. K. (Eds.). (1996). Theories of Theory of Mind. Cambridge:
Cambridge University Press.
Cheney, D. L., & Seyfarth, R. M. (1995). How Monkeys See the World. Chicago: University of
Chicago Press.
Craik, K. (1943). The Nature of Explanation. Cambridge: Cambridge University Press.
Dally, J. M., Emery, N. J., & Clayton, N. S. (2006). Food-caching western scrub jays keep track
of who was watching when. Science, 312, 1662-1665.
Davidson, D. (1975). Thought and talk. In S. Guttenplan (Ed.), Mind and Language. Oxford:
Oxford University Press.
Davies, M., & Stone, T. (Eds.). (1995). Folk Psychology: The Theory of Mind Debate. Oxford:
Basil Blackwell.
Dennett, D. (1969). Content and Consciousness. London: Routledge Kegan Paul.
Fodor, J. (1975). The Language of Thought. Cambridge MA: Harvard University Press.
Hare, B., Brown, M., Williamson, C., & Tomasello, M. (2002). The domestication of social
cognition in dogs. Science, 298, 1634-1636.
Hare, B., Call, J., & Tomasello, M. (2001). Do chimpanzees know what conspecifics know? Animal
Behaviour, 61, 139-151.
Heil, J. (1982). The Nature of True Minds. Cambridge: Cambridge University Press.
Heyes, C. M. (1998). Theory of mind in nonhuman primates. Behavioral and Brain Sciences, 21,
101-134.
Hurley, S., & Nudds, M. (Eds.). (2006). Rational Animals? Oxford: Oxford University Press.
Johnson-Laird, P. (1983). Mental Models. Cambridge: Cambridge University Press.
Kosslyn, S. M., Thompson, W. L., & Ganis, G. (2006). The Case for Mental Imagery. Oxford:
Oxford University Press.
Maynard Smith, J. (1982). Evolution and the Theory of Games. Cambridge: Cambridge
University Press.
Nowak, M. A., Sasaki, A., Taylor, C., & Fudenberg, D. (2004). Emergence of cooperation and
evolutionary stability in finite populations. Nature, 428, 646-650.
Penn, D. C., & Povinelli, D. (2007). On the lack of evidence that non-human animals possess
anything remotely resembling a 'theory of mind'. Philosophical Transactions of the Royal
Society B, 362, 731-744.
Povinelli, D., & Vonk, J. (2006). We don't need a microscope to explore the chimpanzee's mind.
In S. Hurley & M. Nudds (Eds.), Rational Animals? Oxford: Oxford University Press.
Premack, D., & Woodruff, G. (1978). Does the chimpanzee have a theory of mind? Behavioral and
Brain Sciences, 1, 515-526.
Pylyshyn, Z. (1981). The imagery debate: Analogue media versus tacit knowledge.
Psychological Review, 88, 16-45.
Tomasello, M., & Call, J. (1997). Primate Cognition. Oxford: Oxford University Press.
Tomasello, M., & Call, J. (2006). Do chimpanzees know what others are seeing - or only what
they are looking at? In S. Hurley & M. Nudds (Eds.), Rational Animals? Oxford: Oxford
University Press.
Tschudin, A. (2001). 'Mind-reading' mammals: Attribution of belief tasks with dolphins. Animal
Welfare, 10, S119-127.
Wimmer, H., & Perner, J. (1983). Beliefs about beliefs: Representation and constraining function
of wrong beliefs in young children's understanding of deception. Cognition, 13, 103-128.