Cognitive Systems Research 8 (2007) 15–27
Teleonomic functions and intrinsic intentionality: Dretske’s theory
as a test case
Action editor: Mark Bickhard
Itay Shani
Department of Philosophy, School of Social Sciences, University of the Witwatersrand, Johannesburg, Private Bag 3, Wits 2050, South Africa
Received 19 May 2006; accepted 10 June 2006
Available online 24 August 2006
Abstract
Fred Dretske’s theory of indicatory functions [Dretske, F. (1988). Explaining behavior: reasons in a world of causes. Cambridge, MA:
MIT/Bradford; Dretske, F. (1994). A recipe for thought. Originally published as ‘‘If You Can’t Make One, You Don’t Know How It
Works.’’ In P. French, T. Uehling, & H. Wettstein (eds.), Midwest studies in philosophy: Vol. 19. Reprinted in D. J. Chalmers (2002) (pp.
468–482).] is undoubtedly one of the more ambitious attempts to articulate a sound naturalistic foundation for an adequate theory of
intentional content. In what follows I argue that, contrary to Dretske’s explicit intentions, his theory fails a crucial adequacy test – that of
accounting for mental content as a system-intrinsic property. Once examined in light of the first-person perspective of an embodied psychological agent, I argue, it becomes clear that neither ‘indication’, nor ‘function’, as used by Dretske, can be consistently applied. Dretske’s theory of indicatory functions is, thus, doubly incoherent. It is then argued that the problems identified here stretch far beyond
Dretske’s specific theory – covering the better part of contemporary attempts to naturalize content. I conclude by suggesting that these
general problems of representation, exemplified so vividly in Dretske’s theory, also testify to the inadequacy of the quest to reduce teleological phenomena (function and purpose) to predominantly mechanistic variables.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Dretske; Epiphenomenalism; Function; Indication; Intrinsic intentionality; Self-organization; Teleology; Teleonomy
1. Introduction
Contemporary research on mental content is dominated
by the conviction that an adequate account of intentionality, that property of mental states whereby they represent
conditions (objects, properties, events, processes, places,
and situations – both real and imaginary) external to themselves, ought to be naturalistic. Call this the naturalistic criterion.1 Most workers in the field (though by no means all
of them) adhere to an additional adequacy criterion: An
appropriate naturalistic explanation of mental content is
one that explains the intentionality of a mental state as
an intrinsic property of the system in which it is embedded.
On pain of explanatory regress, the idea goes, mental states cannot derive their intentionality from some external source (an interpreter, programmer, observer, and so forth); they must be intentional in their own right. Call this
the intrinsicality criterion.2
E-mail address: shanii@social.wits.ac.za
1 For our present purpose, it is enough if we understand the naturalistic criterion as requiring that intentionality be explained as an integral part of the natural world, without invoking entities or processes that cannot be so integrated. For an overview of naturalism emphasizing its integrative aspect see Hooker (1995); for a comprehensive overview of naturalistic approaches in epistemology and philosophy of science see Kitcher (1992).
2 On the distinction between intrinsic and derived intentionality see, for example, Haugeland (1981), Searle (1992, pp. 78–82), and Shani (2005). Adherents of the intrinsicality criterion include, among others, Bickhard (1993), Block (1990), Fodor (in Dennett, 1987, p. 288), Harnad (1990), and Millikan (1989). A notable critic of the intrinsicality criterion is Dennett (1987, 1996).
doi:10.1016/j.cogsys.2006.06.001
In his book Explaining Behavior (1988) Fred Dretske
sets himself the ambitious task of articulating a theory of
mental content that satisfies both criteria. In addition, Dretske hopes to satisfy another important requirement: that
the theory will explain how the contents of mental states
are relevant for behavior. Call this the ‘causal relevance criterion’. Dretske’s strategy is to explain the content of mental states in terms of their indicatory functions. The
reduction of content to indicatory function is meant to
achieve this tripartite goal of satisfying the naturalistic criterion, the intrinsicality criterion and the causal relevance
criterion.3
Recently, Dretske's claim to have satisfied the causal relevance criterion has been extensively criticized (Baker,
1991; Bickhard, 2003; Block, 1990; Kim, 1991; Saidel,
2001; Stampe, 1990). The central charge against Dretske
is that his theory identifies the properties that determine
the contents of mental states with historical properties
while mental causation depends on
presently effective properties. This unfortunate feature of
the theory, the critics argue, yields the discouraging consequence that the contents of mental states are inert at the
time intentional actions are taking place. The ultimate
result, then, is that it is not in virtue of their contents that
mental states are causally relevant for behavior – the content-constitutive properties of mental states are
epiphenomenal.
In this paper, however, I would like to pursue a far less trodden path, and to concentrate on Dretske's failure to
satisfy the intrinsicality criterion.4 While the epiphenomenalism charge targets the failure of Dretske’s theory to
account for the causal efficacy of content, I will attempt
to establish the claim that in addition, and contrary to Dretske’s explicit purpose, his theory falls short of sustaining
intrinsic intentionality. Thus, I argue that neither Dretske’s
notion of ‘indication’ (see Section 3), nor his notion of
‘function’ (Section 4), can do the work they are expected
to do, namely, sustain a notion of mental representation
that makes functional, and epistemic, sense from the first-person perspective of a genuine cognitive agent. On close
examination, then, it becomes clear that Dretske’s ‘‘recipe
for thought’’ (Dretske, 1994) is, at best, a recipe for derived
intentionality, and that it does not account for the possibility of intrinsically functioning, intrinsically informative,
mental states. It is then argued (in Section 5.2) that the epiphenomenalism of Dretske’s theory is but a mirror image
of this basic failure to model intentionality as a system-intrinsic phenomenon. In addition, I argue (Section 5.1)
that the charges advanced against Dretske can be extrapo-
3 As mentioned in the next section, there exists yet another criterion
Dretske aims to satisfy – the ‘misrepresentation criterion’.
4 This is not to suggest that the problem has gone completely unnoticed. Bickhard (2003), for example, pays attention not only to the
epiphenomenalism, but also to the intrinsicality problem immanent in
Dretske’s theory. My discussion of the intrinsicality criterion and its
significance owes much to Bickhard’s work, there and elsewhere.
lated so as to apply, with equal force, to the majority of
currently existing attempts to naturalize mental content.
Dretske’s failure to satisfy the intrinsicality criterion is,
then, but a special case of theoretical malfunctioning on
a more general scale. More positively, I indicate (in Section
5.3) how these problems might be avoided by taking a
novel approach towards the question of representation,
in particular by taking representation to be an emergent
aspect of dynamic self-governing. The paper concludes
(Section 5.4) with the diagnostic suggestion that the failure
to satisfy the intrinsicality criterion, as exemplified in Dretske's theory, is related to a neglect of the
irreducible role played by self-organization in the construction of biological and mental functions. Far from being
coincidental, such neglect, I argue, is shared by many and
is motivated by a general metaphysical commitment to a
mechanistic picture of reality, a commitment which leaves
no room for genuine self-organization and self-governing,
and, ipso facto, no room for genuine function and purpose
(biological or mental). It follows that in order to solve the
intrinsicality problem, and to secure a place for teleological
phenomena within the general order of things, we need to
take a fresh look at this deep-seated commitment.
2. Dretske’s theory of indicatory functions
The theory offered in chapter 4 of Explaining Behavior is
proposed as an improvement over Dretske’s earlier theory
of mental content presented in his book Knowledge and the
Flow of Information (1981, see his 1986 for an early version
of the new account). In his earlier book, Dretske formulated the canonical account of what came to be known as
‘Information Semantics’ (IS). The basic insight of IS ties
content individuation to information and information to
lawful, or counterfactually supportive, indication. A signal
s is said to carry the information that ‘a is F’ if, and only
if, it (lawfully, counterfactually) indicates that ‘a is F’. Indication is, in turn, explained by recourse to the notion of
reliable correlation: a condition (state, event) C1 indicates
another condition C2, if, and only if, C1 is reliably correlated with C2 (typically via a causal connection).
However, as early as his 1981 account Dretske was well
aware of the limitations of this reductivist program. Reducing semantic content to strictly information theoretic terms
yields a conception of meaning too far removed from
ordinary mental content, as commonly conceived, to do
justice to some of its core characteristics. In particular,
Dretske came to realize that IS is not well poised to solve
the problem of misrepresentation. Reliable correlation is a
factive notion whereas representation is normative. A correlation may obtain or may not obtain, but it cannot
obtain properly or improperly. By contrast, one could represent adequately as well as inadequately, one could, for
example, say, or think, something that is wrong, provide
an inaccurate description, assume a misguided assumption,
etc. Consequently, the ability to explain the possibility of
misrepresentation, of representational error, must be
considered an additional adequacy criterion for theories of
intentionality (call this ‘the misrepresentation criterion’).
For all its elegance, a pure information theoretic semantics
is ill equipped to handle the problem of misrepresentation.5
Dretske’s solution to this chronic inability to account
for misrepresentation within the confines of a pure IS theory was to incorporate his (information theoretic) account
of natural signs within a teleonomic, functional, theory of
content. The basic idea is that what confers on a natural
sign the status of a fully accredited representation (a
‘belief’, as Dretske puts it) is the fact that it has the function
of indicating what it naturally indicates. More precisely,
Dretske’s idea can be schematically represented as follows.
An inner state C constitutes a belief to the effect that ‘o is
F’ if, and only if,
(i) C reliably indicates that o is F.
(ii) There is a system S, of which C constitutes a functional proper part, such that C’s function in S is to
indicate that ‘o is F’.
An additional core assumption of this basic model is
that what elevates C from a mere natural sign to the rank
of a functional indicator is the fact that
(a) C has been selected, via a learning process, to play a
specific causal role in S. And
(b) C was selected to play the role it does because of its
indicatory properties (i.e., in virtue of being a natural
sign).
These basic assumptions, then, are already present in the
theory defended in Knowledge and the Flow of Information,
but they are developed to full maturity in Explaining
Behavior.6
Unlike the earlier theory, Dretske's later theory is biologically oriented and its kernel consists of an articulated attempt to explain mental contents as a subspecies of biological functions. In a nutshell, Dretske's proposal
consists of the idea that the key towards a successful naturalization of mental content lies in identifying the content
of mental states with their indicatory functions. Dretske
argues that in order for an inner state to possess intrinsic
content, to be causally relevant for behavior, and to satisfy
the misrepresentation criterion, it is not enough that it be indicative. Rather, the state must function as an indicator, and must do so for the system in which it is embedded.
His concern with the causal relevance criterion leads
Dretske to insist that ‘‘the fact that something has meaning’’ be ‘‘a causally relevant fact about that thing’’ (1988,
p. 80). More precisely, Dretske deals with the causal relevance problem by postulating that what a mental state is
doing, its current causal role in the system, must be such
that it is causally explained, in part, by what the state indicates, i.e., by its semantic value. Suppose, then, that C, a
mental state, causes some motor output M. If we ask what
is the immediate, triggering, cause of M, says Dretske, our
causal explanation will not refer to C’s semantic properties
but, rather, to its physical, neurophysiological, properties.
What, then, is the causal relevance of C’s semantic character? Dretske answers that its relevance lies in the fact that it
partakes in a causal explanation explicating why C is wired
in the system’s mental economy in such a way that, under
some definite circumstances, it triggers M. In short, C’s
semantic profile is causally relevant for behavior in the
sense that it is a structuring cause of the C → M connection: the fact that C indicates F is (partly) responsible for the crystallization of the C → M triggering causal pattern
(see Fig. 1). When this happens, when C gets a handle on
the steering wheel of behavior and establishes itself as a
triggering cause of M, and does so in virtue of the fact that
it indicates F, then, Dretske argues, C acquires the function
of indicating F, and thereby represents F.
In this way Dretske also hopes to satisfy the intrinsicality criterion and the misrepresentation criterion. First, by
acquiring an indicatory function within the system, S, in
which it is embedded, C also acquires an intrinsic semantic
significance. Second, since C’s function is to indicate F and
not, say, G, a tokening of C in the presence of G would
qualify as a misrepresentation.
At this juncture, there are two things that need to be
emphasized about Dretske’s interpretation of the elusive
notion of ‘function’. First, according to Dretske, ‘function’
5 Even Fodor, who disapproves of the appeal to biological functions as a means of solving the misrepresentation problem, had to thicken his own information-theoretic account of content with auxiliary assumptions (i.e., his celebrated asymmetrical dependency principle) in order to deal with the problem (see Fodor, 1987, 1990).
6 Dretske's early solution to the problem of misrepresentation (1981, chap. 8) was abundantly criticized (a notable example is Fodor (1984)), and before long he himself renounced it (for a sympathetic appraisal, however, see Sterelny (1990, chap. 6)). Dretske's later account (1988, 1994) differs from the account presented in Knowledge and the Flow of Information in two primary respects: it includes an improved account of the learning process involved in the acquisition of inner states with indicatory functions, and it explicitly identifies indicatory functions as biological functions.
Fig. 1. Structuring cause (adapted from Dretske (1988)). [Diagram: C → M is the triggering causal connection; C's indication of F is the structuring cause of that connection.]
is a diachronic, rather than a synchronic, notion. Namely,
what makes an inner state C function as an F indicator for
a system S is not the fact that indicating F is C’s present
causal role in S; rather, what makes C function as an F
indicator is the fact that it has been selected for performing
the causal role it currently performs in virtue of the fact
that it indicated F. Dretske’s notion of function is thus
selection-dependent, in a way that resembles other teleonomic theories of content such as Millikan's (1989). Second,
Dretske concentrates on an ontogenetic, rather than a phylogenetic, modeling of function. That is, he is not concerned with the selection of traits on an evolutionary
time scale, but, rather, with the developmental selection
characteristic of individual learning processes. In this
respect, his theory differs significantly from Millikan’s.
It is a widespread phenomenon among plants and animals that a behavior M is triggered by an inner state C that
was naturally selected for (in virtue of) indicating an external condition F. In noctuid moths, for example, there exists
an evolutionarily established contingency pattern between
bat sensing and bat avoidance behavior. Still, Dretske
argues, such an evolutionarily shaped (genetically determined) pattern does not confer on C the status of a belief
whose content is F. In order to qualify as a belief (hence
as a genuine mental state), it must be the case that the fact
that C indicates F will actually partake in an individual
process of behavioral modification, culminating in the crystallization of C as a cause of M. Such a qualification rules
out simple tropistic, and instinctive, behavior but it is perfectly satisfied by a relatively simple process of operant
learning.7
Thus, consider a rat that learns to press a bar (M) when,
and only when, a certain tone (C) is heard. The correlation
between enacting M upon hearing C, and the rewarding
experience of feeding, leads to the crystallization of a
behavioral pattern in which hearing the tone becomes a
cause of, or a switch for, the behavioral output. In this
learning process, Dretske argues, C is recruited as a cause
of M, because of what it indicates about F, the external
condition on which the success of M depends. The learning
process selects C as a cause of M, and does so in virtue of
the fact that C indicates F. The indicatory profile of C,
thus, becomes a structuring cause of the C → M contingency pattern. C ‘‘gets a hand on the steering wheel’’ (Dre-
7 Dretske's main reason for denying that natural selection confers on
individual indicatory states the status of a belief seems to be this. If
reasons (intentional attitudes) are to qualify as causes, they must be the
causes of individual actions. But natural selection, Dretske argues, does
not explain individual actions; it only explains why certain types of causal
dependencies between inner states and behavioral outputs exist (they were
selected). Thus, Dretske concludes, ‘‘one must look to systems whose
control structures are actually shaped by the kind of dependency relations
that exist between internal and external conditions. The places to look for
these cases are places where individual learning is occurring, places where
internal states acquire control duties or change their effect on motor output
as a result of their relation to the circumstances on which the success of
this output depends (1988, p. 95).’’
tske, 1988, p. 101) due to the fact that it indicates F;
indicating F becomes its function.
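Dretske's recruitment story can be caricatured computationally. The following toy sketch is entirely my own illustration, not Dretske's formalism; the function name `operant_recruitment` and all parameter values are invented for the example. It shows the bare shape of the idea: a tone C perfectly correlated with food F, and a bar-press M whose rewarding internal outcome strengthens the C → M contingency over trials.

```python
import random

def operant_recruitment(trials=1000, lr=0.2, seed=1):
    """Toy sketch (illustrative only): reward-driven recruitment of C as a
    cause of M. The tone C is perfectly correlated with food F; pressing
    the bar (M) upon hearing C yields an internal reward, which strengthens
    the disposition to press on hearing the tone."""
    rng = random.Random(seed)
    w = 0.05  # initial, weak C -> M disposition
    for _ in range(trials):
        f_present = rng.random() < 0.5   # external condition F obtains or not
        c_heard = f_present              # C reliably indicates F
        if c_heard and rng.random() < max(w, 0.1):  # act, with some exploration
            reward = 1.0 if f_present else -1.0     # internal outcome of acting
            w += lr * (reward - w)                  # reward-driven selection
    return w
```

After training, the contingency strength `w` approaches 1: the tone has, in Dretske's image, gotten a hand on the steering wheel. Note that in this sketch it is the internal reward signal, not the C–F correlation as such, that drives the strengthening, which anticipates the argument of Section 3.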
This, then, is how Dretske accounts for mental content
in terms of indicatory functions. In the remaining parts
of the paper I shall try to establish the conclusion that,
Dretske’s contention notwithstanding, his theory does not
satisfy the intrinsicality criterion. I shall argue that neither
Dretske’s notion of function, nor his notion of indication,
is consistent with a system-intrinsic conception of mental
content. Dretske’s theory, then, is doubly incoherent. First,
I argue that mental states cannot be intrinsically indicative
(or, intrinsically informative) in virtue of being indicative in
Dretske’s sense. Second, I argue that, contrary to Dretske’s
proposal, mental states cannot function as intrinsic indicators of external conditions in virtue of having been selected
for so indicating. I call these problems the first, and the second, incoherence problems, respectively.
3. The first incoherence problem: mental states cannot be
intrinsically indicative (or informative) in virtue of being
Dretske-indicators
As we have seen, Dretske accounts for indication in
terms of reliable correlation. What makes C informative
of F, then, is the fact that it ‘‘locks onto’’ (encodes) F.
The problem with this idea is that it rests on an irreconcilable third-person conception of ‘information’. From the
first-person perspective of the psychological agent that
owns C, the fact that this inner state corresponds to F is
epistemically vacuous: it is simply insufficient to generate
knowledge of F. No matter how reliable the correspondence between F and C is, S, the psychological agent, can
only access F via C (or some other mental states). Unlike
an external observer, S cannot observe the correspondence
from both ends and use the fact that it obtains as an independent source of knowledge. In the absence of such
knowledge, however, the situation is analogous to having access to the symbol string ‘‘-.’’ without knowing that it is the Morse code correspondent of ‘‘N’’: no knowledge
of ‘‘N’’ can be miraculously gained merely in virtue of the
fact that the correspondence obtains. If the fact that C corresponds to F is epistemically vacuous, however, if it yields
no intrinsically available information to the effect that it is
F that C stands for, then it cannot be taken as constitutive
of C’s being F-informative.
An analogous way to state the problem is this. The fact
that C encodes F is an extrinsic fact about C in the sense
that, in itself, it makes no difference to the internal causal
structure of the representation.8 Thus, C would be exactly
the same even if, instead of corresponding to F, it were to
correspond to F′, or even to nothing at all (note that the ‘‘-.’’ Morse code would be exactly as it is even if it were
not paired with the character ‘‘N’’). If C’s indicatory profile
8 The relation between C and F is, as it were, an external relation (for
more on the distinction between external and internal relations, and on its
significance for theories of content, see Bickhard (2003)).
makes no difference to its internal causal makeup, however,
then – since whatever C does it does in virtue of its causal
powers – the fact that C possesses this particular indicatory
profile can bear no impact on the manner in which it interacts with other mental structures (C′, C″, . . .) in S's cognitive space. But then the difference between C ∼ F and, say, C ∼ F′ (where ‘∼’ stands for ‘reliably corresponds to’) is
not a difference that can be detected elsewhere in the system, and hence not a difference that makes a difference to
the ongoing flow of S’s cognitive activity. As before, the
upshot is that the mere fact that C corresponds to F does
not engender information that can be used, let alone consciously apprehended, from the first-person perspective of
the system itself.
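The extrinsicality point can be made concrete with a small sketch (my own construction, not from the paper; the class name `InnerState` and the correspondence table are invented for illustration): the C–F correspondence lives in an observer's external mapping, so swapping what C "indicates" leaves C's internal causal profile untouched.

```python
# Illustrative sketch: correspondence is an external relation. The mapping
# between an inner state C and an external condition F sits in an observer's
# table, not in C's internal causal makeup.

class InnerState:
    def __init__(self, threshold, downstream_effects):
        self.threshold = threshold                    # when C fires
        self.downstream_effects = downstream_effects  # what C does in the system

    def causal_profile(self):
        return (self.threshold, tuple(self.downstream_effects))

# The observer's correspondence table, outside the system:
correspondence = {"C": "F"}

c = InnerState(threshold=0.7, downstream_effects=["excite-M"])
before = c.causal_profile()

correspondence["C"] = "F-prime"   # swap what C 'indicates'
after = c.causal_profile()

# C itself is untouched: no difference detectable from inside the system.
assert before == after
```

The design point: because the swap happens entirely in the external table, nothing downstream of C in the system could register it, which is just the "no news of difference" worry in code form.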
Now, one may attempt to resist this ‘‘argument from
extrinsicality’’ by holding that it misrepresents the idea
behind Dretske’s appeal to the notion of a structuring
cause. Recall that, according to Dretske, the indicatory
properties of mental states are causally relevant for behavior not because they act as efficient, triggering, causes but,
rather, because, and insofar as, they become structuring
causes of established neuro-motor contingency patterns.
What the argument from extrinsicality shows, the rejoinder
goes, is that mental states cannot function as triggering
causes in virtue of being Dretske-indicators, but since this
point is conceded by Dretske right at the outset it can
hardly be considered an effective criticism of his position.
Moreover, the idea behind the assumption that F can become a structuring cause within S's mental economy is, precisely, that it cannot be arbitrarily replaced with other potential correspondents (F′, F″, . . .). F being a structuring cause of C → M implies (a) that this contingency pattern
has been selected due to the fact that past activations of
it reliably corresponded with the presence of F; and (b) that
such past correspondences proved rewarding enough to
motivate the selection. Correspondingly, suppose F′ is a non-nourishing, obnoxious substance; then the activation of a C → M contingency pattern in the presence of F′
would result in a non-rewarding experience, which will,
in turn, activate a feedback learning process selecting
against future activation of this contingency pattern. Is it
not a mistake, then, to maintain that C’s indicatory profile
bears no impact on the manner in which it interacts with
other mental states?
The first objection, I believe, carries little weight.
Regardless of Dretske’s intentions, the question in front
of us is whether his theory succeeds in accommodating
the first-person perspective of real psychological subjects,
and the argument from extrinsicality suggests that it does
not. If this failure is due to Dretske’s assumption that the
only sense in which the semantic properties of mental states
might be causally relevant is by virtue of acting as structuring causes, then so much the worse for the assumption.
The second objection is, however, more serious, and it
deserves a more thoroughgoing consideration. To repeat,
the argument from extrinsicality purports to show that
since corresponding to F is an extrinsic fact about C this
fact is not reflected in C’s causal makeup and, perforce,
cannot, in itself, affect the manner in which C interacts with
other mental states within the system’s cognitive network.
It then drives at the conclusion that, since, from the first-person perspective, information must be available in the
form of discernible ‘‘news of difference’’ (Bateson, 1979),
correspondence relations are insufficient to generate information that could be effective from such a perspective.
The second objection challenges one of the premises of
the argument, namely, the assumption that the fact that a
C ∼ F relation obtains cannot, itself, affect the manner in
which C interacts with other mental states within S’s cognitive network. The apparent refutation consists in the fact
that the existence of such an indicatory relation is, presumably, a structuring cause of C → M. If ‘‘C is recruited as a cause of M because of what it indicates about F’’ (Dretske, 1988, p. 101), then, presumably, indicating F does translate into a specific effect on the manner in which C interacts with other components in the network.
What the objection fails to notice, however, is the misleading nature of the suggestion that C is selected as a
cause of M because of what it indicates about F. Recall that
the selection process to which Dretske refers is a learning
process, and, as such, a process in which the system itself
is an active participant. It is the system’s ability to respond
to signals (negative or positive, feedback or feedforward)
with novelty – with novel neural configurations and novel
dispositions for behavior – which makes learning possible,
and which underpins the selective recruitment of some contingency patterns over others. But if the system itself mediates the selection it follows that, whatever it may be, that
which causes C to be selected as a cause of M must be
something about C that the system can sense and value,
something that can motivate a selection. This is even more
conspicuous given Dretske’s explicit assumption that selection, via operant learning, for particular causal roles is the
key to the solution of the intrinsicality problem (see Section
2): if, as we now see, such a selection operates on variations
that the system itself must be able to discern, from its own
perspective, then the properties that are directly relevant
for selection must be properties that can be so discerned.
Yet, this is precisely what cannot be done when it comes
to C’s property of being in perfect correspondence to F:
nothing in this property per se can motivate internal
selection.
In order for there to be a selection favoring a systematic
activation of C → M, in correlation with F's presence, it is not enough that C ∼ F obtains, and that C gets to cause M
when F obtains; rather, what makes such positive selection
possible is the fact that activating C → M in F's presence
yields rewarding internal outcomes, outcomes which the
system can appreciate (i.e., recognize and evaluate) from
its own perspective. To put it otherwise, although F is, in
Dretske’s words, a condition ‘‘on which the success of M
depends’’ (ibid.), knowledge of the successfulness of the
act depends on the availability of internal outcomes delivering the good news. And since selecting M as a typical
behavior in F-infested environments causally depends on
such knowledge it follows that it is the internal outcomes
that are directly responsible for the selection.
The point, then, is that, in order to affect the manner in
which C interacts with other components in the network so
as to produce successful adjustments (successful learning),
correspondence is insufficient. For that, we need correspondence and internal interaction outcomes, and it is the outcomes, and not correspondence per se, which motivate
C’s recruitment as a cause of M. Thus, considered on its
own merits the fact that C reliably corresponds to F bears
no traces which could be discerned elsewhere in the system,
it yields no news of difference that make a difference, no
information to work with.
To recapitulate, what both arguments (the epistemic
vacuity and the extrinsicality argument) show is that Dretske’s assumption that reliable indication is constitutive of
semantic significance is untenable. If it were, then the mere
fact that C is a Dretske-indicator of F would have been sufficient for making it intrinsically informative for S, provided that it is appropriately wired in S’s cognitive
makeup. But what the arguments show is that the one
thing that the postulation of a symbol-world correspondence relation does not explain is how any symbol could
function, intrinsically, as a representation in virtue of the
fact that it stands in such correspondence relations to some
external items – no matter how well it is wired in the system. This, then, is the first incoherence problem (for more
on the epistemic incoherence of informational encodings
see Bickhard, 2000b, 2003; Edelman & Tononi, 2000, chap.
11; Shani, 2005). The gist of the critique advocated here is
also hinted at in Piaget's argument against ‘‘copy’’ theories of
knowledge (1970, p. 15).
4. The second incoherence problem: mental states cannot
function as intrinsic indicators of external conditions in virtue
of having been selected for their indicative properties
But the problems with Dretske’s proposal run deeper
than the commitment to an epistemically untenable notion
of indication. I shall now argue that not only is Dretske’s
notion of indication unsuitable for the task of accounting
for intrinsic intentionality, his notion of function is equally
inept.
The import of the first incoherence problem is that the
notion of ‘‘indication’’ Dretske employs fails the intrinsicality criterion and that for this reason, and on Dretske’s
own terms, it is ill suited to serve the purpose of articulating an adequate naturalistic account of mental content.
This means that if a theory of content is to employ the term
‘indication’ as a useful explanatory construct (and why
not? After all, representation presupposes some form of
indication. . .) it must invest it with a different sense.
A useful hint as to what such a concept may be can be
found in Dretske’s stock example of the rat that learns to
press a bar upon hearing a certain tone. When the rat in
this experimental setting learns to press a bar in response
to a sound stimulus, it learns to associate the stimulus,
and the behavioral output, with anticipation of feeding.
There is, then, a sense in which the stimulus, once so associated, indicates the prospect of feeding, indicates the likelihood of such an interaction outcome.
Interpreting ‘indication’ in this sense is conspicuously
opposite to Dretske’s own interpretation. As we have seen,
Dretske uses the term to denote reliable covariance, of the
sort exemplified by natural signs, where the signal invariably follows the external condition it is said to indicate.
Under this interpretation, indication is reactive, consisting
of an ‘‘upstream,’’ signified-item-to-sign, arrow. By contrast, on the alternative interpretation suggested here ‘indication’ is essentially proactive (or, enactive); it implies an
anticipation of possible future interaction outcomes hence
an opposite ‘‘downstream,’’ sign-to-signified-item, arrow.9
No doubt, a commitment to such an alternative notion
of indication entails its own questions and problems. In
particular, it raises the question of how the rich texture of our ordinary representations of the world around us can be constructed out of such seemingly primitive, and thoroughly action-oriented, information (while Akins (1996) is a skeptic, detailed attempts to provide a positive solution to the problem are suggested in Bickhard (1993) and Shani
(in press)). However, insofar as the intrinsicality criterion
is concerned, the alternative on offer carries a clear advantage: unlike the property of being in reliable correlation
with a given external item, the property of anticipating
an interaction outcome is internally accessible, and, as such,
it can, and does, affect the internal selection of contingency
patterns.
Thus, for example, encounters with a nutritious food
of type F yield different internal outcomes than, say,
encounters with samples of the obnoxious substance G
in the simple sense that, when completed, they leave
the system in one final state rather than in another –
say, X rather than Y. Suppose now that, as in the rat
experiment, the possibility of arriving at X is associated
with the occurrence of a certain stimulus C in the sense
that, when C occurs, it becomes possible for the system
to engage itself in an action that yields X (by doing
M, for example). Before long, the system may come to
anticipate X upon C’s occurrence, and (given X’s desirability) to engage itself in behavior conducive to X. In
other words, C may come to proactively indicate X. It
is worth noticing that in this case it is one internal state,
C, that indicates another internal state, X; and since
both states are, as it were, written in the flesh, the problem of accounting for the first-person significance of the
indication becomes tractable.
9 For more on the distinction between upstream and downstream signaling see Collier and Hooker (1999). For the distinction between reactive and enactive approaches to the mind see, for example, Newton (2000).

It might seem that proactive indications are limited to the system’s interior and, therefore, that they are ill suited to represent the external world, yet such a judgment is premature. Notice that X, the indicated interaction outcome,
depends not only on the system’s actions but also on the
environment: F-infested environments will support the possibility of arriving at X, while G-infested environments will
fail to do so. In other words X, and the actions capable of
yielding X, dynamically presuppose F’s existence: a successful arrival at X, via M, ontologically depends on the environment ‘‘cooperating’’ by manifesting the properties that
constitute F (the availability of food), and an X-conducive
behavior presupposes such a ‘‘cooperation.’’ There is a
sense, then, in which X implicitly categorizes some environments as X-type environments, environments in which this
interaction outcome is in fact possible.10 Thus, indicating
the availability of an interaction outcome is also, indirectly,
an indication that the environment supports this outcome:
indicating X is indirectly, and implicitly, an indication of F.
However primitive, such categorizations or predications,
constructed within the system as structural changes
brought about by interactions, provide the system with
valuable information about its external surroundings;
information that might be false, should the environment
fail to ‘‘cooperate,’’ and whose falsity might be detected,
should the interaction fail to achieve the expected outcome.
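The anticipatory mechanism just described, in which an internal state C comes to indicate an interaction outcome X reachable by doing M, and in which the indication’s falsity is detectable when the environment fails to ‘‘cooperate,’’ can be rendered as a toy simulation. The sketch below is purely illustrative; the class name, update rule, and numerical constants are my own assumptions, not part of Dretske’s example or of the interactive model.

```python
import random

class ProactiveAgent:
    """Toy model of proactive indication: internal state C comes to
    indicate (anticipate) an interaction outcome X reachable by doing M."""

    def __init__(self):
        self.expectation = 0.0      # learned strength of the C -> X anticipation
        self.errors_detected = 0    # defied expectations (negative feedback)

    def episode(self, stimulus_c: bool, food_present: bool):
        if not stimulus_c:
            return None
        # C has occurred; act (do M) if C sufficiently indicates X,
        # with a little exploratory behavior early in learning
        act = self.expectation > 0.5 or random.random() < 0.3
        if not act:
            return None
        outcome_x = food_present  # the environment must 'cooperate' (F available)
        if outcome_x:
            # success strengthens the anticipation
            self.expectation += 0.1 * (1.0 - self.expectation)
        else:
            # the environment failed to cooperate: if X was anticipated,
            # the failure is a detectable representational error
            if self.expectation > 0.5:
                self.errors_detected += 1
            self.expectation -= 0.1 * self.expectation
        return outcome_x

random.seed(1)
agent = ProactiveAgent()
# F-infested environment: interactions succeed, C comes to indicate X
for _ in range(200):
    agent.episode(stimulus_c=True, food_present=True)
assert agent.expectation > 0.5
# G-infested environment: the same indication is now false, and its
# falsity is detected through the failed interaction
agent.episode(stimulus_c=True, food_present=False)
assert agent.errors_detected == 1
```

Nothing in this simulation appeals to a selection history to define what counts as success or failure: the anticipation, its usefulness, and the detection of its falsity are all constituted by current system states and current system–environment interactions.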
With the emergence of complex webs of interconnected
indications the power, articulation, and scope of the
knowledge at the system’s disposal, and its ability to learn
from its own successes and failures, may grow exponentially.11 Finally, note that, unlike Dretske’s passive correlates, the environmental conditions dynamically
presupposed by proactive indications are internally (i.e.,
essentially) related to those indications: X could not be
the internal outcome that it is were it not for F.12 Thus,
while attending to the internal, first-person, components of
representation, this alternative, proactive, model of indication seems well poised to address the question how representational knowledge relates to the external world.
10 For more on implicit indication and dynamic presupposition see Bickhard (2000b, 2003).

11 The idea that knowledge of the external world is literally constructed as structural effects brought about by interactions can be found, in one way or another, in the writings of otherwise diverse thinkers such as Damasio (1999), Edelman and Tononi (2000), Gibson (1979), Maturana and Varela (1980), and Piaget (1954).

12 This may seem unfair to Dretske given that he, too, is presupposing that the availability of F is necessary for C’s being what it is; however, the point is that, on Dretske’s account, C is F-informative because, and only because, of the correspondence between the two, and, as mentioned in Section 3, correspondence per se does not affect the properties manifested by the correspondents. By contrast, on the assumption that information about the external world is constituted by dynamic interactions it follows that the intrinsic properties of an environment are constitutive of the intrinsic properties manifested by the internal states representing that environment.

But, although substituting this action-oriented notion of indication for Dretske’s original proposal solves the first incoherence problem, Dretske’s theory of content faces another major obstacle, an obstacle that cannot be amended merely by inducing this (in my opinion) necessary
substitution. For no matter what notion of indication you
care to employ, no mental state can function as an intrinsic indicator in virtue of satisfying the conditions that, according to Dretske, grant it a functional status. In other words,
Dretske’s notion of function is a veritable blind alley on its
own merit.
As mentioned before, despite the fact that Dretske’s
indicatory functions are modeled on a developmental,
rather than an evolutionary, time scale, the model he offers is teleonomic, or selection-dependent.13 This means that what confers a functional status on an indicatory state is not the causal role performed by that state, but the fact
that it has been selected (via learning) for playing this
role. Yet this selection-based explanation of indicatory
functions yields a notorious circularity problem: as we
shall see shortly, in order to be selected as an F-indicator,
C must first function as an F-indicator; so, rather than
constituting its status as an intrinsic functional indicator,
C’s selection as an F-indicator presupposes such intrinsic
functioning.
Consider, again, Dretske’s example of the rat that learns
to press a bar (to do M) upon hearing a certain tone (upon
undergoing a perceptual state C). On Dretske’s account, C
acquires the function of indicating F (the availability of
food) only after the C → M contingency pattern has been
solidified. Yet, this gets things backwards. Recall that when
a rat learns to press a bar in response to a sound stimulus it
learns to associate the stimulus, and the possible behavioral
output, with anticipation of feeding, an interaction outcome towards which, under normal conditions, the animal
is motivated. Without the anticipation, there is no basis for
motivational arousal, and without such motivational and
emotional factors there is no basis for appraisals of failure
or success, hence, no basis for learning.14 But if hearing the
sound invokes anticipation of feeding, which in turn exerts
control on the rat’s behavior, then it transpires that C
already functions as an X-indicator, and derivatively as
F-indicator, prior to, and in a way that is presupposed
by, the successful termination of the learning process.
Thus, C’s selection as an F-indicator presupposes a selection-independent notion of functional (functionally useful)
indication.
To reassure ourselves of the validity of this claim, I suggest we examine it in the light of two key parameters: causal efficacy and normativity.
13 Teleonomic theories of content are known by other names too, e.g., ‘teleological’, ‘etiological’, ‘historical’ or ‘proper function’ theories. My choice of the term ‘teleonomic’ is explained in the concluding section.

14 In recent years there has been a growing acknowledgment of the essential role played by motivational and emotional factors in learning and other cognitive processes; see, for example, Bickhard (2000c), Christensen and Hooker (2001), Damasio (1994), Edelman and Tononi (2000), Faw (2000), Mook (1996), and Montague, Dayan, Person, and Sejnowski (1995).
• Causal efficacy: To begin, note that all the causal tasks performed by C in the post-selection period are essentially in place prior to the establishment of a standardized C → M connection. C’s selection for the task of indicating whatever it is that it indicates is but a stamp of approval on a successful performance that takes place before the learning process achieves its closure, yet the performance itself remains essentially unaltered. But if performing all the causal tasks (including all the causally indicative tasks) of the post-selection period at the pre-selection phase is not enough to confer on C the status of a functional indicator, it seems inevitable to conclude that the functional relations Dretske hypothesizes are epiphenomenal.
• Normativity: Invoking functions in attempts to explain
representation is a popular move among naturalists largely because it carries a promise of accounting for the
normative dimension of representation, including, in
particular, the possibility of misrepresentation. As we
saw in Section 2, Dretske is no exception. Yet the claim
that selection constitutes functionality, hence normativity, is rather dubious. Even prior to its selection as an
invariant cause of M, there is a clear sense in which
C’s causing M, thereby leading to interactions bent on
yielding X, is good for S (the system): sure enough, consuming nutrients is essential for S’s survival and healthy
functioning. By indicating that X is likely to be realized
(hence indicating that F is about to obtain), and that M
is likely to lead to such a realization, C contributes to S’s
well-being. Such a contribution carries normative significance for S, since in order to maintain its viability, its
survival and ongoing self-maintenance, the system must
make sure that the conditions on which its viability
depends continue to hold, and whatever contributes to
the satisfaction of those conditions is intrinsically good
for the system.
Nor will it do to maintain that understanding functions,
and functional normativity, in terms of contribution to the
maintenance of an organic whole illuminates only pragmatic aspects of normativity but that it fails to shed light
on the alethic (truth-related) aspects which preoccupy Dretske. Rather, this selection-independent notion of function
relates directly to the problem of misrepresentation. Recall
that, on the proactive model, representational error occurs
when the environment falls short of supporting the interaction possibilities indicated by a given intentional state, that
is, when it does not manifest the properties that sustain the
success conditions of the anticipated interaction outcome.
Representational error, then, is, first and foremost, a defiance of expectation; it constitutes a hindrance to the system’s Sisyphean effort to make its way in the world; it
constitutes a functional failure. Thus, misrepresentation is
a specific form of malfunctioning inasmuch as the prospects indicated by a misrepresenting intentional state are
ungrounded, making that state ill-equipped to contribute
to the system’s collective effort to maintain itself, and to
orient itself in its social and natural environments. Moreover, it is precisely because it constitutes a malfunction,
because it defies expectations and hinders prospective
self-regulation, that error can be detected via negative feedback, and that corrective measures utilizing such feedback
might ensue. Finally, note that nothing in this explanation
requires selection to account for the emergence of misrepresentation; representational error is totally constituted in
current system states and in current dynamic patterns of
system–environment interactions.
Thus, our alternative account of functions, and of functional indication, enables us to concur with Dretske’s claim
that representation is a specific form of function, an emergent sub-species of the genus of naturalistically explicable biological phenomena. At the same time, it denies Dretske’s
contention that indicatory functions are selection-dependent, thereby avoiding epiphenomenalism and the circularity inherent in the idea that selection constitutes functional
relations.
Taking a broader look at the problem in front of us – the
incoherence of selection-based explanations of indicatory
functions – we may note that the same lesson applies, mutatis mutandis, to selection-based theories of function in general, and, in particular, to the popular view that natural
selection confers functional status on biological traits.
According to selection-based theories, it is only after there
has been a selection ‘‘for’’ a trait T that T can be considered
functional; selection constitutes functionality, and functional normativity.15 But this obscures the fact that, in order
to be selected, T must first contribute to the adaptability, or
ecological competence, of certain individuals such that
these individuals will perform, on average, better than other
conspecifics and, as a result, will have an improved fitness
rate. Such contribution to individual ecological performance is presupposed by selection and therefore, on pain
of regress, cannot be explained as its outcome; and yet, it
is functional and normative par excellence.
First, note that, in this case too, all the causal capacities
that the selection-based explanation ascribes to T in the
post-selection period are already at play at this pre-selection
stage. Consider an example. Some marine invertebrates (e.g., rotifers, barnacles, and bryozoans) have developed an irreversible adaptive response to predation. Usually, that is, under normal conditions, they take the form of a typical morph, but when exposed to nearby predators they can rapidly, and irreversibly, change their appearance into an alternative, atypical, morph, or produce progeny with such non-standard appearance. The predator-induced morph lowers mortality rate in predator-infested environments and thus has a higher fitness in those environments, but in predator-free environments it has lower fitness (Dukas, 1998). According to selection-based theories such as Millikan’s (1984, 1989), the structures responsible for these defense strategies in the tiny marine creatures have the function of protecting the creatures because they were selected for doing so. But this obscures the fact that in order to be selected ‘‘for’’ the task, the relevant causal structures must have already been at work – serving their owners by reducing mortality rate in predator-infested environments.

15 The distinction between selection of a trait and selection for (i.e., in virtue of) a trait is due to Sober (1984).
The moral, then, is that the adaptive performance to be
selected is already there in its entirety prior to the culmination of the selection process (first the performance, then the
reward. . .). But, if performing all the causal tasks of the
post-selection period at the pre-selection period is not
enough to confer on T the status of a functional trait, then,
as in Dretske’s case, epiphenomenalism seems inevitable
(cf. Christensen & Bickhard, 2002a; Saidel, 2001).
Second, since in this pre-selection stage T is useful to the organisms in which it is embedded, it already carries normative significance for those organisms. Thus, consider the
manifestation of, and reaction to, alarm signals such as tail
splashing in beavers’ populations, or various vocalizations
in vervet monkeys. These signals, and their characteristic
modes of usage, were selected because they proved to be
useful, ecologically competent, patterns of behavior, contributing to the survival and stable sustenance of individuals and populations. But the usefulness of such signals in
their contexts of application constitutes a normative
dimension that is, again, selection-independent. There is a
clear sense in which it was good for beavers to splash their tails and for vervet monkeys to make their calls even before
these behaviors were selected across the populations. It follows that selection presupposes normative relations, and,
insofar as selection-based accounts appeal to selection as
the putative source of norms, they are inconsistent.16
It transpires, then, that the problems of epiphenomenalism, and of normative inconsistency, that haunt selection-based theories of biological functions stem from a neglect
of a crucial fact about biotic evolution. Natural selection
operates on variability in individual (or group) performance between systems that already possess a degree of
functional organization, and that are already equipped
with inner states capable of making some contribution to
the incessantly self-preserving (i.e., functional) causal organization of their owners. To put it otherwise, in order to be
a participant in the game of natural selection you have to
be able to reproduce, maintain homeostasis, and compete
for resources; but a physical system, which must, and
can, maintain itself via resource acquisition, self-recuperation and self-reproduction – an autonomous agent – is
already a clear exemplar of a functional system.17
16 Various authors have made the claim that selection-based theories of function presuppose a more fundamental, selection-independent, notion of function. Examples that may be cited here are Bigelow and Pargetter (1987), Bunge and Mahner (1997, chap. 4), Christensen and Bickhard (2002a, 2002b), McIntosh (2001), and Stotz and Griffiths (2002).

17 For some accounts of autonomous agency see Bickhard (2000a), Christensen and Hooker (2000), Gibson (1994), Kauffman (2000), Smithers (1995), and Ulanowicz (1986).
Defenders of selection-based theories of function might
respond by arguing that such functional systems are ultimately assembled by the operation of natural selection on
simple, non-functional, template replication mechanisms,
hence that, in the final analysis, selection does generate
functions. Yet significant developments over the last decades in the study of biological systems as complexly organized dynamical systems cast doubt on this standard neo-Darwinian dogma. As is intimated in the works of Eigen
(1971), Kauffman (1993, 1995), Margulis (e.g., Margulis
& Sagan, 1986), Maturana and Varela (1980), and others,
the very emergence of life presupposes holistic self-maintenance of the sort exemplified by collectively autocatalytic
macromolecules. The point is that such self-maintaining
systems – predating the emergence of the double helix –
were already functionally organized and capable of prebiotic evolution. On this view, then, self-organization plays
an essential, irreducible, role in the construction of biological order. As Kauffman puts it:
‘‘[M]uch of the order in organisms, from the origins of
life itself to the stunning order in the development of a
newborn child from a fertilized egg, does not reflect
selection alone. Instead, much of the order in organisms,
I believe, is self-organized and spontaneous. Self-organization mingles with natural selection in barely understood ways to yield the magnificence of our teeming
biosphere. We must therefore expand evolutionary theory’’ (2000, p. 2).
To conclude, the upshot of the arguments advanced in
this section is that selection does not, and cannot, constitute the ultimate explanation of functional organization
and functional normativity. But if the emergence of functions, and of functional normativity, cannot be attributed
solely to selection processes, then, as a special case, it follows that mental states cannot acquire indicatory functions
merely in virtue of having been selected for their indicatory
properties. This, then, is the second incoherence problem.
5. Some theoretical implications
It is time to take stock. The discussion that follows
examines the conclusions that can be derived from the
arguments advanced in the last two sections. In addition,
a special emphasis is given to some broader implications
that might be drawn, using extrapolation and further analysis, from these more direct conclusions and that seem to
be theoretically significant for the general project of
explaining representational phenomena. These include
insights into existing pitfalls, and hints at the prospects
for a brighter future.
5.1. Intrinsicality and the incoherence arguments:
extrapolating beyond Dretske’s theory
I have argued that Dretske’s attempt to satisfy the
intrinsicality criterion fails. The reasons for the failure,
however, go well beyond Dretske’s own theory of indicatory functions. The moral of the second incoherence argument is that there is a problem with the very idea that
functions, and functional normativity, are selection-dependent. If the argument is cogent, then selection-based
theories of function presuppose a more fundamental, selection-independent, notion of function, and the consequences
for the thriving industry of explaining functions, and, a fortiori, representational functions, in terms of selection are
dire.
As for the first incoherence argument, the implications
are even broader. If the argument is sound, it not only
shows that information semantics is ill equipped to deal
with the problem of intrinsic intentionality, it also casts a
shadow on the entire enterprise of accounting for content
in terms of correspondence, or encoding, relations, thereby
taking to task almost all of contemporary naturalistic semantics.
5.2. Intrinsicality and causal efficacy
Moreover, we are now in a position to observe the connection between the popular charge against Dretske to the
effect that his theory implies epiphenomenalism, and our
own findings. From this vantage point, the epiphenomenalism of Dretske’s theory is a mirror image of the
basic failure to explain intentionality as a system-intrinsic
phenomenon. The link between intrinsicality and epiphenomenalism is intuitive enough. For a representation to
be system-intrinsic is for it to be capable of functioning as
a representation for the system in which it is embedded.
That is to say, if C is a representation embedded in a system S, and if C’s content is P, it must be the case that C
can function as a representation for S in virtue of its content; C’s content must be functionally available to S and
it must be capable of making a difference, a causal difference, to S’s thought and action. A theory whose prescriptions for content individuation yield contents that can be
neither accessed nor used by their owners is a theory whose
content assignments are necessarily causally inert, hence
epiphenomenal. Conversely, a theory whose prescriptions
for content individuation yield causally inert contents necessarily fails to explain intentionality as a system-intrinsic
property.
5.3. A hint on how to approach a solution to the incoherence
tangles
The popularity of teleonomic, selection-based accounts
of function and representation stems, to a large extent,
from the fact that many scientifically minded thinkers
(e.g., Dawkins, 1976; Dennett, 1987; Pinker, 1997) believe
it to be the only respectable way whereby the question of
purposeful behavior may be approached. Similarly, the
popularity of encoding-based accounts of content is rooted
in the conviction that this is the only way whereby the
question of representation may be approached. But if the
arguments presented here are along the right track, we
had better look for alternatives. A not too careful reading
between the lines of this critical essay reveals that it already
contains the seeds of a possible alternative. For it offers, in
passing
(a) thinking of function in terms of making systematic
contribution to the maintenance of an organic whole
instead of in terms of selective history, and
(b) thinking of indication (hence representation) in terms
of anticipation of (possible) interaction outcomes
rather than in terms of reliable correlation.
Nor is this alternative a mere hypothetical program.
Proponents of a dynamic systems approach to mental phenomena called ‘interactivism’ (Bickhard, 1993 and elsewhere; Christensen & Hooker, 2000) have developed, in
considerable detail, a theoretical account of representational content incorporating these basic insights (along
with significant theoretical tenets borrowed from complexity theory, developmental psychology, ecological psychology, pragmatism, phenomenology and more). On a more
general scale, it may be mentioned that contemporary cognitive science witnesses a steady growth in the popularity of
embodied, and action-oriented, theories of mind manifesting a significant degree of approximation to the ideas
defended here (Lakoff & Johnson, 1999, and Varela, Thompson, & Rosch, 1991, are but two of the more familiar examples). Although it is my conviction that this alternative way
of looking at the problem of mental representation possesses a decisive advantage over more popular theories
such as Dretske’s in that it offers a coherent solution to
the intrinsicality problem, the paper’s modest aim was simply to show how, and why, Dretske’s own solution fails.
Therefore, within the confines of the present discussion, I
refrained from making a fully systematic attempt to
explain, and defend, the interactive program.
5.4. Intrinsicality and self-organization: or, why attempts to
reduce teleology to mechanistic causation are bound to fail
Having concentrated on the difficulties enfolded in Dretske’s position, I would like to conclude with a general
observation regarding the connection between Dretske’s
failure to respect the intrinsicality criterion and his neglect of the constitutive role played by self-organizing dynamics in the construction of functions, and of functional indications.
In discussing the two incoherence problems, I argued
that the failure to model either indication, or function, as
system-intrinsic phenomena stems from a neglect of their
inherently dynamic, self-organizing, character. Functions,
I argued, ought to be understood as contributions to the
self-organizing dynamics of an autonomous organic whole,
and indications ought to be explained as functions of a special sort whose contribution to autonomy consists of environmentally sensitive, anticipatory, action-guidance. It is
with this picture in mind, I concluded, that we may hope to
overcome the difficulties faced by the traditional approach
to function and representation, of which Dretske is, without a doubt, a particularly well-spoken representative. In
the remaining pages I shall argue that there is a reason, a deep-seated metaphysical reason, behind the reluctance
of the traditional approach to take advantage of self-organization in attempting to account for biological, and mental, phenomena. As before, I argue that Dretske’s theory
can be used as a telling example.
As we shall see shortly, Dretske’s appeal to a selection-dependent account of function, and of functional indication, reflects a deep theoretical commitment to the idea that
an adequate naturalistic explanation of teleological phenomena must conform to a mechanistic outlook of reality.
A consistent adoption of the mechanistic image of reality,
however, leaves no room for genuine self-organization
and, as a result, no room for explaining functions in general, and representational functions in particular, as system-intrinsic phenomena. Indeed, the commitment to the
idea that the only naturalistically acceptable kinds of explanations are explanations that refer to mechanistic modes of
production and becoming (or, at the very least, that presuppose strictly mechanical processes at the relevant level
of ‘‘implementation,’’ or ‘‘realization’’) has the inevitable
effect that, in the final account, all teleological phenomena
are rendered illusory. Attempts to redeem our teleological intuitions by making telos conform to the mechanistic
framework are plentiful, and here again, as we shall see, Dretske is a loyal representative; but the rift between telos and
mechanism is such that these attempts manage to salvage
no more than a faint apparition of genuine function and
purpose. I therefore propose that, by taking seriously the
idea that function and representation ought to be explained
in predominantly self-organizational terms, we are obliged
to reconsider our all too sweeping commitment to the
mechanistic view of the world and to take a fresh look at
the role of telos in nature.
The theory presented in chapter four of Explaining
Behavior constitutes a deliberate attempt to provide a naturalistic foundation for a viable account of mental content.
The hub of the theory is the idea that the key towards a
successful naturalization lies in identifying mental representation as a kind of biological function, which function
is in turn explained by reference to selective history. Theories of content that hinge on this idea are often referred to
as ‘teleological’ (e.g., Papineau, 1991), but a closer examination reveals that, for reasons that are far from trivial,
the euphemism ‘teleonomic’ is a more appropriate choice.
The term ‘teleonomy’ was proposed by Pittendrigh
(1958, p. 394) as a substitute for the traditional, more
familiar, term ‘teleology.’ Teleology, as commonly conceived, is the study of ends or final causes – the explanation
of phenomena by reference to goals, or purposes. As such,
it carries with it connotations of the Aristotelian worldview
which was repudiated with the advent of modern science.
On the mechanistic worldview extracted from classical
dynamics there is no place for final causes; and purposive,
or seemingly purposive, behavior must ultimately be
reduced to efficient, mechanistic, causation. It is often
maintained that one of Darwin’s remarkable achievements
was that his theory of the evolution of species by way of
natural selection made such a reduction feasible. Darwin’s
theory, the idea goes, provided the means for explaining
the purposive behavior, and design-like organization,
found in nature in terms of the mechanisms governing
mutation, variation, and selection. The seeming teleology
of biological phenomena could now be explained in terms
compatible with the mechanistic modes of explanation
characteristic of the physical sciences.18 The upshot of such
a reductive explanation is that the appearance of purposefulness in nature is exactly that – an appearance, the result
of nature’s laborious and opportunistic blind tinkering.
The term ‘teleonomy’ – especially as adapted by Monod
(1971) and Mayr (1992) – was meant to cover precisely this
type of explanation, namely, to account for apparently purposive structures, functions, and behaviors as evolutionary
adaptations, which could, in the final analysis, be analyzed
into their ultimate mechanistic components. In the words
of Richard Dawkins, ‘‘in effect, teleonomy is teleology
made respectable by Darwin’’ (1982, p. 294).
It is clear that Dretske’s theory is teleonomic in spirit, if
not in its letter. As mentioned throughout the paper, the
underlying working assumption of Dretske’s account is
that what confers a functional status on indicatory states
is the fact that they were selected for their indicatory properties. Since such an account essentially reduces intentionality to indicatory function, the implication is that the
apparent purposefulness of intentional states is simply the
result of nature’s cunning blind tinkering – the same logic
underlying Pittendrigh’s introduction of ‘teleonomy’ as a
substitute for the debunked term ‘teleology.’
The epistemic incoherence of Dretske’s teleonomic theory of content, and especially the fact that it presupposes
an untenable explanation of the emergence of system-intrinsic functions (representational or otherwise), gives
grounds for suspecting that the problem might be symptomatic of a more general fault, namely, that it might have
to do with the idea that teleology could be reduced to
teleonomy.
In more than one place, Dretske compares the task of
explaining purposeful behavior and design-like organization in nature to that of explaining a work of engineering
(1988, pp. 96–97; 1994). The comparison is illuminating.
To begin, note that there is an obvious disanalogy
between engineered, or otherwise manufactured, artifacts
and naturally constructed biological systems: all naturalists believe that only the former are the products of intelligent design. Nevertheless, advocates of teleonomy insist
18. Ironically, it is in contemporary physics itself that the mechanistic
paradigm ultimately breaks down, a fact that seems to have escaped the
notice of those who espouse its application to biological, and intentional,
phenomena.
also on the existence of a manifest analogy: both are made
to appear purposeful, both are structured as if they possess a telos. The word ‘made’ is revealing; it implies that
the functional organization of the system is dictated by an
external agency. Such an external agency might be intelligent, but it need not be so. So long as the end product is
secured it matters not whether the designing agency is
intelligent or blind, natural or unnatural, final or efficient.
Now, clearly an artifact is a system whose organization is
shaped from without; but the interesting point is that so is
the case with a mechanistically tinkered-together contraption. Indeed, one of the defining characteristics of a mechanistic explanation is that it accounts for the behavior of
the system under scrutiny completely in terms of external
agencies – the impact of other bodies, the operation of
forces, etc. (cf. Bohm, 1957; Prigogine & Stengers, 1984;
Rosen, 1991; Ulanowicz, 2000). The bottom line is that
a mechanical device – whether naturally formed or artificially contrived – does not partake in the making of its
own organization.
It transpires, then, that the commonality between intelligent design and teleonomy is that both of them presuppose external formation while excluding self-organization.
Proponents of mainstream reductionism often assume that
the only alternatives to mechanism are vitalism or, worse
still, supernatural intelligent design. Yet the dilemma is
a false one, for it ignores the possibility of a self-organized purposive dynamics. This is not merely a logical quibble. Recent decades have seen rapid advances in
the study of complex, dynamically non-linear, systems of
various levels of manifested complexity, and a remarkable
increase in our understanding of the self-organized aspects
characterizing their emergence and behavior. Such developments leave room for more than a shred of optimism
regarding the prospects of accounting for function and purpose in primarily self-organizational terms.
One of the fascinating features of this newly emerging
theoretical approach to the study of function and purpose
in nature is that it offers an escape from the need to choose
between the Scylla of intelligent design and the Charybdis
of a mechanistic ‘‘natural design.’’ While the intelligent
design solution has been amply criticized for implying a
supernatural interference in the order of things, it has often
been overlooked that a mechanistic solution implies an
almost equally unsettling conclusion. For, as mentioned
above, on a strictly mechanistic explanatory framework
biologically evolved function and purpose are, in the final
analysis, mere appearances – they are, as it were, mere as
if phenomena. But if function and purpose could be
explained by reference to the process dynamics characteristic of open, self-organized, systems, then there might be
room for a less suspicious approach towards the prospect
of explaining genuinely teleological phenomena as an integral part of nature. Such integration would amount to the
reestablishment of telos within the natural order of things,
in a way that implies neither supernatural manipulation,
nor eliminative reductionism.
Needless to say, a more elaborate exploration of the
bearing of these recent developments in complex systems
research on the potential rehabilitation of teleology
deserves a separate treatment.
Acknowledgements
I thank Amir Horowitz and the audience at the 2005 annual colloquium of the Israeli Philosophical Association in
Haifa, where an early draft of this paper was presented.
References
Akins, K. (1996). Of sensory systems and the ‘‘Aboutness’’ of mental
states. The Journal of Philosophy, 93(7), 337–372.
Baker, L. R. (1991). Dretske on the explanatory role of belief. Philosophical Studies, 63, 99–111.
Bateson, G. (1979). Mind and nature: A necessary unity. NY: E.P. Dutton.
Bickhard, M. H. (1993). Representational content in humans and
machines. Journal of Experimental and Theoretical Artificial Intelligence, 5, 285–333.
Bickhard, M. H. (2000a). Autonomy, function and representation.
Communication and Cognition – Artificial Intelligence, 17(3–4),
111–131. Special issue on: The contribution of artificial life and
the sciences of complexity to the understanding of autonomous
systems.
Bickhard, M. H. (2000b). Information and representation in autonomous
agents. Journal of Cognitive Systems Research, 1(2), 65–75.
Bickhard, M. H. (2000c). Motivation and emotion: An interactive process
model. In R. D. Ellis & N. Newton (Eds.), The caldron of consciousness:
Motivation, affect and self-organization. Philadelphia: John Benjamins.
Bickhard, M. H. (2003). Process and emergence: normative function and
representation. In J. Seibt (Ed.), Process theories: crossdisciplinary
studies in dynamic categories. Dordrecht: Kluwer Academic.
Bigelow, J., & Pargetter, R. (1987). Functions. Journal of Philosophy, 84,
181–196.
Block, N. (1990). Can the mind change the world? In G. Boolos (Ed.),
Meaning and method: Essays in honor of Hilary Putnam (pp. 137–170).
Cambridge: Cambridge University Press.
Bohm, D. (1957). Causality and chance in modern physics. New York:
Harper & Brothers.
Bunge, M. A., & Mahner, M. (1997). Foundations of biophilosophy. Berlin:
Springer.
Christensen, W. D., & Bickhard, M. H. (2002a). The process dynamics of
normative function. Monist, 85(1), 3–28.
Christensen, W. D., & Bickhard, M. H. (2002b). Function as design versus
function as usefulness. Unpublished manuscript.
Christensen, W. D., & Hooker, C. A. (2000). An interactivist-constructivist
approach to intelligence: Self-directed anticipative learning.
Philosophical Psychology, 13(1), 5–45.
Christensen, W. D., & Hooker, C. A. (2001). Self-directed agents. In J.
McIntosh (Ed.), Naturalism, evolution and intentionality. Canadian
Journal of Philosophy, Special Supplementary Volume.
Collier, J. D., & Hooker, C. A. (1999). Complexly organized dynamical
systems. Open Systems and Information Dynamics, 6, 241–302.
Damasio, A. R. (1994). Descartes’ error: Emotion, reason and the human
brain. NY: Grosset-Putnam.
Damasio, A. R. (1999). The feeling of what happens: Body and emotion in
the making of consciousness. NY: Harcourt Brace.
Dawkins, R. (1976). The selfish gene. New York: Oxford University Press.
Dawkins, R. (1982). The extended phenotype: The gene as the unit of
selection. Oxford: Freeman.
Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT.
Dennett, D. C. (1996). Kinds of Minds. NY: Basic Books.
Dretske, F. (1981). Knowledge and the flow of information. Oxford:
Blackwell.
Dretske, F. (1986). Misrepresentation. Reprinted in A. I. Goldman (Ed.)
(1993), Readings in philosophy and cognitive science. Cambridge, MA:
MIT/Bradford.
Dretske, F. (1988). Explaining behavior: Reasons in a world of causes.
Cambridge, MA: MIT/Bradford.
Dretske, F. (1994). A recipe for thought. Originally published as ‘‘If You
Can’t Make One, You Don’t Know How It Works.’’ In: P. French, T.
Uehling, & H. Wettstein (Eds.), Midwest studies in philosophy: vol. 19.
Reprinted in D. J. Chalmers (2002) (pp. 468–482).
Dukas, R. (1998). Evolutionary ecology of learning. In R. Dukas (Ed.),
Cognitive ecology: The evolutionary ecology of information processing
and decision making. Chicago: University of Chicago Press.
Edelman, G. M., & Tononi, G. (2000). Consciousness: How matter
becomes imagination. London: Penguin Books.
Eigen, M. (1971). Molecular self-organization and the early stages of
evolution. Quarterly Reviews of Biophysics, 4(4), 149.
Faw, B. (2000). Consciousness, motivation and emotion: Biopsychological
reflections. In R. D. Ellis & N. Newton (Eds.), The caldron of consciousness:
Motivation, affect and self-organization. Philadelphia: John Benjamins.
Fodor, J. A. (1984). Semantics, Wisconsin style. Synthese, 59, 231–250.
Fodor, J. A. (1987). Psychosemantics. Cambridge, MA: MIT.
Fodor, J. A. (1990). A theory of content and other essays. Cambridge, MA:
MIT.
Gibson, J. J. (1979). The ecological approach to visual perception. Hillsdale,
NJ: LEA.
Gibson, E. J. (1994). Has psychology a future? Psychological Science, 5,
69–76.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42,
335–346.
Haugeland, J. (1981). Semantic engines: An introduction to mind design.
In J. Haugeland (Ed.), Mind design. Cambridge, MA: MIT.
Hooker, C. A. (1995). Reason, regulation and realism: Toward a naturalistic regulatory system theory of reason. Albany, NY: State University
of New York Press.
Kauffman, S. A. (1993). Origins of order: Self-organization and selection in
evolution. New York: Oxford University Press.
Kauffman, S. A. (1995). At home in the universe: The search for the laws of
self-organization and complexity. NY: Oxford University Press.
Kauffman, S. A. (2000). Investigations. NY: Oxford University Press.
Kim, J. (1991). Dretske on how reasons explain behavior. In B.
McLaughlin (Ed.), Dretske and his critics (pp. 52–72). Cambridge:
Basil Blackwell.
Kitcher, P. (1992). The naturalists return. Philosophical Review, 101,
53–114.
Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied
mind and its challenge to western thought. NY: Basic Books.
Margulis, L., & Sagan, D. (1986). Microcosmos. New York: Summit.
Maturana, H., & Varela, F. (1980). Autopoiesis and cognition. Dordrecht:
Reidel.
Mayr, E. (1992). One long argument: Charles Darwin and the genesis of
modern evolutionary thought. London: Allen Lane.
McIntosh, J. S. (2001). Function, malfunction and intentional explanation. Unpublished manuscript.
Millikan, R. G. (1984). Language, thought and other biological categories.
Cambridge, MA: MIT/Bradford.
Millikan, R. G. (1989). Biosemantics. Journal of Philosophy, 86(6),
281–297. Reprinted in R. G. Millikan (1993).
Monod, J. (1971). Chance and necessity: An essay on the natural
philosophy of modern biology. Trans. from French: A. Wainhouse.
New York: Alfred A. Knopf.
Montague, P. R., Dayan, P., Person, C., & Sejnowski, T. J. (1995). Bee
foraging in uncertain environments using predictive Hebbian learning.
Nature, 377, 725–728.
Mook, D. G. (1996). Motivation: The organization of action (2nd ed.). New
York: W.W. Norton.
Newton, N. (2000). Conscious emotion in a dynamic system: How I can
know how I feel. In R. D. Ellis & N. Newton (Eds.), The Caldron of
consciousness: Motivation, affect and self-organization. Philadelphia:
John Benjamins.
Papineau, D. (1991). Teleology and mental states. Aristotelian Society
Supplementary, LXV, 33–54.
Piaget, J. (1954). The construction of reality in the child. NY: Basic Books.
Piaget, J. (1970). Genetic epistemology. NY: Columbia.
Pinker, S. (1997). How the mind works. New York: Norton.
Pittendrigh, C. S. (1958). Adaptation, natural selection and behavior. In
A. Roe & G. Simpson (Eds.), Behavior and evolution. New Haven: Yale
University Press.
Prigogine, I., & Stengers, I. (1984). Order out of chaos. New York:
Bantam.
Rosen, R. (1991). Life itself: A comprehensive inquiry into the nature, origin
and fabrication of life. NY: Columbia University.
Saidel, E. (2001). Teleosemantics and the epiphenomenality of content. In
J. McIntosh (Ed.), Naturalism, evolution and intentionality. Canadian
Journal of Philosophy, Special Supplementary Volume.
Searle, J. R. (1992). The rediscovery of the mind. Cambridge, MA: MIT/
Bradford.
Shani, I. (2005). Computation and intentionality: A recipe for epistemic
impasse. Minds and Machines, 15, 207–228.
Shani, I. (in press). Narcissistic sensations and intentional directedness:
How second-order cybernetics helps dissolving the tension between the
egocentric character of sensory information and the (seemingly) worldcentered character of cognitive representations. Cybernetics and
Human Knowing.
Smithers, T. (1995). Are autonomous agents information processing
systems? In L. Steels & R. A. Brooks (Eds.), The artificial life route to
‘Artificial Intelligence’: Building situated embodied agents. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Sober, E. (1984). The nature of selection. Cambridge, MA: MIT/Bradford.
Stampe, D. W. (1990). Desires as reasons – Discussion notes on Fred
Dretske’s ‘Explaining Behavior: Reasons in a World of Causes’.
Philosophy and Phenomenological Research, 50, 787–793.
Sterelny, K. (1990). The representational theory of mind: An introduction.
Oxford: Blackwell.
Stotz, K., & Griffiths, P. E. (2002). Dancing in the dark: Evolutionary
psychology and the problem of design. In S. Scher & M. Rauscher
(Eds.), Evolutionary psychology: Alternative approaches. Dordrecht:
Kluwer.
Ulanowicz, R. E. (1986). Growth and development: Ecosystems phenomenology. NY: Springer-Verlag.
Ulanowicz, R. E. (2000). Life after Newton: An ecological metaphysics.
In: D. R. Keller & F. B. Golley (Eds.), The philosophy of ecology.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind:
Cognitive science and human experience. Cambridge, MA: MIT.