Philosophy of Mind’s New Lease on Life: Autopoietic Enactivism meets Teleosemiotics

Professor Daniel D. Hutto
Professor of Philosophical Psychology
School of Humanities
University of Hertfordshire
de Havilland Campus
Hatfield, Hertfordshire AL10 9AB
Telephone: +44 (0)1707 285655
Email: d.d.hutto@herts.ac.uk

“What he saw was a shocking surprise. Every Who down in Who-ville, the tall and the small, Was singing! Without any presents at all! He HADN’T stopped Christmas from coming! IT CAME! Somehow or other, it came just the same! And the Grinch, with his grinch-feet ice-cold in the snow Stood puzzling and puzzling: ‘How could it be so?’ ‘It came without ribbons! It came without tags! It came without packages, boxes or bags!’ And he puzzled three hours, till his puzzler was sore. Then the Grinch thought of something he hadn’t before! ‘Maybe Christmas,’ he thought, ‘doesn’t come from a store. ‘Maybe Christmas … perhaps … means a little bit more!’”

- Dr. Seuss, How the Grinch Stole Christmas, New York: Random House, 1957.

1. Introduction

Mind in Life is a big book – in more ways than one, and in more good ways than one. In promoting the idea that there is a deep continuity between life and mind, Thompson defends a boldly anti-representationalist version of enactivism: it challenges, root and branch, the ‘passive-cognitivist’ view of the mind-brain. It aims for nothing short of the eradication of misleading Cartesian and Kantian (of the Critique of Pure Reason) dualisms that, despite challenges, continue to dominate analytic philosophy of mind and some branches of cognitive science. Thompson sees adoption of the enactive approach as a way to put aside the mind-body problem, once and for all, and to refocus our investigations on the more fertile, phenomenologically-inspired body-body problem. We are meant to give up on the traditional input-output processing model of the mind, one that continues to pay homage, if only tacitly, to the idea of what is sensorially or informationally ‘given’, of content that is received or informs, on the one hand, and the idea that such gifts are intellectually categorized, conceptualized, and schematized – in a downstream and serial manner – by higher forms of cognitive spontaneity, on the other. By reconceptualizing mentality in essentially active, dynamic and loopy terms, Thompson undermines the traditional boundaries and dichotomies. The boundaries thought to hold between mind and body and between mind and world are revealed to be, ultimately, of only heuristic value, having no genuine metaphysical import. We are asked to trade in the modern, Cartesian, picture of the mind as a sort of mechanism for a more Aristotelian vision of mentality, which emphasizes its biological character and the special features it shares with all living systems.1 All of this is in the service of putting us in a better position to understand aright the place of consciousness in nature, and thus to deal with the infamous explanatory gap.

In what follows, for reasons of space, I say nothing more about the book’s success in meeting its principal aim of enlarging and enriching “the philosophical and scientific resources we have for addressing the gap” (p. x). I think it surely achieves this and I am wholly sympathetic to Thompson’s ambitions on this front and, by and large, the general sort of enactivism he promotes. This is not to say that this book closes the gap (it doesn’t seek to) or that it is the last word on consciousness (it doesn’t promise to be).
Still, it is a genuine tour de force that adds creatively and convincingly to our ways of understanding basic forms of mentality. I will take this much for granted here and work instead to ensure that Thompson’s work gets the kind of reception it should have. For the truth is that many working in analytic philosophy of mind and cognitive science continue to be utterly mystified about what enactive approaches have to offer. To remedy this, this commentary will seek to clarify certain core features of Thompson’s proposal about the enactive nature of basic mentality, as best it can, and to bring his ideas into direct conversation with accounts of basic cognition of the sort favoured by analytical philosophers of mind and more traditional cognitive scientists – i.e. those who tend to be either suspicious or critical of enactive/embodied approaches (to the extent that they confess to understanding them at all).

My proposed way of opening up this sort of dialogue is to concentrate on the close similarities between Thompson’s biologically-based proposal about non-representational forms of basic cognition and what I take to be a reasonable modification to the ambitions of teleosemantic theories of content. Insofar as today’s theories of mental representation are less concerned to understand content in properly semantic terms, they are moving ever closer to the sorts of account proposed by enactivists of the Thompsonian stripe – close enough to have meaningful debates about the nature of basic mentality. It is against this backdrop that I put a spotlight on the true promise and value of enactivism, providing some compelling reasons for wanting to go Thompson’s way.

This will be achieved, in large part, by showing that there is no wholly agreed way in which the family of fundamental theoretical notions – content, representation and, especially, information – are understood by opponents of enactivism. Despite their centrality and critical importance to traditional cognitivism, there are no agreed and well-defined accounts of the exact nature of contentful, representational or informational properties. As such, it can be difficult to determine the precise boundaries between representationalist and non-representationalist approaches. Moreover, when we get down to brass tacks, it looks as if the most scientifically respectable attempts to make sense of these ideas lead straight into the arms of the sort of enactivism that Thompson proposes (or something near enough). In short, once we make necessary clarifying adjustments to the best proposals for understanding the kind of informationally-sensitive responding that constitutes basic mentality, Thompson-style enactivism may well turn out to be the most promising naturalist game in town, if not the only one.

2. The Information-Processing Challenge

The last claim of the previous section is far from obviously true. Originally advanced by Varela, Thompson and Rosch (1991), the big idea behind enactivism is to treat consciousness and cognition as emergent phenomena constituted by, and thus to be understood in terms of, specifiable patterns of organismic activity. Enactivism of this stripe denies that the most basic forms of genuinely mental activity necessarily involve, or are to be explained by, the manipulation of contentful representations.
Instead, enactivists hold that mentality emerges from – is ‘brought forth by’ – the self-creating (autopoietic) activities of organisms, and that the latter are constituted by essentially embodied, diachronic environmental interactions which dynamically open up new possibilities for self-creating activity. Drawing on insights from phenomenology and dynamical systems theory, enactivists invert the familiar explanatory strategies of orthodox cognitive science by supposing that “Abilities are prior to theories ... Competence is prior to content … [and that] knowing how is the paradigm cognitive state and it is prior to knowing that” (Fodor 2008, p. 10). The framework has proved attractive to many. A great variety of enactivist proposals have now been advanced about many topics, including: consciousness, perception, intentionality, attention, memory, emotion, intersubjective social cognition and self-consciousness. Nevertheless, this take on basic mentality is viewed by many as at best something that might supplement existing theories of cognition, and at worst as nothing more than a confused and obscuring gloss which adds nothing positive to the already well-established cognitivist accounts.

Varela-inspired versions of enactivism face a standard objection, given that their “main explanatory tool … is the theory of self-organizing and autonomous dynamic systems” (Thompson 2007, p. 26). For even among those who are prepared to accept that the enactive approach holds promise for understanding how organisms develop and interact over time, a standard verdict is that it lacks the independent explanatory resources to provide a genuinely alternative understanding of the basis of mentality (Ramsey 2007, Clark 2008). Clark (2008) describes the emphasis placed by theories like Thompson’s on the dynamics of the total state of systems as both a boon and a burden. On the positive side, it allows the theorist to “accurately capture the way two or more systems engage in a continuous real-time, and effectively instantaneous dance of mutual codetermining interaction” (p. 25). On the downside, it is problematic “insofar as it threatens to obscure the specifically intelligence-based route to evolutionary success” (p. 25). And it does this to the extent that it fails to recognize “the brain as the principal (though not the only) seat of information-processing activity” (p. 25, emphasis original). Accordingly, we need not deny the importance of timing, action and coupled unfolding to cognition so long as we do not forget that these play support roles in intelligent responses “grounded in processes of information extraction, transformation and use” (Clark 2008, p. 19).

In sum, the complaint is that any version of enactivism that relies entirely on dynamical systems theory under-appreciates the fundamental role played by information-processing mechanisms in making mental activity so much as possible. Call this the Information-Processing Challenge. The Information-Processing Challenge would present a formidable problem for enactivists if it could be safely taken for granted that the standard computational/information-processing explanatory strategies of traditional cognitivism are in perfectly good order under standard renderings. But enactivists question just this. This is precisely what Thompson (2007) has in his sights when he doubts the truth of the received view in cognitive science.
That view, he maintains, is committed to the idea that “in order to explain cognitive abilities we need to appeal to information-bearing states inside the system. Such states, by virtue of the semantic information they carry about the world, qualify as representations” (p. 52, emphasis added). This is where the trouble in assessing this debate starts. For there is, in fact, less consensus about exactly what orthodox cognitive science is committed to than this statement suggests – in particular, it is not clear that the kind of information or content that matters must be semantic. Still, the conservative wing of cognitive science certainly does insist that intelligence depends on information-processing. Textbooks in the field tell us that the core assumption of traditional cognitive science is “that there are sub-personal contents and sub-personal operations that are truly cognitive in the sense that these operations can be properly explained only in terms of these contents” (Seager 2000, p. 27). As the Clark quotation reminds us, information is thought to be the basic currency of cognition; it is received, stored, manipulated, and transformed by intelligent systems. It is the fuel for cognitive engines. So conceived, “information is a prime commodity, and when it is used in biological theorizing it is granted a kind of atomistic autonomy as it moves from place to place, is gathered, stored, imprinted, and translated” (Oyama 2000, p. 1).

This standard metaphor suggests that cognitive operations really involve the manipulation or processing of information or content of some kind or other – but despite the foundational importance of this claim it is incredibly difficult to pin down, with any firm grip, the theoretical commitments of those who propound such stories. In particular, it is hard to get a clear sense of whether they are truly committed to this sort of picture and, if so, exactly what it is that is supposed to be processed by these intelligent systems and how this is done. I suggest that the more we work through the possible readings and home in on a credible account that is naturalistically acceptable, the closer we come to accepting the kind of enactivist proposal Thompson advocates. What then might possibly fuel cognition?

3. What might Content be?

The sentences of natural language, as expressed in linguistically mediated beliefs and utterances, are clearly the paradigms of contentful representations, if anything is. It is also quite clear that when philosophers of mind first developed naturalized theories of content their aim was to explain how mental representations could have semantic properties of just the same sort possessed by linguistic representations. A major motivation for this project is that success in this endeavour would make it possible to explain how language could gain its semantic properties from underived mental contents, those which could be explained by appeal to non-semantic properties – such as causation or biological function. Assuming that an exact parallel exists between the content of thought and language ensures that whatever can be thought or judged can be said, in principle. Saying something really only requires finding a public means of expressing oneself. With this in mind, psychosemantic theories of content, including teleosemantics, seek to explain the semantic properties of mental states where these are understood as having the very same sorts of semantic properties possessed by natural language expressions.
Proponents of Fodor’s ‘language of thought’ hypothesis wear this commitment on their sleeves. They are interested in mental representations with semantic content of essentially the same kind as that which natural language sentences possess (this follows given that whatever content the latter have is wholly derived from the former). Fodor (2008) tells us that “the content of a mental representation is its referent” (p. 216). If such content is to express truths then the referents in question would need to be states of affairs (or the equivalent) and the referents would have to be picked out intensionally, i.e. under some description or mode of presentation, in order to satisfy the platitudinous disquotational rule (“Snow is white” is true in L iff snow is white). It is quite clear that others working in this area are also primarily interested in mental representations with content of this sort. Millikan (2005), for example, thinks that representational content is essentially truth-conditional. Hence, “intentionality has to do with truth conditions” (p. 93). For her this requirement goes all the way down, thus “the intentionality of language is exactly parallel to the intentionality of bee dances” (p. 98). The same goes for Papineau who offers a “naturalistically acceptable explanation of representation: namely that the biological purpose of beliefs was to occur in the presence of certain states of affairs, which states of affairs counted as their truth conditions” (1987, p. xvi). And, again, McGinn tells us that the aim of teleosemantic theories of content was to show how “teleology turns into truth conditions” (McGinn 1989, p. 148).

All of this connects with what Fodor claims is a truism – that “the mind’s main concern is not acting but thinking, and that paradigmatic thinking is directed to ascertaining truths” (Fodor 2008, p. 8). It follows that if the mind’s main business is to ascertain truths, and if mental representations are the tools for conducting such business, then they must aim at truth. Thus contents must be at least truth-apt. Ascertaining truth is typically a risky affair. A formal requirement on genuinely representational contents of this sort is that they might be false. Representation always admits of the possibility of error or misrepresentation. Some hold that certain types of mental states, such as perception and memory, are factive (Hopkins, forthcoming). Such states necessarily reflect the facts. They come with a cast-iron epistemic guarantee: if a token mental state of that kind represents it as the case that p, then it is the case that p. Here’s the rule: if it should turn out that the content of the state in question does not represent the facts, then we are not dealing with a mental state of that kind. Note that factive here qualifies the mental states in question, not their contents. To be a representational content that expresses p always allows for the possibility that not-p.

Now if this were the received view in orthodox cognitive science about the kind of content possessed by mental representations, which play a role in the Information-Processing Challenge, then it would be directly at odds with the kind of enactivist approach that Thompson defends. But things are not so simple. The first thing to note is that linguistically-mediated beliefs and utterances are conceptual representations. They represent intensionally (with an ‘s’), under guises; they represent things as this-or-that.
As Fodor (2008) insists, “if a symbol represents a such and such, it must represent it as something or other” (p. 178). However, many theorists believe in the existence of non-conceptual content. This is traditionally understood as a kind of representational content which presents the world as being a certain way, despite the fact that the creature or system doing the representing lacks the concepts that would canonically express the content in question. Now it’s a nice question whether we can really make sense of the idea of a kind of truth-conditional content that is non-conceptual. We might worry that we can’t so long as we think that intensionality (with an ‘s’) and concepts necessarily go together. However, this worry can be avoided if there is a kind of content that is subject to norms other than those to do with truth and falsity.

The idea that this might be so is gaining in popularity. Thus, Crane (2009) rejects what he calls the ‘propositional attitude thesis’ about perception without surrendering the idea that perceptual states possess representational content. Instead, he claims such states have accuracy and correctness conditions that are not any kind of truth conditions. A main motivation for this is the observation that accuracy and correctness come in degrees whereas truth or falsity do not. Crane compares experiences to pictures in this respect. Like pictures, he holds, experiences can be more or less accurate, but they are not intrinsically true or false. Nor can pictures stand in logical relations. Thus, although Crane thinks they have a kind of representational content, he denies it is of the truth-conditional sort.2 They have a kind of content that is more primitive, more basic than that had by propositional attitudes such as beliefs.

Fodor (2008) has advanced a similar line. He observes: “pictures don’t have truth conditions. In the root case, for a symbol to be true it has to pick out an individual and property and predicate the latter of the former; but iconic representations have no way to do either. So, the camera doesn’t lie, but nor does it tell the truth” (pp. 175-176). This is his way of making room for the “possibility that some mental representation is nonconceptual” (p. 179, emphasis added). He promotes the idea that there can be representing that isn’t representing as. Thus “X represents Y insofar as X carries information about Y, where ‘carries information about …’ is read as transparent … [this allows] for representation that’s not ‘under a description’” (p. 179). The idea that being an information-carrying state suffices for being a mental representation with nonconceptual content is a radical departure from standard thinking about what is minimally required to qualify as a representational state.

As noted, many now question the idea that content is necessarily truth-conditional. We are told that “it isn’t apparent that an intentional state, event, or object about something other than a state of affairs should be evaluated in terms of truth/falsity” (Gunther 2003, p. 5). But it is generally held that “what is true of any state (event, experience, and so forth) with content, is that it is governed by semantic normativity.
For whether its content is conceptual or nonconceptual, propositional or not, an intentional state presents the world as being a certain way; and intrinsic to this presentation, to its content, is a set of (semantic) conditions under which it does this correctly, truthfully, satisfactorily, appropriately, skillfully, and so on” (Gunther 2003, pp. 5-6). On this view, “semantic normativity is the mark of intentionality” (Gunther 2003, p. 6). We can assume, along with Millikan (2005), that the normativity at play here is of the non-evaluative (e.g. possibly of the merely biological) sort. But even if symbols and concepts are not involved or crunched, and even if this isn’t any kind of intensional (with an ‘s’), truth-conditional representing, there is more going on than being in states that merely carry or contain information in the sense Fodor describes. What appears to matter to those who invoke the normative requirement is the character of the organism’s response, understood as involving some kind of norm other than truth. I will pick up this thread in a more positive vein in the next section.

Fodor’s proposal, however, is apparently much more radical than this. It allows that there can be mental representation with content but without ‘semantic normativity’ in the reduced sense just described. Simply being in a state that registers information suffices for representing (but not representing as). But there’s a tension in his calling nonconceptual states with this sort of content mental representations. For Fodor (2009) tells us that “The mark of the mental is its intensionality (with an ‘s’) that’s to say that mental states have content; they are typically about things … only what’s literally and unmetaphorically mental has content”. If this is taken to mean that intensionality (with an ‘s’) is a minimal, necessary requirement for genuine mentality then it turns out that states bearing only informational content not only lack semantic properties, they are not truly mental. This fits with Fodor’s earlier verdicts about the sorts of creatures that belong to the class of cognizers or mentalizers: “wherever precisely the line is to be drawn, and however thick it may be, it is vastly plausible that we fall on one side and the paramecium fall on the other” (Fodor 1986, p. 12). I shall pick up this thread more positively in the next section too.

Frankly, all of this makes my puzzler sore. Content, it seems, is a bit like Christmas: it can come without truth conditions, without concepts, without intensionality (with an ‘s’), without semantics, without mentality. Content, I guess, means something more. It would be nice – very nice, indeed – if there were some agreed, unequivocal, non-slippery and fully stable understanding of just what ‘content’ and ‘representation’ are, given that mainstream cognitive science apparently depends on these notions so heavily. It is perhaps too much to say that the history of the use of these terms is shrouded in equivocation, equivocation, equivocation. But it is no exaggeration to say that, despite their utterly foundational importance to orthodox cognitive science, they are extremely elastic, far from unambiguous and not yet very well understood. One thing is certain. There is no point in looking to our pre-scientific folk intuitions to decide the matter. As Jackson and Pettit (1993) point out, “‘Content’ is a recently prominent term of art and may well mean different things to different practitioners of the art” (p. 269).
Nonetheless, Crane (2009) assures us that his usage of the term, which deviates from the propositional attitude rendering, corresponds to the way that many professional philosophers use it. To back that up he cites the pedigree of this usage, reminding us that the notion of content belongs with the theory of intentionality that Brentano offered us. But, be this as it may, it does not by itself provide us with a robust and perspicuously clear understanding of content or its properties, nor even any ready way to demarcate it from other phenomena. Indeed, as Crane (2008) has recently admitted in response to a challenge by Nes (2008), making sense of intentionality itself seems to require calling on the notion of representation to do foundational work. He says: “It is the notion of representation, I think, that will distinguish intentionality from … other phenomena” (p. 216). If so then we must call on our notion of representations to do important work in demarcating intentionality. But presumably we need a notion of content in order to distinguish representational phenomena from non-representational phenomena. And, as we just noted, a notion of intentionality is needed to help us understand the notion of content. So it seems that trying to make sense of these notions in this way is to move in a rather tight, and seemingly incestuous, circle.

And we must be on our guard here for another reason. Brentano’s understanding of intentionality is complex; it embeds more than one notion – he speaks not only of being directed at objects but also of such objects having intentional inexistence (see Menary 2009 for an excellent exegesis). There are different ways of understanding these ideas in today’s context. Yet those who look to Brentano for a lead on the nature of intentionality typically end up making appeal to a notion of representation or content that is modeled directly on the kind of semantic content (whether truth conditional or referential) associated with linguistically mediated states of mind such as propositional attitudes. It is by this route that many of today’s philosophers come to endorse what I will call the thesis of semantic intentionality. Flanagan (1991) supplies us with a neat reminder of how the standard thinking goes on this issue in his discussion of the central tenets of James’ philosophy of mind.

The concept of intentionality is a medieval notion with philosophical roots in Aristotle and etymological roots in the Latin verb intendo, meaning ‘to aim at’ or ‘point toward’. The concept of intentionality was resurrected by and clarified by ... Franz Brentano … Brentano distinguished between mental acts and mental contents. My belief that today is Monday has two components. There is my act of believing and there is the content of my belief, namely, that today is Monday … Beliefs are not alone in having meaningful intentional content … Language wears this fact on its sleeve. We say that people desire that [ ---- ], hope that [ ---- ], expect that [ ---- ], perceive that [ ---- ], and so on, where whatever fills the blank is the intentional content of the mental act.
Intentionality refers to the widespread fact that mental acts have meaningful content … The fact that we are capable of having beliefs, desires, or opinions about non-existing things secures the thesis that the contents of mental states are mental representations, not the things themselves – since in the case of unicorns, ghosts, devils and our plans for the future there simply are not real things to be the contents of our mental states! On this interpretation, James is an advocate of what Jerry Fodor calls the representational theory of mind (p. 28, second and third emphases added).

Going this way only takes us back to square one. Just as the notion of content is a term of art, so too is representation. As Millikan (1993) reminds us, “the name ‘representation’ does not come from scripture” (p. 103). The short exercise of this section is meant to remind us that, after all, as Matthen (2006) helpfully notes, representation is “a new and controversial concept … The natural home of this concept is in the study of communication between agents who possess intentions and goals. It is not immediately clear how it can be extended to states issued by automatic sub-personal systems” (p. 147). Thus relying on our intuitive grasp of what these notions mean renders us none the wiser about the core nature of intentional, representational or contentful properties.

4. A Fresh Start

It seems we need a different approach to these issues. I suggest putting aside our antecedent philosophical commitments and intuitions about the nature of ‘representation’ and ‘content’ and working forward from an agreed and non-controversial understanding of the nature of information, as the notion called upon in a variety of sciences. This should be the lowest common denominator in driving our thinking about what is needed to understand basic mentality; any advance beyond it requires justification. The question is – does basic cognition or mentality require information processing?

Before we get started – just what is the most basic kind of cognition or mentality? One suggestion is that “cognitive interactions are those in which sensory responses guide action and actions have consequences for subsequent sensory stimulation, subject to the constraint that the system maintain its viability. ‘Sensory response’ and ‘action’ are taken broadly to include, for example, a bacterium’s ability to sense the concentration of sucrose in its immediate environment and to move itself accordingly” (Thompson 2007, p. 125). There seems no good reason to rule this out as an instance of cognition or mentality, albeit basic, other than attachment to the idea that true cognition or mentality must involve symbols and concepts. But, as we saw in the previous section, even champions of a more restrictive understanding of cognition have apparently begun to waver on this point. If so, then as Thompson (2007) argues, the thesis of deep continuity between life and mind is secure, so long as we adopt a liberal understanding of autopoiesis as “internal self-production sufficient for constructive and interactive processes in relation to the environment” (p. 127). Any living creature capable of this will need to be informationally sensitive (see Hutto 1999, chs. 2 and 3; 2008, ch. 3). But it doesn’t follow that they need to process information – if this means that information is some sort of commodity that is in some way contentful, as such talk appears to suggest. Godfrey-Smith (2007) distinguishes two senses of information, a weaker and a stronger one.
The weak notion is the familiar one that derives from the work of Shannon and which has played a pivotal role in the development of communication technology. It assumes that informational relations are nothing more than covariance relations; they exist wherever correlations between facts, events or properties obtain. Let us call this the information-as-covariance notion. As Godfrey-Smith notes, this conception of information is ‘unproblematic’ and does not require much philosophical attention.3 It has a richer cousin that is much more controversial. It is referred to as ‘semantic’ or ‘intentional’ information, the kind of contentful information – the message – that some communications convey. That notion significantly adds to the basic Shannon notion. Let us call it the information-as-content notion.

There is a real danger of conflating these two notions. Jacob (1997) tells us that “the relevant notion of information at stake in informational semantics is the notion involved in many areas of scientific investigation as when it is said that a footprint or a fingerprint carries information about the individual whose footprint or fingerprint it is. In this sense, it may also be said that a fossil carries information about a past organism. The number of tree rings in a tree trunk carries information about the age of the tree … In all of these cases, it is not unreasonable to assume that the informational relation holds between an indicator and what it indicates (or a source) independently of the presence of an agent with propositional attitudes” (Jacob 1997, p. 45, emphasis added). To stress this last point, he adds that “the information or indication relation is going to be a relation between states or facts … It is an ‘objective’ relation” (pp. 49-50, emphasis added).

There is no doubt that information-as-covariance is an objective relation. And, as the quotation suggests, it has wide currency in a number of sciences. But to talk of informational semantics and to speak of indication when describing it, as Dretske (1981, 1988) does, courts confusion with its richer, sister notion of information-as-content. As Cummins et al. (2006) point out, “‘indication’ is just a semantic-sounding word for detection” (p. 200). This being so, equating information and indication relations is doubly problematic – not only is there a risk of smuggling in a notion of semantic content where it does not belong, there is also the fact that the idea of detection undermines the idea that the information in question is a purely objective relation; it makes no sense to talk of detection in the absence of an agent (or equivalent) that does the detecting.

This highlights something important. It suggests that if we stick only with the weak notion of information then there are no grounds for thinking that the world, standing apart from agentive systems, contains anything that could be called informational content. To see what’s at stake, it helps to consider Cummins et al.’s (2006) attempt to expose a special problem for teleosemantics, by noting that we really have no choice but to believe in the existence of unexploited content, content of a sort that must exist independently and logically prior to the capacity of systems to make use of it. Apparently, this is a problem for teleosemantic theories because they insist that natural signs or signals lack representational, semantic content until a consuming response is selected for, one that governs a system’s reactions.
The worry raised is that cognitive systems are surely able to come to use previously unexploited content – either by individual learning or by evolution in the species. But this apparently presents teleosemantics with a conundrum, for it “requires content to pre-date selection and teleosemantics requires selection to pre-date content” (p. 199). Although interesting in its own right, it is not my purpose here to review the argument advanced by Cummins and co. against teleosemantics. Rather, I want to highlight an observation they make at a crucial juncture when considering possible replies. For it is important to their argument that the content in question is that allegedly contained in representations used by cognitive systems. Thus they write: “A very natural response is to say that unexploited content isn’t really content. After all, there is a lot of unexploited information in the environment, information that cognitive systems must acquire abilities to exploit. We do not call that information content” (p. 204, emphases original).

But in cases of basic mentality – that of paramecia – this is exactly the kind of situation we have to deal with. Agents are interacting in reliable, informationally sensitive but non-contentful ways with their environments. Why non-contentful? Well, as we have just seen, if we stick to the information-as-covariance notion there is no content out there for them to interact with (let alone register, pick up, and so on). If there’s no objective content in the world, then perhaps content comes into being along with the activity of agents or consuming systems. This idea lies at the heart of teleosemantics. As Millikan (2006) says, “the content of a representation is determined, in a very important part, by the systems that interpret it” (p. 100). But, once again, the metaphors can mislead. We should not think of agents as content-consuming systems – for this suggests that there is already pre-existing content to be consumed; and we have just ruled that idea out. Perhaps then we should speak of content-creating systems instead. That’s a step in the right direction, but note – now we are getting very close to the enactivist story. After all, Thompson (2007) maintains that “Cognition is behaviour in relation to meaning and norms that the system itself enacts or brings forth on the basis of its autonomy” (p. 126, emphases added).

There is doubtless much to be learned and perhaps salvaged from teleosemantic theories – which are widely regarded as the best, if still imperfect, attempts to naturalize representational content. Perhaps, with modifications, such accounts might help to augment autopoietic enactivism. Some contemporary proponents of enactivism believe that the basic idea requires supplementation by appeal to additional notions. We are told, “It is a mistake to take the theory of autopoiesis as originally formulated as a finished theory … autopoiesis leaves many questions unanswered. In particular, several essential issues that could serve as a bridge between mind and life (like a proper grounding of teleology and agency) are given scant or null treatment in the primary literature” (Di Paolo 2009, p. 12). Even if we accept this, it is unwise and unnecessary, for the reasons gestured at above, to buy into the teleosemanticists’ semantic ambitions. Indeed, despite initial optimism, many now doubt that attempts to naturalize semantic content have any chance of success.
Godfrey-Smith (2006) provides an astute assessment: “there is a growing suspicion that we have been looking for the wrong kind of theory, in some big sense. Naturalistic treatments of semantic properties have somehow lost proper contact with the phenomena” (p. 42). Nevertheless, he also acknowledges that the driving idea behind teleosemantics – that evolved structures can have a kind of ‘specificity’ or ‘directedness’ – is essentially correct: “there is an important kind of natural involvement relation that is picked out by selection-based concepts of function. But this relation is found in many cases that do not involve representation or anything close to it” (p. 60). What should we make of this? To quote a famous Rolling Stones lyric, “You can’t always get what you want, but if you try sometimes, you just might find, you get what you need”. Teleosemantic accounts fail to provide an adequate basis for naturalizing semantic or intensional (with an ‘s’) content, but they are proceeding along the right lines. Crucially, with adjustment, they provide serviceable tools for making sense of something more modest – i.e. organismic responses involving intentionality (with a ‘t’).

What if in the place of teleosemantics we put teleosemiotics? Teleosemiotics is an order of ‘teleosemantics – hold the semantics’. Teleosemiotics borrows what is best from teleosemantics and covariance accounts of information to provide a content-free naturalistic account of the determinate intentional directedness that organisms exhibit towards aspects of their environments (Hutto 2008, ch. 3). Yet unlike teleosemantics, it does not seek to understand the most basic forms of directedness, such as registering, in semantic, contentful or representational terms. Such modes of responding are not to be understood as content-involving or even content-creating to the extent that these notions are understood in terms of reference or truth conditions.

Compare this with Thompson’s discussion of virtual milieus, vital norms and meaning as essential features of cognition, as inspired by Merleau-Ponty’s The Structure of Behaviour. As he stresses, even bacterial cells, the simplest life forms on earth (where life is understood autopoietically), have needs that are fulfilled by deriving nutrition from the environment. They achieve their ends by ingesting sucrose – an environmental feature. Following Merleau-Ponty, Thompson (2007) holds that the property of being a nutrient is a virtual property, not something found ‘objectively’ in the environment. Rather “it is enacted or brought forth by the way the organism, given its autonomy and the norms its autonomy brings about, couples with the environment” (p. 74). Thus, “sucrose has meaning and value as food but only in the milieu that ‘the system itself brings into existence’ or ‘constitutes for itself’” (p. 74). The whole point is that “Behaviour … expresses meaning-constitution rather than information processing” (p. 71). This way of talking will seem alien, or misplaced, to many analytic philosophers but Thompson is quite clear that he is concerned with norms in “the biological sense of norms” (p. 75). Moreover, the ‘meaning’ and ‘value’ that are brought forth neither constitute nor depend on semantic content. Nor do they imply the existence of representations in anything like the standard sense.
We are told, “if we wish to continue using the term representation, then we need to be aware of what sense this term can have for the enactive approach … autonomous systems do not operate on the basis of internal representations … they enact an environment” (pp. 58-59). Substantively I agree, but it will only breed confusion to use terms like ‘meaning’ and ‘representation’ to describe the cognitive antics of bacteria – hence I prefer the more austere teleosemiotic talk of informationally-sensitive responses to natural signs. But this comes to much the same thing. Can a modified teleosemantics – i.e. teleosemiotics – serve as a secure point of contact between what phenomenologically-inspired enactivists have to offer and some of the best work in the analytic tradition on theories of content? Making allowance for differences in language and connotation, I think the answer is clearly ‘Yes’.

However, there are more important twists in this tale. Thompson (2007) rejects the idea of evolution as driven by external forces, such as natural selection. This is not to say he denies the existence or importance of natural selection, only that he objects to the standard interpretations of it. So, this is something that philosophers of biology need to debate – I say no more about it here. Rather, I want to focus on another aspect of his view of ‘enactive evolution’, one that has a direct bearing on the issue at hand. It is his rejection of the idea that organisms are “systems that have atomistic traits as their proper parts” (p. 203). To accept this requires surrendering the idea that we should understand the intentionality of biologically basic cognitive systems as being a property of their individual mental states.

While this will no doubt shock some – and goes against the grain of the standard way of talking in analytic philosophy of mind – there is much to be said for it. For, on reflection, there is every reason to think that the intentionality exhibited by basic cognizers differs significantly from that of beings whose thinking or perceiving takes the form of, or involves, propositional attitudes. To think otherwise would be to imagine intentionality as a property of individual mental states – i.e. states of mind that bear special kinds of mental content. This would be to model such states of mind on isolated words or sentences in the heads of thinkers, however weakly. To be wholly free of this idea we ought to think of the intentional directedness as simultaneously focused on both virtual and actual worldly targets and as involving the goal-directed activity of the whole organism.

Consider that, in the style of David Attenborough, we say of a baby, Sheba or Rover that they are trying to do this or achieve that. Moreover, we say that they succeed or fail because of what they know, think, notice or sense. Notice – it is the activity itself, and not some sub-part of it, that we can coherently regard as being successful or not. Deciding if it is, or not, requires appeal to some set of norms that specifies the goal in question. For this we must make appeals to the creature’s evolutionary history, individual learning or the norms of an established practice, and so on. Whether a bit of goal-directed organismic activity succeeds, or not, depends on whether certain facts obtain. Well-designed organisms have many (and often quite complex) means of responding to the natural signs of environmental correspondences that are important to them.
Responding to such signs is meant to guide their behaviour with respect to the state of the world so that they succeed in their activities. And, if they are well-built and conditions are normal, their activities non-accidentally succeed often enough to fulfill their needs. All of this can be true without it being the case that some sub-part of the organismic system – e.g. an internal mental state – contentfully represents some part of the external world correctly or incorrectly by saying that it stands thus or so. Indeed, in very basic cases there is no principled basis for picking out one segment, or part, of a much larger organismic response to some external natural sign as a discrete, contentful state of mind that represents some more distal state of affairs. In normal conditions it is the totality of an organism’s response that ensures the non-accidental success of its activities. As such, it is the attitude of the whole organism engaged in such activities that exhibits intentional directedness. It is the response as a whole that targets certain aspects of the world and not some sub-part of the response. If so, it must be possible to be intentionally directed without having discrete mental states that possess any kind of mental content at all.

I call such non-contentful but world-directed attitudes intentional attitudes. They are to be contrasted with properly contentful, sententially-mediated propositional attitudes, such as truth-conditional beliefs and desires. Attitudes of the latter sort do possess semantic content and linguistic structure. Indeed, I have long held that our “ordinary concept of belief ranges over cases which, from the philosophical point of view, we should distinguish as instances of beliefs-as-propositional-attitudes and beliefs-as-intentional attitudes” (Hutto 1999, pp. 109–110). To have a content-involving thought, it is not enough for an organism to be merely intentionally directed at a situation or state of affairs, even in the sorts of complex and systematic ways intimated above. A creature could engage in many highly sophisticated activities while only having attitudes of an intentionally directed sort that are to be understood in purely non-intensional (with an ‘s’) terms.

5. Conclusion

Adequately responding to the Information-Processing Challenge requires acknowledging the special importance of the informational sensitivities of sentient and sapient systems and understanding these correctly. This requires resisting the temptation to think of information as a kind of contentful, object-like commodity, or to otherwise assume that basic mentality depends on the manipulation of content-bearing mental states.

Where does this leave us? On the positive side, this conclusion is consistent with accepting that cognitive systems exploit the relations of covariance that hold between environmental states of affairs in various ways. This idea is consistent with – and, indeed, inspired by – the idea that well-fashioned organisms are responsive to natural signs and that these can guide actions successfully (in historically normal conditions), even if such signs do not supply the creatures’ cognitive mechanisms with contentful information and even if the signs are left semantically uninterpreted. That said, successful action requires informational sensitivity and a kind of responsiveness to natural signs that introduces asymmetries (e.g.
an organism’s sensitivity to one state of affairs enables it to respond appropriately to a more distal state of affairs – in historically normal circumstances). To be sure, organismic actions do not always succeed. But the mere possibility of worldly misalignment does not imply (and need not be explained in terms of) the existence of semantic relations of truth and reference.

How does this help the prospects of enactivism? The appeal to teleosemiotics shows that the Information-Processing Challenge can be defused by enactivists without abandoning their core motivating insight. Now, it might be thought that even if contentful representations are not necessary for understanding basic mentality, surely this cannot be the whole story about cognition. True. But we can go a long way (even if not the whole way) in making sense of very complex, elaborate and sophisticated worldly engagements without assuming they are either constitutively contentful or contentfully mediated. The great bulk of living, thinking organisms act successfully by making appropriate responses to objects or states of affairs in ways mediated by their sensitivity to natural signs. But this does not involve contentfully representing those objects or states of affairs as such or even representing them non-conceptually. Basic forms of mentality depend on informational sensitivity and response, sometimes of a quite sophisticated variety, not processing informational content.

Undoubtedly, some states of mind exhibit semantic intentionality – propositional attitudes, for instance. They are properly contentful. Nevertheless, a great deal of sophisticated, world-directed cognition exhibits intentional directedness that is not contentful in the sense just discriminated. Teleosemiotics understands on-line perceptual responding as informationally sensitive, but it rejects the idea that this equates to a purely informational kind of nonconceptual representing. It denies that such responding constitutes “a way of representing X without representing it as anything” (Fodor 2008, p. 182). Radical Enactivism – of the sort that both Thompson and I promote – explicitly rejects the idea that content, whether informational or representational, is an inevitable ingredient in the process that enables basic mentality. In this, not only is this brand of enactivism wholly in line with the spirit of the original and most philosophically challenging conception of enactivism, it is independently well-motivated.

Surely, it will be objected, some behaviour is too off-line, ‘plastic and flexible’ to be explained without appeal to the manipulation of contentful propositional attitudes and symbolic representations. Just how far can we go? Much further than is commonly thought, I think, before we need to introduce anything like contentful states of mind into the picture (see Hutto 2008, chs. 4 and 5). Still, there are obvious limit cases. Certain kinds of deliberative planning – such as acting on the basis of considerations that are explicitly represented – must be content-involving. We should accept that “the ability to think the kind of thoughts that have truth-values is, in the nature of the case, prior to the ability to plan a course of action. The reason is perfectly transparent: Acting on plans (as opposed to, say, merely behaving reflexively or just thrashing about) requires being able to think about the world” (Fodor 2008, p. 13). There are two things to note about this remark.
First, if the above arguments hold, then we should resist the idea that all worldly engagements that do not involve sophisticated, contentful deliberation and symbol-crunching are nothing but bits of non-cognitive reflex or ‘thrashing about’. Secondly, we must ask exactly who – i.e. which cognitive beings – are capable of reflective planning and deliberation. If that class contains only us – i.e. adult, linguistically competent and typically developing human beings, beings who have benefited from a wealth of specialized social scaffolding through engaging in communal practices – then we will want a story about how we get in a position to do so. Thompson-style enactivism promises to tell that story. It may be possible that we can account for the contentful or meaningful basis of such activities without gaps and without having to believe in the existence of underived mental contents of the sort that have resisted naturalistic explanation for so long. Perhaps, after all, we have reason to believe the enactivist credo, that practice logically precedes theory, and not – pace Fodor – the other way around. If so, philosophy of mind gets a new lease on life.

References

Clark A. 2008. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. Oxford: Oxford University Press
Crane T. 2008. Reply to Nes. Analysis 68:215-18
Crane T. 2009. Is Perception a Propositional Attitude? Philosophical Quarterly 59:452-69
Cummins R, Blackmon J, Byrd D, Lee A, Roth M. 2006. Representation and Unexploited Content. In Teleosemantics, ed. G Macdonald, D Papineau, pp. 195-207. Oxford: Oxford University Press
Di Paolo EA. 2009. Extended Life. Topoi 28:9-21
Dretske F. 1981. Knowledge and the Flow of Information. Cambridge, MA: MIT Press
Dretske F. 1988. Explaining Behaviour: Reasons in a World of Causes. Cambridge, MA: MIT Press
Flanagan O. 1991. The Science of the Mind. Cambridge, MA: MIT Press
Fodor JA. 1986. Why Paramecia Don’t Have Mental Representations. In Midwest Studies in Philosophy, ed. PA French, pp. 3-23. Minneapolis: University of Minnesota Press
Fodor JA. 2008. LOT 2: The Language of Thought Revisited. Oxford: Oxford University Press
Fodor JA. 2009. Where is My Mind? London Review of Books, 12 February
Godfrey-Smith P. 2006. Mental Representation and Naturalism. In Teleosemantics, ed. G Macdonald, D Papineau, pp. 42-68. Oxford: Oxford University Press
Godfrey-Smith P. 2007. Information in Biology. In The Cambridge Companion to the Philosophy of Biology, ed. D Hull, M Ruse, pp. 103-19. Cambridge: Cambridge University Press
Gunther YH. 2003. General Introduction. In Essays on Nonconceptual Content, ed. YH Gunther, pp. 1-19. Cambridge, MA: MIT Press
Hutto DD. 1999. The Presence of Mind. Amsterdam: John Benjamins
Hutto DD. 2008. Folk Psychological Narratives: The Socio-Cultural Basis of Understanding Reasons. Cambridge, MA: MIT Press
Jackson F, Pettit P. 1993. Some Content is Narrow. In Mental Causation, ed. J Heil, A Mele, pp. 259-82. Oxford: Oxford University Press
Matthen M. 2006. Teleosemantics and the Consumer. In Teleosemantics, ed. G Macdonald, D Papineau, pp. 146-66. Oxford: Oxford University Press
Menary R. 2009. Intentionality, Cognitive Integration and the Continuity Thesis. Topoi 28:31-43
McGinn C. 1989. Mental Content. Oxford: Basil Blackwell
Millikan RG. 1993. White Queen Psychology and Other Essays for Alice. Cambridge, MA: MIT Press
Millikan RG. 2005. Language: A Biological Model. Oxford: Oxford University Press
Millikan RG. 2006. Useless Content. In Teleosemantics, ed. G Macdonald, D Papineau, pp. 100-14. Oxford: Oxford University Press
Nes A. 2008. Are Only Mental Phenomena Intentional? Analysis 68:205-15
Oyama S. 2000. The Ontogeny of Information: Developmental Systems and Evolution. Durham: Duke University Press
Papineau D. 1987. Reality and Representation. Oxford: Oxford University Press
Ramsey WM. 2007. Representation Reconsidered. Cambridge: Cambridge University Press
Seager W. 2000. Theories of Consciousness. London: Routledge
Thompson E. 2007. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Harvard University Press
Varela FJ, Thompson E, Rosch E. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press
Wheeler M. 2005. Reconstructing the Cognitive World. Cambridge, MA: MIT Press

1. For a discussion of this contrast see Thompson (2007), p. 80.

2. I am reporting Crane’s view, not suggesting that it is unproblematic. One immediate worry about this proposal is that it is hard to understand what it would be for an experience or picture to be accurate simpliciter. It looks as if to be accurate is to be accurate in this or that respect. And that would seem to imply that a condition on being an experience or picture is that the ways in which the picture or experience might be accurate or not would have to be independently specifiable. If so, that makes it look as if whether an experience or picture is accurate or not (to whatever degree) depends on its being used for a particular purpose that determines in what respect it might be so.

3. This notion of information has two features that make it of great value to the naturalist. It doesn’t presuppose what it sets out to explain – i.e. the existence of semantic content – and it is resoundingly non-mysterious, having been put to good work in a number of hard sciences. Following Wheeler (2005) we can say that it is entirely muggle – there’s nothing magical about it.