MASSIVE MODULARITY

Richard Samuels

Cognitive scientists disagree on many issues, but one very widespread commitment is that the mind is a mechanism of some sort: roughly speaking, a physical device decomposable into functionally specifiable subparts. On this assumption, a central project for cognitive science is to characterize the nature of this mechanism—to provide an account of our cognitive architecture—which specifies the basic operations, component parts, and organization of the mind. As such, this project is (albeit in modern, mechanistic guise) an attempt to address issues that have been central to philosophy at least since Plato. The recognition of this fact—as well as the foundational character of the issues and arguments involved—has meant that philosophers have been actively involved in contemporary discussions of cognitive architecture.

Though the overarching project of specifying a cognitive architecture spans many different topics and regions of enquiry, one central cluster of issues focuses on the extent to which our minds are modular in organization. It is this cluster of issues that I focus on here. Specifically, I discuss the question of whether the human mind is massively modular: roughly, whether our minds—including those "central" regions responsible for reasoning and decision-making—are largely or entirely composed of a great many specialized cognitive mechanisms or modules. This question represents the confluence of many issues of central theoretical import to philosophy and cognitive science, including issues about the scope and limits of computational explanation, the role of evolutionary theorizing in understanding the mind, and the extent to which our psychological capacities are innately specified. In large measure because of this, the issue of massive modularity has come to mark a major fault line dividing different approaches to the study of human cognition, and has attracted both prominent advocates—Leda Cosmides, John Tooby, Steven Pinker, Peter Carruthers, and Dan Sperber, to name a few—and its share of influential detractors (e.g., Jerry Fodor and Stephen J. Gould).

The present chapter is not the place to provide a comprehensive survey of the debate surrounding massive modularity (MM). Its goals are more limited. First, in Section 1, it explains what is at issue between advocates and opponents of MM, and spells out the hypothesis itself in more detail. Second, it sketches some of the more prominent arguments for MM. In particular, in Section 2, it considers some well-known arguments from evolution, and in Section 3, arguments from computational tractability. Finally, in Section 4, it considers what many regard as the most serious theoretical challenge for MM—the problem of flexibility—that of explaining our cognitive-behavioral flexibility within the restrictions imposed by a modularist conception of cognitive architecture (Carruthers 2006).

1. What Is at Issue?

To a first approximation, massive modularity is the hypothesis that the human mind is largely or entirely composed from a great many modules. More precisely, MM can be formulated as the conjunction of three claims:

- Composition: The human mind is largely or entirely composed from modules.
- Plurality: The human mind contains a great many modules.
- Central Modularity: Modularity is found not merely at the periphery of the mind but also in those central regions responsible for such "higher" cognitive capacities as reasoning and decision making.

In what follows I assume advocates of MM are committed to the conjunction of these claims. Even so, each is amenable to a variety of different interpretations. More needs to be said if we are to get clearer on what is at issue.

1.1. Composition Thesis

MM is in large measure a claim about the kinds of mechanisms from which our minds are composed—viz. it is largely or even entirely composed from modules.[1]

[1] There is a familiar notion of modularity, sometimes called Chomskian modularity, in which modules are not mechanisms but systems of mental representations—bodies of mentally represented knowledge or information—such as a grammar or a theory (Segal 1996; Samuels 2000; Fodor 2000). Paradigmatically, such structures are truth-evaluable in that it makes sense to ask of the representations whether they are true or false. Moreover, they are often assumed to be innate and/or subject to informational constraints (e.g., inaccessible to consciousness). Although Chomskian modules are an important sort of cognitive structure, they are not the ones most relevant to the sort of position advocated by massive modularists. This is because advocates of MM appear to assume that modules are a species of cognitive mechanism (Sperber 2002; Sperber and Hirschfeld 2007; Cosmides and Tooby 1992; Carruthers 2006).

But this is vague in at least two respects. First, it leaves unspecified the precise extent to which minds are composed from modules. In particular, this way of formulating the proposal accommodates two different positions, which I call strong and weak massive modularity. According to strong MM, all cognitive mechanisms are modules. Such a view would be undermined if we were to discover any non-modular cognitive mechanisms. By contrast, weak MM maintains only that the human mind is largely modular in structure. In contrast to strong MM, such a view is clearly compatible with the claim that there are some non-modular mechanisms. So, for example, a proponent of weak MM can readily posit non-modular devices for reasoning and learning.

A second crucial respect in which the Composition Thesis is vague is that it leaves unspecified what modules are. For present purposes, this is an important matter since the interest and plausibility of the thesis turns crucially on what one takes modules to be.

1.1.1. Robust Notions of Module

Though there are many notions of modularity in play within cognitive science,[2] perhaps the most well-known and most demanding is due to Fodor (1983).
On this view, modules are functionally characterizable cognitive mechanisms that tend to possess the following features to some interesting degree:

• Domain-specificity: They operate on a limited range of inputs, defined by some task domain such as vision or language processing;
• Informational encapsulation: They have limited access to information in other systems;
• Innateness: They are unlearned components of the mind;
• Inaccessibility: Other mental systems have only limited access to a module's computations;
• Shallow outputs: Their outputs are not conceptually elaborated;
• Mandatory operation: They respond automatically to inputs;
• Speed: Their operation is relatively fast;
• Neural localization: They are associated with distinct neural regions;
• Characteristic breakdowns: They are subject to characteristic and specific patterns of breakdown; and
• Characteristic ontogeny: Their developmental trajectories exhibit a characteristic pace and sequence.

This full-fledged Fodorian notion has been highly influential in many areas of cognitive science (Garfield 1987); but it has not played much role in the debate over MM,[3] and for good reason. The thesis that minds are largely or entirely composed of Fodorian modules is obviously implausible. Indeed, some of the entries on Fodor's list—relative speed and shallowness, for example—make little sense when applied to central systems (Carruthers 2006; Sperber and Hirschfeld 2007). And even where Fodor's properties can be sensibly ascribed—as in the case of innateness—they carry a heavy justificationary burden that few seem inclined to shoulder (Baron-Cohen 1995; Sperber 1994).

[2] The following discussion is by no means exhaustive. For more detailed discussions of different notions of modularity see Segal (1996); Samuels (2000); and Carruthers (2006).

[3] Incidentally, not even Fodor adopts it in his recent discussions of MM (Fodor 2000, 2008).

In any case, there is a broad consensus that not all characteristics on Fodor's original list are of equal theoretical import. Rather, domain-specificity and informational encapsulation are widely regarded as most central. Both concern the architecturally imposed[4] informational restrictions to which cognitive mechanisms are subject—the range of representations they can access—though the kinds of restriction involved are different.

Domain-specificity is a restriction on the representations a cognitive mechanism can take as input—that "trigger" it or "turn it on." A mechanism is domain-specific (as opposed to domain-general) to the extent that it can only take as input a highly restricted range of representations.[5] Standard candidates include mechanisms for low-level visual perception, face recognition, and arithmetic.

Informational encapsulation is a restriction on the kinds of information a mechanism can use as a resource once so activated—paradigmatically, though not essentially, information stored in memory. Specifically, a cognitive mechanism is encapsulated to the extent that it can access, in the course of its computations, less than all of the information available to the organism as a whole (Fodor 1983). Standard candidates include mechanisms such as those for low-level visual perception and phonology that do not draw on the full range of an organism's beliefs and goals.

Though there are many characteristics other than domain-specificity and encapsulation that have been ascribed to modules, when discussing more robust conceptions of modularity I will focus on these properties.
This is both because they are widely regarded as the most theoretically important features of Fodorian modules, and because—as we will see—they are central to the topics to be considered here.

[4] To claim that a property of a cognitive mechanism is architecturally imposed minimally implies the following. First, such properties are relatively enduring characteristics of the device. Second, they are not mere products of performance factors, such as fatigue or lapses in attention. Finally, they are supposed to be cognitively impenetrable (Pylyshyn 1984). To a first approximation: they are not properties of the mechanism that can be changed as a result of alterations in the beliefs, goals, and other representational states of the organism.

[5] Two comments. First, it should go without saying—though it will be said anyway—that the notion of domain-specificity admits of degree and that researchers who use the notion are interested in whether we possess mechanisms that are domain-specific to some interesting extent. The same points also apply to the notion of informational encapsulation. Second, there is a range of different ways in which theorists have proposed to characterize types or domains of representations. For example, on one common view, domains of representations are content domains: sets of representations that are characterized in terms of what they are about, or what they mean (Fodor 1983). On another view, domains of representations are individuated by formal properties of representations (Jackendoff 1992; Barrett and Kurzban 2006). On this view, the representations that comprise a domain share various formal, non-semantic properties. For further discussion of issues about the nature and individuation of domains see Sperber 1996; Fodor 2000; Samuels 2000; and Barrett and Kurzban 2006.

That said, it is important to stress that not all those interested in modularity assume the centrality of these notions.

1.1.2. A Minimal Functional Notion of Module

According to another, minimal conception of modules that has become increasingly commonplace in cognitive science—especially among advocates of MM—modules are just distinct, functionally characterized cognitive mechanisms of the sort that correspond to boxes in a cognitive psychologist's flow diagram (Fodor 2005). In a recent paper, Barrett and Kurzban (2006) summarize and endorse this growing consensus:

We similarly endorse the view espoused by many evolutionary psychologists that the concept of modularity should be grounded in the notion of functional specialization (Barrett 2005; Pinker 1997, 2005; Sperber 1994, 2005; Tooby and Cosmides 1992) rather than any specific Fodorian criterion. Biologists have long held that structure reflects function, but that function comes first. That is, determining what structure one expects to see without first considering its function is an approach inconsistent with modern biological theory. The same holds true, we argue, for modularity. (Barrett and Kurzban 2006)

Of course, there is nothing wrong per se with adopting such a conception of modularity. Indeed, one obvious virtue is that it renders MM more plausible. But it does so at the risk of leaching the hypothesis of its content, thereby rendering it rather less interesting than it may initially appear to be.
For in the context of cognitive science, the idea that minds are composed of functionally specifiable mechanisms has near universal acceptance.[6] So, if being a module just is being a functionally specifiable mechanism, then the thesis that minds are composed of modules is just the consensus view.

1.2. Plurality Thesis

Still, it does not follow, as many have claimed, that no distinctive version of MM can be formulated with the minimal notion of modularity (Fodor 2005; Prinz 2006). In particular, some proponents of MM maintain that their thesis is interesting not because it implies that minds are composed of minimal modules, but because it implies what I earlier called the Plurality Thesis: the view that minds contain a great many cognitive mechanisms or modules (Carruthers 2006).[7]

[6] This is so even for fans of empiricist and domain-general accounts of cognitive processes. After all, a domain-general learning mechanism is still a functionally specifiable device.

[7] It may also be that functional specialization admits of degree, and that an interesting version of MM could maintain that minds are largely or entirely composed of highly specialized mechanisms.

Is this an interesting thesis? Clearly, if formulated in terms of a robust notion of modularity, the Plurality Thesis is quite radical since many deny that domain-specific and/or encapsulated devices have a substantial role to play in our cognitive economy. But things are less clear if one adopts the minimal notion. According to some advocates of MM, such a thesis would still be interesting since many deny that there are lots of such minimal modules. Carruthers, for example, maintains that such a claim is rejected by "those who . . . picture the mind as a big general-purpose computer with a limited number of distinct input and output links to the world" (Carruthers 2006).

But on reflection this cannot be quite right. Big general-purpose computers are not simple entities. On the contrary, they are almost invariably decomposable into a large number of functionally characterizable sub-mechanisms.[8] So, for example, a standard von Neumann-type architecture decomposes into a calculating unit, a control unit, a fast-to-access memory, a slow-to-access memory, and so on; and each of these decomposes further into smaller functional units that are themselves decomposable into sub-mechanisms, and so on. As a consequence, a standard von Neumann machine will typically have hundreds or even thousands of distinct functionally characterizable subcomponents.[9] Thus it would seem that even radical opponents of MM can endorse the sort of Plurality Thesis advocated by Carruthers and others. Indeed, some have argued that this is little more than a consequence of the consensus view in cognitive science—viz. that cognitive mechanisms are hierarchically decomposable into smaller systems (Fodor 2005).

Still, there is an important distinction between this anodyne version of plurality and the sort of view that is characteristic of MM, even on the minimal conception of modules. To a first approximation, non-modularists, such as those who construe the mind as a big general-purpose computer with a limited number of distinct input and output links, are committed to a plurality of functional modules because, qua mechanists, they are committed to the idea that complex mechanisms are decomposable into simpler parts. On this view, there will be large numbers of parts at lower levels in the decomposition.
But there will also be some relatively abstract level of description at which there is only a small number of devices. Roughly, on such views the highest level of analysis will be one in which all the parts are organized into a relatively small number of cognitive mechanisms. In contrast, advocates of MM deny that there is any such level of composition. Rather, they maintain that even at the highest levels of description, the human mind will resemble a confederation of hundreds or even thousands of functionally dedicated devices—a cheater detection mechanism, a theory of mind device, a frequentist module, and so on—that in no interesting sense compose to form some larger single unitary mechanism.

[8] Indeed, this is more or less guaranteed by the widespread assumption that the functional decomposition of a "large" system will typically have many levels of aggregation (Simon 1962). I return to this point below.

[9] A similar point applies to the sort of radical connectionism on which the mind is characterized as one huge undifferentiated neural network. This is often—and rightly—seen as the antithesis of MM (Pinker 1997), and yet it is committed to a vast plurality of mechanisms. After all, each node in a neural network is a mechanism, and in any version of the connectionist story, there will be a great many such nodes.

In short, all mechanists about cognition are committed to a plurality of cognitive mechanisms because they are committed to functional decomposition. Call this decompositional plurality. But only advocates of MM are committed to what we might call a compositional plurality: the existence of a large number of mechanisms that cannot be composed further. It would thus seem that, contrary to what many have claimed, an interesting version of MM could be formulated in terms of the minimal, functional notion of a module.[10]

[10] It should be noted that the present discussion presupposes answers to some genuine but largely unaddressed questions about the individuation of cognitive mechanisms. In particular, it is far from clear when two or more mechanisms are themselves parts of some larger mechanism. For some discussion of such issues, see Lyons 2001.

1.3. Central Modularity

Let us turn to the final thesis that comprises MM:

Central Modularity: Modules are found not merely at the periphery of the mind but also in those central regions responsible for such "higher" cognitive capacities as reasoning and decision making.

This does not strictly follow from the claims discussed so far since one might deny that there are any central systems for reasoning and decision making. But this is not the view that advocates of MM seek to defend. Indeed, a large part of what distinguishes MM from the earlier, well-known modularity hypothesis defended by Fodor (1983) and others is that the modular structure of the mind is not restricted to input systems (those responsible for perception, including language perception) and output systems (those responsible for producing behavior) (Jackendoff 1992). So, for example, it has been suggested that there are modules for such central processes as social reasoning (Cosmides and Tooby 2000), biological categorization (Pinker 1994), and probabilistic inference (Gigerenzer et al. 1999).

How interesting is the Central Modularity thesis? This depends on the notion of modularity involved, but also on the kind of plurality that is at stake. Start with versions of the thesis formulated with the minimal notion of a module. If the claim is merely that there are central, functional modules, then Central Modularity is merely the consensus view in cognitive science.
Similarly, if the claim is merely that there are lots of central, functional modules, then once more it is hard to discern any interesting and distinctive position. But if the kind of plurality involved is not merely decompositional, but compositional in character, then we appear to have a position that is rather more worthy of attention:

Central Compositional Modularity: Central cognition depends on a great many functional modules that are not themselves composable into "larger," more inclusive systems.

This would be a distinctive version of Central Modularity. Not because it maintains that central cognition depends on functional modules, or because it assumes the existence of many such mechanisms, but because it implies a kind of decentralized or confederate view of central cognition: one on which our capacity for thought, reasoning, judgment, and the like depends on the interaction of a multitude of distinct mechanisms. This is one way to articulate an interesting version of Central Modularity without recourse to a Fodorian conception of modules. Moreover, it is a suggestion that comports well with views articulated by some prominent advocates of MM. So, for example, it appears to capture what Tooby and Cosmides have in mind when they liken our cognitive architecture to "a confederation of hundreds or thousands of functionally dedicated computers (often called modules)" (Tooby and Cosmides 1995, xiv). What makes their position interesting is not merely that there are lots of such devices, but that they comprise a loose confederacy of subsystems as opposed to, say, an all-encompassing unitary central executive.

Let us now consider versions of Central Modularity formulated in terms of a more robust conception of modularity. Here, the degree to which one's version of Central Modularity is interesting will depend on both (1) the extent to which central cognition is subserved by domain-specific and/or encapsulated mechanisms, and (2) how many such modules there are. Both these questions could be answered in a variety of different ways. At one extreme, for example, one might adopt the following relatively weak claim:

Weak Central Modularity: There are a number of domain-specific and/or encapsulated central systems, but there are also non-modular—domain-general and unencapsulated—central systems as well.

Such a proposal is not without interest. But it is not especially radical in that it does not stray far from the old-fashioned peripheral modularity advocated by Fodor. Moreover, as we will see in Section 4, it does not raise the sorts of deep theoretical problems that plague other, stronger versions of MM. At the other extreme one might maintain:

Strong Central Modularity: All central systems are domain-specific and/or encapsulated, and there are a great many of them.

This is a genuinely radical position since it implies that there are no domain-general, informationally unencapsulated central systems. But Strong Central Modularity is also implausible, for as we will see in later sections, there are no good reasons to accept it, and some reason to think it is false.
2. Massive Modularity and Evolution

Discussions of MM are closely tied to claims about the evolutionary plausibility of different architectural arrangements. Specifically, many have argued that MM is plausible in the light of quite general considerations about the nature of evolution. Though this is not the place to discuss such arguments in detail, what follows aims to provide a flavor of the evolutionary motivations for MM. In doing so, I discuss briefly two prominent arguments for MM.[11] (For more detailed discussion of such arguments see Tooby and Cosmides 1992; Sperber 1994; Samuels 1998; Fodor 2000; Buller 2005; and Barrett and Kurzban 2005.)

[11] There are other, less plausible arguments for MM, which, due to space limitations, will not be considered here. For further discussion of other arguments for MM see Tooby and Cosmides (1992); Sperber (1994); and Samuels (2000).

2.1. Evolvability

One common argument for MM derives from Simon's (1962) seminal discussion of evolutionary stability (Carston 1996; Carruthers 2006). According to Simon, for an evolutionary process to reliably assemble complex functional systems—biological systems in particular—the overall system needs to be semi-decomposable: hierarchically organized from components with relatively limited connections to each other. Simon illustrates the point with a parable of two watchmakers, Hora and Tempus, both highly regarded for their fine watches. But while Hora prospered, Tempus became poorer and poorer and finally lost his shop. The reason:

The watches the men made consisted of about 1000 parts each. Tempus had so constructed his that if he had one partially assembled and had to put it down—to answer the phone, say—it immediately fell to pieces and had to be reassembled from the elements. . . . The watches Hora handled were no less complex . . . but he had designed them so that he could put together sub-assemblies of about ten elements each. Ten of these subassemblies, again, could be put together into a larger subassembly and a system of ten of the latter constituted the whole watch. Hence, when Hora had to put down a partly assembled watch in order to answer the phone, he lost only a small part of his work, and he assembled his watches in only a fraction of the man-hours it took Tempus. (Simon 1962)

The obvious moral—and the one Simon invites us to accept—is that evolutionary stability requires that complex systems be hierarchically organized from dissociable subsystems, and according to many, this militates in favor of MM (Carston 1996, 75).
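The arithmetic behind the parable is easy to make explicit. The following minimal sketch (in Python; the function name is ours, and the figure of one chance in a hundred of interruption per part is the illustrative one from Simon's telling) computes each watchmaker's chance of completing an unbroken stretch of work:

```python
# A back-of-the-envelope rendering of the watchmaker parable. We assume an
# interruption probability of 0.01 per part added, and that an interruption
# destroys whatever unstable (sub)assembly is currently in hand.

def p_complete(parts: int, p_interrupt: float) -> float:
    """Probability of adding `parts` parts in a row without interruption."""
    return (1 - p_interrupt) ** parts

p = 0.01

# Tempus builds each watch as one unstable assembly of 1,000 parts.
tempus_watch = p_complete(1000, p)

# Hora only ever risks a ten-element subassembly at a time.
hora_step = p_complete(10, p)

print(f"Tempus completes a watch on about {tempus_watch:.1e} of attempts")   # ~4.3e-05
print(f"Hora completes any given subassembly on about {hora_step:.0%} of attempts")  # ~90%
```

Stable intermediate forms thus make successful assembly overwhelmingly more probable, and it is this feature of hierarchical organization that the evolvability argument exploits.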
Though evolutionary stability may initially appear to favor MM, one concern is that the argument only supports the familiar mechanistic thesis that complex machines are hierarchically assembled from (and decomposable into) many subcomponents. But this clearly falls short of the claim that all (or even any) are domain-specific or encapsulated. Rather, it supports at most the sort of banal Plurality Thesis which I earlier referred to as decompositional plurality: one that is wholly compatible with even a Big Computer view of central processes. All it implies is that if there are such complex central systems, they will need to be hierarchically organized into dissociable subsystems—which, incidentally, was the view Simon and his main collaborators endorsed all along (Simon 1962; Newell 1990).

2.2. Task Specificity

Another well-known kind of evolutionary argument, widely associated with the work of the evolutionary psychologists Leda Cosmides and John Tooby, purports to show that once we appreciate the way in which natural selection operates and the character of the cognitive problems that human beings confront, we will see that there are good reasons for thinking that our minds contain a large number of distinct, modular mechanisms. In brief, the argument is this: Human beings confront a great many evolutionarily important cognitive tasks whose solutions impose quite different demands. For example, the demands on vision are distinct from those of speech recognition, of mindreading, cheater detection, probabilistic judgment, grammar induction, and so on. Further, it is unlikely that there could be a single general inference mechanism that could perform all these cognitive tasks, and even if there could be such a mechanism, it would be systematically outperformed by a system comprised of an array of distinct mechanisms, each of whose internal processes were specialized for processing the different sorts of information in the way required to solve the task (Carruthers 2006; Cosmides and Tooby 1992, 1994). But if this is so, then we should expect the human mind to contain a great many functionally specialized cognitive mechanisms since natural selection can be expected to favor superior solutions over inferior ones. In short: we should expect minds to be massively modular in their organization.

Though there is a lot to say about this argument, I will restrict myself to two brief comments. (See Samuels 1998; Buller 2005; Fodor 2000; and Carruthers 2006 for further discussion.) First, if the alternatives were MM or a view of minds as comprised of just a single general-purpose cognitive device, then MM would be the more plausible. But these are clearly not the only options; on the contrary, there are lots of different options. For example, opponents of MM might deny that central systems are modular while still insisting there are plenty of modules for perception, motor control, selective attention, and so on. In other words, the issue is not merely whether some cognitive tasks require specialized modules, but whether the sorts of tasks associated with central cognition—paradigmatically, reasoning and decision making—require a proliferation of such mechanisms.

Second, it is important to see that the addition of functionally dedicated mechanisms is not the only way of enabling a complex system to address multiple tasks. An alternative is to provide some (small set of) relatively functionally nonspecific mechanism with the requisite bodies of information for solving the tasks it confronts. This is a familiar proposal among those who advocate non-modular accounts of central processes. Indeed, advocates of non-modular reasoning architectures routinely assume that reasoning devices have access to a huge amount of specialized information on a great many topics, much of which will be learned but some of which may be innately specified (Newell 1990; Anderson and Lebiere 2003). Moreover, it is one that plausibly explains much of the proliferation of cognitive competences that humans exhibit throughout their lives—for example, the ability to play chess, or to reason about historical issues as opposed to politics or gene splicing or restaurants.
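The shape of this alternative can be rendered schematically. In the toy sketch below, all names and the crude retrieval rule are invented for illustration: one functionally nonspecific mechanism exhibits different "competences" depending solely on the body of information it is given.

```python
from typing import List

# Competence-conferring information lives in swappable bodies of knowledge...
CHESS_KNOWLEDGE: List[str] = [
    "bishops move diagonally",
    "castling requires an unmoved king and rook",
]
RESTAURANT_KNOWLEDGE: List[str] = [
    "popular restaurants require booking ahead",
    "tipping is customary in some countries",
]

def general_reasoner(query: str, knowledge: List[str]) -> List[str]:
    """...while the mechanism itself is domain-general: one retrieval
    procedure, applied unchanged to whichever body of information it is
    given (word overlap stands in for real inference)."""
    terms = set(query.lower().split())
    return [fact for fact in knowledge if terms & set(fact.lower().split())]

# One and the same device, two different apparent competences:
print(general_reasoner("how do bishops move?", CHESS_KNOWLEDGE))
print(general_reasoner("which restaurants require booking?", RESTAURANT_KNOWLEDGE))
```

Nothing about the mechanism is chess-specific or restaurant-specific; its competences multiply with its information, not with its parts.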
To be sure, it might be that each such task requires a distinct mechanism, but such a conclusion does not flow from general argument alone. For all we know, the same is true of the sorts of tasks advocates of MM discuss. It may be that the capacity to perform certain tasks is explained by the existence of specialized mechanisms. But how often this is the case for central cognition is a largely open question that is not adjudicated by the argument from task specificity.

3. Computational Tractability and Relevance

A second family of arguments for MM focuses on a range of problems that are familiar from the history of cognitive science: problems that concern the computational tractability of cognitive processes. Though such intractability arguments vary considerably in detail, they share a common format. First, they proceed from the assumption that cognitive processes are classical computational ones—roughly, algorithmically specifiable processes defined over mental representations. This assumption has been criticized in many quarters, but it has widespread acceptance in the context of the present debate, and for this reason I assume it here. Second, given the assumption that cognitive processes are computational ones, intractability arguments seek to undermine non-modular accounts of cognition by establishing the following Intractability Thesis:

IT: Non-modular cognitive mechanisms—in particular, mechanisms for reasoning and other central processes—are computationally intractable in roughly the sense that they require more time or cognitive resources—for example, memory and processing power—than humans can reasonably be expected to possess.

But if this is so, and if the human mind is, as many cognitive scientists suppose, a computational system of some kind, then it follows that the mind is composed of modular cognitive mechanisms. After all, a model of cognition that requires resources that we do not possess is simply not one that can accurately characterize the architecture of our minds.

3.1. Informational Impoverishment

Why accept the Intractability Thesis? One well-known argument for IT, often associated with the work of Cosmides and Tooby, proceeds from the assumption that a non-modular mechanism—one that is task nonspecific or domain-general—"lacks any content, either in the form of domain-specific knowledge or domain-specific procedures that can guide it towards the solution of problems" (Cosmides and Tooby 1994, 94). As a consequence, it "must evaluate all the alternatives it can define" (94). But as Cosmides and Tooby observe, such a strategy is subject to serious intractability problems, since even routine cognitive tasks are such that the space of alternative options tends to increase exponentially. Non-modular mechanisms would thus seem to be computationally intractable: at best intolerably slow, and at worst incapable of solving the vast majority of problems they confront.

Though frequently presented as an objection to non-MM accounts of cognitive architecture, this argument is really only a criticism of theories that characterize cognitive mechanisms as suffering from a particularly extreme form of informational impoverishment. Any appearance to the contrary derives from the stipulation that domain-general mechanisms possess no specialized knowledge.
But this conflates claims about the need for informationally rich cognitive mechanisms—a claim that is not denied—with claims about the need for modularity; and though modularity is one way to build specialized knowledge into a system, it is not the only way. As noted earlier, another is for non-modular devices to have access to bodies of specialized knowledge. Indeed, it is commonly assumed by non-modular accounts of central processing that such devices have access to huge amounts of information. This is obvious from even the most cursory survey of the relevant literatures. Fodor (1983), for example, maintains explicitly that non-modular central systems have access to huge amounts of information; Gopnik, Newell, and many others who adopt a non-modular conception of central systems maintain this as well (Gopnik and Meltzoff 1997; Newell 1990). The argument currently under discussion thus succeeds only in refuting a straw man.

3.2. Relevance Problems

Non-modularists can avoid the conclusion of Cosmides and Tooby's argument by positing relatively task nonspecific mechanisms that have access to lots of information. Yet it is precisely the assumption of informational richness that generates the most well-known tractability problems for non-modular accounts of cognition: what have historically been construed as versions of the frame problem, though they are perhaps more accurately characterized as relevance problems (Pylyshyn 1989; Ford and Pylyshyn 1996; Samuels 2010). Roughly put, such problems conform to the following general schema:

Relevance Problems: Given a task, T, and computational system S, how does S determine (with reasonable levels of success) from all the available information which is relevant to the specific task at hand? (Glymour 1987)

Such problems can arise in the performance of many different tasks, including planning, decision making, pragmatics, perception, and so on. But perhaps the most well-known—and notoriously difficult to address—is a kind of relevance problem that arises in the context of belief revision, what might be called the problem of relevance in update:

Relevance in Update: Given some new information, how do we determine (with reasonable levels of success) which of the representational states we possess are relevant to determining how to update our beliefs?

Does this problem undermine non-modular, computational accounts of cognition? Presumably it is a very hard research topic for cognitive science. Among other things, it requires the specification of tractable, psychologically plausible computational processes that manage to successfully recruit those representations relevant to the task at hand. But the fact that the problem constitutes a hard research topic is not, by itself, reason to reject non-modular views. Rather, it is merely the specification of one central part of the problem of explaining belief revision—and, moreover, a part of the problem that modular and non-modular views alike presumably need to address. What is required to turn this into an objection to non-modular, computational accounts is an argument for the claim that non-modular accounts cannot plausibly accommodate the sort of relevance-sensitivity characteristic of human cognition. In what follows, I consider two arguments of this sort.
3.2.1. Exhaustive Search

One might think that in order to identify those items of information relevant to the task at hand, a non-modular central system would need to perform exhaustive searches over our beliefs. But given even a conservative estimate of the size of any individual's belief system, such a search would be unfeasible in practice. In this case, it would seem that non-modular reasoning mechanisms are computationally intractable.

Though it is unclear that anyone really endorses this argument, some have found it hard not to view advocates of non-modular central systems as somehow committed to exhaustive search (Carruthers 2004; Glymour 1985). Yet this view is incorrect. What the non-modularist does accept is that unencapsulated reasoning mechanisms have access to huge amounts of information—paradigmatically, all the agent's background beliefs. But the relevant notion of access is a modal one. It concerns what information—given architectural constraints—a mechanism can mobilize in solving a problem. In particular, it implies that any background belief can be used, not that the mechanism in fact mobilizes the entire set of background beliefs—that is, that it engages in exhaustive search.

3.2.2. Inferential Holism and the Intractability of Unencapsulated Processes

A second, closely related intractability argument focuses on the apparent implications of the assumption that modular mechanisms are paradigmatically encapsulated. Though the argument has been formulated many times over (see Carruthers 2006; Samuels 2005; Barrett and Kurzban 2006), one relatively plausible rendering of the argument proceeds from the observation that much human reasoning is holistic in character. In contrast to the argument from exhaustive search, the sort of holism at issue is not that all—or even most—of our beliefs actually figure in any specific instance of reasoning. Instead, the sort of holism at stake here is modal in character. What it amounts to is that under the appropriate conditions—especially those involving different background beliefs—the relevance of a belief to a reasoning task can vary dramatically. Slightly more precisely:

Inferential Holism: Given appropriate background beliefs, (almost) any belief can be rendered relevant to the assessment of (almost)[12] any other belief.[13]

To take a fairly simple example:[14] On the face of it, the current cost of tea in China has little to do with whether my brother's baby in England will cry on Saturday morning. But suppose that I believe my brother has stocks invested in Chinese tea, that he reads the business section of the newspaper every Saturday morning, and that on reading bad financial news he tends to fly into a rage. Given these background beliefs, it seems that beliefs about the current cost of tea in China may well be relevant to beliefs about whether my brother's baby will cry on Saturday morning. Mutatis mutandis for other beliefs. Or so it would seem. In which case, it would seem that under the appropriate conditions, a given belief can be relevant to the assessment of (almost) any other.

[12] Clearly, this could do with refinement. So, for example, few beliefs will presumably be relevant to the assessment of logical beliefs—e.g., that if P, then P.

[13] Or, to use Fodor's (1983) terminology: belief revision processes are isotropic.

[14] The example is based on a case used in Copeland (1993), which, in turn, was based on an example from Guha and Levy (1990).

How is the apparent holism of human inference related to issues of modularity? One connection is this: If our capacity for belief revision depends on some kind of domain-general inference system, then such a system will need to be highly unencapsulated. Otherwise the mechanism in question could not explain the holistic character of much human inference.
But, the argument continues, such unencapsulated processes would be computationally intractable. They would require more time and resources than we, in fact, possess. In which case, it cannot be that we possess unencapsulated reasoning mechanisms.

What are we to make of this argument? The first premise is plausible. If belief revision is holistic and depends on a single mechanism, then the mechanism would need to be unencapsulated. The problem is with the second premise. There is a long story to tell here. (See Samuels 2005.) But the short version is that tractability does not require encapsulation. As with most real-world computational applications—Web search engines, for example—there may be heuristic and approximation techniques that permit feasible computation: techniques that often, though not invariably, identify a substantial subset of those representations that are relevant to the task at hand. Of course, this would not be an option if we maintained that, when reasoning, we are guaranteed to identify relevant beliefs. But there is no reason whatsoever to suppose that this claim is true. Indeed, one very clear moral from the last four decades of research on human judgment and reasoning is that such standards of accuracy are misplaced.[15] I conclude, then, that the present argument fails.

[15] See, for example, Pohl (2005) for a discussion of the myriad errors that we make in reasoning.
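To give a flavor of what such techniques look like, here is a deliberately crude sketch in the spirit of a search engine's inverted index; the scoring rule is invented for illustration, and real proposals are far more sophisticated:

```python
# Heuristic relevance retrieval: index beliefs by the words they contain, then
# retrieve a few candidates by word overlap. Fast and fallible by design:
# no exhaustive, belief-by-belief assessment of the whole store is performed.

from collections import defaultdict
from typing import Dict, List, Set

def build_index(beliefs: List[str]) -> Dict[str, Set[int]]:
    """Map each word to the set of beliefs mentioning it."""
    index: Dict[str, Set[int]] = defaultdict(set)
    for i, belief in enumerate(beliefs):
        for word in belief.lower().split():
            index[word].add(i)
    return index

def plausibly_relevant(query: str, beliefs: List[str],
                       index: Dict[str, Set[int]], k: int = 3) -> List[str]:
    """Return up to k beliefs sharing vocabulary with the query."""
    scores: Dict[int, int] = defaultdict(int)
    for word in query.lower().split():
        for i in index.get(word, set()):
            scores[i] += 1
    best = sorted(scores, key=scores.get, reverse=True)[:k]
    return [beliefs[i] for i in best]

beliefs = ["my brother holds stock in chinese tea",
           "he reads the business news every saturday",
           "bad financial news sends him into a rage",
           "water boils at 100 degrees"]
index = build_index(beliefs)

# Finds the tea and newspaper beliefs, but misses the rage belief: heuristic
# retrieval often, though not invariably, recovers the relevant subset.
print(plausibly_relevant("will the cost of tea in china upset my brother", beliefs, index))
```

The point is not that human memory works this way, but that unencapsulated access is compatible with never performing an exhaustive search.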
3.3. The Locality Argument

A final kind of intractability argument that I consider here—one that has been hugely influential in recent debate—is due to Jerry Fodor (2000, 2008). Fodor's argument is a complex one, but the core idea can be framed in terms of a tension between two claims.[16]

[16] For more detailed discussion of the argument see Ludwig and Schneider (2007) and Samuels (2010).

The first claim is that classical computational processes are local in roughly the following sense: what computations apply to a particular representation is determined solely by its constituent structure—that is, by how the representation is constructed from its parts (2000, 30). To take a very simple example, whether the addition function can be applied to a given representation is solely determined by whether it has the appropriate syntactic structure—for example, whether it contains a permissible set of symbols related by "+."

The second claim is that much of our reasoning is global in that it is sensitive to context-dependent properties of the entire belief system. In arguing for this, Fodor focuses primarily on abductive reasoning (or inference to the best explanation). Such inferences routinely occur in science and, roughly speaking, consist in coming to endorse a particular belief or hypothesis on the grounds that it constitutes the best available explanation of the data. One familiar feature of such inferences is that the relative quality of hypotheses is not assessed merely in terms of their ability to fit the data, but also in terms of their simplicity and conservativism. According to Fodor, however, these properties are not intrinsic to a belief or hypothesis but are global characteristics that a belief or hypothesis possesses by virtue of its relationship to a constantly changing system of background beliefs.

The problem, then, is this: If classical computational operations are local, how could global reasoning processes, such as abduction, be computationally tractable? Notice that if the above is correct, a classical abductive process could not operate merely by looking at the hypotheses to be evaluated. This is because, by assumption, what classical computations apply to a representation is determined solely by its constituent structure, whereas the simplicity and conservativism of a hypothesis, H, depend not only on its constituent structure but on its relations to our system of background beliefs, K. In which case, a classical implementation of abduction would need to look at both H and whatever parts of K determine the simplicity and conservativism of H. The question is: How much of K needs to be consulted in order for a classical system to perform reliable abduction? According to Fodor, the answer is that lots—indeed, very often, the totality—of the background will need to be accessed, since this is the "only guaranteed way" of classically computing a global property. But this threatens to render reliable abduction computationally intractable. As Fodor puts it:

Reliable abduction may require, in the limit, that the whole background of epistemic commitments be somehow brought to bear on planning and belief fixation. But feasible abduction requires in practice that not more than a small subset of even the relevant background beliefs are actually consulted. (2000, 37)

In short: if classicism is true, abduction cannot be reliable. But since abduction presumably is reliable, classicism is false.

If sound, the above argument would appear to show that classicism itself is untenable. So, why would anyone think it supports MM? The suggestion appears to be that MM provides the advocate of CTM (the computational theory of mind) with a way out: a way of avoiding the tractability problems associated with the globality of abduction without jettisoning CTM (Sperber 2005; Carruthers 2006). Fodor himself put the point as well as anyone:

Modules are informationally encapsulated by definition. And, likewise by definition, the more encapsulated the informational resources to which a computational mechanism has access, the less the character of its operations is sensitive to global properties of belief systems. Thus to the extent that the information accessible to a device is architecturally constrained to a proprietary database, it won't have a frame problem and it won't have a relevance problem (assuming that these are different); not, at least, if the database is small enough to permit approximations to exhaustive searches. (2000, 64)

The modularity of central systems is thus supposed to render reasoning processes sufficiently local to permit tractable computation.
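The tension driving the argument can be made concrete with a toy contrast; all names, and the stand-in consistency test, are invented for illustration:

```python
# Local versus global, schematically. A local operation applies to a
# representation solely in virtue of its constituent structure; a global
# property, on Fodor's construal, depends on the representation's relation
# to the whole background belief system K.

from typing import Set, Tuple

def step(expr: Tuple):
    """Local: whether the addition rule applies is settled entirely by the
    symbol structure of expr, never by what else the system believes."""
    if expr[0] == '+':
        return expr[1] + expr[2]
    raise ValueError("no rule applies to this structure")

def conflicts(hypothesis: str, belief: str) -> bool:
    # Stand-in for a genuine consistency test.
    return hypothesis == f"not {belief}"

def conservativeness(hypothesis: str, K: Set[str]) -> float:
    """Global: the score cannot be read off the hypothesis itself; computing
    it requires scanning the (constantly changing) background store K."""
    retained = sum(1 for belief in K if not conflicts(hypothesis, belief))
    return retained / len(K) if K else 1.0

K = {"swans are white", "ravens are black"}
print(step(('+', 3, 4)))                           # 7: structure alone suffices
print(conservativeness("not swans are white", K))  # 0.5: had to consult all of K
```

The worry, of course, is that as K approaches the size of a human belief system, any implementation that must scan it in this way becomes infeasible.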
There are a number of serious problems with the above line of argument. One that will not be addressed here concerns the extent to which MM provides a satisfactory way of shielding computationalism from the tractability worries associated with globality. What will be argued, however, is that although simplicity and conservativism are plausibly context dependent, Fodor provides us with no reason whatsoever to think that they are global in any sense that threatens non-modular versions of computationalism.

First, when assessing the claim that abduction is global, it is important to keep firmly in mind the general distinction between normative and descriptive-psychological claims about reasoning: claims about how we ought to reason, and claims about how we actually reason. This distinction applies to the specific case of assessing the simplicity and conservativism of hypotheses. On the normative reading, assessments of simplicity and conservativism ought to be global: that is, normatively correct assessments ought to take into consideration one's total background epistemic commitments. But of course it is not enough for Fodor's purposes that such assessments ought to be global. Rather, it needs to be the case that the assessments humans make are, in fact, global—and there is no reason whatsoever to suppose that this is true.

A comparison with the notion of consistency may help to make the point clearer. Consistency is frequently construed as a normative standard against which to assess one's beliefs (Dennett 1987). Roughly, all else being equal, one's beliefs ought to be consistent with each other. When construed in this manner, however, it is natural to think that consistency should be a global property in the sense that any belief ought to be consistent with the entirety of one's background beliefs. But there is absolutely no reason to suppose that human beings conform to this norm, and some reason to deny that we do. So, for instance, there is good reason to suppose that reliable methods of consistency checking are computationally too expensive for creatures like us to engage in, if consistency is construed as a global property of belief systems (Cherniak 1986). Moreover, this is so in spite of the fact that consistency really does play a role in our inferential practices. What I am suggesting is that much the same may be true of simplicity and conservativism. When they are construed in a normative manner, it is natural to think of them as global properties; but when construed as properties of the beliefs that figure in actual human inference, there is no reason to suppose that they accord with this normative characterization.

Second, even if we suppose that simplicity and conservativism are global properties of actual beliefs, the locality argument still does not go through, since it turns on the implausible assumption that we are guaranteed to make successful assessments of simplicity and conservativism. Specifically, in arguing for the conclusion that abduction is computationally unfeasible, Fodor relies on the claim that "the only guaranteed way of Classically computing a syntactic-but-global property" is to take "whole theories as computational domains" (2000, 36). But guarantees are beside the point. Why suppose that we always successfully compute the global properties on which abduction depends? Presumably we do not. And one very plausible suggestion is that we fail to do so when the cognitive demands required are just too great. In particular, for all that is known, we may well fail under precisely those circumstances the classical view would predict—namely, when too much of a belief
In particular, for all that is known, we may well fail under precisely those circumstances the classical view would predict—namely, when too much of a belief 0001332545.INDD 76 8/10/2011 4:17:42 PM massive modularity 77 system needs to be consulted in order to compute the simplicity or conservativism of a given belief. 3.4. Modularity and Tractability Even if intractability arguments for MM are not decisive, it is important to stress that modularity does provide a number of resources for addressing tractability problems. First, where a mechanism is functionally specialized or domain-specific, it becomes possible to utilize a potent design strategy for reducing computational load: namely, to build into the mechanism substantial amounts of information about the problems that is it supposed to address. This might be done in a variety of ways. It might be only implicit in the organization of the mechanism, or it might be explicitly represented; it might take the form of rules or procedures or bodies of propositional knowledge and so on. But however this information gets encoded, the key point is that a domain-specific mechanism can be informationally rich and, as a result, capable of rapidly and efficiently deploying those strategies and options most relevant to the domain in which it operates. Such mechanisms thereby avoid the need for computationally expensive search-and-assessment procedures that might plague a more general-purpose device. For this reason, domain specificity has seemed to many a plausible candidate for reducing the threat of combinatorial explosion without compromising the reliability of cognitive mechanisms (Sperber 1994; Tooby and Cosmides 1992). Second, encapsulation can help reduce computational load in two ways. First, because the device only has access to a highly restricted database or memory, the costs incurred by memory search are considerably reduced since there just is not that much stuff over which the search can be performed. Second, by reducing the range of accessible items of information, there is a concomitant reduction in the number of relations between items—paradigmatically, relations of confirmation and relevance—that can be computed. Yet one might reasonably wonder what all the fuss is about. After all, computer scientists have generated a huge array of methods—literally hundreds of different search and approximation techniques—for reducing computational overheads (Russell and Norvig 2003). What makes encapsulation of particular interest? Here is where the deeper explanation comes into play. Most of the methods that have been developed for reducing computational load require that the implementing mechanisms treat the assessment of relevance as a computational problem. Roughly, they need to implement computational procedures that select from the available information some subset that is estimated to be relevant. In contrast, encapsulation is supposed to obviate the need for such computational solutions. According to this view, an encapsulated device (at least paradigmatically) only has access to a very small amount of information. As a consequence, it can perform a (near) exhaustive search on whatever information it can access, thereby avoiding the need to assess relevance. There is a sense, then, in which highly encapsulated devices avoid the relevance problem altogether (Fodor 2000). 0001332545.INDD 77 8/10/2011 4:17:42 PM 78 the oxford handbook of philosophy of cognitive science 4. 
4. Problems of Cognitive-Behavioral Flexibility

So far I have considered some prominent arguments for MM and found them wanting. I now consider a family of challenges for massive modularity that concern the apparent flexibility of human behavior and cognition. Section 4.1 spells out three sorts of representational flexibility that are alleged to pose a problem for MM, at least in its more radical forms. Next, Section 4.2 highlights some closely related problems that behavioral flexibility poses for MM. Finally, Section 4.3 reviews briefly some possible responses to these problems.

4.1. Representational Flexibility

Perhaps the most commonly posed flexibility worries for MM concern various kinds of representational plasticity that human thought appears to exhibit, but that are not readily accommodated within an MM framework.

Representational Integration. A first kind of flexibility concerns our capacity to freely combine conceptual representations across different subject matters or content domains. That is, we exhibit what Carruthers (2006) calls content flexibility. So, for example, it is not merely that we can think about colors, about numbers, about shapes, about food, and so on. Rather, we can have thoughts that concern all these things at once—for example, that we had two roughly round red steaks for lunch. But if this is so, then the natural explanation of this capacity is that there are cognitive mechanisms that are able to combine representations from different cognitive domains (Fodor 1983, 102). In this case, it would seem that there must be at least some domain-general cognitive mechanisms.

Content-General Consumption. Not only can we freely combine concepts, we can also use the resulting representations in theoretical and practical inference, both to assess their truth or plausibility and to assess their impact on our plans and projects (Fodor 1983; Carruthers 2006). But if this is so, then there must be mechanisms that can utilize such complex, novel representations. And the obvious explanation for this capacity is that we possess domain-general cognitive mechanisms—for example, for planning and belief revision—that can take representations as input more or less irrespective of their content.

Inferential Holism. A third kind of representational flexibility concerns the range of information that we can bring to bear on solving a given problem. As noted in Section 3, human reasoning appears to exhibit a kind of holism or isotropy (Fodor 1983). Given surrounding conditions—especially background beliefs—the relevance of a belief to the theoretical or practical tasks in which one engages can change dramatically. Indeed, it would seem that given appropriate background assumptions, almost any belief can be rendered relevant to the task in which one engages (Copeland 1993). But if this is so, then the obvious explanation is, as Fodor noted long ago, that we possess central systems that are unencapsulated to an interesting degree.

What do these considerations show? First, they clearly do not show that there are no modular central systems. This is because even if the explanation of representational flexibility requires the existence of some non-modular central systems, this would be wholly consistent with the existence of other central systems that are modular in character.
In other words, the above considerations are wholly compatible with what I earlier called weak MM: the thesis that central cognition depends on both modular mechanisms and domain-general, unencapsulated ones. Second, the above considerations are also compatible with the sort of compositional MM formulated using the minimal notion of a module. This is because such a thesis does not require what the above considerations render implausible—that all modules are domain-specific and/or encapsulated—and this is simply because, in the minimal sense of modularity, domain-general, unencapsulated mechanisms still count as modules. So, we should be careful not to interpret the present considerations as undermining all versions of MM.

Nevertheless, taken together, the above kinds of representational plasticity do provide prima facie reason to suppose that there are cognitive mechanisms that are domain-general and unencapsulated. This is because the assumption that there are such mechanisms yields the simplest and most natural explanation of the kinds of flexibility outlined above. To that extent, then, the existence of representational flexibility renders strong MM implausible. Advocates of strong MM have sought to provide accounts of the above kinds of flexibility—accounts that eschew any commitment to the sorts of non-modular mechanisms posited by Fodor and others. If such proposals could be made to work, then the argument from representational flexibility would be significantly weakened. In Section 4.3 I briefly review some of these modularist proposals. But first we need to consider a closely related kind of flexibility problem that an adequate version of MM must address.

4.2. Behavioral Flexibility and Flow of Control

The worries considered so far concern the apparent flexibility of our representational capacities. But there is another, very closely related kind of worry that concerns a striking fact about the character and range of our cognitive-behavioral repertoire. To a first approximation:

Flexibility Thesis: We are capable of performing an exceedingly wide—perhaps unbounded—range of tasks in a context-appropriate fashion.

According to some critics, the worry about MM, at least in radical form, is that it lacks the resources to account for this kind of flexibility.

Some comments are in order. First, though there are many issues of detail regarding precisely how best to formulate the Flexibility Thesis, the general idea enjoys very widespread acceptance. Indeed, it has a heritage that goes back at least as far as Descartes; it is widely endorsed by cognitive scientists (Newell 1990; Anderson and Lebiere 2003); and it has seemed irresistible to those who study either the anthropological record (Richerson and Boyd 2006) or the contrasts between human behavior and that of other primates (Whiten et al. 2003). Second, though the Flexibility Thesis is logically distinct from the sorts of representational flexibility mentioned in Section 4.1, it is important to stress that on many extant accounts of cognition the two are very intimately related. Specifically, one very common reason for invoking flexible, representation-rich processes is to explain the highly variable yet context-appropriate character of human behavior.
Crudely put: on one very common view of cognition—one that many modularists endorse—human behavior is flexible in large measure because it causally depends on flexible representational processes (Newell 1990; Pylyshyn 1984). Third, the fact of behavioral flexibility is, of course, not merely an explanatory challenge for modular theories of cognitive architecture, but a serious explanatory challenge for any account of cognition (Newell 1990). Indeed, it is arguably just the problem of explaining intelligent behavior. Nevertheless, some critics maintain that behavioral flexibility poses quite specific and serious challenges for advocates of MM because their position appears to preclude the sorts of explanations that most plausibly account for the character and range of human behavior: namely, those that posit domain-general, functionally nonspecific mechanisms.

One central virtue of domain-general, functionally nonspecific mechanisms is that they can underwrite the performance of a great many tasks. They are, in Descartes's memorable phrase, "universal instruments." Advocates of a thoroughgoing MM cannot, of course, avail themselves of such mechanisms. But neither can they plausibly suppose that we possess a specific module for each task we can perform. As Descartes observed, the range of tasks that we can perform is simply too great for such a proposal to be at all plausible.17

17 As Fodor once pointed out, sometimes we manage to balance our checkbooks, but it is not at all likely that there is a modular device for doing that!

How, then, can advocates of MM explain the range of tasks that we are capable of performing? It would seem that there is only one available option. Advocates of MM are committed to providing what might be called a confederate account of cognitive flexibility: one on which flexible behavior is, as Pinker puts it, the product of "a network of subsystems that feed each other in criss-crossing but intelligible ways" (Pinker 2005; see also Pinker 1994 and James 1890). But merely pointing this out is not, of course, an explanation of our cognitive-behavioral flexibility so much as a statement of the problem given a commitment to MM. The challenge for advocates of MM is to sketch the right sort of plurality and "criss-crossing" between mechanisms, and this would require an account that addresses at least the following problems.

First, on the assumption that behavioral flexibility causally depends on flexible, representation-rich processes, such an account would need to handle the sorts of flexibility mentioned in Section 4.1, that is:

Integration Problem: Advocates of MM need to explain how novel, cross-domain representations can be produced.

Consumption Problem: Advocates of MM need to explain how novel, cross-domain representations could be utilized in reasoning, decision making, and other cognitive processes.

Holism Problem: Advocates of MM need to explain how some inferential processes could exhibit their characteristic holism.

To avoid positing non-modular mechanisms, a thoroughgoing MM would need to explain such phenomena as a product of the collaborative activity of multiple modules. Second, because MM is committed to a confederate account of behavioral flexibility, advocates of MM also need to address a problem about the flow of control that is often ignored in discussions of MM.
If solutions to the problems we confront frequently depend on the collaborative interaction of a host of modules, there needs to be some account of how the right module "gains control" of the process at the right time. This is because, on such a model, a correct or appropriate outcome will occur only if an appropriate module is activated at the right point in the process. So, advocates of MM need to address what might be called the allocation problem:

Allocation Problem: Advocates of MM need to characterize the control structures that ensure that representations are allocated to the relevant modules at the right time.

Issues about flow of control are commonplace in computer science, and there are many ways to organize a computational system in order to address such issues. In the case of thoroughgoing versions of MM, however, the allocation problem has seemed especially pressing because it has proven hard to think of plausible control structures that could enable cognitive-behavioral flexibility without compromising the assumption that our minds contain only modular systems.

Though there are many variants of the allocation problem, it will be useful to start with one especially well-known version, discussed by Fodor (2000). What Fodor purports to show is that the allocation problem poses a kind of logical problem for strong versions of MM. Specifically, he argues that, on pain of regress, solving the allocation problem requires that there exist at least some domain-general mechanisms. In that case there is, according to Fodor, a sense in which the hypothesis of a completely modular architecture—one that eschews domain-general mechanisms entirely—is "self-defeating."

To appreciate Fodor's version of the allocation problem, we need to focus on the question of whether the mechanisms responsible for allocating representations to modules are themselves domain-specific. Fodor maintains that there are really only two options, which are represented schematically in Figure 1.

[Figure 1. Fodor's input problem.]

According to the first option, represented by Box 1, the allocation mechanism is relatively domain-general, in which case it is able to perform its allocating function because it can access both those representations that should be allocated to M1 and those that should be allocated to M2. (Think of someone passing apples to one friend and oranges to another: they need access to both apples and oranges to perform that task.) But the problem with this option is that the allocating mechanism (Box 1) is not itself domain-specific, and so (strong) MM is false. The second option, represented by Box 2 and Box 3 in the diagram, is that allocation mechanisms are no more domain-general than the modules to which they allocate representations. On this arrangement, the existence of allocation mechanisms does not violate MM by assuming the existence of non-modular devices. But according to Fodor, it is now unclear how the allocation mechanisms could, themselves, have been allocated the relevant representations. If we suppose that it was by another domain-specific allocator, then regress ensues; and if we suppose that the allocator is domain-general, then we once more violate the assumptions of strong MM. On the face of it, then, allocation poses a serious challenge for strong versions of MM.
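The dilemma can be made vivid in code. The toy sketch below renders the first horn only; every name and the digit-based classification rule are invented for illustration. It shows why the Box 1 option counts as domain-general: the routing function must accept, and make content-sensitive decisions about, representations from every domain.

```python
# A toy rendering of Fodor's dilemma; all names are illustrative.

MODULES = {
    "number": lambda rep: f"numerical analysis of {rep!r}",
    "face":   lambda rep: f"face analysis of {rep!r}",
}

def classify(representation: str) -> str:
    """Stand-in for the content-sensitive test the allocator must apply."""
    return "number" if any(ch.isdigit() for ch in representation) else "face"

def allocate(representation: str) -> str:
    """Option 1 (Box 1): to route anything at all, this mechanism must
    accept representations from *every* domain and decide where each
    belongs -- which is just to say that it is domain-general."""
    return MODULES[classify(representation)](representation)

print(allocate("3 + 4"))
print(allocate("a smiling stranger"))

# Option 2 (Boxes 2 and 3): give each module its own domain-specific
# allocator. But something must still hand representations to *those*
# allocators: either a further allocator (regress) or a domain-general
# mechanism (Option 1 again).
```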
4.3. Massively Modular Architectures for the Explanation of Cognitive-Behavioral Flexibility

What sort of massively modular architecture could address the various problems of flexibility and allocation outlined above? At this time, the issues remain largely open, and extant proposals are pitched at a very abstract—sometimes metaphorical—level. Nonetheless, I now propose to consider some of the suggestions that have been floated in recent years.

4.3.1. Weak Massive Modularity

One response, mentioned earlier, would be to acknowledge the need for at least some domain-general and/or unencapsulated mechanisms. Such positions are commonplace in recent cognitive science among theorists who are otherwise quite sympathetic to modular accounts of cognition. Thus, for example, Susan Carey and John Anderson both endorse versions of this weak MM position, and they do so in large measure because it helps handle the sorts of problems mentioned earlier (Anderson 2007; Carey 2009). Weak MM is a plausible position, for it has the resources to accommodate the empirical evidence for modularity while also allowing for aspects of cognition—various kinds of learning, analogical inference, planning, and so on—that do not seem modular in character. Nonetheless, for some advocates of MM, such a position may seem unattractive on broad theoretical grounds. As we saw in Sections 2 and 3, it is quite common to maintain that, for evolutionary and computational reasons, non-modular mechanisms are implausible, and consequently that some more thoroughgoing version of MM is required. For such theorists it would be implausible to suppose that non-modular devices have a major role to play in human cognition.

4.3.2. Pipeline Architectures

Suppose that one seeks a thoroughgoing—or strong—MM. How might one address the problems of representational flexibility and allocation? One possibility would be to advocate what are sometimes called pipeline architectures. The general idea is that the modules within such a system are organized in a lattice-like fashion so that their interconnections satisfy two conditions:

a) Unidirectional information flow: Once information enters a device in a given layer, n, it cannot subsequently enter another device in n, or a device in any layer prior to n. Rather, information is automatically routed to a device in some subsequent layer of the system.

b) Uniqueness: Information processed by one module can be routed to (at most) one other module.

On these assumptions, the overall system can be schematically represented as a set of parallel pipelines, each composed of a number of interconnected modular processing units. (See Figure 2.)

[Figure 2. Pipeline architectures.]

It is important to stress that no one has ever seriously defended pipeline architectures in the simple form presented here.18 Nonetheless, it will be instructive to consider them; a minimal sketch of the idea follows.

18 It is worth noting, however, that there are various influential proposals that come very close. In particular, the Subsumption Architecture advocated by Rodney Brooks and his collaborators bears striking similarities, and raises very similar problems. For further discussion see Barrett (2006), Brooks (1991), Kirsh (1991), and Hurley (2001).
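The following is a toy rendering of conditions (a) and (b), with invented placeholder modules; it is no one's proposed architecture. Each module performs its specialized operation and hands the result to at most one successor, strictly forward.

```python
# Minimal sketch of a pipeline architecture satisfying conditions (a) and
# (b) above. The modules and their operations are illustrative placeholders.
from typing import Callable, Optional

class Module:
    def __init__(self, operation: Callable[[str], str],
                 successor: Optional["Module"] = None):
        self.operation = operation    # the module's specialized processing
        self.successor = successor    # condition (b): at most one recipient

    def run(self, info: str) -> str:
        output = self.operation(info)
        # Condition (a): output moves strictly forward to the next layer;
        # it can never return to this layer or any earlier one.
        return self.successor.run(output) if self.successor else output

# One pipeline of three modules (placeholders for, say, early vision):
labeling = Module(lambda x: x + " -> labeled")
shapes   = Module(lambda x: x + " -> shapes", successor=labeling)
edges    = Module(lambda x: x + " -> edges", successor=shapes)

print(edges.run("retinal array"))
# Control is brute-causal: input to `edges` activates exactly this
# pipeline, and no module outside it ever sees the information.
```

The routing here is fixed by the successor links when the system is built, which is just what makes control brute-causal rather than computed.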
Simple pipeline architectures possess a number of properties relevant to our present discussion, which can help clarify the problems that flexibility poses for MM.

First, such architectures enforce a strong kind of MM because they ensure that different mechanisms have access to different, non-overlapping pools of information. Indeed, they ensure that modules satisfy exceedingly strong conditions on both domain-specificity and informational encapsulation.19 Modules within such a system will be domain-specific because they receive inputs from at most one other system. And since the information that any module receives is simply its input, each module will also be encapsulated. Thus pipeline architectures are both strongly modular and satisfy Fodorian conditions on modularity.

Second, there is a sense in which pipeline architectures evade the sorts of allocation worries discussed earlier. Since modules within such an architecture are triggered by their inputs and pass information uniquely and unidirectionally, the flow of control within a pipeline architecture is rigid and inflexible. For example, if the first module in a pipeline, P, is activated by a sensory input, then every subsequent module in P will also be activated, and modules that are not in P will not be activated by that sensory input. One way to put the point is that on such a view there is no computational problem of allocation; rather, allocation is brute-causal and hardwired.

Third, the previous observation is important for understanding Fodor's allocation problem. This is because it highlights that strong MM per se is not self-defeating—or at least not for the reasons that Fodor provides. Fodor's problem turns on the putative fact that regress ensues unless one posits domain-general allocation mechanisms. But within a pipeline architecture the regress of allocation is halted by the first module in the pipeline—we might suppose, a sensory mechanism of some sort. Thus, the dilemma that Fodor seeks to generate for MM—either a regress of allocation or domain-general allocators—never gets off the ground.

Fourth, it is important to see that pipeline architectures only succeed in resolving Fodor's puzzle at a serious cost. Specifically, the proposed solution implies that modules in different pipelines cannot interact—that many configurations of intermodular interaction are impossible—and this, in turn, imposes serious limitations on the sorts of flexibility that can be accommodated by such a confederate system. First, representations in different pipes cannot be freely combined, in which case a pipeline architecture will not solve the integration problem. Second, since pipeline architectures cannot combine representations from distinct domains, they cannot explain our apparent capacity to use cross-domain representations in our practical and theoretical inferences. In other words, they cannot offer a solution to the consumption problem for MM.20 Third, since pipeline architectures enforce a rigid distinction between informational pools, such a system cannot exhibit inferential holism.
Finally, since behavioral flexibility is supposed to depend on the above sorts of representational flexibility, pipeline architectures preclude the kinds of intermodular interaction that seem required to produce novel, flexible behavior. In short: though pipeline architectures evade the allocation problem that Fodor poses, they do so at the cost of completely failing to accommodate the sorts of flexibility that advocates of MM need to explain.

19 Of course, this assumes the (obvious) fact that no sensory mechanisms are domain-general.

20 Indeed, there is a sense in which such systems do not confront a consumption problem, since there are no cross-domain representations to be consumed.

Fifth, and finally, the above discussion suggests an interesting connection between the problems posed by allocation and those posed by cognitive flexibility. Recall: it is precisely because pipeline architectures enforce a rigid division between pipelines of modules that they both evade Fodor's allocation problem and fail to exhibit representational flexibility. What this suggests is that, within MM architectures, problems of allocation or control are closely related to the system's capacity to exhibit various kinds of representational flexibility. Roughly put, the more flexibility the system exhibits, the more serious we should expect allocation problems to be. More specifically, the above discussion suggests that Fodor's version of the allocation problem—the apparent need for domain-general control structures—only arises for modular systems that exhibit the appropriate kinds of representational flexibility; and the more flexibility exhibited, the greater the need for such control structures. I return to this issue below. But for now let us consider another possible approach to the problems of flexibility and allocation.

4.3.3. Enzymatic Computation

Recently, Clark Barrett has presented a proposal that is intended to address worries about allocation at the same time as it explains aspects of cognitive flexibility (Barrett 2005). Barrett's point of departure is Fodor's version of the allocation problem. As such, his proposal might be viewed merely as an attempt to resolve the kind of logical problem that, according to Fodor, allocation poses for massively modular architectures. Alternatively, it might be construed as trying to satisfy the stronger demand of providing an empirically plausible model of how a massively modular system might exhibit flexibility. What follows outlines Barrett's proposal and then considers these options in turn.

In developing his view, Barrett takes enzymatic systems in biochemistry as a model for how a modular mind might be organized. Broadly speaking, enzymatic systems possess two kinds of properties that make them appropriate as a model of cognitive modularity. The first class of properties consists of those that allow enzymes to function as specialized computational devices. Specifically:

a) Enzymes accept information of a particular kind, generally in the form of chemical substrates with particular properties that meet the binding specificity criteria of the enzyme in a "lock and key" fashion.

b) They perform specific operations on the information they admit, catalyzing reactions that produce reaction products with different properties than the input substrates.

c) The reaction products produced by enzymes are in a format usable by other systems, thereby allowing for complex cascades of activity.
A second class of properties possessed by enzymatic systems, which Barrett also thinks makes them appropriate as a model of cognitive modularity, concerns the environment in which interactions between enzymes and substrates occur. Specifically, such interactions occur in "open" systems (solutions) in which all substrates are accessible, in principle, to all enzymes. In such enzymatic systems, then, one has access generality—where all information (substrates) is available to all processing mechanisms (enzymes)—combined with processing specificity: each kind of enzyme performs only highly specific operations on a very specific range of substrates, namely those that satisfy its binding criteria. Importantly for Barrett's purposes, enzymatic systems achieve this combination of access generality and processing specificity without the need for a mechanism that delivers substrates to enzymes. Within such systems there is thus no rigid routing (à la the pipeline model) and no domain-general "meta" device for allocation (à la Fodor). According to Barrett, then, enzymatic systems provide both (1) an existence proof of naturally occurring modular systems that avoid the sorts of allocation problems Fodor raises, and (2) a model of how the flow of control within cognitive systems might occur.

What should we make of Barrett's proposal? First, let us consider it as a response to the putatively logical problem of allocation that Fodor poses for MM. Barrett is correct that flow of control within a MM cognitive system could, in principle, operate in the same way as enzymatic systems do. Specifically, it is possible to envisage a system composed of process-specific mechanisms operating in an access-general environment in such a way that specialized mechanisms gain access to relevant inputs without the need for non-modular allocation devices. As such, Barrett's proposal provides a way to resolve the logical problem Fodor seeks to generate for MM, and to that extent the proposal is successful.21

21 Though since the simpler pipeline model achieves the same result, it is unclear that this success is of any great significance.

How does Barrett's enzymatic proposal fare as an empirically plausible model of the mind's organization? Here I am rather less sanguine. Construed literally, the enzymatic model is deeply implausible; and construed as mere metaphor, it is utterly unclear that it can be cashed out without reintroducing precisely the sorts of domain-general control structures that it seeks to avoid.

In order to see why, on a literal construal, the enzymatic model is implausible, we need to get clearer about why real enzymatic systems do not require routing mechanisms or meta-control devices. The central problem of control within a modular cognitive architecture is that of enabling the right mechanism to access the right representations at the relevant time. There is an analogous problem for enzymes: in order to produce their products, they must bind the relevant substrates. How does that occur? Of course, there is no "routing system" that brings the relevant substrate to the right enzyme. Instead, enzymes interact with substrates via a process that crucially depends on chance collision. To a first approximation, dumb luck (random probability) is a central component of the story of how enzymes come to have their characteristic effects within an open system.
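This collision-dependent picture is easy to simulate. The sketch below is purely illustrative of the metaphor; its tags, pool sizes, and string-matching binding condition are inventions. But it exhibits access generality (any item can meet any module) and processing specificity (only matching items get processed) with no routing mechanism anywhere, and it previews the point about concentrations made in the next paragraph.

```python
# Toy simulation of "enzymatic" control flow: modules and representations
# collide at random in an open pool, and processing occurs only when a
# module's binding condition is met. All tags and numbers are invented.
import random

def binds(module_tag: str, representation: str) -> bool:
    """'Lock and key': the module operates only on representations
    satisfying its input condition."""
    return representation.startswith(module_tag)

def productive_collisions(pool, module_tags, steps=10_000):
    """Count productive encounters under purely random collision."""
    hits = 0
    for _ in range(steps):
        rep = random.choice(pool)          # access generality: any item...
        mod = random.choice(module_tags)   # ...may meet any module
        if binds(mod, rep):                # processing specificity
            hits += 1
    return hits

dilute = ["face:x"] + [f"noise:{i}" for i in range(999)]
dense  = ["face:x"] * 500 + [f"noise:{i}" for i in range(500)]
print(productive_collisions(dilute, ["face"]))  # rare productive encounters
print(productive_collisions(dense, ["face"]))   # ~half the collisions succeed
```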
Where solution conditions are fixed, the rate at which enzymatic processes occur is a function of enzyme and substrate concentrations. Where either concentration is high, catalytic reactions occur rapidly, because there is a higher probability that individual enzymes will encounter substrates that satisfy their binding conditions. By contrast, where concentrations are low, rates of reaction are low, because compatible enzymes and substrates rarely encounter each other. And, of course, where there is just one instance of an enzyme and one instance of an appropriate substrate in a sea of other substrates, the probability of a relevant collision at any particular point in time is exceedingly low.

The moral for discussions of cognitive architecture should be clear. Enzymatic systems do possess properties that are closely analogous to those possessed by putative cognitive modules. Moreover, enzymes are able to do their job—convert substrates into products—in the absence of any routing or meta-device for allocation. Thus the possibility of enzymatic computation shows, contrary to Fodor, that there is no logical problem of allocation within a massively modular system. But much more would need to be said in order to render the enzyme metaphor plausible as an empirical model of control flow. This is because, when applied in a literal fashion to cognitive systems, the proposal yields a conception of control flow on which appropriate cognitive processing—as opposed to lots of fruitless, failed interactions between modules and representations—is simply the product of random interactions in an environment that contains high concentrations of modules of the same type and high concentrations of representations of the same type. And this is not even remotely plausible as a story for how cognition works.22

22 Though there is not enough space to consider the issue in detail here, one possible way to respond to the present worry would be to hypothesize that (1) the informational repository is open but highly structured, and (2) modules tend to be located in close proximity to those areas of the repository that contain information satisfying their input conditions.

Of course, Barrett is well aware that, construed literally, enzymatic systems are not a plausible model of how modular cognitive systems interact. Indeed, it is a point he stresses repeatedly. As a consequence, he instead treats talk of enzymatic systems as a metaphor in need of further development. But now the problem is that it is far from clear how this can be done while still preserving the idea that the human mind has a MM architecture that manifests flexibility and yet avoids the need for domain-general allocation devices. In developing his view, Barrett tends to draw on examples of computational models that bear an abstract resemblance to enzymatic systems. In particular, he is fond of developing the enzymatic metaphor with reference to the sorts of blackboard architectures that exhibit a kind of open access combined with processing specificity (Hayes-Roth 1985). But the problem with this is that computationally well-specified blackboard architectures—as opposed to breezy descriptions of the rough idea—incorporate precisely the sort of domain-general allocation mechanism that Barrett seems so keen to avoid. Specifically, though such architectures comprise an open, shared repository of information (the blackboard) and multiple (software) modules—often called "knowledge sources"—they also have a control shell as a core component. What the control shell does is use generic control knowledge to make runtime decisions about the course of problem solving. In other words, it is an allocation device that determines which of the available modules gets to perform its computations, at a given time, on information in the blackboard (Corkill 2003).
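The shape of such a system, and the location of the difficulty, can be seen in a bare-bones skeleton. This is only a schematic sketch in the spirit of the blackboard literature; every identifier is invented, and real systems are far richer. Even at this level of reduction, however, the domain-general control step is unavoidable.

```python
# Bare-bones skeleton of a blackboard system in the spirit of Hayes-Roth
# (1985) and Corkill (2003); all identifiers are illustrative.

class KnowledgeSource:
    """A specialized (software) module with a trigger condition."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition  # when this source *could* contribute
        self.action = action        # what it would write to the blackboard

def generic_priority(source, blackboard):
    """Stand-in for the shell's generic control knowledge."""
    return 1.0

def control_shell(blackboard, sources):
    """The crux for strong MM: to pick the next knowledge source, the
    shell must survey the entire blackboard and every candidate source --
    a runtime, domain-general allocation decision."""
    candidates = [s for s in sources if s.condition(blackboard)]
    if not candidates:
        return False
    best = max(candidates, key=lambda s: generic_priority(s, blackboard))
    best.action(blackboard)
    return True

board = {"input": "3 + 4", "entries": []}
adder = KnowledgeSource(
    "adder",
    condition=lambda b: "+" in b["input"] and not b["entries"],
    action=lambda b: b["entries"].append("sum: 7"),
)
control_shell(board, [adder])
print(board)   # {'input': '3 + 4', 'entries': ['sum: 7']}
```

Notice that the knowledge sources themselves can be as specialized as one likes; it is the shell's survey-and-select step that reintroduces domain generality.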
But this is precisely the kind of domain-general control device that is so conducive to Fodor's presentation of the allocation problem, and so at odds with the spirit of the enzymatic model. In short, if the blackboard architecture is what one gets when the enzymatic metaphor is spelled out, then strong MM must be rejected.

5. Conclusion

This chapter sought first to clarify MM and to distinguish a range of importantly different versions of the thesis. Second, it critically assessed some of the more prominent arguments for MM—arguments that purport to show that it is plausible either on evolutionary grounds or on grounds of computational tractability. Finally, it introduced some of the problems that cognitive-behavioral flexibility appears to pose for MM, at least in its strongest forms.

The foregoing discussion of flexibility clearly does not preclude the possibility of an empirically plausible strong MM. Among other things, there are other important modularist proposals—such as those of Sperber (2005) and Carruthers (2006)—that have not been discussed here, and a comprehensive assessment would require due consideration of them. But neither was the discussion intended to ground such a strong conclusion. Rather, the goals were threefold. The first was to flag some of the kinds of flexibility that appear to pose problems for MM. The second was to highlight that, in the absence of any well-specified and plausible modularist account of flexibility, positing non-modular devices appears to yield the most natural and plausible account of it. The final goal was to suggest that, in going beyond mere metaphor and vague suggestion, extant modularist proposals that seek to accommodate cognitive flexibility appear to risk reintroducing precisely the sorts of domain-general, non-modular mechanisms that they seek to banish. For all that has been said, this might reflect merely contingent features of extant proposals. But another, rather more intriguing, possibility is that there are deep and systematic connections between manifesting human levels of cognitive-behavioral flexibility and the need for domain-general mechanisms. Though this is not the place to spell out the argument, I suspect that this is what is really going on.

REFERENCES

Anderson, J. R. (2007). How Can the Human Mind Occur in the Physical Universe? New York: Oxford University Press.
Anderson, J. R., and Lebiere, C. L. (2003). The Newell test for a theory of cognition. Behavioral and Brain Sciences 26: 587–637.
Baron-Cohen, S. (1995). Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Barrett, H. C. (2005). Enzymatic computation and cognitive modularity. Mind & Language 20: 259–87.
Barrett, H. C., and Kurzban, R. (2006). Modularity in cognition: Framing the debate. Psychological Review 113: 628–47.
Brooks, R. A. (1991). Intelligence without reason.
Proceedings of the 12th International Joint Conference on Artificial Intelligence, 569–95.
Buller, D. (2005). Adapting Minds. Cambridge, MA: MIT Press.
Carey, S. (2009). The Origin of Concepts. New York: Oxford University Press.
Carruthers, P. (2003). On Fodor's problem. Mind & Language 18(5): 502–23.
———. (2004). Practical reasoning in a modular mind. Mind & Language 19: 259–78.
———. (2006). The Architecture of the Mind: Massive Modularity and the Flexibility of Thought. Oxford: Oxford University Press.
Carston, R. (1996). The Architecture of the Mind: Modularity and Modularization. In D. Green (ed.), Cognitive Science: An Introduction. Oxford: Blackwell.
Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.
Chiappe, D. (2000). Metaphor, modularity, and the evolution of conceptual integration. Metaphor and Symbol 15: 137–58.
Chiappe, D., and MacDonald, K. B. (2005). The evolution of domain-general mechanisms in intelligence and learning. Journal of General Psychology 132: 5–40.
Collins, J. (2005). On the input problem for massive modularity. Minds and Machines 15(1): 1–22.
Cooper, R. P., and Shallice, T. (2006). Hierarchical schemas and goals in the control of sequential behavior. Psychological Review 113: 887–931.
Copeland, J. (1993). Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell.
Corkill, D. (2003). Collaborating Software: Blackboard and Multi-Agent Systems & the Future. In Proceedings of the International Lisp Conference, New York, October 2003.
Cosmides, L., and Tooby, J. (1994). Origins of domain specificity: The evolution of functional organization. In L. Hirschfeld and S. Gelman (eds.), Mapping the Mind: Domain Specificity in Cognition and Culture, 85–116. Cambridge: Cambridge University Press.
Cosmides, L., and Tooby, J. (2000). The Cognitive Neuroscience of Social Reasoning. In M. S. Gazzaniga (ed.), The New Cognitive Neurosciences, 2nd ed. Cambridge, MA: MIT Press.
Dennett, D. C. (1987). The Intentional Stance. Cambridge, MA: MIT Press.
Fodor, J. (1983). The Modularity of Mind. Cambridge, MA: MIT Press.
———. (2000). The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. Cambridge, MA: MIT Press.
———. (2005). Reply to Steven Pinker "So How Does the Mind Work?" Mind & Language 20(1): 25–32.
———. (2008). LOT 2: The Language of Thought Revisited. Oxford: Oxford University Press.
Ford, K. M., and Pylyshyn, Z. W. (eds.) (1996). The Robot's Dilemma Revisited: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex.
Garfield, J. (ed.) (1987). Modularity in Knowledge Representation and Natural-Language Understanding. Cambridge, MA: MIT Press.
Gigerenzer, G., Todd, P. M., and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. New York: Oxford University Press.
Glymour, C. (1985). Comment: Fodor's holism. Behavioral and Brain Sciences 8: 15–16.
———. (1987). Android epistemology: Comments on "Cognitive wheels." In Z. W. Pylyshyn (ed.), The Robot's Dilemma. Norwood, NJ: Ablex, 65–76.
Gopnik, A., and Meltzoff, A. (1997). Words, Thoughts and Theories. Cambridge, MA: MIT Press.
Guha, R. V., and Levy, A. (1990). A Relevance Based Meta Level. MCC Technical Report No. ACT-CYC-040-90. Austin, TX: MCC Corp.
Hayes, P. (1987). What the Frame Problem Is and Isn't. In Z. Pylyshyn (ed.), The Robot's Dilemma. Norwood, NJ: Ablex.
Hayes-Roth, B. (1985). A blackboard architecture for control.
Artificial Intelligence 26: 251–321.
Horgan, T., and Tienson, J. (1996). Connectionism and the Philosophy of Psychology. Cambridge, MA: MIT Press.
Hurley, S. (2001). Perception and action: Alternative views. Synthese 129(1): 3–40.
Jackendoff, R. (1992). Is There a Faculty of Social Cognition? In R. Jackendoff, Languages of the Mind. Cambridge, MA: MIT Press, 69–81.
———. (1992). Languages of the Mind. Cambridge, MA: MIT Press.
James, W. (1890). The Principles of Psychology, vol. 1. Cambridge, MA: Harvard University Press.
Kirsh, D. (1991). Today the earwig, tomorrow man? Artificial Intelligence 47: 161–84. Reprinted in M. Boden (ed.), The Philosophy of Artificial Life. New York: Oxford University Press, 1996.
Lormand, E. (1994). The Holorobophobe's Dilemma. In K. Ford and Z. Pylyshyn (eds.), The Robot's Dilemma Revisited. Norwood, NJ: Ablex.
Ludwig, K., and Schneider, S. (2007). Fodor's challenge to the classical computational theory of mind. Mind & Language 23(1): 123–43.
Lyons, J. C. (2001). Carving the mind at its (not necessarily modular) joints. British Journal for the Philosophy of Science 52(2): 277–302.
Machery, E., and Barrett, H. C. (2006). Debunking Adapting Minds. Philosophy of Science 73: 232–46.
Miller, G., Galanter, E., and Pribram, K. (1960). Plans and the Structure of Behavior. New York: Henry Holt.
Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.
Newell, A., and Simon, H. A. (1972). Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
Pinker, S. (1994). The Language Instinct. New York: William Morrow.
———. (1997). How the Mind Works. New York: W. W. Norton.
———. (2005). So how does the mind work? Mind & Language 20(1): 1–24.
Pohl, R. (2005). Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgement and Memory. Hove, UK: Psychology Press.
Prinz, J. J. (2006). Is the Mind Really Modular? In R. Stainton (ed.), Contemporary Debates in Cognitive Science. Oxford: Blackwell, 22–36.
Pylyshyn, Z. W. (1984). Computation and Cognition. Cambridge, MA: MIT/Bradford.
———. (ed.) (1987). The Robot's Dilemma: The Frame Problem in Artificial Intelligence (Theoretical Issues in Cognitive Science). Norwood, NJ: Ablex.
Richerson, P., and Boyd, R. (2006). Not by Genes Alone: How Culture Transformed Human Evolution. Chicago: University of Chicago Press.
Russell, S., and Norvig, P. (2003). Artificial Intelligence: A Modern Approach. 2nd ed. Upper Saddle River, NJ: Prentice Hall.
Samuels, R. (1998). Evolutionary psychology and the massive modularity hypothesis. British Journal for the Philosophy of Science 49: 575–602.
———. (2000). Massively Modular Minds: Evolutionary Psychology and Cognitive Architecture. In P. Carruthers and A. Chamberlain (eds.), Evolution and the Human Mind. Cambridge: Cambridge University Press, 13–46.
———. (2005). The Complexity of Cognition: Tractability Arguments for Massive Modularity. In P. Carruthers, S. Laurence, and S. Stich (eds.), The Innate Mind: Structure and Contents. Oxford: Oxford University Press.
———. (2006). Is the Mind Massively Modular? In R. Stainton (ed.), Contemporary Debates in Cognitive Science. Oxford: Blackwell, 37–56.
———. (2010). Classical computationalism and the many problems of cognitive relevance. Studies in History and Philosophy of Science Part A 41(3): 280–93.
Segal, G. (1996). The Modularity of Theory of Mind. In P. Carruthers and P. Smith (eds.), Theories of Theory of Mind. Cambridge: Cambridge University Press.
Shanahan, M. P. (1997). Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia. Cambridge, MA: MIT Press.
Simon, H. (1962). The architecture of complexity. Proceedings of the American Philosophical Society 106: 467–82.
Sperber, D. (1994). The Modularity of Thought and the Epidemiology of Representations. In L. A. Hirschfeld and S. A. Gelman (eds.), Mapping the Mind. Cambridge: Cambridge University Press, 39–67.
———. (2002). In Defense of Massive Modularity. In I. Dupoux (ed.), Language, Brain, and Cognitive Development. Cambridge, MA: MIT Press, 47–57.
———. (2005). Modularity and Relevance: How Can a Massively Modular Mind Be Flexible and Context-Sensitive? In P. Carruthers, S. Laurence, and S. Stich (eds.), The Innate Mind: Structure and Content. Oxford: Oxford University Press.
Sperber, D., and Wilson, D. (1996). Fodor's frame problem and relevance theory. Behavioral and Brain Sciences 19(3): 530–32.
Sperber, D., and Hirschfeld, L. (2007). Culture and Modularity. In P. Carruthers, S. Laurence, and S. Stich (eds.), The Innate Mind: Culture and Cognition. Oxford: Oxford University Press, 149–64.
Stanovich, K. (2004). The Robot's Rebellion: Finding Meaning in the Age of Darwin. Chicago: University of Chicago Press.
Tooby, J., and Cosmides, L. (1992). The Psychological Foundations of Culture. In J. Barkow, L. Cosmides, and J. Tooby (eds.), The Adapted Mind. New York: Oxford University Press.
Tooby, J., and Cosmides, L. (1995). Foreword. In S. Baron-Cohen, Mindblindness: An Essay on Autism and Theory of Mind. Cambridge, MA: MIT Press.
Traverso, P., Ghallab, M., and Nau, D. (2004). Automated Planning: Theory & Practice. San Francisco: Elsevier.
Whiten, A., Horner, V., and Marshall-Pescini, S. (2003). Cultural panthropology. Evolutionary Anthropology 12: 92–105.