The Language of Thought program has a suicidal edge. Jerry Fodor, of all people, has argued that although LOT will likely succeed in explaining modular processes, it will fail to explain the central system, a subsystem in the brain in which information from the different sense modalities is integrated, conscious deliberation occurs, and behavior is planned. A fundamental characteristic of the central system is that it is “informationally unencapsulated” -- its operations can draw on information from any cognitive domain. The domain-general nature of the central system is key to human reasoning; our ability to connect apparently unrelated concepts enables the creativity and flexibility of human thought, as does our ability to integrate material across sensory divides. The central system is the holy grail of cognitive science: understanding higher cognitive function is crucial to grasping how humans reach their highest intellectual achievements. But according to Fodor, the founding father of the LOT program and the related Computational Theory of Mind (CTM), the holy grail is out of reach: the central system is likely to be non-computational (Fodor 1983, 2000, 2008). Cognitive scientists working on higher cognitive function should abandon their efforts; research should be limited to the modules, which for Fodor rest at the sensory periphery (2000). Cognitive scientists who work in the symbol-processing tradition outside of philosophy would reject this pessimism, but ironically, within philosophy itself, this pessimistic streak has been very influential, most likely because it comes from the best-known proponent of LOT and CTM. Indeed, pessimism about centrality has become assimilated into the mainstream conception of LOT. (Herein, I refer to a LOT that appeals to pessimism about centrality as the “standard LOT.”) I imagine this makes the standard LOT unattractive to those philosophers with a more optimistic view of what cognitive science can achieve.
This paper provides a theory of the nature of symbols in the language of thought (LOT). My discussion consists of three parts. In part one, I provide three arguments for the individuation of primitive symbols in terms of total computational role. The first of these arguments claims that Classicism requires that primitive symbols be typed in this manner; no other theory of typing will suffice. The second argument contends that without this manner of symbol individuation, there will be computational processes that fail to supervene on syntax, together with the rules of composition and the computational algorithms. The third argument says that cognitive science needs a natural kind that is typed by total computational role; otherwise, either cognitive science will be incomplete, or its laws will have counterexamples. Part two then defends this view from a criticism, offered by both Jerry Fodor and Jesse Prinz, who charge that because the types themselves are individuated …
According to the language of thought (LOT) approach and the related computational theory of mind (CTM), thinking is the processing of symbols in an inner mental language that is distinct from any public language. Herein, I explore a deep problem at the heart of the LOT/CTM program—it has yet to provide a plausible conception of a mental symbol.
Suppose it is 2025 and, being a technophile, you purchase brain enhancements as they become readily available. First you add a mobile internet connection to your retina; then you enhance your working memory by adding neural circuitry. You are now officially a cyborg. Now skip ahead to 2040. Through nanotechnological therapies and enhancements you are able to extend your lifespan, and as the years progress, you continue to accumulate more far-reaching enhancements. By 2060, after several small but cumulatively profound alterations, you are a “posthuman.” To quote philosopher Nick Bostrom, posthumans are possible future beings “whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards” (Bostrom 2003c). At this point, your intelligence is enhanced not just in terms of speed of mental processing; you are now able to make rich connections that you were not able to make before. Unenhanced humans, or “naturals,” seem to you to be intellectually disabled—you have little in common with them—but as a transhumanist, you are supportive of their right not to enhance (Bostrom 2003c; Garreau 2005; Kurzweil 2005).
The core of the language of thought program is the claim that thinking is the manipulation of symbols according to rules. Yet LOT has said little about symbol natures, and existing accounts are highly controversial. This is a major flaw at the heart of the LOT program: LOT requires an account of symbol natures to naturalize intentionality, to determine whether the brain even engages in symbol manipulations, and to understand how symbols relate to lower-level neurocomputational states. This paper provides the much-needed theory of symbols, and in doing so, alters the LOT program in significant respects.
In The Mind Doesn’t Work That Way, Jerry Fodor argues that mental representations have context-sensitive features relevant to cognition and that, therefore, the Classical Computational Theory of Mind (CTM) is mistaken. We call this the Globality Argument. It is an in-principle argument against CTM, and we argue that it is self-defeating. We then consider an alternative argument, constructed from materials in the discussion, which avoids the pitfalls of the official argument. We argue that it too is unsound, and that, while it is an empirical issue whether context-sensitive features of mental representations are relevant to cognition, it is empirically implausible that they are.
One of the most influential philosophical voices in the consciousness studies community is that of Daniel Dennett. Outside of consciousness studies, Dennett is well known for his work on numerous topics, such as intentionality, artificial intelligence, free will, evolutionary theory, and the basis of religious experience (Dennett, 1984, 1987, 1995c, 2005). In 1991, just as researchers and philosophers were beginning to turn more attention to the nature of consciousness, Dennett authored his Consciousness Explained. The book aimed to develop both a theory of consciousness and a powerful critique of the then-mainstream view of the nature of consciousness, which Dennett called “Cartesian materialism.”
Recently, proponents of Humean Supervenience have challenged the plausibility of the intuition that the laws of nature 'govern', or guide, the evolution of events in the universe. Certain influential thought experiments authored by John Carroll, Michael Tooley, and others rely strongly on such intuitions. These thought experiments are generally regarded as playing a central role in the lawhood debate, suggesting that the Mill-Ramsey-Lewis view of the laws of nature, and the related doctrine of the Humean Supervenience of laws, are false. In this paper, I take on these recent challenges, arguing that the intuition that the laws govern should be taken seriously. Still, I find the recent discussions insightful in certain ways. Employing some ideas from one of the critics (Barry Loewer), I draw some non-standard conclusions about the significance of the thought experiments for the lawhood debate.
The Mind Doesn’t Work That Way is an exposé of certain theoretical problems in cognitive science, in particular problems that concern the Classical Computational Theory of Mind (CTM). The problems that Fodor worries plague CTM divide into two kinds, and both purport to show that the success of cognitive science will likely be limited to the modules. The first sort of problem concerns what Fodor has called “global properties”: features that a mental sentence has which depend on how the sentence interacts with a larger plan (i.e., a set of sentences), rather than on the type identity of the sentence alone. The second problem concerns what many have called “the Relevance Problem”: the problem of whether and how humans determine what is relevant in a computational manner. However, I argue that the problem Fodor believes global properties pose for CTM is a non-problem, and further, that while the relevance problem is a serious research issue, it does not justify the grim view that cognitive science, and CTM in particular, will likely fail to explain cognition.
With fifty-five peer-reviewed chapters written by the leading authors in the field, The Blackwell Companion to Consciousness is the most extensive and comprehensive survey of the study of consciousness available today. It provides a variety of philosophical and scientific perspectives that create a breadth of understanding of the topic. Topics include the origins and extent of consciousness; different conscious experiences, such as meditation and drug-induced states; and the neuroscience of consciousness.
In this essay I defend a theory of psychological explanation that is based on the joint commitment to direct reference and computationalism. I offer a new solution to the problem of Frege Cases. Frege Cases involve agents who are unaware that certain expressions corefer (e.g., that 'Cicero' and 'Tully' corefer), where such knowledge is relevant to the success of their behavior, leading to cases in which the agents fail to behave as the intentional laws predict. It is generally agreed that Frege Cases are a major problem, if not the major problem, that this sort of theory faces. In this essay, I hope to show that the theory can surmount the Frege Cases.
Events all seem to have something in common, metaphysically speaking, and some philosophers have inquired into what this common nature is. The main aim of a theory of events is to propose and defend an identity condition on events; that is, a condition under which two events are identical. For example, if Brutus kills Caesar by stabbing him, are there two events, the stabbing and the killing, or only one event? Each of the leading theories of events is surveyed in this article. According to Jaegwon Kim, events are basically property instantiations. In contrast, Donald Davidson attempts to individuate events by their causes and effects. However, Davidson eventually rejects this view and, together with W.V.O. Quine, individuates events with respect to their location in spacetime. According to David Lewis, an event is a property of a spatiotemporal region.
Armstrong's combinatorialism, in his own words, is the following project: "My central metaphysical hypothesis is that all there is is the world of space and time. It is this world which is to supply the actual elements for the totality of combinations. So what is proposed is a Naturalistic form of a combinatorial theory." Armstrong calls his central hypothesis "Naturalism." He intends his well-known theory of universals to satisfy this thesis. He now attempts to give a naturalistic theory of modality.