Advocates of the computational theory of mind claim that the mind is a computer whose operations can be implemented by various computational systems. According to these philosophers, the mind is multiply realisable because—as they claim—thinking involves the manipulation of syntactically structured mental representations. Since syntactically structured representations can be made of different kinds of material while performing the same calculation, mental processes can also be implemented by different kinds of material. From this perspective, consciousness plays a minor role in mental activity. However, contemporary neuroscience provides experimental evidence suggesting that mental representations necessarily involve consciousness. Consciousness not only enables individuals to become aware of their own thoughts; it also constantly changes the causal properties of these thoughts. In light of these empirical studies, mental representations appear to be intrinsically dependent on consciousness. This discovery represents an obstacle to any attempt to construct an artificial mind.
We first discuss Michael Dummett’s philosophy of mathematics and Robert Brandom’s philosophy of language to demonstrate that inferentialism entails the falsity of Church’s Thesis and, as a consequence, the Computational Theory of Mind. This amounts to an entirely novel critique of mechanism in the philosophy of mind, one we show to have tremendous advantages over the traditional Lucas-Penrose argument.
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
In this paper I review some leading developments in the empirical theory of affect. I argue that (1) affect is a distinct, representation-governed perceptual system, and (2) that there are significant modular factors in affect. The paper concludes with the observation that feeler (the affective perceptual system) may be a natural kind within cognitive science. The main purpose of the paper is to explore some hitherto unappreciated connections between the theory of affect and the computational theory of mind.
Despite its significance in neuroscience and computation, McCulloch and Pitts's celebrated 1943 paper has received little historical and philosophical attention. In 1943 there already existed a lively community of biophysicists doing mathematical work on neural networks. What was novel in McCulloch and Pitts's paper was their use of logic and computation to understand neural, and thus mental, activity. McCulloch and Pitts's contributions included (i) a formalism whose refinement and generalization led to the notion of finite automata (an important formalism in computability theory), (ii) a technique that inspired the notion of logic design (a fundamental part of modern computer design), (iii) the first use of computation to address the mind–body problem, and (iv) the first modern computational theory of mind and brain.
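The McCulloch–Pitts formalism mentioned in (i) and (ii) is simple enough to sketch in code. The following is a minimal illustration of our own (the function names and encoding are ours, not the 1943 paper's): a binary threshold unit with absolute inhibition, wired up to compute elementary logic gates.

```python
def mcp_unit(inputs, weights, threshold, inhibitory=()):
    """Binary threshold unit in the style of McCulloch and Pitts (1943).

    Fires (returns 1) iff no inhibitory input is active and the weighted
    sum of the binary inputs meets the threshold.
    """
    if any(inputs[i] for i in inhibitory):  # absolute inhibition
        return 0
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# Elementary logic gates realized as threshold units:
AND = lambda a, b: mcp_unit([a, b], [1, 1], 2)
OR = lambda a, b: mcp_unit([a, b], [1, 1], 1)
NOT = lambda a: mcp_unit([a], [0], 0, inhibitory=(0,))
```

Networks of such units, suitably composed, are what the paper showed to be equivalent in power to a logic of propositions, the observation that later fed into finite-automata theory.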
Moods have global and profound effects on our thoughts, motivations and behavior. To understand human behavior and cognition fully, we must understand moods. In this paper I critically examine and reject the methodology of conventional ‘cognitive theories’ of affect. I lay the foundations of a new theory of moods that identifies them with processes of our cognitive functional architecture. Moods differ fundamentally from some of our other affective states and hence require distinct explanatory tools. The computational theory of mood I propose places them within the context of other mental phenomena and is consistent with the empirical data on moods.
In this paper, I explore the implications of Fodor’s attacks on the Computational Theory of Mind (CTM), which get their most recent airing in The Mind Doesn’t Work That Way. I argue that if Fodor is right that the CTM founders on the global nature of abductive inference, then several of the philosophical views about the mind that he has championed over the years founder as well. I focus on Fodor’s accounts of mental causation, psychological explanation, and intentionality.
Over the past several decades, the philosophical community has witnessed the emergence of an important new paradigm for understanding the mind. The paradigm is that of machine computation, and its influence has been felt not only in philosophy, but also in all of the empirical disciplines devoted to the study of cognition. Of the several strategies for applying the resources provided by computer and cognitive science to the philosophy of mind, the one that has gained the most attention from philosophers has been the Computational Theory of Mind (CTM). CTM was first articulated by Hilary Putnam (1960, 1961), but finds perhaps its most consistent and enduring advocate in Jerry Fodor (1975, 1980, 1981, 1987, 1990, 1994). It is this theory, and not any broader interpretations of what it would be for the mind to be a computer, that I wish to address in this paper. What I shall argue here is that the notion of symbolic representation employed by CTM is fundamentally unsuited to providing an explanation of the intentionality of mental states (a major goal of CTM), and that this result undercuts a second major goal of CTM, sometimes referred to as the vindication of intentional psychology. This line of argument is related to the discussions of derived intentionality by Searle (1980, 1983, 1984) and Sayre (1986, 1987). But whereas those discussions seem to be concerned with the causal dependence of familiar sorts of symbolic representation upon meaning-bestowing acts, my claim is rather that there is not one but several notions of meaning to be had, and that the notions that are applicable to symbols are conceptually dependent upon the notion that is applicable to mental states in the fashion that Aristotle referred to as paronymy.
That is, an analysis of the notions of meaning applicable to symbols reveals that they contain presuppositions about meaningful mental states, much as Aristotle's analysis of the sense of healthy that is applied to foods reveals that it means conducive to having a healthy body, and hence any attempt to explain mental semantics in terms of the semantics of symbols is doomed to circularity and regress. I shall argue, however, that this does not have the consequence that computationalism is bankrupt as a paradigm for cognitive science, as it is possible to reconstruct CTM in a fashion that avoids these difficulties and makes it a viable research framework for psychology, albeit at the cost of losing its claims to explain intentionality and to vindicate intentional psychology. I have argued elsewhere (Horst, 1996) that local special sciences such as psychology do not require vindication in the form of demonstrating their reducibility to more fundamental theories, and hence failure to make good on these philosophical promises need not compromise the broad range of work in empirical cognitive science motivated by the computer paradigm in ways that do not depend on these problematic treatments of symbols.
We discuss a research project that develops and applies algorithms for computational contextual vocabulary acquisition (CVA): learning the meaning of unknown words from context. We try to unify a disparate literature on the topic of CVA from psychology, first- and second-language acquisition, and reading science, in order to help develop these algorithms: We use the knowledge gained from the computational CVA system to build an educational curriculum for enhancing students’ abilities to use CVA strategies in their reading of science texts at the middle-school and college undergraduate levels. The knowledge gained from case studies of students using our CVA techniques feeds back into further development of our computational theory. Keywords: artificial intelligence, knowledge representation, reading, reasoning, science education, vocabulary acquisition.
In The Mind Doesn’t Work that Way, Jerry Fodor argues that mental representations have context-sensitive features relevant to cognition, and that, therefore, the Classical Computational Theory of Mind (CTM) is mistaken. We call this the Globality Argument. This is an in-principle argument against CTM. We argue that it is self-defeating. We consider an alternative argument constructed from materials in the discussion, which avoids the pitfalls of the official argument. We argue that it is also unsound and that, while it is an empirical issue whether context-sensitive features of mental representations are relevant to cognition, it is empirically implausible.
Over the past thirty years, it has been common to hear the mind likened to a digital computer. This essay is concerned with a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of “computer” to be developed), and that thought literally is a kind of computation. This view—which will be called the “Computational Theory of Mind” (CTM)—is thus to be distinguished from other and broader attempts to connect the mind with computation, including (a) various enterprises at modeling features of the mind using computational modeling techniques, and (b) employing some feature or features of production-model computers (such as the stored program concept, or the distinction between hardware and software) merely as a guiding metaphor for understanding some feature of the mind. This entry is therefore concerned solely with the Computational Theory of Mind (CTM) proposed by Hilary Putnam and developed most notably for philosophers by Jerry Fodor [1975, 1980, 1987, 1993]. The senses of ‘computer’ and ‘computation’ employed here are technical; the main tasks of this entry will therefore be to elucidate: (a) the technical sense of ‘computation’ that is at issue, (b) the ways in which it is claimed to be applicable to the mind, (c) the philosophical problems this understanding of the mind is claimed to solve, and (d) the major criticisms that have accrued to this view.
According to the computational theory of mind (CTM), to think is to compute. But what is meant by the word 'compute'? The generally given answer is this: Every case of computing is a case of manipulating symbols, but not vice versa - a manipulation of symbols must be driven exclusively by the formal properties of those symbols if it is to qualify as a computation. In this paper, I will present the following argument. Words like 'form' and 'formal' are ambiguous, as they can refer to form in either the syntactic or the morphological sense. CTM fails on each disambiguation, and the arguments for CTM immediately cease to be compelling once we register that ambiguity. The terms 'mechanical' and 'automatic' are comparably ambiguous. Once these ambiguities are exposed, it turns out that there is no possibility of mechanizing thought, even if we confine ourselves to domains (such as first-order sentential logic) where all problems can be settled through decision-procedures. The impossibility of mechanizing thought thus has nothing to do with recherché mathematical theorems, such as those proven by Gödel and Rosser. A related point is that CTM involves, and is guilty of reinforcing, a misunderstanding of the concept of an algorithm.
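To make the notion of a decision procedure concrete: validity in sentential logic can be settled mechanically by exhaustive truth tables. The following is our own illustrative sketch (the formula encoding and function names are ours, not the paper's), checking every valuation of the variables occurring in a formula.

```python
from itertools import product

# Formulas are nested tuples, e.g. ('or', ('var', 'p'), ('not', ('var', 'p'))).
def evaluate(f, valuation):
    """Compute the truth value of formula f under a valuation dict."""
    op = f[0]
    if op == 'var':
        return valuation[f[1]]
    if op == 'not':
        return not evaluate(f[1], valuation)
    if op == 'and':
        return evaluate(f[1], valuation) and evaluate(f[2], valuation)
    if op == 'or':
        return evaluate(f[1], valuation) or evaluate(f[2], valuation)
    if op == 'implies':
        return (not evaluate(f[1], valuation)) or evaluate(f[2], valuation)
    raise ValueError(f'unknown connective: {op}')

def variables(f):
    """Collect the set of variable names occurring in f."""
    if f[0] == 'var':
        return {f[1]}
    return set().union(*(variables(sub) for sub in f[1:]))

def is_valid(f):
    """Decide validity by brute-force enumeration of all truth valuations."""
    vs = sorted(variables(f))
    return all(evaluate(f, dict(zip(vs, row)))
               for row in product([False, True], repeat=len(vs)))
```

The procedure is effective in the textbook sense: a fixed, finite recipe that always terminates with a verdict. The paper's contention is precisely that executing such a recipe is not yet thinking.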
Contemporary philosophy and theoretical psychology are dominated by an acceptance of content-externalism: the view that the contents of one's mental states are constitutively, as opposed to causally, dependent on facts about the external world. In the present work, it is shown that content-externalism involves a failure to distinguish between semantics and pre-semantics---between, on the one hand, the literal meanings of expressions and, on the other hand, the information that one must exploit in order to ascertain their literal meanings. It is further shown that, given the falsity of content-externalism, the falsity of the Computational Theory of Mind (CTM) follows. It is also shown that CTM involves a misunderstanding of terms such as "computation," "syntax," "algorithm," and "formal truth." Novel analyses of the concepts expressed by these terms are put forth. These analyses yield clear, intuition-friendly, and extensionally correct answers to the questions "what are propositions?", "what is it for a proposition to be true?", and "what are the logical and psychological differences between conceptual (propositional) and non-conceptual (non-propositional) content?" Naively taking literal meaning to be in lockstep with cognitive content, Burge, Salmon, Falvey, and other semantic externalists have wrongly taken Kripke's correct semantic views to justify drastic and otherwise contraindicated revisions of commonsense. (Salmon: What is non-existent exists; at a given time, one can rationally accept a proposition and its negation. Burge: Somebody who is having a thought may be psychologically indistinguishable from somebody who is thinking nothing. Falvey: Somebody who rightly believes himself to be thinking about water is psychologically indistinguishable from somebody who wrongly thinks himself to be doing so and who, indeed, isn't thinking about anything.)
Given a few truisms concerning the differences between thought-borne and sentence-borne information, the data is easily modeled without conceding any legitimacy to any one of these rationality-dismantling atrocities. (It thus turns out, ironically, that no one has done more to undermine Kripke's correct semantic points than Kripke's own followers!)
In this comment on Joshua Greene's essay, The Secret Joke of Kant's Soul, I argue that a notable weakness of Greene's approach to moral psychology is its neglect of computational theory. A central problem moral cognition must solve is to recognize (i.e., compute representations of) the deontic status of human acts and omissions. How do people actually do this? What is the theory which explains their practice?
A standing challenge for the science of mind is to account for the datum that every mind faces in the most immediate – that is, unmediated – fashion: its phenomenal experience. The complementary tasks of explaining what it means for a system to give rise to experience and what constitutes the content of experience (qualia) in computational terms are particularly challenging, given the multiple realizability of computation. In this paper, we identify a set of conditions that a computational theory must satisfy for it to constitute not just a sufficient but a necessary, and therefore naturalistic and intrinsic, explanation of qualia. We show that a common assumption behind many neurocomputational theories of the mind, according to which mind states can be formalized solely in terms of instantaneous vectors of activities of representational units such as neurons, does not meet the requisite conditions, in part because it relies on inactive units to shape presently experienced qualia and implies a homogeneous representation space, which is devoid of intrinsic structure. We then sketch a naturalistic computational theory of qualia, which posits that experience is realized by dynamical activity-space trajectories (rather than points) and that its richness is measured by the representational capacity of the trajectory space in which it unfolds.
Based on the belief that computational modeling (thinking in terms of representation and computations) can help to clarify controversial issues in emotion theory, this article examines emotional experience from the perspective of the Computational Belief–Desire Theory of Emotion (CBDTE), a computational explication of the belief–desire theory of emotion. It is argued that CBDTE provides plausible answers to central explanatory challenges posed by emotional experience, including: the phenomenal quality, intensity and object-directedness of emotional experience, the function of emotional experience and its relation to cognition and motivation, and the relation between emotional experience and emotion. In addition, CBDTE avoids most objections that have been raised against cognitive theories of emotion. A remaining objection, that beliefs are not necessary for the emotions covered by CBDTE, is rejected as empirically unsupported.
The Language of Thought program has a suicidal edge. Jerry Fodor, of all people, has argued that although LOT will likely succeed in explaining modular processes, it will fail to explain the central system, a subsystem in the brain in which information from the different sense modalities is integrated, conscious deliberation occurs, and behavior is planned. A fundamental characteristic of the central system is that it is “informationally unencapsulated” -- its operations can draw from information from any cognitive domain. The domain-general nature of the central system is key to human reasoning; our ability to connect apparently unrelated concepts enables the creativity and flexibility of human thought, as does our ability to integrate material across sensory divides. The central system is the holy grail of cognitive science: understanding higher cognitive function is crucial to grasping how humans reach their highest intellectual achievements. But according to Fodor, the founding father of the LOT program and the related Computational Theory of Mind (CTM), the holy grail is out of reach: the central system is likely to be non-computational (Fodor 1983, 2000, 2008). Cognitive scientists working on higher cognitive function should abandon their efforts. Research should be limited to the modules, which for Fodor rest at the sensory periphery (2000). Cognitive scientists who work in the symbol processing tradition outside of philosophy would reject this pessimism, but ironically, within philosophy itself, this pessimistic streak has been very influential, most likely because it comes from the most well-known proponent of LOT and CTM. Indeed, pessimism about centrality has become assimilated into the mainstream conception of LOT. (Herein, I refer to a LOT that appeals to pessimism about centrality as the “standard LOT”).
I imagine this makes the standard LOT unattractive to those philosophers with a more optimistic approach to what cognitive science can achieve.
I articulate and defend a new theory of what it is for a physical system to implement an abstract computational model. According to my descriptivist theory, a physical system implements a computational model just in case the model accurately describes the system. Specifically, the system must reliably transit between computational states in accord with mechanical instructions encoded by the model. I contrast my theory with an influential approach to computational implementation espoused by Chalmers, Putnam, and others. I deploy my theory to illuminate the relation between computation and representation. I also rebut arguments, propounded by Putnam and Searle, that computational implementation is trivial.
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: References in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.
According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
In this paper I link two hitherto disconnected sets of results in the philosophy of emotions and explore their implications for the computational theory of mind. The argument of the paper is that, for just the same reasons that some computationalists have thought that cognition may be a natural kind, so the same can plausibly be argued of emotion. The core of the argument is that emotions are a representation-governed phenomenon and that the explanation of how they figure in behaviour must as such be undertaken in those terms. I conclude with some interdisciplinary reflections in defence of the hypothesis that emotions might be more fundamental in the organization of behaviour than cognition; that, in effect, we may be emoters before we are cognizers. The aim of the paper is: (1) to introduce a number of promising results in philosophical and empirical emotion theory to a wider audience; and (2) to begin the task of organizing those results into a computational theoretical framework.
According to Marr's theory of vision, computational processes of early vision rely for their success on certain "natural constraints" in the physical environment. I examine the implications of this feature of Marr's theory for the question whether psychological states supervene on neural states. It is reasonable to hold that Marr's theory is nonindividualistic in that, given the role of natural constraints, distinct computational theories of the same neural processes may be justified in different environments. But to avoid trivializing computational explanations, theories must respect methodological solipsism in the sense that within a theory there cannot be differences in content without a corresponding difference in neural states.
Since the cognitive revolution, it’s become commonplace that cognition involves both computation and information processing. Is this one claim or two? Is computation the same as information processing? The two terms are often used interchangeably, but this usage masks important differences. In this paper, we distinguish information processing from computation and examine some of their mutual relations, shedding light on the role each can play in a theory of cognition. We recommend that theorists of cognition be explicit and careful in choosing notions of computation and information and connecting them together. Much confusion can be avoided by doing so. Keywords: computation, information processing, computationalism, computational theory of mind, cognitivism.
David Marr's theory of vision has been widely cited by philosophers and psychologists. I have three projects in this paper. First, I try to offer a perspicuous characterization of Marr's theory. Next, I consider the implications of Marr's work for some currently popular philosophies of psychology, specifically, the "hegemony of neurophysiology view", the theories of Jerry Fodor, Daniel Dennett, and Stephen Stich, and the view that perception is permeated by belief. In the last section, I consider what the phenomenon of vision must be like for Marr's project to succeed.
Neurophysiological investigations of the visual system by way of single-cell recordings have revealed a hierarchical architecture in which lower level areas, such as the primary visual cortex, contain cells that respond to simple features, while higher level areas contain cells that respond to higher order features apparently composed of combinations of lower level features. This architecture seems to suggest a feed-forward processing strategy in which visual information progresses from lower to higher visual areas. However there is other evidence, both neurophysiological and phenomenal, that suggests a more parallel processing strategy in biological vision, in which top-down feedback plays a significant role. In fact Gestalt theory suggests that visual perception involves a process of emergence, i.e. a dynamic relaxation of multiple constraints throughout the system simultaneously, so that the final percept represents a stable state, or energy minimum of the dynamic system as a whole. A Multi-Level Reciprocal Feedback (MLRF) model is proposed to resolve the apparently contradictory concepts, by proposing a hierarchical visual architecture whose different levels are connected by bi-directional feed-forward and feedback pathways, where the computational transformation performed by the feedback pathway between levels in the hierarchy is a kind of inverse of the transformation performed by the corresponding feed-forward processing stream. This alternative paradigm of perceptual computation accounts in general terms for a number of visual illusory effects, and offers a computational specification for the generative, or constructive aspect of perceptual processing revealed by Gestalt theory.
Situation theory has been developed over the last decade and various versions of the theory have been applied to a number of linguistic issues. However, not much work has been done in regard to its computational aspects. In this paper, we review the existing approaches towards 'computational situation theory' with considerable emphasis on our own research.
There is currently much interest in bringing together the tradition of categorial grammar, and especially the Lambek calculus, with the recent paradigm of linear logic to which it has strong ties. One active research area is designing non-commutative versions of linear logic (Abrusci, 1995; Retoré, 1993) which can be sensitive to word order while retaining the hypothetical reasoning capabilities of standard (commutative) linear logic (Dalrymple et al., 1995). Some connections between the Lambek calculus and computations in groups have long been known (van Benthem, 1986) but no serious attempt has been made to base a theory of linguistic processing solely on group structure. This paper presents such a model, and demonstrates the connection between linguistic processing and the classical algebraic notions of non-commutative free group, conjugacy, and group presentations. A grammar in this model, or G-grammar, is a collection of lexical expressions which are products of logical forms, phonological forms, and inverses of those. Phrasal descriptions are obtained by forming products of lexical expressions and by cancelling contiguous elements which are inverses of each other. A G-grammar provides a symmetrical specification of the relation between a logical form and a phonological string that is neutral between parsing and generation modes. We show how the G-grammar can be oriented for each of the modes by reformulating the lexical expressions as rewriting rules adapted to parsing or generation, which then have strong decidability properties (inherent reversibility). We give examples showing the value of conjugacy for handling long-distance movement and quantifier scoping both in parsing and generation.
The paper argues that by moving from the free monoid over a vocabulary V (standard in formal language theory) to the free group over V, deep affinities between linguistic phenomena and classical algebra come to the surface, and that the consequences of tapping the mathematical connections thus established can be considerable.
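The central mechanism of the abstract above, forming products of lexical expressions and cancelling contiguous inverse elements, is ordinary free-group word reduction, which can be sketched in a few lines. This is our own illustration under a simple encoding of our choosing (signed symbols), not code from the paper.

```python
def reduce_word(word):
    """Cancel adjacent inverse pairs until the word is fully reduced.

    A word is a list of (symbol, exponent) pairs, where ('w', -1) is the
    group inverse of ('w', 1). A stack gives the reduced form in one pass.
    """
    stack = []
    for sym, exp in word:
        if stack and stack[-1] == (sym, -exp):
            stack.pop()  # contiguous inverses cancel
        else:
            stack.append((sym, exp))
    return stack

def product(*words):
    """Form the group product of several words, reducing as we go."""
    out = []
    for w in words:
        out = reduce_word(out + list(w))
    return out
```

In the G-grammar setting, the symbols would be logical and phonological forms; cancellation of an element against its inverse is what assembles a phrasal description from lexical expressions.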
By examining the contingent alliance that has emerged between the computational theory of mind and cyborg theory, we discern some questionable ways in which the literalization of technological metaphors and the over-extension of the “computational” have functioned, not only to influence conceptions of cognition, but also by becoming normative perspectives on how minds and bodies should be transformed, such that they can capitalize on technology’s capacity to enhance cognition and thus amend our sense of what it is to be “human”. We consider “a moratorium on cyborg discourse” as a way of focusing the conceptual and social–political problems posed by this alliance.
Computational learning theory explores the limits of learnability. Studying language acquisition from this perspective involves identifying classes of languages that are learnable from the available data, within the limits of time and computational resources available to the learner. Different models of learning can yield radically different learnability results, where these depend on the assumptions of the model about the nature of the learning process, and the data, time, and resources that learners have access to. To the extent that such assumptions accurately reflect human language learning, a model that invokes them can offer important insights into the formal properties of natural languages, and the way in which their representations might be efficiently acquired. In this chapter we consider several computational learning models that have been applied to the language learning task. Some of these have yielded results that suggest that the class of natural languages cannot be efficiently learned from the primary linguistic data (PLD) available to children, through..
Recent computational models of motor planning have relied heavily on anticipating the consequences of motor acts. Such anticipation is vital for dealing with the redundancy problem of motor control (i.e., the problem of selecting a particular motor solution when more than one is possible to achieve a goal). Computational approaches to motor planning support the Theory of Event Coding (TEC).
Levelt et al. attempt to “model their theory” with WEAVER++. Modeling theories requires a model theory. The time is ripe for a methodology for building, testing, and evaluating computational models. We propose a tentative, five-step framework for tackling this problem, within which we discuss the potential strengths and weaknesses of Levelt et al.'s modeling approach.
Of three types of evidence available to evolution theorists – comparative, continuity, and computational – the first is largely productive rather than predictive. Although comparison between extant species or languages is possible and can be suggestive of evolutionary processes, leading to theory development, comparison with extinct species and languages seems necessary for validation. Continuity and computational evidence provide the best opportunities for supporting predictions.
This commentary is an elaboration on Schyns, Goldstone & Thibaut's proposal for flexible features in categorization in the light of three areas not explicitly discussed by the authors: connectionist models of categorization, computational learning theory, and constructivist theories of the mind. In general, the authors' proposal is strongly supported, paving the way for model extensions and for interesting novel cognitive research. Nor is the authors' proposal incompatible with theories positing some fixed set of features.
The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency at which we must aim in developing artificial agents comes into focus. This chapter explores these issues, and from its results details a novel approach to meeting the given conditions in a simple architecture of information processing.
In the dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science and linguistic theories.

In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments to treat as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic.

In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory.

In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time.

In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning Hypothesis.

In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language.
We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic, and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics.

In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like "Most boys and most girls hate each other" is not expressible by linear formulae, and one needs to use branching quantification. We discuss possible readings of such sentences and come to the conclusion that they are expressible by linear formulae, contrary to what Hintikka states. Next, we propose empirical evidence confirming our theoretical predictions that these sentences are sometimes interpreted by people as having the conjunctional reading.

In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model.
In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game theory, and so on.

In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
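The automata-theoretic semantics for monadic quantifiers that the dissertation builds on can be sketched briefly. In the standard encoding (the encoding and function names below are illustrative assumptions), a model is a bit-string whose i-th position is 1 iff the i-th A is also a B. "Every" and "some" then need only finite memory, a two-state scan, whereas "most" must compare two unbounded counts and so requires push-down power; this is the complexity gap that the reaction-time experiments probe.

```python
# Illustrative automata-style verifiers for monadic quantifiers.
# A model is a list of bits: bits[i] == 1 iff the i-th A is a B.

def every(bits):
    # Finite-state: one bit of memory, reject on the first 0.
    state = True
    for b in bits:
        if b == 0:
            state = False
    return state

def some(bits):
    # Finite-state: accept as soon as a 1 is seen.
    state = False
    for b in bits:
        if b == 1:
            state = True
    return state

def most(bits):
    # Not finite-state: needs an unbounded counter (push-down power)
    # to compare the number of 1s against the number of 0s.
    ones = sum(1 for b in bits if b == 1)
    return ones > len(bits) - ones

model = [1, 1, 0, 1]  # three of four As are Bs
print(every(model), some(model), most(model))  # False True True
```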
A series of representations must be semantics-driven if the members of that series are to combine into a single thought. Where semantics is not operative, there is at most a series of disjoint representations that add up to nothing true or false, and therefore do not constitute a thought at all. There is necessarily a gulf between simulating thought, on the one hand, and actually thinking, on the other. A related point is that a popular doctrine - the so-called 'computational theory of mind' (CTM) - is based on a confusion. CTM is the view that thought-processes consist in 'computations', where a computation is defined as a 'form-driven' operation on symbols. The expression 'form-driven operation' is ambiguous, and may refer either to syntax-driven operations or to morphology-driven operations. Syntax-driven operations presuppose the existence of operations that are driven by semantic and extra-semantic knowledge. So CTM is false if the terms 'computation' and 'form-driven operation' are taken to refer to syntax-driven operations. Thus, if CTM is to work, those expressions must be taken to refer to morphology-driven operations; and CTM therefore fails, given that an operation must be semantics-driven if it is to qualify as a thought. CTM therefore fails on every disambiguation of the expressions 'formal operation' and 'computation,' and it is therefore false.
Mental representations, Swiatczak (Minds Mach 21:19–32, 2011) argues, are fundamentally biochemical and their operations depend on consciousness; hence the computational theory of mind, based as it is on multiple realisability and purely syntactic operations, must be wrong. Swiatczak, however, is mistaken. Computation, properly understood, can afford descriptions/explanations of any physical process, and since Swiatczak accepts that consciousness has a physical basis, his argument against computationalism must fail. Of course, we may not have much idea how consciousness (itself a rather unclear plurality of notions) might be implemented, but we do have a hypothesis—that all of our mental life, including consciousness, is the result of computational processes and so not tied to a biochemical substrate. Like it or not, the computational theory of mind remains the only game in town. (David Davenport, Minds and Machines, pp. 1–8, DOI 10.1007/s11023-012-9271-5.)
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study. We use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences. We show a computational dichotomy between different readings of reciprocity. Finally, we go more into philosophical speculation on meaning, ambiguity and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
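The tractability claim for iteration can be illustrated with a toy model-checker (the model and names below are invented for illustration). Verifying the iterated reading of "every a R's some b" is just a pair of nested loops, O(|A|·|B|) in the size of the model; the branching reading instead asks for subsets A′ ⊆ A and B′ ⊆ B of given sizes with A′ × B′ ⊆ R, i.e. for a large complete bipartite subgraph, a problem that is NP-hard in general.

```python
# A hedged sketch of polynomial-time model-checking for an iterated
# two-quantifier sentence.  A, B, and R form an invented toy model.

A = {"a1", "a2"}
B = {"b1", "b2", "b3"}
R = {("a1", "b1"), ("a2", "b3")}  # who is related to whom

def every_some(A, B, R):
    """Iterated reading: every element of A is R-related to some B.

    Two nested loops over the model: O(|A| * |B|).
    """
    return all(any((a, b) in R for b in B) for a in A)

print(every_some(A, B, R))  # True: a1-b1 and a2-b3 witness the claim
```

No comparably simple loop exists for the branching reading, which is what the intractability result in the paper turns on.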
We argue that there are mutually beneficial connections to be made between ideas in argumentation theory and the philosophy of mathematics, and that these connections can be suggested via the process of producing computational models of theories in these domains. We discuss Lakatos’s work (Proofs and Refutations, 1976), in which he championed the informal nature of mathematics, and our computational representation of his theory. In particular, we outline our representation of Cauchy’s proof of Euler’s conjecture, in which we use work by Haggith on argumentation structures, and identify connections between these structures and Lakatos’s methods.