The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. Such an agent must not only be able to act, but be able to act morally. In order to realize this goal, a number of challenges must be met and a number of questions answered; in doing so, the form of agency at which we must aim in developing artificial agents comes into focus. This chapter explores these issues and, from its results, details a novel approach to meeting the given conditions in a simple information-processing architecture.
We study the computational complexity of reciprocal sentences with quantified antecedents. We observe a computational dichotomy between different interpretations of reciprocity, and shed some light on the status of the so-called Strong Meaning Hypothesis.
The Language of Thought program has a suicidal edge. Jerry Fodor, of all people, has argued that although LOT will likely succeed in explaining modular processes, it will fail to explain the central system, a subsystem in the brain in which information from the different sense modalities is integrated, conscious deliberation occurs, and behavior is planned. A fundamental characteristic of the central system is that it is "informationally unencapsulated": its operations can draw on information from any cognitive domain. The domain-general nature of the central system is key to human reasoning; our ability to connect apparently unrelated concepts enables the creativity and flexibility of human thought, as does our ability to integrate material across sensory divides. The central system is the holy grail of cognitive science: understanding higher cognitive function is crucial to grasping how humans reach their highest intellectual achievements. But according to Fodor, the founding father of the LOT program and the related Computational Theory of Mind (CTM), the holy grail is out of reach: the central system is likely to be non-computational (Fodor 1983, 2000, 2008). Cognitive scientists working on higher cognitive function should abandon their efforts; research should be limited to the modules, which for Fodor rest at the sensory periphery (2000). Cognitive scientists who work in the symbol-processing tradition outside of philosophy would reject this pessimism, but ironically, within philosophy itself, this pessimistic streak has been very influential, most likely because it comes from the most well-known proponent of LOT and CTM. Indeed, pessimism about centrality has become assimilated into the mainstream conception of LOT. (Herein, I refer to a LOT that appeals to pessimism about centrality as the "standard LOT".)
I imagine this makes the standard LOT unattractive to those philosophers with a more optimistic approach to what cognitive science can achieve.
Advocates of the computational theory of mind claim that the mind is a computer whose operations can be implemented by various computational systems. According to these philosophers, the mind is multiply realisable because—as they claim—thinking involves the manipulation of syntactically structured mental representations. Since syntactically structured representations can be made of different kinds of material while performing the same calculation, mental processes can also be implemented by different kinds of material. From this perspective, consciousness plays a minor role in mental activity. However, contemporary neuroscience provides experimental evidence suggesting that mental representations necessarily involve consciousness. Consciousness does not only enable individuals to become aware of their own thoughts, it also constantly changes the causal properties of these thoughts. In light of these empirical studies, mental representations appear to be intrinsically dependent on consciousness. This discovery represents an obstacle to any attempt to construct an artificial mind.
According to some philosophers, computational explanation is proprietary to psychology—it does not belong in neuroscience. But neuroscientists routinely offer computational explanations of cognitive phenomena. In fact, computational explanation was initially imported from computability theory into the science of mind by neuroscientists, who justified this move on neurophysiological grounds. Establishing the legitimacy and importance of computational explanation in neuroscience is one thing; shedding light on it is another. I raise some philosophical questions pertaining to computational explanation and outline some promising answers that are being developed by a number of authors.
Computational modeling plays an increasingly important explanatory role in cases where we investigate systems or problems that exceed our native epistemic capacities. One clear case where technological enhancement is indispensable involves the study of complex systems. However, even in contexts where the number of parameters and interactions that define a problem is small, simple systems sometimes exhibit non-linear features which computational models can illustrate and track. In recent decades, computational models have been proposed as a way to assist us in understanding emergent phenomena.
There is no consensus as to whether a Liar sentence is meaningful or not. Still, a widespread conviction with respect to Liar sentences (and other ungrounded sentences) is that, whether or not they are meaningful, they are useless. The philosophical contribution of this paper is to put this conviction into question. Using the framework of assertoric semantics, a semantic valuation method for languages of self-referential truth developed by the author, we show that certain computational problems, called query structures, can be solved more efficiently by an agent who has self-referential resources (amongst which are Liar sentences) than by an agent who has only classical resources; we establish the computational power of self-referential truth. The paper concludes with some thoughts on the implications of the established result for deflationary accounts of truth.
In this paper I review some leading developments in the empirical theory of affect. I argue that (1) affect is a distinct, perceptual-representation-governed system, and (2) there are significant modular factors in affect. The paper concludes with the observation that the feeler (the affective perceptual system) may be a natural kind within cognitive science. The main purpose of the paper is to explore some hitherto unappreciated connections between the theory of affect and the computational theory of mind.
The central aim of this paper is to shed light on the nature of explanation in computational neuroscience. I argue that computational models in this domain possess explanatory force to the extent that they describe the mechanisms responsible for producing a given phenomenon—paralleling how other mechanistic models explain. Conceiving computational explanation as a species of mechanistic explanation affords an important distinction between computational models that play genuine explanatory roles and those that merely provide accurate descriptions or predictions of phenomena. It also serves to clarify the pattern of model refinement and elaboration undertaken by computational neuroscientists.
We first discuss Michael Dummett’s philosophy of mathematics and Robert Brandom’s philosophy of language to demonstrate that inferentialism entails the falsity of Church’s Thesis and, as a consequence, the Computational Theory of Mind. This amounts to an entirely novel critique of mechanism in the philosophy of mind, one we show to have tremendous advantages over the traditional Lucas-Penrose argument.
Despite its significance in neuroscience and computation, McCulloch and Pitts's celebrated 1943 paper has received little historical and philosophical attention. In 1943 there already existed a lively community of biophysicists doing mathematical work on neural networks. What was novel in McCulloch and Pitts's paper was their use of logic and computation to understand neural, and thus mental, activity. McCulloch and Pitts's contributions included (i) a formalism whose refinement and generalization led to the notion of finite automata (an important formalism in computability theory), (ii) a technique that inspired the notion of logic design (a fundamental part of modern computer design), (iii) the first use of computation to address the mind–body problem, and (iv) the first modern computational theory of mind and brain.
In the dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science and linguistic theories.

In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments to treat as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic.

In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory.

In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time.

In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning hypothesis.

In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language.
We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics.

In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like "Most boys and most girls hate each other" is not expressible by linear formulae and one needs to use branching quantification. We discuss possible readings of such sentences and come to the conclusion that they are expressible by linear formulae, as opposed to what Hintikka states. Next, we propose empirical evidence confirming our theoretical predictions that these sentences are sometimes interpreted by people as having the conjunctional reading.

In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model.
In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game theory, and so on.

In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
It has been argued that ethically correct robots should be able to reason about right and wrong. In order to do so, they must have a set of do’s and don’ts at their disposal. However, such a list may be inconsistent, incomplete or otherwise unsatisfactory, depending on the reasoning principles that one employs. For this reason, it might be desirable if robots were to some extent able to reason about their own reasoning—in other words, if they had some meta-ethical capacities. In this paper, we sketch how one might go about designing robots that have such capacities. We show that the field of computational meta-ethics can profit from the same tools as have been used in computational metaphysics.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide an insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study. We use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences. We show a computational dichotomy between different readings of reciprocity. Finally, we turn to more philosophical speculation on meaning, ambiguity and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
In this paper, we describe our initial investigations in computational metaphysics. Our method is to implement axiomatic metaphysics in an automated reasoning system. Specifically, we describe what we have discovered when the theory of abstract objects is implemented in PROVER9 (a first-order automated reasoning system which is the successor to OTTER). After reviewing the second-order, axiomatic theory of abstract objects, we show (1) how to represent a fragment of that theory in PROVER9's first-order syntax, and (2) how PROVER9 then finds proofs of interesting theorems of metaphysics, such as that every possible world is maximal. We conclude the paper by discussing some issues for further research.
We examine the verification of simple quantifiers in natural language from a computational-model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, as well as providing additional empirical evidence.
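The automata-theoretic distinction at issue can be illustrated with a small sketch (our own illustration, not code from the study; the function names are hypothetical). Aristotelian quantifiers such as "some" and "all" can be verified with a fixed, finite number of states, whereas a proportional quantifier like "more than half" requires an unbounded counter, which is what pushes it from finite-state to push-down recognizability.

```python
# Verifying quantifiers over a stream of Booleans, where each Boolean says
# whether an element satisfies the predicate (e.g. "is red" for
# "Some/All/Most of the balls are red").

def verify_some(stream):
    # Finite automaton: two states ("witness seen" / "not yet").
    seen = False
    for x in stream:
        seen = seen or x
    return seen

def verify_all(stream):
    # Finite automaton: two states as well ("no counterexample" / "failed").
    for x in stream:
        if not x:
            return False
    return True

def verify_more_than_half(stream):
    # Needs an unbounded counter: no fixed number of states suffices for
    # arbitrarily long inputs, corresponding to push-down rather than
    # finite automata.
    count = 0
    for x in stream:
        count += 1 if x else -1
    return count > 0

print(verify_some([False, True, False]))           # True
print(verify_all([True, True, False]))             # False
print(verify_more_than_half([True, True, False]))  # True
```

The counter in `verify_more_than_half` is the computational resource that the cited reaction-time studies suggest carries a measurable cognitive cost.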
I argue here for a number of ways that modern computational science requires a change in the way we represent the relationship between theory and applications. It requires a switch away from logical reconstruction of theories in order to take surface mathematical syntax seriously. In addition, syntactically different versions of the same theory have important differences for applications, and this shows that the semantic account of theories is inappropriate for some purposes. I also argue against formalist approaches in the philosophy of science and for a greater role for perceptual knowledge rather than propositional knowledge in scientific empiricism.
The problem of computational complexity of semantics for some natural language constructions – considered in [M. Mostowski, D. Wojtyniak 2004] – motivates an interest in the complexity of Ramsey quantifiers in finite models. In general, a sentence with a Ramsey quantifier R of the form Rx,y H(x,y) is interpreted as ∃A(A is big relative to the universe ∧ A² ⊆ H). In the paper cited, the problem of the complexity of the Hintikka sentence is reduced to the problem of the computational complexity of the Ramsey quantifier for which the phrase "A is big relative to the universe" is interpreted as containing at least one representative of each equivalence class, for some given equivalence relation. In this work we consider quantifiers Rf, for which "A is big relative to the universe" means "card(A) > f(n), where n is the size of the universe". Following [Blass, Gurevich 1986] we call R mighty if Rx,y H(x,y) defines an NP-complete class of finite models. Similarly, we say that Rf is NP-hard if the corresponding class is NP-hard. We prove the following theorems.
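The definition can be made concrete with a naive model-checker for Rf (an illustrative sketch of ours, not from the paper): Rf x,y H(x,y) holds in a finite model iff some set A with card(A) > f(n) satisfies A² ⊆ H. The brute-force search over subsets reflects the NP upper bound; for suitable f the problem is also NP-hard.

```python
from itertools import combinations

def ramsey_holds(universe, H, f):
    """Check Rf x,y H(x,y): is there an A with |A| > f(n) and A x A ⊆ H?

    universe: list of elements; H: set of pairs; f: threshold function.
    Since "A x A ⊆ H" is preserved under taking subsets, it suffices to
    search sets of the minimal admissible size f(n) + 1. The search is
    exponential in general, mirroring the NP upper bound.
    """
    n = len(universe)
    k = f(n) + 1
    for A in combinations(universe, k):
        if all((a, b) in H for a in A for b in A):
            return True
    return False

# Toy model: H makes {0, 1, 2} a reflexive "clique"; element 3 is isolated.
universe = [0, 1, 2, 3]
H = {(a, b) for a in range(3) for b in range(3)}
print(ramsey_holds(universe, H, lambda n: n // 2))  # True: |{0,1,2}| = 3 > 2
```

With f(n) = n // 2 this instantiates a proportional reading of "big"; taking f from the equivalence-class condition instead recovers the quantifier used for the Hintikka sentence.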
Some problems rarely discussed in traditional philosophy of science are mentioned: The empirical sciences using mathematico-quantitative theoretical models are frequently confronted with several types of computational problems posing primarily methodological limitations on explanatory and prognostic matters. Such limitations may arise from the appearances of deterministic chaos and (too) high computational complexity in general. In many cases, however, scientists circumvent such limitations by utilizing reductional approximations or complexity reductions for intractable problem formulations, thus constructing new models which are computationally tractable. Such activities are compared with reduction types (more) established in philosophy of science.
Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA.
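The core LSA computation can be shown in a toy version (our own sketch with made-up example sentences, not from any of the papers in the topic): a term-document count matrix is factored by singular value decomposition, and word meanings are then compared as vectors in the reduced latent space.

```python
import numpy as np

# Tiny corpus; real LSA uses large corpora and a log-entropy weighting
# before the SVD, both omitted here for brevity.
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are pets",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Term-document matrix of raw counts.
M = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        M[index[w], j] += 1

# Truncated SVD: keep k latent dimensions; words become points in the
# reduced space, scaled by the singular values.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2
word_vecs = U[:, :k] * s[:k]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "cat" and "dog" never co-occur, but they occur in similar contexts, so
# their latent vectors end up close -- the hallmark of LSA.
print(cosine(word_vecs[index["cat"]], word_vecs[index["dog"]]))
```

Note that the similarity between "cat" and "dog" is induced purely by the shared context words ("the", "sat", "on"); this indirect, second-order similarity is exactly what distinguishes LSA from raw co-occurrence counting.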
Computational philosophy (CP) aims at investigating many important concepts and problems of the philosophical and epistemological tradition in a new way by taking advantage of information-theoretic, cognitive, and artificial intelligence methodologies. I maintain that the results of computational philosophy meet the classical requirements of some Peircean pragmatic ambitions. Indeed, more than 100 years ago, the American philosopher C.S. Peirce, when working on logical and philosophical problems, suggested the concept of pragmatism (pragmaticism, in his own words) as a logical criterion to analyze what words and concepts express through their practical meaning. Many words have been spent on creative processes and reasoning, especially in the case of scientific practices. In fact, many philosophers have offered a number of ways of construing hypothesis generation, but they aim at demonstrating that the activity of generating hypotheses is paradoxical, obscure, and thus not analyzable. Those descriptions are often far from the Peircean pragmatic prescription and so abstract as to turn out completely unknowable and obscure. To dismiss this tendency and gain interesting insight about the so-called logic of scientific discovery, we need to build constructive procedures which could play a role in moving the problem-solving process forward by implementing them in actual models. The computational turn gives us a new way to understand creative processes in a strictly pragmatic sense. In fact, by exploiting artificial intelligence and cognitive science tools, computational philosophy allows us to test concepts and ideas previously conceived only in abstract terms. It is in the perspective of these actual computational models that I find the central role of abduction in the explanation of creative reasoning in science.
I maintain that the computational philosophy analysis of model-based and manipulative abduction, and of external and epistemic mediators, is important not only to delineate the actual practice of abduction, but also to further enhance the development of programs computationally adequate for rediscovering, or discovering for the first time, scientific hypotheses or mathematical theorems, for example. The last part of the paper is devoted to illustrating the problem of the extra-theoretical dimension of reasoning and discovery from the perspective of some mathematical cases derived from calculus and geometry.
We begin by distinguishing computationalism from a number of other theses that are sometimes conflated with it. We also distinguish between several important kinds of computation: computation in a generic sense, digital computation, and analog computation. Then, we defend a weak version of computationalism—neural processes are computations in the generic sense. After that, we reject on empirical grounds the common assimilation of neural computation to either analog or digital computation, concluding that neural computation is sui generis. Analog computation requires continuous signals; digital computation requires strings of digits. But current neuroscientific evidence indicates that typical neural signals, such as spike trains, are graded like continuous signals but are constituted by discrete functional elements (spikes); thus, typical neural signals are neither continuous signals nor strings of digits. It follows that neural computation is sui generis. Finally, we highlight three important consequences of a proper understanding of neural computation for the theory of cognition. First, understanding neural computation requires a specially designed mathematical theory (or theories) rather than the mathematical theories of analog or digital computation. Second, several popular views about neural computation turn out to be incorrect. Third, computational theories of cognition that rely on non-neural notions of computation ought to be replaced or reinterpreted in terms of neural computation.
This article introduces the topic "Production of Referring Expressions: Bridging the Gap between Computational and Empirical Approaches to Reference" of the journal Topics in Cognitive Science. We argue that computational and psycholinguistic approaches to reference production can benefit from closer interaction, and that this is likely to result in the construction of algorithms that differ markedly from the ones currently known in the computational literature. We focus particularly on determinism, the feature of existing algorithms that is perhaps most clearly at odds with psycholinguistic results, discussing how future algorithms might include non-determinism, and how new psycholinguistic experiments could inform the development of such algorithms.
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only with proportional quantifiers, like "more than half". This can be explained by noting that, according to the complexity perspective, only proportional quantifiers require working memory engagement.
Many of the formalisms used in Attribute Value grammar are notational variants of languages of propositional modal logic, and testing whether two Attribute Value Structures unify amounts to testing for modal satisfiability. In this paper we put this observation to work. We study the complexity of the satisfiability problem for nine modal languages which mirror different aspects of AVS description formalisms, including the ability to express re-entrancy, the ability to express generalisations, and the ability to express recursive constraints. Two main techniques are used: either Kripke models with desirable properties are constructed, or modalities are used to simulate fragments of Propositional Dynamic Logic. Further possibilities for the application of modal logic in computational linguistics are noted.
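The unification operation whose modal counterpart is studied here can be sketched in a few lines (an illustrative toy of ours that ignores re-entrancy and recursive constraints, the very features the nine modal languages are designed to capture):

```python
# Attribute-value structures as nested dicts; atomic values as strings.
# Unification merges compatible structures and fails on a feature clash.

def unify(a, b):
    """Unify two attribute-value structures; return None on clash."""
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None   # atomic values must agree exactly
    out = dict(a)
    for attr, val in b.items():
        if attr in out:
            sub = unify(out[attr], val)
            if sub is None:
                return None            # feature clash: no unifier exists
            out[attr] = sub
        else:
            out[attr] = val
    return out

f1 = {"agr": {"num": "sg"}, "cat": "np"}
f2 = {"agr": {"per": "3"}}
print(unify(f1, f2))   # {'agr': {'num': 'sg', 'per': '3'}, 'cat': 'np'}
print(unify(f1, {"agr": {"num": "pl"}}))  # None: sg/pl clash
```

In the paper's terms, asking whether `f1` and `f2` unify corresponds to asking whether the conjunction of the modal formulas describing them is satisfiable.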
This article begins with an introduction to defeasible (nonmonotonic) reasoning and a brief description of a computer program, EVID, which can perform such reasoning. I then explain, and illustrate with examples, how this program can be applied in computational representations of ordinary dialogic argumentation. The program represents the beliefs and doubts of the dialoguers, and uses these propositional attitudes, which can include commonsense defeasible inference rules, to infer various changing conclusions as a dialogue progresses. It is proposed that computational representations of this kind are a useful tool in the analysis of dialogic argumentation, and, in particular, demonstrate the important role of defeasible reasoning in everyday arguments using commonsense reasoning.
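A minimal sketch of defeasible inference in this spirit (our own illustration, not the EVID program): each rule carries a set of defeaters, and a default conclusion is drawn only when no defeater is among the current beliefs, so conclusions can change as a dialogue adds new premises.

```python
# A rule is (premises, conclusion, defeaters). A rule fires when all its
# premises are derived and none of its defeaters are.

def conclusions(beliefs, rules):
    """Close a belief set under defeasible rules (naive fixpoint loop)."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for premises, conclusion, defeaters in rules:
            if premises <= derived and not (defeaters & derived):
                if conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
    return derived

rules = [
    ({"bird"}, "flies", {"penguin"}),   # birds fly by default, unless penguins
    ({"penguin"}, "bird", set()),       # penguins are (strictly) birds
]
print(sorted(conclusions({"bird"}, rules)))     # ['bird', 'flies']
print(sorted(conclusions({"penguin"}, rules)))  # ['bird', 'penguin'] -- no 'flies'
```

Adding "penguin" to the beliefs withdraws the "flies" conclusion, which is the nonmonotonic behavior that a dialogic exchange of new information exploits.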
In this chapter, I argue that some aspects of cognitive phenomena cannot be explained computationally. In the first part, I sketch a mechanistic account of computational explanation that spans multiple levels of organization of cognitive systems. In the second part, I turn my attention to what cannot be explained about cognitive systems in this way. I argue that information-processing mechanisms are indispensable in explanations of cognitive phenomena, and this vindicates the computational explanation of cognition. At the same time, it has to be supplemented with other explanations to make the mechanistic explanation complete, and that naturally leads to explanatory pluralism in cognitive science. The price to pay for pluralism, however, is the abandonment of the traditional autonomy thesis asserting that cognition is independent of implementation details.
The paper presents an exploration of conceptual issues that have arisen in the course of investigating speed-up and slowdown phenomena in small Turing machines, in particular results of a test that may spur experimental approaches to the notion of computational irreducibility. The test involves a systematic attempt to outrun the computation of a large number of small Turing machines (3 and 4 state, 2 symbol) by means of integer sequence prediction using a specialized function for that purpose. The experiment prompts an investigation into rates of convergence of decision procedures and the decidability of sets in addition to a discussion of the (un)predictability of deterministic computing systems in practice. We think this investigation constitutes a novel approach to the discussion of an epistemological question in the context of a computer simulation, and thus represents an interesting exploration at the boundary between philosophical concerns and computational experiments.
Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models to non-programming researchers is essentially non-existent, and even for other modelers, the profusion of source code in a multitude of programming languages, written without programming guidelines, makes it almost impossible to access, check, explore, re-use, or continue to develop. It is high time to change this situation, especially since the tools are now readily available to do so. We propose that the modeling community adopt three simple guidelines that would ensure that computational models would be accessible to the broad range of researchers in cognitive science. We further emphasize the pivotal role that journal editors must play in making computational models accessible to readers of their journals.
I articulate and defend a new theory of what it is for a physical system to implement an abstract computational model. According to my descriptivist theory, a physical system implements a computational model just in case the model accurately describes the system. Specifically, the system must reliably transit between computational states in accord with mechanical instructions encoded by the model. I contrast my theory with an influential approach to computational implementation espoused by Chalmers, Putnam, and others. I deploy my theory to illuminate the relation between computation and representation. I also rebut arguments, propounded by Putnam and Searle, that computational implementation is trivial.
Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of disagreement among the different theories is whether children are equipped with special mechanisms and biases for word learning, or their general cognitive abilities are adequate for the task. We present a novel computational model of early word learning to shed light on the mechanisms that might be at work in this process. The model learns word meanings as probabilistic associations between words and semantic elements, using an incremental and probabilistic learning mechanism, and drawing only on general cognitive abilities. The results presented here demonstrate that much about word meanings can be learned from naturally occurring child-directed utterances (paired with meaning representations), without using any special biases or constraints, and without any explicit developmental changes in the underlying learning mechanism. Furthermore, our model provides explanations for the occasionally contradictory child experimental data, and offers predictions for the behavior of young word learners in novel situations.
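The learning mechanism described here, probabilistic word-meaning association accumulated incrementally across situations, can be sketched as follows (a simplified illustration of ours; the paper's actual model is more elaborate):

```python
from collections import defaultdict

# Each utterance (a list of words) is paired with a scene (a set of
# candidate semantic elements). Association scores are updated online,
# with each word distributing credit over the meanings present in the
# scene in proportion to its current beliefs.

assoc = defaultdict(float)   # (word, meaning) -> accumulated evidence

def observe(utterance, scene):
    for w in utterance:
        # Normalize over the meanings available in this scene; the small
        # constant gives unseen pairs a chance (a crude smoothing prior).
        weights = {m: assoc[(w, m)] + 1e-3 for m in scene}
        total = sum(weights.values())
        for m in scene:
            assoc[(w, m)] += weights[m] / total

corpus = [
    (["the", "dog", "barks"], {"DOG", "BARK"}),
    (["the", "dog", "runs"], {"DOG", "RUN"}),
    (["the", "cat", "runs"], {"CAT", "RUN"}),
]
for _ in range(10):              # repeated exposure sharpens associations
    for utterance, scene in corpus:
        observe(utterance, scene)

best = max(["DOG", "BARK", "RUN", "CAT"], key=lambda m: assoc[("dog", m)])
print(best)  # DOG
```

No word-learning-specific bias is built in: "dog" comes to mean DOG simply because DOG is the one element present in every scene where "dog" is uttered, which is the cross-situational statistic the model exploits.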
Recent research in computational neuroscience has demonstrated that we now possess the ability to simulate neural systems in significant detail and on a large scale. Simulations on the scale of a human brain have recently been reported. The ability to simulate entire brains (or significant portions thereof) would be a revolutionary scientific advance, with substantial benefits for brain science. However, the prospect of whole-brain simulation comes with a set of new and unique ethical questions. In the present paper, we briefly outline certain of those problems and emphasize the need to begin considering the ethical aspects of computational neuroscience.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerised lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
Most previous work on the responsible conduct of research has focused on good practices in laboratory experiments. Because computation now rivals experimentation as a mode of scientific research, we sought to identify the responsibilities of researchers who develop or use computational modeling and simulation. We interviewed nineteen experts to collect examples of ethical issues from their experiences in conducting research with computational models. We gathered their recommendations for guidelines for computational research. Informed by these interviews, we describe the respective professional responsibilities of developers and users of computational models in research. In particular, we examine whether developers should disclose their full computational code, and we explain how developers and users should minimize harms from improper uses of models.
It is often assumed that graphemes are a crucial level of orthographic representation above letters. Current connectionist models of reading, however, do not address how the mapping from letters to graphemes is learned. One major challenge for computational modeling is therefore developing a model that learns this mapping and can assign the graphemes to linguistically meaningful categories such as the onset, vowel, and coda of a syllable. Here, we present a model that learns to do this in English for strings of any letter length and any number of syllables. The model is evaluated on error rates and further validated on the results of a behavioral experiment designed to examine ambiguities in the processing of graphemes. The results show that the model (a) chooses graphemes from letter strings with a high level of accuracy, even when trained on only a small portion of the English lexicon; (b) chooses a similar set of graphemes as people do in situations where different graphemes can potentially be selected; (c) predicts orthographic effects on segmentation which are found in human data; and (d) can be readily integrated into a full-blown model of multi-syllabic reading aloud such as CDP++ (Perry, Ziegler, & Zorzi, 2010). Altogether, these results suggest that the model provides a plausible hypothesis for the kind of computations that underlie the use of graphemes in skilled reading.
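As a rough illustration of the letters-to-graphemes mapping at issue in the abstract above, here is a greedy longest-match segmenter over a small hand-picked grapheme inventory. Both the inventory and the greedy strategy are assumptions for the example; the model in the paper learns this mapping rather than stipulating it:

```python
# Hypothetical mini-inventory of multi-letter English graphemes.
GRAPHEMES = {"th", "sh", "ch", "ck", "ea", "ee", "igh", "oo", "ai", "ng"}

def segment(word):
    """Split a letter string into graphemes, preferring longer matches."""
    out, i = [], 0
    while i < len(word):
        for size in (3, 2, 1):  # try the longest grapheme first
            chunk = word[i:i + size]
            if len(chunk) == size and (size == 1 or chunk in GRAPHEMES):
                out.append(chunk)
                i += size
                break
    return out
```

For example, `segment("night")` yields `["n", "igh", "t"]`, grouping the trigraph *igh* rather than treating each letter separately.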
In this paper, we argue for the centrality of prediction in the use of computational models in science. We focus on the consequences of the irreversibility of computational models and on the conditional, or ceteris paribus, nature of the predictions they make. By irreversibility, we mean the fact that computational models can generally arrive at the same state via many possible sequences of previous states. Thus, while in the natural world, it is generally assumed that physical states have a unique history, representations of those states in a computational model will usually be compatible with more than one possible history in the model. We describe some of the challenges involved in prediction and retrodiction in computational models while arguing that prediction is an essential feature of non-arbitrary decision making. Furthermore, we contend that the non-predictive virtues of computational models are dependent to a significant degree on the predictive success of the models in question.
The idea that human cognitive capacities are explainable by computational models is often conjoined with the idea that, while the states postulated by such models are in fact realized by brain states, there are no type-type correlations between the states postulated by computational models and brain states (a corollary of token physicalism). I argue that these ideas are not jointly tenable. I discuss the kinds of empirical evidence available to cognitive scientists for (dis)confirming computational models of cognition and argue that none of these kinds of evidence can be relevant to a choice among competing computational models unless there are in fact type-type correlations between the states postulated by computational models and brain states. Thus, I conclude, research into the computational procedures employed in human cognition must be conducted hand-in-hand with research into the brain processes which realize those procedures.
Molecular models are characteristic topics of chemical research, shaped by the technical standards of observation, computation, and representation. Mathematically, molecular structures have been represented by means of graph theory, topology, differential equations, and numerical procedures. With the increasing capabilities of computer networks, computational models and computer-assisted visualization have become an essential part of chemical research. Object-oriented programming languages create a virtual reality of chemical structures, opening new avenues of exploration and collaboration in chemistry. From an epistemic point of view, virtual reality is a new computer-assisted tool of human imagination and recognition.
Intelligent problem-solving depends on consciously applied methods of thinking as well as inborn or trained skills. The latter are like resident programs which control processes of the kind called (in Unix) daemons. Such a computational process is a fitting reaction to situations (defined in the program in question) which is executed without any command from the computer user (or without any intention of the conscious subject). The study of intelligence should involve methods of recognizing those beliefs whose existence is due to daemons. Once aware of a belief so produced, one can assess it critically and, if possible and necessary, make it more rational. For example, beliefs concerning properties of time are produced by a daemon-like intuition, as are beliefs about the Euclidean properties of space. The merit of becoming aware of such daemons' activities, and so transforming implicit beliefs into explicit ones, lies mainly in the axiomatic characterization of the properties involved. This makes it possible to improve a daemon-like conceptual equipment (producing beliefs) by suitable modifications of the axioms, or postulates. Such postulate sets can also define artificial daemons to either emulate or improve natural intelligence.
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: References in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.
Interest in the computational aspects of modeling has been steadily growing in philosophy of science. This paper aims to advance the discussion by articulating the way in which modeling and computational errors are related and by explaining the significance of error management strategies for the rational reconstruction of scientific practice. To this end, we first characterize the role and nature of modeling error in relation to a recipe for model construction known as Euler’s recipe. We then describe a general model that allows us to assess the quality of numerical solutions in terms of measures of computational errors that are completely interpretable in terms of modeling error. Finally, we emphasize that this type of error analysis involves forms of perturbation analysis that go beyond the basic model-theoretical and statistical/probabilistic tools typically used to characterize the scientific method; this demands that we revise and complement our reconstructive toolbox in a way that can affect our normative image of science.
The notions of argument and argumentation have become increasingly ubiquitous in Artificial Intelligence research, with various applications and interpretations. Less attention, however, has been devoted specifically to rhetorical argument. The work presented in this paper aims at bridging this gap by proposing a framework for characterising rhetorical argumentation, based on Perelman and Olbrechts-Tyteca's New Rhetoric. The paper provides an overview of the state of the art of computational work based on, or dealing with, rhetorical aspects of argumentation, before presenting the proposed characterisation, corroborated by worked-through examples.
Recent findings indicate that the constituent digits of multi-digit numbers are processed in a decomposed fashion (units, tens, and so on) rather than integrated into one entity. This is suggested by interfering effects of unit-digit processing on two-digit number comparison. In the present study, we extended the computational model of two-digit number magnitude comparison of Moeller, Huber, Nuerk, and Willmes (2011a) to the case of three-digit number comparison (e.g., 371_826). In a second step, we evaluated how hundred-decade and hundred-unit compatibility effects were moderated by varying the percentage of within-hundred (e.g., 539_582) and within-hundred-and-decade filler items (e.g., 483_489). From the results we predict that numerical distance as well as compatibility effects should indeed be modulated by the relevance of tens and units in three-digit number magnitude comparison: while the hundred distance effect in particular should decrease, we predict hundred-decade and hundred-unit compatibility effects to increase with the relevance of tens and units.
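The compatibility manipulation in the abstract above can be made concrete: a three-digit pair is hundred-decade (or hundred-unit) compatible when the decision-irrelevant digit comparison points in the same direction as the decision-relevant hundreds comparison. A minimal sketch (the function name and output format are illustrative, not taken from the model):

```python
def compatibility(a, b):
    """Classify a three-digit comparison pair by whether the
    decision-irrelevant digits (tens, units) point the same way
    as the decision-relevant hundreds."""
    def sign(x, y):
        return (x > y) - (x < y)  # -1, 0, or +1

    hundreds = sign(a // 100, b // 100)
    return {
        "hundred_decade_compatible": sign(a // 10 % 10, b // 10 % 10) == hundreds,
        "hundred_unit_compatible": sign(a % 10, b % 10) == hundreds,
    }
```

For the stimulus 371_826, the pair is hundred-decade incompatible (7 > 2 but 3 < 8) and hundred-unit compatible (1 < 6 and 3 < 8).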
The Computational Metaphor is an extremely influential notion, and more than any other trend has given rise to the field of Cognitive Science. Environmentalism is at present better formalised as a political movement than as a scientific paradigm, despite significant research by Gibson and his followers. This article attempts to address the difficult problem of synthesising these two apparently antagonistic research paradigms.
Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the “experimenter”, and Mary, the “computational modeller”. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches the most important aspects of grounded theories that should inform computational modelling and, conversely, the impact that computational modelling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.
Computational sociology models social phenomena using the concepts of emergence and downward causation. However, the theoretical status of these concepts is ambiguous; they suppose too much ontology and are invoked by two opposed sociological interpretations of social reality: the individualistic and the holistic. This paper aims to clarify those concepts and argue in favour of their heuristic value for social simulation. It does so by proposing a link between the concept of emergence and Luhmann's theory of communication. For Luhmann, society emerges from the bottom-up as communication and he describes the process by which society limits the possible selections of individuals as downward causation. It is argued that this theory is well positioned to overcome some epistemological drawbacks in computational sociology.
We report two experiments which tested whether cognitive capacities are limited to those functions that are computationally tractable (PTIME-Cognition Hypothesis). In particular, we investigated the semantic processing of reciprocal sentences with generalized quantifiers, i.e., sentences of the form Q dots are directly connected to each other, where Q stands for a generalized quantifier, e.g. all or most. Sentences of this type are notoriously ambiguous and it has been claimed in the semantic literature that the logically strongest reading is preferred (Strongest Meaning Hypothesis). Depending on the quantifier, the verification of their strongest interpretations is computationally intractable whereas the verification of the weaker readings is tractable. We conducted a picture completion experiment and a picture verification experiment to investigate whether comprehenders shift from an intractable reading to a tractable reading which should be dispreferred according to the Strongest Meaning Hypothesis. The results from the picture completion experiment suggest that intractable readings occur in language comprehension. Their verification, however, rapidly exceeds cognitive capacities in case the verification problem cannot be solved using simple heuristics. In particular, we argue that during verification, guessing strategies are used to reduce computational complexity.
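The tractability contrast described above can be made concrete for the strong readings: for all, verification is a pairwise check (PTIME), while for most it amounts to finding a majority-sized clique, an instance of the NP-hard CLIQUE problem. A brute-force sketch, with the dot picture encoded as a graph for illustration:

```python
from itertools import combinations

def strong_all(vertices, edges):
    """Strong reading of 'All dots are directly connected to each other':
    every pair of dots is directly connected (pairwise check, PTIME)."""
    e = {frozenset(p) for p in edges}
    return all(frozenset((u, v)) in e for u, v in combinations(vertices, 2))

def strong_most(vertices, edges):
    """Strong reading of 'Most dots are directly connected to each other':
    some majority of the dots forms a clique. Brute force here; the
    general problem is NP-hard (CLIQUE)."""
    e = {frozenset(p) for p in edges}
    n = len(vertices)
    for size in range(n // 2 + 1, n + 1):
        for subset in combinations(vertices, size):
            if all(frozenset((u, v)) in e
                   for u, v in combinations(subset, 2)):
                return True
    return False
```

With four dots of which only {1, 2, 3} are mutually connected, the strong *all* reading is false but the strong *most* reading is true, since three of four dots form a clique.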
This paper reports research concerning a suitable dialogue model for human-computer debate. In particular, we consider the adoption of Moore's (1993) utilization of Mackenzie's (1979) game DC; the use of computational agents as a test-bed to facilitate evaluation of the proposed model; and the use of the evaluation results as motivation to further develop a dialogue model which can prevent fallacious arguments and common errors. It is anticipated that this work will contribute toward the development of human-computer dialogue, and help to illuminate research issues in the field of dialectics itself.