Replication or even modelling of consciousness in machines requires some clarification and refinement of our concept of consciousness. The design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word “consciousness” has no well-defined meaning, it is used to refer to aspects of human and animal information processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly “architecture-based” concepts. Understanding how humanlike robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of “qualia” turns out to be an “architecture-based” concept, while individual qualia concepts are “architecture-driven”.
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not real, and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
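To make the issue concrete, here is a minimal sketch, assuming a toy automaton and a toy physical system of my own devising (none of this is from the paper): “realizing” an automaton can be read as the existence of a mapping from physical states to automaton states that commutes with the system's dynamics. Putnam's worry is that with weak enough constraints such a mapping can always be gerrymandered; the embedded, fully causal view adds requirements (inputs, outputs, counterfactual-supporting transitions) that block the gerrymander.

```python
# Illustrative sketch only; the automaton, "physical" system, and the
# interpretation mapping are all hypothetical toy examples.

# Abstract automaton: two states and a transition function.
AUTOMATON_STATES = {"q0", "q1"}

def delta(state, symbol):
    """Toggle on input 1, stay put on input 0."""
    if symbol == 1:
        return "q1" if state == "q0" else "q0"
    return state

# Toy "physical" system: its state is an integer; its input a voltage.
def physical_step(phys_state, voltage):
    return phys_state + (1 if voltage > 0.5 else 0)

# A candidate realization maps physical states onto automaton states.
def interpretation(phys_state):
    return "q0" if phys_state % 2 == 0 else "q1"

# The realization claim: interpreting and then stepping the automaton
# agrees with stepping the physical system and then interpreting.
def realizes(phys_state, voltage, symbol):
    lhs = delta(interpretation(phys_state), symbol)
    rhs = interpretation(physical_step(phys_state, voltage))
    return lhs == rhs

# Check the commuting condition over a few state/input pairs.
assert all(realizes(s, v, int(v > 0.5)) for s in range(4) for v in (0.0, 1.0))
```

On the weak notion of causation the paper criticizes, one could cook up an `interpretation` for almost any state sequence after the fact; the stronger view requires the mapping to hold across counterfactual inputs, as the `assert` over all input values begins to suggest.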
It is by now commonly agreed that the proper study of consciousness requires a multidisciplinary approach which focuses on the varieties and dimensions of conscious experience from different angles. This book, which is based on a workshop held at the University of Skövde, Sweden, provides a microcosm of the emerging discipline of consciousness studies and focuses on some important but neglected aspects of consciousness. The book brings together philosophy, psychology, cognitive neuroscience, linguistics, cognitive and computer science, biology, physics, art and the new media. It contains critical studies of subjectivity vs. objectivity, nonconceptuality vs. conceptuality, language, evolutionary aspects, neural correlates, the microphysical level, creativity, the visual arts and dreams. It is suitable as a textbook for a third-year undergraduate or graduate seminar on consciousness studies.
Summary. A distinction is made between two senses of the claim “cognition is computation”. One sense, the opaque reading, takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory. The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The distinction is clarified and defended against charges of circularity and changing the subject. Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered.
The development and deployment of the notion of pre-objective or non-conceptual content for the purposes of intentional explanation requires assistance from a practical and theoretical understanding of computational/robotic systems acting in real time and real space. In particular, the usual "that"-clause specification of content will not work for non-conceptual contents; some other means of specification is required, means that make use of the fact that contents are aspects of embodied and embedded systems. That is, the specification of non-conceptual content should use concepts and insights gained from android design and android epistemology.
A distinction is made between superpositional and non-superpositional quantum computers. The notion of quantum learning systems - quantum computers that modify themselves in order to improve their performance - is introduced. A particular non-superpositional quantum learning system, a quantum neurocomputer, is described: a conventional neural network implemented in a system which is a variation on the familiar two-slit apparatus from quantum physics. This is followed by a discussion of the advantages that quantum computers in general, and quantum neurocomputers in particular, might bring, not only to our search for more powerful computational systems, but also to our search for greater understanding of the brain, the mind, and quantum physics itself.
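For readers who want the physical picture behind the apparatus, the following is a standard textbook sketch of the two-slit intensity pattern (ordinary interference optics, not the paper's neurocomputer design; all parameter values are illustrative):

```python
import numpy as np

# Two-slit interference, small-angle approximation:
#   I(x) ~ cos^2(pi * d * x / (lambda * L))
# All numbers below are illustrative, not taken from the paper.

wavelength  = 500e-9   # light wavelength in metres
slit_sep    = 1e-4     # distance between the two slits, metres
screen_dist = 1.0      # slit-to-screen distance, metres

x = np.linspace(-0.02, 0.02, 9)            # positions on the screen
phase = np.pi * slit_sep * x / (wavelength * screen_dist)
intensity = np.cos(phase) ** 2             # normalized intensity

for xi, Ii in zip(x, intensity):
    print(f"x = {xi:+.3f} m  ->  relative intensity {Ii:.3f}")
```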
Searle (1980) constructed the Chinese Room (CR) to argue against what he called "Strong AI": the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to Searle. In particular, I use the insight of the English Reply to contend that Searle's Chinese Room cannot argue against what I call the claim of "Weak Strong AI": there are some cases of understanding that a computer can achieve solely by virtue of that computer running a program. I refute several objections to my defense of the Weak Strong AI thesis.
It is claimed that there are pre-objective phenomena, which cognitive science should explain by employing the notion of non-conceptual representational content. It is argued that a match between parallel distributed processing (PDP) and non-conceptual content (NCC) not only provides a means of refuting recent criticisms of PDP as a cognitive architecture; it also provides a vehicle for NCC that is required by naturalism. A connectionist cognitive mapping algorithm is used as a case study to examine the affinities between PDP and NCC.
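As a concrete (and heavily simplified) illustration of the kind of vehicle at issue, here is a minimal PDP-style forward pass, assuming a tiny randomly weighted network; it is not the paper's cognitive mapping algorithm, only a sketch of how content can be carried by a distributed pattern of activation rather than by discrete, concept-sized symbols:

```python
import numpy as np

# Illustrative only: a tiny two-layer network with random weights.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 2))   # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(stimulus):
    """Map an 8-dimensional stimulus to a 2-dimensional response.
    The hidden vector is the 'distributed representation': no single
    unit stands for a concept, yet the pattern as a whole varies
    systematically with the input."""
    hidden = sigmoid(stimulus @ W1)
    return hidden, sigmoid(hidden @ W2)

hidden, response = forward(rng.normal(size=8))
print("distributed representation:", np.round(hidden, 3))
print("response:", np.round(response, 3))
```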
(1) Van Gelder's concession that the dynamical hypothesis is not in opposition to computation in general does not agree well with his anticomputational stance. (2) There are problems with the claim that dynamical systems allow for nonrepresentational aspects of computation in a way in which digital computation cannot. (3) There are two senses of the “cognition is computation” claim, and van Gelder argues against only one of them. (4) Dynamical systems as characterized in the target article share problems of universal realizability with formal notions of computation, but differ in that there is no solution available for them. (5) The dynamical hypothesis cannot tell us what cognition is, because instantiating a particular dynamical system is neither necessary nor sufficient for being a cognitive agent.
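The contrast at stake can be gestured at with a toy example (mine, not van Gelder's): a dynamical system evolves under a differential equation in continuous time, while a digital computation proceeds by discrete, rule-governed steps.

```python
import numpy as np

# Toy contrast, for illustration only.

def dynamical_trajectory(x0, dt=0.01, steps=500):
    """Euler-integrate dx/dt = -x, a minimal continuous dynamical
    system: the state decays smoothly toward zero."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-xs[-1]))
    return np.array(xs)

def digital_computation(n):
    """A discrete, stepwise computation: counting down to zero."""
    trace = []
    while n > 0:
        trace.append(n)
        n -= 1
    return trace

print(dynamical_trajectory(1.0)[-1])   # smooth decay toward 0
print(digital_computation(5))          # discrete steps: [5, 4, 3, 2, 1]
```

Point (4) above then amounts to the observation that nothing in the mathematics of either toy system fixes which physical systems count as instantiating it; the realizability worry cuts both ways.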
It is argued that standard arguments for the Externalism of mental states do not succeed in the case of pre-linguistic mental states. Further, it is noted that standard arguments for Internalism appeal to the principle that our individuation of mental states should be driven by what states are explanatory in our best cognitive science. This principle is used against the Internalist to reject the necessity of narrow individuation of mental states, even in the pre-linguistic case. This is done by showing how the explanation of some phenomena requires quantification over broadly-individuated, world-involving states; sometimes externalism is required. Although these illustrative phenomena are not mental, they are enough to show the general argumentative strategy to be incorrect: scientific explanation does not require narrowly-individuated states.
have context-sensitive constituents, but rather because they sometimes have no constituents at all. The argument to be rejected depends on the assumption that one can only assign propositional contents to representations if one starts by assigning sub-propositional contents to atomic representations. I give some philosophical arguments and present a counterexample to show that this assumption is mistaken.
Animals and robots perceiving and acting in a world require an ontology that accommodates entities, processes, states of affairs, etc., in their environment. If the perceived environment includes information-processing systems, the ontology should reflect that. Scientists studying such systems need an ontology that includes the first-order ontology characterising physical phenomena, the second-order ontology characterising perceivers of physical phenomena, and a third-order ontology characterising perceivers of perceivers, including introspectors. We argue that second- and third-order ontologies refer to contents of virtual machines and examine requirements for scientific investigation of combined virtual and physical machines, such as animals and robots. We show how the CogAff architecture schema, combining reactive, deliberative, and meta-management categories, provides a first draft schematic third-order ontology for describing a wide range of natural and artificial agents. Many previously proposed architectures use only a subset of CogAff, including subsumption architectures, contention-scheduling systems, architectures with 'executive functions' and a variety of types of 'Omega' architectures. Adding a multiply-connected, fast-acting 'alarm' mechanism within the CogAff framework accounts for several varieties of emotions. H-CogAff, a special case of CogAff, is postulated as a minimal architecture specification for a human-like system. We illustrate use of the CogAff schema in comparing H-CogAff with Clarion, a well-known architecture. One implication is that reliance on concepts tied to observation and experiment can harmfully restrict explanatory theorising, since what an information processor is doing cannot, in general, be determined by using the standard observational techniques of the physical sciences or laboratory experiments. Like theoretical physics, cognitive science needs to be highly speculative to make progress.
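As an aid to readers unfamiliar with layered agent architectures, here is an illustrative sketch of the CogAff schema's three categories plus a fast-acting alarm; the class and method names are my own hypothetical shorthand, not code from the paper:

```python
# Hypothetical sketch of the reactive / deliberative / meta-management
# layering, with a multiply-connected alarm that can interrupt all
# layers at once. Names and behaviours are illustrative only.

class ReactiveLayer:
    def step(self, percept):
        # Fast, hard-wired stimulus-response mappings.
        return f"reflex response to {percept}"

class DeliberativeLayer:
    def step(self, percept):
        # Slower 'what-if' reasoning over explicit representations.
        return f"plan built around {percept}"

class MetaManagementLayer:
    def step(self, self_report):
        # Monitoring and control of the agent's own processing.
        return f"introspective note: {self_report}"

class Alarm:
    """Fast-acting interrupt connected to every layer; redirecting
    all processing at once is one way the schema accounts for
    several varieties of emotion."""
    def triggered(self, percept):
        return percept == "looming predator"

class CogAffAgent:
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.meta = MetaManagementLayer()
        self.alarm = Alarm()

    def step(self, percept):
        if self.alarm.triggered(percept):
            return "global interrupt: all layers redirected to escape"
        reaction = self.reactive.step(percept)
        plan = self.deliberative.step(percept)
        note = self.meta.step(plan)
        return reaction, plan, note

agent = CogAffAgent()
print(agent.step("food ahead"))         # all three layers contribute
print(agent.step("looming predator"))   # alarm overrides them all
```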
"Let us call whoever invented the zip "Julius"." With this stipulation, Gareth Evans introduced "Julius" into the language as one of a category of terms that seem to lie somewhere between definite descriptions (such as "whoever invented the zip") and proper names (such as "John", or "Julius" as usually used) (Evans 1982: 31). He dubbed these terms "descriptive names"1, and used them as a foil against which to test several theories of reference: Frege's, Russell's, and his own. I want to (...) look at some tensions in the first two chapters of The Varieties of Reference, tensions in Evans' account of singular terms that become apparent his account of descriptive names in particular. Specifically, I will concentrate on his claim that although descriptive names are referring expressions, they are not Russellian terms (i. e., terms which cannot contribute to the expression of a thought when they lack a referent). A recurring theme in this paper, and perhaps its sole point of interest for those not directly concerned with how to account for singular terms, is an attempt to place the blame for Evans' difficulties with an aspect of his thinking and method which I have referred to as "anti-realism". This might be confusing, as the aspect I am criticising is often of a vague and general sort, more akin to the ancient idea that "man is the measure of all things" than to any of the technical modern positions for which the term "anti-realism" is now normally used. But to refer to this aspect as "Protagorean" would suggest that I am accusing Evans of having been some kind of relativist, which I have no wish to do. Furthermore, there are times when the aspect does take a form which has more similarities to than differences from conventional notions of anti-realism. (shrink)
The scientific field of Artificial Intelligence (AI) began in the 1950s, but the concept of artificial intelligence, the idea of something with mind-like attributes, predates it by centuries. This historically rich concept has served as a blueprint for research into intelligent machines. But it also has staggering implications for our notions of who we are: our psychology, biology, philosophy, technology and society. This reference work provides scholars in both the humanities and the sciences with the material essential for charting the development of this concept. The set brings together:
* primary texts from antiquity to the present, including the crucial foundational texts which defined the field of AI
* historical accounts, including both comprehensive overviews and detailed snapshots of key periods
* secondary material discussing the intellectual issues and implications which place the concept in a wider context.
Kasm does not offer any concept of proof which is regulative for all metaphysics, for he is convinced that each metaphysical approach requires its own proper logic and methodology. Within this pluralistic framework he seeks to discern the structure of formal truth as expressed in the concept of proof inherent in various metaphysical approaches.--L. S. F.
Science has always strived for objectivity, for a "view from nowhere" that is not marred by ideology or personal preferences. That is a lofty ideal toward which perhaps it makes sense to strive, but it is hardly the reality. This collection of thirteen essays assembled by Denis R. Alexander and Ronald L. Numbers ought to give much pause to scientists and the public at large, though historians, sociologists and philosophers of science will hardly be surprised by the material covered here.