Replication or even modelling of consciousness in machines requires some clarifications and refinements of our concept of consciousness. Design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word “consciousness” has no well-defined meaning, it is used to refer to aspects of human and animal information processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly “architecture-based” concepts. Understanding how humanlike robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of “qualia” turns out to be an “architecture-based” concept, while individual qualia concepts are “architecture-driven”.
Some have suggested that there is no fact of the matter as to whether or not a particular physical system realizes a particular computational description. This suggestion has been taken to imply that computational states are not real, and cannot, for example, provide a foundation for the cognitive sciences. In particular, Putnam has argued that every ordinary open physical system realizes every abstract finite automaton, implying that the fact that a particular computational characterization applies to a physical system does not tell one anything about the nature of that system. Putnam's argument is scrutinized, and found inadequate because, among other things, it employs a notion of causation that is too weak. I argue that if one's view of computation involves embeddedness (inputs and outputs) and full causality, one can avoid the universal realizability results. Therefore, the fact that a particular system realizes a particular automaton is not a vacuous one, and is often explanatory. Furthermore, I claim that computation would not necessarily be an explanatorily vacuous notion even if it were universally realizable.
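The point that realization becomes a substantive, falsifiable claim once inputs, outputs and full causal structure are required can be illustrated with a small sketch. (The automaton, candidate system and labellings below are hypothetical examples, not taken from the paper.)

```python
# Illustrative sketch: checking whether a labelling of a candidate system's
# states realizes an abstract finite automaton. With inputs fixed, the
# labelling must commute with the transition dynamics, so most labellings fail.

# Abstract automaton: a parity checker over inputs {0, 1}.
def fsa_step(state, inp):
    return state if inp == 0 else ("odd" if state == "even" else "even")

# Candidate "physical" system: four microstates with their own dynamics.
def sys_step(micro, inp):
    return (micro + inp) % 4

def realizes(labelling, inputs):
    """Does `labelling` (microstate -> automaton state) commute with the dynamics?"""
    for micro, abstract in labelling.items():
        for inp in inputs:
            if labelling[sys_step(micro, inp)] != fsa_step(abstract, inp):
                return False
    return True

good = {0: "even", 1: "odd", 2: "even", 3: "odd"}
bad = {0: "even", 1: "even", 2: "odd", 3: "odd"}
print(realizes(good, [0, 1]))  # True: this labelling respects the dynamics
print(realizes(bad, [0, 1]))   # False: an arbitrary labelling generally fails
```

On an input-free, acausal view of computation, by contrast, almost any labelling can be gerrymandered to fit, which is the weakness in Putnam's argument that the paper targets.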
The term “synthetic phenomenology” refers to: 1) any attempt to characterize the phenomenal states possessed by, or modeled by, an artefact; or 2) any attempt to use an artefact to help specify phenomenal states. The notion of synthetic phenomenology is clarified, and distinguished from some related notions. It is argued that much work in machine consciousness would benefit from being more cognizant of the need for synthetic phenomenology of the first type, and of the possible forms it may take. It is then argued that synthetic phenomenology of the second type looks set to resolve some problems confronted by standard, non-synthetic attempts at characterizing phenomenal states. An example of the second form of synthetic phenomenology is given.
Not all research in machine consciousness aims to instantiate phenomenal states in artefacts. For example, one can use artefacts that do not themselves have phenomenal states, merely to simulate or model organisms that do. Nevertheless, one might refer to all of these pursuits -- instantiating, simulating or modelling phenomenal states in an artefact -- as 'synthetic phenomenality'. But there is another way in which artificial agents (be they simulated or real) may play a crucial role in understanding or creating consciousness: 'synthetic phenomenology'. Explanations involving specific experiential events require a means of specifying the contents of experience; not all of them can be specified linguistically. One alternative, at least for the case of visual experience, is to use depictions that either evoke or refer to the content of the experience. Practical considerations concerning the generation and integration of such depictions argue in favour of a synthetic approach: the generation of depictions through the use of an embodied, perceiving and acting agent, either virtual or real. Synthetic phenomenology, then, is the attempt to use the states, interactions and capacities of an artificial agent for the purpose of specifying the contents of conscious experience. This paper takes the first steps toward seeing how one might use a robot to specify the non-conceptual content of the visual experience of an (hypothetical) organism that the robot models.
Mike Anderson has given us a thoughtful and useful field guide: not in the genre of a bird-watcher’s guide which is carried in the field and which contains detailed descriptions of possible sightings, but in the sense of a guide to a field (in this case embodied cognition) which aims to identify that field’s general principles and properties. I’d like to make some comments that will hopefully complement Anderson’s work, highlighting points of agreement and disagreement between his view of the field and my own, and acting as a devil’s advocate in places where further discussion seems to be required. Given the venue for this guide, we can safely restrict the discussion to embodied artificial intelligence (EAI), even if such work draws on notions of embodied cognition.
…replicated by artificial intelligence (AI). The first-personal, subjective, what-it-is-like-to-be-something nature of consciousness is thought to be untouchable by the computations, algorithms, processing and functions of AI methods. Since AI is the most promising avenue toward artificial consciousness (AC), the conclusion many draw is that AC is…
This paper seeks to identify, clarify, and perhaps rehabilitate the virtual reality metaphor as applied to the goal of understanding consciousness. Some proponents of the metaphor apply it in a way that implies a representational view of experience of a particular, extreme form that is indirect, internal and inactive (what we call “presentational virtualism”). In opposition to this is an application of the metaphor that eschews representation, instead preferring to view experience as direct, external and enactive (“enactive virtualism”). This paper seeks to examine some of the strengths and weaknesses of these virtuality-based positions in order to assist the development of a related, but independent view of experience: virtualist representationalism. Like presentational virtualism, this third view is representational, but like enactive virtualism, it places action centre-stage, and does not require, in accounting for the richness of visual experience, global representational “snapshots” corresponding to the entire visual field to be tokened at any one time.
Summary. A distinction is made between two senses of the claim “cognition is computation”. One sense, the opaque reading, takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory. The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation. The distinction is clarified and defended against charges of circularity and changing the subject. Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered.
The development and deployment of the notion of pre-objective or non-conceptual content for the purposes of intentional explanation requires assistance from a practical and theoretical understanding of computational/robotic systems acting in real-time and real-space. In particular, the usual "that"-clause specification of content will not work for non-conceptual contents; some other means of specification is required, one that makes use of the fact that contents are aspects of embodied and embedded systems. That is, the specification of non-conceptual content should use concepts and insights gained from android design and android epistemology.
It is suggested that some limitations of current designs for medical AI systems stem from the failure of those designs to address issues of artificial consciousness. Consciousness would appear to play a key role in the expertise, particularly the moral expertise, of human medical agents, including, for example, autonomous weighting of options in diagnosis; planning treatment; use of imaginative creativity to generate courses of action; sensorimotor flexibility and sensitivity; empathetic and morally appropriate responsiveness; and so on. Thus, it is argued, a plausible design constraint for a successful ethical machine medical or care agent is for it to at least model, if not reproduce, relevant aspects of consciousness and associated abilities. In order to provide theoretical grounding for such an enterprise we examine some key philosophical issues that concern the machine modelling of consciousness and ethics, and we show how questions relating to the first research goal are relevant to medical machine ethics. We believe that this will overcome a blanket skepticism concerning the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It is thus argued that it would be prudent for designers of MME agents to reflect on issues to do with consciousness and medical expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.
A connectionist system that is capable of learning about the spatial structure of a simple world is used for the purposes of synthetic epistemology: the creation and analysis of artificial systems in order to clarify philosophical issues that arise in the explanation of how agents, both natural and artificial, represent the world. In this case, the issues to be clarified focus on the content of representational states that exist prior to a fully objective understanding of a spatial domain. In particular, the criticisms of Chrisley (1993) that were raised in Holland (1994) are addressed: how can we determine that a system’s spatial representations are more objective than before? And under what conditions (tasks, training regimes, environments) do such increases in objectivity occur? After analysing the results of experiments that attempt to shed light on these questions, the study concludes by comparing and contrasting this work with related research.
While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper’s review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!
A distinction is made between superpositional and non-superpositional quantum computers. The notion of quantum learning systems - quantum computers that modify themselves in order to improve their performance - is introduced. A particular non-superpositional quantum learning system, a quantum neurocomputer, is described: a conventional neural network implemented in a system which is a variation on the familiar two-slit apparatus from quantum physics. This is followed by a discussion of the advantages that quantum computers in general, and quantum neurocomputers in particular, might bring, not only to our search for more powerful computational systems, but also to our search for greater understanding of the brain, the mind, and quantum physics itself.
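For readers unfamiliar with the two-slit apparatus, the interference it exhibits can be sketched in a few lines. (The amplitude magnitudes and phases below are illustrative assumptions; this is not the paper's neurocomputer design.)

```python
import cmath

# Illustrative two-slit interference: amplitudes through the two slits add
# *before* squaring, so the detection probability differs from the classical
# sum of probabilities. Magnitudes of 0.5 per slit are an arbitrary choice.

def detection_probability(phase1, phase2):
    a1 = 0.5 * cmath.exp(1j * phase1)  # amplitude via slit 1 (hypothetical)
    a2 = 0.5 * cmath.exp(1j * phase2)  # amplitude via slit 2 (hypothetical)
    return abs(a1 + a2) ** 2

print(detection_probability(0.0, 0.0))       # 1.0: constructive interference
print(detection_probability(0.0, cmath.pi))  # effectively 0: destructive interference
```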
Paintings are usually paintings of things: a room in a palace, a princess, a dog. But what would it be to paint not those things, but the experience of seeing those things? Las Meninas is sufficiently sophisticated and masterfully executed to help us explore this question. Of course, there are many kinds of paintings: some abstract, some conceptual, some with more traditional subjects. Let us start with a focus on naturalistically depictive paintings: paintings that aim to cause an experience in the viewer that is similar to the experience the viewer might have were they to see, in a way not mediated by paint, the subject of the painting. Of course, many or even most paintings do not strictly adhere to this aim; indeed, their artistry and expressiveness often consist in the ways in which this aim is subverted. For example, no viewer of the scene that Las Meninas depicts -- not even King Philip IV and Queen Mariana themselves -- would see what Velasquez paints in the mirror on the back wall. Other artists, such as Escher and Magritte, are even more blatant in their transgression of naturalism. But even in such cases, the aim of naturalistic depiction is the departure point for the aesthetic journey of perception and meaning. Asking our question is a natural consequence of rejecting dualism: if experiences are as much a part of the natural world as canvases, courtiers and chamberlains, then they, too, should be capable of being painted. On the other hand, only the visible can be depicted in the sense described above, and rejecting dualism does not bring with it the implication that everything that is, is visible. One answer to our question, then, is pessimistic: there can be no painting of an experience, because experiences cannot be seen. Unlike the Infanta Margarita, and like justice, the number two, or feudal obligation, experiences, on this view, are not visible. But is this pessimism tenable?
Wittgenstein writes: 'The timidity does not seem to be merely associated, outwardly connected, with the face; but fear is alive there, alive, in the features'. Similarly, McDowell maintains that we see another's pain in their expression, and their behaviour. To think otherwise invites solipsism.
Searle (1980) constructed the Chinese Room (CR) to argue against what he called “Strong AI”: the claim that a computer can understand by virtue of running a program of the right sort. Margaret Boden (1990), in giving the English Reply to the Chinese Room argument, has pointed out that there is understanding in the Chinese Room: the understanding required to recognize the symbols, the understanding of English required to read the rulebook, etc. I elaborate on and defend this response to Searle. In particular, I use the insight of the English Reply to contend that Searle's Chinese Room cannot argue against what I call the claim of “Weak Strong AI”: there are some cases of understanding that a computer can achieve solely by virtue of that computer running a program. I refute several objections to my defense of the Weak Strong AI thesis.
It is by now commonly agreed that the proper study of consciousness requires a multidisciplinary approach which focuses on the varieties and dimensions of conscious experience from different angles. This book, which is based on a workshop held at the University of Skövde, Sweden, provides a microcosm of the emerging discipline of consciousness studies and focuses on some important but neglected aspects of consciousness. The book brings together philosophy, psychology, cognitive neuroscience, linguistics, cognitive and computer science, biology, physics, art and the new media. It contains critical studies of subjectivity vs objectivity, nonconceptuality vs conceptuality, language, evolutionary aspects, neural correlates, microphysical level, creativity, visual arts and dreams. It is suitable as a text-book for a third-year undergraduate or a graduate seminar on consciousness studies.
This article describes a heuristic argument for understanding certain physical systems in terms of properties that resemble the beliefs and goals of folk psychology. The argument rests on very simple assumptions. The core of the argument is that predictions about certain events can legitimately be based on assumptions about later events, resembling Aristotelian ‘final causation’; however, more nuanced causal entities must be introduced into these types of explanation in order for them to remain consistent with a causally local Universe.
The scientific field of Artificial Intelligence (AI) began in the 1950s, but the concept of artificial intelligence, the idea of something with mind-like attributes, predates it by centuries. This historically rich concept has served as a blueprint for the research into intelligent machines. But it also has staggering implications for our notions of who we are: our psychology, biology, philosophy, technology and society. This reference work provides scholars in both the humanities and the sciences with the material essential for charting the development of this concept. The set brings together:
* primary texts from antiquity to the present, including the crucial foundational texts which defined the field of AI;
* historical accounts, including both comprehensive overviews and detailed snapshots of key periods;
* secondary material discussing the intellectual issues and implications which place the concept in a wider context.
The European Association for Cognitive Systems is the association resulting from the EUCog network, which has been active since 2006. It has ca. 1000 members and is currently chaired by Vincent C. Müller. We ran our annual conference on December 8–9, 2016, kindly hosted by the Technical University of Vienna with Markus Vincze as local chair. The invited speakers were David Vernon and Paul F.M.J. Verschure. Out of the 49 submissions for the meeting, we accepted 18 as papers and 25 as posters (after double-blind reviewing). Papers are published here as “full papers” or “short papers” while posters are published here as “short papers” or “abstracts”. Some of the papers presented at the conference will be published in a separate special volume on ‘Cognitive Robot Architectures’ with the journal Cognitive Systems Research. - RC, VCM, YS, MV.
Some foundational conceptual issues concerning anticipatory systems are identified and discussed: 1) The doubly temporal nature of anticipation is noted: anticipations are directed toward one time, and exist at another; 2) Anticipatory systems can be open: they can perturb and be perturbed by states external to the system; 3) Anticipation may be facilitated by a system modeling the relation between its own output, its environment, and its future input; 4) Anticipations must be a part of the system whose anticipations they are. Each of these points is made more precise by considering what changes they require to be made to the basic equation characterising anticipatory systems. In addition, some philosophical questions concerning the content of anticipatory representations are considered.
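Points 1 and 3 can be sketched minimally as follows, assuming a toy environment and a hypothetical control rule (this is not the paper's formal equation, only an illustration of a system acting now on the output of an internal model run ahead of real time):

```python
# Illustrative anticipatory controller. The system holds an internal model of
# its environment, evaluates that model at a future time, and acts *now* on
# the predicted *future* input: the anticipation is directed toward one time
# while existing at another. All dynamics here are hypothetical.

def environment(t):
    # True environment dynamics (known to the system only via its model).
    return t * 2

def model(t):
    # The system's internal model of the environment. Here it happens to be
    # accurate; nothing guarantees that in general.
    return t * 2

def anticipatory_act(t_now, horizon):
    predicted_future_input = model(t_now + horizon)  # prediction made at t_now
    # The action taken at t_now depends on the modelled state at t_now + horizon,
    # e.g. pre-emptive compensation for the expected input.
    return -predicted_future_input

action = anticipatory_act(t_now=3, horizon=2)
print(action)  # -10: compensates now for the input expected at t = 5
```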
Previous work [Chrisley & Sloman, 2016, 2017] has argued that a capacity for certain kinds of meta-knowledge is central to modeling consciousness, especially the recalcitrant aspects of qualia, in computational architectures. After a quick review of that work, this paper presents a novel objection to Frank Jackson's Knowledge Argument (KA) against physicalism, an objection in which such meta-knowledge also plays a central role. It is first shown that the KA's supposition of a person, Mary, who is physically omniscient, and yet who has not experienced seeing red, is logically inconsistent, due to the existence of epistemic blindspots for Mary. It is then shown that even if one makes the KA consistent by supposing a more limited physical omniscience for Mary, this revised argument is invalid. This demonstration is achieved via the construction of a physical fact (a recursive conditional epistemic blindspot) that Mary cannot know before she experiences seeing red for the first time, but which she can know afterward. After considering and refuting some counter-arguments, the paper closes with a discussion of the implications of this argument for machine consciousness, and vice versa.
Velmans’ paper raises three problems concerning mental causation: (1) How can consciousness affect the physical, given that the physical world appears causally closed? (2) How can one be in conscious control of processes of which one is not consciously aware? (3) Conscious experiences appear to come too late to causally affect the processes to which they most obviously relate. In an appendix Velmans gives his reasons for refusing to resolve these problems through adopting the position (which he labels ‘physicalism’) that ‘consciousness is nothing more than a state of the brain’. The rest of the paper, then, is an attempt to solve these problems without embracing a reductionist physicalism. Velmans’ solution to the first problem is ‘ontological monism combined with epistemological dualism’: First-person and third-person accounts are two different ways of knowing the same facts. This kind of reply is not new; it is, for example, a twist on the position expressed in Davidson (1970). True, there are substantial differences: For one, Davidson reconciles the tension between descriptions of events in mentalistic and physicalist language, not between first- and third-person descriptions of states; for another, Davidson actually provides an argument for his position, although to do so he assumes that there are no psycho-physical (or indeed, psycho-psycho) laws, something which I suspect Velmans would be reluctant to do. Nevertheless, they have in common the idea that the causal efficacy of the mental is not at odds with the causal closure of physics, since a mind-involving causal story is just another way of talking about the same facts that a purely physical causal story talks about. This ‘dual-aspect’ approach is a popular tactic for resolving the mind–body problem, but it has some well-known problems, and it is unfortunate Velmans doesn’t reply to these standard objections.
For example, a frequently discussed issue in connection with theories of mental causation is the problem of overdetermination (see, e.g., Unger, 1977; Peacocke, 1979).
"Let us call whoever invented the zip "Julius"." With this stipulation, Gareth Evans introduced "Julius" into the language as one of a category of terms that seem to lie somewhere between definite descriptions (such as "whoever invented the zip") and proper names (such as "John", or "Julius" as usually used) (Evans 1982: 31). He dubbed these terms "descriptive names"1, and used them as a foil against which to test several theories of reference: Frege's, Russell's, and his own. I want to (...) look at some tensions in the first two chapters of The Varieties of Reference, tensions in Evans' account of singular terms that become apparent his account of descriptive names in particular. Specifically, I will concentrate on his claim that although descriptive names are referring expressions, they are not Russellian terms (i. e., terms which cannot contribute to the expression of a thought when they lack a referent). A recurring theme in this paper, and perhaps its sole point of interest for those not directly concerned with how to account for singular terms, is an attempt to place the blame for Evans' difficulties with an aspect of his thinking and method which I have referred to as "anti-realism". This might be confusing, as the aspect I am criticising is often of a vague and general sort, more akin to the ancient idea that "man is the measure of all things" than to any of the technical modern positions for which the term "anti-realism" is now normally used. But to refer to this aspect as "Protagorean" would suggest that I am accusing Evans of having been some kind of relativist, which I have no wish to do. Furthermore, there are times when the aspect does take a form which has more similarities to than differences from conventional notions of anti-realism. (shrink)
(1) Van Gelder's concession that the dynamical hypothesis is not in opposition to computation in general does not agree well with his anticomputational stance. (2) There are problems with the claim that dynamic systems allow for nonrepresentational aspects of computation in a way in which digital computation cannot. (3) There are two senses of the “cognition is computation” claim and van Gelder argues against only one of them. (4) Dynamical systems as characterized in the target article share problems of universal realizability with formal notions of computation, but differ in that there is no solution available for them. (5) The dynamical hypothesis cannot tell us what cognition is, because instantiating a particular dynamical system is neither necessary nor sufficient for being a cognitive agent.
Standard, linguistic means of specifying the content of mental states do so by expressing the content in question. Such means fail when it comes to capturing non-conceptual aspects of visual experience, since no linguistic expression can adequately express such content. One alternative is to use depictions: images that either evoke (reproduce in the recipient) or refer to the content of the experience. Practical considerations concerning the generation and integration of such depictions argue in favour of a synthetic approach: the generation of depictions through the use of an embodied, perceiving and acting agent, either virtual or real. This paper takes the first steps in an investigation as to how one might use a robot to specify the non-conceptual content of the visual experience of an (hypothetical) organism that the robot models.
It is argued that standard arguments for the externalism of mental states do not succeed in the case of pre-linguistic mental states. Further, it is noted that standard arguments for internalism appeal to the principle that our individuation of mental states should be driven by what states are explanatory in our best cognitive science. This principle is used against the internalist to reject the necessity of narrow individuation of mental states, even in the pre-linguistic case. This is done by showing how the explanation of some phenomena requires quantification over broadly-individuated, world-involving states; sometimes externalism is required. Although these illustrative phenomena are not mental, they are enough to show the general argumentative strategy to be incorrect: scientific explanation does not require narrowly-individuated states.
have context-sensitive constituents, but rather because they sometimes have no constituents at all. The argument to be rejected depends on the assumption that one can only assign propositional contents to representations if one starts by assigning sub-propositional contents to atomic representations. I give some philosophical arguments and present a counterexample to show that this assumption is mistaken.
This Neurocomputing special issue is based on selected, expanded and significantly revised versions of papers presented at the Second International Conference on Brain Inspired Cognitive Systems (BICS 2006) held at Lesvos, Greece, from 10 to 14 October 2006. The aim of BICS 2006, which followed the very successful first BICS 2004 held at Stirling, Scotland, was to bring together leading scientists and engineers who use analytic, syntactic and computational methods both to understand the prodigious processing properties of biological systems and, specifically, of the brain, and to exploit such knowledge to advance computational methods towards ever higher levels of cognitive competence. The biennial BICS Conference Series (with BICS 2008 recently held in Sao Luis, Brazil, 24–27 June, and BICS 2010 due to be held in Madrid, Spain) aims to become a major point of contact for research scientists, engineers and practitioners throughout the world in the fields of cognitive and computational systems inspired by the brain and biology. The first paper in this special issue is by Carnell, who presents an analysis of the use of Hebbian and anti-Hebbian spike timing-dependent plasticity (STDP) learning functions within the context of recurrent spiking neural networks. He shows that under specific conditions Hebbian and anti-Hebbian learning can be considered approximately equivalent. Finally, the author demonstrates that such a network habituates to a given stimulus and is capable of detecting subtle variations in the structure of the stimuli itself. Hodge, O’Keefe and Austin present a binary neural shape matcher using Johnson counters and chain codes. They show that images may be matched as whole images or using shape matching. Finally, they demonstrate shape matching using a binary associative-memory neural network to index and match chain codes where the chain code elements are represented by Johnson codes.
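As an illustration of Hebbian and anti-Hebbian STDP learning functions of the kind discussed above, the standard exponential STDP window and its sign-flipped counterpart can be sketched as follows (the amplitudes and time constant below are hypothetical choices, not taken from any of the papers):

```python
import math

# Standard exponential STDP window: the weight change depends on the timing
# difference dt = t_post - t_pre. Causal pairings (pre before post) potentiate;
# acausal pairings depress. Parameter values are illustrative only.
A_PLUS, A_MINUS, TAU = 0.1, 0.12, 20.0  # amplitudes and time constant (ms)

def stdp_hebbian(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)   # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU)      # post before pre: depress

def stdp_anti_hebbian(dt):
    """Anti-Hebbian learning as the sign-flipped window."""
    return -stdp_hebbian(dt)

print(stdp_hebbian(10.0))       # positive: causal pairing strengthens the synapse
print(stdp_hebbian(-10.0))      # negative: acausal pairing weakens it
print(stdp_anti_hebbian(10.0))  # negative: the reverse under anti-Hebbian learning
```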
It is claimed that there are pre-objective phenomena, which cognitive science should explain by employing the notion of non-conceptual representational content. It is argued that a match between parallel distributed processing (PDP) and non-conceptual content (NCC) not only provides a means of refuting recent criticisms of PDP as a cognitive architecture; it also provides a vehicle for NCC that is required by naturalism. A connectionist cognitive mapping algorithm is used as a case study to examine the affinities between PDP and NCC. 