In this article the question is raised whether artificial intelligence has any psychological relevance, i.e. whether it contributes to our knowledge of how the mind/brain works. It is argued that the psychological relevance of artificial intelligence of the symbolic kind is questionable as yet, since there is no indication that the brain structurally resembles or operates like a digital computer. However, artificial intelligence of the connectionist kind may have psychological relevance, not because the brain is a neural network, but because connectionist networks exhibit operating characteristics which mimic operant behavior. Finally, it is concluded that, since most of the work done so far in AI and Law is of the symbolic kind, it has as yet contributed little to our understanding of the legal mind.
The article investigates the interplay of moral rules in computer simulation. The investigation is based on two situations well known to game theory: the prisoner's dilemma and the game of Chicken. The prisoner's dilemma can be taken to represent contractual situations; the game of Chicken represents a competitive situation on the one hand and the provision of a common good on the other. Unlike in the games usually studied in game theory, each player knows the other's strategy. In that way, ever higher levels of reflection are reached reciprocally. Such strategies can be interpreted as moral rules. Artificial morality is related to the discipline of Artificial Life. As in Artificial Life, the use of genetic algorithms suggests itself: rules of behaviour split and reunite as chromosome strings do.
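Since the abstract above turns on mutual knowledge of strategies, a toy rendering may help. The payoff values, strategy names, and the particular conditional rule below are illustrative assumptions, not taken from the article; this is a minimal sketch of transparent play in the two games, not the article's actual simulation:

```python
# Each entry: payoffs[(move_a, move_b)] = (score_a, score_b).
# "C" = cooperate, "D" = defect. Numbers are illustrative, not the article's.

# Prisoner's dilemma: mutual cooperation beats mutual defection,
# but unilateral defection pays best.
pd_payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Chicken: mutual defection (the collision) is the worst outcome for both.
chicken_payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (1, 5),
    ("D", "C"): (5, 1),
    ("D", "D"): (0, 0),
}

def play(payoffs, strategy_a, strategy_b):
    """One round in which each strategy is shown the other's strategy,
    mirroring the article's assumption of mutual strategy knowledge."""
    move_a = strategy_a(strategy_b)
    move_b = strategy_b(strategy_a)
    return payoffs[(move_a, move_b)]

always_cooperate = lambda other: "C"
always_defect = lambda other: "D"

def conditional(other):
    """A transparent strategy: cooperate only with strategies that would
    cooperate with an unconditional cooperator."""
    return "C" if other(always_cooperate) == "C" else "D"

print(play(pd_payoffs, conditional, always_defect))     # (1, 1)
print(play(pd_payoffs, conditional, always_cooperate))  # (3, 3)
```

In the article's genetic-algorithm setting, rules like `conditional` would be encoded as strings that split and recombine across generations; the sketch above fixes only the payoff structures and the mutual-knowledge assumption.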
In this paper I start from a definition of “culture of the artificial” which might be stated by referring to the background of philosophical, methodological, and pragmatic assumptions that characterizes the development of the information-processing analysis of mental processes and of some trends in contemporary cognitive science: in a word, the development of AI as a candidate science of mind. The aim of this paper is to show how (and with what plausibility and limitations) the discovery of the mentioned background might be dated back to a period preceding the cybernetic era, at least the decade 1930–1940. A somewhat detailed analysis of Hull's “robot approach” is therefore given, as well as of some of its independent and later developments. Reprinted in R.L. Chrisley (ed.), Artificial Intelligence: Critical Concepts in Cognitive Science, vol. 1, Routledge, London and New York, 2000, pp. 301-326.
Software agents’ ability to interact within different open systems, designed by different groups, presupposes agreement on an unambiguous definition of a set of concepts used to describe the context of the interaction and the communication language the agents can use. Agents’ interactions ought to allow for reliable expectations about the possible evolution of the system; however, in open systems interacting agents may not conform to predefined specifications. A possible solution is to define interaction environments that include a normative component, with suitable rules to regulate the behaviour of agents. To tackle this problem we propose an application-independent metamodel of artificial institutions that can be used to define open multiagent systems. In our view an artificial institution is made up of an ontology that models the social context of the interaction, a set of authorizations to act on the institutional context, a set of linguistic conventions for the performance of institutional actions, and a system of norms that constrain the agents’ actions.
Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.
Jan Greben criticized fine-tuning by taking seriously the idea that “nature is quantum mechanical”. I argue that this quantum view is limited, and that fine-tuning is real, in the sense that our current physical models require fine-tuning. I then examine and clarify several difficult and fundamental issues raised by Rüdiger Vaas’ comments on Cosmological Artificial Selection.
While the recent special issue of JCS on machine consciousness (Volume 14, Issue 7) was in preparation, a collection of papers on the same topic, entitled Artificial Consciousness and edited by Antonio Chella and Riccardo Manzotti, was published. The editors of the JCS special issue, Ron Chrisley, Robert Clowes and Steve Torrance, thought it would be a timely and productive move to have authors of papers in their collection review the papers in the Chella and Manzotti book, and include these reviews in the special issue of the journal. Eight of the JCS authors (plus Uziel Awret) volunteered to review one or more of the fifteen papers in Artificial Consciousness; these individual reviews were then collected together with a minimal amount of editing to produce a seamless chapter-by-chapter review of the entire book. Because the number and length of contributions to the JCS issue was greater than expected, the collective review of Artificial Consciousness had to be omitted, but here at last it is. Each paper’s review is written by a single author, so any comments made may not reflect the opinions of all nine of the joint authors!
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT (TTT), in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index the TTT to a particular animal and its synthetic correlate. We might therefore have TTTrat, TTTcat, TTTdog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity could be wholly designed in an off-line mode; rather, a considerable period of training in situ would be required. The argument that, since we can pass the TT and our cognitive processes might be implemented as a Turing Machine (TM), a TM that could pass the TT could consequently be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed.
In the United States, the decision of whether to withdraw or continue to provide artificial nutrition and hydration (ANH) for patients in a permanent vegetative state (PVS) is placed largely in the hands of surrogate decision-makers, such as spouses and immediate family members. This practice would seem to be consistent with a strong national emphasis on autonomy and patient-centered healthcare. When there is ambiguity as to the patient's advance wishes, the presumption has been that decisions should weigh in favor of maintaining life, and therefore that it is the withdrawal rather than the continuation of ANH that requires particular justification. I will argue that this default position should be reversed. Instead, I will argue that the burden of justification lies with those who would continue ANH, and that in the absence of knowledge as to the patient's advance wishes, it is better to discontinue ANH. In particular, I will argue that among patients in PVS, there is not a compelling interest in being kept alive; that in general, we commit a worse violation of autonomy by continuing ANH when the patient's wishes are unknown; and that more likely than not, the maintenance of ANH as a bridge to a theoretical future time of recovery goes against the best interests of the patient.
Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most interestingly for us, of AAs). We conclude that there is substantial and important scope, particularly in Computer Ethics, for the concept of moral agent not necessarily exhibiting free will, mental states or responsibility. This complements the more traditional approach, common at least since Montaigne and Descartes, which considers whether or not (artificial) agents have mental states, feelings, emotions and so on. By focussing directly on mind-less morality we are able to avoid that question and also many of the concerns of Artificial Intelligence. A vital component in our approach is the Method of Abstraction for analysing the level of abstraction (LoA) at which an agent is considered to act. The LoA is determined by the way in which one chooses to describe, analyse and discuss a system and its context. The Method of Abstraction is explained in terms of an interface or set of features or observables at a given LoA. Agenthood, and in particular moral agenthood, depends on a LoA. Our guidelines for agenthood are: interactivity (response to stimulus by change of state), autonomy (ability to change state without stimulus) and adaptability (ability to change the transition rules by which state is changed) at a given LoA. Morality may be thought of as a threshold defined on the observables in the interface determining the LoA under consideration. An agent is morally good if its actions all respect that threshold; and it is morally evil if some action violates it.
That view is particularly informative when the agent constitutes a software or digital system, and the observables are numerical. Finally we review the consequences for Computer Ethics of our approach. In conclusion, this approach facilitates the discussion of the morality of agents not only in Cyberspace but also in the biosphere, where animals can be considered moral agents without their having to display free will, emotions or mental states, and in social contexts, where systems like organizations can play the role of moral agents. The primary cost of this facility is the extension of the class of agents and moral agents to embrace AAs.
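The threshold formulation in the abstract above ("an agent is morally good if its actions all respect that threshold") can be rendered as a toy check, which is most natural precisely when the observables are numerical. The threshold value and observable numbers below are invented for illustration and are not from Floridi and Sanders:

```python
# Toy rendering (illustrative, not from the paper) of morality as a
# threshold defined on the numerical observables of an agent's actions
# at a chosen level of abstraction (LoA).
THRESHOLD = 0.0  # hypothetical threshold on the interface's observables

def is_morally_good(action_observables):
    """Morally good: every action respects the threshold."""
    return all(v >= THRESHOLD for v in action_observables)

def is_morally_evil(action_observables):
    """Morally evil: some action violates the threshold."""
    return any(v < THRESHOLD for v in action_observables)

print(is_morally_good([0.2, 1.5, 0.0]))  # True: all actions respect the threshold
print(is_morally_evil([0.2, -0.3]))      # True: one action violates it
```

Note that nothing in the check appeals to free will, mental states or responsibility; that is the point of the mind-less morality proposal.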
Instead of the low-level neurophysiology-mimicking and exploratory programming methods commonly used in the machine consciousness field, the hierarchical Operational Architectonics (OA) framework of brain and mind functioning proposes an alternative conceptual-theoretical framework as a new direction in the area of model-driven machine (robot) consciousness engineering. The unified brain-mind theoretical OA model explicitly captures (though in an informal way) the basic essence of brain functional architecture, which indeed constitutes a theory of consciousness. The OA describes the neurophysiological basis of the phenomenal level of brain organization. In this context the problem of producing man-made “machine” consciousness and “artificial” thought is a matter of duplicating all levels of the operational architectonics hierarchy (with its inherent rules and mechanisms) found in the brain electromagnetic field. We hope that the conceptual-theoretical framework described in this paper will stimulate the interest of mathematicians and/or computer scientists in abstracting and formalizing the principles of the hierarchy of brain operations which are the building blocks for phenomenal consciousness and thought.
In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
In this paper I provide an epistemological context for Artificial Life projects. The insights such projects yield may later be used as a general direction for further Artificial Life implementations. The purpose of such a model is to demonstrate by way of simulation how higher cognitive structures may emerge from the building of invariants by simple sensorimotor beings. By using the bottom-up methodology of Artificial Life, it is hoped to overcome problems that arise from dealing with complex systems, such as the phenomenon of cognition. The research will lead to both epistemological and technical implications. The proposed ALife model is intended to point out the usefulness of an interdisciplinary approach drawing on methods from disciplines such as Artificial Intelligence, Cognitive Science, Theoretical Biology, and Artificial Life. I try to put them in one single context. The epistemological background which is necessary for this purpose comes from the ideas developed in both epistemological and psychological Constructivism. The model differs from other ALife approaches, and is somewhat radical in this sense, as it tries to start on the lowest possible level, i.e. it avoids several a priori assumptions and anthropocentric ascriptions. Due to this characterization, the project may alternatively be viewed as testing the complementary relationship between epistemology and methodology.
The emotions have been one of the most fertile areas of study in psychology, neuroscience, and other cognitive disciplines. Yet as influential as the work in those fields is, it has not yet made its way to the desks of philosophers who study the nature of mind. Passionate Engines unites the two for the first time, providing both a survey of what emotions can tell us about the mind, and an argument for how work in the cognitive disciplines can help us develop new ways of understanding the mind as a whole. Craig DeLancey shows that our best philosophical and scientific understanding of the emotions provides essential insights on key issues in the philosophy of mind and artificial intelligence: intentionality, aesthetics, rationality, action theory, moral psychology, consciousness, ontology and autonomy. He provides an accessible overview of the science of emotion, explaining with minimal jargon the technical issues that arise. The book also offers new ways to understand the mind, suggesting that it is autonomy--and not cognition--that should be the core problem of the philosophy of mind, cognitive science, and artificial intelligence. DeLancey argues that the philosophy of mind has been held back by an impoverished view of naturalism, and that a proper appreciation of the complexity of the sciences of mind, readily demonstrated by the science of emotion, will overcome this. Passionate Engines provides a unique, contemporary view of the link between science and philosophy, offering a bold new way of looking at the mind for scholars in a range of disciplines. Its accessible and refreshing approach will appeal to philosophers, psychologists, computer scientists, others in the cognitive disciplines, and lay people interested in the mind.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
According to the scenario of cosmological artificial selection (CAS) and artificial cosmogenesis, our universe was created and possibly even fine-tuned by cosmic engineers in another universe. This approach is compared to other explanations, and some of its far-reaching problems are discussed.
Floridi and Sanders' seminal work “On the morality of artificial agents” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.
The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the basis of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy in AI and the role of AI in philosophy must be considered. Moreover, this framework is revised and extended in the light of a type of multiagent system devoted to addressing the issue of scientific discovery from both a conceptual and a practical point of view.
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation.
We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
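As a rough, invented illustration of the global-workspace mechanism that LIDA computationally instantiates (the coalition contents, salience numbers, and action values below are assumptions for exposition, not the actual LIDA code): competing coalitions of processes bid for the workspace, and the winner's broadcast biases which action is selected.

```python
# Toy global-workspace sketch (illustrative only; not LIDA itself).
# Bottom-up processes attach salience to what they notice; the most
# salient coalition wins the workspace, and its broadcast content
# determines which action values are consulted.

def broadcast(coalitions):
    """Return the content of the most salient coalition."""
    return max(coalitions, key=lambda c: c["salience"])["content"]

def select_action(content, action_values):
    """Pick the action ranked highest under the broadcast content."""
    return max(action_values[content], key=action_values[content].get)

coalitions = [
    {"content": "obstacle_ahead", "salience": 0.9},
    {"content": "battery_low", "salience": 0.4},
]
action_values = {
    "obstacle_ahead": {"turn": 0.8, "continue": 0.1},
    "battery_low": {"recharge": 0.9, "continue": 0.2},
}
winner = broadcast(coalitions)
print(winner, "->", select_action(winner, action_values))  # obstacle_ahead -> turn
```

On the authors' proposal, ethically relevant factors would enter as further coalitions and as affective weightings on the action values, using the same selection machinery as any other decision.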
Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a set of neural network studies demonstrating that a single statistical mechanism can mimic the apparent discovery of structural regularities, beyond the segmentation of speech. We argue that our results undermine one argument for the MOM hypothesis.
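The statistical mechanism at issue in (a) is often operationalized as transitional probabilities between adjacent syllables: boundaries are posited where the probability of the next syllable dips. The toy syllable stream, the 0.9 cutoff, and the function names below are illustrative assumptions, sketching only the segmentation step, not the induction of grammatical regularities:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for adjacent syllable pairs, the statistic
    widely used for speech segmentation in this literature."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, tps, cutoff=0.9):
    """Posit a word boundary wherever the transitional probability
    falls below the cutoff (the cutoff is an illustrative choice)."""
    words, word = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < cutoff:
            words.append("".join(word))
            word = []
        word.append(b)
    words.append("".join(word))
    return words

# A toy "artificial language": two words ("babibu", "golatu") in varied order,
# so within-word transitions are perfectly predictable but between-word
# transitions are not.
stream = ("ba bi bu go la tu ba bi bu ba bi bu "
          "go la tu ba bi bu go la tu go la tu").split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
```

Within the toy words every transitional probability is 1.0, while across word boundaries it drops (e.g. "bu" is followed by "go" only three times out of four), so the cutoff recovers the word boundaries without any rule-following mechanism.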
A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then discuss ways in which philosophical problems of scepticism are related to the problems faced by knowledge representation. We suggest that some of the methods that philosophers have developed to address the problems of epistemology may be relevant to the problems of representing knowledge within artificial agents.
This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.
Artificial Life (ALife) has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models. The second studies whether or not we can artificially create living things in computational mediums that can be realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is “dry” ALife, the study of living systems “in silico” through the use of computer simulations; the other is “wet” ALife, which uses biological material to realize what has only been simulated on computers; in effect, wet ALife uses biological material as a kind of computer. This is challenging to the field of computer ethics, as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies into wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles that has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications, such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.
This paper provides a new analysis of e-trust, trust occurring in digital contexts among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis of this phenomenon. The analysis first focuses on an agent’s trustworthiness, which is presented as the necessary requirement for e-trust to occur. Then a new definition of e-trust as a second-order property of first-order relations is presented. It is shown that the second-order property of e-trust has the effect of minimising an agent’s effort and commitment in the achievement of a given goal. On this basis, a method is provided for the objective assessment of the levels of e-trust occurring among the artificial agents of a distributed artificial system.
Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research.
The aims of this paper are threefold. First, to show that game-playing (GP), the discipline of Artificial Intelligence (AI) concerned with the development of automated game players, has a strong epistemological relevance within both AI and the vast area of the cognitive sciences. In this context games can be seen as a way of securely reducing (segmenting) real-world complexity, thus creating the laboratory environment necessary for testing the diverse types and facets of intelligence produced by computer models. This paper aims to promote the belief that games represent an excellent tool for the project of computational psychology (CP). Second, to underline how, despite this, GP has mainly adopted an engineering-inspired methodology and in doing so has distorted the framework of cognitive functionalism. Many successes (e.g. chess, checkers) have been achieved by refusing human-like reasoning. AI has appeared to work well despite ignoring an intrinsic motivation: that of creating an explanatory link between machines and mind. Third, to assert that substantial improvements in GP may be obtained in the future only by renewed interest in human-inspired models of reasoning and in other cognitive studies. In fact, if we increase the complexity of games (from NP-completeness to AI-completeness) in order to reproduce real-life problems, computer science techniques reach an impasse. Many of AI’s recent GP experiences can be shown to validate this. The lack of consistent philosophical foundations for cognitive AI and the minimal philosophical commitment of AI investigation are two major reasons that help explain why CP has been overlooked.
The book provides a valuable text for undergraduate and graduate courses on the historical and theoretical issues of Cognitive Science, Artificial Intelligence, Psychology, Neuroscience, and the Philosophy of Mind. The book should also be of interest for researchers in these fields, who will find in it analyses of certain crucial issues in both the earlier and more recent history of their disciplines, as well as interesting overall insights into the current debate on the nature of mind.
This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget as well as Maturana and Varela. In particular recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
The term “Contemplative sciences” refers to an interdisciplinary approach to mind that aims at a better understanding of alternative states of consciousness, like those obtained through deep concentration and meditation, mindfulness and other “superior” or “spiritual” mental states. There is, however, a key discipline missing: artificial intelligence. AI has forgotten its original aim of creating intelligent machines that could help us to understand better what intelligence is, and is now more concerned with pragmatic applications, so almost nobody in the field (...) seems to be interested in joining this new effort of contemplative science. In this paper, I would like to accomplish the following: (1) To give a brief description of the field of “contemplative sciences;” (2) To argue why AI should actively join this new paradigm in the study of the mind; and (3) To set up a research program on artificial wisdom: that is, to design computational systems that can model at least some relevant aspects of human wisdom. (shrink)
Cyberfeminism and Artificial Life examines the construction, manipulation and re-definition of life in contemporary technoscientific culture. It takes a critical political view of the concept of life as information, tracing this through the new biology and the changing discipline of artificial life and its manifestation in art, language, literature, commerce and entertainment. From cloning to computer games, and incorporating an analysis of hardware, software and 'wetware', Sarah Kember demonstrates how this relatively marginal field connects with, and connects up global (...) networks of information systems. As well as offering suggestions for the evolution of [cyber]feminism in Alife environments, the author identifies the emergence of posthumanism: an ethics of the posthuman subject mobilized in the tension between cold war and post-cold war politics, psychological and biological machines, centralized and de-centralized control, top-down and bottom-up processing, autonomous and autopoietic organisms, cloning and transgenesis, species-self and other species. Ultimately, this book aims to re-focus concern on the ethics rather than on the 'nature' of life-as-it-could-be. (shrink)
Artificial language philosophy (also called ‘ideal language philosophy’) is the position that philosophical problems are best solved or dissolved through a reform of language. Its underlying methodology—the development of languages for specific purposes—leads to a conventionalist view of language in general and of concepts in particular. I argue that many philosophical practices can be reinterpreted as applications of artificial language philosophy. In addition, many factually occurring interrelations between the sciences and philosophy of science are justified and clarified (...) by the assumption of an artificial language methodology. Sebastian Lutz, European Journal for Philosophy of Science, pp. 1-23, DOI 10.1007/s13194-011-0042-6. (shrink)
Medieval Arabic algebra is a good example of an artificial language. Yet despite its abstract, formal structure, its utility was restricted to problem solving. Geometry was the branch of mathematics used for expressing theories. While algebra was an art concerned with finding specific unknown numbers, geometry dealt with general magnitudes. Algebra did possess the generality needed to raise it to a more theoretical level—in the ninth century Abū Kāmil reinterpreted the algebraic unknown “thing” to prove a general result. But mathematicians had no motive (...) to rework their theories in algebraic form. Because it offered no advantage over geometry, algebra remained a practical art in both the Islamic world and in Europe until the scientific upheavals of the 17th–18th centuries. (shrink)
This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (...) (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair artificial/natural, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy. (shrink)
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather a considerable period of training in situ would be required. The argument that since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), that consequently a (...) TM that could pass the TT could be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed. (shrink)
Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and (...) important examples of hybrid evil has come to light. In this paper, it is called artificial evil (AE) and a case is made for considering it to complement ME and NE to produce a more adequate taxonomy. By isolating the features that have led to the appearance of AE, cyberspace is characterised as a self-contained environment that forms the essential component in any foundation of the emerging field of Computer Ethics (CE). It is argued that this goes some way towards providing a methodological explanation of why cyberspace is central to so many of CE's concerns; and it is shown how notions of good and evil can be formulated in cyberspace. Of considerable interest is how the propensity for an agent's action to be morally good or evil can be determined even in the absence of biologically sentient participants and thus allows artificial agents not only to perpetrate evil (and for that matter good) but conversely to `receive' or `suffer from' it. The thesis defended is that the notion of entropy structure, which encapsulates human value judgement concerning cyberspace in a formal mathematical definition, is sufficient to achieve this purpose and, moreover, that the concept of AE can be determined formally, by mathematical methods. A consequence of this approach is that the debate on whether CE should be considered unique, and hence developed as a Macroethics, may be viewed, constructively, in an alternative manner.
The case is made that whilst CE issues are not uncontroversially unique, they are sufficiently novel to render inadequate the approach of standard Macroethics such as Utilitarianism and Deontologism and hence to prompt the search for a robust ethical theory that can deal with them successfully. The name Information Ethics (IE) is proposed for that theory. It is argued that the uniqueness of IE is justified by its being non-biologically biased and patient-oriented: IE is an Environmental Macroethics based on the concept of data entity rather than life. It follows that the novelty of CE issues such as AE can be appreciated properly because IE provides a new perspective (though not vice versa). In light of the discussion provided in this paper, it is concluded that Computer Ethics is worthy of independent study because it requires its own application-specific knowledge and is capable of supporting a methodological foundation, Information Ethics. (shrink)
Artificial Intelligence has become big business in the military and in many industries. In spite of this growth there is still no consensus about what AI really is. The major factor responsible for this seems to be the lack of agreement about the relationship between behavior and intelligence. In part, ethical concerns about who determines what counts as intelligence, and how, may be reinforcing this lack of agreement.
I survey four categories of factors that might give a digital mind, such as an upload or an artificial general intelligence, an advantage over humans. Hardware advantages include greater serial speeds and greater parallel speeds. Self-improvement advantages include improvement of algorithms, design of new mental modules, and modification of motivational system. Co-operative advantages include copyability, perfect co-operation, improved communication, and transfer of skills. Human handicaps include computational limitations and faulty heuristics, human-centric biases, and socially motivated cognition. The shape of (...) hardware growth curves, as well as the ease of modifying minds, are found to have a major impact on how quickly a digital mind may take advantage of these factors. (shrink)
The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that (...) purports to do without the distinction entirely, while using agent-centered concepts all the way. It is argued that the rejection of agent-level explanations in favour of mechanistic ones is due to an unmotivated need to choose between representationalism and eliminativism. The dilemma is a false one if the possibility of a radical form of externalism is considered. (shrink)
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same (...) time such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process. (shrink)
The term “the artificial” can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...) the dominant philosophy of logical positivism. By contrast, artificial intelligence (AI) adopted an anti-positivist position which left it marginalised until the 1980s when two camps emerged: technical AI which reverted to positivism, and strong AI which reified intelligence. Strong AI's commitment to the computer as a symbol processing machine and its use of models links it to late-modernism. The more directly experiential Virtual Reality (VR) more closely reflects the contemporary cultural climate of postmodernism. It is VR, rather than AI, that is more likely to form the basis of a culture of the artificial. (shrink)