About this topic
Summary: Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency and so offer tools for the investigation of consciousness and other aspects of cognition contributing to moral status (either ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and to provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, again with deep implications for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by ever more intelligent machines. This alone raises many moral and ethical concerns. For example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the ethics of AI.
Key works: Bostrom (manuscript); Müller 2014
Introductions: Müller 2013; White 2015; Gunkel 2012
Material to categorize
  1. Consensus and Authenticity in Representation: Simulation as Participative Theatre. [REVIEW] Michael T. Black - 1993 - AI and Society 7 (1):40-51.
    Representation was invented as an issue during the 17th century in response to specific developments in the technology of simulation. It remains an issue of central importance today in the design of information systems and approaches to artificial intelligence. Our cultural legacy of thought about representation is enormous but as inhibiting as it is productive. The challenge to designers of representative technology is to reshape this legacy by enlarging the politics rather than the technics of simulation.
  2. Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate. Marcello Guarini - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge Univ. Press.
  3. Design, Development, and Evaluation of an Interactive Simulator for Engineering Ethics Education (SEEE). Christopher A. Chung & Michael Alfred - 2009 - Science and Engineering Ethics 15 (2):189-199.
    Societal pressures, accreditation organizations, and licensing agencies are emphasizing the importance of ethics in the engineering curriculum. Traditionally, this subject has been taught using dogma, heuristics, and case study approaches. Most recently a number of organizations have sought to increase the utility of these approaches by utilizing the Internet. Resources from these organizations include on-line courses and tests, videos, and DVDs. While these individual approaches provide a foundation on which to base engineering ethics, they may be limited in developing a (...)
  4. The Energetic Dimension of Emotions: An Evolution-Based Computer Simulation with General Implications. Luc Ciompi & Martin Baatz - 2008 - Biological Theory 3 (1):42-50.
    Viewed from an evolutionary standpoint, emotions can be understood as situation-specific patterns of energy consumption related to behaviors that have been selected by evolution for their survival value, such as environmental exploration, flight or fight, and socialization. In the present article, the energy linked with emotions is investigated by a strictly energy-based simulation of the evolution of simple autonomous agents provided with random cognitive and motor capacities and operating among food and predators. Emotions are translated into evolving patterns of energy (...)
  5. Linguistic Anchors in the Sea of Thought? Andy Clark - 1996 - Pragmatics and Cognition 4 (1):93-103.
    Andy Clark is currently Professor of Philosophy and Director of the Philosophy/Neuroscience/Psychology program at Washington University in St. Louis, Missouri. He is the author of two books, MICROCOGNITION (MIT Press/Bradford Books, 1989) and ASSOCIATIVE ENGINES (MIT Press/Bradford Books, 1993), as well as numerous papers and four edited volumes. He is an ex-committee member of the British Society for the Philosophy of Science and of the Society for Artificial Intelligence and the Simulation of Behavior. Awards include a visiting Fellowship at (...)
  6. Dialogues in Natural Language with Guru, a Psychologic Inference Engine. Kenneth M. Colby, Peter M. Colby & Robert J. Stoller - 1990 - Philosophical Psychology 3 (2 & 3):171-186.
    The aim of this project was to explore the possibility of constructing a psychologic inference engine that might enhance introspective self-awareness by delivering inferences about a user based on what he said in interactive dialogues about his closest opposite-sex relation. To implement this aim, we developed a computer program (guru) with the capacity to simulate human conversation in colloquial natural language. The psychologic inferences offered represent the authors' simulations of their commonsense psychology responses to expected user-input expressions. The heuristics of (...)
  7. DARES: Documents Annotation and Recombining System—Application to the European Law. [REVIEW] Fady Farah & François Rousselot - 2007 - Artificial Intelligence and Law 15 (2):83-102.
    Accessing legislation via the Internet is more and more frequent. As a result, systems that allow consultation of law texts are becoming more and more powerful. This paper presents DARES, a generic system which can be adapted to any domain to handle document production needs. It is based on an annotation engine, which allows obtaining the XML document inputs required by the system, and on an XML fragment recombining system. The latter operates using a toolbox of fragment manipulation functions to generate (...)
  8. On the Role of AI in the Ongoing Paradigm Shift Within the Cognitive Sciences. Tom Froese - 2007 - In M. Lungarella (ed.), 50 Years of AI. Springer Verlag.
    This paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of (...)
  9. On the Re-Materialization of the Virtual. Ismo Kantola - 2013 - AI and Society 28 (2):189-198.
    The so-called new economy based on the global network of digitalized communication was welcomed as a platform of innovations and as a vehicle of advancement of democracy. The concept of virtuality captures the essence of the new economy: efficiency and free access. In practice, the new economy has developed into a heterogenic entity dominated by practices such as propagation of trust and commitment to standards and standard-like technological solutions; entrenchment of locally strategic subsystems; and surveillance of unwanted behavior. Five empirical cases (...)
  10. Special Issue on Social Impact of AI: Killer Robots or Friendly Fridges. [REVIEW] Greg Michaelson & Ruth Aylett - 2011 - AI and Society 26 (4):317-318.
  11. Evolution: The Computer Systems Engineer Designing Minds. Aaron Sloman - 2011 - Avant: Trends in Interdisciplinary Studies 2 (2):45–69.
    What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery which (...)
  12. Knowledge-Based Systems and Issues of Integration: A Commercial Perspective. [REVIEW] Karl M. Wiig - 1988 - AI and Society 2 (3):209-233.
    Commercial applications of knowledge-based systems are changing from an embryonic to a growth business. Knowledge is classified by levels and types to differentiate various knowledge-based systems. Applications are categorized by size, generic types, and degree of intelligence to establish a framework for discussion of progress and implications. A few significant commercial applications are identified and perspectives and implications of these and other systems are discussed. Perspectives relate to development paths, delivery modes, types of integration, and resource requirements. Discussion includes organizational (...)
  13. Simulation, Theory, and the Frame Problem: The Interpretive Moment. William S. Wilkerson - 2001 - Philosophical Psychology 14 (2):141-153.
    The theory-theory claims that the explanation and prediction of behavior works via the application of a theory, while the simulation theory claims that explanation works by putting ourselves in others' places and noting what we would do. On either account, in order to develop a prediction or explanation of another person's behavior, one first needs to have a characterization of that person's current or recent actions. Simulation requires that I have some grasp of the other person's behavior to project myself (...)
  14. University in Second Life — the Experiment's Results. Andrzej Wodecki & Rafał Moczadło - 2009 - Dialogue and Universalism 19 (1-2):109-120.
    The article presents some conclusions arising from an educational experiment conducted by the University Centre for Distance Learning (Maria Curie-Skłodowska University in Lublin). The aim of the experiment was to verify the applicability of Second Life for educational purposes. The most important conclusion of the experiment is that SL is not especially productive as an e-learning platform, but it is quite efficient for the realization of multi-disciplinary projects. It is also effective as a tool for creating digital animations and (...)
  15. Human Factors in Information Technology: The Socio-Organisational Aspects of Expert Systems Design. [REVIEW] Evans E. Woherem - 1991 - AI and Society 5 (1):18-33.
    This paper looks beyond the mostly technical and business issues that currently inform the design of knowledge-based systems (e.g., expert systems) to point out that there is also a social and organisational (a socio-organisational) dimension to the issues affecting the design decisions of expert systems and other information technologies. It argues that whilst technical and business issues are considered before the design of Expert Systems, socio-organisational issues determine the acceptance and long-run utility of the technology after it has been (...)
  16. A Conceptual Framework for Society-Oriented Decision Support. Yingjie Yang, David Gillingwater & Chris Hinde - 2005 - AI and Society 19 (3):279-291.
    Inspired by the operation of human social organisation, this paper presents a new architecture—a pyramid-committee—for developing society-oriented intelligence, whose structure imitates the organisation of human society in its decision making. The system takes a pyramid-like hierarchical structure with links in the pyramid forming a semi-lattice, which relate not only to nodes in the same layer, but also to others in different layers. The output of the system is a result of the negotiation and balancing of different interests. For such a (...)
  17. Agent-Based Simulation and Sociological Understanding. Petri Ylikoski - 2014 - Perspectives on Science 22 (3):318-335.
    This article discusses agent-based simulation (ABS) as a tool of sociological understanding. I argue that agent-based simulations can play an important role in the expansion of explanatory understanding in the social sciences. The argument is based on an inferential account of understanding (Ylikoski 2009, Ylikoski & Kuorikoski 2010), according to which computer simulations increase our explanatory understanding by expanding our ability to make what-if inferences about social processes and by making these inferences more reliable. The inferential account also suggests a (...)
  18. Could Embodied Simulation Be a By-Product of Emotion Perception? Edoardo Zamuner & Julian Kiverstein - 2010 - Behavioral and Brain Sciences 33 (6):449-449.
    The SIMS model claims that it is by means of an embodied simulation that we determine the meaning of an observed smile. This suggests that crucial interpretative work is done in the mapping that takes us from a perceived smile to the activation of one's own facial musculature. How is this mapping achieved? Might it depend upon a prior interpretation arrived at on the basis of perceptual and contextual information?
  19. Institutionalizing Expert Systems: Guidelines and Legal Concerns. [REVIEW] Janet S. Zeide & Jay Liebowitz - 1992 - AI and Society 6 (3):287-293.
    Often, knowledge engineers become so involved in the development process of the expert system that they fail to look further down the road toward the expert system's institutionalization within the organization. Institutionalization is an important component of the expert system planning process. More specifically, the legal issues associated with expert systems development and deployment are critical institutionalization factors. This paper looks at some expert system institutionalization guidelines, and then focuses on legal considerations.
  20. An Australian Perspective on Research and Development Required for the Construction of Applied Legal Decision Support Systems. John Zeleznikow - 2002 - Artificial Intelligence and Law 10 (4):237-260.
    At the Donald Berman Laboratory for Information Technology and Law, La Trobe University, Australia, we have been building legal decision support systems for a dozen years. Whilst most of our energy has been devoted to conducting research in Artificial Intelligence and Law, over the past few years we have increasingly focused upon building legal decision support systems that have a commercial focus. In this paper we discuss the evolution of our systems. We begin with a discussion of rule-based systems and discuss the transition to hybrid rule-based/case-based (...)
  21. The Communication Structure of Epistemic Communities. Kevin J. S. Zollman - 2007 - Philosophy of Science 74 (5):574-587.
    Increasingly, epistemologists are becoming interested in social structures and their effect on epistemic enterprises, but little attention has been paid to the proper distribution of experimental results among scientists. This paper will analyze a model first suggested by two economists, which nicely captures one type of learning situation faced by scientists. The results of a computer simulation study of this model provide two interesting conclusions. First, in some contexts, a community of scientists is, as a whole, more reliable when its (...)
Moral Status of Artificial Systems
  1. Ethics for Things. Alison Adam - 2008 - Ethics and Information Technology 10 (2-3):149-154.
    This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE’s move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett’s views on ‘as if’ intentionality and Magnani’s characterization of ‘moral mediators’. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE’s treatment of things. Indeed the support they lend to the (...)
  2. Humanoid Robots: A New Kind of Tool. Bryan Adams, Cynthia Breazeal, Rodney Brooks & Brian Scassellati - 2000 - IEEE Intelligent Systems 15 (4):25-31.
    In his 1923 play R.U.R.: Rossum’s Universal Robots, Karel Capek coined the term “robot.” In 1993, we began a humanoid robotics project aimed at constructing a robot for use in exploring theories of human intelligence. In this article, we describe three aspects of our research methodology that distinguish our work from other humanoid projects. First, our humanoid robots are designed to act autonomously and safely in natural workspaces with people. Second, our robots are designed to interact socially with people by exploiting natural (...)
  3. Rethinking Autonomy. Richard Alterman - 2000 - Minds and Machines 10 (1):15-30.
    This paper explores the assumption of autonomy. Several arguments are presented against the assumption of runtime autonomy as a principle of design for artificial intelligence systems. The arguments range from the theoretical to the practical to the analytic. The latter parts of the paper focus on one strategy for building non-autonomous systems (the practice view). One critical theme is that intelligence is not located in the system alone; it emerges from a history of interactions among user, builder, and designer over a (...)
  4. Artificial Brains & Holographic Bodies: Facing the Questions of Progress. John Altmann - manuscript
    This essay discusses the ambitious plans of one Dmitry Itskov who by 2045 wishes to see immortality achieved by way of Artificial Brains and Holographic Bodies. I discuss the ethical implications of such a possibility coming to pass.
  5. Machine Ethics. M. Anderson & S. Anderson (eds.) - 2011 - Cambridge Univ. Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  6. The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW] Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  7. Philosophical Concerns with Machine Ethics. Susan Leigh Anderson - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge Univ. Press.
  8. Asimov's “Three Laws of Robotics” and Machine Metaethics. Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  9. Contracting Agents: Legal Personality and Representation. [REVIEW] Francisco Andrade, Paulo Novais, José Machado & José Neves - 2007 - Artificial Intelligence and Law 15 (4):357-373.
    The combined use of computers and telecommunications and the latest evolution in the field of Artificial Intelligence brought along new ways of contracting and of expressing will and declarations. The question is how far we can go in considering computer intelligence and autonomy, and how we can legally deal with a new form of electronic behaviour capable of autonomous action. In the field of contracting through Intelligent Electronic Agents, there is a pressing need to analyse the question of the expression of consent, (...)
  10. Richard Susskind, The Future of Law: Facing the Challenges of Information Technology. Anja Oskamp - 1999 - Artificial Intelligence and Law 7 (4):387-391.
  11. The Robot Didn't Do It: A Position Paper for the Workshop on Anticipatory Ethics, Responsibility and Artificial Agents. Ronald C. Arkin - 2013 - Workshop on Anticipatory Ethics, Responsibility and Artificial Agents 2013.
    This position paper addresses the issue of responsibility in the use of autonomous robotic systems. We are nowhere near autonomy in the philosophical sense, i.e., where there exists free agency and moral culpability for a non-human artificial agent. Sentient robots and the singularity are not concerns in the near to mid-term. While agents such as corporations can be held legally responsible for their actions, these consist of organizations under the direct control of humans. Intelligent robots, by virtue of their autonomous (...)
  12. Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider. Hutan Ashrafian - 2017 - Science and Engineering Ethics 23 (2):403-412.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  13. The Moral Status of Artificial Life. Bernard Baertschi - 2012 - Environmental Values 21 (1):5-18.
    Recently at the J. Craig Venter Institute, a microorganism has been created through synthetic biology. In the future, more complex living beings will very probably be produced. In our natural environment, we live amongst a whole variety of beings. Some of them have moral status (they have a moral importance and we cannot treat them in just any way we please); some do not. When it becomes possible to create artificially living beings who naturally possess moral status, will (...)
  14. Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines. John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  15. The Ethics of Creating Artificial Consciousness. John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  16. Computers, Postmodernism and the Culture of the Artificial. Colin Beardon - 1994 - AI and Society 8 (1):1-16.
    The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...)
  17. Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction. Barbara Becker - 2006 - International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialog partners will have a positive effect on human-machine communication. In this contribution I put up for discussion whether (...)
  18. Considerations About the Relationship Between Animal and Machine Ethics. Oliver Bendel - 2016 - AI and Society 31 (1):103-108.
  19. Autonomous Machine Agency. Don Berkich - 2002 - Dissertation, University of Massachusetts Amherst
    Is it possible to construct a machine that can act of its own accord? There are a number of skeptical arguments which conclude that autonomous machine agency is impossible. Yet if autonomous machine agency is impossible, then serious doubt is cast on the possibility of autonomous human action, at least on the widely held assumption that some form of materialism is true. The purpose of this dissertation is to show that autonomous machine agency is possible, thereby showing that the autonomy (...)
  20. Robots, Ethics and Language. Ingrid Björk & Iordanis Kavathatzopoulos - 2015 - ACM SIGCAS Computers and Society 45 (3):270-273.
  21. Intelligence Unbound: The Future of Uploaded and Machine Minds. Russell Blackford & Damien Broderick (eds.) - 2014 - Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds. Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more. Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest. Illuminates the nature (...)
  22. Imitation Games: Turing, Menard, Van Meegeren. [REVIEW] Brian P. Bloomfield & Theo Vurdubakis - 2003 - Ethics and Information Technology 5 (1):27-38.
    For many, the very idea of an artificial intelligence has always been ethically troublesome. The putative ability of machines to mimic human intelligence appears to call into question the stability of taken-for-granted boundaries between subject/object, identity/similarity, free will/determinism, reality/simulation, etc. The artificially intelligent object thus appears to threaten the human subject with displacement and redundancy. This article takes as its starting point Alan Turing’s famous ‘imitation game’ (the so-called ‘Turing Test’), here treated as a parable of the encounter between human original and machine copy – the born and the made. The cultural (...)
  23. Ethical Issues in Advanced Artificial Intelligence. Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
  24. When Machines Outsmart Humans. Nick Bostrom - manuscript
    Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, (...)
  25. Transhumanist Values. Nick Bostrom - 2005 - Journal of Philosophical Research 30 (Supplement):3-14.
    Transhumanism is a loosely defined movement that has developed gradually over the past two decades. [1] It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
  26. Intelligence Unbound. Damien Broderick (ed.) - 2014 - Wiley.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds. Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more. Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest. Illuminates the nature (...)
  27. On the Legal Responsibility of Autonomous Machines. Bartosz Brożek & Marek Jakubiec - 2017 - Artificial Intelligence and Law 25 (3):293-304.
    The paper concerns the problem of the legal responsibility of autonomous machines. In our opinion it boils down to the question of whether such machines can be seen as real agents through the prism of folk-psychology. We argue that autonomous machines cannot be granted the status of legal agents. Although this is quite possible from a purely technical point of view, since the law is a conventional tool of regulating social interactions and as such can accommodate various legislative constructs, including legal (...)
  28. Of, for, and by the People: The Legal Lacuna of Synthetic Persons. Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant - 2017 - Artificial Intelligence and Law 25 (3):273-291.
    Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We (...)
  29. Artificial Moral Agents: Saviors or Destroyers? [REVIEW] Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.