About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for investigating consciousness and other aspects of cognition that contribute to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, with deep implications again for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by increasingly intelligent machines. This alone raises many moral/ethical concerns. For example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by their human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These and many other issues are central to the ethics of AI.
Key works Bostrom manuscript; Müller 2014
Introductions Müller 2013; White 2015; Gunkel 2012
860 found
1 — 50 / 860
Material to categorize
  1. Computational Neural Modeling and the Philosophy of Ethics: Reflections on the Particularism-Generalism Debate.Marcello Guarini - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge Univ. Press.
  2. Design, Development, and Evaluation of an Interactive Simulator for Engineering Ethics Education (SEEE).Christopher A. Chung & Michael Alfred - 2009 - Science and Engineering Ethics 15 (2):189-199.
    Societal pressures, accreditation organizations, and licensing agencies are emphasizing the importance of ethics in the engineering curriculum. Traditionally, this subject has been taught using dogma, heuristics, and case study approaches. Most recently a number of organizations have sought to increase the utility of these approaches by utilizing the Internet. Resources from these organizations include on-line courses and tests, videos, and DVDs. While these individual approaches provide a foundation on which to base engineering ethics, they may be limited in developing a (...)
  3. The Energetic Dimension of Emotions: An Evolution-Based Computer Simulation with General Implications.Luc Ciompi & Martin Baatz - 2008 - Biological Theory 3 (1):42-50.
    Viewed from an evolutionary standpoint, emotions can be understood as situation-specific patterns of energy consumption related to behaviors that have been selected by evolution for their survival value, such as environmental exploration, flight or fight, and socialization. In the present article, the energy linked with emotions is investigated by a strictly energy-based simulation of the evolution of simple autonomous agents provided with random cognitive and motor capacities and operating among food and predators. Emotions are translated into evolving patterns of energy (...)
  4. Linguistic Anchors in the Sea of Thought?Andy Clark - 1996 - Pragmatics and Cognition 4 (1):93-103.
    Andy Clark is currently Professor of Philosophy and Director of the Philosophy/Neuroscience/Psychology program at Washington University in St. Louis, Missouri. He is the author of two books MICROCOGNITION (MIT Press/Bradford Books 1989) and ASSOCIATIVE ENGINES (MIT Press/Bradford Books, 1993) as well as numerous papers and four edited volumes. He is an ex- committee member of the British Society for the Philosophy of Science and of the Society for Artificial Intelligence and the Simulation of Behavior. Awards include a visiting Fellowship at (...)
  5. Dialogues in Natural Language with Guru, a Psychologic Inference Engine.Kenneth M. Colby, Peter M. Colby & Robert J. Stoller - 1990 - Philosophical Psychology 3 (2-3):171-186.
    The aim of this project was to explore the possibility of constructing a psychologic inference engine that might enhance introspective self-awareness by delivering inferences about a user based on what he said in interactive dialogues about his closest opposite-sex relation. To implement this aim, we developed a computer program (guru) with the capacity to simulate human conversation in colloquial natural language. The psychologic inferences offered represent the authors' simulations of their commonsense psychology responses to expected user-input expressions. The heuristics of (...)
  6. DARES: Documents Annotation and Recombining System—Application to the European Law. [REVIEW]Fady Farah & François Rousselot - 2007 - Artificial Intelligence and Law 15 (2):83-102.
    Accessing legislation via the Internet is more and more frequent. As a result, systems that allow consultation of law texts are becoming more and more powerful. This paper presents DARES, a generic system which can be adapted to any domain to handle documents production needs. It is based on an annotation engine which allows obtaining XML documents inputs as required by the system, and on an XML fragments recombining system. The latter operates using a fragment manipulation functions toolbox to generate (...)
  7. On the Role of AI in the Ongoing Paradigm Shift Within the Cognitive Sciences.Tom Froese - 2007 - In M. Lungarella (ed.), 50 Years of AI. Springer Verlag.
    This paper supports the view that the ongoing shift from orthodox to embodied-embedded cognitive science has been significantly influenced by the experimental results generated by AI research. Recently, there has also been a noticeable shift toward enactivism, a paradigm which radicalizes the embodied-embedded approach by placing autonomous agency and lived subjectivity at the heart of cognitive science. Some first steps toward a clarification of the relationship of AI to this further shift are outlined. It is concluded that the success of (...)
  8. On the Re-Materialization of the Virtual.Ismo Kantola - 2013 - AI and Society 28 (2):189-198.
    The so-called new economy based on the global network of digitalized communication was welcomed as a platform of innovations and as a vehicle for the advancement of democracy. The concept of virtuality captures the essence of the new economy: efficiency and free access. In practice, the new economy has developed into a heterogeneous entity dominated by practices such as the propagation of trust and commitment to standards and standard-like technological solutions; the entrenchment of locally strategic subsystems; and the surveillance of unwanted behavior. Five empirical cases (...)
  9. Special Issue on Social Impact of AI: Killer Robots or Friendly Fridges. [REVIEW]Greg Michaelson & Ruth Aylett - 2011 - AI and Society 26 (4):317-318.
  10. Evolution: The Computer Systems Engineer Designing Minds.Aaron Sloman - 2011 - Avant: Trends in Interdisciplinary Studies 2 (2):45–69.
    What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery which (...)
  11. Classification of the Global Solutions of the AI Safety Problem.Alexey Turchin - manuscript
    There are two types of AI safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI, but do not explain how to prevent the creation of dangerous AI. Global solutions are those that ensure any AI on Earth is not dangerous. The number of suggested global solutions is much smaller than the number of proposed local solutions. Global solutions can be divided into four levels: 1. No AI; AI technology (...)
Moral Status of Artificial Systems
  1. Ethics for Things.Alison Adam - 2008 - Ethics and Information Technology 10 (2-3):149-154.
    This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE's move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett's views on 'as if' intentionality and Magnani's characterization of 'moral mediators'. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE's treatment of things. Indeed the support they lend to the (...)
  2. Humanoid Robots: A New Kind of Tool.Bryan Adams, Cynthia Breazeal, Rodney Brooks & Brian Scassellati - 2000 - IEEE Intelligent Systems 15 (4):25-31.
    In his 1923 play R.U.R.: Rossum's Universal Robots, Karel Capek coined the term "robot." In 1993, we began a humanoid robotics project aimed at constructing a robot for use in exploring theories of human intelligence. In this article, we describe three aspects of our research methodology that distinguish our work from other humanoid projects. First, our humanoid robots are designed to act autonomously and safely in natural workspaces with people. Second, our robots are designed to interact socially with people by exploiting natural (...)
  3. Rethinking Autonomy.Richard Alterman - 2000 - Minds and Machines 10 (1):15-30.
    This paper explores the assumption of autonomy. Several arguments are presented against the assumption of runtime autonomy as a principle of design for artificial intelligence systems. The arguments vary from being theoretical, to practical, and to analytic. The latter parts of the paper focus on one strategy for building non-autonomous systems (the practice view). One critical theme is that intelligence is not located in the system alone, it emerges from a history of interactions among user, builder, and designer over a (...)
  4. Artificial Brains & Holographic Bodies: Facing the Questions of Progress.John Altmann - manuscript
    This essay discusses the ambitious plans of one Dmitry Itskov who by 2045 wishes to see immortality achieved by way of Artificial Brains and Holographic Bodies. I discuss the ethical implications of such a possibility coming to pass.
  5. Machine Ethics.M. Anderson & S. Anderson (eds.) - 2011 - Cambridge Univ. Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  6. The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW]Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  7. Philosophical Concerns with Machine Ethics.Susan Leigh Anderson - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge Univ. Press.
  8. Asimov's “Three Laws of Robotics” and Machine Metaethics.Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
    Using Asimov's "Bicentennial Man" as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  9. Contracting Agents: Legal Personality and Representation. [REVIEW]Francisco Andrade, Paulo Novais, José Machado & José Neves - 2007 - Artificial Intelligence and Law 15 (4):357-373.
    The combined use of computers and telecommunications and the latest evolution in the field of Artificial Intelligence brought along new ways of contracting and of expressing will and declarations. The question is, how far we can go in considering computer intelligence and autonomy, how can we legally deal with a new form of electronic behaviour capable of autonomous action? In the field of contracting, through Intelligent Electronic Agents, there is an imperious need of analysing the question of expression of consent, (...)
  10. Richard Susskind, The Future of Law, Facing Challenges of Information Technology.Anja Oskamp - 1999 - Artificial Intelligence and Law 7 (4):387-391.
  11. The Robot Didn't Do It: A Position Paper for the Workshop on Anticipatory Ethics, Responsibility and Artificial Agents.Ronald C. Arkin - 2013 - Workshop on Anticipatory Ethics, Responsibility and Artificial Agents 2013.
    This position paper addresses the issue of responsibility in the use of autonomous robotic systems. We are nowhere near autonomy in the philosophical sense, i.e., where there exists free agency and moral culpability for a non-human artificial agent. Sentient robots and the singularity are not concerns in the near to mid-term. While agents such as corporations can be held legally responsible for their actions, these consist of organizations under the direct control of humans. Intelligent robots, by virtue of their autonomous (...)
  12. Humans and Hosts in Westworld: What's the Difference?Marcus Arvan - 2018 - In James South & Kimberly Engels (eds.), Westworld and Philosophy. Oxford: Wiley-Blackwell. pp. 26-38.
    This chapter argues there are many hints in the dialogue, plot, and physics of the first season of Westworld that the events in the show do not take place within a theme park, but rather in a virtual reality (VR) world that people "visit" to escape the "real world." The philosophical implications I draw are several. First, to be simulated is to be real: simulated worlds are every bit as real as "the real world", and simulated people (hosts) are every (...)
  13. Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider.Hutan Ashrafian - 2017 - Science and Engineering Ethics 23 (2):403-412.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  14. The Moral Status of Artificial Life.Bernard Baertschi - 2012 - Environmental Values 21 (1):5 - 18.
    Recently at the J. Craig Venter Institute, a microorganism has been created through synthetic biology. In the future, more complex living beings will very probably be produced. In our natural environment, we live amongst a whole variety of beings. Some of them have moral status (they have a moral importance and we cannot treat them in just any way we please); some do not. When it becomes possible to create artificially living beings who naturally possess moral status, will (...)
  15. Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  16. The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  17. Computers, Postmodernism and the Culture of the Artificial.Colin Beardon - 1994 - AI and Society 8 (1):1-16.
    The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...)
  18. Social Robots-Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction.Barbara Becker - 2006 - International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialogue partners will have a positive effect on human-machine communication. In this contribution I put up for discussion whether (...)
  19. Considerations About the Relationship Between Animal and Machine Ethics.Oliver Bendel - 2016 - AI and Society 31 (1):103-108.
  20. Autonomous Machine Agency.Don Berkich - 2002 - Dissertation, University of Massachusetts Amherst
    Is it possible to construct a machine that can act of its own accord? There are a number of skeptical arguments which conclude that autonomous machine agency is impossible. Yet if autonomous machine agency is impossible, then serious doubt is cast on the possibility of autonomous human action, at least on the widely held assumption that some form of materialism is true. The purpose of this dissertation is to show that autonomous machine agency is possible, thereby showing that the autonomy (...)
  21. Robots, Ethics and Language.Ingrid Björk & Iordanis Kavathatzopoulos - 2015 - Acm Sigcas Computers and Society 45 (3):270-273.
  22. Intelligence Unbound: The Future of Uploaded and Machine Minds.Russell Blackford & Damien Broderick (eds.) - 2014 - Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  23. Imitation Games: Turing, Menard, Van Meegeren. [REVIEW]Brian P. Bloomfield & Theo Vurdubakis - 2003 - Ethics and Information Technology 5 (1):27-38.
    For many, the very idea of an artificial intelligence has always been ethically troublesome. The putative ability of machines to mimic human intelligence appears to call into question the stability of taken-for-granted boundaries between subject/object, identity/similarity, free will/determinism, reality/simulation, etc. The artificially intelligent object thus appears to threaten the human subject with displacement and redundancy. This article takes as its starting point Alan Turing's famous 'imitation game' (the so-called 'Turing Test'), here treated as a parable of the encounter between human original and machine copy - the born and the made. The cultural (...)
  24. Ethical Issues in Advanced Artificial Intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
  25. When Machines Outsmart Humans.Nick Bostrom - manuscript
    Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, (...)
  26. Transhumanist Values.Nick Bostrom - 2005 - Journal of Philosophical Research 30 (Supplement):3-14.
    Transhumanism is a loosely defined movement that has developed gradually over the past two decades. [1] It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
  27. Intelligence Unbound.Damien Broderick (ed.) - 2014 - Wiley.
  28. On the Legal Responsibility of Autonomous Machines.Bartosz Brożek & Marek Jakubiec - 2017 - Artificial Intelligence and Law 25 (3):293-304.
    The paper concerns the problem of the legal responsibility of autonomous machines. In our opinion it boils down to the question of whether such machines can be seen as real agents through the prism of folk psychology. We argue that autonomous machines cannot be granted the status of legal agents. Although this is quite possible from a purely technical point of view, since the law is a conventional tool of regulating social interactions and as such can accommodate various legislative constructs, including legal (...)
  29. Of, for, and by the People: The Legal Lacuna of Synthetic Persons.Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant - 2017 - Artificial Intelligence and Law 25 (3):273-291.
    Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We (...)
  30. Artificial Moral Agents: Saviors or Destroyers? [REVIEW]Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  31. Utopia Without Work? Myth, Machines and Public Policy.Edmund Byrne - 1985 - In P. T. Durbin (ed.), Research in Philosophy and Technology VIII. Greenwich, CT: JAI Press. pp. 133-148.
  32. R.U.R. - Rossum’s Universal Robots.Karel Čapek - 1920 - Aventinum.
    The play begins in a factory that makes artificial people, called roboti (robots), from synthetic organic matter. They seem happy to work for humans at first, but that changes, and a hostile robot rebellion leads to the extinction of the human race.
  33. Toward a Comparative Theory of Agents.Rafael Capurro - 2012 - AI and Society 27 (4):479-488.
    The purpose of this paper is to address some of the questions on the notion of agent and agency in relation to property and personhood. I argue that following the Kantian criticism of Aristotelian metaphysics, contemporary biotechnology and information and communication technologies bring about a new challenge—this time, with regard to the Kantian moral subject understood in the subject’s unique metaphysical qualities of dignity and autonomy. The concept of human dignity underlies the foundation of many democratic systems, particularly in Europe (...)
  34. Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  35. The Vital Machine: A Study of Technology and Organic Life.David F. Channell - 1991 - Oxford University Press.
    In 1738, Jacques Vaucanson unveiled his masterpiece before the court of Louis XV: a gilded copper duck that ate, drank, quacked, flapped its wings, splashed about, and, most astonishing of all, digested its food and excreted the remains. The imitation of life by technology fascinated Vaucanson's contemporaries. Today our technology is more powerful, but our fascination is tempered with apprehension. Artificial intelligence and genetic engineering, to name just two areas, raise profoundly disturbing ethical issues that undermine our most fundamental beliefs (...)
  36. Agencéité et responsabilité des agents artificiels.Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, through their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Conceiving of responsibility for these upheavals poses a considerable challenge: how can this concept account for its object in complex systems in which it is difficult to tie an action to an agent? This article presents an overview of the concept of an epistemic ecosystem and (...)
  37. Artificial Agents - Personhood in Law and Philosophy.Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it is not necessary or desirable to postulate artificial (...)
  38. Synthetic Biology and the Moral Significance of Artificial Life: A Reply to Douglas, Powell and Savulescu.Andreas Christiansen - 2016 - Bioethics 30 (5):372-379.
    I discuss the moral significance of artificial life within synthetic biology via a discussion of Douglas, Powell and Savulescu's paper 'Is the creation of artificial life morally significant?'. I argue that the definitions of 'artificial life' and of 'moral significance' are too narrow. Douglas, Powell and Savulescu's definition of artificial life does not capture all core projects of synthetic biology or the ethical concerns that have been voiced, and their definition of moral significance fails to take into account the possibility (...)
  39. Responsibility and the Moral Phenomenology of Using Self-Driving Cars.Mark Coeckelbergh - 2016 - Applied Artificial Intelligence 30 (8):748-757.
    This paper explores how the phenomenology of using self-driving cars influences conditions for exercising and ascribing responsibility. First, a working account of responsibility is presented, which identifies two classic Aristotelian conditions for responsibility and adds a relational one, and which makes a distinction between responsibility for (what one does) and responsibility to (others). Then, this account is applied to a phenomenological analysis of what happens when we use a self-driving car and participate in traffic. It is argued that self-driving cars (...)