About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for investigating consciousness and other aspects of cognition that contribute to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and to provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, with deep implications again for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by increasingly intelligent machines. This alone raises many moral and ethical concerns: for example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the ethics of AI. 
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017
Introductions Müller 2013, White 2015, Gunkel 2012
940 found
1 — 50 / 940
Moral Status of Artificial Systems
  1. Ethics for Things.Alison Adam - 2008 - Ethics and Information Technology 10 (2-3):149-154.
    This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE’s move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett’s views on ‘as if’ intentionality and Magnani’s characterization of ‘moral mediators’. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE’s treatment of things. Indeed the support they lend to the (...)
  2. Humanoid Robots: A New Kind of Tool.Bryan Adams, Cynthia Breazeal, Rodney Brooks & Brian Scassellati - 2000 - IEEE Intelligent Systems 15 (4):25-31.
    In his 1923 play R.U.R.: Rossum's Universal Robots, Karel Capek coined the term robot. In 1993, we began a humanoid robotics project aimed at constructing a robot for use in exploring theories of human intelligence. In this article, we describe three aspects of our research methodology that distinguish our work from other humanoid projects. First, our humanoid robots are designed to act autonomously and safely in natural workspaces with people. Second, our robots are designed to interact socially with people by exploiting natural (...)
  3. Rethinking Autonomy.Richard Alterman - 2000 - Minds and Machines 10 (1):15-30.
    This paper explores the assumption of autonomy. Several arguments are presented against the assumption of runtime autonomy as a principle of design for artificial intelligence systems. The arguments vary from being theoretical, to practical, and to analytic. The latter parts of the paper focus on one strategy for building non-autonomous systems (the practice view). One critical theme is that intelligence is not located in the system alone, it emerges from a history of interactions among user, builder, and designer over a (...)
  4. Artificial Brains & Holographic Bodies: Facing the Questions of Progress.John Altmann - manuscript
    This essay discusses the ambitious plans of one Dmitry Itskov who by 2045 wishes to see immortality achieved by way of Artificial Brains and Holographic Bodies. I discuss the ethical implications of such a possibility coming to pass.
  5. Machine Ethics.M. Anderson & S. Anderson (eds.) - 2011 - Cambridge Univ. Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  6. The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW]Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  7. Philosophical Concerns with Machine Ethics.Susan Leigh Anderson - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  8. Asimov's “Three Laws of Robotics” and Machine Metaethics.Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  9. Contracting Agents: Legal Personality and Representation. [REVIEW]Francisco Andrade, Paulo Novais, José Machado & José Neves - 2007 - Artificial Intelligence and Law 15 (4):357-373.
    The combined use of computers and telecommunications and the latest evolution in the field of Artificial Intelligence brought along new ways of contracting and of expressing will and declarations. The question is, how far we can go in considering computer intelligence and autonomy, how can we legally deal with a new form of electronic behaviour capable of autonomous action? In the field of contracting, through Intelligent Electronic Agents, there is an imperious need of analysing the question of expression of consent, (...)
  10. Richard Susskind, The Future of Law, Facing Challenges of Information Technology.Anja Oskamp - 1999 - Artificial Intelligence and Law 7 (4):387-391.
  11. The Robot Didn't Do It: A Position Paper for the Workshop on Anticipatory Ethics, Responsibility and Artificial Agents.Ronald C. Arkin - 2013 - Workshop on Anticipatory Ethics, Responsibility and Artificial Agents 2013.
    This position paper addresses the issue of responsibility in the use of autonomous robotic systems. We are nowhere near autonomy in the philosophical sense, i.e., where there exists free agency and moral culpability for a non-human artificial agent. Sentient robots and the singularity are not concerns in the near to mid-term. While agents such as corporations can be held legally responsible for their actions, these consist of organizations under the direct control of humans. Intelligent robots, by virtue of their autonomous (...)
  12. Humans and Hosts in Westworld: What's the Difference?Marcus Arvan - 2018 - In James South & Kimberly Engels (eds.), Westworld and Philosophy. Oxford: Wiley-Blackwell. pp. 26-38.
    This chapter argues there are many hints in the dialogue, plot, and physics of the first season of Westworld that the events in the show do not take place within a theme park, but rather in a virtual reality (VR) world that people "visit" to escape the "real world." The philosophical implications I draw are several. First, to be simulated is to be real: simulated worlds are every bit as real as "the real world", and simulated people (hosts) are every (...)
  13. Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider.Hutan Ashrafian - 2017 - Science and Engineering Ethics 23 (2):403-412.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  14. Biocentrism and Artificial Life.Robin Attfield - 2012 - Environmental Values 21 (1):83 - 94.
    Biocentrism maintains that all living creatures have moral standing, but need not claim that all have equal moral significance. This moral standing extends to organisms generated through human interventions, whether by conventional breeding, genetic engineering, or synthetic biology. Our responsibilities with regard to future generations seem relevant to non-human species as well as future human generations and their quality of life. Likewise the Precautionary Principle appears to raise objections to the generation of serious or irreversible changes to the quality of (...)
  15. The Moral Status of Artificial Life.Bernard Baertschi - 2012 - Environmental Values 21 (1):5 - 18.
    Recently at the J. Craig Venter Institute, a microorganism has been created through synthetic biology. In the future, more complex living beings will very probably be produced. In our natural environment, we live amongst a whole variety of beings. Some of them have moral status (they have a moral importance and we cannot treat them in just any way we please); some do not. When it becomes possible to create artificially living beings who naturally possess moral status, will (...)
  16. Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  17. The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  18. Computers, Postmodernism and the Culture of the Artificial.Colin Beardon - 1994 - AI and Society 8 (1):1-16.
    The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...)
  19. Social Robots-Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction.Barbara Becker - 2006 - International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialog-partners will have a positive effect on human-machine-communication. In this contribution I put up for discussion whether (...)
  20. AAAI: An Argument Against Artificial Intelligence.Sander Beckers - 2018 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer. pp. 235-247.
    The ethical concerns regarding the successful development of an Artificial Intelligence have received a lot of attention lately. The idea is that even if we have good reason to believe that it is very unlikely, the mere possibility of an AI causing extreme human suffering is important enough to warrant serious consideration. Others look at this problem from the opposite perspective, namely that of the AI itself. Here the idea is that even if we have good reason to believe that (...)
  21. Considerations About the Relationship Between Animal and Machine Ethics.Oliver Bendel - 2016 - AI and Society 31 (1):103-108.
  22. The Ethics of Artificial Intelligence: Superintelligence, Life 3.0 and Robot Rights.Kati Tusinski Berg - 2018 - Journal of Media Ethics 33 (3):151-153.
  23. Autonomous Machine Agency.Don Berkich - 2002 - Dissertation, University of Massachusetts Amherst
    Is it possible to construct a machine that can act of its own accord? There are a number of skeptical arguments which conclude that autonomous machine agency is impossible. Yet if autonomous machine agency is impossible, then serious doubt is cast on the possibility of autonomous human action, at least on the widely held assumption that some form of materialism is true. The purpose of this dissertation is to show that autonomous machine agency is possible, thereby showing that the autonomy (...)
  24. Robots, Ethics and Language.Ingrid Björk & Iordanis Kavathatzopoulos - 2015 - Acm Sigcas Computers and Society 45 (3):270-273.
  25. Intelligence Unbound: The Future of Uploaded and Machine Minds.Russell Blackford & Damien Broderick (eds.) - 2014 - Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  26. Imitation Games: Turing, Menard, Van Meegeren. [REVIEW]Brian P. Bloomfield & Theo Vurdubakis - 2003 - Ethics and Information Technology 5 (1):27-38.
    For many, the very idea of an artificial intelligence has always been ethically troublesome. The putative ability of machines to mimic human intelligence appears to call into question the stability of taken-for-granted boundaries between subject/object, identity/similarity, free will/determinism, reality/simulation, etc. The artificially intelligent object thus appears to threaten the human subject with displacement and redundancy. This article takes as its starting point Alan Turing's famous 'imitation game' (the so-called 'Turing Test'), here treated as a parable of the encounter between human original and machine copy – the born and the made. The cultural (...)
  27. ‘I Interact Therefore I Am’: The Self as a Historical Product of Dialectical Attunement.Dimitris Bolis & Leonhard Schilbach - forthcoming - Topoi:1-14.
    In this article, moving from being to becoming, we construe the ‘self’ as a dynamic process rather than as a static entity. To this end we draw on dialectics and Bayesian accounts of cognition. The former allows us to holistically consider the ‘self’ as the interplay between internalization and externalization and the latter to operationalize our suggestion formally. Internalization is considered here as the co-construction of bodily hierarchical models of the world and the organism, while externalization is taken as the (...)
  28. Ethical Issues in Advanced Artificial Intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
  29. When Machines Outsmart Humans.Nick Bostrom - manuscript
    Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, (...)
  30. Transhumanist Values.Nick Bostrom - 2005 - Journal of Philosophical Research 30 (Supplement):3-14.
    Transhumanism is a loosely defined movement that has developed gradually over the past two decades. [1] It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
  31. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  32. The Moral Standing of Natural Objects.Andrew Brennan - 1984 - Environmental Ethics 6 (1):35-56.
    Human beings are, as far as we know, the only animals to have moral concerns and to adopt moralities, but it would be a mistake to be misled by this fact into thinking that humans are also the only proper objects of moral consideration. I argue that we ought to allow even nonliving things a significant moral status, thus denying the conclusion of much contemporary moral thinking. First, I consider the possibility of giving moral consideration to nonliving things. Second, I put (...)
  33. Meeting Floridi's Challenge to Artificial Intelligence From the Knowledge-Game Test for Self-Consciousness.Selmer Bringsjord - 2010 - Metaphilosophy 41 (3):292-312.
    Abstract: In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to (...)
  34. Animals, Zombanimals, and the Total Turing Test: The Essence of Artificial Intelligence.Selmer Bringsjord - 2000 - Journal of Logic Language and Information 9 (4):397-418.
    Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? (...)
  35. Intelligence Unbound.Damien Broderick (ed.) - 2014 - Wiley.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  36. On the Legal Responsibility of Autonomous Machines.Bartosz Brożek & Marek Jakubiec - 2017 - Artificial Intelligence and Law 25 (3):293-304.
    The paper concerns the problem of the legal responsibility of autonomous machines. In our opinion it boils down to the question of whether such machines can be seen as real agents through the prism of folk-psychology. We argue that autonomous machines cannot be granted the status of legal agents. Although this is quite possible from purely technical point of view, since the law is a conventional tool of regulating social interactions and as such can accommodate various legislative constructs, including legal (...)
  37. Of, for, and by the People: The Legal Lacuna of Synthetic Persons.Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant - 2017 - Artificial Intelligence and Law 25 (3):273-291.
    Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We (...)
  38. Artificial Moral Agents: Saviors or Destroyers? [REVIEW]Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  39. Utopia Without Work? Myth, Machines and Public Policy.Edmund Byrne - 1985 - In P. T. Durbin (ed.), Research in Philosophy and Technology VIII. Greenwich, CT: JAI Press. pp. 133-148.
    A critique of the prediction that technology will end humans' direct involvement in work. Contentions: a workless world is not without qualification desirable; it is not attainable by technology alone; the end sought does not in and by itself justify present job ending applications. Underlying these contentions: a claim that utopian visions with regard to work function as ideologies. Evidence for this claim derived from revisiting past non-industrial and industrial fantasies regarding a work-free utopia.
  40. R.U.R. - Rossum’s Universal Robots.Karel Čapek - 1920 - Aventinum.
    The play begins in a factory that makes artificial people, called roboti (robots), from synthetic organic matter. They seem happy to work for humans at first, but that changes, and a hostile robot rebellion leads to the extinction of the human race.
  41. Toward a Comparative Theory of Agents.Rafael Capurro - 2012 - AI and Society 27 (4):479-488.
    The purpose of this paper is to address some of the questions on the notion of agent and agency in relation to property and personhood. I argue that following the Kantian criticism of Aristotelian metaphysics, contemporary biotechnology and information and communication technologies bring about a new challenge—this time, with regard to the Kantian moral subject understood in the subject’s unique metaphysical qualities of dignity and autonomy. The concept of human dignity underlies the foundation of many democratic systems, particularly in Europe (...)
  42. Ethics and Robotics.Raphael Capurro & Michael Nagenborg (eds.) - 2009 - Akademische Verlagsgesellschaft.
    P. M. Asaro: What should We Want from a Robot Ethic? G. Tamburrini: Robot Ethics: A View from the Philosophy of Science B. Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-machine Interaction E. Datteri, G. Tamburrini: Ethical Reflections on Health Care Robotics P. Lin, G. Bekey, K. Abney: Robots in War: Issues of Risk and Ethics J. Altmann: Preventive Arms Control for Uninhabited Military Vehicles J. Weber: Robotic warfare, Human Rights & The Rhetorics of Ethical Machines T. (...)
  43. Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  44. The Vital Machine: A Study of Technology and Organic Life.David F. Channell - 1991 - Oxford University Press.
    In 1738, Jacques Vaucanson unveiled his masterpiece before the court of Louis XV: a gilded copper duck that ate, drank, quacked, flapped its wings, splashed about, and, most astonishing of all, digested its food and excreted the remains. The imitation of life by technology fascinated Vaucanson's contemporaries. Today our technology is more powerful, but our fascination is tempered with apprehension. Artificial intelligence and genetic engineering, to name just two areas, raise profoundly disturbing ethical issues that undermine our most fundamental beliefs (...)
  45. Agencéité et responsabilité des agents artificiels.Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, by virtue of their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Conceiving of responsibility for these upheavals is a considerable challenge: how can this concept account for its object in complex systems in which it is difficult to tie an action to an agent? This article presents an overview of the concept of the epistemic ecosystem and (...)
  46. Artificial Agents - Personhood in Law and Philosophy.Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it is not necessary or desirable to postulate artificial (...)
  47. Synthetic Biology and the Moral Significance of Artificial Life: A Reply to Douglas, Powell and Savulescu.Andreas Christiansen - 2016 - Bioethics 30 (5):372-379.
    I discuss the moral significance of artificial life within synthetic biology via a discussion of Douglas, Powell and Savulescu's paper 'Is the creation of artificial life morally significant’. I argue that the definitions of 'artificial life’ and of 'moral significance’ are too narrow. Douglas, Powell and Savulescu's definition of artificial life does not capture all core projects of synthetic biology or the ethical concerns that have been voiced, and their definition of moral significance fails to take into account the possibility (...)
  48. Responsibility and the Moral Phenomenology of Using Self-Driving Cars.Mark Coeckelbergh - 2016 - Applied Artificial Intelligence 30 (8):748-757.
    This paper explores how the phenomenology of using self-driving cars influences conditions for exercising and ascribing responsibility. First, a working account of responsibility is presented, which identifies two classic Aristotelian conditions for responsibility and adds a relational one, and which makes a distinction between responsibility for (what one does) and responsibility to (others). Then, this account is applied to a phenomenological analysis of what happens when we use a self-driving car and participate in traffic. It is argued that self-driving cars (...)
  49. Good Healthcare is in the “How”.Mark Coeckelbergh - 2015 - In Machine Medical Ethics. Springer. pp. 33-47.
    What do we mean by good healthcare, and do machines threaten it? If good care requires expertise, then what kind of expertise is this? If good care is “human” care, does this necessarily mean “non-technological” care? If not, then what should be the precise role of machines in medicine and healthcare? This chapter argues that good care relies on expert know-how and skills that enable care givers to care-fully engage with patients. Evaluating the introduction of new technologies such as robots (...)
  50. The Invisible Robots of Global Finance: Making Visible Machines, People, and Places.Mark Coeckelbergh - 2015 - Acm Sigcas Computers and Society 45 (3):287-289.
    One of the barriers for doing ethics of technology in the domain of finance is that financial technologies usually remain invisible. These hidden and unseen devices, machines, and infrastructures have to be revealed. This paper shows how the "robots" of finance, which function as distance technologies, are not only themselves invisible, but also hide people and places, which is ethically and politically problematic. Furthermore, "the market" appears as a ghostly artificial agent, again rendering humans invisible and making it difficult to (...)