About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for the investigation of consciousness and of other aspects of cognition contributing to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and to provide for the development of such capacities. It also raises the issue of responsible and/or blameworthy AIs operating openly in general society, again with deep implications for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by ever more intelligent machines. This alone raises many moral and ethical concerns. For example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the ethics of AI. 
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017
Introductions Müller 2013, White 2015, Gunkel 2012
866 found
1 — 50 / 866
Moral Status of Artificial Systems
  1. Ethics for Things.Alison Adam - 2008 - Ethics and Information Technology 10 (2-3):149-154.
    This paper considers the ways that Information Ethics (IE) treats things. A number of critics have focused on IE's move away from anthropocentrism to include non-humans on an equal basis in moral thinking. I enlist Actor Network Theory, Dennett's views on 'as if' intentionality and Magnani's characterization of 'moral mediators'. Although they demonstrate different philosophical pedigrees, I argue that these three theories can be pressed into service in defence of IE's treatment of things. Indeed the support they lend to the (...)
  2. Humanoid Robots: A New Kind of Tool.Bryan Adams, Cynthia Breazeal, Rodney Brooks & Brian Scassellati - 2000 - IEEE Intelligent Systems 15 (4):25-31.
    In his 1923 play R.U.R.: Rossum's Universal Robots, Karel Čapek coined the term "robot." In 1993, we began a humanoid robotics project aimed at constructing a robot for use in exploring theories of human intelligence. In this article, we describe three aspects of our research methodology that distinguish our work from other humanoid projects. First, our humanoid robots are designed to act autonomously and safely in natural workspaces with people. Second, our robots are designed to interact socially with people by exploiting natural (...)
  3. Rethinking Autonomy.Richard Alterman - 2000 - Minds and Machines 10 (1):15-30.
    This paper explores the assumption of autonomy. Several arguments are presented against the assumption of runtime autonomy as a principle of design for artificial intelligence systems. The arguments vary from being theoretical, to practical, and to analytic. The latter parts of the paper focus on one strategy for building non-autonomous systems (the practice view). One critical theme is that intelligence is not located in the system alone, it emerges from a history of interactions among user, builder, and designer over a (...)
  4. Artificial Brains & Holographic Bodies: Facing the Questions of Progress.John Altmann - manuscript
    This essay discusses the ambitious plans of one Dmitry Itskov who by 2045 wishes to see immortality achieved by way of Artificial Brains and Holographic Bodies. I discuss the ethical implications of such a possibility coming to pass.
  5. Machine Ethics.M. Anderson & S. Anderson (eds.) - 2011 - Cambridge Univ. Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  6. The Status of Machine Ethics: A Report From the AAAI Symposium. [REVIEW]Michael Anderson & Susan Leigh Anderson - 2007 - Minds and Machines 17 (1):1-10.
    This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics that brought together participants from the fields of Computer Science and Philosophy to the end of clarifying the nature of this newly emerging field and discussing different approaches one could take towards realizing the ultimate goal of creating an ethical machine.
  7. Philosophical Concerns with Machine Ethics.Susan Leigh Anderson - 2011 - In M. Anderson S. Anderson (ed.), Machine Ethics. Cambridge Univ. Press.
  8. Asimov's “Three Laws of Robotics” and Machine Metaethics.Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  9. Contracting Agents: Legal Personality and Representation. [REVIEW]Francisco Andrade, Paulo Novais, José Machado & José Neves - 2007 - Artificial Intelligence and Law 15 (4):357-373.
    The combined use of computers and telecommunications and the latest evolution in the field of Artificial Intelligence brought along new ways of contracting and of expressing will and declarations. The question is, how far we can go in considering computer intelligence and autonomy, how can we legally deal with a new form of electronic behaviour capable of autonomous action? In the field of contracting, through Intelligent Electronic Agents, there is an imperious need of analysing the question of expression of consent, (...)
  10. Richard Susskind, The Future of Law, Facing Challenges of Information Technology.Oskamp Anja - 1999 - Artificial Intelligence and Law 7 (4):387-391.
  11. The Robot Didn't Do It: A Position Paper for the Workshop on Anticipatory Ethics, Responsibility and Artificial Agents.Ronald C. Arkin - 2013 - Workshop on Anticipatory Ethics, Responsibility and Artificial Agents 2013.
    This position paper addresses the issue of responsibility in the use of autonomous robotic systems. We are nowhere near autonomy in the philosophical sense, i.e., where there exists free agency and moral culpability for a non-human artificial agent. Sentient robots and the singularity are not concerns in the near to mid-term. While agents such as corporations can be held legally responsible for their actions, such entities consist of organizations under the direct control of humans. Intelligent robots, by virtue of their autonomous (...)
  12. Humans and Hosts in Westworld: What's the Difference?Marcus Arvan - 2018 - In James South & Kimberly Engels (eds.), Westworld and Philosophy. Oxford: Wiley-Blackwell. pp. 26-38.
    This chapter argues there are many hints in the dialogue, plot, and physics of the first season of Westworld that the events in the show do not take place within a theme park, but rather in a virtual reality (VR) world that people "visit" to escape the "real world." The philosophical implications I draw are several. First, to be simulated is to be real: simulated worlds are every bit as real as "the real world", and simulated people (hosts) are every (...)
  13. Can Artificial Intelligences Suffer From Mental Illness? A Philosophical Matter to Consider.Hutan Ashrafian - 2017 - Science and Engineering Ethics 23 (2):403-412.
    The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have either robot or human mental disorders, (...)
  14. The Moral Status of Artificial Life.Bernard Baertschi - 2012 - Environmental Values 21 (1):5 - 18.
    Recently at the J. Craig Venter Institute, a microorganism has been created through synthetic biology. In the future, more complex living beings will very probably be produced. In our natural environment, we live amongst a whole variety of beings. Some of them have moral status (they have a moral importance and we cannot treat them in just any way we please); some do not. When it becomes possible to create artificially living beings who naturally possess moral status, will (...)
  15. Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines.John Basl - 2014 - Philosophy and Technology 27 (1):79-96.
    In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they (...)
  16. The Ethics of Creating Artificial Consciousness.John Basl - 2013 - APA Newsletter on Philosophy and Computers 13 (1):23-29.
  17. Computers, Postmodernism and the Culture of the Artificial.Colin Beardon - 1994 - AI and Society 8 (1):1-16.
    The term ‘the artificial’ can only be given a precise meaning in the context of the evolution of computational technology and this in turn can only be fully understood within a cultural setting that includes an epistemological perspective. The argument is illustrated in two case studies from the history of computational machinery: the first calculating machines and the first programmable computers. In the early years of electronic computers, the dominant form of computing was data processing which was a reflection of (...)
  18. Social Robots-Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction.Barbara Becker - 2006 - International Review of Information Ethics 6:37-45.
    The construction of embodied conversational agents - robots as well as avatars - seems to be a new challenge in the field of both cognitive AI and human-computer-interface development. On the one hand, one aims at gaining new insights into the development of cognition and communication by constructing intelligent, physically instantiated artefacts. On the other hand, people are driven by the idea that humanlike mechanical dialog-partners will have a positive effect on human-machine-communication. In this contribution I put up for discussion whether (...)
  19. Considerations About the Relationship Between Animal and Machine Ethics.Oliver Bendel - 2016 - AI and Society 31 (1):103-108.
  20. Autonomous Machine Agency.Don Berkich - 2002 - Dissertation, University of Massachusetts Amherst
    Is it possible to construct a machine that can act of its own accord? There are a number of skeptical arguments which conclude that autonomous machine agency is impossible. Yet if autonomous machine agency is impossible, then serious doubt is cast on the possibility of autonomous human action, at least on the widely held assumption that some form of materialism is true. The purpose of this dissertation is to show that autonomous machine agency is possible, thereby showing that the autonomy (...)
  21. Robots, Ethics and Language.Ingrid Björk & Iordanis Kavathatzopoulos - 2015 - Acm Sigcas Computers and Society 45 (3):270-273.
  22. Intelligence Unbound: The Future of Uploaded and Machine Minds.Russell Blackford & Damien Broderick (eds.) - 2014 - Wiley-Blackwell.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  23. Imitation Games: Turing, Menard, Van Meegeren. [REVIEW]Brian P. Bloomfield & Theo Vurdubakis - 2003 - Ethics and Information Technology 5 (1):27-38.
    For many, the very idea of an artificial intelligence has always been ethically troublesome. The putative ability of machines to mimic human intelligence appears to call into question the stability of taken-for-granted boundaries between subject/object, identity/similarity, free will/determinism, reality/simulation, etc. The artificially intelligent object thus appears to threaten the human subject with displacement and redundancy. This article takes as its starting point Alan Turing's famous 'imitation game' (the so-called 'Turing Test'), here treated as a parable of the encounter between human original and machine copy – the born and the made. The cultural (...)
  24. Ethical Issues in Advanced Artificial Intelligence.Nick Bostrom - manuscript
    The ethical issues related to the possible future creation of machines with general intellectual capabilities far outstripping those of humans are quite distinct from any ethical problems arising in current automation and information systems. Such superintelligence would not be just another technological development; it would be the most important invention ever made, and would lead to explosive progress in all scientific and technological fields, as the superintelligence would conduct research with superhuman efficiency. To the extent that ethics is a cognitive (...)
  25. When Machines Outsmart Humans.Nick Bostrom - manuscript
    Artificial intelligence is a possibility that should not be ignored in any serious thinking about the future, and it raises many profound issues for ethics and public policy that philosophers ought to start thinking about. This article outlines the case for thinking that human-level machine intelligence might well appear within the next half century. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, (...)
  26. Transhumanist Values.Nick Bostrom - 2005 - Journal of Philosophical Research 30 (Supplement):3-14.
    Transhumanism is a loosely defined movement that has developed gradually over the past two decades. It promotes an interdisciplinary approach to understanding and evaluating the opportunities for enhancing the human condition and the human organism opened up by the advancement of technology. Attention is given to both present technologies, like genetic engineering and information technology, and anticipated future ones, such as molecular nanotechnology and artificial intelligence.
  27. Philosophical Signposts for Artificial Moral Agent Frameworks.Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical (...)
  28. Intelligence Unbound.Damien Broderick (ed.) - 2014 - Wiley.
    _Intelligence Unbound_ explores the prospects, promises, and potential dangers of machine intelligence and uploaded minds in a collection of state-of-the-art essays from internationally recognized philosophers, AI researchers, science fiction authors, and theorists. Compelling and intellectually sophisticated exploration of the latest thinking on Artificial Intelligence and machine minds Features contributions from an international cast of philosophers, Artificial Intelligence researchers, science fiction authors, and more Offers current, diverse perspectives on machine intelligence and uploaded minds, emerging topics of tremendous interest Illuminates the nature (...)
  29. On the Legal Responsibility of Autonomous Machines.Bartosz Brożek & Marek Jakubiec - 2017 - Artificial Intelligence and Law 25 (3):293-304.
    The paper concerns the problem of the legal responsibility of autonomous machines. In our opinion it boils down to the question of whether such machines can be seen as real agents through the prism of folk psychology. We argue that autonomous machines cannot be granted the status of legal agents. Although this is quite possible from a purely technical point of view, since the law is a conventional tool of regulating social interactions and as such can accommodate various legislative constructs, including legal (...)
  30. Of, for, and by the People: The Legal Lacuna of Synthetic Persons.Joanna J. Bryson, Mihailis E. Diamantis & Thomas D. Grant - 2017 - Artificial Intelligence and Law 25 (3):273-291.
    Conferring legal personhood on purely synthetic entities is a very real legal possibility, one under consideration presently by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We (...)
  31. Artificial Moral Agents: Saviors or Destroyers? [REVIEW]Jeff Buechner - 2010 - Ethics and Information Technology 12 (4):363-370.
  32. Utopia Without Work? Myth, Machines and Public Policy.Edmund Byrne - 1985 - In P. T. Durbin (ed.), Research in Philosophy and Technology VIII. Greenwich, CT: JAI Press. pp. 133-148.
  33. R.U.R. - Rossum’s Universal Robots.Karel Čapek - 1920 - Aventinum.
    The play begins in a factory that makes artificial people, called roboti (robots), from synthetic organic matter. They seem happy to work for humans at first, but that changes, and a hostile robot rebellion leads to the extinction of the human race.
  34. Toward a Comparative Theory of Agents.Rafael Capurro - 2012 - AI and Society 27 (4):479-488.
    The purpose of this paper is to address some of the questions on the notion of agent and agency in relation to property and personhood. I argue that following the Kantian criticism of Aristotelian metaphysics, contemporary biotechnology and information and communication technologies bring about a new challenge—this time, with regard to the Kantian moral subject understood in the subject’s unique metaphysical qualities of dignity and autonomy. The concept of human dignity underlies the foundation of many democratic systems, particularly in Europe (...)
  35. Bridging the Responsibility Gap in Automated Warfare.Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us to deny responsibility for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  36. The Vital Machine: A Study of Technology and Organic Life.David F. Channell - 1991 - Oxford University Press.
    In 1738, Jacques Vaucanson unveiled his masterpiece before the court of Louis XV: a gilded copper duck that ate, drank, quacked, flapped its wings, splashed about, and, most astonishing of all, digested its food and excreted the remains. The imitation of life by technology fascinated Vaucanson's contemporaries. Today our technology is more powerful, but our fascination is tempered with apprehension. Artificial intelligence and genetic engineering, to name just two areas, raise profoundly disturbing ethical issues that undermine our most fundamental beliefs (...)
  37. Agencéité et responsabilité des agents artificiels.Louis Chartrand - 2017 - Éthique Publique 19 (2).
    Artificial agents and new information technologies, by virtue of their capacity to establish new dynamics of information transfer, have disruptive effects on epistemic ecosystems. Conceiving of responsibility for these upheavals is a considerable challenge: how can this concept account for its object in complex systems in which it is difficult to tie an action to an agent? This article presents an overview of the concept of an epistemic ecosystem and (...)
  38. Artificial Agents - Personhood in Law and Philosophy.Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it is not necessary or desirable to postulate artificial (...)
  39. Synthetic Biology and the Moral Significance of Artificial Life: A Reply to Douglas, Powell and Savulescu.Andreas Christiansen - 2016 - Bioethics 30 (5):372-379.
    I discuss the moral significance of artificial life within synthetic biology via a discussion of Douglas, Powell and Savulescu's paper 'Is the creation of artificial life morally significant’. I argue that the definitions of 'artificial life’ and of 'moral significance’ are too narrow. Douglas, Powell and Savulescu's definition of artificial life does not capture all core projects of synthetic biology or the ethical concerns that have been voiced, and their definition of moral significance fails to take into account the possibility (...)
  40. Responsibility and the Moral Phenomenology of Using Self-Driving Cars.Mark Coeckelbergh - 2016 - Applied Artificial Intelligence 30 (8):748-757.
    This paper explores how the phenomenology of using self-driving cars influences conditions for exercising and ascribing responsibility. First, a working account of responsibility is presented, which identifies two classic Aristotelian conditions for responsibility and adds a relational one, and which makes a distinction between responsibility for (what one does) and responsibility to (others). Then, this account is applied to a phenomenological analysis of what happens when we use a self-driving car and participate in traffic. It is argued that self-driving cars (...)
  41. Good Healthcare is in the “How”.Mark Coeckelbergh - 2015 - In Machine Medical Ethics. Springer. pp. 33-47.
    What do we mean by good healthcare, and do machines threaten it? If good care requires expertise, then what kind of expertise is this? If good care is “human” care, does this necessarily mean “non-technological” care? If not, then what should be the precise role of machines in medicine and healthcare? This chapter argues that good care relies on expert know-how and skills that enable care givers to care-fully engage with patients. Evaluating the introduction of new technologies such as robots (...)
  42. The Invisible Robots of Global Finance: Making Visible Machines, People, and Places.Mark Coeckelbergh - 2015 - Acm Sigcas Computers and Society 45 (3):287-289.
    One of the barriers for doing ethics of technology in the domain of finance is that financial technologies usually remain invisible. These hidden and unseen devices, machines, and infrastructures have to be revealed. This paper shows how the "robots" of finance, which function as distance technologies, are not only themselves invisible, but also hide people and places, which is ethically and politically problematic. Furthermore, "the market" appears as a ghostly artificial agent, again rendering humans invisible and making it difficult to (...)
  43. The Invisible Robots of Global Finance.Mark Coeckelbergh - 2015 - Acm Sigcas Computers and Society 45 (3):287-289.
    One of the barriers for doing ethics of technology in the domain of finance is that financial technologies usually remain invisible. These hidden and unseen devices, machines, and infrastructures have to be revealed. This paper shows how the “robots” of finance, which function as distance technologies, are not only themselves invisible, but also hide people and places, which is ethically and politically problematic. Furthermore, “the market” appears as a ghostly artificial agent, again rendering humans invisible and making it difficult to (...)
  44. The Tragedy of the Master: Automation, Vulnerability, and Distance.Mark Coeckelbergh - 2015 - Ethics and Information Technology 17 (3):219-229.
    Responding to long-standing warnings that robots and AI will enslave humans, I argue that the main problem we face is not that automation might turn us into slaves but, rather, that we remain masters. First I construct an argument concerning what I call ‘the tragedy of the master’: using the master–slave dialectic, I argue that automation technologies threaten to make us vulnerable, alienated, and automated masters. I elaborate the implications for power, knowledge, and experience. Then I critically discuss and question (...)
  45. David J. Gunkel: The Machine Question: Critical Perspectives on AI, Robots, and Ethics. [REVIEW]Mark Coeckelbergh - 2013 - Ethics and Information Technology 15 (3):235-238.
  46. E-Care as Craftsmanship: Virtuous Work, Skilled Engagement, and Information Technology in Health Care.Mark Coeckelbergh - 2013 - Medicine, Health Care and Philosophy 16 (4):807-816.
    Contemporary health care relies on electronic devices. These technologies are not ethically neutral but change the practice of care. In light of Sennett’s work and that of other thinkers (Dewey, Dreyfus, Borgmann) one worry is that “e-care”—care by means of new information and communication technologies—does not promote skilful and careful engagement with patients and hence is neither conducive to the quality of care nor to the virtues of the care worker. Attending to the kinds of knowledge involved in care work (...)
  47. Personal Robots, Appearance, and Human Good.Mark Coeckelbergh - 2009 - International Journal of Social Robotics 1 (3):217-221.
    The development of pet robots, toy robots, and sex robots suggests a near-future scenario of habitual living with 'personal' robots. How should we evaluate their potential impact on the quality of our lives and existence? In this paper, I argue for an approach to ethics of personal robots that advocates a methodological turn from robots to humans, from mind to interaction, from intelligent thinking to social-emotional being, from reality to appearance, from right to good, from external criteria to good internal to (...)
  48. Turing, Searle, and the Wizard of Oz: Life and Custom Among the Automata or How Ought We to Assess the Attribution of Capacities of Living Systems to Technological Artefacts?S. D. Noam Cook - 2010 - Techné: Research in Philosophy and Technology 14 (2):88-102.
    Since the middle of the 20th century there has been a significant debate about the attribution of capacities of living systems, particularly humans, to technological artefacts, especially computers—from Turing’s opening gambit, to subsequent considerations of artificial intelligence, to recent claims about artificial life. Some now argue that the capacities of future technologies will ultimately make it impossible to draw any meaningful distinctions between humans and machines. Such issues center on what sense, if any, it makes to claim that gadgets can (...)
  49. Turing, Searle, and the Wizard of Oz.S. D. Noam Cook - 2010 - Techne 14 (2):88-102.
    Since the middle of the 20th century there has been a significant debate about the attribution of capacities of living systems, particularly humans, to technological artefacts, especially computers—from Turing’s opening gambit, to subsequent considerations of artificial intelligence, to recent claims about artificial life. Some now argue that the capacities of future technologies will ultimately make it impossible to draw any meaningful distinctions between humans and machines. Such issues center on what sense, if any, it makes to claim that gadgets can (...)
  50. From Judgment to Calculation.Mike Cooley - 2007 - AI and Society 21 (4):395-409.
    We only regard a system or a process as being “scientific” if it displays the three predominant characteristics of the natural sciences: predictability, repeatability and quantifiability. This by definition precludes intuition, subjective judgement, tacit knowledge, heuristics, dreams, etc.; in other words, those attributes which are peculiarly human. Furthermore, this is resulting in a shift from judgment to calculation, giving rise, in some cases, to an abject dependency on the machine and an inability to disagree with the outcome or even question (...)