About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for the investigation of consciousness and of other aspects of cognition that contribute to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and to provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, with further implications for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by increasingly intelligent machines. This alone raises many moral and ethical concerns: for example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the ethics of AI. 
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. forthcoming, Tasioulas 2019
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020. See also https://plato.stanford.edu/entries/ethics-ai/
1296 found
Material to categorize
  1. Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule.Tyler L. Jaynes - 2020 - AI and Society 35 (2):343-354.
    The concept of artificial intelligence is not new nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Where there are several decades’ worth of writing on the concept of the legal status of computational artificial artefacts in the USA and elsewhere, (...)
Moral Status of Artificial Systems
  1. Welcoming Robots Into the Moral Circle: A Defence of Ethical Behaviourism.John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  2. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  3. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems.Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the engineering sense (...)
  4. Sulla moralità artificiale. Le decisioni delle macchine tra etica e diritto [On Artificial Morality: Machine Decisions between Ethics and Law].Daniela Tafani - 2020 - Rivista di Filosofia 1 (111):81-103.
    In the contemporary debate on artificial morality, the trolley problem has found a new field of application, in the “ethics of crashes” with self-driving cars. The paper aims to show that the trolley dilemma is out of place, in the context of automated traffic, not only with regard to the object of the dilemma (which human being should be sacrificed, in crashes with inevitable fatal consequences), but also with regard to the subject to whom it is up to decide. In (...)
  5. ‘I Interact Therefore I Am’: The Self as a Historical Product of Dialectical Attunement.Dimitris Bolis & Leonhard Schilbach - 2018 - Topoi:1-14.
    In this article, moving from being to becoming, we construe the ‘self’ as a dynamic process rather than as a static entity. To this end we draw on dialectics and Bayesian accounts of cognition. The former allows us to holistically consider the ‘self’ as the interplay between internalization and externalization and the latter to operationalize our suggestion formally. Internalization is considered here as the co-construction of bodily hierarchical models of the world and the organism, while externalization is taken as the (...)
  6. The Artificial View: Toward a Non-Anthropocentric Account of Moral Patiency.Fabio Tollon - forthcoming - Ethics and Information Technology.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  7. The Hard Problem of AI Rights.Adam J. Andreotta - forthcoming - AI and Society:1-14.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  8. How Do Technological Artefacts Embody Moral Values?Michael Klenk - forthcoming - Philosophy and Technology:1-20.
    According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the (...)
  9. Applying a Principle of Explicability to AI Research in Africa: Should We Do It?Mary Carman & Benjamin Rosman - forthcoming - Ethics and Information Technology.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  10. Ethics of Artificial Intelligence.Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  11. Autonomous Vehicles, Trolley Problems, and the Law.Stephen S. Wu - 2020 - Ethics and Information Technology 22 (1):1-13.
    Autonomous vehicles have the potential to save tens of thousands of lives, but legal and social barriers may delay or even deter manufacturers from offering fully automated vehicles and thereby cost lives that otherwise could be saved. Moral philosophers use “thought experiments” to teach us about what ethics might say about the ethical behavior of AVs. If a manufacturer designing an AV decided to make what it believes is an ethical choice to save a large group of lives by steering (...)
  12. Kevin Macnish: The Ethics of Surveillance: An Introduction: Routledge, London and New York, 2018, ISBN 978-1138643796, $45.95.Tony Doyle - 2020 - Ethics and Information Technology 22 (1):39-42.
  13. A Metacognitive Approach to Trust and a Case Study: Artificial Agency.Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in the line of a naturalized virtue epistemology. (Sosa, Carter) We advance a Bayesian model of two factors: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
  14. What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  15. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception of freedom as well as a broader conception of algocracy. Broadening the focus (...)
  16. Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions.Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo & Luciano Floridi - 2020 - Science and Engineering Ethics 26 (1):89-120.
    Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently (...)
  17. Gods of Transhumanism.Alex V. Halapsis - 2019 - Anthropological Measurements of Philosophical Research 16:78-90.
    The purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this school of thought, and to identify the possible limits of technology's interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine (...)
  18. A Misdirected Principle with a Catch: Explicability for AI.Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” :689–707, 2018). There (...)
  19. The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity.Adrian Mróz - 2019 - Kultura I Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomena of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem (...)
  20. Artificiële Intelligentie En Normatieve Ethiek: Wie is Verantwoordelijk Voor de Misdaden van LAWS?Lode Lauwaert - 2019 - Algemeen Nederlands Tijdschrift voor Wijsbegeerte 111 (4):585-603.
    Artificial intelligence and normative ethics: Who is responsible for the crimes of LAWS? In his text “Killer Robots”, Robert Sparrow holds that killer robots should be forbidden. This conclusion is based on two premises. The first is that attributive responsibility is a necessary condition for the permissibility of an action; the second premise is that the use of killer robots is accompanied by a responsibility gap. Although there are good reasons to conclude that killer robots should be banned, the article shows that Sparrow's (...)
  21. What’s Wrong with Designing People to Serve?Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
  22. The Disciplinary Power of Predictive Algorithms: A Foucauldian Perspective.Paul B. de Laat - 2019 - Ethics and Information Technology 21 (4):319-329.
    Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies many of its practitioners assert that “governance by discipline” has given way to “governance by risk”. The individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance to (...)
  23. Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  24. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  25. Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligences.Ali Reza Mazarian - 2019 - Journal of Philosophical Theological Research 21 (79):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences, questions such as whether such entities can be considered morally responsible and as having special rights. Recently, the contemporary philosopher of mind Eric Schwitzgebel has tried to defend the possibility of equal rights of AIs and human beings by designing a new argument. In this paper, after an introduction, the author reviews and analyzes the main argument and (...)
  26. AI Assistants and the Paradox of Internal Automaticity.William A. Bauer & Veljko Dubljević - forthcoming - Neuroethics:1-8.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)
  27. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - forthcoming - Philosophy and Technology:1-24.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. The Explainable AI research program aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory contributions. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of (...)
  28. The Problem of Superintelligence: Political, Not Technological.Wolfhart Totschnig - 2019 - AI and Society 34 (4):907-920.
    The thinkers who have reflected on the problem of a coming superintelligence have generally seen the issue as a technological problem, a problem of how to control what the superintelligence will do. I argue that this approach is probably mistaken because it is based on questionable assumptions about the behavior of intelligent agents and, moreover, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by (...)
  29. The Extended Corporate Mind: When Corporations Use AI to Break the Law.Mihailis Diamantis - forthcoming - North Carolina Law Review.
    Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. (...)
  30. Four Key Questions in Philosophy of Technology.Alexander V. Mikhailovski - 2019 - Epistemology and Philosophy of Science 56 (3):225-233.
    This article discusses Hans Poser’s new book “Homo creator”. It aims to open the philosophy of technology to ontological, epistemological and ethical problems. The keynote of the book is the conviction that technical creativity forms the core of engineering. Modal concepts such as possibility, necessity, contingency and reality are used in a systematic way to characterize technology. Technological artifacts essentially depend on a special type of interpretation. The central ontological problem consists in the fact that technology is based on (...)
  31. Other Minds, Other Intelligences: The Problem of Attributing Agency to Machines.Sven Nyholm - 2019 - Cambridge Quarterly of Healthcare Ethics 28 (4):592-598.
    John Harris discusses the problem of other minds, not as it relates to other human minds, but rather as it relates to artificial intelligences. He also discusses what might be called bilateral mind-reading: humans trying to read the minds of artificial intelligences and artificial intelligences trying to read the minds of humans. Lastly, Harris discusses whether super intelligent AI – if it could be created – should be afforded moral consideration, and also how we might convince super intelligent AI that (...)
  32. Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots.Minoru Asada - 2019 - Philosophies 4 (3):38.
    In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots. In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system that promotes the emergence of the concept of self scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in (...)
  33. When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract.Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  34. Why Friendly AIs Won’t Be That Friendly: A Friendly Reply to Muehlhauser and Bostrom.Robert James M. Boyles & Jeremiah Joven Joaquin - 2019 - AI and Society:1–3.
    In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom propose that for our species to survive the impending rise of superintelligent AIs, we need to ensure that they would be human-friendly. This discussion note offers a more natural but bleaker outlook: that in the end, if these AIs do arise, they won’t be that friendly.
  35. First Steps Towards an Ethics of Robots and Artificial Intelligence.John Tasioulas - 2019 - Journal of Practical Ethics 7 (1):61-95.
    This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to (...)
  36. Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  37. Reviewing Tests for Machine Consciousness.A. Elamrani & R. V. Yampolskiy - 2019 - Journal of Consciousness Studies 26 (5-6):35-64.
    The accelerating advances in the fields of neuroscience, artificial intelligence, and robotics have been garnering interest and raising new philosophical, ethical, or practical questions that depend on whether or not there may exist a scientific method of probing consciousness in machines. This paper provides an analytic review of the existing tests for machine consciousness proposed in the academic literature over the past decade, and an overview of the diverse scientific communities involved in this enterprise. The tests put forward in their (...)
  38. Artificial Intelligence and Environmental Ethics: Moral, Legal Right of Artificial Intelligence.Kim Myungsik - 2018 - Environmental Philosophy 25:5-30.
  39. The Future Impact of Artificial Intelligence on Humans and Human Rights.Steven Livingston & Mathias Risse - 2019 - Ethics and International Affairs 33 (2):141-158.
  40. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - AIES: AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
  41. Man as ‘Aggregate of Data’.Sjoukje van der Meulen & Max Bruinsma - 2019 - AI and Society 34 (2):343-354.
    Since the emergence of the innovative field of artificial intelligence in the 1960s, the late Hubert Dreyfus insisted on the ontological distinction between man and machine, human and artificial intelligence. In the different editions of his classic and influential book What computers can’t do, he posits that an algorithmic machine can never fully simulate the complex functioning of the human mind—not now, nor in the future. Dreyfus’ categorical distinctions between man and machine are still relevant today, but their relation has (...)
  42. Autonomous Weapons Systems, Killer Robots and Human Dignity.Amanda Sharkey - 2019 - Ethics and Information Technology 21 (2):75-87.
    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro, 2012; Docherty, 2014; Heyns, 2017; Ulgen, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher, 2016; Pop, 2018; Saxton, 2016). This paper critically examines the relationship between human dignity and autonomous weapons systems. Three main types of objection to AWS are identified: (i) arguments based on technology and the (...)
  43. Nonconscious Cognitive Suffering: Considering Suffering Risks of Embodied Artificial Intelligence.Steven Umbrello & Stefan Lorenz Sorgner - 2019 - Philosophies 4 (2):24.
    Strong arguments have been formulated that the computational limits of disembodied artificial intelligence (AI) will, sooner or later, be a problem that needs to be addressed. Similarly, convincing cases for how embodied forms of AI can exceed these limits makes for worthwhile research avenues. This paper discusses how embodied cognition brings with it other forms of information integration and decision-making consequences that typically involve discussions of machine cognition and similarly, machine consciousness. N. Katherine Hayles’s novel conception of nonconscious cognition in (...)
  44. The Oxford Handbook of Ethics of AI.Markus Dubber, Frank Pasquale & Sunit Das (eds.) - forthcoming - Oxford University Press.
    This 44-chapter volume tackles a quickly-evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context; at the same time, breaking new ground in taking on novel subjects and pursuing fresh approaches. The term "A.I." is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and (...)
  45. Genomic Obsolescence: What Constitutes an Ontological Threat to Human Nature?Michal Klincewicz & Lily Frank - 2019 - American Journal of Bioethics 19 (7):39-40.
  46. Superintelligence as Moral Philosopher.J. Corabi - 2017 - Journal of Consciousness Studies 24 (5-6):128-149.
    Non-biological superintelligent artificial minds are scary things. Some theorists believe that if they came to exist, they might easily destroy human civilization, even if destroying human civilization was not a high priority for them. Consequently, philosophers are increasingly worried about the future of human beings and much of the rest of the biological world in the face of the potential development of superintelligent AI. This paper explores whether the increased attention philosophers have paid to the dangers of superintelligent AI is (...)
  47. Artificial Intelligence and the Ethics of Human Extinction.T. Lorenc - 2015 - Journal of Consciousness Studies 22 (9-10):194-214.
    The potential long-term benefits and risks of technological progress in artificial intelligence and related fields are substantial. The risks include total human extinction as a result of unfriendly superintelligent AI, while the benefits include the liberation of human existence from death and suffering through mind uploading. One approach to mitigating the risk would be to engineer ethical principles into AI devices. However, this may not be possible, due to the nature of ethical agency. Even if it is possible, these principles, (...)
  48. The Picture of Artificial Intelligence and the Secularization of Thought.King-Ho Leung - 2019 - Political Theology 20 (6):457-471.
    This article offers a critical interpretation of Artificial Intelligence (AI) as a philosophical notion which exemplifies a secular conception of thinking. One way in which AI notably differs from the conventional understanding of “thinking” is that, according to AI, “intelligence” or “thinking” does not necessarily require “life” as a precondition: that it is possible to have “thinking without life.” Building on Charles Taylor’s critical account of secularity as well as Hubert Dreyfus’ influential critique of AI, this article offers a theological (...)
  49. Bias in Information, Algorithms, and Systems.Alan Rubel, Clinton Castro & Adam Pham - 2018 - In Jo Bates, Paul D. Clough, Robert Jäschke & Jahna Otterbacher (eds.), Proceedings of the International Workshop on Bias in Information, Algorithms, and Systems (BIAS). pp. 9-13.
    We argue that an essential element of understanding the moral salience of algorithmic systems requires an analysis of the relation between algorithms and agency. We outline six key ways in which issues of agency, autonomy, and respect for persons can conflict with algorithmic decision-making.