About this topic
Summary
The moral status of artificial systems is an increasingly open question as ever more intelligent machine systems become ubiquitous. Questions range from those about the "smart" systems controlling traffic lights, missile systems, and vote counting, to questions about the degrees of responsibility owed to semi-autonomous drones and their pilots given operating conditions at either end of the joystick, and finally to questions about the relative moral status of "fully autonomous" artificial agents, the Terminators and Wall-Es. Prior to the rise of intelligent machines, the issue may have seemed moot. Kant had made the status of anything that is not an end in itself very clear: it had a price, and you could buy and sell it. If its manufacture runs contrary to the categorical imperative, then it is immoral; for example, there are no semi-autonomous flying missile launchers in the kingdom of ends, so no Kantian moral agent could ever will their creation. Even earlier, after using a number of physical models to describe the dynamics of cognition in the Theaetetus, Socrates tells us that some things "have infinity within them," i.e. cannot be ascribed a limited value, while others do not. As machines exemplifying and then embodying capacities typically reserved to human beings are trained and learn (Kant, famously, writes that we know only human beings to be able to answer to moral responsibility), questions of robot psychology and motivation, of autonomy as a capacity for self-determination, and thus of political and moral status under conventional law become important.
To date, established conventions have typically been taken as given, as engineers have focused mainly on delivering non-autonomous machines and other artificial systems as tools for industry. However, even limited applications such as artificial companions and pets have raised interesting new issues. Can a human being fall in love with a computer program of adequate complexity? What about a robot sex industry? Artificial nurses? If an artificial nurse refuses a human doctor's order to remove life support from a child because his parents cannot pay the medical bills, is the nurse a hero, or is it malfunctioning?
Closer to the moment, expert systems and the automation of transport, manufacturing, and logistics raise important moral questions about the role of artificial systems in the displacement of human workers and in public safety, as well as questions concerning the redirection of crucial natural resources to the maintenance of centrally controlled artificial systems at the expense of local human systems. Issues such as these make the relative status of widely distributed artificial systems an important area of discourse, especially where intelligent machine technologies (AI) are concerned. The recent use of drones in surveillance and wars of aggression, and the relationship of the research community to these end-user activities, raises the same ethical questions that faced the scientists who developed the nuclear bomb in the mid-20th century. Thus, questions about the moral status of artificial systems, especially "intelligent" and "intelligence" systems, arise from the perspectives of the potential product, of the engineer ultimately responsible (cf. the IEEE code of ethics for engineers), and of the "end-user" left to live in terms of the artificial systems so established.
Finally, given the diverse fields confronting similar issues as increasingly intelligent machines are integrated into daily life, discourse on the relative moral status of artificial systems promises to become increasingly integrative as well.
Contents
515 found
1 — 50 / 515
  1. The Moral Addressor Account of Moral Agency.Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
  2. Is simulation a substitute for experimentation?Isabelle Peschard - manuscript
    It is sometimes said that simulation can serve as epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim (...)
  3. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  4. Anti-natalism and the creation of artificial minds.Bartek Chomanski - forthcoming - Journal of Applied Philosophy.
    Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, (...)
  5. If robots are people, can they be made for profit? Commercial implications of robot personhood.Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  6. Sims and Vulnerability: On the Ethics of Creating Emulated Minds.Bartek Chomanski - forthcoming - Science and Engineering Ethics.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  7. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  8. Moral Uncertainty and Our Relationships with Unknown Minds.John Danaher - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI etc), animals, and patients with ‘locked in’ syndrome. Do these entities have basic moral standing? Could they count as true friends or intimate partners? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we (...)
  9. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  10. AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths?Mihailis E. Diamantis, Rebekah Cochran & Miranda Dam - forthcoming - In Technology Ethics: A Philosophical Introduction and Readings.
    This Chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a (...)
  11. Walking Through the Turing Wall.Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  12. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - forthcoming - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  13. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - forthcoming - Phenomenology and the Cognitive Sciences:1-22.
    Advances in artificial intelligence create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions of whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity, and thus quasi-personal status, to them beyond a certain level of simulation; and what the impact of an increasing dissolution of the distinction between simulated and real encounters will be. To answer these questions, the paper argues that (...)
  14. Ethics for artificial intellects.John Storrs Hall - forthcoming - Nanoethics: The Ethical and Social Implications of Nanotechnology.
  15. Quantum of Wisdom.Brett Karlan & Colin Allen - forthcoming - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. Hoboken, NJ: Wiley-Blackwell. pp. 1-6.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  16. Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
  17. The political choreography of the Sophia robot: beyond robot rights and citizenship to political performances for the social robotics market.Jaana Parviainen & Mark Coeckelbergh - forthcoming - AI and Society.
    A humanoid robot named ‘Sophia’ has sparked controversy since it has been given citizenship and has done media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence. Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence and going beyond recent discussions on the moral status or legal personhood of (...)
  18. Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums.Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  19. Digital suffering: why it's a problem and how to prevent it.Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
  20. On and beyond artifacts in moral relations: accounting for power and violence in Coeckelbergh’s social relationism.Fabio Tollon & Kiasha Naidoo - forthcoming - AI and Society:1-10.
    The ubiquity of technology in our lives and its culmination in artificial intelligence raises questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that it (...)
  21. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Springer International Publishing.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  22. Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists face difficulties in answering questions like, "Do humanoid robots have the same rights as humans, and are those rights superior to human rights or not, and why?" This paper argues that the sustainability of human rights will be under question because, in the near future, scientists (arguably the most rational people) will (...)
  23. Thinking unwise: a relational u-turn.Nicholas Barrow - 2023 - In Social Robots in Social Institutions: Proceedings of RoboPhilosophy 2022.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two accounts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is, itself, anthropocentric in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly I, in light of interpreting Gunkel's Relationism as (...)
  24. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making?Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
  25. Should the State Prohibit the Production of Artificial Persons?Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  26. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  27. Embodied Experience in Socially Participatory Artificial Intelligence.Mark Graves - 2023 - Zygon.
    As artificial intelligence (AI) becomes progressively more engaged with society, its shift from technical tool to participating in society raises questions about AI personhood. Drawing upon developmental psychology and systems theory, a mediating structure for AI proto-personhood is defined analogous to an early stage of human development. The proposed AI bridges technical, psychological, and theological perspectives on near-future AI and is structured by its hardware, software, computational, and sociotechnical systems through which it experiences its world as embodied (even for putatively (...)
  28. The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns.Elisabeth Hildt - 2023 - American Journal of Bioethics Neuroscience 14 (2):58-71.
    Can machines be conscious and what would be the ethical implications? This article gives an overview of current robotics approaches toward machine consciousness and considers factors that hamper an understanding of machine consciousness. After addressing the epistemological question of how we would know whether a machine is conscious and discussing potential advantages of potential future machine consciousness, it outlines the role of consciousness for ascribing moral status. As machine consciousness would most probably differ considerably from human consciousness, several complex questions (...)
  29. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Swansea: Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  30. The value of responsibility gaps in algorithmic decision-making.Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  31. Information Communication Technology.Christopher Quintana - 2023 - In Mortimer Sellers & Stephan Kirste (eds.), Encyclopedia of the Philosophy of Law and Social Philosophy. Springer Dordrecht.
    This encyclopedia entry provides an introductory examination of information communication technology (ICT) as a subject of moral, social, and legal analysis. The entry begins with a survey of philosophical perspectives on human-computer interaction such as the moral agency of artifacts, mediation theory, trans- or posthumanism, and extension theory. The entry then turns to survey normative and epistemic issues in ICT, including the nature of socially disruptive technology, the outsourcing of human capabilities, privacy, echo chambers, epistemic bubbles, and the effect of ICTs (...)
  32. Non-Human Moral Status: Problems with Phenomenal Consciousness.Joshua Shepherd - 2023 - American Journal of Bioethics Neuroscience 14 (2):148-157.
    Consciousness-based approaches to non-human moral status maintain that consciousness is necessary for (some degree or level of) moral status. While these approaches are intuitive to many, in this paper I argue that the judgment that consciousness is necessary for moral status is not secure enough to guide policy regarding non-humans, and that policies responsive to the moral status of non-humans should take seriously the possibility that psychological features independent of consciousness are sufficient for moral status. Further, I illustrate some practical consequences (...)
  33. Who is controlling whom? Reframing “meaningful human control” of AI systems in security.Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human (...)
  34. Varieties of Artificial Moral Agency and the New Control Problem.Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  35. From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  36. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We encode (...)
  37. Why the Epistemic Objection Against Using Sentience as Criterion of Moral Status is Flawed.Leonard Dung - 2022 - Science and Engineering Ethics 28 (6):1-15.
    According to a common view, sentience is necessary and sufficient for moral status. In other words, whether a being has intrinsic moral relevance is determined by its capacity for conscious experience. The epistemic objection derives from our profound uncertainty about sentience. According to this objection, we cannot use sentience as a criterion to ascribe moral status in practice because we won’t know in the foreseeable future which animals and AI systems are sentient, while ethical questions regarding the possession of moral (...)
  38. Sven Nyholm, Humans and Robots; Ethics, Agency and Anthropomorphism.Lydia Farina - 2022 - Journal of Moral Philosophy 19 (2):221-224.
    How should human beings and robots interact with one another? Nyholm’s answer to this question is given below in the form of a conditional: if a robot looks or behaves like an animal or a human being, then we should treat it with a degree of moral consideration (p. 201). Although this is not a novel claim in the literature on AI ethics, what is new is the reason Nyholm gives to support it: we should treat robots that look (...)
  39. Are superintelligent robots entitled to human rights?John-Stewart Gordon - 2022 - Ratio 35 (3):181-193.
  40. Theological Foundations for Moral Artificial Intelligence.Mark Graves - 2022 - Journal of Moral Theology 11 (Special Issue 1):182-211.
    The expanding social role and continued development of artificial intelligence (AI) needs theological investigation of its anthropological and moral potential. A pragmatic theological anthropology adapted for AI can characterize moral AI as experiencing its natural, social, and moral world through interpretations of its external reality as well as its self-reckoning. Systems theory can further structure insights into an AI social self that conceptualizes itself within Ignacio Ellacuria’s historical reality and its moral norms through Thomistic ideogenesis. This enables a conceptualization process (...)
  41. Artificial Intelligence and Moral Theology: A Conversation.Brian Patrick Green, Matthew J. Gaudet, Levi Checketts, Brian Cutter, Noreen Herzfeld, Cory Andrew Labrecque, Anselm Ramelow, Paul Scherz, Marga Vega, Andrea Vicini & Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):13-40.
  42. Ex Machina: Testing Machines for Consciousness and Socio-Relational Machine Ethics.Harrison S. Jackson - 2022 - Journal of Science Fiction and Philosophy 5.
    Ex Machina is a 2014 science-fiction film written and directed by Alex Garland, centered around the creation of a human-like artificial intelligence (AI) named Ava. The plot focuses on testing Ava for consciousness by offering a unique reinterpretation of the Turing Test. The film offers an excellent thought experiment demonstrating the consequences of various approaches to a potentially conscious AI. In this paper, I will argue that intelligence testing has significant epistemological shortcomings that necessitate an ethical approach not reliant on (...)
  43. Rule by Automation: How Automated Decision Systems Promote Freedom and Equality.Athmeya Jayaram & Jacob Sparks - 2022 - Moral Philosophy and Politics 9 (2):201-218.
    Using automated systems to avoid the need for human discretion in government contexts – a scenario we call ‘rule by automation’ – can help us achieve the ideal of a free and equal society. Drawing on relational theories of freedom and equality, we explain how rule by automation is a more complete realization of the rule of law and why thinkers in these traditions have strong reasons to support it. Relational theories are based on the absence of human domination and (...)
  44. Automating anticorruption?María Carolina Jiménez & Emanuela Ceva - 2022 - Ethics and Information Technology 24 (4):1-14.
    The paper explores some normative challenges concerning the integration of Machine Learning (ML) algorithms into anticorruption in public institutions. The challenges emerge from the tensions between an approach treating ML algorithms as allies to an exclusively legalistic conception of anticorruption and an approach seeing them within an institutional ethics of office accountability. We explore two main challenges. One concerns the variable opacity of some ML algorithms, which may affect public officeholders’ capacity to account for institutional processes relying upon ML techniques. (...)
  45. The Illusion of Agency in Human–Computer Interaction.Michael Madary - 2022 - Neuroethics 15 (1):1-15.
    This article makes the case that our digital devices create illusions of agency. There are times when users feel as if they are in control when in fact they are merely responding to stimuli on the screen in predictable ways. After the introduction, the second section of the article offers examples of illusions of agency that do not involve human–computer interaction in order to show that such illusions are possible and not terribly uncommon. The third and fourth sections of the (...)
  46. Basic issues in AI policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Cham: Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  47. Transparent human – (non-) transparent technology? The Janus-faced call for transparency in AI-based health care technologies.Tabea Ott & Peter Dabrock - 2022 - Frontiers in Genetics 13.
    The use of Artificial Intelligence and Big Data in health care opens up new opportunities for the measurement of the human. Their application aims not only at gathering more and better data points but also at doing so less invasively. With this change in health care towards its extension to almost all areas of life, and its increasing invisibility and opacity, new questions of transparency arise. While the complex human-machine interactions involved in deploying and using AI tend to become non-transparent, (...)
  48. Mass Surveillance, Behavioural Control, and Psychological Coercion: The Moral Ethical Risks in Commercial Devices.Yang Immanuel Pachankis - 2022 - In David C. Wyld & Dhinaharan Nagamalai (eds.), Computer Science and Information Technology. Chennai, India: pp. 151-168.
    The research observed, in parallel and comparatively, a surveillance state’s use of communication & cyber networks with satellite applications for power political & realpolitik purposes, in contrast to the outer space security & legit scientific purpose driven cybernetics. The research adopted a psychoanalytic & psychosocial method of observation in the organizational behaviors of the surveillance state, and a theoretical physics, astrochemical, & cosmological feedback method in the contrast group of cybernetics. Military sociology and multilateral movements were adopted in the diagnostic (...)
  49. What lies behind AGI: ethical concerns related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  50. The hard limit on human nonanthropocentrism.Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.