About this topic
Summary The moral status of artificial systems is an increasingly open question, owing to the growing ubiquity of increasingly intelligent machines. Questions range from those about the "smart" systems controlling traffic lights, guiding missiles, and counting votes, to questions about the degrees of responsibility due to semi-autonomous drones and their pilots given operating conditions at either end of the joystick, and finally to questions about the relative moral status of "fully autonomous" artificial agents, "Terminator"s and "Wall-E"s. Prior to the rise of intelligent machines, the issue may have seemed moot. Kant had made the status of anything that is not an end in itself very clear - it had a price, and you could buy and sell it. If its manufacture runs contrary to the categorical imperative, then it is immoral: there are no semi-autonomous flying missile launchers in the kingdom of ends, for example, so no Kantian moral agent could ever will their creation. Even earlier, after using a number of physical models to describe the dynamics of cognition in the Theaetetus, Socrates tells us that some things "have infinity within them" - i.e., they cannot be ascribed a limited value - and others do not. As machines exemplifying, and then embodying, capacities typically reserved to human beings are trained and learn (Kant, for example, famously writes that we know only human beings to be able to answer to moral responsibility), questions of robot psychology and motivation, of autonomy as a capacity for self-determination, and thus of political and moral status under conventional law become important. To date, established conventions have typically been taken as given, as engineers have focused mainly on delivering non-autonomous machines and other artificial systems as tools for industry. However, even with limited applications such as artificial companions and pets, interesting new issues have emerged.
For example, can a human being fall in love with a computer program of adequate complexity? What about a robot sex industry? Artificial nurses? If an artificial nurse refuses a human doctor's order to remove life support from a child because the parents cannot pay the medical bills, is the nurse a hero, or is it malfunctioning? Closer to the present, expert systems and the automation of transport, manufacturing, and logistics raise important moral questions about the role of artificial systems in the displacement of human workers and in public safety, as well as questions concerning the redirection of crucial natural resources to the maintenance of centrally controlled artificial systems at the expense of local human systems. Issues such as these make the relative status of widely distributed artificial systems an important area of discourse, especially where intelligent machine technologies - AI - are concerned. The recent use of drones in surveillance and in wars of aggression, and the research community's relationship to these end-user activities, raises the same ethical questions that faced the scientists who developed the nuclear bomb in the mid-20th century. Thus, questions about the moral status of artificial systems - especially "intelligent" and "intelligence" systems - arise from the perspectives of the potential product, of the engineer ultimately responsible (cf. the IEEE code of ethics for engineers), and of the "end-user" left to live in terms of the artificial systems so established. Finally, given the diverse fields confronting similar issues as increasingly intelligent machines are integrated into various aspects of daily life, discourse on the relative moral status of artificial systems promises to be an increasingly integrative one as well.
Contents
583 found
1 — 50 / 583
  1. The Moral Addressor Account of Moral Agency.Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
  2. The argument for near-term human disempowerment through AI.Leonard Dung - manuscript
Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: First, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    1 citation
  3. Is simulation a substitute for experimentation?Isabelle Peschard - manuscript
    It is sometimes said that simulation can serve as epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim (...)
    2 citations
  4. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  5. Consciousness, Machines, and Moral Status.Henry Shevlin - manuscript
    In light of the recent breakneck pace of machine learning, questions about whether near-future artificial systems might be conscious and possess moral status are increasingly pressing. This paper argues that as matters stand these debates lack any clear criteria for resolution via the science of consciousness. Instead, insofar as they are settled at all, it is likely to be via shifts in public attitudes brought about by the increasingly close relationships between humans and AI users. Section 1 of the paper I (...)
  6. Three mistakes in the moral mathematics of existential risk.David Thorstad - manuscript
    Longtermists have recently argued that it is overwhelmingly important to do what we can to mitigate existential risks to humanity. I consider three mistakes that are often made in calculating the value of existential risk mitigation: focusing on cumulative risk rather than period risk; ignoring background risk; and neglecting population dynamics. I show how correcting these mistakes pushes the value of existential risk mitigation substantially below leading estimates, potentially low enough to threaten the normative case for existential risk mitigation. I (...)
  7. The Point of Blaming AI Systems.Hannah Altehenger & Leonhard Menges - forthcoming - Journal of Ethics and Social Philosophy.
    As Christian List (2021) has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense (...)
  8. Supporting human autonomy in AI systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
    9 citations
  9. Sims and Vulnerability: On the Ethics of Creating Emulated Minds.Bartek Chomanski - forthcoming - Science and Engineering Ethics.
    It might become possible to build artificial minds with the capacity for experience. This raises a plethora of ethical issues, explored, among others, in the context of whole brain emulations (WBE). In this paper, I will take up the problem of vulnerability – given, for various reasons, less attention in the literature – that the conscious emulations will likely exhibit. Specifically, I will examine the role that vulnerability plays in generating ethical issues that may arise when dealing with WBEs. I (...)
  10. Anti-natalism and the creation of artificial minds.Bartek Chomanski - forthcoming - Journal of Applied Philosophy.
    Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, (...)
  11. If robots are people, can they be made for profit? Commercial implications of robot personhood.Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
    1 citation
  12. ChatGPT: towards an AI subjectivity.Kristian D'Amato - forthcoming - AI and Society.
    Motivated by the question of responsible AI and value alignment, I seek to offer a uniquely Foucauldian reconstruction of the problem as the emergence of an ethical subject in a disciplinary setting. This reconstruction contrasts with the strictly human-oriented programme typical to current scholarship that often views technology in instrumental terms. With this in mind, I problematise the concept of a technological subjectivity through an exploration of various aspects of ChatGPT in light of Foucault’s work, arguing that current systems lack (...)
  13. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
    1 citation
  14. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
    22 citations
  15. Reasons to Respond to AI Emotional Expressions.Rodrigo Díaz & Jonas Blatter - forthcoming - American Philosophical Quarterly.
    Human emotional expressions can communicate the emotional state of the expresser, but they can also communicate appeals to perceivers. For example, sadness expressions such as crying request perceivers to aid and support, and anger expressions such as shouting urge perceivers to back off. Some contemporary artificial intelligence (AI) systems can mimic human emotional expressions in a (more or less) realistic way, and they are progressively being integrated into our daily lives. How should we respond to them? Do we have reasons (...)
  16. Understanding Artificial Agency.Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
  17. How to deal with risks of AI suffering.Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
    1 citation
  18. AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths?Mihailis E. Diamantis, Rebekah Cochran & Miranda Dam - forthcoming - In Technology Ethics: A Philosophical Introduction and Readings.
    This Chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. -/- Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a (...)
  19. Walking Through the Turing Wall.Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  20. The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights.Tobias Flattery - forthcoming - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  21. Ethics for artificial intellects.John Storrs Hall - forthcoming - Nanoethics: The Ethical and Social Implications of Nanotechnology.
  22. The Perfect Politician.Theodore M. Lechterman - forthcoming - In David Edmonds (ed.), AI Morality. Oxford: Oxford University Press.
    Ideas for integrating AI into politics are now emerging and advancing at an accelerating pace. This chapter highlights a few different varieties and shows how they reflect different assumptions about the value of democracy. We cannot make informed decisions about which, if any, proposals to pursue without further reflection on what makes democracy valuable and how current conditions fail to fully realize it. Recent advances in political philosophy provide some guidance but leave important questions open. If AI advances to a state (...)
  23. Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
    7 citations
  24. Medical AI: Is Trust Really the Issue?Jakob Thrane Mainz - forthcoming - Journal of Medical Ethics.
    I discuss an influential argument put forward by Joshua Hatherley. Drawing on influential philosophical accounts of inter-personal trust, Hatherley claims that medical Artificial Intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections to Hatherley’s claims: (1) (...)
  25. Conformism, Ignorance & Injustice: AI as a Tool of Epistemic Oppression.Martin Miragoli - forthcoming - Episteme: A Journal of Social Epistemology.
    From music recommendation to assessment of asylum applications, machine-learning algorithms play a fundamental role in our lives. Naturally, the rise of AI implementation strategies has brought to public attention the ethical risks involved. However, the dominant anti-discrimination discourse, too often preoccupied with identifying particular instances of harmful AIs, has yet to bring clearly into focus the more structural roots of AI-based injustice. This paper addresses the problem of AI-based injustice from a distinctively epistemic angle. More precisely, I argue that the (...)
  26. Will intelligent machines become moral patients?Parisa Moosavi - forthcoming - Philosophy and Phenomenological Research.
    This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are (...)
    2 citations
  27. Can we design artificial persons without being manipulative?Maciej Musiał - forthcoming - AI and Society:1-10.
    If we could build artificial persons (APs) with a moral status comparable to that of a typical human being, how should we design those APs in the right way? This question has been addressed mainly in terms of designing APs devoted to being servants and debated in reference to their autonomy and the harm they might experience. Recently, it has been argued that even if developing AP servants would neither deprive them of autonomy nor cause any net harm, then developing such (...)
  28. The political choreography of the Sophia robot: beyond robot rights and citizenship to political performances for the social robotics market.Jaana Parviainen & Mark Coeckelbergh - forthcoming - AI and Society.
    A humanoid robot named ‘Sophia’ has sparked controversy since it has been given citizenship and has done media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence. Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence and going beyond recent discussions on the moral status or legal personhood of (...)
    7 citations
  29. Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums.Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  30. Digital suffering: why it's a problem and how to prevent it.Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
    2 citations
  31. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Springer International Publishing.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
    1 citation
  32. Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are finding it difficult to answer questions like: “Do humanoid robots have the same rights as humans, and are those rights superior to human rights or not, and why?” This paper argues that the sustainability of human rights will be under question because, in the near future, the scientists (considerably the most rational people) will (...)
  33. Dubito Ergo Sum: Exploring AI Ethics.Viktor Dörfler & Giles Cuthbert - 2024 - Hicss 57: Hawaii International Conference on System Sciences, Honolulu, Hi.
    We paraphrase Descartes’ famous dictum in the area of AI ethics, suggesting “I doubt and therefore I am” as a necessary aspect of morality. On this view, AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which include the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  34. The Hardware Turn in the Digital Discourse: An Analysis, Explanation, and Potential Risk.Luciano Floridi - 2024 - Philosophy and Technology 37 (1):1-7.
  35. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - 2024 - Phenomenology and the Cognitive Sciences 23 (1):21-42.
    Advances in artificial intelligence (AI) create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. (1) To answer these questions, the paper (...)
    2 citations
  36. Decolonial AI as Disenclosure.Warmhold Jan Thomas Mollema - 2024 - Open Journal of Social Sciences 12 (2):574-603.
    The development and deployment of machine learning and artificial intelligence (AI) engender “AI colonialism”, a term that conceptually overlaps with “data colonialism”, as a form of injustice. AI colonialism is in need of decolonization for three reasons. Politically, because it enforces digital capitalism’s hegemony. Ecologically, as it negatively impacts the environment and intensifies the extraction of natural resources and consumption of energy. Epistemically, since the social systems within which AI is embedded reinforce Western universalism by imposing Western colonial values on (...)
  37. Thinking unwise: a relational u-turn.Nicholas Barrow - 2023 - In Social Robots in Social Institutions: Proceedings of RoboPhilosophy 2022.
    In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach to ascribing artificial moral status fails on two counts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is itself anthropocentric, in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly, in light of interpreting Gunkel's Relationism as (...)
  38. Artificial Dispositions: Investigating Ethical and Metaphysical Issues.William A. Bauer & Anna Marmodoro (eds.) - 2023 - Bloomsbury.
    We inhabit a world not only full of natural dispositions independent of human design, but also artificial dispositions created by our technological prowess. How do these dispositions, found in automation, computation, and artificial intelligence applications, differ metaphysically from their natural counterparts? This collection investigates artificial dispositions: what they are, the roles they play in artificial systems, and how they impact our understanding of the nature of reality, the structure of minds, and the ethics of emerging technologies. It is divided into (...)
  39. From AI for people to AI for the world and the universe.Seth D. Baum & Andrea Owe - 2023 - AI and Society 38 (2):679-680.
    Recent work in AI ethics often calls for AI to advance human values and interests. The concept of “AI for people” is one notable example. Though commendable in some respects, this work falls short by excluding the moral significance of nonhumans. This paper calls for a shift in AI ethics to more inclusive paradigms such as “AI for the world” and “AI for the universe”. The paper outlines the case for more inclusive paradigms and presents implications for moral philosophy and (...)
  40. Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  41. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making?Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
42. What is a subliminal technique? An ethical perspective on AI-driven influence.Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE Ethics-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases (...)
  43. Artificial Consciousness Is Morally Irrelevant.Bruce P. Blackshaw - 2023 - American Journal of Bioethics Neuroscience 14 (2):72-74.
    It is widely agreed that possession of consciousness contributes to an entity’s moral status, even if it is not necessary for moral status (Levy and Savulescu 2009). An entity is considered to have...
  44. Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  45. Should the State Prohibit the Production of Artificial Persons?Bartek Chomanski - 2023 - Journal of Libertarian Studies 27.
    This article argues that criminal law should not, in general, prevent the creation of artificially intelligent servants who achieve humanlike moral status, even though it may well be immoral to construct such beings. In defending this claim, a series of thought experiments intended to evoke clear intuitions is proposed, and presuppositions about any particular theory of criminalization or any particular moral theory are kept to a minimum.
  46. A Conditional Defense of the Use of Algorithms in Criminal Sentencing.Ken Daley - 2023 - Techné Research in Philosophy and Technology 27 (1):1-20.
    The presence of predictive AI has steadily expanded into ever-increasing aspects of civil society. I aim to show that despite reasons for believing the use of such systems is currently problematic, these worries give no indication of their future potential. I argue that the absence of moral limits on how we might manipulate automated systems, together with the likelihood that they are more easily manipulated in the relevant ways than humans, suggests that such systems will eventually outstrip the human ability (...)
  47. Moral Uncertainty and Our Relationships with Unknown Minds.John Danaher - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):482-495.
    We are sometimes unsure of the moral status of our relationships with other entities. Recent case studies in this uncertainty include our relationships with artificial agents (robots, assistant AI, etc.), animals, and patients with “locked-in” syndrome. Do these entities have basic moral standing? Could they count as true friends or lovers? What should we do when we do not know the answer to these questions? An influential line of reasoning suggests that, in such cases of moral uncertainty, we need meta-moral (...)
  48. Enhancing human agency through redress in Artificial Intelligence Systems.Rosanna Fanni, Valerie Eveline Steinkogler, Giulia Zampedri & Jo Pierson - 2023 - AI and Society 38 (2):537-547.
    Recently, scholars across disciplines raised ethical, legal and social concerns about the notion of human intervention, control, and oversight over Artificial Intelligence (AI) systems. This observation becomes particularly important in the age of ubiquitous computing and the increasing adoption of AI in everyday communication infrastructures. We apply Nicholas Garnham's conceptual perspective on mediation to users who are challenged both individually and societally when interacting with AI-enabled systems. One way to increase user agency are mechanisms to contest faulty or flawed AI (...)
  49. A principlist-based study of the ethical design and acceptability of artificial social agents.Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical principles (...)
  50. Encountering Artificial Intelligence: Ethical and Anthropological Reflections.Matthew J. Gaudet, Paul Scherz, Noreen Herzfeld, Jordan Joseph Wales, Nathan Colaner, Jeremiah Coogan, Mariele Courtois, Brian Cutter, David E. DeCosse, Justin Charles Gable, Brian Green, James Kintz, Cory Andrew Labrecque, Catherine Moon, Anselm Ramelow, John P. Slattery, Ana Margarita Vega, Luis G. Vera, Andrea Vicini & Warren von Eschenbach - 2023 - Eugene, OR: Pickwick Press.
    What does it mean to consider the world of AI through a Christian lens? Rapid developments in AI continue to reshape society, raising new ethical questions and challenging our understanding of the human person. Encountering Artificial Intelligence draws on Pope Francis’ discussion of a culture of encounter and broader themes in Catholic social thought in order to examine how current AI applications affect human relationships in various social spheres and offers concrete recommendations for better implementation. The document also explores questions (...)