About this topic
Summary: The moral status of artificial systems is an increasingly open question, given the growing ubiquity of ever more intelligent machine systems. The questions range widely: from the "smart" systems that control traffic lights, missile systems, and vote counting, to the degrees of responsibility due to semi-autonomous drones and their pilots given operating conditions at either end of the joystick, and finally to the relative moral status of "fully autonomous" artificial agents, the "Terminators" and "Wall-Es". Before the rise of intelligent machines, the issue may have seemed moot. Kant made the status of anything that is not an end in itself very clear: it has a price, and it can be bought and sold. If its manufacture runs contrary to the categorical imperative, it is immoral; for example, there are no semi-autonomous flying missile launchers in the kingdom of ends, so no Kantian moral agent could ever will their creation. Earlier still, after using a number of physical models to describe the dynamics of cognition in the Theaetetus, Socrates tells us that some things "have infinity within them", i.e. cannot be ascribed a limited value, while others do not. As machines that exemplify and then embody capacities typically reserved for human beings are trained and learn (Kant, famously, writes that we know only human beings to be able to answer to moral responsibility), questions of robot psychology and motivation, of autonomy as a capacity for self-determination, and thus of political and moral status under conventional law become important. To date, established conventions have typically been taken as given, since engineers have focused mainly on delivering non-autonomous machines and other artificial systems as tools for industry. Yet even in limited applications, such as artificial companions and pets, interesting new issues have emerged.
For example, can a human being fall in love with a computer program of adequate complexity? What about a robot sex industry? Artificial nurses? If an artificial nurse refuses a human doctor's order to remove life support from a child because the parents cannot pay the medical bills, is the nurse a hero, or is it malfunctioning? Closer to the present, expert systems and the automation of transport, manufacturing, and logistics raise important moral questions about the displacement of human workers and about public safety, as well as about the redirection of crucial natural resources to the maintenance of centrally controlled artificial systems at the expense of local human ones. Issues such as these make the relative status of widely distributed artificial systems an important area of discourse, especially where intelligent machine technologies (AI) are concerned. The recent use of drones in surveillance and wars of aggression, and the research community's relationship to these end-user activities, raise the same ethical questions that faced the scientists who developed the nuclear bomb in the mid-20th century. Questions about the moral status of artificial systems, especially "intelligent" and "intelligence" systems, thus arise from the perspectives of the potential product, the engineer ultimately responsible (cf. the IEEE code of ethics for engineers), and the end-user left to live with the artificial systems so established. Finally, given the diverse fields confronting similar issues as increasingly intelligent machines are integrated into various aspects of daily life, discourse on the relative moral status of artificial systems promises to be an increasingly integrative one as well.
471 found
  1. The Moral Addressor Account of Moral Agency.Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
  2. Is Simulation a Substitute for Experimentation?Isabelle Peschard - manuscript
    It is sometimes said that simulation can serve as epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim (...)
  3. Anthropomorphism and the Impact on the Perception and Implementation of AI Systems.Marie Oldfield -
Anthropomorphism has long been used as a way for humans to make sense of their surroundings. By converting abstract concepts into objects or concepts that we can relate to, we discover a common language with which we can communicate, i.e. one "by which one thing is described in terms of another". Anthropomorphism is grounded in multiple fields, such as sociology, psychology, neurology, and philosophy. This technique has been seen across history in such fields as religion, fables, and folk tales, where (...)
  4. Thinking Unwise: A Relational U-Turn.Nicholas Barrow - forthcoming - In Social Robots in Social Institutions: Proceedings of RoboPhilosophy 2022.
In this paper, I add to the recent flurry of research concerning the moral patiency of artificial beings. Focusing on David Gunkel's adaptation of Levinas, I identify and argue that the Relationist's extrinsic case-by-case approach of ascribing artificial moral status fails on two counts. Firstly, despite Gunkel's effort to avoid anthropocentrism, I argue that Relationism is itself anthropocentric, in virtue of how its case-by-case approach is, necessarily, assessed from a human perspective. Secondly, I, in light of interpreting Gunkel's Relationism as (...)
  5. Supporting Human Autonomy in AI Systems.Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  6. Anti-Natalism and the Creation of Artificial Minds.Bartek Chomanski - forthcoming - Journal of Applied Philosophy.
    Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, (...)
  7. If Robots Are People, Can They Be Made for Profit? Commercial Implications of Robot Personhood.Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  8. Freedom in an Age of Algocracy.John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  9. The Philosophical Case for Robot Friendship.John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  10. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. (...)
  11. Walking Through the Turing Wall.Albert Efimov - forthcoming - In Teces.
    Can the machines that play board games or recognize images only in the comfort of the virtual world be intelligent? To become reliable and convenient assistants to humans, machines need to learn how to act and communicate in the physical reality, just like people do. The authors propose two novel ways of designing and building Artificial General Intelligence (AGI). The first one seeks to unify all participants at any instance of the Turing test – the judge, the machine, the human (...)
  12. Ethics for Artificial Intellects.John Storrs Hall - forthcoming - Nanoethics: The Ethical and Social Implications of Nanotechnology.
  13. Rule by Automation: How Automated Decision Systems Promote Freedom and Equality.Athmeya Jayaram & Jacob Sparks - forthcoming - Moral Philosophy and Politics.
    Using automated systems to avoid the need for human discretion in government contexts – a scenario we call ‘rule by automation’ – can help us achieve the ideal of a free and equal society. Drawing on relational theories of freedom and equality, we explain how rule by automation is a more complete realization of the rule of law and why thinkers in these traditions have strong reasons to support it. Relational theories are based on the absence of human domination and (...)
  14. A Dilemma for Moral Deliberation in AI.Ryan Jenkins & Duncan Purves - forthcoming - International Journal of Applied Philosophy.
    Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decisionmaking to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision making apparatus for machines. Their concerns often center around the plausible claim that robots will lack many of the mental capacities that are (...)
  15. Quantum of Wisdom.Brett Karlan & Colin Allen - forthcoming - In Greg Viggiano (ed.), Quantum Computing and AI: Social, Ethical, and Geo-Political Implications. Toronto, ON, Canada: University of Toronto Press. pp. 1-6.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019), have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  16. Safety Requirements Vs. Crashing Ethically: What Matters Most for Policies on Autonomous Vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
  17. On the Moral Status of Social Robots: Considering the Consciousness Criterion.Kestutis Mosakas - forthcoming - AI and Society:1-15.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence. One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human (...)
  18. The Political Choreography of the Sophia Robot: Beyond Robot Rights and Citizenship to Political Performances for the Social Robotics Market.Jaana Parviainen & Mark Coeckelbergh - forthcoming - AI and Society.
    A humanoid robot named ‘Sophia’ has sparked controversy since it has been given citizenship and has done media performances all over the world. The company that made the robot, Hanson Robotics, has touted Sophia as the future of artificial intelligence. Robot scientists and philosophers have been more pessimistic about its capabilities, describing Sophia as a sophisticated puppet or chatbot. Looking behind the rhetoric about Sophia’s citizenship and intelligence and going beyond recent discussions on the moral status or legal personhood of (...)
  19. Mapping the Stony Road Toward Trustworthy AI: Expectations, Problems, Conundrums.Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  20. On and Beyond Artifacts in Moral Relations: Accounting for Power and Violence in Coeckelbergh’s Social Relationism.Fabio Tollon & Kiasha Naidoo - forthcoming - AI and Society:1-10.
    The ubiquity of technology in our lives and its culmination in artificial intelligence raises questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that it (...)
  21. Moral Zombies: Why Algorithms Are Not Moral Agents.Carissa Véliz - forthcoming - AI and Society:1-11.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  22. AI Extenders and the Ethics of Mental Health.Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Artificial Intelligence in Brain and Mental Health: Philosophical, Ethical & Policy Issues. Springer International Publishing.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  23. Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots.Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists are facing difficulties in answering questions like, "Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?" This paper argues that the sustainability of human rights will be under question because, in the near future, the scientists (considerably the most rational people) will (...)
  24. From Responsibility to Reason-Giving Explainable Artificial Intelligence.Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to (...)
  25. Sven Nyholm, Humans and Robots; Ethics, Agency and Anthropomorphism.Lydia Farina - 2022 - Journal of Moral Philosophy 19 (2):221-224.
    How should human beings and robots interact with one another? Nyholm’s answer to this question is given below in the form of a conditional: If a robot looks or behaves like an animal or a human being then we should treat them with a degree of moral consideration (p. 201). Although this is not a novel claim in the literature on ai ethics, what is new is the reason Nyholm gives to support this claim; we should treat robots that look (...)
  26. Theological Foundations for Moral Artificial Intelligence.Mark Graves - 2022 - Journal of Moral Theology 11 (Special Issue 1):182-211.
    The expanding social role and continued development of artificial intelligence (AI) needs theological investigation of its anthropological and moral potential. A pragmatic theological anthropology adapted for AI can characterize moral AI as experiencing its natural, social, and moral world through interpretations of its external reality as well as its self-reckoning. Systems theory can further structure insights into an AI social self that conceptualizes itself within Ignacio Ellacuria’s historical reality and its moral norms through Thomistic ideogenesis. This enables a conceptualization process (...)
  27. Artificial Intelligence and Moral Theology: A Conversation.Brian Patrick Green, Matthew J. Gaudet, Levi Checketts, Brian Cutter, Noreen Herzfeld, Cory Andrew Labrecque, Anselm Ramelow, Paul Scherz, Marga Vega, Andrea Vicini & Jordan Joseph Wales - 2022 - Journal of Moral Theology 11 (Special Issue 1):13-40.
  28. Basic Issues in AI Policy.Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Cham: Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  29. What Lies Behind AGI: Ethical Concerns Related to LLMs.Giada Pistilli - 2022 - Éthique Et Numérique 1 (1):59-68.
    This paper opens the philosophical debate around the notion of Artificial General Intelligence (AGI) and its application in Large Language Models (LLMs). Through the lens of moral philosophy, the paper raises questions about these AI systems' capabilities and goals, the treatment of humans behind them, and the risk of perpetuating a monoculture through language.
  30. The Hard Limit on Human Nonanthropocentrism.Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  31. The Ethics of Generating Posthumans: Philosophical and Theological Reflections on Bringing New Persons Into Existence.Trevor Stammers - 2022 - London, UK: Bloomsbury Academic.
    Is it possible, ethically speaking, to create posthuman and transhuman persons from a religious perspective? Who is responsible for post and transhuman creation? Can post and transhuman persons be morally accountable? Addressing such pressing ethical questions around post and transhuman creation, this volume considers the philosophical and theological arguments that define and stimulate contemporary debate. Contributors consider the full implications of creating post and transhuman beings by highlighting the role of new technologies in shaping new forms of consciousness, as well (...)
  32. The Hard Problem of AI Rights.Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  33. Prolegómenos a una ética para la robótica social.Júlia Pareto Boada - 2021 - Dilemata 34:71-87.
Social robotics has a high disruptive potential, for it expands the field of application of intelligent technology to practical contexts of a relational nature. Due to their capacity to "intersubjectively" interact with people, social robots can take over new roles in our daily activities, multiplying the ethical implications of intelligent robotics. In this paper, we offer some preliminary considerations for ethical reflection on social robotics, so as to clarify how to correctly orient critical-normative thinking in this arduous task. (...)
  34. The Mandatory Ontology of Robot Responsibility.Marc Champagne - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):448–454.
    Do we suddenly become justified in treating robots like humans by positing new notions like “artificial moral agency” and “artificial moral responsibility”? I answer no. Or, to be more precise, I argue that such notions may become philosophically acceptable only after crucial metaphysical issues have been addressed. My main claim, in sum, is that “artificial moral responsibility” betokens moral responsibility to the same degree that a “fake orgasm” betokens an orgasm.
  35. Liability for Robots: Sidestepping the Gaps.Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  36. Vertrouwen in de geneeskunde en kunstmatige intelligentie.Lily Frank & Michal Klincewicz - 2021 - Podium Voor Bioethiek 3 (28):37-42.
Artificial intelligence (AI) and machine-learning (ML) systems can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions may thus come to stand in for professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
  37. Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare.Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.) - 2021 - New York: Oxford University Press.
    The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. In this vacuum, the public has (...)
  38. In search of the moral status of AI: why sentience is a strong argument.Martin Gibert & Dominic Martin - 2021 - AI and Society 1:1-12.
    Is it OK to lie to Siri? Is it bad to mistreat a robot for our own pleasure? Under what condition should we grant a moral status to an artificial intelligence system? This paper looks at different arguments for granting moral status to an AI system: the idea of indirect duties, the relational argument, the argument from intelligence, the arguments from life and information, and the argument from sentience. In each but the last case, we find unresolved issues with the (...)
  39. Moral Control and Ownership in AI Systems.Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems are increasingly being used in multiple applications and receiving more attention from the public (...)
  40. Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - 2021 - AI and Society 36:473–486.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  41. The Moral Consideration of Artificial Entities: A Literature Review.Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  42. Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects.Nik Hynek & Anzhelika Solovyeva - 2021 - AI and Society 36 (1):79-99.
    The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with the approaching dramatic change in the nature of warfare. What becomes particularly important and evermore intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there has not been a global regime banning this technology, regulatory attempts at establishing a ban (...)
  43. The Question of Algorithmic Personhood and Being (Or: On the Tenuous Nature of Human Status and Humanity Tests in Virtual Spaces—Why All Souls Are ‘Necessarily’ Equal When Considered as Energy).Tyler Jaynes - 2021 - J (2571-8800) 3 (4):452-475.
    What separates the unique nature of human consciousness and that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is non-feasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they (...)
  44. On Human Genome Manipulation and Homo Technicus: The Legal Treatment of Non-Natural Human Subjects.Tyler L. Jaynes - 2021 - AI and Ethics 1 (3):331-345.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive (...)
  45. Tecno-especies: la humanidad que se hace a sí misma y los desechables.Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  46. Group Agency and Artificial Intelligence.Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights and (...)
  47. A Citizen's Guide to Artificial Intelligence.James Maclaurin, John Danaher, John Zerilli, Colin Gavaghan, Alistair Knott, Joy Liddicoat & Merel Noorman - 2021 - Cambridge, MA, USA: MIT Press.
    A concise but informative overview of AI ethics and policy. -/- Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring homeowners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? Authored by experts in fields (...)
  48. Rights for Robots: Artificial Intelligence, Animal and Environmental Law (2020) by Joshua Gellers. [REVIEW]Kamil Mamak - 2021 - Science and Engineering Ethics 27 (3):1-4.
  49. Artificial intelligence and moral rights.Martin Miernicki & Irene Ng - 2021 - AI and Society 36 (1):319-329.
    Whether copyrights should exist in content generated by an artificial intelligence is a frequently discussed issue in the legal literature. Most of the discussion focuses on economic rights, whereas the relationship of artificial intelligence and moral rights remains relatively obscure. However, as moral rights traditionally aim at protecting the author’s “personal sphere”, the question whether the law should recognize such protection in the content produced by machines is pressing; this is especially true considering that artificial intelligence is continuously further developed (...)
  50. Is It Time for Robot Rights? Moral Status in Artificial Entities.Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)