About this topic
Summary Ethical issues associated with AI are proliferating and rising to popular attention as intelligent machines become ubiquitous. For example, AIs can and do model aspects essential to moral agency, and so offer tools for investigating consciousness and other aspects of cognition that contribute to moral status (whether ascribed or achieved). This has deep implications for our understanding of moral agency, and so for systems of ethics meant to account for and to provide for the development of such capacities. It also raises the prospect of responsible and/or blameworthy AIs operating openly in general society, with deep implications again for systems of ethics, which must accommodate moral AIs. Consider also that human social infrastructure (e.g. energy grids, mass-transit systems) is increasingly moderated by increasingly intelligent machines. This alone raises many moral and ethical concerns. For example, who or what is responsible in the case of an accident due to system error, to design flaws, or to proper operation outside of anticipated constraints? Finally, as AIs become increasingly intelligent, there is legitimate concern over the potential for AIs to manage human systems according to AI values, rather than as directly programmed by human designers. These issues often bear on the long-term safety of intelligent systems, not only for individual human beings but for the human race and life on Earth as a whole. These issues and many others are central to the ethics of AI.
Key works Bostrom manuscript, Müller 2014, Müller 2016, Etzioni & Etzioni 2017, Dubber et al. forthcoming, Tasioulas 2019, Müller forthcoming
Introductions Müller 2013, Gunkel 2012, Coeckelbergh 2020; see also https://plato.stanford.edu/entries/ethics-ai/
Material to categorize
  1. Measuring the Biases That Matter: The Ethical and Causal Foundations for Measures of Fairness in Algorithms.Jonathan Herington & Bruce Glymour - 2019 - Proceedings of the Conference on Fairness, Accountability, and Transparency 2019:269-278.
    Measures of algorithmic bias can be roughly classified into four categories, distinguished by the conditional probabilistic dependencies to which they are sensitive. First, measures of "procedural bias" diagnose bias when the score returned by an algorithm is probabilistically dependent on a sensitive class variable (e.g. race or sex). Second, measures of "outcome bias" capture probabilistic dependence between class variables and the outcome for each subject (e.g. parole granted or loan denied). Third, measures of "behavior-relative error bias" capture probabilistic dependence between (...)
  2. Measuring Fairness in an Unfair World.Jonathan Herington - 2020 - Proceedings of AAAI/ACM Conference on AI, Ethics, and Society 2020:286-292.
    Computer scientists have made great strides in characterizing different measures of algorithmic fairness, and showing that certain measures of fairness cannot be jointly satisfied. In this paper, I argue that the three most popular families of measures - unconditional independence, target-conditional independence and classification-conditional independence - make assumptions that are unsustainable in the context of an unjust world. I begin by introducing the measures and the implicit idealizations they make about the underlying causal structure of the contexts in which they (...)
  3. Explicability of Artificial Intelligence in Radiology: Is a Fifth Bioethical Principle Conceptually Necessary?Frank Ursin, Cristian Timmermann & Florian Steger - forthcoming - Bioethics.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  4. AI for Social Good, AI for Datong.Pak-Hang Wong - 2021 - Informatio 26 (1):42-57.
    The Chinese government and technology companies assume a proactive stance towards digital technologies and AI and their roles in users’—and more generally, people’s—lives. This vision of ‘Tech for Good’, i.e., the development of good digital technologies and AI or the application of them for good, is also shared by major technology companies in the globe, e.g., Google, Microsoft, and Facebook. Interestingly, these initiatives have invited a number of critiques for their feasibility and desirability, particularly in relation to the social and (...)
  5. “What is My Purpose?” Artificial Sentience Having an Existential Crisis in Rick and Morty.Alexander Maxwell - 2021 - Journal of Science Fiction and Philosophy 4.
    The American television show Rick and Morty, an animated science fiction sitcom, critiques speciesism in the context of bleak existentialist philosophy. Though the show focuses primarily on human characters, it also depicts various forms of artificial sentience, such as robots or clones, undergoing existential crises. It explicitly effaces any distinction between human sentience and artificial sentience, forcefully treating all sentient life with an equivalent respect (or disrespect). The show also problematizes human speciesism in relationship to terrestrial and extra-terrestrial life.
Moral Status of Artificial Systems
  1. Artificial Moral Patients: Mentality, Intentionality, and Systematicity.Howard Nye & Tugba Yoldas - 2021 - International Review of Information Ethics 29:1-10.
    In this paper, we defend three claims about what it will take for an AI system to be a basic moral patient to whom we can owe duties of non-maleficence not to harm her and duties of beneficence to benefit her: (1) Moral patients are mental patients; (2) Mental patients are true intentional systems; and (3) True intentional systems are systematically flexible. We suggest that we should be particularly alert to the possibility of such systematically flexible true intentional systems developing (...)
  2. Existential Risk From AI and Orthogonality: Can We Have It Both Ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio:1-12.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot be (...)
  3. Anti-Natalism and the Creation of Artificial Minds.Bartek Chomanski - forthcoming - Journal of Applied Philosophy.
    Must opponents of creating conscious artificial agents embrace anti-natalism? Must anti-natalists be against the creation of conscious artificial agents? This article examines three attempts to argue against the creation of potentially conscious artificial intelligence (AI) in the context of these questions. The examination reveals that the argumentative strategy each author pursues commits them to the anti-natalist position with respect to procreation; that is to say, each author's argument, if applied consistently, should lead them to embrace the conclusion that procreation is, (...)
  4. Do Others Mind? Moral Agents Without Mental States.Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  5. Tecno-especies: la humanidad que se hace a sí misma y los desechables.Mateja Kovacic & María G. Navarro - 2021 - Bajo Palabra. Revista de Filosofía 27 (II Epoca):45-62.
    Popular culture continues fuelling public imagination with things, human and non-human, that we might become or confront. Besides robots, other significant tropes in popular fiction that generated images include non-human humans and cyborgs, wired into historically varying sociocultural realities. Robots and artificial intelligence are re-defining the natural order and its hierarchical structure. This is not surprising, as natural order is always in flux, shaped by new scientific discoveries, especially the reading of the genetic code, that reveal and redefine relationships between (...)
  6. The Moral Addressor Account of Moral Agency.Dorna Behdadi - manuscript
    According to the practice-focused approach to moral agency, a participant stance towards an entity is warranted by the extent to which this entity qualifies as an apt target of ascriptions of moral responsibility, such as blame. Entities who are not eligible for such reactions are exempted from moral responsibility practices, and thus denied moral agency. I claim that many typically exempted cases may qualify as moral agents by being eligible for a distinct participant stance. When we participate in moral responsibility (...)
  7. Is It Time for Robot Rights? Moral Status in Artificial Entities.Vincent C. Müller - 2021 - Ethics and Information Technology:1-9.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  8. Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military‐Industrial Complex.Isaac Taylor - 2021 - Journal of Applied Philosophy 38 (2):320-334.
  9. The Ethics of Generating Posthumans.Trevor Stammers - forthcoming - London, UK: Bloomsbury.
    The first book to explore the responsibilities owed to and the ethics of the relationships between posthumans and their creators.
  10. Moral Zombies: Why Algorithms Are Not Moral Agents.Carissa Véliz - forthcoming - AI and Society:1-11.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  11. Prolegómenos a una ética para la robótica social.Júlia Pareto Boada - 2021 - Dilemata 34:71-87.
    Social robotics has a high disruptive potential, for it expands the field of application of intelligent technology to practical contexts of a relational nature. Due to their capacity to “intersubjectively” interact with people, social robots can take over new roles in our daily activities, multiplying the ethical implications of intelligent robotics. In this paper, we offer some preliminary considerations for the ethical reflection on social robotics, so that to clarify how to correctly orient the critical-normative thinking in this arduous task. (...)
  12. A Framework for Grounding the Moral Status of Intelligent Machines.Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen [20], (...)
  13. Liability for Robots: Sidestepping the Gaps.Bartek Chomanski - forthcoming - Philosophy and Technology:1-20.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  14. Autonomous Weapon Systems: Failing the Principle of Discrimination.Ariel Guersenzvaig - 2018 - IEEE Technology and Society Magazine 37 (1):55-61.
    In this article, I explore the ethical permissibility of autonomous weapon systems (AWSs), also colloquially known as killer robots: robotic weapons systems that are able to identify and engage a target without human intervention. I introduce the subject, highlight key technical issues, and provide necessary definitions and clarifications in order to limit the scope of the discussion. I argue for a (preemptive) ban on AWSs anchored in just war theory and International Humanitarian Law (IHL), which are both briefly introduced below.
  15. On Human Genome Manipulation and 'Homo Technicus': The Legal Treatment of Non-Natural Human Subjects.Tyler L. Jaynes - forthcoming - AI and Ethics.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive (...)
  16. Pragmatism for a Digital Society: The (In)Significance of Artificial Intelligence and Neural Technology.Matthew Sample & Eric Racine - 2021 - In Orsolya Friedrich, Andreas Wolkenstein, Christoph Bublitz, Ralf J. Jox & Eric Racine (eds.), Clinical Neurotechnology meets Artificial Intelligence. Springer. pp. 81-100.
    Headlines in 2019 are inundated with claims about the “digital society,” making sweeping assertions of societal benefits and dangers caused by a range of technologies. This situation would seem an ideal motivation for ethics research, and indeed much research on this topic is published, with more every day. However, ethics researchers may feel a sense of déjà vu, as they recall decades of other heavily promoted technological platforms, from genomics and nanotechnology to machine learning. How should ethics researchers respond to (...)
  17. Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects.Nik Hynek & Anzhelika Solovyeva - 2021 - AI and Society 36 (1):79-99.
    The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with the approaching dramatic change in the nature of warfare. What becomes particularly important and evermore intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there has not been a global regime banning this technology, regulatory attempts at establishing a ban (...)
  18. Artificial intelligence and moral rights.Martin Miernicki & Irene Ng - 2021 - AI and Society 36 (1):319-329.
    Whether copyrights should exist in content generated by an artificial intelligence is a frequently discussed issue in the legal literature. Most of the discussion focuses on economic rights, whereas the relationship of artificial intelligence and moral rights remains relatively obscure. However, as moral rights traditionally aim at protecting the author’s “personal sphere”, the question whether the law should recognize such protection in the content produced by machines is pressing; this is especially true considering that artificial intelligence is continuously further developed (...)
  19. Moral Control and Ownership in AI Systems.Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems are increasingly being used in multiple applications and receiving more attention from the public (...)
  20. The Hard Problem of AI Rights.Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  21. Rights for Robots: Artificial Intelligence, Animal and Environmental Law (2020) by Joshua Gellers. [REVIEW]Kamil Mamak - 2021 - Science and Engineering Ethics 27 (3):1-4.
  22. Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare.Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.) - 2021 - New York: Oxford University Press.
    The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. In this vacuum, the public has (...)
  23. Towards a Middle-Ground Theory of Agency for Artificial Intelligence.Louis Longin - 2020 - In M. Nørskov, J. Seibt & O. Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam, Netherlands: pp. 17-26.
    The recent rise of artificial intelligence (AI) systems has led to intense discussions on their ability to achieve higher-level mental states or the ethics of their implementation. One question, which so far has been neglected in the literature, is the question of whether AI systems are capable of action. While the philosophical tradition appeals to intentional mental states, others have argued for a widely inclusive theory of agency. In this paper, I will argue for a gradual concept of agency because (...)
  24. Debate: What is Personhood in the Age of AI?David J. Gunkel & Jordan Joseph Wales - forthcoming - AI and Society.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  25. Empathy and Instrumentalization: Late Ancient Cultural Critique and the Challenge of Apparently Personal Robots.Jordan Joseph Wales - 2020 - In Marco Nørskov, Johanna Seibt & Oliver Santiago Quick (eds.), Culturally Sustainable Social Robotics: Proceedings of Robophilosophy 2020. Amsterdam: IOS Press. pp. 114-124.
    According to a tradition that we hold variously today, the relational person lives most personally in affective and cognitive empathy, whereby we enter subjective communion with another person. Near future social AIs, including social robots, will give us this experience without possessing any subjectivity of their own. They will also be consumer products, designed to be subservient instruments of their users’ satisfaction. This would seem inevitable. Yet we cannot live as personal when caught between instrumentalizing apparent persons (slaveholding) or numbly (...)
  26. Surrogates and Artificial Intelligence: Why AI Trumps Family.Ryan Hubbard & Jake Greenblum - 2020 - Science and Engineering Ethics 26 (6):3217-3227.
    The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm. Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as (...)
  27. Instrumental Robots.Sebastian Köhler - 2020 - Science and Engineering Ethics 26 (6):3121-3141.
    Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own. These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the (...)
  28. Legal Person- or Agenthood of Artificial Intelligence Technologies.Tanel Kerikmäe, Peeter Müürsepp, Henri Mart Pihl, Ondrej Hamuľák & Hovsep Kocharyan - 2020 - Acta Baltica Historiae Et Philosophiae Scientiarum 8 (2):73-92.
    Artificial intelligence is developing rapidly. There are technologies available that fulfil several tasks better than humans can and even behave like humans to some extent. Thus, the situation prompts the question whether AI should be granted legal person- and/or agenthood? There have been similar situations in history where the legal status of slaves or indigenous peoples was discussed. Still, in those historical questions, the subjects under study were always natural persons, i.e., they were living beings belonging to the species Homo (...)
  29. What Matters for Moral Status: Behavioral or Cognitive Equivalence?John Danaher - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. (...)
  30. Just an Artifact: Why Machines Are Perceived as Moral Agents.Joanna J. Bryson & Philip P. Kime - 2011 - Ijcai Proceedings-International Joint Conference on Artificial Intelligence 22:1641.
  31. Understanding A.I. — Can and Should We Empathize with Robots?Susanne Schmetkamp - 2020 - Review of Philosophy and Psychology 11 (4):881-897.
    Expanding the debate about empathy with human beings, animals, or fictional characters to include human-robot relationships, this paper proposes two different perspectives from which to assess the scope and limits of empathy with robots: the first is epistemological, while the second is normative. The epistemological approach helps us to clarify whether we can empathize with artificial intelligence or, more precisely, with social robots. The main puzzle here concerns, among other things, exactly what it is that we empathize with if robots (...)
  32. If Robots Are People, Can They Be Made for Profit? Commercial Implications of Robot Personhood.Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  33. Some Are More Equal Than Others.Marlena R. Fraune, Selma Šabanović & Eliot R. Smith - 2020 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 21 (3):303-328.
    How do people treat robot teammates compared to human opponents? Past research indicates that people favor, and behave more morally toward, ingroup than outgroup members. People also perceive that they have more moral responsibilities toward humans than nonhumans. This paper presents a 2×2×3 experimental study that placed participants into competing teams of humans and robots. We examined how people morally behave toward and perceive players depending on players’ Group Membership, Agent Type, and participant group Team Composition. Results indicated that participants (...)
  34. Designing for Human Rights in AI.Jeroen van den Hoven & Evgeni Aizenberg - 2020 - Big Data and Society 7 (2).
    In the age of Big Data, companies and governments are increasingly using algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many more aspects of our lives. Artificial intelligence systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions wrongly assumed to be accurate because they are made automatically and quantitatively. It is becoming evident that these technological developments are consequential to people’s fundamental human rights. Despite increasing attention to (...)
  35. Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and values that should be adhered to in the design and deployment of artificial intelligence. These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. (...)
  36. Mapping the Stony Road Toward Trustworthy AI: Expectations, Problems, Conundrums. Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  37. Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach. Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - forthcoming - Philosophy and Technology:1-22.
    How should automated vehicles react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out shortcomings of this strategy and have urged a focus on mundane traffic situations instead of trolley cases involving AVs. Moreover, these authors rightly point (...)
  38. Mind the Gap: Responsible Robotics and the Problem of Responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  39. What is a Human? Peter H. Kahn, Hiroshi Ishiguro, Batya Friedman, Takayuki Kanda, Nathan G. Freier, Rachel L. Severson & Jessica Miller - 2007 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 8 (3):363-390.
    In this paper, we move toward offering psychological benchmarks to measure success in building increasingly humanlike robots. By psychological benchmarks we mean categories of interaction that capture conceptually fundamental aspects of human life, specified abstractly enough to resist their identity as a mere psychological instrument, but capable of being translated into testable empirical propositions. Nine possible benchmarks are considered: autonomy, imitation, intrinsic moral value, moral accountability, privacy, reciprocity, conventionality, creativity, and authenticity of relation. Finally, we discuss how getting the right (...)
  40. Keeping the “Human in the Loop” in the Age of Artificial Intelligence: Accompanying Commentary for “Correcting the Brain?” by Rainey and Erden. Fabrice Jotterand & Clara Bosco - 2020 - Science and Engineering Ethics 26 (5):2455-2460.
    The benefits of Artificial Intelligence in medicine are unquestionable and it is unlikely that the pace of its development will slow down. From better diagnosis, prognosis, and prevention to more precise surgical procedures, AI has the potential to offer unique opportunities to enhance patient care and improve clinical practice overall. However, at this stage of AI technology development it is unclear whether it will de-humanize or re-humanize medicine. Will AI allow clinicians to spend less time on administrative tasks and technology (...)
  41. Dignity and Dissent in Humans and Non-Humans. Andreas Matthias - 2020 - Science and Engineering Ethics 26 (5):2497-2510.
    Is there a difference between human beings and beings based on artificial intelligence that would affect their ability to be subjects of dignity? This paper first examines the philosophical notion of dignity as Immanuel Kant derives it from the moral autonomy of the individual. It then asks whether animals and AI systems can claim Kantian dignity or whether there is a sharp divide between human beings, animals and AI systems regarding their ability to be subjects of dignity. How this question (...)
  42. An Ethical Framework for the Design, Development, Implementation, and Assessment of Drones Used in Public Healthcare. Dylan Cawthorne & Aimee Robbins-van Wynsberghe - 2020 - Science and Engineering Ethics 26 (5):2867-2891.
    The use of drones in public healthcare is suggested as a means to improve efficiency under constrained resources and personnel. This paper begins by framing drones in healthcare as a social experiment where ethical guidelines are needed to protect those impacted while fully realizing the benefits the technology offers. Then we propose an ethical framework to facilitate the design, development, implementation, and assessment of drones used in public healthcare. Given the healthcare context, we structure the framework according to the four (...)
  43. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-Level Expert Group on AI has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI. Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very (...)
  44. Emerging Moral Status Issues. [REVIEW] Christopher Gyngell & Julian J. Koplin - 2020 - Monash Bioethics Review 38 (2):95-104.
    Many controversies in bioethics turn on questions of moral status. Some moral status issues have received extensive bioethical attention, including those raised by abortion, embryo experimentation, and animal research. Beyond these established debates lie a less familiar set of moral status issues, many of which are tied to recent scientific breakthroughs. This review article surveys some key developments that raise moral status issues, including the development of in vitro brains, part-human animals, “synthetic” embryos, and artificial womb technologies. It introduces the (...)
  45. On the Granting of Moral Standing to Artificial Intelligence: A Pragmatic, Empirically-Informed, Desire-Based Approach. Nicholas Alexander Novelli - 2020 - Dissertation, University of Edinburgh
    Ever more complex AI technology is being introduced into society, with ever more impressive capabilities. As AI technology advances, it will become harder to tell whether machines are relevantly different from human beings in terms of the moral consideration they are owed. This is a significant practical concern. As more advanced AIs become part of our daily lives, we could face moral dilemmas where we are forced to choose between harming a human and harming one or several of these machines. Given these (...)