About this topic
Summary The moral status of artificial systems is an increasingly open question, owing to the growing ubiquity of ever more intelligent machine systems. Questions range from those about the "smart" systems controlling traffic lights, missile systems, and vote counting, to questions about the degrees of responsibility due to semi-autonomous drones and their pilots given operating conditions at either end of the joystick, and finally to questions about the relative moral status of "fully autonomous" artificial agents, the "Terminator"s and "Wall-E"s. Prior to the rise of intelligent machines, the issue may have seemed moot. Kant had made the status of anything that is not an end in itself very clear: it has a price, and you can buy and sell it. If its manufacture runs contrary to the categorical imperative, then it is immoral; for example, there are no semi-autonomous flying missile launchers in the kingdom of ends, so no Kantian moral agent could ever will their creation. Even earlier, after using a number of physical models to describe the dynamics of cognition in the Theaetetus, Socrates tells us that some things "have infinity within them" - i.e. cannot be ascribed a limited value - and others do not. As machines are trained and learn to exemplify, and then embody, capacities typically reserved for human beings (Kant, for example, famously writes that only human beings are known to be answerable to moral responsibility), questions of robot psychology and motivation, of autonomy as a capacity for self-determination, and thus of political and moral status under conventional law become important. To date, established conventions have typically been taken as given, as engineers have focused mainly on delivering non-autonomous machines and other artificial systems as tools for industry. However, even limited applications, such as artificial companions and pets, have raised interesting new issues.
For example, can a human being fall in love with a computer program of adequate complexity? What about a robot sex industry? Artificial nurses? If an artificial nurse refuses a human doctor's order to remove life support from a child because the child's parents cannot pay the medical bills, is the nurse a hero, or is it malfunctioning? Closer to the present, questions about expert systems and the automation of transport, manufacturing, and logistics raise important moral questions about the role of artificial systems in the displacement of human workers and in public safety, as well as questions concerning the redirection of crucial natural resources to the maintenance of centrally controlled artificial systems at the expense of local human systems. Issues such as these make the relative status of widely distributed artificial systems an important area of discourse. This is especially true of intelligent machine technologies - AI. The recent use of drones in surveillance and wars of aggression, and the relationship of the research community to these end-user activities, raise the same ethical questions that faced the scientists who developed the nuclear bomb in the mid-20th century. Thus, we can see that questions about the moral status of artificial systems - especially "intelligent" and "intelligence" systems - arise from the perspectives of the potential product, of the engineer ultimately responsible (cf. the IEEE code of ethics for engineers), and of the "end-user" left to live in terms of the artificial systems so established. Finally, given the diverse fields confronting similar issues as increasingly intelligent machines are integrated into various aspects of daily life, discourse on the relative moral status of artificial systems promises to be an increasingly integrative one as well.
  1. Is Simulation a Substitute for Experimentation? Isabelle Peschard - manuscript
    It is sometimes said that simulation can serve as epistemic substitute for experimentation. Such a claim might be suggested by the fast-spreading use of computer simulation to investigate phenomena not accessible to experimentation (in astrophysics, ecology, economics, climatology, etc.). But what does that mean? The paper starts with a clarification of the terms of the issue and then focuses on two powerful arguments for the view that simulation and experimentation are ‘epistemically on a par’. One is based on the claim (...)
  2. Value Sensitive Design to Achieve the UN SDGs with AI: A Case of Elderly Care Robots. Steven Umbrello, Marianna Capasso, Maurizio Balistreri, Alberto Pirni & Federica Merenda - manuscript
    The increasing automation and ubiquity of robotics deployed within the field of care boasts promising advantages. However, challenging ethical issues arise also as a consequence. This paper takes care robots for the elderly as the subject of analysis, building on previous literature in the domain of the ethics and design of care robots. It takes the value sensitive design (VSD) approach to technology design and extends its application to care robots by not only integrating the values of care, but also (...)
  3. Supporting Human Autonomy in AI Systems. Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  4. Applying a Principle of Explicability to AI Research in Africa: Should We Do It? Mary Carman & Benjamin Rosman - forthcoming - Ethics and Information Technology.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  5. If Robots Are People, Can They Be Made for Profit? Commercial Implications of Robot Personhood. Bartek Chomanski - forthcoming - AI and Ethics.
    It could become technologically possible to build artificial agents instantiating whatever properties are sufficient for personhood. It is also possible, if not likely, that such beings could be built for commercial purposes. This paper asks whether such commercialization can be handled in a way that is not morally reprehensible, and answers in the affirmative. There exists a morally acceptable institutional framework that could allow for building artificial persons for commercial gain. The paper first considers the minimal ethical requirements that any (...)
  6. Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - forthcoming - Philosophy and Technology:1-20.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  7. The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  8. What Matters for Moral Status: Behavioral or Cognitive Equivalence? John Danaher - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Henry Shevlin’s paper—“How could we know when a robot was a moral patient?” – argues that we should recognize robots and artificial intelligence (AI) as psychological moral patients if they are cognitively equivalent to other beings that we already recognize as psychological moral patients (i.e., humans and, at least some, animals). In defending this cognitive equivalence strategy, Shevlin draws inspiration from the “behavioral equivalence” strategy that I have defended in previous work but argues that it is flawed in crucial respects. (...)
  9. Freedom in an Age of Algocracy. John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  10. The Extended Corporate Mind: When Corporations Use AI to Break the Law. Mihailis Diamantis - forthcoming - North Carolina Law Review.
    Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. (...)
  11. The Oxford Handbook of Ethics of AI. Markus Dubber, Frank Pasquale & Sunit Das (eds.) - forthcoming - Oxford University Press.
    This 44-chapter volume tackles a quickly-evolving field of inquiry, mapping the existing discourse as part of a general attempt to place current developments in historical context; at the same time, breaking new ground in taking on novel subjects and pursuing fresh approaches. The term "A.I." is used to refer to a broad range of phenomena, from machine learning and data mining to artificial general intelligence. The recent advent of more sophisticated AI systems, which function with partial or full autonomy and (...)
  12. Debate: What is Personhood in the Age of AI? David J. Gunkel & Jordan Joseph Wales - forthcoming - AI and Society.
    In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, (...)
  13. Ethics for Artificial Intellects. John Storrs Hall - forthcoming - Nanoethics: The Ethical and Social Implications of Nanotechnology.
  14. On Human Genome Manipulation and 'Homo Technicus': The Legal Treatment of Non-Natural Human Subjects. Tyler L. Jaynes - forthcoming - AI and Ethics.
    Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive (...)
  15. A Dilemma for Moral Deliberation in AI in Advance. Ryan Jenkins & Duncan Purves - forthcoming - International Journal of Applied Philosophy.
    Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decisionmaking to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision making apparatus for machines. Their concerns often center around the plausible claim that robots will lack many of the mental capacities that are (...)
  16. How Do Technological Artefacts Embody Moral Values? Michael Klenk - forthcoming - Philosophy and Technology.
    According to some philosophers of technology, technology embodies moral values in virtue of its functional properties and the intentions of its designers. But this paper shows that such an account makes the values supposedly embedded in technology epistemically opaque and that it does not allow for values to change. Therefore, to overcome these shortcomings, the paper introduces the novel Affordance Account of Value Embedding as a superior alternative. Accordingly, artefacts bear affordances, that is, artefacts make certain actions likelier given the (...)
  17. Safety Requirements Vs. Crashing Ethically: What Matters Most for Policies on Autonomous Vehicles. Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
  18. Ethics of Artificial Intelligence. Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  19. Mapping the Stony Road Toward Trustworthy AI: Expectations, Problems, Conundrums. Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI regulation. (...)
  20. Do Automated Vehicles Face Moral Dilemmas? A Plea for a Political Approach. Javier Rodríguez-Alcázar, Lilian Bermejo-Luque & Alberto Molina-Pérez - forthcoming - Philosophy and Technology:1-22.
    How should automated vehicles react in emergency circumstances? Most research projects and scientific literature deal with this question from a moral perspective. In particular, it is customary to treat emergencies involving AVs as instances of moral dilemmas and to use the trolley problem as a framework to address such alleged dilemmas. Some critics have pointed out some shortcomings of this strategy and have urged to focus on mundane traffic situations instead of trolley cases involving AVs. Besides, these authors rightly point (...)
  21. The Ethics of Generating Posthumans. Trevor Stammers - forthcoming - London, UK: Bloomsbury.
    The first book to explore the responsibilities owed to and the ethics of the relationships between posthumans and their creators.
  22. The Artificial View: Toward a Non-Anthropocentric Account of Moral Patiency. Fabio Tollon - forthcoming - Ethics and Information Technology.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  23. Moral Zombies: Why Algorithms Are Not Moral Agents. Carissa Véliz - forthcoming - AI and Society:1-11.
    In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects but for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking (...)
  24. AI Extenders and the Ethics of Mental Health. Karina Vold & Jose Hernandez-Orallo - forthcoming - In Marcello Ienca & Fabrice Jotterand (eds.), Ethics of Artificial Intelligence in Brain and Mental Health.
    The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this (...)
  25. Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots. Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. Bishkek: International Alatoo University Kyrgyzstan. pp. 25-28.
    With the advancement of artificial intelligence and humanoid robotics and an ongoing debate between human rights and rule of law, moral philosophers, legal and political scientists are facing difficulties to answer the questions like, “Do humanoid robots have same rights as of humans and if these rights are superior to human rights or not and why?” This paper argues that the sustainability of human rights will be under question because, in near future the scientists (considerably the most rational people) will (...)
  26. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - forthcoming - Philosophy and Technology:1-24.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. The Explainable AI research program aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory contributions. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of (...)
  27. The Hard Problem of AI Rights. Adam J. Andreotta - 2021 - AI and Society 36 (1):19-32.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  28. Lethal Autonomous Weapons: Re-Examining the Law and Ethics of Robotic Warfare. Jai Galliott, Duncan MacIntosh & Jens David Ohlin (eds.) - 2021 - New York: Oxford University Press.
    The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. In this vacuum, the public has (...)
  29. Moral Control and Ownership in AI Systems. Raul Gonzalez Fabre, Javier Camacho Ibáñez & Pedro Tejedor Escobar - 2021 - AI and Society 36 (1):289-303.
    AI systems are bringing an augmentation of human capabilities to shape the world. They may also drag a replacement of human conscience in large chunks of life. AI systems can be designed to leave moral control in human hands, to obstruct or diminish that moral control, or even to prevent it, replacing human morality with pre-packaged or developed ‘solutions’ by the ‘intelligent’ machine itself. Artificial Intelligent systems are increasingly being used in multiple applications and receiving more attention from the public (...)
  30. Operations of Power in Autonomous Weapon Systems: Ethical Conditions and Socio-Political Prospects. Nik Hynek & Anzhelika Solovyeva - 2021 - AI and Society 36 (1):79-99.
    The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with the approaching dramatic change in the nature of warfare. What becomes particularly important and evermore intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there has not been a global regime banning this technology, regulatory attempts at establishing a ban (...)
  31. Is It Time for Robot Rights? Moral Status in Artificial Entities. Vincent C. Müller - 2021 - Ethics and Information Technology:1-9.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we find (...)
  32. Pragmatism for a Digital Society: The (In)Significance of Artificial Intelligence and Neural Technology. Matthew Sample & Eric Racine - 2021 - In Orsolya Friedrich, Andreas Wolkenstein, Christoph Bublitz, Ralf J. Jox & Eric Racine (eds.), Clinical Neurotechnology meets Artificial Intelligence. Springer. pp. 81-100.
    Headlines in 2019 are inundated with claims about the “digital society,” making sweeping assertions of societal benefits and dangers caused by a range of technologies. This situation would seem an ideal motivation for ethics research, and indeed much research on this topic is published, with more every day. However, ethics researchers may feel a sense of déjà vu, as they recall decades of other heavily promoted technological platforms, from genomics and nanotechnology to machine learning. How should ethics researchers respond to (...)
  33. Who Is Responsible for Killer Robots? Autonomous Weapons, Group Agency, and the Military‐Industrial Complex. Isaac Taylor - 2021 - Journal of Applied Philosophy 38 (2):320-334.
  34. AI Assistants and the Paradox of Internal Automaticity. William A. Bauer & Veljko Dubljević - 2020 - Neuroethics 13 (3):303-310.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)
  35. AI As a Moral Right-Holder. Joseph Bowen & John Basl - 2020 - In Markus Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of AI. New York: Oxford University Press.
    This chapter evaluates whether AI systems are or will be rights-holders, explaining the conditions under which people should recognize AI systems as rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. (...)
  36. An Ethical Framework for the Design, Development, Implementation, and Assessment of Drones Used in Public Healthcare. Dylan Cawthorne & Aimee Robbins-van Wynsberghe - 2020 - Science and Engineering Ethics 26 (5):2867-2891.
    The use of drones in public healthcare is suggested as a means to improve efficiency under constrained resources and personnel. This paper begins by framing drones in healthcare as a social experiment where ethical guidelines are needed to protect those impacted while fully realizing the benefits the technology offers. Then we propose an ethical framework to facilitate the design, development, implementation, and assessment of drones used in public healthcare. Given the healthcare context, we structure the framework according to the four (...)
  37. Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  38. Welcoming Robots Into the Moral Circle: A Defence of Ethical Behaviourism.John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need (...)
  39. Community Perspectives on the Benefits and Risks of Technologically Enhanced Communicable Disease Surveillance Systems: A Report on Four Community Juries.Chris Degeling, Stacy M. Carter, Antoine M. van Oijen, Jeremy McAnulty, Vitali Sintchenko, Annette Braunack-Mayer, Trent Yarwood, Jane Johnson & Gwendolyn L. Gilbert - 2020 - BMC Medical Ethics 21 (1):1-14.
    Background Outbreaks of infectious disease cause serious and costly health and social problems. Two new technologies – pathogen whole genome sequencing and Big Data analytics – promise to improve our capacity to detect and control outbreaks earlier, saving lives and resources. However, routinely using these technologies to capture more detailed and specific personal information could be perceived as intrusive and a threat to privacy. Method Four community juries were convened in two demographically different Sydney municipalities and two regional cities in (...)
  40. Kevin Macnish: The Ethics of Surveillance: An Introduction: Routledge, London and New York, 2018, ISBN 978-1138643796, $45.95.Tony Doyle - 2020 - Ethics and Information Technology 22 (1):39-42.
  41. In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions.Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  42. Some Are More Equal Than Others.Marlena R. Fraune, Selma Šabanović & Eliot R. Smith - 2020 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 21 (3):303-328.
    How do people treat robot teammates compared to human opponents? Past research indicates that people favor, and behave more morally toward, ingroup than outgroup members. People also perceive that they have more moral responsibilities toward humans than nonhumans. This paper presents a 2×2×3 experimental study that placed participants into competing teams of humans and robots. We examined how people morally behave toward and perceive players depending on players’ Group Membership, Agent Type, and participant group Team Composition. Results indicated that participants (...)
  43. What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  44. Mind the Gap: Responsible Robotics and the Problem of Responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  45. Emerging Moral Status Issues. [REVIEW]Christopher Gyngell & Julian J. Koplin - 2020 - Monash Bioethics Review 38 (2):95-104.
    Many controversies in bioethics turn on questions of moral status. Some moral status issues have received extensive bioethical attention, including those raised by abortion, embryo experimentation, and animal research. Beyond these established debates lie a less familiar set of moral status issues, many of which are tied to recent scientific breakthroughs. This review article surveys some key developments that raise moral status issues, including the development of in vitro brains, part-human animals, “synthetic” embryos, and artificial womb technologies. It introduces the (...)
  46. The Immoral Machine.John Harris - 2020 - Cambridge Quarterly of Healthcare Ethics 29 (1):71-79.
    In a recent paper in Nature1 entitled The Moral Machine Experiment, Edmond Awad et al. make a number of breathtakingly reckless assumptions, both about the decisionmaking capacities of current so-called “autonomous vehicles” and about the nature of morality and the law. Accepting their bizarre premise that the holy grail is to find out how to obtain cognizance of public morality and then program driverless vehicles accordingly, the following are the four steps to the Moral Machinists’ argument:1)Find out what “public morality” (...)
  47. Artificial Intelligence in Extended Minds: Intrapersonal Diffusion of Responsibility and Legal Multiple Personality.Jan-Hendrik Heinrichs - 2020 - In Technology, Anthropology, and Dimensions of Responsibility. Stuttgart, Germany: pp. 159-176.
    Can an artificially intelligent tool be a part of a human’s extended mind? There are two opposing streams of thought in this regard. One of them can be identified as the externalist perspective in the philosophy of mind, which tries to explain complex states and processes of an individual as co-constituted by elements of the individual’s material and social environment. The other strand is normative and explanatory atomism which insists that what is to be explained and evaluated is the behaviour (...)
  48. Decentered Ethics in the Machine Era and Guidance for AI Regulation.Christian Hugo Hoffmann & Benjamin Hahn - 2020 - AI and Society 35 (3):635-644.
    Recent advancements in AI have prompted a large number of AI ethics guidelines published by governments and nonprofits. While many of these papers propose concrete or seemingly applicable ideas, few philosophically sound proposals are made. In particular, we observe that the line of questioning has often not been examined critically and underlying conceptual problems not always dealt with at the root. In this paper, we investigate the nature of ethical AI systems and what their moral status might be by first (...)
  49. Surrogates and Artificial Intelligence: Why AI Trumps Family.Ryan Hubbard & Jake Greenblum - 2020 - Science and Engineering Ethics 26 (6):3217-3227.
    The increasing accuracy of algorithms to predict values and preferences raises the possibility that artificial intelligence technology will be able to serve as a surrogate decision-maker for incapacitated patients. Following Camillo Lamanna and Lauren Byrne, we call this technology the autonomy algorithm. Such an algorithm would mine medical research, health records, and social media data to predict patient treatment preferences. The possibility of developing the AA raises the ethical question of whether the AA or a relative ought to serve as (...)
  50. Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule.Tyler L. Jaynes - 2020 - AI and Society 35 (2):343-354.
    The concept of artificial intelligence is not new nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Where there are several decades’ worth of writing on the concept of the legal status of computational artificial artefacts in the USA and elsewhere, (...)