About this topic
Summary: The moral status of artificial systems is an increasingly open question, owing to the growing ubiquity of ever more intelligent machine systems. Questions range from those about the "smart" systems controlling traffic lights, missile systems, and vote counting, to questions about the degrees of responsibility due to semi-autonomous drones and their pilots given operating conditions at either end of the joystick, and finally to questions about the relative moral status of "fully autonomous" artificial agents, the "Terminators" and "Wall-Es". Prior to the rise of intelligent machines, the issue may have seemed moot. Kant had made the status of anything that is not an end in itself very clear: it has a price, and you can buy and sell it. If its manufacture runs contrary to the categorical imperative, then it is immoral; for example, there are no semi-autonomous flying missile launchers in the kingdom of ends, so no Kantian moral agent could ever will their creation. Even earlier, after using a number of physical models to describe the dynamics of cognition in the Theaetetus, Socrates tells us that some things "have infinity within them" - that is, cannot be ascribed a limited value - and others do not. As machines exemplifying, and then embodying, capacities typically reserved for human beings (Kant, famously, writes that we know only human beings to be able to answer to moral responsibility) are trained and learn, questions of robot psychology and motivation, of autonomy as a capacity for self-determination, and so of political and moral status under conventional law become important. To date, established conventions have typically been taken as given, as engineers have focused mainly on delivering non-autonomous machines and other artificial systems as tools for industry. However, even with limited applications such as artificial companions and pets, interesting new issues have emerged.
For example, can a human being fall in love with a computer program of adequate complexity? What about a robot sex industry? Artificial nurses? If an artificial nurse refuses a human doctor's order to remove life support from a child because the parents cannot pay the medical bills, is the nurse a hero, or is it malfunctioning? Closer to the present, questions about expert systems and the automation of transport, manufacturing, and logistics raise important moral questions about the role of artificial systems in the displacement of human workers and in public safety, as well as questions concerning the redirection of crucial natural resources to the maintenance of centrally controlled artificial systems at the expense of local human systems. Issues such as these make the relative status of widely distributed artificial systems an important area of discourse. This is especially true of intelligent machine technologies - AI. The recent use of drones in surveillance and in wars of aggression, and the relationship of the research community to these end-user activities, raises the same ethical questions that faced the scientists who developed the nuclear bomb in the mid-20th century. Thus, questions about the moral status of artificial systems - especially "intelligent" and "intelligence" systems - arise from the perspectives of the potential product, of the engineer ultimately responsible (cf. the IEEE code of ethics for engineers), and of the "end-user" left to live in terms of the artificial systems so established. Finally, given the diverse fields confronting similar issues as increasingly intelligent machines are integrated into various aspects of daily life, discourse on the relative moral status of artificial systems promises to be an increasingly integrative one as well.
346 found
  1. added 2020-05-30
    The Hard Problem of AI Rights. Adam J. Andreotta - forthcoming - AI and Society:1-14.
    In the past few years, the subject of AI rights—the thesis that AIs, robots, and other artefacts (hereafter, simply ‘AIs’) ought to be included in the sphere of moral concern—has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress—namely, a lack of a solution to the ‘Hard Problem’ of consciousness: the problem of explaining why certain brain states give rise (...)
  2. added 2020-05-04
    AI Assistants and the Paradox of Internal Automaticity. William A. Bauer & Veljko Dubljević - forthcoming - Neuroethics:1-8.
    What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes (...)
  3. added 2020-04-24
    Ethics of Artificial Intelligence. Vincent C. Müller - forthcoming - In Anthony Elliott (ed.), The Routledge social science handbook of AI. London: Routledge. pp. 1-20.
    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve and how we can control these. - After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and (...)
  4. added 2020-03-11
    Artificial Moral Agents: A Survey of the Current Status. [REVIEW] José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)
  5. added 2020-03-08
    Freedom in an Age of Algocracy. John Danaher - forthcoming - In Shannon Vallor (ed.), Oxford Handbook of Philosophy of Technology. Oxford, UK: Oxford University Press.
    There is a growing sense of unease around algorithmic modes of governance ('algocracies') and their impact on freedom. Contrary to the emancipatory utopianism of digital enthusiasts, many now fear that the rise of algocracies will undermine our freedom. Nevertheless, there has been some struggle to explain exactly how this will happen. This chapter tries to address the shortcomings in the existing discussion by arguing for a broader conception/understanding of freedom as well as a broader conception/understanding of algocracy. Broadening the focus (...)
  6. added 2020-02-25
    A Metacognitive Approach to Trust and a Case Study: Artificial Agency. Ioan Muntean - 2019 - Computer Ethics - Philosophical Enquiry (CEPE) Proceedings.
    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (the ‘trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in the line of a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two factors: (i) confidence in the decision and (ii) model uncertainty. To trust (...)
  7. added 2020-02-07
    Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  8. added 2020-01-31
    Investigation Into Ethical Issues of Intelligent Systems. Marziyah Davoodabadi & Zahra Khazaei - 2008 - Journal of Philosophical Theological Research 10 (37):95-120.
    Despite the undeniable advantages and surprising applications of intelligent and computer systems in training and industry, as well as in the cultures of different countries, many ethical issues concern them. Presenting a definition of artificial intelligence and intelligent systems, the research paper deals with the shared ethical issues of intelligent systems, computer systems, and the global network; it then concentrates on the most important ethical issues of two types of intelligent systems, i.e. data-analysis system and (...)
  9. added 2020-01-22
    Gods of Transhumanism. Alex V. Halapsis - 2019 - Anthropological Measurements of Philosophical Research 16:78-90.
    Purpose of the article is to identify the religious factor in the teaching of transhumanism, to determine its role in the ideology of this flow of thought and to identify the possible limits of technology interference in human nature. Theoretical basis. The methodological basis of the article is the idea of transhumanism. Originality. In the foreseeable future, robots will be able to pass the Turing test, become “electronic personalities” and gain political rights, although the question of the possibility of machine (...)
  10. added 2020-01-22
    Artificiële Intelligentie En Normatieve Ethiek: Wie is Verantwoordelijk Voor de Misdaden van LAWS? Lode Lauwaert - 2019 - Algemeen Nederlands Tijdschrift voor Wijsbegeerte 111 (4):585-603.
    Artificial intelligence and normative ethics: Who is responsible for the crimes of LAWS? In his text “Killer Robots”, Robert Sparrow holds that killer robots should be forbidden. This conclusion is based on two premises. The first is that attributive responsibility is a necessary condition for admitting an action; the second premise is that the use of killer robots is accompanied by a responsibility gap. Although there are good reasons to conclude that killer robots should be banned, the article shows that Sparrow's (...)
  11. added 2020-01-22
    Critical Analysis of the “No Relevant Difference” Argument in Defense of the Rights of Artificial Intelligences. Ali Reza Mazarian - 2019 - Journal of Philosophical Theological Research 21 (79):165-190.
    There are many new philosophical queries about the moral status and rights of artificial intelligences; questions such as whether such entities can be considered morally responsible entities with special rights. Recently, the contemporary philosopher of mind Eric Schwitzgebel has tried to defend the possibility of equal rights of AIs and human beings, by designing a new argument. In this paper, after an introduction, the author reviews and analyzes the main argument and (...)
  12. added 2020-01-22
    Automated Vehicles and Transportation Justice. Shane Epting - 2019 - Philosophy and Technology 32 (3):389-403.
    Despite numerous ethical examinations of automated vehicles, philosophers have neglected to address how these technologies will affect vulnerable people. To account for this lacuna, researchers must analyze how driverless cars could hinder or help social justice. In addition to thinking through these aspects, scholars must also pay attention to the extensive moral dimensions of automated vehicles, including how they will affect the public, nonhumans, future generations, and culturally significant artifacts. If planners and engineers undertake this task, then they will have (...)
  13. added 2020-01-22
    Man as ‘Aggregate of Data’. Sjoukje van der Meulen & Max Bruinsma - 2019 - AI and Society 34 (2):343-354.
    Since the emergence of the innovative field of artificial intelligence in the 1960s, the late Hubert Dreyfus insisted on the ontological distinction between man and machine, human and artificial intelligence. In the different editions of his classic and influential book What computers can’t do, he posits that an algorithmic machine can never fully simulate the complex functioning of the human mind—not now, nor in the future. Dreyfus’ categorical distinctions between man and machine are still relevant today, but their relation has (...)
  14. added 2020-01-22
    Four Key Questions in Philosophy of Technology. Alexander V. Mikhailovski - 2019 - Epistemology and Philosophy of Science 56 (3):225-233.
    This article discusses Hans Poser’s new book “Homo Creator”, which aims to open the philosophy of technology to ontological, epistemological and ethical problems. The keynote of the book is the conviction that technical creativity forms the core of engineering. Modal concepts such as possibility, necessity, contingency and reality are used in a systematic way to characterize technology. Technological artifacts essentially depend on a special type of interpretation. The central ontological problem consists in the fact that technology is based on (...)
  15. added 2020-01-22
    Reviewing Tests for Machine Consciousness. A. Elamrani & R. V. Yampolskiy - 2019 - Journal of Consciousness Studies 26 (5-6):35-64.
    The accelerating advances in the fields of neuroscience, artificial intelligence, and robotics have been garnering interest and raising new philosophical, ethical, or practical questions that depend on whether or not there may exist a scientific method of probing consciousness in machines. This paper provides an analytic review of the existing tests for machine consciousness proposed in the academic literature over the past decade, and an overview of the diverse scientific communities involved in this enterprise. The tests put forward in their (...)
  16. added 2020-01-22
    The Pharmacological Significance of Mechanical Intelligence and Artificial Stupidity. Adrian Mróz - 2019 - Kultura i Historia 36 (2):17-40.
    By drawing on the philosophy of Bernard Stiegler, the phenomena of mechanical (a.k.a. artificial, digital, or electronic) intelligence is explored in terms of its real significance as an ever-repeating threat of the reemergence of stupidity (as cowardice), which can be transformed into knowledge (pharmacological analysis of poisons and remedies) by practices of care, through the outlook of what researchers describe equivocally as “artificial stupidity”, which has been identified as a new direction in the future of computer science and machine problem (...)
  17. added 2020-01-22
    The Picture of Artificial Intelligence and the Secularization of Thought. King-Ho Leung - 2019 - Political Theology 20 (6):457-471.
    This article offers a critical interpretation of Artificial Intelligence (AI) as a philosophical notion which exemplifies a secular conception of thinking. One way in which AI notably differs from the conventional understanding of “thinking” is that, according to AI, “intelligence” or “thinking” does not necessarily require “life” as a precondition: that it is possible to have “thinking without life.” Building on Charles Taylor’s critical account of secularity as well as Hubert Dreyfus’ influential critique of AI, this article offers a theological (...)
  18. added 2020-01-22
    A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” :689–707, 2018). There (...)
  19. added 2020-01-22
    Robots Like Me: Challenges and Ethical Issues in Aged Care. Ipke Wachsmuth - 2018 - Frontiers in Psychology 9 (432).
    This paper addresses the issue of whether robots could substitute for human care, given the challenges in aged care induced by the demographic change. The use of robots to provide emotional care has raised ethical concerns, e.g., that people may be deceived and deprived of dignity. In this paper it is argued that these concerns might be mitigated and that it may be sufficient for robots to take part in caring when they behave *as if* they care.
  20. added 2020-01-22
    Superintelligence as Moral Philosopher. J. Corabi - 2017 - Journal of Consciousness Studies 24 (5-6):128-149.
    Non-biological superintelligent artificial minds are scary things. Some theorists believe that if they came to exist, they might easily destroy human civilization, even if destroying human civilization was not a high priority for them. Consequently, philosophers are increasingly worried about the future of human beings and much of the rest of the biological world in the face of the potential development of superintelligent AI. This paper explores whether the increased attention philosophers have paid to the dangers of superintelligent AI is (...)
  21. added 2020-01-22
    Of Animals, Robots and Men. Christine Tiefensee & Johannes Marx - 2015 - Historical Social Research 40:70-91.
    Domesticated animals need to be treated as fellow citizens: only if we conceive of domesticated animals as full members of our political communities can we do justice to their moral standing—or so Sue Donaldson and Will Kymlicka argue in their widely discussed book Zoopolis. In this contribution, we pursue two objectives. Firstly, we reject Donaldson and Kymlicka’s appeal for animal citizenship. We do so by submitting that instead of paying due heed to their moral status, regarding animals as citizens misinterprets (...)
  22. added 2020-01-22
    Moral Agency, Moral Responsibility, and Artifacts: What Existing Artifacts Fail to Achieve, and Why They, Nevertheless, Can Make Moral Claims Upon Us. Joel Parthemore & Blay Whitby - 2014 - International Journal of Machine Consciousness 6 (2):141-161.
    This paper follows directly from an earlier paper where we discussed the requirements for an artifact to be a moral agent and concluded that the artifactual question is ultimately a red herring. As...
  23. added 2020-01-22
    Robotic Pets in the Lives of Preschool Children. Peter H. Kahn, Batya Friedman, Deanne R. Pérez-Granados & Nathan G. Freier - 2006 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 7 (3):405-436.
    This study examined preschool children’s reasoning about and behavioral interactions with one of the most advanced robotic pets currently on the retail market, Sony’s robotic dog AIBO. Eighty children, equally divided between two age groups, 34–50 months and 58–74 months, participated in individual sessions with two artifacts: AIBO and a stuffed dog. Evaluation and justification results showed similarities in children’s reasoning across artifacts. In contrast, children engaged more often in apprehensive behavior and attempts at reciprocity with AIBO, and more often (...)
  24. added 2020-01-22
    Caring Relationships with the Natural and Artificial Environments. Terri Field - 1995 - Environmental Ethics 17 (3):307-320.
    A relational-self theory claims that one’s self is constituted by one’s relationships. The type of ethics that is said to arise from this concept of self is often called an ethics of care, whereby the focus of ethical deliberation is on preserving and nurturing those relationships. Some environmental philosophers advocating a relational-self theory tend to assume that the particular relationships that constitute the self will prioritize the natural world. I question this assumption by introducing the problem of artifact relationships. It (...)
  25. added 2020-01-21
    Developing Artificial Agents Worthy of Trust: “Would You Buy a Used Car From This Artificial Agent?”. [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  26. added 2020-01-07
    Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - forthcoming - Philosophy and Technology:1-24.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. The Explainable AI research program aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory contributions. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of (...)
  27. added 2020-01-07
    The Philosophical Case for Robot Friendship. John Danaher - forthcoming - Journal of Posthuman Studies.
    Friendship is an important part of the good life. While many roboticists are eager to create friend-like robots, many philosophers and ethicists are concerned. They argue that robots cannot really be our friends. Robots can only fake the emotional and behavioural cues we associate with friendship. Consequently, we should resist the drive to create robot friends. In this article, I argue that the philosophical critics are wrong. Using the classic virtue-ideal of friendship, I argue that robots can plausibly be considered (...)
  28. added 2020-01-07
    Supporting Human Autonomy in AI Systems. Rafael Calvo, Dorian Peters, Karina Vold & Richard M. Ryan - forthcoming - In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach.
    Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are (...)
  29. added 2020-01-07
    Autonomous Vehicles, Trolley Problems, and the Law. Stephen S. Wu - 2020 - Ethics and Information Technology 22 (1):1-13.
    Autonomous vehicles have the potential to save tens of thousands of lives, but legal and social barriers may delay or even deter manufacturers from offering fully automated vehicles and thereby cost lives that otherwise could be saved. Moral philosophers use “thought experiments” to teach us about what ethics might say about the ethical behavior of AVs. If a manufacturer designing an AV decided to make what it believes is an ethical choice to save a large group of lives by steering (...)
  30. added 2019-12-26
    Posthuman Personhood. Daryl J. Wennemann - 2013 - UPA.
    Wennemann argues that the traditional concept of personhood may be fruitfully applied to the ethical challenge we face in a posthuman age. The book posits that biologically non-human persons like robots, computers, or aliens are a theoretical possibility but that we do not know if they are a real possibility.
  31. added 2019-12-19
    The Motivations and Risks of Machine Ethics. Stephen Cave, Rune Nyrup, Karina Vold & Adrian Weller - 2019 - Proceedings of the IEEE 107 (3):562-574.
    Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the computational limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to (...)
  32. added 2019-10-20
    Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots. Minoru Asada - 2019 - Philosophies 4 (3):38.
    In this paper, a working hypothesis is proposed that a nervous system for pain sensation is a key component for shaping the conscious minds of robots. In this article, this hypothesis is argued from several viewpoints towards its verification. A developmental process of empathy, morality, and ethics based on the mirror neuron system that promotes the emergence of the concept of self scaffolds the emergence of artificial minds. Firstly, an outline of the ideological background on issues of the mind in (...)
  33. added 2019-10-20
    The Future Impact of Artificial Intelligence on Humans and Human Rights. Steven Livingston & Mathias Risse - 2019 - Ethics and International Affairs 33 (2):141-158.
  34. added 2019-10-20
    Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures. Daniel Susser - 2019 - AIES: AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of (...)
  35. added 2019-10-20
    Artificial Intelligence and Environmental Ethics: Moral, Legal Right of Artificial Intelligence. Kim Myungsik - 2018 - Environmental Philosophy 25:5-30.
  36. added 2019-10-20
    Artificial Intelligence and the Ethics of Human Extinction. T. Lorenc - 2015 - Journal of Consciousness Studies 22 (9-10):194-214.
    The potential long-term benefits and risks of technological progress in artificial intelligence and related fields are substantial. The risks include total human extinction as a result of unfriendly superintelligent AI, while the benefits include the liberation of human existence from death and suffering through mind uploading. One approach to mitigating the risk would be to engineer ethical principles into AI devices. However, this may not be possible, due to the nature of ethical agency. Even if it is possible, these principles, (...)
  37. added 2019-10-11
    Social Robots, Fiction, and Sentimentality. Raffaele Rodogno - 2016 - Ethics and Information Technology 18 (4):257-268.
    I examine the nature of human-robot pet relations that appear to involve genuine affective responses on behalf of humans towards entities, such as robot pets, that, on the face of it, do not seem to be deserving of these responses. Such relations have often been thought to involve a certain degree of sentimentality, the morality of which has in turn been the object of critical attention. In this paper, I dispel the claim that sentimentality is involved in this type of (...)
  38. added 2019-10-08
    Hohfeld in Cyberspace and Other Applications of Normative Reasoning in Agent Technology. Christen Krogh & Henning Herrestad - 1999 - Artificial Intelligence and Law 7 (1):81-96.
    Two areas of importance for agents and multiagent systems are investigated: design of agent programming languages, and design of agent communication languages. The paper contributes in the above-mentioned areas by demonstrating improved or novel applications for deontic logic and normative reasoning. Examples are taken from computer-supported cooperative work, and electronic commerce.
  39. added 2019-10-04
    Kevin Macnish: The Ethics of Surveillance: An Introduction: Routledge, London and New York, 2018, ISBN 978-1138643796, $45.95. Tony Doyle - 2020 - Ethics and Information Technology 22 (1):39-42.
  40. added 2019-10-04
    When AI Meets PC: Exploring the Implications of Workplace Social Robots and a Human-Robot Psychological Contract. Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology 2019.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  41. added 2019-10-04
    The Disciplinary Power of Predictive Algorithms: A Foucauldian Perspective. Paul B. de Laat - 2019 - Ethics and Information Technology 21 (4):319-329.
    Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies many of its practitioners assert that “governance by discipline” has given way to “governance by risk”. The individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance with (...)
  42. added 2019-10-04
    No Such Thing as Killer Robots. Michael Robillard - 2018 - Journal of Applied Philosophy 35 (4):705-717.
    Two recent strands of argument contend that fully autonomous weapon systems (AWS) are pro tanto impermissible. On Sparrow's view, AWS are impermissible because they generate a morally problematic ‘responsibility gap’. According to Purves et al., AWS are impermissible because moral reasoning is not codifiable and because AWS are incapable of acting for the ‘right’ reasons. I contend that these arguments are flawed and that AWS are not morally problematic in principle. Specifically, I contend that these arguments presuppose (...)
  43. added 2019-10-04
    Society-in-the-Loop: Programming the Algorithmic Social Contract. Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.
    Recent rapid advances in Artificial Intelligence and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve (...)
  44. added 2019-10-04
    Ethics and Social Robotics. Raffaele Rodogno - 2016 - Ethics and Information Technology 18 (4):241-242.
  45. added 2019-10-04
    Refining the Ethics of Computer-Made Decisions: A Classification of Moral Mediation by Ubiquitous Machines. Marlies Van de Voort, Wolter Pieters & Luca Consoli - 2015 - Ethics and Information Technology 17 (1):41-56.
    In the past decades, computers have become more and more involved in society by the rise of ubiquitous systems, increasing the number of interactions between humans and IT systems. At the same time, the technology itself is getting more complex, enabling devices to act in a way that previously only humans could, based on developments in the fields of both robotics and artificial intelligence. This results in a situation in which many autonomous, intelligent and context-aware systems are involved in decisions (...)
  46. added 2019-10-04
    Privacy, Data Protection, and the Unprecedented Challenges of Ambient Intelligence. Antoinette Rouvroy - 2008 - Law and Ethics of Human Rights 2 (1).
    This paper identifies the unprecedented challenges that the prospects of an ‘ambient intelligence’ era raise from the points of view of ‘privacy’ and data protection. Privacy and data protection are identified, in line with Agre's conceptualization, as complementary and interdependent legal instruments aimed at preserving the individual freedom to build one's own personality without excessive constraints and influences, and to control some aspects of one's identity that one projects on the world. The ‘performativity’ and the distribution of agency that characterize (...)
  47. added 2019-10-01
    The Extended Corporate Mind: When Corporations Use AI to Break the Law. Mihailis Diamantis - forthcoming - North Carolina Law Review.
    Algorithms may soon replace employees as the leading cause of corporate harm. For centuries, the law has defined corporate misconduct — anything from civil discrimination to criminal insider trading — in terms of employee misconduct. Today, however, breakthroughs in artificial intelligence and big data allow automated systems to make many corporate decisions, e.g., who gets a loan or what stocks to buy. These technologies introduce valuable efficiencies, but they do not remove (or even always reduce) the incidence of corporate harm. (...)
  48. added 2019-09-26
    The Human Rights of Nonhuman Artificial Entities: An Oxymoron? Amedeo Santosuosso - 2014 - Jahrbuch für Wissenschaft Und Ethik 18 (1).
  49. added 2019-09-10
    The Moral Problem of Other Minds. Jeff Sebo - 2018 - The Harvard Review of Philosophy 25:51-70.
    In this paper I ask how we should treat other beings in cases of uncertainty about sentience. I evaluate three options: an incautionary principle that permits us to treat other beings as non-sentient, a precautionary principle that requires us to treat other beings as sentient, and an expected value principle that requires us to multiply our subjective probability that other beings are sentient by the amount of moral value they would have if they were. I then draw three conclusions. First, (...)
  50. added 2019-08-29
    What’s Wrong with Designing People to Serve? Bartek Chomanski - 2019 - Ethical Theory and Moral Practice 22 (4):993-1015.
    In this paper I argue, contrary to recent literature, that it is unethical to create artificial agents possessing human-level intelligence that are programmed to be human beings’ obedient servants. In developing the argument, I concede that there are possible scenarios in which building such artificial servants is, on net, beneficial. I also concede that, on some conceptions of autonomy, it is possible to build human-level AI servants that will enjoy full-blown autonomy. Nonetheless, the main thrust of my argument is that, (...)
Entries 1–50 of 346