References
  • Robots: ethical by design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • The Case for Ethical Autonomy in Unmanned Systems. Ronald C. Arkin - 2010 - Journal of Military Ethics 9 (4):332-341.
    The underlying thesis of the research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than are human soldiers. In this article this hypothesis is supported by ongoing and foreseen technological advances and, perhaps equally important, by an assessment of the fundamental ability of human warfighters in today’s battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in (...)
  • What is it like to be a bat? Thomas Nagel - 1974 - Philosophical Review 83 (October):435-50.
  • Compatibilism. Michael McKenna - 2008 - Stanford Encyclopedia of Philosophy.
  • Robots should be slaves. Joanna J. Bryson - 2010 - In Yorick Wilks (ed.), Close Engagements with Artificial Companions: Key social, psychological, ethical and design issues. John Benjamins Publishing. pp. 63-74.
  • Two Ways of Socialising Responsibility: Circumstantialist and Scaffolded-Responsiveness. Jules Holroyd - 2018 - In Marina Oshana, Katrina Hutchison & Catriona Mackenzie (eds.), Social Dimensions of Moral Responsibility. New York: Oxford University Press. pp. 137-162.
    This chapter evaluates two competing views of morally responsible agency. The first view at issue is Vargas’s circumstantialism—on which responsible agency is a function of the agent and her circumstances, and so is highly context sensitive. The second view is McGeer’s scaffolded-responsiveness view, on which responsible agency is constituted by the capacity for responsiveness to reasons directly, and indirectly via sensitivity to the expectations of one’s audience (whose sensitivity may be more developed than one’s own). This chapter defends a version (...)
  • Mechanism and responsibility. Daniel C. Dennett - 1973 - In Ted Honderich (ed.), Essays on Freedom of Action. Boston: Routledge and Kegan Paul. pp. 157-84.
  • What is it like to be a bat? Thomas Nagel - 2004 - In Tim Crane & Katalin Farkas (eds.), Metaphysics: A Guide and Anthology. Oxford University Press UK.
  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • Automation and Utopia: Human Flourishing in an Age Without Work. John Danaher - 2019 - Cambridge, MA: Harvard University Press.
    Human obsolescence is imminent. We are living through an era in which our activity is becoming less and less relevant to our well-being and to the fate of our planet. This trend toward increased obsolescence is likely to continue in the future, and we must do our best to prepare ourselves and our societies for this reality. Far from being a cause for despair, this is in fact an opportunity for optimism. Harnessed in the right way, the technology that hastens (...)
  • Humans and Robots: Ethics, Agency, and Anthropomorphism. Sven Nyholm - 2020 - Rowman & Littlefield International.
    This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
  • Free Will Skepticism in Law and Society: Challenging Retributive Justice. Elizabeth Shaw, Derk Pereboom & Gregg D. Caruso (eds.) - 2019 - New York, NY: Cambridge University Press.
    'Free will skepticism' refers to a family of views that all take seriously the possibility that human beings lack the control in action - i.e. the free will - required for an agent to be truly deserving of blame and praise, punishment and reward. Critics fear that adopting this view would have harmful consequences for our interpersonal relationships, society, morality, meaning, and laws. Optimistic free will skeptics, on the other hand, respond by arguing that life without free will and so-called (...)
  • Practical reason. R. Jay Wallace - 2008 - Stanford Encyclopedia of Philosophy.
    Practical reason is the general human capacity for resolving, through reflection, the question of what one is to do. Deliberation of this kind is practical in at least two senses. First, it is practical in its subject matter, insofar as it is concerned with action. But it is also practical in its consequences or its issue, insofar as reflection about action itself directly moves people to act. Our capacity for deliberative self-determination raises two sets of philosophical problems. First, there are (...)
  • Moralizing Technology: Understanding and Designing the Morality of Things. Peter-Paul Verbeek - 2011 - University of Chicago Press.
    Technology permeates nearly every aspect of our daily lives. Cars enable us to travel long distances, mobile phones help us to communicate, and medical devices make it possible to detect and cure diseases. But these aids to existence are not simply neutral instruments: they give shape to what we do and how we experience the world. And because technology plays such an active role in shaping our daily actions and decisions, it is crucial, Peter-Paul Verbeek argues, that we consider the (...)
  • Can robots be moral? Laszlo Versenyi - 1974 - Ethics 84 (3):248-259.
  • Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)
  • Out of character: on the creation of virtuous machines. Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
    The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Intending to err: the ethical challenge of lethal, autonomous systems. Mark S. Swiatek - 2012 - Ethics and Information Technology 14 (4):241-254.
    Current precursors in the development of lethal, autonomous systems (LAS) point to the use of biometric devices for assessing, identifying, and verifying targets. The inclusion of biometric devices entails the use of a probabilistic matching program that requires the deliberate targeting of noncombatants as a statistically necessary function of the system. While the tactical employment of the LAS may be justified on the grounds that the deliberate killing of a smaller number of noncombatants is better than the accidental killing of (...)
  • Asimov’s “three laws of robotics” and machine metaethics. Susan Leigh Anderson - 2008 - AI and Society 22 (4):477-493.
    Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we (...)
  • Robowarfare: Can robots be more ethical than humans on the battlefield? John P. Sullins - 2010 - Ethics and Information Technology 12 (3):263-275.
    Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This (...)
  • Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Bernd Carsten Stahl - 2006 - Ethics and Information Technology 8 (4):205-213.
    There has been much debate whether computers can be responsible. This question is usually discussed in terms of personhood and personal characteristics, which a computer may or may not possess. If a computer fulfils the conditions required for agency or personhood, then it can be responsible; otherwise not. This paper suggests a different approach. An analysis of the concept of responsibility shows that it is a social construct of ascription which is only viable in certain social contexts and which serves (...)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Moral Worth and Moral Knowledge. Paulina Sliwa - 2015 - Philosophy and Phenomenological Research 93 (2):393-418.
    To have moral worth an action not only needs to conform to the correct normative theory; it also needs to be motivated in the right way. I argue that morally worthy actions are motivated by the rightness of the action; they are motivated by an agent's concern for doing what's right and her knowledge that her action is morally right. Call this the Rightness Condition. On the Rightness Condition moral motivation involves both a conative and a cognitive element—in particular, (...)
  • Ethical robots: the future can heed us. Selmer Bringsjord - 2008 - AI and Society 22 (4):539-550.
    Bill Joy’s deep pessimism is now famous. Why the Future Doesn’t Need Us, his defense of that pessimism, has been read by, it seems, everyone—and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • On the Demarcation Problem and the Possibility of Environmental Ethics. Lars Samuelsson - 2010 - Environmental Ethics 32 (3):247-265.
    According to a popular critique of environmental ethics, the view that nature has intrinsic value faces an insurmountable demarcation problem. This critique was delivered in a particularly forceful manner two decades ago by Janna Thompson in her paper “A Refutation of Environmental Ethics.” However, the demarcation problem, albeit a real problem, is not insurmountable. Thompson’s argument draws on the claim that the possibility of environmental ethics depends on the possibility that nature can be demarcated with respect to some allegedly morally (...)
  • Autonomous Machines, Moral Judgment, and Acting for the Right Reasons. Duncan Purves, Ryan Jenkins & Bradley J. Strawser - 2015 - Ethical Theory and Moral Practice 18 (4):851-872.
    We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral (...)
  • On the Moral Agency of Computers. Thomas M. Powers - 2013 - Topoi 32 (2):227-236.
    Can computer systems ever be considered moral agents? This paper considers two factors that are explored in the recent philosophical literature. First, there are the important domains in which computers are allowed to act, made possible by their greater functional capacities. Second, there is the claim that these functional capacities appear to embody relevant human abilities, such as autonomy and responsibility. I argue that neither the first (Domain-Function) factor nor the second (Simulacrum) factor gets at the central issue in the (...)
  • What makes any agent a moral agent? Reflections on machine consciousness and moral agency. Joel Parthemore & Blay Whitby - 2013 - International Journal of Machine Consciousness 5 (2):105-129.
    In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into discussion but ultimately misleading. We set out a number of (...)
  • Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci. Sven Nyholm - 2018 - Science and Engineering Ethics 24 (4):1201-1219.
    Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed (...)
  • Responsibility Practices and Unmanned Military Technologies. Merel Noorman - 2014 - Science and Engineering Ethics 20 (3):809-826.
    The prospect of increasingly autonomous military robots has raised concerns about the obfuscation of human responsibility. This paper argues that whether or not and to what extent human actors are and will be considered to be responsible for the behavior of robotic systems is and will be the outcome of ongoing negotiations between the various human actors involved. These negotiations are about what technologies should do and mean, but they are also about how responsibility should be interpreted and how it (...)
  • Negotiating autonomy and responsibility in military robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Mind-making practices: the social infrastructure of self-knowing agency and responsibility. Victoria McGeer - 2015 - Philosophical Explorations 18 (2):259-281.
    This paper is divided into two parts. In Section 1, I explore and defend a “regulative view” of folk-psychology as against the “standard view”. On the regulative view, folk-psychology is conceptualized in fundamentally interpersonal terms as a “mind-making” practice through which we come to form and regulate our minds in accordance with a rich array of socially shared and socially maintained sense-making norms. It is not, as the standard view maintains, simply an epistemic capacity for coming to know about the (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders’ seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that (...)
  • Computer systems: Moral entities but not moral agents. Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • Computer systems and responsibility: A normative look at technological complexity. Deborah G. Johnson & Thomas M. Powers - 2005 - Ethics and Information Technology 7 (2):99-107.
    In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and Hart (...)
  • Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories (...)
  • On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
  • The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question (...)
  • Issues in robot ethics seen through the lens of a moral Turing test. Anne Gerdes & Peter Øhrstrøm - 2015 - Journal of Information, Communication and Ethics in Society 13 (2):98-109.
    Purpose – The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent. Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out to address fundamental challenges within (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • “Ain’t No One Here But Us Social Forces”: Constructing the Professional Responsibility of Engineers. Michael Davis - 2012 - Science and Engineering Ethics 18 (1):13-34.
    There are many ways to avoid responsibility, for example, explaining what happens as the work of the gods, fate, society, or the system. For engineers, “technology” or “the organization” will serve this purpose quite well. We may distinguish at least nine (related) senses of “responsibility”, the most important of which are: (a) responsibility-as-causation (the storm is responsible for flooding), (b) responsibility-as-liability (he is the person responsible and will have to pay), (c) responsibility-as-competency (he’s a responsible person, that is, he’s rational), (...)
  • Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
  • Moral appearances: emotions, robots, and human morality. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that (...)
  • Bridging the Responsibility Gap in Automated Warfare. Marc Champagne & Ryan Tonkens - 2015 - Philosophy and Technology 28 (1):125-137.
    Sparrow argues that military robots capable of making their own decisions would be independent enough to allow us denial for their actions, yet too unlike us to be the targets of meaningful blame or praise—thereby fostering what Matthias has dubbed “the responsibility gap.” We agree with Sparrow that someone must be held responsible for all actions taken in a military conflict. That said, we think Sparrow overlooks the possibility of what we term “blank check” responsibility: A person of sufficiently high (...)
  • Information, Ethics, and Computers: The Problem of Autonomous Moral Agents. Bernd Carsten Stahl - 2004 - Minds and Machines 14 (1):67-83.
    In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-facetted notion which is hard to define comprehensively. However, the (...)
  • The Explanatory Component of Moral Responsibility. Gunnar Björnsson & Karl Persson - 2012 - Noûs 46 (2):326-354.
    In this paper, we do three things. First, we put forth a novel hypothesis about judgments of moral responsibility according to which such judgments are a species of explanatory judgments. Second, we argue that this hypothesis explains both some general features of everyday thinking about responsibility and the appeal of skeptical arguments against moral responsibility. Finally, we argue that, if correct, the hypothesis provides a defense against these skeptical arguments.