After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, computer systems have intentionality, and because of this, they should not be dismissed from the realm of morality in the same way that natural objects are dismissed. Natural objects behave from necessity; computer systems and other artifacts behave from necessity after they are created and deployed, but, unlike natural objects, they are intentionally created and deployed. Failure to recognize the intentionality of computer systems and their connection to human intentionality and action hides the moral character of computer systems. Computer systems are components in human moral action. When humans act with artifacts, their actions are constituted by the intentionality and efficacy of the artifact which, in turn, has been constituted by the intentionality and efficacy of the artifact designer. All three components – artifact designer, artifact, and artifact user – are at work when there is an action and all three should be the focus of moral evaluation.
Floridi and Sanders’ seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.
After reviewing portions of the 21st Century Nanotechnology Research and Development Act that call for examination of societal and ethical issues, this essay seeks to understand how nanoethics can play a role in nanotechnology development. What can and should nanoethics aim to achieve? The focus of the essay is on the challenges of examining ethical issues with regard to a technology that is still emerging, still ‘in the making.’ The literature of science and technology studies (STS) is used to understand the nanotechnology endeavor in a way that makes room for influence by nanoethics. The analysis emphasizes: the contingency of technology and the many actors involved in its development; a conception of technology as sociotechnical systems; and the values infused (in a variety of ways) in technology. Nanoethicists can be among the many actors who shape the meaning and materiality of an emerging technology. Nevertheless, there are dangers that nanoethicists should try to avoid. The possibility of being co-opted while working alongside nanotechnology engineers and scientists is one danger that is inseparable from trying to exert influence. Related but somewhat different is the danger of not asking about the worthiness of the nanotechnology enterprise as a social investment in the future.
A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now being displayed in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. The public discourse, as well as discourse among AI researchers, leads to at least two problems: a confusion about the notion of ‘autonomy’ that induces people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system. Our purpose here is to develop and use a language that reframes the discourse in AI and sheds light on the real issues in the discipline.
In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart, and by Hart and Honoré. While these accounts are helpful, they have misled philosophers and others by presupposing that responsibility can be ascribed in all cases of action simply by paying attention to the free and intended actions of human beings. Such accounts neglect the part played by technology in ascriptions of responsibility in cases of moral action with technology. For both moral and role responsibility, we argue that ascriptions of responsibility depend on seeing action as complex in the sense described by TMA. We conclude by showing how our analysis enriches moral discourse about responsibility for TMA.
Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is among those most commonly used in attempts to frame interactions between humans and robots, and it tends to push in the direction of blurring the distinction between humans and machines. We argue that, despite some shared characteristics, when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another, analogies with animals are misleading.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.
Since the idea of forbidden knowledge is rooted in the biblical story of Adam and Eve eating from the forbidden tree of knowledge, its meaning today, in particular as a metaphor for scientific knowledge, is not so obvious. We can and should ask questions about the autonomy of science.
In this paper I use the concept of forbidden knowledge to explore questions about putting limits on science. Science has generally been understood to seek and produce objective truth, and this understanding of science has grounded its claim to freedom of inquiry. What happens to decision making about science when this claim to objective, disinterested truth is rejected? There are two changes that must be made to update the idea of forbidden knowledge for modern science. The first is to shift from presuming that decisions to constrain or even forbid knowledge can be made from a position of omniscience (perfect knowledge) to recognizing that such decisions made by human beings are made from a position of limited or partial knowledge. The second is to reject the idea that knowledge is objective and disinterested and accept that knowledge (even scientific knowledge) is interested. In particular, choices about what knowledge gets created are normative, value choices. When these two changes are made to the idea of forbidden knowledge, questions about limiting or forbidding lines of inquiry are shown to distract attention from the more important matters of who makes decisions about what knowledge is produced and how those decisions are made. Much more attention should be focused on choosing directions in science, and as this is done, the matter of whether constraints should be placed on science will fall into place.
The following views were presented at the Annual Meeting of the American Association for the Advancement of Science Seminar “Teaching Ethics in Science and Engineering”, 10–11 February 1993, organized by Stephanie J. Bird, Penny J. Gilmer and Terrell W. Bynum. Opragen Publications thanks the AAAS, seminar organizers and authors for permission to publish extracts from the conference. The opinions expressed are those of the authors and do not reflect the opinions of AAAS or its Board of Directors.
The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as it functions in real-world contexts. We use the VW emission fraud case to explain triadic agency since, in this case, a technological artifact, namely software, was an essential part of the wrongdoing and might be said to have had agency in it. We then extend the case to include futuristic AI, imagining AI that becomes more and more autonomous.
An engaging, accessible survey of the ethical issues faced by engineers, designed for students. The first engineering ethics textbook to use debates as the framework for presenting engineering ethics topics, it explores the most difficult and controversial issues that engineers face in daily practice. Written by Deborah Johnson, a leading scholar in the field of engineering and computer ethics, the book approaches engineering ethics with three premises: that engineering is both a technical and a social endeavor; that engineers don’t just build things, they build society; and that engineering is an inherently ethical enterprise.