The literature on self-driving cars and ethics continues to grow. Yet much of it focuses on ethical complexities emerging from an individual vehicle. That is an important but insufficient step towards determining how the technology will impact human lives and society more generally. What must complement ongoing discussions is a broader, system-level analysis that engages with the interactions and effects that these cars will have on one another and on the socio-technical systems in which they are embedded. To bring the conversation about self-driving cars to the system level, we make use of two traffic scenarios which highlight some of the complexities that designers, policymakers, and others should consider related to the technology. We then describe three approaches that could be used to address such complexities and their associated shortcomings. We conclude by bringing attention to the “Moral Responsibility for Computing Artifacts: The Rules”, a framework that can provide insight into how to approach ethical issues related to self-driving cars.
Floridi and Sanders’ seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.
The crashes of two 737 MAX passenger aircraft in late 2018 and early 2019, and the subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing’s practices and culture. Explanations for the crashes include: design flaws within the MAX’s new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing’s chief competitor, Airbus; Boeing’s lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the FAA, especially during the certification of the MAX and following the first crash. While these and other factors have been the subject of numerous government reports and investigative journalism articles, little to date has been written on the ethical significance of the accidents, in particular the ethical responsibilities of the engineers at Boeing and the FAA involved in designing and certifying the MAX. Lessons learned from this case include the need to strengthen the voice of engineers within large organizations. There is also the need for greater involvement of professional engineering societies in ethics-related activities and for a broader focus on moral courage in engineering ethics education.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not deeply explored some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent and that the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
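To make the table distinction concrete, here is a minimal sketch, not taken from the paper; the class name, method names, and situations are illustrative assumptions. It shows a table-driven agent as seen from LoA2, the designer view: with an unmodifiable table the designer fixes every situation-to-action mapping in advance, while with a fully modifiable table the agent can rewrite its own entries, yet the designer still chose to build and deploy an agent that can do so.

```python
# Illustrative sketch only (names and situations are hypothetical, not from the paper).

class TableDrivenAgent:
    def __init__(self, table, modifiable=False):
        # `table` maps a perceived situation to an action; the designer supplies it.
        self.table = dict(table)
        self.modifiable = modifiable

    def act(self, situation):
        # Behavior is fully determined by whatever the table currently contains.
        return self.table.get(situation, "do_nothing")

    def learn(self, situation, new_action):
        # Only a modifiable-table agent can change its own action policy;
        # with an unmodifiable table, every action remains traceable to the designer.
        if self.modifiable:
            self.table[situation] = new_action


# Unmodifiable table: each action was chosen, in advance, by the designer.
fixed_agent = TableDrivenAgent({"obstacle_ahead": "stop"}, modifiable=False)

# Fully modifiable table: the agent can overwrite its own entries, yet the
# designer still decided to build and deploy an agent that can do so.
adaptive_agent = TableDrivenAgent({"obstacle_ahead": "stop"}, modifiable=True)
adaptive_agent.learn("obstacle_ahead", "swerve")
```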
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.
As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.
Purpose The purpose of this paper is to explore the ethical issues surrounding information systems practice with a view to encouraging greater involvement in this aspect of IS research. Information integrity relies upon the development and operation of computer-based information systems. Those who undertake the planning, development and operation of these information systems have obligations to assure information integrity and, overall, to contribute to the public good. This ethical dimension of information systems has attracted mixed attention in the IS academic discipline. Design/methodology/approach The authors are a multidisciplinary team providing a rich, diverse experience which includes professional and information ethics, management information systems, software engineering, data repositories and information systems development. Each author has used this experience to review the IS ethics landscape, which provides four complementary perspectives. These are synthesised to tease out trends and future pointers. Findings It is confirmed that there is a serious lack of research being undertaken relating to the ethical dimension of the Information Systems field. There is limited crossover between the well-established multidisciplinary community of Computer Ethics research and the traditional Information Systems research community. Originality/value An outline framework is offered which could provide an opportunity for rich and valuable dialogue across the two communities. This is proposed as the starting point for a proactive research and practice action plan for information systems ethics.
Purpose This paper aims to explore the ethical and social impact of augmented visual field devices (AVFDs), identifying issues that AVFDs share with existing devices and suggesting new ethical and social issues that arise with their adoption. Design/methodology/approach This essay combines a philosophical and an ethical analysis approach. It is based on Plato’s Allegory of the Cave, philosophical notions of transparency and presence, and human values including psychological well-being, physical well-being, privacy, deception, informed consent, ownership and property, and trust. Findings The paper concludes that the interactions among developers, users and non-users via AVFDs have implications for autonomy. It also identifies issues of ownership that arise because of the blending of physical and virtual space, and important ways that these devices impact identity and trust. Practical implications Developers ought to take time to design and implement an easy-to-use informed consent system with these devices. There is a strong need for consent protocols among developers, users and non-users of AVFDs. Social implications There is a social benefit to users sharing what is visible on their devices with those who are in close physical proximity, but this introduces tension between notions of personal privacy and the establishment and maintenance of social norms. Originality/value There is new analysis of how AVFDs impact individual identity and the attendant ties to notions of ownership of the space between an object and someone’s eyes and control over perception.
Should software be sold “as is”, totally guaranteed, or something else? This paper suggests that “informed consent”, used extensively in medical ethics, is an appropriate way to envision the buyer/developer relationship when software is sold. We review why technical difficulties preclude delivering perfect software but allow statistical predictions about reliability. Then we borrow principles refined by medical ethics and apply them to computer professionals.
This volume collects key influential papers that have animated the debate about information and computer ethics over the past three decades, covering issues such as privacy, online trust, anonymity, value sensitive design, machine ethics, professional conduct and the moral responsibility of software developers. These previously published articles have set the tone of the discussion, and bringing them together in one volume provides lecturers and students with a one-stop resource with which to navigate the debate.
We describe the process of changing and the changes being suggested for the ACM Code of Ethics and Professional Conduct. In addition to addressing the technical and ethical basis for the proposed changes, we identify suggestions that commenters made in response to the first draft. We invite feedback on the proposed changes and on the suggestions that commenters made.
The first topic of concern is anonymity, specifically the anonymity that is available in communications on the Internet. An earlier paper argues that anonymity in electronic communication is problematic because: it makes law enforcement difficult; it frees individuals to behave in socially undesirable and harmful ways; it diminishes the integrity of information, since one can't be sure who information is coming from, whether it has been altered on the way, etc.; and all three of the above contribute to an environment of diminished trust which is not conducive to certain uses of computer communication. Counterbalancing these problems are some important benefits. Anonymity can facilitate some socially desirable and beneficial behavior. For example, it can eliminate the fear of repercussions for behavior in contexts in which repercussions would diminish the availability or reliability of information, e.g., voting, personal relationships between consenting adults, and the like. Furthermore, anonymity can be used constructively to reduce the effect of prejudices on communications. The negative aspects of anonymity all seem to point to a tension between accountability and anonymity. They suggest that accountability and anonymity are not compatible, and they even seem to suggest that since accountability is a good thing, it would be good to eliminate anonymity. In other words, the problems with anonymity suggest that individuals are more likely to behave in socially desirable ways when they are held accountable for their behavior, and more likely to engage in socially undesirable behavior when they are not held accountable. I am not going to take issue with the correlation between accountability and anonymity, but rather with the claim that accountability is good. To examine this problem, let's look at a continuum that stretches from total anonymity at one end to no anonymity at all at the other. At the opposite extreme from anonymity is a panopticon society. The panopticon is the prison environment described by Foucault in which prison cells are arranged in a large circle with the side facing the inside of the circle open to view. The guard tower is placed in the middle of the circle so that guards can see everything that goes on in every cell. In a recent article on privacy, Jeffrey Reiman, reflecting on the new intelligent highway systems, suggests that we are moving closer and closer to a panopticon society. When we contemplate all the electronic data that is now gathered about each one of us as we move through our everyday lives (intelligent highway systems, consumer transactions, traffic patterns on the Internet, medical records, financial records, and so on), we see the trend that Reiman identifies. Electronic behavior is recorded and the information is retained. While actions/transactions in separate domains are not necessarily combined, the potential clearly exists for combining data into a complete portfolio of an individual's day-to-day life. So it would seem that the more activities and domains are moved into an IT-based medium, the closer we will come to a panopticon society. A panopticon society gives us the ultimate in accountability. Everything an individual does is observable and therefore available to those to whom we are accountable. Of course, in doing this, it puts us, in effect, in prison. The prison parallel is appropriate here because what anonymity allows us is freedom; prison is the ultimate in lack of freedom.
In this way the arguments for a free society become arguments for anonymity. Only when individuals are free will they experiment, try new ideas, take risks, and learn by doing so. Only in an environment that tolerates making mistakes will individuals develop the active habits that are so essential for democracy. In a world without information technology, individuals have various levels, degrees, and kinds of anonymity and consequently different levels and kinds of freedom. Degrees and kinds of anonymity vary with the domain: small-town social life versus urban social life, voting, commercial exchanges, banking, automobile travel, airplane travel, telephone communication, education, and so on. Drawing on our experience before IT-based institutions, we might believe that what we need is varying levels, degrees, and kinds of anonymity. This seems a good starting place because it suggests an attempt to re-create the mixture that we have in the physical, non-IT-based world. Nevertheless, there is a danger. If we think in terms of levels and degrees of anonymity, we may not see the forest for the trees. We may not acknowledge that in an electronic medium, levels and kinds of anonymity mean, in an important sense, no anonymity. If there are domains in which we can be anonymous, but those domains are part of a global communication infrastructure in which there is no anonymity at the entry point, then it will always be possible to trace someone's identity. We delude ourselves when we think we have anonymity on-line or off-line. Rather, what we have in both places are situations in which it is more or less difficult to identify individuals. We have a continuum of situations in which it is easier or harder to link behavior to other behavior and to histories of behavior. In the physical world, we can go places and do things where others don't know us by name and have no history with us, though they see our bodies, clothes, and behavior. If we do nothing unusual, we may be forgotten. On the other hand, if we do something illegal, authorities may attempt to track us down and figure out who we are. For example, law enforcement officials, collection agencies, and those who want to sue us may take an active interest in removing our anonymity, ex post facto. Think of Timothy McVeigh and Terry Nichols, the men who apparently bombed the federal building in Oklahoma City. Much of what they did, they did anonymously, but then law enforcement officials set out to find out who had done various things, e.g., rented a car, bought explosives, etc. The shrouds of anonymity under which McVeigh and Nichols had acted were slowly removed. Is this any different from behavior on the Internet? Is there a significant difference in the kind or degree of anonymity we have in the physical world versus what we have in an IT-based world? The character of the trail we leave is different: in one case it's an electronic trail, while in the other it involves human memories, photographs, and paper and ink. What law enforcement officials had to do to track down McVeigh is quite different from tracking down an electronic lawbreaker. Also, the cost of electronic information gathering, both in time and money, can be dramatically lower than the cost of talking to people, gathering physical evidence, and the other minutiae required by traditional detective work. We should acknowledge that we do not, and are never likely to, have anonymity on the Internet. We would do better to think of different levels or kinds of identity.
There are important moral and social issues arising as a result of these varying degrees and kinds of identity. Perhaps the most important matter is assuring that individuals are informed about the conditions in which they are interacting. Even more important, perhaps, is that individuals have a choice about the conditions under which they are communicating. In the rest of this paper we explore a few examples of levels and kinds of identity that are practical on the Internet. We discuss the advantages and disadvantages that we see in these "styles" of identity for individuals, and we examine the costs and benefits of these styles for society as a whole.
Purpose This short viewpoint is a response to a lead paper on professional ethics in the information age. This paper aims to draw upon the authors’ experience of professional bodies such as the ACM over many years. Points of agreement and disagreement are highlighted with the aim of promoting wider debate. Design/methodology/approach An analysis of the lead paper is undertaken using a binary agree/disagree approach. This highlights the conflicting views, which can then be considered in more detail. Findings Four major agreements and four major disagreements are identified. There is an emphasis on “acultural” professionalism to promote moral behavior rather than amoral behavior. Originality/value This is an original viewpoint which draws from the authors’ practical experience and expertise.
The Association for Computing Machinery's Committee on Professional Ethics has been charged with executing three major projects over the next two years: updating ACM's Code of Ethics and Professional Conduct, revising the enforcement procedures for the Code, and developing new media to promote integrity in the profession. We cannot do this alone, and we are asking SIGCAS members to volunteer and get involved. We will briefly describe the rationale and plan behind these projects and describe opportunities to get involved.
This paper applies social-relational models of the moral standing of robots to cases where the encounters between the robot and humans are relatively brief. Our analysis spans the spectrum from non-social robots to fully social robots. We consider cases where the encounters are between a stranger and the robot and do not include the robot’s owner or operator. We conclude that the developers of robots that might be encountered by other people when the owner is not present cannot wash their hands of responsibility. They must take care with how they develop the robot’s interface with people and take into account how that interface influences the social relationship between the robot and people, and, thus, the moral standing of the robot with each person it encounters. Furthermore, we claim that developers bear responsibility for the impact social robots have on the quality of human social relationships.
We demonstrate that different categories of software raise different ethical concerns with respect to whether software ought to be Free Software or Proprietary Software. We outline the ethical tension between Free Software and Proprietary Software that stems from the two kinds of licenses. For some categories of software we develop support for normative statements regarding the software development landscape. We claim that as society's use of software changes, the ethical analysis for that category of software must necessarily be repeated. Finally, we make a utilitarian argument that the software development environment should encourage both Free Software and Proprietary Software to flourish.
Traditionally, philosophers have ascribed moral agency almost exclusively to humans. Early writing about moral agency can be traced to Aristotle and Aquinas. In addition to human moral agents, Aristotle discussed the possibility of moral agency of the Greek gods, and Aquinas discussed the possibility of moral agency of angels. In the case of angels, a difficulty in ascribing moral agency was the suspicion that angels did not have enough independence from God for genuine moral choices to be ascribed to them. Recently, new candidates have been suggested for non-human moral agency. Floridi and Sanders suggest that artificially intelligent programs that meet certain criteria may attain the status of moral agents; they suggest a redefinition of moral agency to clarify the relationship between artificial and human agents. Other philosophers, as well as scholars in Science and Technology Studies, are studying the possibility that artifacts that are not designed to mimic human intelligence still embody a kind of moral agency. For example, there has been a lively discussion about the moral intent and the consequential effects of speed bumps. The connections and distributed intelligence of a network are another candidate being considered for moral agency. These philosophical arguments may have practical consequences for software developers and for the people affected by computing. In this paper, we will examine ideas about artificial moral agency from the perspective of a software developer.