The literature on self-driving cars and ethics continues to grow. Yet much of it focuses on ethical complexities emerging from an individual vehicle. That is an important but insufficient step towards determining how the technology will impact human lives and society more generally. What must complement ongoing discussions is a broader, system-level analysis that engages with the interactions and effects that these cars will have on one another and on the socio-technical systems in which they are embedded. To bring the conversation about self-driving cars to the system level, we make use of two traffic scenarios which highlight some of the complexities that designers, policymakers, and others should consider related to the technology. We then describe three approaches that could be used to address such complexities and their associated shortcomings. We conclude by bringing attention to the “Moral Responsibility for Computing Artifacts: The Rules”, a framework that can provide insight into how to approach ethical issues related to self-driving cars.
Recently, there has been an upsurge of attention focused on bias and its impact on specialized artificial intelligence applications. Allegations of racism and sexism have permeated the conversation as stories surface about search engines delivering job postings for well-paying technical jobs to men and not women, or providing arrest mugshots when keywords such as “black teenagers” are entered. Learning algorithms are evolving; they are often created by parsing through large datasets of online information while having truth labels bestowed on them by crowd-sourced masses. These specialized AI algorithms have been liberated from the minds of researchers and startups, and released onto the public. Yet intelligent though they may be, these algorithms maintain some of the same biases that permeate society. They find patterns within datasets that reflect implicit biases and, in so doing, emphasize and reinforce these biases as global truth. This paper describes specific examples of how bias has infused itself into current AI and robotic systems, and how it may affect the future design of such systems. More specifically, we draw attention to how bias may affect the functioning of a robot peacekeeper, a self-driving car, and a medical robot. We conclude with an overview of measures that could be taken to mitigate bias or halt it from permeating robotic technology.
The crash of two 737 MAX passenger aircraft in late 2018 and early 2019, and subsequent grounding of the entire fleet of 737 MAX jets, turned a global spotlight on Boeing’s practices and culture. Explanations for the crashes include: design flaws within the MAX’s new flight control software system designed to prevent stalls; internal pressure to keep pace with Boeing’s chief competitor, Airbus; Boeing’s lack of transparency about the new software; and the lack of adequate monitoring of Boeing by the FAA, especially during the certification of the MAX and following the first crash. While these and other factors have been the subject of numerous government reports and investigative journalism articles, little to date has been written on the ethical significance of the accidents, in particular the ethical responsibilities of the engineers at Boeing and the FAA involved in designing and certifying the MAX. Lessons learned from this case include the need to strengthen the voice of engineers within large organizations. There is also the need for greater involvement of professional engineering societies in ethics-related activities and for broader focus on moral courage in engineering ethics education.
Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to “nudge” their human users in the direction of being “more ethical”. More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture “socially just” tendencies in their human counterparts. Designing technological artifacts in such a way as to influence human behavior is already a well-established practice, but merely because the practice is commonplace does not necessarily resolve the ethical issues associated with its implementation.
To assess ethics pedagogy in science and engineering, we developed a new tool called the Engineering and Science Issues Test (ESIT). ESIT measures moral judgment in a manner similar to the Defining Issues Test, second edition, but is built around technical dilemmas in science and engineering. We used a quasi-experimental approach with pre- and post-tests, and we compared the results to those of a control group with no overt ethics instruction. Our findings are that several (but not all) stand-alone classes showed a significant improvement compared to the control group when the metric includes multiple stages of moral development. We also found that the written test had a higher response rate and sensitivity to pedagogy than the electronic version. We do not find significant differences on pre-test scores with respect to age, education level, gender or political leanings, but we do on whether subjects were native English speakers. We did not find significant differences on pre-test scores based on whether subjects had previous ethics instruction; this could suggest a lack of a long-term effect from the instruction.
This manuscript describes a pilot study in ethics education employing a problem-based learning approach to the study of novel, complex, ethically fraught, unavoidably public, and unavoidably divisive policy problems, called “fractious problems,” in bioscience and biotechnology. Diverse graduate and professional students from four US institutions and disciplines spanning science, engineering, humanities, social science, law, and medicine analyzed fractious problems employing “navigational skills” tailored to the distinctive features of these problems. The students presented their results to policymakers, stakeholders, experts, and members of the public. This approach may provide a model for educating future bioscientists and bioengineers so that they can meaningfully contribute to the social understanding and resolution of challenging policy problems generated by their work.
As a committee of the National Academy of Engineering recognized, ethics education should foster the ability of students to analyze complex decision situations and ill-structured problems. Building on the NAE’s insights, we report about an innovative teaching approach that has two main features: first, it places the emphasis on deliberation and on self-directed, problem-based learning in small groups of students; and second, it focuses on understanding ill-structured problems. The first innovation is motivated by an abundance of scholarly research that supports the value of deliberative learning practices. The second results from a critique of the traditional case-study approach in engineering ethics. A key problem with standard cases is that they are usually described in a fashion that renders the ethical problem too obvious and simplistic. The practitioner, by contrast, may face problems that are ill-structured. In the collaborative learning environment described here, groups of students use interactive and web-based argument visualization software called “AGORA-net: Participate – Deliberate!”. The function of the software is to structure communication and problem solving in small groups. Students are confronted with the task of identifying possible stakeholder positions and reconstructing their legitimacy by constructing justifications for these positions in the form of graphically represented argument maps. The argument maps are then presented in class so that these stakeholder positions and their respective justifications become visible and can be brought into a reasoned dialogue. Argument mapping provides an opportunity for students to collaborate in teams and to develop critical thinking and argumentation skills.
The primary aim of this article is to identify ethical challenges relating to authorship in engineering fields. Professional organizations and journals do provide crucial guidance in this realm, but this cannot replace the need for frequent and diligent discussions in engineering research communities about what constitutes appropriate authorship practice. Engineering researchers should seek to identify and address issues such as who is entitled to be an author and whether publishing their research could potentially harm the public.
In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.
The use of robotic workers is likely to continue to increase as time passes. Hence it is crucial to examine the types of effects this occurrence could have on employment patterns. Invariably, as new job opportunities emerge due to robotic innovations, others will be closed off. Further, the characteristics of the workforce in terms of age, education, and income could profoundly change as a result.
Many scholars predict that the technology to modify unborn children genetically is on the horizon. According to supporters of genetic enhancement, allowing parents to select a child’s traits will enable him/her to experience a better life. Following their logic, the technology will not only increase our knowledge base and generate cures for genetic illness, but it may enable us to increase the intelligence, strength, and longevity of future generations as well. Yet it must be examined whether supporters of genetic enhancement, especially libertarians, adequately appreciate the ethical hazards emerging from the technology, including whether its use might violate the harm principle.
The purpose of this paper is to explore whether laypersons can competently evaluate the specialized claims offered by experts. Since it is a lack of knowledge about a subject area that makes someone a layperson with respect to that area, the layperson may be unable to understand and assess what an expert knows.
The complexity of the interactions between humans and robots is increasing, and scholars predict that at some future point, robots will become caregivers and companions for children. This occurrence would raise many ethical issues, including what effects prolonged interactions with a robot may have on a child’s well-being. In this chapter, we discuss how robots could in principle be used to nurture the development of virtues in children by encouraging prosocial behavior and discouraging antisocial behavior.
Academic-industry collaborations and the conflicts of interest (COI) arising out of them are not new. However, as industry funding for research in the life and health sciences has increased and scandals involving financial COI are brought to the public’s attention, demands for disclosure have grown. In a March 2008 American Council on Science and Health report, Ronald Bailey argues that the focus on COI—especially financial COI—is obsessive and likely to be more detrimental to scientific progress and public health than COI themselves. In response, we argue that downplaying the potential negative impact of COI arising out of academic-industry relationships is no less harmful than overreacting to it.
Our courts are regularly confronted with the claims of expert witnesses. Since experts are permitted to present testimony in the courtroom, we have to assume that judges and juries understand what it means to have expertise and can consistently recognize someone who has it. Yet these assumptions need to be examined, for the legal system probably underestimates the difficulty of identifying expertise. In this paper, several philosophical issues pertaining to expertise will be discussed, including what expertise is, why we rely on experts, what measures can be taken to verify expertise, and how we determine whether a particular individual is an expert.