Robots today serve in many roles, from entertainer to educator to executioner. As robotics technology advances, ethical concerns become more pressing: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society, and ethics, change with robotics? This volume is the first book to bring together prominent scholars and experts from both science and the humanities to explore these and other questions in this emerging field. Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The contributors then turn to human-robot emotional relationships, examining the ethical implications of robots as sexual partners, caregivers, and servants. Finally, they explore the possibility that robots, whether biological-computational hybrids or pure machines, should be given rights or moral consideration. Ethics is often slow to catch up with technological developments. This authoritative and accessible volume fills a gap in both scholarly literature and policy discussion, offering an impressive collection of expert analyses of the most crucial topics in this increasingly important field.
As robots slip into more domains of human life, from the operating room to the bedroom, they take on our morally important tasks and decisions, as well as create new risks, from psychological to physical. This book answers the urgent call to study their ethical, legal, and policy impacts.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society. Instead of searching for a fixed set of criteria of a robot's moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolving at least some of them.
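As a purely illustrative reading of this framework, the five elements can be treated as a checklist of separable capacities rather than a single threshold a robot either meets or fails. The sketch below is our own assumption about how such a checklist might be represented, not the article's implementation; all names are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of the article's five elements of moral competence
# as a checklist data structure; field names and the query method are
# illustrative assumptions, not the article's own code.

@dataclass
class MoralCompetence:
    moral_vocabulary: bool = False            # words for norms, blame, permissibility
    norm_system: bool = False                 # a represented, context-sensitive set of norms
    moral_cognition_and_affect: bool = False  # detecting violations and responding to them
    moral_decision_and_action: bool = False   # choosing norm-conforming actions
    moral_communication: bool = False         # explaining, justifying, negotiating blame

    def elements_present(self) -> list[str]:
        """List which of the five elements this design claims to realize."""
        return [name for name, value in vars(self).items() if value]

# Example: a robot designed with only a norm system and moral communication.
print(MoralCompetence(norm_system=True, moral_communication=True).elements_present())
```

The point of the sketch is only that, on this framework, a robot may realize some elements of moral competence and not others, which is what makes the design question gradual rather than all-or-nothing.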
This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
Among ethicists and engineers within robotics, there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence, and creating more opportunities for social interaction.
In this chapter, the focus is on robotics development and its ethical implications, especially on some particular applications or interaction principles. In recent years, such developments have happened very quickly, building on the advances achieved over the last few decades in industrial robotics. The technological developments in manufacturing, with the implementation of Industry 4.0 strategies in most industrialized countries, and the dissemination of production strategies into the services and health sectors have enabled robotics to develop in a variety of new directions. Policy making and ethical awareness have addressed these issues using socio-economic knowledge, in an effort to solve some of the application problems raised in a range of different circumstances and sectoral environments.
Purpose – The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test (MTT) and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent (AMA). Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out to address fundamental challenges within the field of robot ethics regarding the formal representation of moral theories and standards. Here, typically three design approaches to AMAs are available: top-down theory-driven models; bottom-up approaches, which set out to model moral behaviour by means of models for adaptive learning, such as neural networks; and finally, hybrid models, which involve components from both top-down and bottom-up approaches to the modelling of moral agency. With inspiration from Allen and Wallach as well as Prior, we elaborate on theoretically driven approaches to machine ethics by introducing deontic tense logic. Finally, within this framework, we explore the character of human interaction with a robot which has successfully passed an MTT.

Design/methodology/approach – The ideas in this paper reflect preliminary theoretical considerations regarding the possibility of establishing an MTT based on the evaluation of moral behaviour, which focusses on moral reasoning regarding possible actions. The thoughts reflected fall within the field of normative ethics and apply deontic tense logic to discuss the possibilities and limitations of artificial moral agency.

Findings – The authors stipulate a formalisation of a logic of obligation, time and modality, which may serve as a candidate for implementing a system corresponding to an MTT in a restricted sense. Hence, the authors argue that to establish a present moral obligation, we need to be able to make a description of the actual situation and the relevant general moral rules. Such a description can never be complete, as the combination of exhaustive knowledge about both situations and rules would involve a God's-eye view, enabling one to know all there is to know and take everything relevant into consideration before making a perfect moral decision to act upon. Consequently, due to this frame problem, from an engineering point of view, we can only strive to design a robot supposed to operate within a restricted domain and within a limited space-time region. Given such a setup, the robot has to be able to perform moral reasoning based on a formal description of the situation and any possible future developments. Although a system of this kind may be useful, it is clearly also limited to a particular context. It seems that it will always be possible to find special cases in which a given system does not pass the MTT. This calls for a new design of moral systems with trust-related components which will make it possible for the system to learn from experience.

Originality/value – It is beyond doubt that in the near future we are going to be faced with advanced social robots with increasing autonomy, and our growing engagement with these robots calls for the exploration of ethical issues and stresses the importance of informing the process of engineering ethical robots. Our contribution can be seen as an early step in this direction.
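To give a flavour of the kind of formalisation at stake, a present moral obligation in a deontic tense logic might be rendered roughly as follows. The symbols here are our own hedged sketch, not the authors' exact system:

```latex
% Hypothetical deontic tense logic sketch (symbol choices are assumptions,
% not the paper's notation): O = obligation operator, F = "at some future
% time", \sigma = a (necessarily partial) description of the actual
% situation, \rho = the relevant general moral rules, \alpha = the act.
\[
  (\sigma \land \rho) \rightarrow O\,F\,\alpha
\]
% Read: given the situation description and the applicable rules, the agent
% is presently obliged to see to it that \alpha is eventually performed.
% The frame problem enters because \sigma can never be complete, which is
% why the system is restricted to a limited domain and space-time region.
```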
There are two dominant trends in the humanitarian care of 2019: the 'technologizing of care' and the centrality of the humanitarian principles. The concern, however, is that these two trends may conflict with one another. Faced with the growing use of drones in the humanitarian space, there is a need for ethical reflection to understand whether this technology undermines humanitarian care. In the humanitarian space, few agree over the value of drone deployment; one school of thought believes drones can provide a utility serving those in need, while another believes the large-scale deployment of drones will exacerbate the already prevalent issues facing humanitarian aid providers. We suggest in this paper that the strength of the humanitarian principles approach to answering questions of aid provision can be complemented by a technology-facing approach, namely that of robot ethics. We show that humanitarian actors ought to be concerned with the risks of a loss of contextualization and of de-skilling. For the beneficiary, we raise three concerns associated with the threat to the principle of humanity for this group: a loss of dignity by reducing human-to-human interactions; a threat to dignity through a lack of informational transparency; and a threat to dignity by failing to account for the physiological and behavioral impacts of the drone on human actors. Although we acknowledge the obstacles associated with understanding the physiological and behavioral impacts, we insist that the moral acceptability and desirability of drones in humanitarian contexts depend on the findings from such studies, and that tailored ethical guidelines for drone deployment in humanitarian action be created to reflect the results of such studies.
This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is "computable" depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. The first type is so-called rookie mistakes, which could be addressed by providing these people with the necessary ethical knowledge. The second, more difficult methodological issue concerns areas of peer disagreement in ethics, where no easy solutions are currently available. This paper examines several existing approaches to highlight the ethical pitfalls and challenges involved. Familiarity with these and similar problems can help programmers to avoid pitfalls and build better moral machines. The paper concludes that ethical decisions regarding moral robots should be based on avoiding what is immoral in combination with a pluralistic ethical method of solving moral problems, rather than relying on a particular ethical approach, so as to avoid a normative bias.
There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and that to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies between amoral and fully autonomous moral agents. Thus, robots might move gradually along this continuum as they acquire greater capabilities and ethical sophistication. It also argues that many of the issues regarding the distribution of responsibility in complex socio-technical systems might best be addressed by looking to legal theory, rather than moral theory. This is because our overarching interest in robot ethics ought to be the practical one of preventing robots from doing harm, as well as preventing humans from unjustly avoiding responsibility for their actions.
How should human beings and robots interact with one another? Nyholm's answer to this question takes the form of a conditional: if a robot looks or behaves like an animal or a human being, then we should treat it with a degree of moral consideration (p. 201). Although this is not a novel claim in the literature on AI ethics, what is new is the reason Nyholm gives to support this claim: we should treat robots that look like human or non-human animals with a certain degree of moral restraint out of respect for human beings or other beings with moral status. Although Danaher and Coeckelbergh also claim that we should treat robots with a degree of moral consideration, the reasons they give for making this claim focus on duties or rights attaching to the robots themselves (see J. Danaher, "Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism," Science and Engineering Ethics (2019): 1–27, and M. Coeckelbergh, "Moral Appearances: Emotions, Robots and Human Morality," Ethics and Information Technology 12(3) (2010): 235–241). Nyholm disagrees with this type of reasoning and claims that until robots develop a human- or animal-like inner life, we have no direct duties to the robots themselves. Rather, it is out of respect for human beings or other beings with moral status that we should treat some robots with moral restraint. Gerdes, similarly inspired by Kant, focuses on the human agent to argue that we should avoid treating robots in cruel ways because this may corrupt the human agent's character (see A. Gerdes, "The Issue of Moral Consideration in Robot Ethics," SIGCAS Computers and Society 45(3) (2015): 274–279). Nyholm's contribution here is to extend this view such that the corruption or harm being done is against the humanity in all of us.
In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot's behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I raise but do not answer the question of why, if morality requires us to want robots that are not genuine moral agents, we should want something different in the case of human beings.
Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Few ethical views can easily explain even the wrongness of such human engineering, however, and those few explanations that are available break the analogy with engineering robots. The case turns out to be illustrative of profound problems in the field of population ethics.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally, the problem of a possible future AI superintelligence leading to a "singularity" (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.
Ethics for Robots describes and defends a method for designing and evaluating ethics algorithms for autonomous machines, such as self-driving cars and search-and-rescue drones. Derek Leben argues that such algorithms should be evaluated by how effectively they accomplish the problem of cooperation among self-interested organisms, and therefore, rather than simulating the psychological systems that have evolved to solve this problem, engineers should be tackling the problem itself, taking relevant lessons from our moral psychology. Leben draws on the moral theory of John Rawls, arguing that normative moral theories are attempts to develop optimal solutions to the problem of cooperation. He claims that Rawlsian Contractarianism leads to the 'Maximin' principle, the action that maximizes the minimum value, and that the Maximin principle is the most effective solution to the problem of cooperation. He contrasts the Maximin principle with other principles and shows how they can often produce non-cooperative results. Using real-world examples, such as an autonomous vehicle facing a situation where every action results in harm, home care machines, and autonomous weapons systems, Leben contrasts Rawlsian algorithms with alternatives derived from utilitarianism and natural rights libertarianism. Including chapter summaries and a glossary of technical terms, Ethics for Robots is essential reading for philosophers, engineers, computer scientists, and cognitive scientists working on the problem of ethics for autonomous systems.
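The Maximin rule as described here (choose the action that maximizes the minimum value) has a direct algorithmic reading. The sketch below is a minimal illustration of that rule under our own assumptions about the payoff representation; the function, data, and numbers are hypothetical, not Leben's implementation:

```python
# Minimal sketch of the Maximin decision rule: for each candidate action,
# find the worst outcome among the affected parties, then pick the action
# whose worst outcome is best. Payoff table and names are illustrative.

def maximin_choice(payoffs: dict[str, dict[str, float]]) -> str:
    """Return the action whose worst-off party fares best."""
    return max(payoffs, key=lambda action: min(payoffs[action].values()))

# Example: an autonomous vehicle where every available action harms someone.
payoffs = {
    "swerve_left":  {"passenger": -2.0, "pedestrian": -1.0},
    "swerve_right": {"passenger": -1.0, "pedestrian": -9.0},
    "brake_only":   {"passenger": -3.0, "pedestrian": -3.0},
}
print(maximin_choice(payoffs))  # "swerve_left": its minimum (-2.0) beats -9.0 and -3.0
```

Note how a simple utilitarian sum would rank "swerve_left" and "brake_only" differently only by totals, whereas Maximin attends solely to the worst-off party, which is the contrast the book develops.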
Technological developments involving robotics and artificial intelligence devices are being employed ever more in elderly care and the healthcare sector more generally, raising ethical issues and practical questions that warrant closer consideration of what we mean by "care" and, subsequently, how to design such software coherently with the chosen definition. This paper starts by critically examining the existing approaches to the ethical design of care robots provided by Aimee van Wynsberghe, who relies on the work on the ethics of care by Joan Tronto. In doing so, it suggests an alternative to their non-principled approach, an alternative suited to tackling some of the issues raised by Tronto and van Wynsberghe, while allowing for the inclusion of two orientative principles. Our proposal centres on the principles of autonomy and vulnerability, whose joint adoption we deem able to constitute an original revision of a bottom-up approach in care ethics. Conclusively, the ethical framework introduced here integrates more traditional approaches in care ethics in view of enhancing the debate regarding the ethical design of care robots under a new lens.
No single moral theory can instruct us as to whether, and to what extent, we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the criminal law field. This question about the primary rules of the law appears crucial for today's debate on roboethics and yet goes beyond the expertise of robo-ethicists. On the other hand, attention should be drawn to the secondary rules of the law: the unpredictability of robotic behaviour and the lack of data on the probability of events, their consequences and costs, make it hard to determine the levels of risk and, hence, the amount of insurance premiums and other mechanisms on which new forms of accountability for the behaviour of robots may hinge. Following Japanese thinking, the aim is to show why legally de-regulated, or special, zones for robotics, i.e. the secondary rules of the system, pave the way to understanding what kind of primary rules we may want for our robots.
Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended against seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.
If a robot sends a deceptive signal to a human user, is this always and everywhere an unethical act, or might it sometimes be ethically desirable? Building upon previous work in robot ethics, this article tries to clarify and refine our understanding of the ethics of robotic deception. It does so by making three arguments. First, it argues that we need to distinguish between three main forms of robotic deception (external state deception, superficial state deception, and hidden state deception) in order to think clearly about its ethics. Second, it argues that the second type, superficial state deception, is not best thought of as a form of deception, even though it is frequently criticised as such. And third, it argues that the third type of deception is best understood as a form of betrayal, because doing so captures the unique ethical harm to which it gives rise and justifies special ethical protections against its use.
This article summarizes the recommendations concerning robotics as issued by the Commission for the Ethics of Research in Information Sciences and Technologies (CERNA), the French advisory commission for the ethics of information and communication technology (ICT) research. Robotics has numerous applications in which its role can be overwhelming and may lead to unexpected consequences. In this rapidly evolving technological environment, CERNA does not set novel ethical standards but seeks to make ethical deliberation inseparable from scientific activity. Additionally, it provides tools and guidance for researchers and research institutions.
The debate about the use of robots in the care of older adults has often been dominated either by overly optimistic visions (coming particularly from Japan), in which robots are seamlessly incorporated into society, thereby enhancing quality of life for everyone, or by extremely pessimistic scenarios that paint such a future as horrifying. We reject this dichotomy and argue for a more differentiated ethical evaluation of the possibilities and risks involved in the use of social robots. In a critical discussion of the capabilities approach to the ethical evaluation of quality of life, we develop an ethical framework that is more appropriate to the situation of the oldest old. We urge a context-dependent approach to the ethical evaluation of new technologies in the care and therapy of older adults and, using the example of the robotic seal Paro, we show how this can be accomplished in a sensible and practical way.
P. M. Asaro: What Should We Want from a Robot Ethic?; G. Tamburrini: Robot Ethics: A View from the Philosophy of Science; B. Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-Machine Interaction; E. Datteri and G. Tamburrini: Ethical Reflections on Health Care Robotics; P. Lin, G. Bekey, and K. Abney: Robots in War: Issues of Risk and Ethics; J. Altmann: Preventive Arms Control for Uninhabited Military Vehicles; J. Weber: Robotic Warfare, Human Rights & the Rhetorics of Ethical Machines; T. Nishida: Towards Robots with Good Will; R. Capurro: Ethics and Robotics.
Robots are becoming an increasingly pervasive feature of our personal lives. As a result, there is growing importance placed on examining what constitutes appropriate behavior when they interact with human beings. In this paper, we discuss whether companion robots should be permitted to "nudge" their human users in the direction of being "more ethical". More specifically, we use Rawlsian principles of justice to illustrate how robots might nurture "socially just" tendencies in their human counterparts. Designing technological artifacts in such a way as to influence human behavior is already well established, but the mere fact that the practice is commonplace does not resolve the ethical issues associated with its implementation.
Sexbots are coming. Given the pace of technological advances, it is inevitable that realistic robots specifically designed for people's sexual gratification will be developed in the not-too-distant future. Despite popular culture's fascination with the topic, and the emergence of the much-publicized Campaign Against Sex Robots, there has been little academic research on the social, philosophical, moral, and legal implications of robot sex. This book fills the gap, offering perspectives from philosophy, psychology, religious studies, economics, and law on the possible future of robot-human sexual relationships.
Soft robots promise an exciting design trajectory in the field of robotics and human–robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.
How can we best identify, understand, and deal with the ethical and societal issues raised by healthcare robotics? This paper argues that, next to ethical analysis, classic technology assessment, and philosophical speculation, we need forms of reflection, dialogue, and experiment that come, quite literally, much closer to innovation practices and contexts of use. The authors discuss a number of ways to achieve this. Informed by their experience with "embedded" ethics in technical projects and with various tools and methods of responsible research and innovation, the paper identifies "internal" and "external" forms of dialogical research and innovation, reflects on the possibilities and limitations of these forms of ethical–technological innovation, and explores a number of ways in which they can be supported by policy at the national and supranational level.
Robotic or automatic milking systems (AMS) are novel technologies that take over the labor of dairy farming and reduce the need for human–animal interactions. Because robotic milking involves the replacement of 'conventional' twice-a-day milking managed by people with a system that supposedly allows cows the freedom to be milked automatically whenever they choose, some claim robotic milking has health and welfare benefits for cows, increases productivity, and has lifestyle advantages for dairy farmers. This paper examines how established ethical relations on dairy farms are unsettled by the intervention of a radically different technology such as AMS. The renegotiation of ethical relationships is thus an important dimension of how the actors involved are re-assembled around a new technology. The paper draws on in-depth research on UK dairy farms, comparing those using conventional milking technologies with those using AMS. We explore the situated ethical relations that are negotiated in practice, focusing on the contingent and complex nature of human–animal–technology interactions. We show that ethical relations are situated and emergent, and that as the identities, roles, and subjectivities of humans and animals are unsettled through the intervention of a new technology, the ethical relations also shift.
The impacts that AI and robotics systems can and will have on our everyday lives are already making themselves manifest. However, there is a lack of research on the ethical impacts, and the means for their amelioration, of AI and robotics within tourism and hospitality. Given the importance of designing technologies that cross national boundaries, and given that the tourism and hospitality industry is fundamentally predicated on multicultural interactions, this is an area of research and application that requires particular attention. Specifically, tourism and hospitality have a range of context-unique stakeholders that need to be accounted for if the salient design of AI systems is to be achieved. This paper adopts a stakeholder approach to develop a conceptual framework for centralizing human values in designing and deploying AI and robotics systems in tourism and hospitality. The conceptual framework includes several layers: the 'human-human-AI' interaction level, direct and indirect stakeholders, and the macroenvironment. The ethical issues at each layer are outlined, as well as some possible solutions to them. Additionally, the paper develops a research agenda on the topic.
The rapid developments of robotics technologies in the last twenty years of the twentieth century have greatly encouraged research on the use of robots for surgery, diagnosis, rehabilitation, prosthetics, and assistance to disabled and elderly people. This chapter provides an overview of robotic technologies and systems for health care, focussing on various ethical problems that these technologies give rise to. These problems notably concern the protection of human physical and mental integrity, autonomy, responsibility, ...
Ethics and robotics in the fourth industrial revolution. The current industrial revolution, characterised by a pervasive spread of technologies and robotic systems, also brings with it an economic, social, cultural and anthropological revolution. Work spaces will be reshaped over time, giving rise to new challenges for human‒machine interaction. Robotics is thus inserted into a working context in which robotic systems and cooperation with humans call into question the principles of human responsibility, distributive justice and dignity of work. In particular, the responsibilities for using a robotic system in a surgical context will be discussed, along with possible problems of medium- or long-term technological unemployment to be tackled on the basis of shared concepts of distributive justice. Finally, the multiple dimensions of human dignity in the working context are dealt with in terms of dignity of work, dignity at work and dignity in human‒machine interaction.
This article offers an overview of the main first-order ethical questions raised by robots and Artificial Intelligence (RAIs) under five broad rubrics: functionality, inherent significance, rights and responsibilities, side-effects, and threats. The first letter of each rubric taken together conveniently generates the acronym FIRST. Special attention is given to the rubrics of functionality and inherent significance given the centrality of the former and the tendency to neglect the latter in virtue of its somewhat nebulous and contested character. In addition to exploring some illustrative issues arising under each rubric, the article also emphasizes a number of more general themes. These include: the multiplicity of interacting levels on which ethical questions about RAIs arise, the need to recognise that RAIs potentially implicate the full gamut of human values (rather than exclusively or primarily some readily identifiable sub-set of ethical or legal principles), and the need for practically salient ethical reflection on RAIs to be informed by a realistic appreciation of their existing and foreseeable capacities.
Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the "ought" of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling from the bottom up a system capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation of the many mechanisms that contribute to moral acumen, and of the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.
Autonomous robots that are capable of learning are being developed to make it easier for human actors to achieve their goals. As such, robots are primarily a means to an end and replace human actions. An interdisciplinary technology assessment was carried out to determine the extent to which a replacement of this kind makes ethical sense in technological, economic and legal terms. Proceeding from an ethical perspective derived from Kant's formula of humanity, in this article we analyse the use of robots in the care of the elderly or infirm and then examine robot learning in the context of this kind of cooperation.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of "presence" and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people's reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.
This paper addresses the issue of whether robots could substitute for human care, given the challenges in aged care induced by demographic change. The use of robots to provide emotional care has raised ethical concerns, e.g., that people may be deceived and deprived of dignity. In this paper it is argued that these concerns might be mitigated and that it may be sufficient for robots to take part in caring when they behave *as if* they care.