How can we best identify, understand, and deal with ethical and societal issues raised by healthcare robotics? This paper argues that, alongside ethical analysis, classic technology assessment, and philosophical speculation, we need forms of reflection, dialogue, and experiment that come, quite literally, much closer to innovation practices and contexts of use. The authors discuss a number of ways to achieve this. Informed by their experience with “embedded” ethics in technical projects and with various tools and methods of responsible research and innovation, the paper identifies “internal” and “external” forms of dialogical research and innovation, reflects on the possibilities and limitations of these forms of ethical–technological innovation, and explores a number of ways in which they can be supported by policy at the national and supranational levels.
This paper sums up the main findings of the European project RoboLaw. The authors claim that the European Union should play a pro-active policy role in the regulation of technologies so as to inform the development of technologies with its values and principles. The paper explains the rationale for analysing a limited and heterogeneous number of robotics applications. For these applications, the following issues are addressed: whether robotics deserves a special case of regulation; the direct and indirect roles ethics can play in regulating technology; the transformations of both vulnerabilities and capabilities; and the effects of liability law in favouring socially relevant applications. In conclusion, a reflection on the possibility of generalizing some of the RoboLaw findings to other technologies is proposed, with respect to liability and ethics.
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control them. After an introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
Cognitive Robotics can be defined as the study of cognitive phenomena by their modeling in physical artifacts such as robots. This is a very lively and fascinating field which has already made fundamental contributions to our understanding of natural cognition. Nonetheless, robotics has to date addressed mainly very basic, low-level cognitive phenomena like sensory-motor coordination, perception, and navigation, and it is not clear how the current approach might scale up to explain high-level human cognition. In this paper we argue that a promising way to do so is to merge current ideas and methods of 'embodied cognition' with the Russian tradition of theoretical psychology, which views language not only as a communication system but also as a cognitive tool — that is, by developing a Vygotskyan Cognitive Robotics. We substantiate this idea by discussing several domains in which language can improve basic cognitive abilities and permit the development of high-level cognition: learning, categorization, abstraction, memory, voluntary control, and mental life.
While the 2010 EPSRC principles for robotics state a set of five rules about what ‘should’ be done, I argue they should differentiate between legal obligations and ethical demands. Only if we make this distinction can we state clearly what the legal obligations already are and what additional ethical demands we want to make. I provide suggestions on how to revise the rules in this light and how to make them more structured.
While it is often said that robotics should aspire to reproducible and measurable results that allow benchmarking, I argue that a focus on benchmarking can be a hindrance to progress in robotics. The reason is what I call the ‘measure-target confusion’: the confusion between a measure of progress and the target of progress. Progress on a benchmark (the measure) is not identical to scientific or technological progress (the target). In the past, several academic disciplines have been led into pursuing only reproducible and measurable ‘scientific’ results; robotics should be wary of following that path, because results that can be benchmarked must be specific and context-dependent, while robotics targets whole complex systems for a broad variety of contexts. While it is extremely valuable to improve benchmarks to reduce the distance between measure and target, the general problem of measuring progress towards more intelligent machines (the target) will not be solved by benchmarks alone; we need a balanced approach with sophisticated benchmarks, plus real-life testing, plus qualitative judgment.
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics, and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans, and incorporating robotic self-protection. However, study in this field has focused predominantly on human–robot interactions, without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot–robot interactions. A new robotic law is proposed and termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise the inherent dignity and inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, and would also importantly reflect on our moral code of ethics and the humanity of our civilisation.
This chapter summarises the authors' work in embodied robotics, emphasising the need for scientific tools to measure chaos and sensitivity to initial conditions, the role of novelty and development, and the relevance of human behaviour in natural environments.
Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, neither accounts for why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.
Cybernetics promoted machine-supported investigations of adaptive sensorimotor behaviours observed in biological systems. This methodological approach receives renewed attention in contemporary robotics, cognitive ethology, and the cognitive neurosciences. Its distinctive features concern machine experiments and their role in testing behavioural models and the explanations flowing from them. Cybernetic explanations of behavioural events, regularities, and capacities rely on multiply realizable mechanism schemata, and strike a sensible balance between causal and unifying constraints. The multiple realizability of cybernetic mechanism schemata paves the way to principled comparisons between biological systems and machines. Various methodological issues involved in the transition from mechanism schemata to their machine instantiations are addressed here, by reference to a simple sensorimotor coordination task. These concern the proper treatment of ceteris paribus clauses in experimental settings, the significance of running experiments with correct but incomplete machine instantiations of mechanism schemata, and the advantage of operating with real machines, as opposed to simulated ones, immersed in real environments.
This article addresses prospective and retrospective responsibility issues connected with medical robotics. It will be suggested that extant conceptual and legal frameworks are sufficient to address and properly settle most retrospective responsibility problems arising in connection with injuries caused by robot behaviours (exemplified here by reference to harms that occurred in surgical interventions supported by the Da Vinci robot, reported in the scientific literature and in the press). In addition, it will be pointed out that many prospective responsibility issues connected with medical robotics are nothing but well-known robotics engineering problems in disguise, which are routinely addressed by roboticists as part of their research and development activities: for this reason they do not raise particularly novel ethical issues. In contrast, it will be pointed out that novel and challenging prospective responsibility issues may emerge in connection with harmful events caused by normal robot behaviours. This point will be illustrated here in connection with the rehabilitation robot Lokomat.
Ethical reflections on military robotics can be enriched by a better understanding of the nature and role of these technologies and by putting robotics into context in various ways. Discussing a range of ethical questions, this paper challenges the prevalent assumptions that military robotics is about military technology as a mere means to an end, about single killer machines, and about “military” developments. It recommends that ethics of robotics attend to how military technology changes our aims, concern itself not only with individual robots but also and especially with networks and swarms, and adapt its conceptions of responsibility to the rise of such cloudy and unpredictable systems, which rely on decentralized control and buzz across many spheres of human activity.
The Science and Religion Forum (SRF) seeks to be the premier organization promoting the discussion between science and religion in the United Kingdom for academics, professionals, and interested lay people. Each year, the SRF holds a conference tackling a topical issue; the 2019 conference focused on artificial intelligence and robotics. This article introduces the thematic section, which is made up of three papers from that conference, and provides a summary of the event.
This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget, as well as Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
In considering how to best deploy robotic systems in public and private sectors, we must consider what individuals will expect from the robots with which they interact. Public awareness of robotics—as both military machines and domestic helpers—emerges out of a braided stream composed of science fiction and popular science. These two genres influence news media, government and corporate spending, and public expectations. In the Euro-American West, both science fiction and popular science are ambivalent about the military applications for robotics, and thus we can expect their readers to fear the dangers posed by advanced robotics while still eagerly anticipating the benefits to be accrued through them. The chief pop science authors in robotics and artificial intelligence have a decidedly apocalyptic bent and have thus been described as leaders in a social movement called "Apocalyptic AI." In one form or another, such authors look forward to a transcendent future in which machine life succeeds human life, thanks to the march of evolutionary progress. The apocalyptic promises of popular robotics presume that presently exponential growth in computing will continue indefinitely, producing a "Singularity." During the Singularity, technological progress will be so rapid that undreamt of changes will take place on earth, the most important of which will be the evolutionary succession of human beings by massively intelligent robots and the "uploading" of human consciousness into computer bodies. This supposedly inevitable transition into post-biological life looms across the entire scope of pop robotics and artificial intelligence, and it is from beneath that shadow that all popular books engage the military and the ethics of warfare.
Creating a just future will require that we transcend the apocalyptic discourse of pop science and establish an ethical approach to researching and deploying robots, one that emphasizes human rather than robot welfare; doing so will require the collaboration of social scientists, humanists, and scientists.
Social robotics attempts to build robots able to interact with humans and other robots. Philosophical and scientific research in social cognition can provide social robotics with models of social cognition to implement in artificial agents. The aim of this paper is twofold: first, I present and defend a framework in social cognition known as mindshaping, according to which human beings are biologically predisposed to learn and teach cultural and rational norms and complex cultural patterns of behavior that enhance social cognition. Second, I highlight how this framework can open new research perspectives in the area of social robotics.
This paper pursues the intertwined tracks of robotics and art since the mid twentieth century, taking a loose chronological approach that considers both the devices themselves and their discursive contexts. Relevant research has occurred in a variety of cultural locations, often outside of or prior to formalized robotics contexts. Some research was even conducted under the aegis of art or cultural practices, where robotics has been pursued for purposes other than instrumental ones. In hindsight, some of that work seems remarkably prescient of contemporary trends. The context of cultural robotics is a highly charged interdisciplinary test environment in which the theory and pragmatics of technical research confronts the phenomenological realities of physical and social being in the world, and the performative and processual practices of the arts. In this context, issues of embodiment, material instantiation, structural coupling, and machine sensing have provoked the reconsideration of notions of (machine) intelligence and cognitivist paradigms. The paradoxical condition of robotics vis-à-vis artificial intelligence is reflected upon. This paper discusses the possibility of a new embodied ontology of robotics that draws upon both cybernetics and post-cognitive approaches.
Pioneering approaches to Artificial Intelligence have traditionally neglected, in chronological sequence, the agent's body, the world where the agent is situated, and the other agents. With the advent of Collective Robotics approaches, important progress was made toward embodying and situating the agents, together with the introduction of collective intelligence. However, the currently used models of social environments are still rather poor, jeopardizing attempts to develop truly intelligent robot teams. In this paper, we propose a roadmap for a new approach to the design of multi-robot systems, mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory. Our approach aims to refine the design of robot collectives by adding, to the currently popular emergentist view, the concepts of physically and socially bounded autonomy of cognitive agents, uncoupled interaction among them, and deliberately set up coordination devices.
This article summarizes the recommendations concerning robotics as issued by the Commission for the Ethics of Research in Information Sciences and Technologies (CERNA), the French advisory commission for the ethics of information and communication technology (ICT) research. Robotics has numerous applications in which its role can be overwhelming and may lead to unexpected consequences. In this rapidly evolving technological environment, CERNA does not set novel ethical standards but seeks to make ethical deliberation inseparable from scientific activity. Additionally, it provides tools and guidance for researchers and research institutions.
This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.
The development and deployment of the notion of pre-objective or nonconceptual content for the purposes of intentional explanation requires assistance from a practical and theoretical understanding of computational/robotic systems acting in real time and real space. In particular, the usual "that"-clause specification of content will not work for non-conceptual contents; some other means of specification is required, means that make use of the fact that contents are aspects of embodied and embedded systems. That is, the specification of non-conceptual content should use concepts and insights gained from android design and android epistemology.
This paper discusses possible correspondences between the dynamical systems characteristics observed in our previously proposed cognitive model and phenomenological accounts of immanent time considered by Edmund Husserl. Our simulation experiments in the anticipatory learning of a robot showed that the encountered sensory-motor flow can be learned as segmented into chunks of reusable primitives, with accompanying dynamic shifting between coherences and incoherences in local modules. It is considered that the sense of objective time might appear when the continuous sensory-motor flow input to the robot is reconstructed into compositional memory structures through the articulation processes described.
The rapid developments of robotics technologies in the last twenty years of the twentieth century have greatly encouraged research on the use of robots for surgery, diagnosis, rehabilitation, prosthetics, and assistance to disabled and elderly people. This chapter provides an overview of robotic technologies and systems for health care, focussing on various ethical problems that these technologies give rise to. These problems notably concern the protection of human physical and mental integrity, autonomy, responsibility, ...
There are unusual challenges in ethics for RAS. Perhaps the issue can best be summarised as needing to consider “technically informed ethics”. The technology of RAS raises issues that have an ethical dimension, and perhaps uniquely so due to the possibility of moving human decision-making which is implicitly ethically informed to computer systems. Further, if seeking solutions to these problems – ethically aligned design, to use the IEEE’s terminology – then the solutions must be technically meaningful, capable of realisation, capable of assurance, and suitable as a basis for regulation. Thus, ethics for RAS is a rich, complex multi-disciplinary concern, and perhaps more complex than many other ethical issues facing society today. It is also fast-moving. This paper has endeavoured to give an accessible introduction to some of the key issues, noting that many of them are quite subtle, and it is not possible to do them full justice in such a short document. However, we have sought to counterbalance this by giving an extensive list of initiatives, standards, etc. that focus on ethics of RAS and AI, see Annex A.
There is a definite challenge in the air regarding the pivotal notion of internal representation. This challenge is explicit in, e.g., van Gelder, 1995; Beer, 1995; Thelen & Smith, 1994; Wheeler, 1994; and elsewhere. We think it is a challenge that can be met, and that (importantly) it can be met by arguing from within a general framework that accepts many of the basic premises of the work (in new robotics and in dynamical systems theory) that motivates such scepticism in the first place. Our strategy will be as follows. We begin (Section 1) by offering an account (an example and something close to a definition) of what we shall term Minimal Robust Representationalism (MRR). Sections 2 & 3 address some likely worries and questions about this notion. We end (Section 4) by making explicit the conditions under which, on our account, a science (e.g., robotics) may claim to be addressing cognitive phenomena.
In order to build autonomous robots that can carry out useful work in unstructured environments, new approaches have been developed to building intelligent systems. The relationship to traditional academic robotics and traditional artificial intelligence is examined. In the new approaches, a tight coupling of sensing to action produces architectures for intelligence that are networks of simple computational elements which are quite broad, but not very deep. Recent work within this approach has demonstrated the use of representations, expectations, plans, goals, and learning, but without resorting to the traditional use of central, abstractly manipulable or symbolic representations. Perception within these systems is often an active process, and the dynamics of the interactions with the world are extremely important. The question of how to evaluate and compare the new to traditional work still provokes vigorous discussion.
Robots today serve in many roles, from entertainer to educator to executioner. As robotics technology advances, ethical concerns become more pressing: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society--and ethics--change with robotics? This volume is the first book to bring together prominent scholars and experts from both science and the humanities to explore these and other questions in this emerging field. Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The contributors then turn to human-robot emotional relationships, examining the ethical implications of robots as sexual partners, caregivers, and servants. Finally, they explore the possibility that robots, whether biological-computational hybrids or pure machines, should be given rights or moral consideration. Ethics is often slow to catch up with technological developments. This authoritative and accessible volume fills a gap in both scholarly literature and policy discussion, offering an impressive collection of expert analyses of the most crucial topics in this increasingly important field.
The purpose of this paper is to offer some critical remarks on the so-called pragmatist approach to the regulation of robotics. To this end, the article mainly reviews the work of Jack Balkin and Joanna Bryson, who have taken up such an approach with interestingly similar outcomes. Moreover, special attention will be paid to the discussion concerning the legal fiction of ‘electronic personality’. This will help shed light on the opposition between essentialist and pragmatist methodologies. After a brief introduction (1.), in 2. I introduce the main points of the methodological debate which opposes pragmatism and essentialism in the regulation of robotics, and I examine how legal fictions are framed from a pragmatist, functional perspective. Since this approach entails a neat separation of ontological analysis and legal reasoning, in 3. I discuss whether considerations on robots’ essence are actually put into brackets when the pragmatist approach is endorsed. Finally, in 4. I address the problem of the social valence of legal fictions in order to suggest a possible limit of the pragmatist approach. My conclusion (5.) is that in the specific case of regulating robotics it may be very difficult to separate ontological considerations from legal reasoning, and vice versa, both on an epistemological and social level. This calls for great caution in the recourse to anthropomorphic legal fictions.
The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property of thrownness (the property of being always already embedded in a context). I argue that this positive proposal is incomplete until we understand exactly how the properties in question may be instantiated in machines like us. So, working within a broadly Heideggerian conceptual framework, I pursue the character of a representation-shunning thrown machine. As part of this analysis, I suggest that the frame problem is, in truth, a two-headed beast. The intra-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action within a context. The inter-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate. Drawing on the field of situated robotics, I suggest that the intra-context frame problem may be neutralized by systems of special-purpose adaptive couplings, while the inter-context frame problem may be neutralized by systems that exhibit the phenomenon of continuous reciprocal causation.
I also defend the view that while continuous reciprocal causation is in conflict with representational explanation, special-purpose adaptive coupling, as well as its associated agential phenomenology, may feature representations. My proposal has been criticized recently by Dreyfus, who accuses me of propagating a cognitivist misreading of Heidegger, one that, because it maintains a role for representation, leads me seriously astray in my handling of the frame problem. I close by responding to Dreyfus' concerns.
Apocalyptic AI, the hope that we might one day upload our minds into machines and live forever in cyberspace, has become commonplace. This view now affects robotics and AI funding, play in online games, and philosophical and theological conversations about morality and human dignity.
Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
In this essay we critically evaluate the progress that has been made in solving the problem of meaning in artificial intelligence and robotics. We remain skeptical about solutions based on deep neural networks and cognitive robotics, which in our opinion do not fundamentally address the problem. We agree with the enactive approach to cognitive science that things appear as intrinsically meaningful for living beings because of their precarious existence as adaptive autopoietic individuals. But this approach inherits the problem of failing to account for how meaning as such could make a difference for an agent’s behavior. In a nutshell, if life and mind are identified with physically deterministic phenomena, then there is no conceptual room for meaning to play a role in its own right. We argue that this impotence of meaning can be addressed by revising the concept of nature such that the macroscopic scale of the living can be characterized by physical indeterminacy. We consider the implications of this revision of the mind-body relationship for synthetic approaches.
In this paper, we first enumerate the problems that humans might face with a new type of technology such as robots with artificial intelligence. Robotics entrepreneurs are calling for discussions about goals and values because AI robots, which are potentially more intelligent than humans, can no longer be fully understood and controlled by humans. AI robots could even develop into ethically “bad” agents and become very harmful. We consider these discussions as part of a process of developing responsible innovations in AI robotics in order to prevent catastrophic risks on a global scale. To deal with these issues, we propose the capability-effectual approach, drawing on two bodies of research: the capability approach from ethics, and the effectual process model from entrepreneurship research. The capability approach provides central human capabilities, guiding the effectual process through individual goals and aspirations in the collaborative design process of stakeholders. More precisely, by assuming and understanding correspondences between goals, purposes, desires, and aspirations in the languages of different disciplines, the capability-effectual approach clarifies both how a capability list working globally could affect the aspirations and end-goals of individuals, and how local aspirations and end-goals could either energise or limit effectual processes. Theoretically, the capability-effectual approach links the collaboration of stakeholders and the design process in responsible innovation research. Practically, this approach could potentially contribute to the robust development of AI robots by providing robotics entrepreneurs with a tool for establishing a permissible action range within which to develop AI robotics.
P. M. Asaro: What should We Want from a Robot Ethic? G. Tamburrini: Robot Ethics: A View from the Philosophy of Science B. Becker: Social Robots - Emotional Agents: Some Remarks on Naturalizing Man-machine Interaction E. Datteri, G. Tamburrini: Ethical Reflections on Health Care Robotics P. Lin, G. Bekey, K. Abney: Robots in War: Issues of Risk and Ethics J. Altmann: Preventive Arms Control for Uninhabited Military Vehicles J. Weber: Robotic warfare, Human Rights & The Rhetorics of Ethical Machines T. Nishida: Towards Robots with Good Will R. Capurro: Ethics and Robotics.
This paper adopts a legal perspective to counter some exaggerations of today’s debate on the social understanding of robotics. According to a long and well-established tradition, there is in fact a relatively strong consensus among lawyers about some key notions, such as agency and liability, in the current use of robots. However, dealing with a field in rapid evolution, we need to rethink some basic tenets of the contemporary legal framework. In particular, the time has come for lawyers to acknowledge that some acts of robots should be considered as a new source of legal responsibility for others’ behaviour.
There are only a few ethical regulations that deal explicitly with robots, in contrast to a vast number of regulations that may be applied. We will focus on ethical issues with regard to “responsibility and autonomous robots”, “machines as a replacement for humans”, and “tele-presence”. Furthermore, we will examine examples from special fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues nor of regulations in the field of robotics, but we will demonstrate that there are legal challenges with regard to these issues.
Robotics can be seen as a cognitive technology, assisting us in understanding various aspects of autonomy. In this paper I will investigate a difference between the interpretations of autonomy that exist within robotics and philosophy. Based on a brief review of some historical developments I suggest that within robotics a technical interpretation of autonomy arose, related to the independent performance of tasks. This interpretation is far removed from philosophical analyses of autonomy focusing on the capacity to choose goals for oneself. This difference in interpretation precludes a straightforward debate between philosophers and roboticists about the autonomy of artificial and organic creatures. In order to narrow the gap I will identify a third problem of autonomy, related to the issue of what makes one's goals genuinely one's own. I will suggest that it is the body, and the ongoing attempt to maintain its stability, that makes goals belong to the system. This issue could function as a suitable focal point for a debate in which work in robotics can be related to issues in philosophy. Such a debate could contribute to a growing awareness of the way in which our bodies matter to our autonomy.
Over the past two decades, ethical challenges related to robotics technologies have gained increasing interest among different research and non-academic communities, in particular through the field of roboethics. While the reasons to address roboethics are clear, why some roboticists do not engage with ethics needs to be better understood. This paper focuses on the limited or lacking engagement with ethics that takes place within some parts of the robotics community and its implications for the conceptualisation of the human being. The underlying assumption is that the term ‘ethical’ essentially means ‘human’. Thus, this paper discusses a working hypothesis according to which, by avoiding engagement with roboethics, roboticists contribute to the tacit dehumanisation process emerging in and outside of robotics. An alternative approach includes ‘lived ethics’, which involves not only incorporating formal ethical approaches into the roboticists’ work but also ‘being’ ethical and actually engaging with ethical reflection and practice.
After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov’s response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic rebellion is made along with a call for personal responsibility and suggestions for implementing safety constraints in intelligent robots.
Service robotics, mainly defined as “non-industrial robotics”, is identified as the next economic success story to be expected after robots have been ubiquitously implemented into industrial production lines. Under the heading of service robotics we find a wide range of applications, reaching from robotics in agriculture and public transportation to service robots applied in private homes. For our interdisciplinary perspective of technology assessment, we propose to take the human user/worker as the common focus. In some cases, the user/worker is the effective subject acting by means of and in cooperation with a service robot; in other cases, the user/worker might become a pure object of the respective robotic system, for example, as a patient in a hospital. In this paper, we present a comprehensive interdisciplinary framework which allows us to scrutinize some of the most relevant applications of service robotics; we propose to combine technical, economic, legal, philosophical/ethical, and psychological perspectives in order to design a thorough and comprehensive expert-based technology assessment. This allows us to understand the potentials as well as the limits and even the threats connected with the ongoing and the planned implementation of service robots into the human lifeworld, particularly of those technical systems displaying increasing degrees of autonomy.
Ethics and robotics in the fourth industrial revolution. The current industrial revolution, characterised by a pervasive spread of technologies and robotic systems, also brings with it an economic, social, cultural and anthropological revolution. Work spaces will be reshaped over time, giving rise to new challenges for human‒machine interaction. Robotics is hereby inserted in a working context in which robotic systems and cooperation with humans call into question the principles of human responsibility, distributive justice and dignity of work. In particular, the responsibilities for using a robotic system in a surgical context will be discussed, along with possible problems of medium- or long-term technological unemployment to be tackled on the basis of shared concepts of distributive justice. Finally, the multiple dimensions of human dignity in the working context are dealt with in terms of dignity of work, dignity at work and dignity in human‒machine interaction.
Healthcare robots enable practices that seemed far-fetched in the past. Robots might be the solution to bridge the loneliness that the elderly often experience; they may help wheelchair users walk again, or may help navigate the blind. European Institutions, however, acknowledge that human contact is an essential aspect of personal care and that the insertion of robots could dehumanize caring practices. Such instances of human–robot interaction raise the question to what extent the use and development of robots for healthcare applications can challenge the dignity of users. In this article, therefore, we explore how different robot applications in the healthcare domain support individuals in achieving ‘dignity’ or put it under pressure. We argue that since healthcare robot applications are novel, their associated risks and impacts may be unprecedented and unknown, thus triggering the need for a conceptual instrument that is binding and remains flexible at the same time. In this respect, as safety rules and data protection are often criticized for lacking flexibility, and technology ethics for lacking enforceability, we suggest human dignity, the inviolable value upon which all fundamental rights are grounded, as the overarching governance instrument for robotics.
HeartMath is a contemporary, scientific, coherent model of heart intelligence. The aim of this paper is to review this coherence model with special reference to its implications for artificial intelligence and robotics. Various conceptual issues, implications and challenges for AI and robotics are discussed. In view of seemingly infinite human capacity for creative, destructive and incoherent behaviour, it is highly recommended that designers and operators be persons of heart intelligence, optimal moral integrity, vision and mission. This implies that AI and robotic design and production should be continuously optimized through vigilant and appropriate human and material quality control procedures. Evidence is provided for some value and effectiveness of the HeartMath coherence model in this context.
Dynamic, embodied and situated cognition set up organism-environment interaction, agency for short, as the core of cognitive systems. Robotics became an important way to study this behavioral kernel of cognition. In this paper, we discuss the implications of what we call the biological grounding problem for robotic studies: Natural and artificial agents are hugely different and it will be necessary to articulate what must be replicated by artificial agents such as robots. Interestingly, once this issue is explicitly raised, it seems that a full replication of biological features is required for cognition itself to be plausibly cast as a biological phenomenon. Several issues come to the fore once one takes this implication seriously. Why does a full biological interpretation of cognition remain so controversial? How does this impact the relevance of robotics for the study of cognition? We try to articulate and ease the various tensions that arise from this biological scenario.
Social robotics is a rapidly developing industry-oriented area of research, intent on making robots in social roles commonplace in the near future. This has led to rising interest in the dynamics as well as ethics of human-robot relationships, described here as a nascent relational turn. A contrast is drawn with the 1990s’ paradigm shift associated with relational-self themes in social psychology. Constructions of the human-robot relationship reproduce the “I-You-Me” dominant model of theorising about the self, with biases that (as in social constructionism) consistently accentuate externalist or “interactionist” standpoints as opposed to internalist or “individualistic” ones. Perspectives classifiable as “ecological relationalism” may compensate for limitations of the interactionist-individualistic dimension. Implications for theorising subjectivity are considered.