Pioneering approaches to Artificial Intelligence have traditionally neglected, in chronological sequence, the agent body, the world where the agent is situated, and the other agents. With the advent of Collective Robotics approaches, important progress was made toward embodying and situating agents, together with the introduction of collective intelligence. However, the currently used models of social environments are still rather poor, jeopardizing attempts to develop truly intelligent robot teams. In this paper, we propose a roadmap for a new approach to the design of multi-robot systems, mainly inspired by concepts from Institutional Economics, an alternative to mainstream neoclassical economic theory. Our approach aims to make the design of robot collectives more sophisticated by adding, to the currently popular emergentist view, the concepts of physically and socially bounded autonomy of cognitive agents, uncoupled interaction among them, and deliberately set-up coordination devices.
The development and deployment of the notion of pre-objective or non-conceptual content for the purposes of intentional explanation requires assistance from a practical and theoretical understanding of computational/robotic systems acting in real time and real space. In particular, the usual "that"-clause specification of content will not work for non-conceptual contents; some other means of specification is required, means that make use of the fact that contents are aspects of embodied and embedded systems. That is, the specification of non-conceptual content should use concepts and insights gained from android design and android epistemology.
The frame problem is the difficulty of explaining how non-magical systems think and act in ways that are adaptively sensitive to context-dependent relevance. Influenced centrally by Heideggerian phenomenology, Hubert Dreyfus has argued that the frame problem is, in part, a consequence of the assumption (made by mainstream cognitive science and artificial intelligence) that intelligent behaviour is representation-guided behaviour. Dreyfus' Heideggerian analysis suggests that the frame problem dissolves if we reject representationalism about intelligence and recognize that human agents realize the property of thrownness (the property of being always already embedded in a context). I argue that this positive proposal is incomplete until we understand exactly how the properties in question may be instantiated in machines like us. So, working within a broadly Heideggerian conceptual framework, I pursue the character of a representation-shunning thrown machine. As part of this analysis, I suggest that the frame problem is, in truth, a two-headed beast. The intra-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action within a context. The inter-context frame problem challenges us to say how a purely mechanistic system may achieve appropriate, flexible and fluid action in worlds in which adaptation to new contexts is open-ended and in which the number of potential contexts is indeterminate. Drawing on the field of situated robotics, I suggest that the intra-context frame problem may be neutralized by systems of special-purpose adaptive couplings, while the inter-context frame problem may be neutralized by systems that exhibit the phenomenon of continuous reciprocal causation.
I also defend the view that while continuous reciprocal causation is in conflict with representational explanation, special-purpose adaptive coupling, as well as its associated agential phenomenology, may feature representations. My proposal has been criticized recently by Dreyfus, who accuses me of propagating a cognitivist misreading of Heidegger, one that, because it maintains a role for representation, leads me seriously astray in my handling of the frame problem. I close by responding to Dreyfus' concerns.
Using Asimov's Bicentennial Man as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov's Three Laws of Robotics are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
There is a definite challenge in the air regarding the pivotal notion of internal representation. This challenge is explicit in, e.g., van Gelder, 1995; Beer, 1995; Thelen & Smith, 1994; Wheeler, 1994; and elsewhere. We think it is a challenge that can be met and that (importantly) can be met by arguing from within a general framework that accepts many of the basic premises of the work (in new robotics and in dynamical systems theory) that motivates such scepticism in the first place. Our strategy will be as follows. We begin (Section 1) by offering an account (an example and something close to a definition) of what we shall term Minimal Robust Representationalism (MRR). Sections 2 & 3 address some likely worries and questions about this notion. We end (Section 4) by making explicit the conditions under which, on our account, a science (e.g., robotics) may claim to be addressing cognitive phenomena.
This paper adopts a legal perspective to counter some exaggerations of today's debate on the social understanding of robotics. According to a long and well-established tradition, there is in fact a relatively strong consensus among lawyers on some key notions, such as agency and liability, in the current use of robots. However, dealing with a field in rapid evolution, we need to rethink some basic tenets of the contemporary legal framework. In particular, the time has come for lawyers to acknowledge that some acts of robots should be considered a new source of legal responsibility for others' behaviour.
This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget, as well as Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
This article addresses prospective and retrospective responsibility issues connected with medical robotics. It will be suggested that extant conceptual and legal frameworks are sufficient to address and properly settle most retrospective responsibility problems arising in connection with injuries caused by robot behaviours (exemplified here by reference to harms that occurred in surgical interventions supported by the Da Vinci robot, as reported in the scientific literature and in the press). In addition, it will be pointed out that many prospective responsibility issues connected with medical robotics are nothing but well-known robotics engineering problems in disguise, which are routinely addressed by roboticists as part of their research and development activities; for this reason they do not raise particularly novel ethical issues. In contrast, it will be pointed out that novel and challenging prospective responsibility issues may emerge in connection with harmful events caused by normal robot behaviours. This point will be illustrated in connection with the rehabilitation robot Lokomat.
In order to build autonomous robots that can carry out useful work in unstructured environments, new approaches to building intelligent systems have been developed. The relationship to traditional academic robotics and traditional artificial intelligence is examined. In the new approaches, a tight coupling of sensing to action produces architectures for intelligence that are networks of simple computational elements which are quite broad, but not very deep. Recent work within this approach has demonstrated the use of representations, expectations, plans, goals, and learning, but without resorting to the traditional use of central, abstractly manipulable, or symbolic representations. Perception within these systems is often an active process, and the dynamics of the interactions with the world are extremely important. The question of how to evaluate and compare the new work to the traditional still provokes vigorous discussion.
After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov's response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic rebellion is made along with a call for personal responsibility and suggestions for implementing safety constraints in intelligent robots.
This paper pursues the intertwined tracks of robotics and art since the mid-20th century, taking a loose chronological approach that considers both the devices themselves and their discursive contexts. Relevant research has occurred in a variety of cultural locations, often outside of or prior to formalized robotics contexts. Research was even conducted under the aegis of art or cultural practices, where robotics has been pursued for other than instrumental purposes. In hindsight, some of that work seems remarkably prescient of contemporary trends. The context of cultural robotics is a highly charged interdisciplinary test environment in which the theory and pragmatics of technical research confront the phenomenological realities of physical and social being in the world, and the performative and processual practices of the arts. In this context, issues of embodiment, material instantiation, structural coupling, and machine sensing have provoked the reconsideration of notions of (machine) intelligence and cognitivist paradigms. The paradoxical condition of robotics vis-à-vis artificial intelligence is reflected upon. This paper discusses the possibility of a new embodied ontology of robotics that draws upon both cybernetics and post-cognitive approaches.
Hamm, Kamp, and van Lambalgen 2006 (hereafter HLK) propose to relate NL discourse to cognitive representations that also deal with world knowledge, planning, belief revision, etc. Surprisingly, to represent human cognition they use an event calculus "which has found applications in robotics". This comment argues that the robotics-based theory of HLK attributes too much to world knowledge and not enough to the ontology, centering, and other universals of NL semantics. It is also too Anglo-centric to generalize to languages of other linguistic types.
Social robotics is a rapidly developing industry-oriented area of research, intent on making robots in social roles commonplace in the near future. This has led to rising interest in the dynamics as well as the ethics of human-robot relationships, described here as a nascent relational turn. A contrast is drawn with the 1990s' paradigm shift associated with relational-self themes in social psychology. Constructions of the human-robot relationship reproduce the "I-You-Me" dominant model of theorising about the self, with biases that (as in social constructionism) consistently accentuate externalist or "interactionist" standpoints as opposed to internalist or "individualistic" ones. Perspectives classifiable as "ecological relationalism" may compensate for limitations of the interactionist-individualistic dimension. Implications for theorising subjectivity are considered.
Introduces the use of Lego robots in research and teaching in philosophy. Potential uses include employing the machines as pedagogical tools for teaching introductory ideas in cognitive robotics, the philosophy of mind, and the philosophy of Artificial Intelligence. Describes the strengths and potential pitfalls of introducing this technology to the classroom.
Service robotics, mainly defined as "non-industrial robotics", is identified as the next economic success story to be expected after robots have been ubiquitously implemented into industrial production lines. Under the heading of service robotics, we find a widespread area of applications reaching from robotics in agriculture and in the public transportation system to service robots applied in private homes. For our interdisciplinary perspective of technology assessment, we propose to take the human user/worker as the common focus. In some cases, the user/worker is the effective subject acting by means of and in cooperation with a service robot; in other cases, the user/worker might become a pure object of the respective robotic system, for example, as a patient in a hospital. In this paper, we present a comprehensive interdisciplinary framework which allows us to scrutinize some of the most relevant applications of service robotics; we propose to combine technical, economic, legal, philosophical/ethical, and psychological perspectives in order to design a thorough and comprehensive expert-based technology assessment. This allows us to understand the potentials as well as the limits and even the threats connected with the ongoing and planned implementation of service robots into the human lifeworld, particularly of those technical systems displaying increasing grades of autonomy.
Apocalyptic AI, the hope that we might one day upload our minds into machines and live forever in cyberspace, is a surprisingly widespread and influential idea, affecting everything from the world view of online gamers to government research funding and philosophical thought. In Apocalyptic AI, Robert Geraci offers the first serious account of this "cyber-theology" and the people who promote it, drawing on interviews with roboticists and AI researchers and even devotees of the online game Second Life. He points out that the rhetoric of Apocalyptic AI is strikingly similar to that of the apocalyptic traditions of Judaism and Christianity: in both systems the believer is trapped in a dualistic universe and expects a resolution in which he or she will be translated to a transcendent new world and live forever in a glorified new body. Geraci also shows how this worldview exerts significant influence by promoting certain types of research in robotics and artificial intelligence, and has had an impact on philosophers of mind, theologians, and even legal scholars.
Cybernetics promoted machine-supported investigations of adaptive sensorimotor behaviours observed in biological systems. This methodological approach receives renewed attention in contemporary robotics, cognitive ethology, and the cognitive neurosciences. Its distinctive features concern machine experiments, and their role in testing behavioural models and explanations flowing from them. Cybernetic explanations of behavioural events, regularities, and capacities rely on multiply realizable mechanism schemata, and strike a sensible balance between causal and unifying constraints. The multiple realizability of cybernetic mechanism schemata paves the way to principled comparisons between biological systems and machines. Various methodological issues involved in the transition from mechanism schemata to their machine instantiations are addressed here, by reference to a simple sensorimotor coordination task. These concern the proper treatment of ceteris paribus clauses in experimental settings, the significance of running experiments with correct but incomplete machine instantiations of mechanism schemata, and the advantage of operating with real machines (as opposed to simulated ones) immersed in real environments.
The emergent use of service robots in more and more areas of social life raises a number of legal issues which have to be addressed in order to apply and adapt the existing legal framework to this new technology. The article provides an overview of law as a means to regulate and govern technology and discusses fundamental issues of the relationship between law and technology. It then goes on to address a number of relevant problems in the field of service robotics. In particular, these issues include the organization of administrative control and the legal liability regime which applies to service robots. Also, the issue of the autonomy of service robots is discussed, which cannot easily be answered under the existing, human-centered legal regime.
This paper analyzes the economic implications of service robots, an expected important future technology. The considerations are embedded in global trends, focusing on the interdependencies between services and industry not only in the context of the provision of services but already starting at the level of the innovation process. It is argued that, due to the various interdependencies combined with heterogeneous application fields, the resulting implications need to be contextualized. Concerning the net labor market effects, it is reasonable to assume that the field of service robotics will generate overall job creation, accompanied by increasing skill requirements demanded of the involved employees. The paper analyzes which challenges arise in evaluating and further developing the new technology field, and some policy recommendations are given.
This article distinguishes three archetypal ways of articulating spatial cognition: (1) via metric representation of objective geometry, (2) via somatosensory constitution of the peripersonal environment, and (3) via pragmatic comprehension of the finalistic sense of action. The last one is documented by neuroscientific studies concerning mirror neurons. Bio-robotic experiments implementing mirror functions confirm the constitutive role of goal-oriented actions in spatial processes.
We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. The N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than those in our existing surveys in three ways. First, they are social decisions supported by reasons. Second, these results are based on weaker premises, as no exogenous expertise (aside from that provided by the participants) is needed to seed the survey. Third, N-Reasons is designed to support experiments, so we can learn how to improve the platform. We sketch experimental results that show the platform is a success, as well as pointing to ways it can be improved.
Internet communication technology has been said to affect our sense of self by altering the way we construct "personal identity," understood as identificatory valuative narratives about the self; in addition, some authors have warned that internet communication creates special conditions for moral agency that might gradually change our moral intuitions. Both of these effects are attributed to the fact that internet communication is "disembodied." Our aim in this paper is to establish a link between this complex of claims and past and ongoing research in phenomenology, empirical psychology, and cognitive science, in order to formulate an empirical hypothesis that can assist the development and evaluation of recent technology for embodied telecommunication. We first suggest that for the purposes of interdisciplinary exchange, personal identity is formally best represented by a selection function that (for temporal intervals of variable length) "bundles" capacity ascriptions into identificatory narratives. Based on this model, we discuss which cultural changes engendered by the internet affect the construction of personal identity in ways that diminish our ethical sensitivities. In a second step, working from phenomenological claims by Martin Buber, we argue that disembodied communication severs two modes of cognitive function, preconceptual and conceptual, which tie together moral motivation, self-experience, and identity construction. We translate Buber's claims into the theoretical idiom of the "theory of cognitive orientation," a psychological theory of motivation that links up with recent research in embodied cognition. In a third step, we investigate whether the embodiment of the internet with communication robots (e.g., telenoids) holds out the prospect of reverting this structural change at least partially.
We conclude by formulating an empirical hypothesis (for researchers in cognitive science) that has direct import, we submit, on the question whether embodied telecommunication promises a new form of ethically sensitive self-constituting encounter. Johanna Seibt and Marco Nørskov, Department for Philosophy and the History of Ideas, Aarhus University, Denmark; Philosophy & Technology, Special Issue, pp. 1-23, DOI 10.1007/s13347-012-0064-9.
Based on an integrated theoretical framework, this study analyzes user acceptance behavior toward socially interactive robots, focusing on the variables that influence users' attitudes and intentions to adopt robots. Individuals' responses to questions about attitude and intention to use robots were collected and analyzed according to different factors modified from a variety of theories. The results of the proposed model show that social presence is key to the behavioral intention to accept social robots. The proposed model shows the significant roles of perceived adaptivity and sociability, both of which affect attitude as well as influence perceived usefulness and perceived enjoyment, respectively. These factors can be key features of users' expectations of social robots, which has practical implications for designing and developing meaningful social interaction between robots and humans. The new set of variables is specific to social robots, acting as factors that enhance attitudes and behavioral intentions in human-robot interactions. Keywords: robot acceptance model; socially interactive robots; social robots; social presence.
In this paper, we consider the influence of Gibson's affordance theory on the design of robotic agents. Affordance theory (and the ecological approach to agent design in general) has in many cases contributed to the development of successful robotic systems; we provide a brief survey of AI research in this area. However, there remain significant issues that complicate discussions on this topic, particularly in the exchange of ideas between researchers in artificial intelligence and ecological psychology. We identify some of these issues, specifically the lack of a generally accepted definition of "affordance" and fundamental differences in the current approaches taken in AI and ecological psychology. While we consider reconciliation between these fields to be possible and mutually beneficial, it will require some flexibility on the issue of direct perception.
After critical appraisal of mathematical and biological characteristics of the model, we discuss how a classical hippocampal neural network expresses functions similar to those of the chaotic model, and then present an alternative stimulus-driven chaotic random recurrent neural network (RRNN) that learns patterns as well as sequences, and controls the navigation of a mobile robot.
This article evaluates the "drive toward greater autonomy" in lethally armed unmanned systems. Following a summary of the main criticisms and challenges to lethal autonomy, both engineering and ethical, raised by opponents of this effort, the article turns toward solutions or responses that defense industries and military end users might seek to incorporate in design, testing, and manufacturing to address these concerns. The way forward encompasses a two-fold testing procedure for reliability, incorporating empirical, quantitative benchmarks of performance in compliance with formalized and programmable rules of engagement, and a conception of "due care" in product liability. This would be designed in analogy with procedures currently followed by well-intentioned governments and militaries with their own (human) military personnel, both to ensure against failure, and to accept responsibility and compensate victims of inadvertent and unintended accidents. The procedure is designed specifically to address objections first posed by Robert Sparrow (2007) and Noel Sharkey (2007), and echoed in P.W. Singer's critically acclaimed Wired for War (2009), that lethal autonomous systems cannot be meaningfully held accountable for commission of war crimes, and thus the development, manufacture, and deployment of such systems would constitute a violation of international law.
Michael Kassler (1982). Ethical Aspects of Robotics. In D. R. Oldroyd (ed.), Science and Ethics: Papers Presented at a Symposium Held Under the Aegis of the Australian Academy of Science, University of New South Wales, November 7, 1980. New South Wales University Press.
A suitable project for the new Millennium is to radically reconfigure our image of human rationality. Such a project is already underway, within the Cognitive Sciences, under the umbrellas of work in Situated Cognition, Distributed and De-centralized Cognition, Real-world Robotics, and Artificial Life. Such approaches, however, are often criticized for giving certain aspects of rationality too wide a berth. They focus their attention on such superficially poor cousins as.
Discussion about the application of scientific knowledge in robotics in order to build helpers for people is widespread. The issue addressed herein is philosophically poignant: that of robots that are "people". It is currently popular to speak about robots and the image of Man. Behind this lurks the dialogical mind and questions about the significance of an artificial version of it. Without intending to defend or refute the discourse in favour of 'recreating' Man, a less familiar question is brought forth: what if we were capable of creating a very convincing replica of man (constructing a robot-person), what would the consequences be, and would we be satisfied with such technology? A thorny topic: it questions the entire knowledge foundation upon which strong AI/Robotics is positioned. The author argues for improved monitoring of technological progress and thus favours implementing weaker techniques.
Cognitive systems research has predominantly been guided by the historical distinction between emotion and cognition, and has focused its efforts on modelling the "cognitive" aspects of behaviour. While this initially meant modelling only the control system of cognitive creatures, with the advent of "embodied" cognitive science this expanded to also modelling the interactions between the control system and the external environment. What did not seem to change with this embodiment revolution, however, was the attitude towards affect and emotion in cognitive science. This paper argues that cognitive systems research is now beginning to integrate these aspects of natural cognitive systems into cognitive science proper, not in virtue of traditional "embodied cognitive science", which focuses predominantly on the body's gross morphology, but rather in virtue of research into the interoceptive, organismic basis of natural cognitive systems.
Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. (This is a revised and expanded version of an essay that originally appeared in IEEE Intelligent Systems.)
The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns, associated with: (1) the potential reduction in the amount of human contact; (2) an increase in feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; and (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence and creating more opportunities for social interaction.
Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus, I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve (more or less particular) human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Few ethical views can easily explain even the wrongness of such human engineering, however, and those few explanations that are available break the analogy with engineering robots. The case turns out to be illustrative of profound problems in the field of population ethics.
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
Telerobotically operated and semiautonomous machines have become a major component in the arsenals of industrial nations around the world. By the year 2015 the United States military plans to have one-third of their combat aircraft and ground vehicles robotically controlled. Although there are many reasons for the use of robots on the battlefield, perhaps one of the most interesting assertions is that these machines, if properly designed and used, will result in a more just and ethical implementation of warfare. This paper will focus on these claims by looking at what has been discovered about the capability of humans to behave ethically on the battlefield, and then comparing those findings with the claims made by robotics researchers that their machines are able to behave more ethically on the battlefield than human soldiers. Throughout the paper we will explore the philosophical critique of this claim and also look at how the robots of today are impacting our ability to fight wars in a just manner.
Zlatev offers surprisingly weak reasoning in support of his view that robots with the right kind of developmental histories can have meaning. We ought nonetheless to praise Zlatev for an impressionistic account of how attending to the psychology of human development can help us build robots that appear to have intentionality.
Following the success of Sony Corporation’s “AIBO”, robot cats and dogs are multiplying rapidly. “Robot pets” employing sophisticated artificial intelligence and animatronic technologies are now being marketed as toys and companions by a number of large consumer electronics corporations. It is often suggested in popular writing about these devices that they could play a worthwhile role in serving the needs of an increasingly aging and socially isolated population. Robot companions, shaped like familiar household pets, could comfort and entertain lonely older persons. This goal is misguided and unethical. While there are a number of apparent benefits that might be thought to accrue from ownership of a robot pet, the majority and the most important of these are predicated on mistaking, at a conscious or unconscious level, the robot for a real animal. For an individual to benefit significantly from ownership of a robot pet they must systematically delude themselves regarding the real nature of their relation with the animal. It requires sentimentality of a morally deplorable sort. Indulging in such sentimentality violates a (weak) duty that we have to ourselves to apprehend the world accurately. The design and manufacture of these robots is unethical in so far as it presupposes or encourages this delusion. The invention of robot pets heralds the arrival of what might be called “ersatz companions” more generally. That is, of devices that are designed to engage in and replicate significant social and emotional relationships. The advent of robot dogs offers a valuable opportunity to think about the worth of such companions, the proper place of robots in society and the value we should place on our relationships with them.
According to a common philosophical distinction, the 'original' intentionality, or 'aboutness' possessed by our thoughts, beliefs and desires, is categorically different from the 'derived' intentionality manifested in some of our artifacts: our words, books and pictures, for example. Those making the distinction claim that the intentionality of our artifacts is 'parasitic' on the 'genuine' intentionality to be found in members of the former class of things. In Kinds of Minds: Toward an Understanding of Consciousness, Daniel Dennett criticizes that claim and the distinction it rests on, and seeks to show that "metaphysically original intentionality" is illusory by working out the implications he sees in the practical possibility of a certain type of robot, i.e., one that generates 'utterances' which are 'inscrutable to the robot's designers' so that we, and they, must consult the robot to discover the meaning of its utterances. I argue that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes.
Noel and Amanda Sharkey have written an insightful paper on the ethical issues concerned with the development of childcare robots for infants and toddlers, discussing the possible consequences for the psychological and emotional development and wellbeing of children. The ethical issues involving the use of robots as toys, interaction partners or possible caretakers of children are discussed by reviewing a wide literature on the pathology and causes of attachment disorders. The potential risks emerging from the analysis lead the authors to promote a multidisciplinary debate on the current legislation to deal with future robot childcare. As a general first consideration, the questions arising from the paper are extremely timely, since current robot technology is surprisingly close to achieving autonomous bonding and sustained socialization with human toddlers. The evolution of robot technology has been so rapid in the last few years that, even though a discipline like Human-Machine Interaction has only recently welcomed human-robot interaction within its disciplinary scope, a variety of social robots have started to populate our life and daily activities. In the past five years human-robot interaction has received significant and growing interest, leading to the development of so-called robot companions, a term that emphasizes a constant interaction and co-operation between human beings and robotic machines. While Noel and Amanda Sharkey take a critical stance in their paper on the consequences of the use of robots as companions or caretakers, other researchers seem more keen to highlight the potential of caregiver robots, in particular in educational settings. In this commentary I’ll try to offer my personal viewpoint on the consequences of using robot companions or caretakers of children for learning and education, and the effects of technologies on cognitive skills development, a controversial area of research where differing findings show how little is known.
Given (1) Wittgenstein's externalist analysis of the distinction between following a rule and behaving in accordance with a rule, (2) prima facie connections between rule-following and psychological capacities, and (3) pragmatic issues about training, it follows that most, if not all, future artificially intelligent computers and robots will not use language, possess concepts, or reason. This argument suggests that AI's traditional aim of building machines with minds, exemplified in current work on cognitive robotics, is in need of substantial revision.
Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total childcare is not yet being promoted, there are indications that it is 'on the cards'. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.
In the present enterprise we take a look at the meaning of autonomy, how the word has been employed, and some of the consequences of its use in the sciences of the artificial. Could and should robots really be autonomous entities? Over and beyond this, we use concepts from the philosophy of mind to spur on enquiry into the very essence of human autonomy. We believe that our initiative, like Dennett's life-long research, sheds light upon the problems of robot design with respect to their relation with humans.
In his 1923 play R.U.R.: Rossum's Universal Robots, Karel Čapek coined "robot" as a derivative of the Czech robota (forced labor). Limited to work too tedious or dangerous for humans, today's robots weld parts on assembly lines, inspect nuclear plants, and explore other planets. Generally, robots are still far from achieving their fictional counterparts' intelligence and flexibility. Humanoid robotics labs worldwide are working on creating robots that are one step closer to science fiction's androids. Building a humanlike robot is a formidable engineering task requiring a combination of mechanical, electrical, and software engineering; computer architecture; and real-time control. In 1993, we began a project aimed at constructing a humanoid robot for use in...
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system.
This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.
Bill Joy's deep pessimism is now famous. "Why the Future Doesn't Need Us", his defense of that pessimism, has been read by, it seems, everyone, and many of these readers, apparently, have been converted to the dark side, or rather more accurately, to the future-is-dark side. Fortunately (for us; unfortunately for Joy), the defense, at least the part of it that pertains to AI and robotics, fails. Ours may be a dark future, but we cannot know that on the basis of Joy's reasoning. On the other hand, we ought to fear a good deal more than fear itself: we ought to fear not robots, but what some of us may do with robots.
It is argued that the notion of Umwelt is relevant for contemporary discussions within theoretical biology, biosemiotics, the study of Artificial Life, Autonomous Systems Research and philosophy of biology. Focus is put on the question of whether an artificial creature can have a phenomenal world in the sense of the Umwelt notion of Jakob von Uexküll, one of the founding figures of biosemiotics. Rather than vitalism, Uexküll's position can be interpreted as a version of qualitative organicism. A historical sketch of Autonomous Systems Research (ASR) is presented to show its theoretical roots and fruitful opposition to traditional AI-style robotics. It is argued that these artificial systems are only partly 'situated' because they do not in the full sense of the word experience an Umwelt. A deeper understanding of truly situated autonomous systems as being a kind of complex self-organizing semiotic agents with emergent qualitative properties must be gained, not only from the broad field of theoretical biology, but also from the perspective of biosemiotics in the Uexküll tradition. The paper is thus an investigation of a new notion of autonomy that includes a qualitative aspect of the organism. This indicates that the Umwelt concept is not reducible to purely functional notions.
In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human–robot interaction based on recent psychological and neurological findings.
Overview. Consciousness is often considered to have a "hard" part and a not-so-hard part. With the help of work in artificial intelligence and more recently in embodied robotics, there is hope that we shall be able to solve the not-so-hard part and make artificial agents that understand their environment, communicate with their friends, and most importantly, have a notion of "self" and "others". But will such agents feel anything? Building the feel into the agent will be the "hard" part.
In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.
How should biological behaviour be modelled? A relatively new approach is to investigate problems in neuroethology by building physical robot models of biological sensorimotor systems. The explication and justification of this approach are here placed within a framework for describing and comparing models in the behavioural and biological sciences. First, simulation models – the representation of a hypothesis about a target system – are distinguished from several other relationships also termed “modelling” in discussions of scientific explanation. Seven dimensions on which simulation models can differ are defined and distinctions between them discussed: 1. Relevance: whether the model tests and generates hypotheses applicable to biology. 2. Level: the elemental units of the model in the hierarchy from atoms to societies. 3. Generality: the range of biological systems the model can represent. 4. Abstraction: the complexity, relative to the target, or amount of detail included in the model. 5. Structural accuracy: how well the model represents the actual mechanisms underlying the behaviour. 6. Performance match: to what extent the model behaviour matches the target behaviour. 7. Medium: the physical basis by which the model is implemented. No specific position in the space of models thus defined is the only correct one, but a good modelling methodology should be explicit about its position and the justification for that position. It is argued that in building robot models biological relevance is more effective than loose biological inspiration; multiple levels can be integrated; that generality cannot be assumed but might emerge from studying specific instances; abstraction is better done by simplification than idealisation; accuracy can be approached through iterations of complete systems; that the model should be able to match and predict target behaviour; and that a physical medium can have significant advantages.
These arguments reflect the view that biological behaviour needs to be studied and modelled in context, that is, in terms of the real problems faced by real animals in real environments. Key Words: animal behaviour; levels; models; neuroethology; realism; robotics; simulation.
Starting with service robotics and industrial robotics, this paper aims to suggest philosophical reflections about the relationship between body and machine, between man and technology, in our contemporary world. From the massive use of the cell phone to robots which apparently “feel” and show emotions as humans do, and from the wearable exoskeleton to prototypes reproducing an artificial sense of touch, technological progress has advanced to the point of embodying itself in our nakedness. Robotics, indeed, is inspired by biology in order to develop a new kind of technology affecting human life. This is a bio-robotic approach, which culminates in the figure of the cyborg and consequently in the loss of human nature. Today, humans have reached the possibility of modifying and creating their own body following their personal desires. But what is the limit of this achievement? For this reason, we must all ask ourselves whether we have a body or whether we are a body.
This paper presents a series of four single-subject experiments aimed at investigating whether children with autism show more social engagement when interacting with the Nao robot than with a human partner in a motor imitation task. The Nao robot imitates gross arm movements of the child in real time. Different behavioral criteria (i.e. eye gaze, gaze shifting, free initiations and prompted initiations of arm movements, and smile/laughter) were analyzed based on video data of the interaction. The results are mixed and suggest high variability in reactions to the Nao robot. For Child2 and Child3, the results indicate no effect of the Nao robot on any of the target variables. Child1 and Child4 showed more eye gaze and smile/laughter in the interaction with the Nao robot compared to the human partner, and Child1 showed a higher frequency of motor initiations in the interaction with the Nao robot compared to the baselines, but not with respect to the human interaction. The robot proved to be a better facilitator of shared attention only for Child1. Keywords: human-robot interaction; assistive robotics; autism.
When we interact with animals, we intuitively read thoughts and feelings into their expressions and actions - it is easy to suppose that they have minds like ours. And as technology grows more sophisticated, we might soon find ourselves interpreting the behaviour of robots too in human terms. It is natural for us to humanize other beings in this way, but is it philosophically or scientifically justifiable? How different might the minds of animals or machines be to ours? As David McFarland asks here, could robots ever feel guilty, and is it correct to suppose your dog can truly be happy? Can we ever know what non-human minds might be like, or will the answer be forever out of our reach? These are central and important questions in the philosophy of mind, and this book is an accessible exploration of the differing philosophical positions that can be taken on the issue. McFarland looks not only at philosophy, but also examines new evidence from the science of animal behaviour plus the latest developments in robotics and artificial intelligence, to show how many different - and sometimes surprising - conclusions we can draw about the nature of 'alien minds'.
We commonly identify something seriously defective in a human life that is lived in ignorance of important but unpalatable truths. At the same time, some degree of misapprehension of reality may be necessary for individual health and success. Morally speaking, it is unclear just how insistent we should be about seeking the truth. Robert Sparrow has considered such issues in discussing the manufacture and marketing of robot ‘pets’, such as Sony’s doglike ‘AIBO’ toy and whatever more advanced devices may supersede it. Though it is not his only concern, Sparrow particularly criticizes such robot pets for their illusory appearance of being living things. He fears that some individuals will subconsciously buy into the illusion, and come to sentimentalize interactions that fail to constitute genuine relationships. In replying to Sparrow, I emphasize that this would be continuous with much of the minor sentimentality that we already indulge in from day to day. Although a disposition to seek the truth is morally virtuous, the virtue concerned must allow for at least some categories of exceptions. Despite Sparrow’s concerns about robot pets (and robotics more generally), we should be lenient about familiar, relatively benign, kinds of self-indulgence in forming beliefs about reality. Sentimentality about robot pets seems to fall within these categories. Such limited self-indulgence can co-exist with ordinary honesty and commitment to truth.
It is proposed here that Webb's ideas about robots as possible models of animals need some rethinking. In our view, even though widely used biorobotics strategies are fairly successful at reproducing the macroscopic behavior of biological systems, several problems remain unresolved on the side of robotics as well as biology. Both mathematical and hardware robotics models should be physiologically feasible. Control principles elaborated in robotics do not necessarily apply to biological control systems. Although observations of flying birds inspired aerodynamics and thus modern airplanes, little knowledge has been added to the neurophysiological principles underlying flight in birds. Chess-playing computers might outperform most chess players, but they cannot be considered physiologically feasible models of human thinking.
Service robotics has increasingly become the focus of reflective research on new technologies over the last decade. The current state of technology is characterized by prototypical robot systems developed for specific application scenarios outside factories. This has enabled context-based Science and Technology Studies and technology assessments of service robotic systems. This contribution describes the status quo of this reflective research as the starting point for interdisciplinary technology assessment (TA), taking account of TA studies and, in particular, of publications from the ethical and empirical social science perspective. Finally, based on this status quo, evaluation criteria for service robots are developed, which are relevant for further reflective research.
While robotics has benefited from inspiration gained from biology, the opposite is not the case: there are few if any cases in which robotic models have led to genuine insight into biology. We analyze the reasons why biorobotics has been essentially a one-way street. We argue that the development of better tools is essential for progress in this field.
The raison d’être of this article is that many a spry-eyed analyst of work in intelligent computing and robotics fails to see what is essential concerning applications development, namely the expression of its ultimate goal. Alternatively, they fail to state it suitably for the lesser-informed public eye. The author does not claim to be able to remedy this. Instead, the visionary investigation offered couples learning and computing with other related fields as part of a larger spectrum, with the aim of fully simulating people in their embodied image. For the first time, the social roles attributed to the technical objects produced are questioned, and this with a humorous illustration.
The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes into focus. This chapter explores these issues, and from its results details a novel approach to meeting the given conditions in a simple architecture of information processing.
This paper discusses recent research on humanoid robots and thought experiments addressing the question to what degree such robots could be expected to develop human-like cognition, if rather than being pre-programmed they were made to learn from the interaction with their physical and social environment like human infants. A question of particular interest, from both a semiotic and a cognitive scientific perspective, is whether or not such robots could develop an experiential Umwelt, i.e. could the sign processes they are involved in become intrinsically meaningful to themselves? Arguments for and against the possibility of phenomenal artificial minds of different forms are discussed, and it is concluded that humanoid robotics still has to be considered “weak” rather than “strong AI”, i.e. it deals with models of mind rather than actual minds.