The growing proportion of elderly people in society, together with recent advances in robotics, makes the use of robots in elder care increasingly likely. We outline developments in the areas of robot applications for assisting the elderly and their carers, for monitoring their health and safety, and for providing them with companionship. Despite the possible benefits, we raise and discuss six main ethical concerns associated with: (1) the potential reduction in the amount of human contact; (2) an increase in the feelings of objectification and loss of control; (3) a loss of privacy; (4) a loss of personal liberty; (5) deception and infantilisation; (6) the circumstances in which elderly people should be allowed to control robots. We conclude by balancing the care benefits against the ethical costs. If introduced with foresight and careful guidelines, robots and robotic technology could improve the lives of the elderly, reducing their dependence, and creating more opportunities for social interaction.
Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve (more or less particular) human ends. A typical objection to this case draws an analogy to the genetic engineering of humans: if designing eager robot servants is permissible, it should also be permissible to design eager human servants. Few ethical views can easily explain even the wrongness of such human engineering, however, and those few explanations that are available break the analogy with engineering robots. The case turns out to be illustrative of profound problems in the field of population ethics.
According to a common philosophical distinction, the 'original' intentionality, or 'aboutness', possessed by our thoughts, beliefs and desires is categorically different from the 'derived' intentionality manifested in some of our artifacts: our words, books and pictures, for example. Those making the distinction claim that the intentionality of our artifacts is 'parasitic' on the 'genuine' intentionality to be found in members of the former class of things. In Kinds of Minds: Toward an Understanding of Consciousness, Daniel Dennett criticizes that claim and the distinction it rests on, and seeks to show that "metaphysically original intentionality" is illusory by working out the implications he sees in the practical possibility of a certain type of robot, i.e., one that generates 'utterances' which are 'inscrutable to the robot's designers', so that we, and they, must consult the robot to discover the meaning of its utterances. I argue that the implications Dennett finds are erroneous, regardless of whether such a robot is possible, and therefore that the real existence of metaphysically original intentionality has not been undermined by the possibility of the robot Dennett describes.
Following the success of Sony Corporation’s “AIBO”, robot cats and dogs are multiplying rapidly. “Robot pets” employing sophisticated artificial intelligence and animatronic technologies are now being marketed as toys and companions by a number of large consumer electronics corporations. It is often suggested in popular writing about these devices that they could play a worthwhile role in serving the needs of an increasingly aging and socially isolated population. Robot companions, shaped like familiar household pets, could comfort and entertain lonely older persons. This goal is misguided and unethical. While there are a number of apparent benefits that might be thought to accrue from ownership of a robot pet, the majority and the most important of these are predicated on mistaking, at a conscious or unconscious level, the robot for a real animal. For an individual to benefit significantly from ownership of a robot pet they must systematically delude themselves regarding the real nature of their relation with the animal. It requires sentimentality of a morally deplorable sort. Indulging in such sentimentality violates a (weak) duty that we have to ourselves to apprehend the world accurately. The design and manufacture of these robots is unethical in so far as it presupposes or encourages this delusion. The invention of robot pets heralds the arrival of what might be called “ersatz companions” more generally. That is, of devices that are designed to engage in and replicate significant social and emotional relationships. The advent of robot dogs offers a valuable opportunity to think about the worth of such companions, the proper place of robots in society and the value we should place on our relationships with them.
Noel and Amanda Sharkey have written an insightful paper on the ethical issues raised by the development of childcare robots for infants and toddlers, discussing the possible consequences for the psychological and emotional development and wellbeing of children. The ethical issues involved in the use of robots as toys, interaction partners or possible caretakers of children are discussed by reviewing a wide literature on the pathology and causes of attachment disorders. The potential risks emerging from the analysis lead the authors to promote a multidisciplinary debate on the current legislation to deal with future robot childcare. As a general first consideration, the questions arising from the paper are extremely timely, since current robot technology is surprisingly close to achieving autonomous bonding and sustained socialization with human toddlers. The evolution of robot technology has been so rapid in the last few years that, even though a discipline like Human–Machine Interaction has only recently welcomed human–robot interaction within its disciplinary scope, a variety of social robots have started to populate our lives and daily activities. In the past five years human–robot interaction has received significant and growing interest, leading to the development of so-called robot companions, a term that emphasizes constant interaction and co-operation between human beings and robotic machines. While Noel and Amanda Sharkey take a critical stance on the consequences of using robots as companions or caretakers, other researchers seem more keen to highlight the potential of caregiver robots, particularly in educational settings. In this commentary I’ll try to offer my personal viewpoint on the consequences of using robot companions or caretakers of children for learning and education, and on the effects of technologies on cognitive skills development, a controversial area of research where divergent findings show how little is known.
Role-functionalism about robot pain claims that what is definitive of robot pain is its functional role: the causal relations pain has to noxious stimuli, behavior and other subjective states. Here, I propose that the only way to theorize role-functionalism of robot pain is in terms of type-identity theory. I argue that what makes a state pain for a neuro-robot at a time is the functional role it has in the robot at the time, and this state is type-identical to a specific circuit state. Support comes from an experimental study showing that if the neural network that controls a robot includes a specific 'emotion circuit', physical damage to the robot will cause a disposition to avoid movement, thereby enhancing fitness, compared to robots without the circuit. Thus, pain for a robot at a time is type-identical to a specific circuit state.
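The reported mechanism can be illustrated with a deliberately simple sketch. The class and attribute names below are hypothetical and the numbers arbitrary; this is not the study's neural network, only a minimal rendering of how an 'emotion circuit' could turn damage into an avoidance disposition:

```python
# Hypothetical sketch: a controller in which an "emotion circuit" turns
# physical damage into a disposition to avoid movement.

class NeuroRobot:
    def __init__(self, has_emotion_circuit):
        self.has_emotion_circuit = has_emotion_circuit
        self.pain_state = 0.0  # the circuit state identified with pain

    def take_damage(self, amount):
        if self.has_emotion_circuit:
            # the "pain" circuit state tracks noxious input
            self.pain_state = min(1.0, self.pain_state + amount)

    def movement_drive(self):
        # pain suppresses the disposition to move, protecting the body
        return max(0.0, 1.0 - self.pain_state)

robot_with = NeuroRobot(has_emotion_circuit=True)
robot_without = NeuroRobot(has_emotion_circuit=False)
for robot in (robot_with, robot_without):
    robot.take_damage(0.6)
```

Running both variants through the same damage event leaves the circuit-equipped robot with a reduced movement drive, while the circuit-free robot keeps moving at full drive despite being damaged.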
In this article, the authors examine whether and how robot caregivers can contribute to the welfare of children with various cognitive and physical impairments by expanding recreational opportunities for these children. The capabilities approach is used as a basis for informing the relevant discussion. Though important in its own right, having the opportunity to play is essential to the development of other capabilities central to human flourishing. Drawing from empirical studies, the authors show that the use of various types of robots has already helped some children with impairments. Recognizing the potential ethical pitfalls of robot caregiver intervention, however, the authors examine these concerns and conclude that an appropriately designed robot caregiver has the potential to contribute positively to the development of the capability to play while also enhancing the ability of human caregivers to understand and interact with care recipients.
As we near a time when robots may serve a vital function by becoming caregivers, it is important to examine the ethical implications of this development. By applying the capabilities approach as a guide to both the design and use of robot caregivers, we hope that this will maximize opportunities to preserve or expand freedom for care recipients. We think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively with their physical and social environments.
As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ Mobile Robotics Platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.
How can we make sense of the idea of ‘personal’ or ‘social’ relations with robots? Starting from a social and phenomenological approach to human–robot relations, this paper explores how we can better understand and evaluate these relations by attending to the ways our conscious experience of the robot and the human–robot relation is mediated by language. It is argued that our talk about and to robots is not a mere representation of an objective robotic or social-interactive reality, but rather interprets and co-shapes our relation to these artificial quasi-others. Our use of language also changes as a result of our experiences and practices. This happens when people start talking to robots. In addition, this paper responds to the ethical objection that talking to and with robots is both unreal and deceptive. It is concluded that in order to give meaning to human–robot relations, to arrive at a more balanced ethical judgment, and to reflect on our current form of life, we should complement existing objective-scientific methodologies of social robotics and interaction studies with interpretations of the words, conversations, and stories in and about human–robot relations.
The effects of striatal dopamine on behaviour have been widely investigated over the past decades, with “phasic” burst firings considered as the key expression of a reward prediction error responsible for reinforcement learning. Less well studied is tonic dopamine, where putative functions include the idea that it is a regulator of vigour, incentive salience, disposition to exert an effort and a modulator of approach strategies. We present a model combining tonic and phasic dopamine to show how different outflows triggered by either intrinsically or extrinsically motivating stimuli dynamically affect the basal ganglia by impacting on a selection process this system performs on its cortical input. The model, which has been tested on the simulated humanoid robot iCub in the interaction with a mechatronic board, shows the putative functions ascribed to dopamine emerging from the combination of a standard computational mechanism coupled to a differential sensitivity to the presence of dopamine across the striatum.
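The division of labour between the two dopamine signals can be caricatured in a few lines. This is not the iCub model itself, only a sketch under our own assumptions: tonic dopamine as a gain on a softmax-like selection over cortical inputs, and a phasic burst or dip as a scalar reward prediction error:

```python
import math

def striatal_selection(values, tonic_da):
    # tonic dopamine acts as a gain on the competition among cortical
    # inputs: higher tonic levels sharpen selection ("vigour")
    exps = [math.exp(tonic_da * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def phasic_update(value, reward, learning_rate=0.1):
    # a phasic burst (or dip) signals a reward prediction error that
    # nudges the stored value toward the delivered reward
    prediction_error = reward - value
    return value + learning_rate * prediction_error
```

With high tonic dopamine the same input values yield a much more decisive selection than with low tonic dopamine, while repeated phasic updates gradually align stored values with the rewards actually received.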
For a robot to cohabit with people, it should be able to learn people’s nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner. It allows the user to leave the number and types of gestures undefined prior to the learning. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network and the hidden Markov model. We have added an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing as a result of polysemy, that is, being assigned more than one meaning. For example, the sentence “Keep on going left slowly” carries three meaning components: (1) “keep on”, (2) “going left”, (3) “slowly”. We experimentally tested the clustering performance of the proposed method against data obtained from measuring gestures using a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches. In addition, we have found that the interactive learning function improves the learning performance of HB-SOINN.
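The incremental, unsupervised flavor of the approach can be sketched as follows. This is not the authors' HB-SOINN (which couples a self-organizing incremental neural network with hidden Markov models), only a minimal stand-in showing how the number of clusters can be left undefined before learning begins; the class and threshold are our own assumptions:

```python
import math

class IncrementalClusterer:
    """A toy SOINN-like clusterer: a new input either joins the nearest
    existing cluster or founds a new one, so the number of clusters
    grows with the data instead of being fixed in advance."""

    def __init__(self, new_cluster_threshold):
        self.threshold = new_cluster_threshold
        self.centroids = []  # one prototype per discovered cluster
        self.counts = []

    def observe(self, x):
        if not self.centroids:
            self.centroids.append(list(x))
            self.counts.append(1)
            return 0
        dists = [math.dist(x, c) for c in self.centroids]
        i = min(range(len(dists)), key=dists.__getitem__)
        if dists[i] > self.threshold:
            # too far from everything seen so far: found a new cluster
            self.centroids.append(list(x))
            self.counts.append(1)
            return len(self.centroids) - 1
        # otherwise move the winning prototype toward the input
        self.counts[i] += 1
        lr = 1.0 / self.counts[i]
        self.centroids[i] = [c + lr * (xi - c)
                             for c, xi in zip(self.centroids[i], x)]
        return i
```

Feeding the clusterer a stream of gesture feature vectors grows new prototypes only when an input lies far from everything seen so far, which is what lets the user leave the gesture inventory undefined beforehand.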
Following arguments put forward in my book (Why red doesn’t sound like a bell: understanding the feel of consciousness. Oxford University Press, New York, USA, 2011), this article takes a pragmatic, scientist’s point of view about the concepts of consciousness and “feel”, pinning down what people generally mean when they talk about these concepts, and then investigating to what extent these capacities could be implemented in non-biological machines. Although the question of “feel”, or “phenomenal consciousness” as it is called by some philosophers, is generally considered to be the “hard” problem of consciousness, the article shows that by taking a “sensorimotor” approach, the difficulties can be overcome. What remains to account for are the notions of so-called “access consciousness” and the self. I claim that though they are undoubtedly very difficult, these are not logically impossible to implement in robots.
Zlatev offers surprisingly weak reasoning in support of his view that robots with the right kind of developmental histories can have meaning. We ought nonetheless to praise Zlatev for an impressionistic account of how attending to the psychology of human development can help us build robots that appear to have intentionality.
Negative attitudes toward robots are considered one of the psychological factors preventing humans from interacting with robots in daily life. To verify their influence on humans' behaviors toward robots, we designed and executed experiments in which subjects interacted with Robovie, which is being developed as a platform for research on communication robots. This paper reports and discusses the results of these experiments on the correlation between subjects' negative attitudes and their behaviors toward robots. Moreover, it discusses the influences of gender and of experience with real robots on subjects' negative attitudes and behaviors toward robots.
In this article, I shall examine some of the issues and questions involved in the technology of autonomous robots, a technology that has developed greatly and is advancing rapidly. I shall do so with reference to a particularly critical field: autonomous military robotic systems. In recent times, various issues concerning the ethical implications of these systems have been the object of increasing attention from roboticists, philosophers and legal experts. The purpose of this paper is not to deal with these issues, but to show how the autonomy of those robotic systems, by which I mean the full automation of their decision processes, raises difficulties and also paradoxes that are not easy to solve. This is especially so when considering the autonomy of those robotic systems in their decision processes alongside their reliability. Finally, I would like to show how difficult it is to respond to these difficulties and paradoxes by calling into play a strong formulation of the precautionary principle.
Utilizing the film I, Robot as a springboard, I here consider the feasibility of robot utilitarians, the moral responsibilities that come with the creation of ethical robots, and the possibility of distinct ethics for robot-robot interaction as opposed to robot-human interaction. (This is a revised and expanded version of an essay that originally appeared in IEEE Intelligent Systems.)
I address a number of issues related to building an autonomous social robot. I review different approaches to social cognition and ask how these different approaches may inform the design of social robots. I argue that regardless of which theoretical approach to social cognition one favors, instantiating that approach in a workable robot will involve designing that robot on enactive principles.
This article addresses prospective and retrospective responsibility issues connected with medical robotics. It will be suggested that extant conceptual and legal frameworks are sufficient to address and properly settle most retrospective responsibility problems arising in connection with injuries caused by robot behaviours (exemplified here by reference to harms that occurred in surgical interventions supported by the Da Vinci robot, reported in the scientific literature and in the press). In addition, it will be pointed out that many prospective responsibility issues connected with medical robotics are nothing but well-known robotics engineering problems in disguise, which are routinely addressed by roboticists as part of their research and development activities: for this reason they do not raise particularly novel ethical issues. In contrast with this, it will be pointed out that novel and challenging prospective responsibility issues may emerge in connection with harmful events caused by normal robot behaviours. This point will be illustrated here in connection with the rehabilitation robot Lokomat.
Psychological attitudes towards service and personal robots are selectively examined from the vantage point of psychoanalysis. Significant case studies include the uncanny valley effect, brain-actuated robots evoking magic mental powers, parental attitudes towards robotic children, idealizations of robotic soldiers, and persecutory fantasies involving robotic components and systems. Freudian theories of narcissism, animism, infantile complexes, ego ideal, and ideal ego are brought to bear on the interpretation of these various items. The horizons of human–robot interaction are found to afford new and fertile grounds for psychoanalytic theorizing beyond strictly therapeutic contexts.
We present an approach to subjective computing for the design of future robots that exhibit more adaptive and flexible behavior in terms of subjective intelligence. Instead of encapsulating subjectivity into higher order states, we show by means of a relational approach how subjective intelligence can be implemented in terms of the reciprocity of autonomous self-referentiality and direct world-coupling. Subjectivity concerns the relational arrangement of an agent’s cognitive space. This theoretical concept is narrowed down to the problem of coaching a reinforcement learning agent by means of binary feedback. Algorithms are presented that implement subjective computing. The relational characteristic of subjectivity is further confirmed by a questionnaire on human perception of the robot’s behavior. The results imply that subjective intelligence cannot be externally observed. In sum, we conclude that subjective intelligence in relational terms is fully tractable and therefore implementable in artificial agents.
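Coaching by binary feedback can be made concrete with a minimal sketch. The class and parameter names below are our own assumptions, not the paper's algorithms: a human coach approves (+1) or disapproves (-1) each action, and the agent shifts its action preferences accordingly:

```python
class CoachedAgent:
    """A toy agent whose action preferences are shaped purely by a
    coach's binary feedback rather than by an environmental reward."""

    def __init__(self, actions, learning_rate=0.2):
        self.prefs = {a: 0.0 for a in actions}
        self.learning_rate = learning_rate

    def act(self):
        # greedy choice over the learned preferences
        return max(self.prefs, key=self.prefs.get)

    def coach(self, action, feedback):
        # feedback is +1 (approve) or -1 (disapprove) from the coach
        self.prefs[action] += self.learning_rate * feedback
```

A few rounds of approval for one action and disapproval for another are enough to steer the agent's greedy policy, which is the essence of the coaching setting the abstract describes.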
The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe--or in any event want to believe--that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons.
Arguments about whether a robot could ever be conscious have been conducted up to now in the factually impoverished arena of what is possible "in principle." A team at MIT of which I am a part is now embarking on a long-term project to design and build a humanoid robot, Cog, whose cognitive talents will include speech, eye-coordinated manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities. The aim of the project is not to make a conscious robot, but to make a robot that can interact with human beings in a robust and versatile manner in real time, take care of itself, and tell its designers things about itself that would otherwise be extremely difficult if not impossible to determine by examination. Many of the details of Cog's "neural" organization will parallel what is known (or presumed known) about their counterparts in the human brain, but the intended realism of Cog as a model is relatively coarse-grained, varying opportunistically as a function of what we think we know, what we think we can build, and what we think doesn't matter. Much of what we think will of course prove to be mistaken; that is one advantage of real experiments over thought experiments.
Preprint of Cole, Sacks, and Waterman. 2000. "On the immunity principle: A view from a robot." Trends in Cognitive Sciences 4 (5): 167, a response to Gallagher, S. 2000. "Philosophical conceptions of the self: implications for cognitive science." Trends in Cognitive Sciences 4 (1): 14-21. See also Gallagher's reply to Cole, Sacks, and Waterman, Trends in Cognitive Sciences 4 (5): 167-68.
In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I raise but do not answer the question that if morality requires us to want robots that are not genuine moral agents, why should we want something different in the case of human beings.
In this paper a look is taken at the relatively new area of culturing neural tissue and embodying it in a mobile robot platform—essentially giving a robot a biological brain. Present technology and practice is discussed. New trends and their potential effects are also indicated. The area has potentially major societal and ethical impacts, and hence some initial observations on these are made. Some initial issues are also considered with regard to the potential consciousness of such a brain.
You are offered one billion dollars to 'simply' produce a proof-of-concept robot that has phenomenal consciousness: in fact, you can receive a deliciously large portion of the money up front, by simply starting a three-year work plan in good faith. Should you take the money and commence? No. I explain why this refusal is in order, now and into the foreseeable future.
The purpose of the paper is to discuss whether a particular robot can be said to have an 'inner world', something that can be taken to be a critical feature of consciousness. It has previously been argued that the mechanism underlying the appearance of an inner world in humans is an ability of our brains to simulate behaviour and perception. A robot has previously been designed in which perception can be simulated. A prima facie case can be made that this robot has an inner world in the same sense as humans. Various objections to this claim are discussed in the paper and it is concluded that the robot, although extremely simple, can easily be improved without adding any new principles, so that ascribing an inner world to it becomes intuitively reasonable.
In the present enterprise we take a look at the meaning of autonomy, how the word has been employed, and some of the consequences of its use in the sciences of the artificial. Could and should robots really be autonomous entities? Over and beyond this, we use concepts from the philosophy of mind to spur on enquiry into the very essence of human autonomy. We believe our initiative, like Dennett's life-long research, sheds light upon the problems of robot design with respect to robots' relation with humans.
Childcare robots are being manufactured and developed with the long term aim of creating surrogate carers. While total childcare is not yet being promoted, there are indications that it is 'on the cards'. We examine recent research and developments in childcare robots and speculate on progress over the coming years by extrapolating from other ongoing robotics work. Our main aim is to raise ethical questions about the part or full-time replacement of primary carers. The questions are about human rights, privacy, robot use of restraint, deception of children and accountability. But the most pressing ethical issues throughout the paper concern the consequences for the psychological and emotional wellbeing of children. We set these in the context of the child development literature on the pathology and causes of attachment disorders. We then consider the adequacy of current legislation and international ethical guidelines on the protection of children from the overuse of robot care.
Most animals have significant behavioral expertise built in without having to explicitly learn it all from scratch. This expertise is a product of evolution of the organism; it can be viewed as a very long term form of learning which provides a structured system within which individuals might learn more specialized skills or abilities. This paper suggests one possible mechanism for analogous robot evolution by describing a carefully designed series of networks, each one being a strict augmentation of the previous one, which control a six legged walking machine capable of walking over rough terrain and following a person passively sensed in the infrared spectrum. As the completely decentralized networks are augmented, the robot’s performance and behavior repertoire demonstrably improve. The rationale for such demonstrations is that they may provide a hint as to the requirements for automatically building massive networks to carry out complex sensory-motor tasks. The experiments with an actual robot ensure that an essence of reality is maintained and that no critical disabling problems have been ignored.
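The layered idea can be caricatured in a few lines of code. The layer names, priority order, and sensor keys below are illustrative assumptions, not the paper's actual networks: each layer is a strict augmentation of the ones below it, and a higher-priority layer with an opinion subsumes the defaults beneath it:

```python
def walk_layer(sensors):
    # lowest layer: the basic competence of walking forward
    return "walk-forward"

def avoid_layer(sensors):
    # augmentation: cope with obstacles on rough terrain
    if sensors.get("obstacle"):
        return "step-over"
    return None

def follow_layer(sensors):
    # augmentation: steer toward a person passively sensed in infrared
    heading = sensors.get("infrared_heading")
    if heading is not None:
        return "turn-" + heading
    return None

def control(sensors):
    # the highest-priority layer with an opinion wins; lower layers
    # keep providing sensible default behavior
    for layer in (avoid_layer, follow_layer, walk_layer):
        command = layer(sensors)
        if command is not None:
            return command
```

Adding `avoid_layer` and `follow_layer` never requires rewriting `walk_layer`; each augmentation extends the repertoire while the earlier competences remain intact, mirroring the strict-augmentation structure the abstract describes.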
Emerging technologies like robotics for war and peace stress our moral norms and generate much public interest and controversy. We use this interest to attract participants to an innovative on-line survey platform, designed for experimenting with public engagement in the ethics of technology. In particular, the N-Reasons platform addresses several issues in democratic ethics: the cost of public participation, the methodological issue of feasible reflective ethical equilibrium (how can individuals in a large group take into account the ethical views of all others?), and the reliability of public participation processes. We sketch the motivation and design of the N-Reasons platform, stressing the need for a practical (fast, low-cost) instrument that makes equilibrium feasible. We focus on the Robot Ethics Survey, which featured a set of nine ethical challenges raised by robotics for war and peace. Over 400 people in five disjoint groups participated in this on-line survey experiment. We analyze the results both quantitatively and qualitatively: the decisions participants took and the reasons supporting those decisions. Both decisions and reasons strongly distinguished lethal military robotics from peace-related robotics. Methodologically, both decisions and reasons were remarkably consistent across the five distinct groups.
Discussion about the application of scientific knowledge in robotics in order to build people helpers is widespread. The issue herein addressed is philosophically poignant: that of robots that are “people”. It is currently popular to speak about robots and the image of Man. Behind this lurks the dialogical mind and questions about the significance of an artificial version of it. Without intending to defend or refute the discourse in favour of ‘recreating’ Man, a less familiar question is brought forth: “And what if we were capable of creating a very convincing replica of man (constructing a robot-person)? What would the consequences of this be, and would we be satisfied with such technology?” This is a thorny topic: it calls into question the entire knowledge foundation upon which strong AI/Robotics is positioned. The author argues for improved monitoring of technological progress and thus favours implementing weaker techniques.
In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. But then, if that is what we want for robots, why should we want something different for human beings? Robot ethics, it seems, presents something of a reductio of Kant’s ethics, one that points to assumptions hidden in the very fabric of the Kantian moral enterprise, not the least of which is that Kant presumes humans to be fallen creatures. Religious doctrine, in other words, infects Kant’s attempt to derive morality from reason. This paper will demonstrate that this is so.
This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget, as well as Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
Young children generally learn words from other people. Recent research has shown that children can learn new actions and skills from nonhuman agents. This study examines whether young children can learn words from a robot. Preschool children were shown a video in which either a woman (human condition) or a mechanical robot (robot condition) labeled novel objects. Then the children were asked to select the objects according to the names used in the video. The results revealed that children in the human condition were more likely to select the correct objects than those in the robot condition. Nevertheless, the five-year-old children in the robot condition performed significantly better than chance level, while the four-year-olds did not. Thus there is a developmental difference in children's potential to learn words from a robot. The results contribute to our understanding of how children interact with non-human agents. Keywords: developmental cybernetics; word learning; social cognition; cognitive development.
Under what conditions can robots become companions and what are the ethical issues that might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depends (among other things) on the robot’s capacity to be a recipient of human empathy, and that one necessary condition for this to happen is that the robot mirrors human vulnerabilities. For the purpose of these arguments, I make a distinction between empathy-as-cognition and empathy-as-feeling, connecting the latter to the moral sentiment tradition and its concept of “fellow feeling.” Furthermore, I sympathise with the intuition that vulnerability mirroring raises the ethical issue of deception. However, given the importance of appearance in social relations, problems with the concept of deception, and contemporary technologies that question the artificial-natural distinction, we cannot easily justify the underlying assumptions of the deception objection. If we want to hold on to them, we need convincing answers to these problems.
This paper presents a series of four single-subject experiments aimed at investigating whether children with autism show more social engagement when interacting with the Nao robot, compared to a human partner, in a motor imitation task. The Nao robot imitates gross arm movements of the child in real time. Different behavioral criteria (i.e. eye gaze, gaze shifting, free initiations and prompted initiations of arm movements, and smile/laughter) were analyzed based on the video data of the interaction. The results are mixed and suggest high variability in reactions to the Nao robot. For Child2 and Child3, the results indicate no effect of the Nao robot on any of the target variables. Child1 and Child4 showed more eye gaze and smile/laughter in the interaction with the Nao robot compared to the human partner, and Child1 showed a higher frequency of motor initiations in the interaction with the Nao robot compared to the baselines, but not with respect to the human interaction. The robot proved to be a better facilitator of shared attention only for Child1. Keywords: human-robot interaction; assistive robotics; autism.
To let humanoid robots behave in a socially adequate manner in a future society, we started to explore laughter as an important para-verbal signal known to influence relationships among humans rather easily. We investigated how the naturalness of various types of laughter in combination with different humanoid robots was judged, first, within a situational context suitable for laughter and, second, without describing the situational context. Given the variety of human laughter, do people prefer a certain style for a robot’s laughter? And if so, how does a robot’s outer appearance affect this preference, if at all? Is this preference independent of the observer’s cultural background? Those participants who took part in two separate online surveys and were told that the robots would laugh in response to a joke preferred one type of laughter regardless of the robot type. This result is contrasted by a detailed analysis of two further surveys, which took place during presentations at a Japanese and a German high school, respectively. From the results of these two surveys, interesting intercultural differences in the perceived naturalness of our laughing humanoids can be derived, and challenging questions arise that are to be addressed in future research.
It has been proposed that the design of robots might benefit from interactions similar to caregiver-child interactions, which are tailored to children's respective capacities to a high degree. However, so far little is known about how people adapt their tutoring behaviour to robots and whether robots can evoke input that is similar to child-directed interaction. The paper presents detailed analyses of speakers' linguistic and non-linguistic behaviour, such as action demonstration, in two comparable situations: in one experiment, parents described and explained to their nonverbal infants the use of certain everyday objects; in the other experiment, participants tutored a simulated robot on the same objects. The results, which show considerable differences between the two situations on almost all measures, are discussed in the light of the computer-as-social-actor paradigm and the register hypothesis. Keywords: child-directed speech (CDS); motherese; robotese; motionese; register theory; social communication; human-robot interaction (HRI); computers-as-social-actors; mindless transfer.
Very encouraging results have been obtained from a new program that derives a dense three-dimensional evidence grid representation of a robot's surroundings from wide-angle stereoscopic images. The program adds several spatial rays of evidence to a grid for each of about 2,500 local image features chosen per stereo pair. It was used to construct a 256x256x64 grid, representing 6 by 6 by 2 meters, from a hand-collected test set of twenty stereo image pairs of an office scene. Fifty-nine stereo pairs of an 8 by 8 meter laboratory were also processed. The positive (probably occupied) cells of the grids, viewed in perspective, resemble dollhouse scenes. Details as small as the curvature of chair armrests are discernible. The processing time, on a 100 MIPS Sparc 20, is less than five seconds per stereo pair, and total memory is under 16 megabytes. The results seem abundantly adequate for very reliable navigation of freely roaming mobile robots, and plausibly adequate for shape identification of objects bigger than 10 centimeters. The program is a first proof of concept, and awaits optimizations, enhancements, variations, extensions and applications.
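The core update the abstract describes, casting a ray of evidence through the grid for each stereo feature, can be sketched as follows. This is a toy illustration, not the paper's program: the grid dimensions, cell size, and log-odds evidence weights below are assumptions chosen for the example.

```python
import numpy as np

GRID_SHAPE = (64, 64, 16)   # coarse stand-in for the paper's 256x256x64 grid
CELL_SIZE = 0.1             # metres per cell (assumed)
LOG_ODDS_EMPTY = -0.4       # assumed evidence weights, not the paper's values
LOG_ODDS_OCCUPIED = 2.0

def update_ray(grid, origin, direction, depth):
    """Accumulate evidence along one ray of sight.

    Cells the ray passes through gather "empty" evidence; the cell at the
    measured stereo depth gathers "occupied" evidence.
    origin: (x, y, z) in metres; direction: unit vector; depth: metres.
    """
    n_steps = int(round(depth / CELL_SIZE))
    for i in range(n_steps + 1):
        point = origin + direction * (i * CELL_SIZE)
        idx = tuple(int(c / CELL_SIZE) for c in point)
        if any(j < 0 or j >= s for j, s in zip(idx, grid.shape)):
            return  # ray left the mapped volume
        grid[idx] += LOG_ODDS_OCCUPIED if i == n_steps else LOG_ODDS_EMPTY

grid = np.zeros(GRID_SHAPE)  # log-odds evidence; 0 means unknown
update_ray(grid, np.array([3.2, 3.2, 0.8]), np.array([1.0, 0.0, 0.0]), 1.5)
occupied = grid > 0.0        # positive cells are the "probably occupied" ones
```

Summing such rays over thousands of features per stereo pair lets independent, noisy observations reinforce each other, which is what yields the dense "dollhouse" reconstructions the abstract mentions.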
Making interactions between humans and artificial agents successful is a major goal of interaction design. The aim of this paper is to provide researchers conducting interaction studies a new framework for the evaluation of robot believability. By critically examining the ordinary sense of believability, we first argue that currently available notions of it are underspecified for rigorous application in an experimental setting. We then define four concepts that capture different senses of believability, each of which connects directly to an empirical methodology. Finally, we show how this framework has been and can be used in the construction of interaction studies by applying it to our own work in human-robot interaction.
This study investigates the influence of a robot's speech rate. In human communication, slow speech is considered boring, speech at normal speed is perceived as credible, and fast speech is perceived as competent. To seek the appropriate speech rate for robots, we test whether these tendencies are replicated in human-robot interaction by conducting an experiment with four rates of speech: fast, normal, moderately slow, and slow. Our experimental results reveal a rather surprising trend. Participants prefer normal and moderately slow speech to fast speech. A robot that provides normal or moderately slow speech is perceived as competent. We further study how context affects this perception. In a situation where the robot and participants talk while walking, we found that slow speech was the most comprehensible. In addition, slow speech is subjectively perceived to be as good as moderately slow and normal speech. Keywords: human-robot interaction; speech rate.
We commonly identify something seriously defective in a human life that is lived in ignorance of important but unpalatable truths. At the same time, some degree of misapprehension of reality may be necessary for individual health and success. Morally speaking, it is unclear just how insistent we should be about seeking the truth. Robert Sparrow has considered such issues in discussing the manufacture and marketing of robot ‘pets’, such as Sony’s doglike ‘AIBO’ toy and whatever more advanced devices may supersede it. Though it is not his only concern, Sparrow particularly criticizes such robot pets for their illusory appearance of being living things. He fears that some individuals will subconsciously buy into the illusion, and come to sentimentalize interactions that fail to constitute genuine relationships. In replying to Sparrow, I emphasize that this would be continuous with much of the minor sentimentality that we already indulge in from day to day. Although a disposition to seek the truth is morally virtuous, the virtue concerned must allow for at least some categories of exceptions. Despite Sparrow’s concerns about robot pets (and robotics more generally), we should be lenient about familiar, relatively benign, kinds of self-indulgence in forming beliefs about reality. Sentimentality about robot pets seems to fall within these categories. Such limited self-indulgence can co-exist with ordinary honesty and commitment to truth.