This paper presents a cross-cultural study on people's negative attitudes toward robots. 467 participants from seven different countries completed the Negative Attitudes towards Robots Scale (NARS) survey, which consists of 14 questions in three clusters: attitude towards the interaction with robots, attitude towards the social influence of robots, and attitude towards emotions in interaction with robots. Around one half of the participants were recruited at local universities, and the other half was approached through Aibo online communities. The participants' cultural background had a significant influence on their attitude, and the Japanese were not as positive as stereotypically assumed. The US participants had the most positive attitude, while participants from Mexico had the most negative attitude. The participants from the online communities were more positive towards robots than those who were not involved. Previous experience in interacting with Aibo also had a positive effect, but owning an Aibo did not improve their attitude.
Anthropomorphism is a phenomenon that describes the human tendency to see human-like shapes in the environment. It has considerable consequences for people's choices and beliefs. With the increased presence of robots, it is important to investigate the optimal design for this technology. In this paper we discuss the potential benefits and challenges of building anthropomorphic robots, from both a philosophical perspective and from the viewpoint of empirical research in the fields of human–robot interaction and social psychology. We believe that this broad investigation of anthropomorphism will not only help us to understand the phenomenon better, but can also indicate solutions for facilitating the integration of human-like machines in the real world.
Exploring the Abuse of Robots. Christoph Bartneck & Jun Hu - 2008 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 9 (3): 415-433.
Robots have been introduced into our society, but their social role is still unclear. A critical issue is whether the robot's exhibition of intelligent behaviour leads to the users' perception of the robot as being a social actor, similar to the way in which people treat computers and media as social actors. The first experiment mimicked Stanley Milgram's obedience experiment, but on a robot. The participants were asked to administer electric shocks to a robot, and the results show that people have fewer concerns about abusing robots than about abusing other people. We refined the methodology for the second experiment by intensifying the social dilemma of the users. The participants were asked to kill the robot. In this experiment, the intelligence of the robot and the gender of the participants were the independent variables, and the users' destructive behaviour towards the robot was the dependent variable. Several practical and methodological problems compromised the acquired data, but we can conclude that the robot's intelligence had a significant influence on the users' destructive behaviour. We discuss the encountered problems and suggest improvements. We also speculate on whether the users' perception of the robot as being "sort of alive" may have influenced the participants' abusive behaviour.
The Carrot and the Stick. Christoph Bartneck, Juliane Reichenbach & Julie Carpenter - 2008 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 9 (2): 179-203.
This paper presents two studies that investigate how people praise and punish robots in a collaborative game scenario. In a first study, subjects played a game together with humans, computers, and anthropomorphic and zoomorphic robots. The different partners and the game itself were presented on a computer screen. Results showed that praise and punishment were used the same way for computer and human partners. Yet robots, which are essentially computers with a different embodiment, were treated differently. Very machine-like robots were treated just like the computer and the human; robots very high on anthropomorphism/zoomorphism were praised more and punished less. However, barely any of the participants believed that they actually played together with a robot. After this first study, we refined the method and also tested whether the presence of a real robot, in comparison to a screen representation, would influence the measurements. The robot, in the form of an AIBO, would either be present in the room or only be represented on the participants' computer screen. Furthermore, the robot would make either 20% or 40% errors in the collaborative game. We automatically measured the praising and punishing behaviour of the participants towards the robot and also asked the participants to estimate their own behaviour. Results show that even the presence of the robot in the room did not convince all participants that they played together with the robot. To gain full insight into this human–robot relationship it might be necessary to directly interact with the robot. The participants unconsciously praised AIBO more than the human partner, but punished it just as much. Robots that adapt to the users' behaviour should therefore pay extra attention to the users' praises, compared to their punishments.