
Cognition

Volume 95, Issue 3, April 2005, Pages B49-B57

Brief article
Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons

https://doi.org/10.1016/j.cognition.2004.08.001

Abstract

As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M., Barna, J., & DiAdamo, C., (2000). Precursors to the development of intention at 6 months: understanding people and their actions. Developmental Psychology, 36, 5, 627–634.], we investigated whether 10-month-old infants expected people to talk to a humanoid robot. In a familiarization period, each infant observed an actor with one of three robots: an interactive robot behaving like a human, a non-interactive robot remaining stationary, or a non-interactive robot behaving like a human. In subsequent test trials, the infants were shown another actor talking to the robot and to the first actor. We found that infants who had previously observed the interactive robot showed no difference in looking time between the two types of test events. Infants in the other conditions, however, looked longer at the test event in which the second experimenter talked to the robot than at the one in which the second experimenter talked to the person. These results suggest that infants interpret the interactive robot as a communicative agent and the non-interactive robot as an object. Our findings imply that infants categorize interactive humanoid robots as a kind of human being.

Introduction

Many robots that have appeared in science-fiction movies (such as “R2-D2”1 in Star Wars) are mechanical in appearance, but perform many communicative actions, such as uttering beeping sounds, moving their heads, and/or blinking a light. Almost everyone who watches R2-D2 will be convinced that the robot has something like a mind.

As technology advances, it is no longer fantasy or fiction that human beings can live with robots. “AIBO”,2 a dog-like robot, is enjoyed by many ordinary Japanese families. More humanlike robots such as “ASIMO”3 (Sakagami, 2002), “QRIO”,4 and “Robovie” (Ishiguro et al., 2001) are also being developed for household use. Although these robots do not exactly resemble human beings, they have equivalent body parts such as a head, a face, hands, arms, a trunk, and so on. Technology also allows us to interact with robots via a variety of gestures and voices. Robovie, for example, can make hand gestures and eye contact, and also speaks natural languages. The robot can behave like a human being and communicate verbally and non-verbally. This raises the question of how infants categorize such robots.

Developmental psychology has addressed the issue of how infants characterize humans as agents having mental states. Some studies suggest that infants attribute mental states only to humans (Meltzoff, 1995, Field et al., 1982, Legerstee, 1991). For instance, using infants' looking time as a measure of violation of expectation, Legerstee et al. (2000) found that infants do not expect people to talk to objects. In their experiment, infants were shown an actor who talked to something hidden behind a curtain. When the curtain was opened, an object such as a broom or a person appeared. The infants looked longer when the object appeared than when the person appeared. These results suggested that infants do not expect non-human objects to be talked to, and that infants think that only humans can communicate with humans. In the context of this research, it is understandable that infants did not perceive objects as agents, because these studies used ordinary objects like wooden rods, machine arms, dolls, and a radio-controlled toy having few human-like features and motions (Legerstee, 1994, Legerstee, 1997, Legerstee, 2001, Woodward, 1998, Meltzoff, 1995, Poulin-Dubois et al., 1996). Other studies have suggested, however, that infants attribute mental states to non-human objects that appear to interact with a person: a box-shaped machine that beeped and flashed lights, or a small fur ball that made noises and flashed lights (Movellan et al., 1987; Johnson et al., 1999, Johnson et al., 2001). These results imply that interactivity between humans and objects is the key factor in mental attribution; however, interesting questions remain to be answered: do infants characterize humanoid robots that behave like humans as mentalistic agents? Or do infants also attribute mental states to humanlike but non-interactive robots?

In this study, we used an experimental paradigm similar to that of Legerstee et al. (2000) to investigate whether 10-month-old infants expected an experimenter to talk to the humanoid robot “Robovie”. To show infants how the robot behaved and interacted with people, we added a familiarization period prior to the test trials; in the test trials, another actor talked to the robot and to the person.

There were three experimental conditions. The stimuli in the familiarization of these conditions are as follows:

  • (1) Interactive robot condition: the robot behaved like a human, and the person and the robot interacted with each other.

  • (2) Non-active robot condition: the robot was stationary while the person was active and talked to the robot.

  • (3) Active robot condition: the robot behaved like a human, and the person was stationary and silent.

In the latter two conditions, there was no two-way human–robot interaction. During the test trials, infants saw a person speak to the robot and to the other person. If infants regard only interactive robots as communicative agents, then infants in the interactive robot condition alone will show no surprise, implying that interactivity is crucial for infants' attribution of social mental states to objects. If, however, infants regard humanlike robots as communicative agents simply because they behave like humans, then infants in both the interactive robot and active robot conditions will show no surprise.


Participants

Fifty-eight 10-month-old infants (M=10.15 months of age, SD=0.51) were randomly assigned to one of the three experimental groups or to the control group in a between-subjects design. Ten infants were excluded from the study: two because of experimenter error, two because of sleepiness, and six because of fussiness. The remaining 48 infants (36 in the experimental groups [19 males, 17 females] and 12 in the control group [eight males, four females]) were healthy, full-term babies.

Stimuli

Infants viewed

Familiarization period

During the familiarization period, infants' looking time averaged 45 s (SD=9.7) in the interactive robot condition, 42 s (SD=5.7) in the non-active robot condition, and 48 s (SD=5.2) in the active robot condition. A one-way analysis of variance (ANOVA) was conducted to determine whether the infants in the three conditions had similar attention levels measured by looking time during the familiarization period. There were no significant main effects within the familiarization period, [F (2,
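The one-way ANOVA reported above can be sketched in a few lines of code. The looking-time values below are invented for illustration (only the group means and SDs in the text come from the study); the F statistic is computed from first principles rather than with a statistics library.

```python
# Sketch of a one-way ANOVA comparing looking times across the three
# familiarization conditions. The per-infant values are hypothetical.

def one_way_anova_F(groups):
    """Return the F statistic for a one-way ANOVA over lists of samples."""
    k = len(groups)                           # number of conditions
    n = sum(len(g) for g in groups)           # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical looking times (seconds), four infants per condition:
interactive = [44, 46, 47, 43]   # mean 45
non_active  = [41, 43, 42, 42]   # mean 42
active      = [49, 47, 48, 48]   # mean 48

F = one_way_anova_F([interactive, non_active, active])
print(round(F, 2))  # → 23.14
```

A non-significant F (as reported in the text) would indicate that attention during familiarization did not differ reliably across conditions; the large F here simply reflects the artificially tight invented samples.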

Discussion

The purpose of this study was to determine whether infants think of humanoid robots as communicative agents. Recently, Legerstee et al. (2000) demonstrated that infants expect people to communicate only with people and not with non-human objects. However, the question of how infants characterize objects that behave like humans had not been resolved. Using a similar approach to Legerstee et al. (2000), we investigated whether infants expect people to talk to humanoid robots. In this study, there

Acknowledgements

This research was supported in part by a contract with the Telecommunications Advancement Organization of Japan and by the Japan Science and Technology Corporation, PRESTO. It was also supported by a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science and Technology, Japan (No. 30323455) and partly supported by the Center for Evolutionary Cognitive Science at the University of Tokyo. The authors wish to thank Yoshimi Uezu, Kayoko Iwase, Mayumi


