1 Introduction

Social robots, which are designed to interact socially with people (Breazeal et al. 2016), are becoming increasingly present in both personal and professional domains (e.g., Lutz et al. 2019). As a result, human-robot interaction (HRI) can be expected to occur more frequently in the near future, and human-robot relationships to become more common (Edwards et al. 2019). Children, in particular, have a strong tendency to relate socially to non-human entities (Epley et al. 2007). Although child-robot relationships are thus likely to emerge, it remains unclear to what degree these social bonds will resemble children’s relationships with people, pets, and devices (Kory Westlund et al. 2018). That is, research has demonstrated that children’s hybrid conceptualization of social robots overlaps with, but does not entirely coincide with, their conceptualizations of humans, animals, or objects (Kahn et al. 2013). At the same time, children do perceive robots to be social others that could potentially be their friends; mental others that are intelligent and emotional; and partly moral others that deserve to be treated fairly (Kahn et al. 2012). Social robots may thus have several practical applications. They could, for instance, accompany hospitalized children at times when no human presence is allowed (e.g., during radiation therapy; Ligthart et al. 2019a, b), or support diabetic children in self-managing their condition (e.g., Baroni et al. 2014).

At the same time, there are concerns about how children’s perception of robots as social, mental, and moral others may be encouraged by the way in which robots are presented to children. First, questions have been raised about the potentially ‘deceptive’ nature of the often-employed Wizard-of-Oz (WOZ) set-up, in which a robot is remotely controlled during the interaction (for a discussion, see Kory Westlund and Breazeal 2015). Social robots are currently still rather limited in their ability to interact autonomously with people, and with children in particular, in a manner that is both socially advanced and technologically reliable (e.g., Tolksdorf et al. 2020; van den Berghe et al. 2019). Therefore, child-robot interaction (CRI) studies often rely upon the WOZ set-up (Kory Westlund and Breazeal 2016; van Straten et al. 2020b). Robots’ limited social capacities can in this way be overcome to some extent (e.g., Stower et al. 2021), giving children “the impression that they are interacting with a [robot] that understands [them] as well as another human would” (Kelley 1984, p. 27) and that appears to be autonomous and thus qualifies for some degree of moral standing and accountability (Johnson 2011; Neeley 2014).

A second concern about current social robots centers on the presentation of robots to children with a backstory that does not accurately reflect their mechanical nature, thus possibly strengthening children’s social behavior and feelings toward them (see Kory Westlund and Breazeal 2019b). For example, in many CRI studies in which (humanoid) robots interact with children verbally, robots engage in self-description, usually by referring to themselves in the first person and telling children about themselves. This act of self-description implies that a robot possesses knowledge about itself, or that it has a mental representation of its ‘self’ (Lewis 2011). Moreover, when a robot shares information about itself, children may get the impression that the robot is a social actor with a personality of its own (Kory Westlund and Breazeal 2019b; Ligthart et al. 2020), which may lead to the ascription of traits, dispositions, and capacities to the robot (Epley and Waytz 2010). The attribution of a personal identity, in turn, entails that the robot is autonomous (Wrigley 2007) and, thus, morally accountable (Johnson 2011; Neeley 2014).

If we knew how child-robot relationships emerge when robots are not presented as more advanced than they currently are, new light would be shed on the societal and ethical discussion surrounding the topic. However, current research is inconsistent about the effects of presenting robots as they are, for example by making children aware of robots’ remotely controlled nature (e.g., Cameron et al. 2017; de Haas et al. 2016; Tozadore et al. 2017; Turkle et al. 2006). In addition, it remains unclear how a robot’s self-description (i.e., conveying self-related information from a first-person perspective) affects children’s relationship formation with a robot as well as their perceptions of it.

Leite and colleagues (2017, 2016) compared, in two studies, how children in the age ranges of 4–6 and 7–10, respectively, responded to a social robot. These studies consistently showed children in the older age group to be more critical of, and sensitive to, aspects of the robot’s communication (Leite et al. 2017; Leite and Lehman 2016). Indeed, children in middle childhood (i.e., 6–12 years of age; Cole et al. 2005) become increasingly sensitive to social conventions and discourse flexibilities (e.g., Stafford 2004). In addition, they become increasingly able to discern facts from fiction (e.g., Stafford 2004). Against this background, a robot’s self-description and transparent teleoperation may affect how children in middle childhood perceive and relate to a social robot.

Both self-description and transparency about the teleoperation procedure tap into the robot’s status as more or less of a ‘self’: an entity that controls its own actions and has its own unique backstory. They may influence children’s perceptions of social robots and their relationship formation with them independently of each other, but they may also interact: The effects of self-description may depend on whether children are informed in a transparent way about the robot. We therefore studied, in a two-factorial experiment among children aged 7–10 years, whether (a) being transparent about the WOZ set-up before an interaction and (b) a robot’s engagement in self-description affect children’s perception of, and relationship formation with, a humanoid robot. Research on children’s interactions and relationship formation with social robots is still at an early stage (e.g., Peter et al. 2019; Stower et al. 2021), and modeling relationships between various CRI-related concepts requires more basic, preparatory study (e.g., Oliveira et al. 2021; van Straten et al. 2020b). Hence, it seems too early for well-founded predictions about complex interrelationships between children’s robot perceptions and child-robot relationship formation, and for conducting the empirically more demanding studies such predictions would require. We therefore decided to focus initially on the direct effects of transparency and self-description on children’s perceptions of, and relationship formation with, a social robot.

2 Theoretical framework

2.1 (Transparent) teleoperation in CRI

Several CRI studies have been transparent to children about the WOZ procedure, either by informing children about the teleoperation set-up or by demonstrating it to them. In a study among 8- to 13-year-olds, Turkle et al. (2006) found that informing children, after their interaction with a humanoid robot, about the robot’s teleoperated working influenced neither children’s perception of the robot as alive, intelligent, emotional, and humanlike, nor their sense of relationship with it. Tozadore and colleagues (2017), in contrast, reported that children (aged 7–11) perceived a humanoid robot to be less intelligent after hearing that it had been remotely controlled during their conversation with it. When, in yet another study, Cameron et al. (2017) overtly activated a humanoid robot’s emotional expressions by pressing a button on the robot’s chest, children younger than 6 years of age perceived the robot as machine- rather than person-like. Yet, children older than 6 years of age considered the robot a machine regardless of its apparent autonomy (Cameron et al. 2017).

Likewise, de Haas et al. (2016) found that 7- to 8-year-old children’s perceptions of, and behavior toward, a humanoid robot did not differ between conditions in which it functioned autonomously or was remotely controlled. However, de Haas et al. (2016) did not actively bring the teleoperation procedure to children’s attention, and it may have gone unnoticed. This idea is supported by an exploratory study on child-computer interaction, which found that children (aged 12–13) were generally unaware of a teleoperator’s presence (Read et al. 2005). Finally, three studies used robots incapable of verbal interaction. After watching a robotic dog perform a series of movements, children aged 5–7 attributed less physical and emotional sentience, as well as less moral standing, to the robot when its movements had been overtly activated by an experimenter (Chernyak and Gary 2016). Similarly, children aged 4–5 were less convinced of a mechanomorphic robot’s memory and vision when its movements had been overtly teleoperated, while their belief in its animacy remained intact (Somanader et al. 2011). Yet, Bumby and Dautenhahn (1999) reported that children aged 7–11 ascribed free will to a mechanomorphic robot and continued to anthropomorphize it after seeing a controlling program being downloaded onto the robot.

Some of the abovementioned studies suggest that transparency about the WOZ set-up is effective in changing children’s robot perception. However, the findings of these studies are difficult to compare. Moreover, while Cameron et al. (2017) shed light on children’s categorization of humanoid robots as person- or machine-like, more detailed insights into the effects of transparent teleoperation on children’s perception of, and relationship formation with, such robots are currently lacking. In addition, it remains unknown whether informing children about a humanoid robot’s teleoperated working before their interaction with the robot is effective (see Kory Westlund and Breazeal 2016, for a research proposal).

Regardless of whether transparency about the WOZ set-up is effective or not, giving children the impression that a robot functions autonomously while it is actually being teleoperated may raise ethical questions (see Kory Westlund and Breazeal 2015, 2016). For instance, Scheutz (2012) has outlined that perceived autonomy is crucial to the perception of robots as social, humanlike agents. He emphasizes that robots are not agents and argues that ‘falsely pretending’ the opposite may lead to the emergence of ‘unidirectional emotional bonds’ that may have negative consequences for the human (see Scheutz 2012). Indeed, agency plays an important role in mind perception (see Gray et al. 2007), which largely determines whether we attribute humanlike characteristics to, and form humanlike relationships with, nonhuman others (Epley and Waytz 2010). From both empirical and normative viewpoints, it thus appears timely to investigate whether transparency about a robot’s teleoperated working might alter children’s perception of the robot as well as their relationship formation with it.

Transparency about the WOZ set-up may affect at least five concepts relevant to children’s perception of robots as social, mental, and moral others. First, transparent teleoperation may encourage children to think of a robot as an object rather than an ‘other’, thus affecting their perception of the robot’s animacy (i.e., the “perception of life”, Bartneck et al. 2009, p. 74). Second, transparency may influence their perception of the robot’s autonomy, or “the degree to which the decision-making process used to determine how [its goals] should be pursued, is free from intervention by any other agent” (Barber et al. 2000, p. 133). Third, children’s anthropomorphic thinking, or “the tendency to imbue the real or imagined behavior of nonhuman agents with humanlike characteristics, motivations, intentions, or emotions” (Epley et al. 2007, p. 864), may be affected. Anthropomorphism, fourth, interacts with social presence, or the degree to which the artificiality of a robot goes unnoticed (Lee 2004), and is influenced, fifth, by the degree to which a robot is perceived to be similar to the self (Ames 2004; Epley et al. 2007).

As children’s reasoning about humans and robots overlaps (Kahn et al. 2012, 2013), the literature on interpersonal relationship formation is useful to determine concepts that are relevant to the emergence of a social relationship between a child and robot. Feelings of closeness and trust seem of primary interest here: They develop interdependently and are central to both the emergence of interpersonal relationships in general (Berscheid and Regan 2005) and children’s friendships in particular (Bauminger-Zviely and Agam-Ben-Artzi 2014). Closeness constitutes a feeling of intimacy or connectedness that may develop into friendship (Sternberg 1987). Trust, in turn, has been defined as the belief in another person’s benevolence and honesty (Larzelere and Huston 1980).

In line with Kahn et al. (2012), we consider children’s ratings of a robot’s animacy and social presence as indicative of their perception of the robot’s social otherness; children’s ratings of a robot’s anthropomorphism and similarity to themselves as informative about their perception of its mental otherness; and children’s ratings of a robot’s autonomy as providing insight into children’s perception of the robot as a moral other. While Kahn et al. (2012) see child-robot relationship formation as an inherent aspect of children’s treatment of robots as social others, we treat children’s perceptions of, and relationship formation with, social robots as distinct processes. Considering someone a social entity does not automatically entail that one considers this person to be a (potential) friend. In a similar fashion, we hold that perceiving a robot as a social other and experiencing a relationship with it are separate things.

A recent study found that children’s awareness of a social robot’s lack of humanlike psychological capacities (i.e., capacities of ‘mental others’) decreased children’s ratings of a robot’s animacy, anthropomorphism, social presence, and perceived similarity to the self, as well as children’s trust in the robot (van Straten et al. 2020c). Thus, informing children about a robot’s technological rather than humanlike status alters children’s robot perceptions and affects child-robot relationship formation at least partially. As a robot’s teleoperated working implies that it is not autonomous, we additionally expect that transparency about the teleoperation procedure will decrease children’s perceptions of the robot’s autonomy. Finally, because friendships can be understood as relationships that arise between equal, autonomous entities (Emmeche 2014), “that one chooses to enter—and can choose to leave” (Keller 1997, p. 159), we expect that children’s feelings of closeness toward, and trust in, a robot will decrease when they realize that the robot is being remotely controlled. This expectation receives support from a qualitative, exploratory study among children in middle childhood, which reported that children sometimes based their level of interpersonal trust in a social robot upon their belief in its technological capacities (van Straten et al. 2018). In addition, a recent study on children’s first impressions of a robot’s trustworthiness found that children’s perception of a social robot’s competence predicted their level of trust in the robot (Calvo-Barajas et al. 2020). In summary, we hypothesized that:

  • Hypothesis 1a (H1a) Transparency about the teleoperation procedure decreases children’s ratings of a robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity.

  • Hypothesis 1b (H1b) Transparency about the teleoperation procedure decreases children’s feelings of closeness toward and trust in a robot.

2.2 Self-description: Telling you about me

For interpersonal relationships to emerge and develop, it is crucial that interactants provide each other with information about themselves (see Roloff 1976). As described in Berger and Calabrese’s (1975) Uncertainty Reduction Theory, this is especially important in initial interactions and early stages of relationship formation, in which the mutual seeking and sharing of self-related information can decrease people’s uncertainty about each other. In its most basic form, sharing self-related information implies self-description, or the act of sharing factual information about oneself (see Culbert 1967, as cited in Gilbert 1976). While relationship formation generally benefits most from the sharing of increasingly intimate information (Gilbert 1976), the importance of intimacy to friendships is still developing during the primary school years (e.g., Furman and Bierman 1984; Laursen and Hartup 2002) and continues to increase during adolescence (e.g., Bauminger et al. 2008; Berndt 2004). Hence, self-description may suffice for the emergence of children’s early friendships, whether with peers or with robots.

Accordingly, CRI research has suggested that a robot’s engagement in self-description fosters child-robot relationship formation (Kanda et al. 2007; Ligthart et al. 2019a, b; Shiomi et al. 2015; van der Drift et al. 2014). However, these studies did not focus on self-description as an isolated feature of CRI. It is, therefore, difficult to disentangle the effects of self-description in particular from the effects of the robots’ behavior more generally. Moreover, none of the studies adopted an experimental design with self-description as the independent variable, which impairs causal conclusions.

Self-description can be operationalized as self-reference through the use of first-person pronouns when sharing factual, self-related information (see Curtis 1981, for a similar approach in a related context). When referring to the self as an “I”, the provided information is framed as a backstory unique to the one who is speaking. The adoption of a third-person perspective, in contrast, will turn the information into a general description of a larger group of entities (e.g., people, robots). Findings on the effects of pronoun use by nonhuman entities in human-nonhuman communication are mixed. For instance, Brennan and Ohaeri (1994) found that when a computer agent used first-person pronouns to refer to itself, people used more politeness markers (e.g., please, thank you) and were more likely to use the pronoun “you” to refer to the agent in their responses. This may indicate that they considered the computer agent more of a social agent when it used first-person pronouns (Brennan and Ohaeri 1994). Yet, a study on the effects of personal formulations (i.e., containing pronouns) used by a voice agent found no effects on people’s evaluations of the agent’s humanlikeness (Kruijff-Korbayová et al. 2008). In a CRI context, Kory Westlund and colleagues (2016) found that children’s self-reported robot perceptions were unaffected by an experimenter’s reference to the robot as “the robot” versus “a friend” and using the third-person “it” versus second-person pronouns, a subtle difference in children’s gaze patterns notwithstanding.

However, when a robot itself uses first-person pronouns (or not), different findings may emerge than when an experimenter varies the use of pronouns when describing a robot (as in Kory Westlund et al. 2016). Moreover, the aforementioned studies investigated the effects of pronoun use but did not employ the use of pronouns as a means to operationalize self-description. In an experimental study among adults, Eyssel et al. (2017) found that people’s evaluations of a robot were not affected by the robot’s self-description (although controlling for individual differences in the tendency to anthropomorphize revealed a significant effect of self-description on mind attribution). However, in this study, self-description was not operationalized as pronoun use. As Nass and Brave (2005, p. 115) argue, “[w]hen a person avoids the use of I, there must be a reason [and] when personhood is in question, the use of I can resolve the ambiguity”. They add that not using first-person pronouns when speaking about oneself communicates that one does not have “full human status” (Nass and Brave 2005, p. 115). As a consequence, a robot’s avoidance of personal pronouns may decrease children’s perception of the robot as a social, mental, and moral other. Given the centrality of sharing self-related information and reducing the other’s uncertainty about the self to the emergence of interpersonal relationships, self-description—operationalized as self-reference through the use of first-person pronouns—may also affect child-robot relationship formation. Therefore, our second hypothesis predicted:

  • Hypothesis 2a (H2a) A robot’s engagement in self-description increases children’s ratings of the robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity.

  • Hypothesis 2b (H2b) A robot’s engagement in self-description increases children’s feelings of closeness toward and trust in the robot.

While research on the topic is scarce, the findings presented in an unpublished study by Huang and colleagues (2001), which is available online and cited in Nass and Brave (2005), indicate that the degree to which an artificial entity is perceived to be humanlike may also influence people’s responses to its use of first-person pronouns. Huang et al. (2001) found that people felt comfortable with a recorded, but not with a synthetic (i.e., non-human, artificial) voice engaging in self-reference using the pronoun “I”. Moreover, their trust in a synthetic voice system decreased when it referred to itself by saying “I” (Huang et al. 2001). Thus, the type of voice (recorded vs. synthetic) interacted with pronoun use in that the effects of the system’s use of first-person pronouns were dependent upon the implemented type of voice. As Nass and Brave (2005, p. 119) put it, when the system used a synthetic voice, its use of pronouns was considered an “attempt to claim humanity” that caused suspicion, leading to negative evaluations of the system.

Huang et al. (2001) studied pronoun use to establish (im)personal formulations rather than self-description, but a similar interaction effect may occur in the present study: When the teleoperation procedure is transparent, children may feel that the robot’s self-description is out of place because the robot is, in fact, not an independent entity (i.e., not a ‘self’). This discrepancy may further increase children’s awareness of the robot’s inanimate, machinelike status. In terms of child-robot relationship formation, when children know the robot is being remotely controlled, they may understand that the robot’s engagement in self-description is unspontaneous and, therefore, less meaningful. In contrast, when children are not aware of the teleoperation procedure, it may appear to them as if the robot chooses to tell them something about itself. This impression may give children the feeling that the robot is actually invested in the process of getting to know each other, which may be beneficial to their experience of the robot as a potential friend. Therefore, our third hypothesis predicted that the effect of a robot’s engagement in self-description on children’s perception of, and relationship formation with, the robot is moderated by transparency about the teleoperation procedure:

  • Hypothesis 3a (H3a) When the teleoperation procedure is transparent, as opposed to when it is not transparent, the robot’s engagement in self-description will decrease children’s ratings of the robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity.

  • Hypothesis 3b (H3b) When the teleoperation procedure is not transparent, the positive effect of the robot’s engagement in self-description on closeness and trust will be stronger than when the teleoperation procedure is transparent.

3 Methods

We conducted a two-factorial experiment with teleoperation (overt/covert) and self-description (operationalized as self-reference through personal pronouns) as between-subjects factors. Before we started the data collection, we obtained ethical approval for this study from the Ethics Review Board of the Faculty of Social and Behavioral Sciences of the University of Amsterdam.

3.1 Participants

We collected data at four primary schools across the Netherlands. We asked for active written consent from the schools as well as from children’s parents. On the parental consent form, parents were asked to report whether their child had any medical condition. In an accompanying letter that informed parents about the study, we explained that although all children would be welcome to participate, data from children with medical conditions that could interfere with the study’s scientific goals would be excluded from analyses.

We were able to collect data from 172 children in the age range of 7–10 years. The data of four children were excluded from analyses because they had participated in an earlier data collection of ours (one child); did not properly understand the questionnaire procedure (one child); or were diagnosed with Autism Spectrum Disorder (ASD; two children). We excluded the data of children with ASD because, first, these children tend to experience difficulties with respect to social interactions (American Psychiatric Association 2013) and relationships (e.g., Eisenmajer et al. 1996), and, second, ASD seems to be associated with atypical anthropomorphic reasoning (Epley et al. 2007). We therefore analyzed the data of 168 children (74 male, 94 female, Mage = 9.02, SDage = 0.71), who had been randomly assigned to the four experimental groups. We found no significant differences in age, F(3, 164) = 0.930, p = .428, or biological sex, χ2 (3, N = 168) = 0.193, p = .979, across the groups, which indicates that the randomization procedure was successful. A few children occasionally indicated that they did not know how to answer particular items of the questionnaire, resulting in missing values; these children were excluded from the analyses of the respective measures.
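For readers who wish to reproduce such a randomization check outside SPSS, the sketch below illustrates the two tests reported above in Python. It is an illustration only, not the authors’ analysis code; the file name and the column names (group, age, sex) are hypothetical placeholders.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("cri_data.csv")  # hypothetical data file, one row per child

# One-way ANOVA: does mean age differ across the four experimental groups?
ages_by_group = [g["age"].to_numpy() for _, g in df.groupby("group")]
f_val, p_age = stats.f_oneway(*ages_by_group)  # reported: F(3, 164) = 0.930, p = .428

# Chi-square test of independence: is biological sex distributed evenly across groups?
crosstab = pd.crosstab(df["group"], df["sex"])
chi2, p_sex, dof, expected = stats.chi2_contingency(crosstab)  # reported: χ2(3) = 0.193, p = .979

print(f"Age: F = {f_val:.3f}, p = {p_age:.3f}; Sex: chi2 = {chi2:.3f}, p = {p_sex:.3f}")
```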

3.2 Interaction task and manipulation

Each child engaged in one short interaction with the Nao robot (SoftBank Robotics), during which they asked the robot eight pre-determined questions (e.g., “Are you a boy or a girl?”, “Do you ever get tired?”) from a question sheet. When children had difficulty reading the questions, the experimenter helped them. During previous data collections (e.g., van Straten et al. 2020c), children had often tried to ask the robot questions, which we used as inspiration in designing the current interaction scenario.

We made sure that the robot did not engage in any behaviors that could influence children’s perceptions of the robot as alive or humanlike beyond our experimental manipulations. The robot, therefore, did not conform to any social conventions (e.g., greetings, listener responses) and stood completely still, without blinking, throughout the interaction. In the overt teleoperation condition, the experimenter told the child, prior to the interaction, that she would control the robot from a laptop. She explained that Nao could not talk with the child on its own, that she had to press a button upon every question to make the robot respond, and that the child could only pose the first question after she had started up a computer program containing the answers. To children assigned to the covert teleoperation condition, the experimenter said that, as the questions were provided on the question sheet, they would not need her help during the interaction, and that she would therefore go and do something else for a while. The explanations provided across the groups were matched in length.

In the self-description condition, the robot referred to itself using the personal pronoun “I” when answering the child’s questions (e.g., “I’m not a boy, but also not a girl: I’m just a robot”), while in the condition in which the robot did not self-describe, it referred only to robots in general (e.g., “Robots are not boys, but also not girls: Robots are just robots”). Apart from the difference in pronoun use and some minor, unavoidable adjustments, the robot’s answers were identical across conditions.
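To make the button-per-question set-up concrete, the sketch below shows how such a WOZ controller could be implemented with the NAOqi Python SDK (which targets Python 2). This is a minimal sketch under stated assumptions, not the Choregraphe program actually used in the study; the robot address and the truncated answer lists are placeholders.

```python
# -*- coding: utf-8 -*-
# Illustrative WOZ controller sketch (Python 2 / NAOqi SDK); not the study's program.
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559  # hypothetical address of the Nao robot
tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)

# Pre-scripted answers to the eight question-sheet questions. The self-description
# condition uses first-person pronouns; the control condition refers to robots in
# general, with otherwise identical content.
ANSWERS = {
    "self": ["I'm not a boy, but also not a girl: I'm just a robot."],           # + 7 more
    "general": ["Robots are not boys, but also not girls: Robots are just robots."],
}

def run_session(condition):
    # The teleoperator presses Enter after each child question, so that every
    # robot response is triggered by an explicit button press.
    for answer in ANSWERS[condition]:
        raw_input("Child has asked the next question? Press Enter to respond... ")
        tts.say(answer)

run_session("self")
```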

3.3 Procedure

Before taking the first child to the experimental room, the experimenter explained the study procedure to the children at class level. She showed the children a picture of the robot and explained, in age-appropriate language, that participation could be stopped at any moment without justification. Furthermore, children were assured that their data would be stored and processed such that others could not find out who had given which answers. They were given the opportunity to ask additional questions about any aspect of the study procedure. Answers to questions that could influence the findings were postponed until the debriefing.

Once everything was clear, the children came to the experimental room one by one, where the experimenter awaited them. The robot was activated before the children’s arrival. The experimenter asked the child to sit on the floor in front of the robot, indicating that the child could freely determine how close to the robot s/he would like to sit. The experimenter sat down next to the child and asked explicitly whether the child would still like to participate, reminding him/her that the interaction could be stopped at any point in time.

Upon an affirmative answer from the child, the experimenter handed him/her the question sheet and explained that s/he could ask the questions, one by one and in the right order, to the robot. She told the child that the robot’s name was Nao, and that once Nao had answered all the questions, she would have some questions for the child (i.e., the questionnaire). In the overt teleoperation condition, she then explained the WOZ procedure. In the covert teleoperation condition, she told the child that she would go and do something else. The child was asked to save any questions that were not on the question sheet for later. Once the child understood the procedure, the experimenter sat down behind the laptop (see Fig. 1 for a picture of the experimental setting).

Fig. 1 A child interacting with the robot during the experiment (picture taken from the experimenter’s viewpoint and published with active parental consent)

When the robot had answered the last question, the experimenter put it in a stand-by mode (i.e., a seated position) and asked the child to join her at a table. After the child had filled in some demographic information, the experimenter explained the questionnaire procedure, introducing the child to the answer scale and familiarizing him/her with the question format through several practice items (e.g., “I like French fries”; the familiarization phase was inspired by Leite et al. (2017)). Once the child had indicated that s/he was ready to start the questionnaire, the experimenter presented him/her with a series of questions tapping into his/her perception of, and relationship formation with, the robot. The questionnaire ended with a treatment check, which consisted of two semantic differentials. This answer format was explained to the children before they answered the semantic differentials.

When asking the child to return to his/her classroom and call the next child, the experimenter requested that s/he not discuss the content of the interaction and/or questionnaire with other children until the debriefing. When all children had finished their participation, they were debriefed at class level (see Schadenberg et al. 2017, for a similar approach). The experimenter informed children about the robot’s mechanical nature and working and explained the pre-programmed nature of the interaction using a screenshot of a Choregraphe program as an example. She pointed out some differences between robots and humans (i.e., current robots’ lack of truly human capacities). To children who had been exposed to the covert teleoperation condition, she revealed that she had controlled the robot from a distance. Judging from these children’s surprise, the children in the overt teleoperation condition had kept this information secret. The experimenter explained why she had told some children, but not others, about the WOZ procedure in advance. She also indicated that while the robot had said almost exactly the same things to each child, there was one more difference: To some, the robot had referred to itself saying “I”, while to others, it had exclusively talked about “robots” in general. The purpose of this manipulation, too, was explained. To finish the debriefing, children were allowed to pose any remaining questions.

3.4 Measures

The questionnaire consisted of closed-ended questions and used a five-point Likert response scale (see Appendix A). The answer options ran from (1) “does not apply at all” to (5) “applies completely”, and their meaning was illustrated by bars of increasing height that did not, however, contain any indication as to the desirability of the answer options (e.g., colors, smileys; see Severson and Lemm 2016 for the original visual response scale). The suitability of the answer scale for children in this age range was confirmed in earlier data collections (de Jong et al. 2020; van Straten et al. 2020a, c).

The questionnaire first tapped into children’s perceptions of the robot’s animacy, anthropomorphism, social presence, and similarity to themselves. Subsequently, children’s feelings of closeness toward and trust in the robot were assessed, followed by a measure of perceived autonomy and, finally, the treatment check (see Appendix B for the items used to measure each concept). The measures were ordered such that earlier ones would minimally influence later ones. In contrast to the other perception measures, the measure of perceived robot autonomy was placed toward the end of the questionnaire: Because it was a new measure, we wanted to prevent any confusion it might cause from affecting children’s responses to the other measures. The one-factorial structure of the measures of animacy, anthropomorphism, social presence, perceived similarity, closeness, and trust was confirmed in earlier studies (van Straten et al. 2020a, c).

3.4.1 Animacy

We assessed animacy through a four-item scale inspired by two measures of the concept that were used among adults (Bartneck et al. 2009; Ho and MacDorman 2010). We performed a factor analysis (principal axis factoring, direct oblimin rotation; the same procedure was used for all scales) that confirmed the one-factorial structure of the scale, which explained 33% of the variance. One item (i.e., “Nao can die”) had a factor loading of only .134, which resulted in low internal consistency of the scale (α = .58). Removing the item increased the internal consistency to α = .69. We thus performed our analyses using a three-item version of the measure rather than the four-item version as originally administered. An index score of animacy was computed by averaging the remaining items (M = 2.95, SD = 0.91, skewness = − 0.386, kurtosis = − 0.131).
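The item-screening workflow described above (factor analysis, inspection of loadings, internal consistency before and after item removal, and index-score computation) can be sketched in Python as follows. This is an illustration only, not the authors’ SPSS procedure; the data file and the item labels a1–a4 (with a4 standing in for “Nao can die”) are hypothetical.

```python
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("cri_data.csv")  # hypothetical questionnaire data
items = ["a1", "a2", "a3", "a4"]  # hypothetical labels for the four animacy items
X = df[items].dropna()

# One-factor solution; 'principal' approximates SPSS's principal axis factoring
# (with a single factor, the direct oblimin rotation has no effect).
fa = FactorAnalyzer(n_factors=1, method="principal", rotation=None)
fa.fit(X)
print(fa.loadings_)              # flag items with low loadings (e.g., < .30)
print(fa.get_factor_variance())  # proportion of variance explained by the factor

alpha_full = pg.cronbach_alpha(data=X)[0]                      # all four items (reported: .58)
alpha_trim = pg.cronbach_alpha(data=X[["a1", "a2", "a3"]])[0]  # weak item dropped (reported: .69)

# Index score: mean of the retained items per child.
df["animacy"] = df[["a1", "a2", "a3"]].mean(axis=1)
```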

3.4.2 Autonomy

Based on two measures of (robot) autonomy used among adults (Rijsdijk and Hultink 2003; Rosenthal-von der Pütten et al. 2017), we developed a five-item measure to assess this concept in a CRI context. The items tapped into both autonomy itself and the moral accountability resulting from this notion. The five items loaded onto one factor that explained 37% of the variance, and the scale had good internal consistency (α = .72). An index score of autonomy was computed by averaging the items (M = 2.81, SD = 0.90, skewness = 0.013, kurtosis = − 0.499).

3.4.3 Anthropomorphism

Anthropomorphism was measured using a four-item scale that was based on the technology dimension of the Individual Differences in Anthropomorphism Questionnaire-Child Form (IDAQ-CF) as presented by Severson and Lemm (2016). The one-factorial structure of the scale was confirmed for the present sample and explained 26% of the variance. There was one item with a low factor loading (i.e., “Nao knows that Nao is a robot”, factor loading .186). As a consequence, the internal consistency of the scale was low (α = .54). Because removing the item did not substantially increase internal consistency, we maintained the original scale, for which an index score was computed by averaging the items (M = 3.46, SD = 0.73, skewness = − 0.283, kurtosis = − 0.421).

3.4.4 Social presence

To assess social presence, we used a four-item scale inspired by an adult measure presented by Heerink and colleagues (2010). The factor analysis confirmed that the items loaded onto one factor that explained 60% of the variance. The scale was internally consistent (α = .86). We averaged the items to compute an index score of social presence (M = 3.87, SD = 0.88, skewness = − 0.745, kurtosis = 0.333).

3.4.5 Perceived similarity

Perceived similarity was assessed through a four-item scale adapted from the attitude dimension of McCroskey et al.’s (1975) perceived homophily measure. The items loaded onto one factor explaining 41% of the variance, and the scale had good internal consistency (α = .72). We computed an index score of perceived similarity by averaging the items (M = 2.40, SD = 0.74, skewness = 0.187, kurtosis = − 0.149).

3.4.6 Closeness

We measured closeness using a five-item scale that we developed for use in CRI settings and validated among children aged 8–9 years old (van Straten et al. 2020a). The one-factorial structure of the scale explained 52% of the variance in the present sample and internal consistency was good (α = .84). An index score of closeness was computed by averaging the items (M = 3.88, SD = 0.72, skewness = − 0.484, kurtosis = 0.568).

3.4.7 Trust

Trust was assessed through a four-item scale based on a measure by Larzelere and Huston (1980). The factor analysis confirmed the one-factorial structure of the scale that explained 46% of the variance. The scale was internally consistent (α = .74). We computed an index score of trust by averaging the items (M = 4.28, SD = 0.61, skewness = − 0.694, kurtosis = − 0.172).

3.4.8 Treatment check

The treatment check consisted of two seven-point semantic differentials, the first tapping into the robot’s self-description and the second addressing its teleoperation. The first item asked children to indicate whether the robot had talked about itself (left-hand extreme; this answer option corresponded to a score of 1) or about other robots (right-hand extreme, corresponding to a score of 7; M = 3.05, SD = 1.97, skewness = 0.576, kurtosis = − 0.832). The second item asked whether, when she sat down behind the laptop, the experimenter had said that she would go and control the robot (score 1) or do something else (score 7; M = 3.70, SD = 2.77, skewness = 0.208, kurtosis = − 1.845).

3.5 Analytical approach

The data were analyzed using SPSS Statistics (version 25) and were considered to be normally distributed when skewness and kurtosis ranged between − 2 and 2 (George and Mallery 2010). This was confirmed for all dependent variables. We conducted a series of ANOVAs to test the treatment check and hypotheses. The assumption of homoscedasticity was violated only for the treatment check. We therefore additionally consulted the parameter estimates with robust standard errors for the treatment check (using the heteroscedasticity-consistent standard error HC3; Hausman and Palmer 2012). As both significance tests provided the same results, we report only the results of the ANOVAs. We initially controlled for potential influences of school and of errors in the teleoperation procedure (e.g., ill-timed robot responses, premature activation of the stand-by mode) in the analyses. As the results of the ANOVAs mirrored the outcomes of the analyses with the two control variables, we report only the results of the model without the control variables.
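As an illustration of this analytic pipeline outside SPSS, the sketch below runs the normality screen, a 2 × 2 between-subjects ANOVA with partial eta squared, and an HC3 robustness check in Python. It is not the authors’ syntax; the file name and the columns teleop, selfdesc, and trust are hypothetical placeholders (trust stands in for any of the dependent variables).

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import kurtosis, skew

df = pd.read_csv("cri_data.csv")  # hypothetical data file

# Normality screen: skewness and excess kurtosis within -2..2 (George & Mallery 2010).
print(skew(df["trust"], nan_policy="omit"), kurtosis(df["trust"], nan_policy="omit"))

# 2 x 2 between-subjects ANOVA with interaction term.
model = smf.ols("trust ~ C(teleop) * C(selfdesc)", data=df).fit()
aov = sm.stats.anova_lm(model, typ=2)

# Partial eta squared per effect: SS_effect / (SS_effect + SS_residual).
aov["partial_eta_sq"] = aov["sum_sq"] / (aov["sum_sq"] + aov.loc["Residual", "sum_sq"])
print(aov)

# Robustness check with heteroscedasticity-consistent (HC3) standard errors,
# as consulted for the treatment check.
robust = smf.ols("trust ~ C(teleop) * C(selfdesc)", data=df).fit(cov_type="HC3")
print(robust.summary())
```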

4 Results

4.1 Treatment check

Children who were exposed to the self-description condition indicated more often that the robot had talked about itself (M = 1.81, SD = 1.15) than children in the condition in which the robot talked about robots in general (M = 4.34, SD = 1.81), F(1, 164) = 116.267, p < .001, part. η2 = .42. Children in the overt teleoperation condition indicated more often that the experimenter had said that she would go and control the robot (M = 1.53, SD = 1.28) than children in the covert teleoperation condition (M = 6.03, SD = 1.91), F(1, 162) = 320.357, p < .001, part. η2 = .66. No interaction effects between teleoperation and self-description were found. Thus, the treatment check was successful.

4.2 Tests of hypotheses

Table 1 provides an overview of the means and standard deviations for each of the two factors. In addition, the means and standard deviations for each of the four experimental groups can be consulted in Appendix C. According to H1, transparency about teleoperation would affect children’s perception of (H1a) and relationship formation with (H1b) the robot such that in the overt teleoperation condition, children would rate the robot lower in animacy, autonomy, anthropomorphism, social presence, and perceived similarity, and report less closeness and trust. As to children’s robot perceptions, children in the overt teleoperation condition perceived the robot to be less autonomous (M = 2.54, SD = 0.89) than did children in the covert teleoperation condition (M = 3.08, SD = 0.83), F(1, 164) = 17.416, p < .001, part. η2 = .10. In addition, overt teleoperation led children to rate the robot lower in anthropomorphism (M = 3.25, SD = 0.70) than covert teleoperation (M = 3.68, SD = 0.69), F(1, 164) = 15.682, p < .001, part. η2 = .09.

Table 1 Means with standard deviations in parentheses per factor level

However, transparency about the teleoperation procedure had no effect on children’s perceptions of the robot’s animacy, F(1, 163) = 0.158, p = .692, part. η2 = .00, social presence, F(1, 164) = 1.130, p = .289, part. η2 = .01, and perceived similarity, F(1, 164) = 0.682, p = .410, part. η2 = .00. As to child-robot relationship formation, we found no differences in children’s feelings of closeness, F(1, 164) = 0.218, p = .641, part. η2 = .00, or trust, F(1, 164) = 2.318, p = .130, part. η2 = .01, across teleoperation conditions. Thus, H1a was partly supported, while H1b was not supported.

According to H2, self-description through the use of personal pronouns would increase children’s perceptions of the robot’s animacy, autonomy, anthropomorphism, social presence, and similarity to themselves (H2a), and strengthen their feelings of closeness toward and trust in the robot (H2b). In contrast to our expectation, children in the self-description condition perceived the robot to be less similar to themselves (M = 2.28, SD = 0.73) than children in the condition without self-description (M = 2.52, SD = 0.72), F(1, 164) = 4.609, p = .033, part. η2 = .03. Self-description had no effect on perceived animacy, F(1, 163) = 2.741, p = .100, part. η2 = .02, autonomy, F(1, 164) = 3.282, p = .072, part. η2 = .02, anthropomorphism, F(1, 164) = 0.258, p = .612, part. η2 = .00, and social presence, F(1, 164) = 0.142, p = .707, part. η2 = .00, and failed to affect child-robot relationship formation in terms of closeness, F(1, 164) = 1.534, p = .217, part. η2 = .01, and trust, F(1, 164) = 2.134, p = .146, part. η2 = .01. Thus, neither H2a nor H2b was supported.

Finally, H3 predicted that self-description would decrease, instead of increase, children’s ratings of the robot’s animacy, autonomy, anthropomorphism, social presence, and perceived similarity in the overt teleoperation condition (H3a), and increase children’s closeness to and trust in the robot more strongly in the covert than in the overt teleoperation condition (H3b). Neither H3a nor H3b was supported: We found no interaction effects on animacy, F(1, 163) = 3.384, p = .068, part. η2 = .02, autonomy, F(1, 164) = 0.030, p = .862, part. η2 = .00, anthropomorphism, F(1, 164) = 0.097, p = .756, part. η2 = .00, social presence, F(1, 164) = 2.982, p = .086, part. η2 = .02, perceived similarity, F(1, 164) = 0.428, p = .514, part. η2 = .00, closeness, F(1, 164) = 0.010, p = .921, part. η2 = .00, or trust, F(1, 164) = 0.264, p = .608, part. η2 = .00.

5 Discussion

Children’s tendency to treat robots as social, mental, and moral others (Kahn et al. 2012) may partly result from the way in which social robots are presented to them: as autonomous entities that tell children about themselves during CRI. Against this background, we experimentally investigated whether and how transparency about the teleoperation procedure and a robot’s engagement in self-description affect children’s perception of a social robot and their sense of relationship formation with it.

5.1 Effects of transparent teleoperation

Children’s lower ratings of the robot’s anthropomorphism and autonomy in the overt teleoperation condition suggest that transparency about the teleoperation procedure decreased children’s perceptions of the robot as a mental and moral other. At the same time, children’s views of the robot as an animate entity that is similar to themselves remained unaffected. A potential explanation for the absence of transparency effects on animacy and perceived similarity to the self is that, in all conditions, the content of the robot’s answers clearly communicated its mechanical nature. This may have influenced children’s ratings of the robot’s animacy and perceived similarity, which fell closely around (for animacy) or somewhat below (for perceived similarity) the center point of the answer scale across the factors (see Table 1). In other words, children tended to slightly disagree with the robot’s similarity to themselves and were generally undecided about its animacy.

Children’s experience of the robot as a socially present entity was independent of their awareness of the teleoperation procedure, which may be explained by our operationalization of the concept. The items that we used to assess social presence asked children about their experience of the robot as a humanlike presence (e.g., “When I was talking to Nao, it felt as though I was with a person”). Children in the covert teleoperation condition may have experienced the robot as socially present because of its seemingly autonomous working. Children in the overt teleoperation condition, in contrast, may have experienced the robot’s presence as humanlike because of, rather than despite, their knowledge of the teleoperator’s involvement in the interaction. Although their perception of the robot in terms of humanlike capacities decreased (i.e., anthropomorphism), children may thus have interpreted the ‘human behind the machine’ as a reason to ascribe humanlike presence to the robot.

Children’s relationship formation with the robot in terms of closeness and trust was unaffected by transparency about the teleoperation procedure. As noted by Serpell (2003) in the context of human-animal bonding, the inability of non-human others to lie, criticize, and betray may foster a sense of support and intimacy. Likewise, judging from children’s comments during the experiment, the robot’s lack of autonomy may have given children reasons to trust the robot: In the children’s view, the inability to act on its own prevents the robot from behaving unreliably (e.g., the robot is unable to pass on secrets), and the preprogrammed nature of its responses may have kept children from questioning the robot’s honesty. As children’s comments only provide initial, anecdotal evidence for this line of reasoning, future research should further investigate this possibility.

Turkle (2007) has argued that children bond with relational artifacts “not because of what these objects [can] do (physically or cognitively) but because of the children’s emotional connection to the objects and their fantasies about how the objects might be feeling about them” (Turkle 2007, p. 507). In contrast to this statement, a recent study (van Straten et al. 2020c) found that children trusted a robot less when they were made aware of its lack of human psychological capacities (i.e., intelligence, self-consciousness, emotionality, identity construction, and social cognition). Still, in the present study, children tended to be aware of the robot’s lack of autonomy and humanlikeness, but did not seem to care about it. It thus seems too early to conclude that one particular robot feature is responsible for child-robot relationship formation. Possibly, children’s persistent view of social robots as potential friends is determined in part by their experience and in part by the capacities of a robot.

5.2 Effects of self-description

The robot’s self-description (operationalized as self-reference through personal pronouns) did not affect children’s perceptions of the robot in terms of animacy, autonomy, anthropomorphism, or social presence. Similar to the absent effect of transparency on animacy, the absence of an effect of self-description on animacy may result from the content of the interaction. More generally, the robot’s avoidance of self-reference may have appeared less meaningful to children than expected because of the emphasis of the robot’s answers on its own technological nature. Against this background, children may not have been surprised when the robot did not refer to itself by using “I”.

The content of the robot’s answers may also explain why children perceived the robot to be less similar to themselves, and thus as less of a ‘mental other’, when it engaged in self-description. Across the factors, children perceived the robot to be rather dissimilar to themselves (see mean similarity scores in Table 1), which may be a consequence of the robot explaining to the children that it does not possess characteristics such as biological sex or age. The robot’s use of the pronoun “I” may have emphasized even more strongly to children how this robot in particular, rather than robots in general, fundamentally differs from them with respect to such characteristics, resulting in an adverse effect of self-reference on perceived similarity.

Children’s feelings of closeness toward, and trust in, the robot also remained unaffected by the robot’s self-description. Beyond children’s general persistence in considering social robots as potential friends (see above), the absent effects of self-description on child-robot relationship formation may indicate that self-disclosure is more effective than self-description when the aim is to promote the emergence of a social relationship between a child and a robot. Even though the importance of intimacy to friendships is still developing during the primary school years (e.g., Furman and Bierman 1984; Laursen and Hartup 2002), some more ‘private’ information may need to be shared to further increase children’s feelings of closeness toward, and trust in, a robot.

Alternatively, and in light of the robot’s openness about its mechanical nature, children may not have considered the robot an individual, but rather an interchangeable mechanical entity. The robot’s provision of general information about robots may have reduced their uncertainty about this robot to the same extent as when it provided information specifically about itself. By extension, children may thus have seen friendship potential in social robots generally, rather than in this particular robot—which would make any child-robot relationship that emerged rather impersonal (see also Fox and Gambino 2021, on HRI). This explanation is supported by the outcomes of the treatment check. The mean score of children to whom the robot did not self-describe fell near the center point of the semantic differential asking them to indicate whether the robot had talked about itself or about other robots. Apparently, the children in the condition in which the robot only talked about robots in general still thought that the robot had, indirectly, provided them with information about itself.

5.3 Limitations

Our study has four limitations. First, our operationalization of the robot’s self-description was rather unobtrusive (i.e., only the use of pronouns differed between the conditions). Aware of its subtlety, we opted for this operationalization because it constitutes the only way to manipulate the robot’s provision of self-related information without altering the content of the interaction. Second and relatedly, the effects of transparency about the teleoperation procedure might have been stronger if the experimenter, in the overt teleoperation condition, had controlled the robot within children’s direct line of sight (i.e., sitting next to the child). Instead, she sat down behind a table that, depending on the room in which the experiment was conducted, was more or less visible to children facing the robot. Third, the robot’s openness about its technological status during the interaction may have obscured expected findings. However, our goal was to investigate children’s responses to robots as they are currently entering our society, without actively portraying them as social, mental, and moral others. The robot’s answers thus had to provide realistic information.

Fourth and finally, our expectations about the effects of transparent teleoperation and self-description on children’s perception of, and relationship formation with, the robot may have been biased by what can be expected in an interpersonal context, when adopting an adult perspective: If a human refuses to refer to him/herself saying “I” and only speaks of “people” in general, this is considered odd. But if a robot describes characteristics of “robots” rather than of itself, this may match its machinelike status and thus be acceptable. In addition, it may seem evident to adults that when a robot is being remotely controlled, it lacks social presence and does not, by extension, qualify as a potential friend. Yet the absent effects on social presence and trust, in particular, demonstrate that children’s reasoning about the teleoperator’s involvement in the interaction may follow different patterns. While our questionnaire included only closed-ended questions, the inclusion of open-ended ones may help future studies further elucidate children’s reasoning about robots.

6 Conclusions

Our findings tentatively suggest that research on children’s interactions with robots in general, and on relationship formation between children and robots in particular, may benefit from some reorientation. First, CRI research should also investigate children’s responses to robots in realistic interaction settings that leave the depiction of robots as social, mental, and moral others up to children’s imagination instead of, actively or passively, portraying robots in ways that do not match their current status and capacities. Insights from such studies may help to inform societal debates about the benefits and drawbacks of social robots in children’s lives. Second, instead of focusing our attention on similarities between interpersonal communication and CRI, we should also ask ourselves how the mechanisms of children’s responses to social robots may deviate from interpersonal processes (see also Fox and Gambino 2021, in the broader context of HRI). When children are (made) aware of the differences between robots and humans, interpersonal principles may apply less seamlessly than they seem to do now.

Our findings also indicate that children may consider robots as potential friends regardless of their knowledge of the robot’s teleoperated working and its engagement in self-description. A societally important implication of this finding is that it may be possible to realize potential benefits of child-robot relationship formation (e.g., in education and healthcare applications; Kory Westlund and Breazeal 2019a; Sinoo et al. 2018) without ‘deceiving’ children into thinking robots are more capable and social than they currently are. Future research should investigate whether our findings with respect to the emergence of children’s initial sense of relationship with robots extend to situations in which children interact with robots on a long-term basis. If so, robotic companions could be used in, for example, healthcare and educational settings while minimizing possible negative consequences of child-robot relationship formation (e.g., disappointment about robots’ actual ‘friendship potential’ upon discovering their teleoperated nature; see Kory Westlund and Breazeal 2015).

In short, we need to keep reminding ourselves that “robots are not people” (Dautenhahn 2007, p. 104)—and shape the research agenda accordingly—to critically explore the full range of possible societal implications of social robots for children. Further elucidating the boundary conditions of child-robot relationship formation would advance our understanding of the characteristics of robots that are necessary or sufficient to support children—whether as a complement to or a temporary replacement of interpersonal interaction (e.g., during radiation therapy; Ligthart et al. 2019b). While future robots may be (more) autonomous and may have a wider range of (increasingly humanlike) characteristics and capacities than current robots, the distinction between humans and robots will remain relevant (Fox and Gambino 2021) and, if not overlooked, may be integrated into CRI scenarios to allow for rewarding interactions between robots and children.