Introduction

Nations worldwide consider robots part of the solution in dealing with the societal challenges of aging societies, such as the increased demand for care and health care provision. As technology advances, pilot projects have emerged in many countriesFootnote 1 that test and embed robots in care and healthcare environments. This is in line with the agenda of the German Federal Ministry of Education and Research (BMBF) to pursue assistive technology as a possible solution in the care of older adults (BMBF 2018). Integrating robots into care routines, however, marks a substantial change in the understanding of care as a human-to-human, social engagement in which ethically sensitive actions and phenomena occur. This prompts a more advanced discussion of the core social and relational concepts that try to capture these phenomena. One example is the concept of trust, which has been discussed as essential for the success of therapeutic interventions (Thom et al. 1999) and human–robot interaction (Langer et al. 2019; Koren et al. 2022). In this paper, we wish to address the ongoing need for ethical and normative reflection on the ‘question of trust’ (Kellmeyer et al. 2018) on the basis of current but also possible future (e.g., complemented by neurotechnological and artificial intelligence [AI]Footnote 2) applications of care robots for the older population. In asking whether robots could be trustworthy, we highlight and criticize the philosophically insufficient conceptualization of trust and trustworthiness as a measurable design feature.

Trust and trustworthiness should not be reduced to mere means for facilitating the acceptance of socially assistive robots (SARs). Thus, the main aims of this paper are to describe trust as an essential capability of living beings to generate relations with others and to analyze the ethical implications of robots that can simulate core dimensions of this capability to engender trust in humans.

We argue that care situations in particular reveal this relational condition for trust: the necessity of responsive others is indispensable for care. Someone who does not respond to any of the trustor’s needs could not be considered caring, even if that person is a nurse. We want to take this observation further and argue that the practice of responding is a prerequisite not only for care but also for trust to occur at all, which explains why human beings seem to be more incited to trust someone they perceive as caring, i.e., responding in a caring way. To develop the argument, we will proceed in four steps:

  1.

    The analysis of trust and trusting in a caring context elucidates the bidirectionality of the concepts of care, trust, and responsivity (Fig. 1) from a phenomenological–anthropological and situational perspective. Trust, in our definition, is a responsive phenomenon that emerges from the human capability, i.e., the anthropological disposition and realized functioning to trust under certain conditions. Responsivity is structurally essential for trust and for care, while trust and care interrelate on a qualitative scale.

  2.

    The phenomenological–anthropological analysis addresses the structural phenomenological level of responsivity: We start from the premise that all human responsive phenomena are grounded in responsivity, which serves at the phenomenological level as the basic meaningful relational engagement with ‘others’. Responsivity has been studied in phenomenological research as a foundational structure that is constitutive for responding to the (ethical) demand of the other (Waldenfels 1987, 1994, 1997). Without responding, care would be nothing more than a ‘procedure’ or ‘following a protocol’. In this sense, care is a relational engagement defined by responding. In our adaptation of the Capability Approach by Martha C. Nussbaum (2000, 2011), responding in a caring way to human dispositions and capabilities means supporting human flourishing. In the following, we will make the case that without responsivity, relations would not account for the possibility of trust and trustworthiness.

  3.

    By introducing the phenomenological structure of responsivity to the ethical debate, we want to propose a different approach for evaluating trustworthiness. The analysis of the phenomenological–anthropological conditions for trust leads to the question of when a human being might be incited to trust but should not, since the robot does not (and cannot) respond to the inherent demands that human responsivity allows for with respect to the realization of human capabilities. As we understand trusting to be a scalable engagement that relates to the scope and quality of the responsive interaction, we argue that good care means responding to the existential complexity of the human being (e.g., their dignity, vulnerability, lived body experience, situatedness, and capabilities). The trustworthy other is therefore someone who lives up to these complex ethical demands of responsivity.

  4.

    Based on a deeper understanding of the alterity-relation in human–technology interaction—that robots could be ‘others’ to which we can relate in complex ways (Müller 2022)—we want to examine the capability to trust as the ability of living beings to generate relations with diverse forms of responsive others. Robotic systems that are constructed to react to this human disposition, e.g., through AI-complemented social functions, could thus become part of a responsive trust situation and relation. In looking at instances of responsive behavior in humans and robots in order to formulate a preliminary hypothesis on possible similarities and differences, we limit the scope of this paper to a theoretical analysis that should be expanded by in-depth empirical research. The phenomenological–anthropological approach, however, allows the account of first-person subjective experience to be integrated into the theoretical work. We will argue that robots should not be designed to be overly trust-inciting: their constructed responsive qualities do not suffice for a reciprocal and dignified relationship, since constructed responsiveness in robots is not accompanied by the sense of responsibility necessary to meet the existential complexity of the human being.

Fig. 1 Illustration of the relationship between care, responsivity, dignity, and trust

In the following, we will first ask what it means for a ‘social’ robot to be social and offer a short survey of existing and future (AI-complemented) applications of SARs in care. We will then take a closer look at the multifactored embeddedness of the phenomenon of trust in care situations and at how the qualities and features of the robot interplay with the human responsive capability for trusting. After discussing concepts of trust and trustworthiness in philosophy and research as well as existing guidelines for technology development, using AI as an example, we will conduct a preliminary analysis of the responsive qualities of SARs and of how they are perceived as trust-inducing social affordances. In the last section, we will discuss the ethical and normative dimension of the potential for deception when designing ‘responsive machinic-others’ for human users.

Social robots and the need for trust

Social robots are considered ‘social’ because they are designed to interact with people “in a natural, interpersonal manner—often to achieve social–emotional goals” (Breazeal et al. 2016). Part of this interaction is ‘natural’ communication “using both verbal and nonverbal signs”, and engaging as a ‘partner’, “not only on a cognitive level, but on an emotional level as well” (Breazeal et al. 2016, p. 1935). This leads to the goal of designing so-called “empathetic” robots (Misselhorn 2021; Henschel et al. 2021) which are able to detect human emotions and reply with a programmed emotional reaction-schema (e.g., AI robot systems like MABU).
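To make concrete what such a programmed emotional reaction-schema can amount to at the level of implementation, the following minimal sketch illustrates the general pattern in Python; the emotion labels, phrasings, and function names are our own hypothetical examples and do not describe the architecture of any particular product such as MABU.

```python
# Minimal sketch of a rule-based emotional reaction-schema (hypothetical labels and phrasings).
# A classifier output (e.g., from facial-expression or voice analysis) is mapped onto
# pre-scripted 'empathetic' utterances; the mapping is fixed at design time.

REACTION_SCHEMA = {
    "sadness": "I am sorry to hear that. Would you like to talk about it?",
    "joy": "That is wonderful! Please tell me more.",
    "fear": "That sounds worrying. I am here with you.",
    "neutral": "How are you feeling today?",
}

def detect_emotion(sensor_input: str) -> str:
    """Placeholder for an emotion classifier (e.g., a neural network).
    Returns a fixed label here to keep the sketch self-contained."""
    return "neutral"

def empathetic_reply(sensor_input: str) -> str:
    # The robot does not respond to the person; it reacts to a detected label.
    emotion = detect_emotion(sensor_input)
    return REACTION_SCHEMA.get(emotion, REACTION_SCHEMA["neutral"])

print(empathetic_reply("camera frame or audio snippet"))
```

Whatever the sophistication of the underlying classifier, the reply remains a reaction to a detected category rather than a response to the person, a distinction that becomes central in the analysis of responsivity below.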

In the context of care, the assistive functionality of robots has priority. Robots may be categorized as “contact assistive robots” (Feil-Seifer and Mataric 2005), like robot arms, which aid and support a human user through close contact with the body, and ‘non-contact assistive robots’, for which communicative abilities become more defining. However, the categories are not mutually exclusive, as the pet robot PARO demonstrates: it is contact assistive but communicates nonverbally. From a descriptive perspective and depending on their abilities, we can speak of different degrees of social features that are used to facilitate the general assistive purpose.

Yet, categorizing social robots remains difficult: As PARO also demonstrates, the label ‘socially assistive’ implies not only the means (having social features like communication) but also the purpose (assisting with social needs, e.g., reducing apathy and enhancing interaction of the patient). Purpose and means tend to blend. SARs provide special assistance through these social engagements (Feil-Seifer and Mataric 2005), which can be individualized and adapted to a patient’s needs.

Why is trust relevant for interaction with social robots in care?

The discourse about patients’ needs in care highlights the role of trust for successful therapeutic and care interactions (Pellegrini 2017; Greene and Ramos 2021; Dinç and Gastmans 2013, 2011; Peter and Morgan 2001). Trust in general is regarded as highly relevant for successful patient–physician relationships (Montgomery et al. 2020; Ridd et al. 2009; Thom et al. 1999). Discussions of the quality of the patient–physician relationship and of the physician’s ability to improve patient cooperation refer to trust as part of the bond in the therapeutic or working alliance (Müller et al. 2014; Bordin 1979, 1974), as well as to psychological findings about the human capacity to form trusting relationships as part of early child development (Koepke and Denissen 2012; Erikson 1950).

In the case of assistive social robots, the robot is being integrated into the therapeutic or care interaction. The understanding of socially assistive robots (Lewis et al. 2018) in care starts from the premise that the robot, too, needs to induce trust in order to successfully fulfill the role of a social interaction partner in care. To explain the effects of the robotic interventions, the phenomenon of trust is considered to be an important factor as it emerges in and from social interactions.

The functionality of social robots thus adds a certain urgency to the question of trust. Current models differ from the historically prototypical ‘industrial robot’ in that they are often described as ‘intelligent’.Footnote 3 The metaphor refers to an enhanced computational functionality made possible by the implementation of machine learning, such as artificial neural networks for deep learning, sensory endowment, and data ‘memory’.Footnote 4

This increasingly automated and adaptive functionality makes it more likely that robots are used without a human supervisor directly present. This may create situations of exposure to, and in some cases dependence on, the robot. This is especially critical in care, where the robot assists in actions and tasks that the patient might no longer be able to perform alone. At the same time, the robot might create the situational atmosphere of ‘caring’ by asking about well-being and needs.Footnote 5

When used in rehabilitation, robots are also programmed to prompt and monitor goal-oriented cooperation. One open empirical question is whether and to what extent humans who interact with these robots form specific beliefs about the ‘good’ intentions (and their truthful realization) of actions announced by the robot, or whether they operate from a baseline level of trust without speculating (implicitly or explicitly) about the robot’s intentions. These aspects underline the specific relevance of trust in human–social robot interaction: the machines are implemented and used to create a situation and interactional setting in which the human propensity to trust is activated out of vulnerability, dependency, and susceptibility to constructed social cues (Baier 1986; Ryan 2020; Hoff and Bashir 2015). This is markedly different from other human–AI interactions in which the social dimension is not essential to the interaction (Duran and Jongsma 2021).

Current research, however, neglects this social situational embeddedness and its phenomenological–anthropological dimensions. We agree with Gille et al. (2020, cp. 1) that trust is “relational, highly complex”, “situational and difficult to develop as a general concept”, but we link the problem of generalization to the tendency of epistemological approaches to reduce trust to its cognitive aspects. Assessing the probability of justified trust (Starke et al. 2021) does not consider the embodied and subconscious processes involved in trust. Updating or not updating a belief about the trustworthiness of an actor, or economizing on monitoring (Ferrario et al. 2021), reduces the trust relationship to its cognitive aspects. Rational trust theories tend to understand interaction as a transaction (exchanging control or complexity for trust), turning trust into a form of capital. We want to point out that the social dimension in care interactions exceeds the transactional paradigm. This will be demonstrated by a phenomenological–anthropological analysis and qualitative interpretation of the interrelation between the structure of responsivity that underlies human being-in and being-to-the-world and the capability to trust with respect to the design of the robot and its social functionality. As trust is also invoked in other emerging contexts of human–technology interaction, a brief examination of conceptualizations, here along the example of the EU guidelines for AI, will be helpful.

Guidelines for trust and trustworthiness with respect to AI

According to the EU HLEG guidelines on AI (HLEG AI 2019), trustworthy AI should be: “(1) lawful, complying with all applicable laws and regulations, (2) ethical, ensuring adherence to ethical principles and values, (3) robust, both from a technical perspective and social perspective” (HLEG AI 2019, p. 2). The guidelines themselves do not offer any definition or operational description of the concept of trust or trustworthiness that goes beyond the (not binding) regulatory aim.Footnote 6 Instead, they add more information on key requirements that an AI system should meet “in order to be deemed trustworthy”: “AI systems should empower human beings”, they need to be resilient, safe, secure, human-monitored, accurate, reliable, and reproducible. They should ensure “full respect for privacy and data protection”, and they should be transparent, accessible, fair, sustainable, environmentally friendly, and auditable.Footnote 7

The trustworthy AI-enhanced robot, so to speak, satisfies these attributes, which are partly technical (robust, reliable, accessible, sustainable, safe, secure) and partly procedural (transparent, respectful of laws, regulations, privacy and data protection, as well as respectful of ethical principles and values). The first set of attributes, in our view, refers to the stability of the operation and is better described as a set of features that signal the reliability of the robot, since these are expectations about the technical construction. The robot should run smoothly and without unexpected disturbances. These guidelines are clearly aimed at AI developers. Hence, the responsibility for the promised functionality lies with them: they should design AI systems in compliance with these properties and ensure that the systems actually comply with them. But why should these attributes automatically render an AI system trustworthy?

Structurally, the argument for appealing to the semantics of trust and trustworthiness usually derives from the idea that a ‘trust situation’ involves risk. Describing something as safe, secure, robust, and reliable, thus, includes the claim to have reduced risk to the minimum by applying control. Technically speaking, the chance of system failure can never be eliminated. In this case, trustworthiness seems to be a declaration of limited liability: The engineers have done everything in their power but there is no total security. To be more precise, AI systems most of all create a ‘risk situation’ that the EU guidelines try to solve by appealing to the semantics of trust, mixing the technical with desirable procedural attributes.

Most of all, the EU guidelines seem to adopt the language of consumer trust without reflecting on it. We agree with Gille et al. (2020) that the conceptualization of trust and trustworthiness in the EU guidelines lacks coherence. On the calculation that trustworthiness is a condition for trust in AI and, further, that trust is a condition for the beneficial use and acceptance of the AI product, proposing a list of desirable attributes (which are then called trustworthy) looks like a marketing strategy to mask the risk situation, not a policy to prevent the risk.Footnote 8

Meanwhile, it facilitates the false impression that the AI system has a moral character that respects (i.e., ‘cares for’) human values, thereby anthropomorphizing the technology. Yet, it is the developer who is implicitly responsible for engineering the ‘risk situation’ by implementing the technology. Thus, it is also still the developer who should guarantee and (publicly) testify to the user and the community that these guidelines are met. Here, ‘trustworthiness’ could be considered an ‘empty’ concept that does not translate into clear requirements for laws and regulations that are legally (and not only morally) binding and that would offer a societal tool to manage the consumer–developer relation and to govern innovation. Instead, the EU guidelines create a paradoxical constellation in which a possible patient would need to believe that the developers respected the regulations without having any direct personal contact or means of interacting with them.Footnote 9 The guidelines therefore serve as a rhetorical document that dissolves public responsibility into private responsibility.

Trust is often seen as a design component for successful social interaction that should be enhanced to optimize the utility (Kuipers 2022; Billings et al. 2012) as well as the acceptability (Whelan et al. 2018; Siau and Wang 2018) of the robot. As a consequence, trust is operationalized into a measurable parameter for evaluating the design and the probability that the human will use the device (Hancock et al. 2011), making ‘social’ an experience that can be constructed and consumed. Hence, to ensure that the innovation and use of robots for care and healthcare is not equally misguided by underspecified conceptualizations of trust and trustworthiness, critical reflection and conceptual improvements are necessary.

Trust in technology and trust in robots: conceptual discussion

Trust and trust in technology: historical and conceptual foundations

The question of trust in social robots and artificial intelligence is part of a wider debate around trust in technology. While trust as a general research focus had its first heyday in the 1980s, introducing a sensitivity to the ethics and morality of trust (Baier 1986, 1991), the development of artificial intelligence has brought the still unresolved issue of what trust actually is to the foreground again. Today, trust appears in the context of key issues within the philosophy of technology, such as automation and control.

In everyday understanding, trust is a basic human experience and psychological propensity or disposition that is prototypically actualized in interpersonal relations with other people: “We trust our partners to be faithful, we trust that our friends will keep our secret, and we trust our family members to stand by us in difficult times and situations” (Ryan 2020, p. 3). Trust in technology, and especially in emerging technologies, differs in that the interpersonal model of trust is being transferred onto a nonhuman technical artifact. This transfer onto human–robot interaction takes into account the human disposition in the interaction but has been criticized for eliminating the central aspect of value-rich human interpersonal relationality. The phenomena of promise and betrayal, which make up a relation between moral agents, differentiate trust in people from reliance on tools and objects (Baier 1986; Holton 1994). “Most philosophers interested in characterizing the nature of trust regard it as a species of reliance” but not “‘mere’ reliance” (Goldberg 2020, p. 97), which leads to further distinctions of trust in different contexts such as e‑trust (trust in digital environments; Taddeo and Floridi 2011).

The prototypical ‘trust situation’, however, is characterized by a lack of guarantees (McLeod and Ryman 2020). Epistemologically speaking, trust is thus a belief or supposition (Baker 1987). Trust is directed towards the future and the possible, positive outcome of a pronounced action or the truthfulness of a proposition. On the one hand, this can refer to trust as a positive-affective stance or attitude, a generalized positive belief that can encompass the goodwill and competence of the trustee (Jones 1996). This belief can be informed by emotions, but also by reasons and past experience. On the other hand, rationality-based accounts of trust highlight the fact-based decision by means of calculating the trustworthiness of the trustee, as well as the costs and benefits of the cooperation: “Following the rational choice paradigm, actors are self-interested, and trust is the rational outcome of the imperfect estimations of a trustor’s perspective on the trustworthiness of a trustee” (Hardin 2002). However, the rational calculus rather seems to evaluate how reliable (as opposed to trustworthy) the information about the trustee is and how to justify the evaluation (O’Neill 2018).

More generally, trust has been conceptualized by Luhmann as a conscious surrender to the incompleteness of information and a way of handling contingency (Luhmann 1968). Trust for Luhmann is a reaction to insecurity on different levels, from trust in persons to trust in institutions. Trust, in summary, reduces complexity and enables the subject to act. From the more abstract perspective on social systems, trust and technology converge with respect to human vulnerability, when one condition for trust to occur is a general situation of uncertainty where neither knowledge nor control are possible (Luhmann 1968). Attempting to isolate trust from its situative embeddedness, however, fails to recognize the full complexity of the phenomenon and its anthropological significance.

A ‘trust situation’, therefore, is also constituted by the risks that emerge from the basic dynamics of committing something to someone. Given that trust for Luhmann is a condition that enables the subject to act, wrongfully placed trust may have dire consequences. In the context of human–robot interaction in care, the analysis needs to look at what exactly is at stake. For Esther Keymolen (2016, p. 15), trust is an ability to expose oneself to the vulnerability and uncertainty regarding the future and the agency of other people, instead of “avoiding or diminishing vulnerability”. This leads to another structural tension because trust “solves a basic problem of social relations without eliminating the problem” (Möllering 2006, p. 6).

Hence, trust can be understood as a multidimensional and multilevel concept: at the individual level, it comprises intrapersonal and interpersonal psychological components, complex mechanisms of risk assessment and decision-making, as well as relational and social aspects; at the societal level, trust is also influenced by accepted norms, rules, and complex aspects such as social hierarchies and other sociocultural conditions. The conceptual work on trust as part of a transactional, game-theoretic relation, however, is insufficient to capture the full ethical dimension of an alterity relation in a social situation.

For understanding trust in the context of human–technology interaction in care, a substantial challenge is to map these more or less well understood dimensions of human–human relations and interactions onto, e.g., human–robot interactions. In a care situation, the robot might be only ‘used’ as a tool by the care facility administration to fill an economic gap. The patient, nevertheless, might not only ‘use’ the robot, but socially interact and form a relation with it. How the patient conceptualizes the robot, i.e., understands the robot as a tool or a social interactional partner, needs to be taken into account. This can also be expressed in the way in which the robot appears to the patient as a machinic or “virtual” other (Coeckelbergh 2011) but lacks important prerequisites (e.g., in terms of moral agency).

In the following, we examine how phenomenological and anthropological aspects of human–human interactions, especially the notion of responsivity, may help to conceptualize human–robot interaction and the notion of trust in care situations.

Trust from a phenomenological–anthropological perspective

In the phenomenology of Alfred Schütz, the natural attitude in the lived world forms the starting point of human interaction, and as a consequence, also of theory formation. For Schütz, the lived world is characterized by an implicit familiarity and obvious self-evidence (Schütz and Luckmann 2003). Schütz and Luckmann highlight the significance of this primary natural attitude towards the world for the development of trusting relationships. For Thomas Fuchs, the process of original apprehension of the world is a process of settling into the world, which he calls oikeiosis (Fuchs 2015, p. 102) and which has two main aspects: The experience of a fundamental self-familiarity with the body forms the foundation for the belief in the stability and continuity of our sensual perceptions and of a shared reality. Embodied experiences of safety and belonging emerge from interpersonal and social interactions with primary caregivers that consistently repeat and build a habitual practice of basic trust. This belief consists of an affective relation to the future (Fuchs 2015, p. 104) and can develop into a positive expectation towards the goodwill of another person. Familiarity with the world and trust as a human functioning evolve reciprocally. Trust is thus not just a social phenomenon but has to be understood as a basic human capability. To be able to trust is not only a human need but a potential and behavioral propensity that can be actively realized into a functioning. It is part of the basic capability to form relations, and in our interpretation specifically, to form relations with responsive others, which is in turn part of a good and dignified life (Nussbaum 2000)Footnote 10.

Thomas Fuchs’ analysis sheds light on the fact that basic trust is fundamentally connected to the practice and experience of being cared for. This interpersonal paradigm cannot be reduced to the logic of transaction or the concession of control. The early, embodied, and situational priming with close caregivers informs the human practice of trusting in other situations. In our view, to trust always involves the forming of and entering into a responsive relation with the trustee, which marks the specific quality of trusting with respect to responsivity. We trust when someone cares about us.Footnote 11

As trust is considered crucial for the success of the therapeutic or assistive intervention, the design of the robot comes into the foreground again. It seems to follow that the design of social robots should avoid creating an atmosphere of danger or the appearance of a threat and should instead display a ‘responsive appearance’ that makes human patients interact with the machine as if it were ‘caring’, or in other words: trustworthy. When Coeckelbergh (2012, p. 56) speaks of a “default mode” of trust, we can link his social–phenomenological frame to the human disposition of trusting in the phenomenological–anthropological sense of a capability, an understanding that also undergirds our framework here.

Responsivity and the machinic other: a preliminary phenomenological–anthropological analysis of trust in social robots in care

The social robot as machinic other

As has been discussed in earlier phenomenological analyses, the robot may appear as a ‘quasi-other’ that portrays ‘virtual intentionality’ (Ihde 1990; Coeckelbergh 2011, 2010). In his postphenomenologically oriented relational ontology, Ihde identifies ‘alterity relations’ alongside ‘embodiment relations’, ‘hermeneutic relations’ and ‘background relations’.Footnote 12 With the concept of alterity (also inspired by Levinas 1992), Ihde explores whether and to what extent we can speak of ‘technology-as-other’ or also of a ‘quasi-otherness’ in the encounter with machines. Together, these relations form the dimensions of ‘technological intentionality’ (Ihde 1990) that inform the appearance of technological artifacts ‘as if’ they possessed a form of subjectivity.

The prominence of trust and trustworthiness in the debate about AI-complemented social robots thus highlights this ambiguity: It circles around the blurry ontological status of robots, which oscillates, in western philosophy, between the categories of object and subject (tool and agent), where the latter has traditionally been limited to European, white, male human beings and framed as active, rational, and autonomous, as highlighted by critical discussions in feminist philosophy (Loh and Coeckelbergh 2019). Critiques of the notion of technological artifacts as (potential) subjects resurface with automation and AI, when the appearance and functionality of the robot suggest a form of agency.Footnote 13

As an alternative to these debates, a phenomenological–anthropological account focuses on the phenomenal appearance of the robot while describing the structure of experience that emerges in the interaction between human and robot and its situative embeddedness with respect to human dispositions and capabilities.

Responsivity in caring for others

In care, particularly health care, robots are expected to be more than mere tools: they should recognize our needs and respond to them in appropriate ways. In the absence of a human doctor, nurse, or therapist, the patient’s counterpart is the artificial replacement. The robot is assigned the position of the ‘caring other’ by its function as a replacement of the human caregiver as well as by the situational social dynamic, including the normative dimensions of what the robot ought and ought not to do as a socially assistive robot (Vaesen et al. 2013).Footnote 14

“Care is a practice of awareness and relatedness, which includes self-care and small gestures of attention the same way as nursing and providing interactions as well as collective activities” (Conradi 2001, p. 13, translated by IS)Footnote 15. Caregiver and care receiver are in a relationship that is often characterized by asymmetry and dependency.Footnote 16 Yet, this dependency is one of reliance inside a “world comprised of relationships rather than of people standing alone” (Gilligan 1982, p. 29). Central aspects of a care orientation are attentiveness and awareness, aid, and responsiveness to needs (Bubeck 1995).

Hence, caring for others is an intrinsically relational process that requires certain capacities for social interaction such as language, but also ‘theory of mind’—the ability to see the world from someone else’s perspective—and empathy. Together, these capacities enable us to specifically respond to the needs of someone. This human capacity for responding seems fundamental for successful interactions, yet it has not been a very prominent topic in recent ethical scholarship on trust and trustworthiness concerning human–technology relations.Footnote 17 We want to connect the anthropological observation of a human capacity or capability to respond with the ‘responsive appearance’ and the phenomenon of being perceived as responsive (virtual responsivity) and take a closer look at the phenomenological structure underlying this behavior. Here, we suggest that philosophical theories of responsivity (Responsivität), particularly from the phenomenology of Bernhard Waldenfels, could provide key insights for understanding human–robot interactions and, ultimately, the characteristics of trustworthy relations to robots and other machines. Therefore, to answer the question whether robots can truly care, we need to understand the role of responsivity, particularly in relation to trust.

The interest in responsive behavior has already elicited empirical studies, though without an ethical and phenomenologically informed analysis. Psychological research has been interested in responsive behavior in close human relationships with respect to intimacy (Reis and Clark 2013; Reis 2014), clustering a certain behavioral morphology as ‘responsiveness’ or ‘perceived responsiveness’ towards the partner.Footnote 18 Here, we aim to extend the literature on SARs with a phenomenological–anthropological account of the interrelation between responsiveness, trust, and care.

Responsivity, virtual responsivity and constructed responsiveness

Responding to the needs of another being, answering to the demand of the other, is a special form of caring responsivity: “[C]are is integrated into our dealing with things, others and ourselves. […] It appears in everyday form, for example in the form of carefulness and in institutional form as care for children and old people, as preventive medicine, a public welfare, as custody of children, as religious or secular pastoral care” (Waldenfels 2020, p. 189). The “aim of therapy consists in enhancing or restoring responsivity” (Waldenfels 2020, p. 196) which is in line with the medical anthropology of Viktor von Weizsäcker (1987) and Kurt Goldstein (1934). We would submit that responsivity is a key phenomenon for human–robot interaction, too, when therapy, for Waldenfels, “responds by getting the other to respond” (Waldenfels 2020, p. 203)—which is a stated goal in the case of, for example, pet robots for patients with dementia. Indeed, it is evident that when robots are perceived as responsive, they have a stronger effect on the human user, be it a greater reduction in the perception of pain (Geva et al. 2022), a decrease in salivary cortisol levels (Tanaka et al. 2012), or an increase in prevalence of positive emotions (Crossmann et al. 2018).

As discussed above, trust differs from reliance in the belief that the person we are trusting acts with a specific attitude of goodwill towards us (Baier 1996). The expectation of goodwill, or at least the expectation of a sense of obligation, as a (habitual) affective attitude derived from positive relations with primary caregivers, includes an expectation that the trustee ‘cares about’ the trustor, i.e., responds and not only reacts. Hence, responsivity is crucial in a bi-directional manner: Without responsive others, human beings would not be able to develop the capability to trust. As trust is closely connected to responsivity, trust will be understood particularly as the capability to form affective relations with responsive others.

We argue that the appearance of caring displayed in the functionality of the robot is achieved by the responsive qualities that the robot exhibits, which are in turn generated by an engineered or constructed responsiveness in the technical sense.Footnote 19 In line with Coeckelbergh’s terminology, we could call that phenomenon ‘virtual responsivity’ in the form of constructed responsiveness.Footnote 20 Virtual denotes the experience of possibility, i.e., something exists in virtue of its potential.Footnote 21 Virtual responsivity also refers to the interplay between the perception of something as something and the thing that appears as something. The entity or thing in question offers affordances (in the sense of James J. Gibson 1979)Footnote 22, options to act upon and interact with, that are perceived as stimulus and attraction in a certain way, e.g., as inviting, as threatening or—as trustworthy. Affordances constitute a “symbolic excess” (Waldenfels 2019, pp. 376–377); they can inspire a certain action and more, in terms of a creative openness. Affordances suggest ways of acting and using, which in the case of social robots could be framed as ‘social affordances’. These include not only interactional prompts but also the potential for bonding experiences. A social robot in a care situation thus ‘invites’ the patient to socially respond to the affordances on a relational and possibly emotional level. For determining virtual responsivity, it is not essential whether the robot truly cares (as this would also lead into complex discussions about robot consciousness, etc.), but how and why the patient responds to the social affordances of the robot, resulting in the subjective impression of being cared-for.

Basic dimensions of human responsivity

The capability to respond relates to a more fundamental phenomenal structure that needs further clarification. The phenomenology of responsivity is rooted in the challenge that “the other” poses to the self. This challenge is a moment of alienation, experienced as a ‘call upon the self’ (Waldenfels 2011, p. 36), which imposes on the self a doubled pathos and a demand. This pathic dimension of challenge, provocation, withdrawal and defiance constitutes a crucial asymmetry of call and response for Waldenfels (cp. 2011, p. 37). In responding, the self is already “incited, attracted, threatened, challenged and appealed” (Waldenfels 2010) by something different from itself that calls upon it but never becomes quite normalized. The call, or demand (Anspruch) “is directed at someone” and has “a claim to something” (Waldenfels 2011, p. 37). It starts with looking-at and listening-to (cp. Waldenfels 2011, p. 37) and leads to the invention of an answer that meets the “invite” of the other (Waldenfels 2011, p. 38). Responsivity is thus a “basic trait present in all our behaviors towards things, towards ourselves and towards others” (Waldenfels 2010).

From there follows a scarcely noticed anthropological description: “The human being is an animal which responds” (Waldenfels 2011, p. 38). Responding, however, does not only constitute a specific speech act; it constitutes a basic dyadic phenomenological motif in the structure of human experience that can also be described, in a philosophical–anthropological manner, as a capability to receive and answer. There exists for Waldenfels (2011, p. 38) a necessity and obligation to answer, evoked by the presence of the other. Besides the above-mentioned re-interpretation of Husserl’s concept of intentionality (Husserl 2009), the argument can be interpreted by structuring it into four aspects of responsive relationality: the symbolic–expressive dimension, the embodied dimension, the situative embeddedness in the life-world, and the ethical dimension.

  1.

    The symbolic–expressive dimension: Language is a powerful medium to convey meaning and generate sense, and yet communication and semantic co-practices go beyond solely linguistic features. The symbolic functionality of human language has been identified as the anthropological difference from the stimulus–reaction schema of other beings, conceptualized as ‘responding’ rather than reacting (Cassirer 1944).

    Language serves as the prime medium for responsive expressionality in the form of question and answer (Waldenfels 2007). Similar, but less anthropocentric and more relational, conceptualizations can be found in Donna Haraway’s ‘response-ability’ (2003), which aims at a reciprocal relationship between humans and nonhumans that is rooted in responsive interaction.

  2.

    The embodied dimension: Responding for Waldenfels is part of the “embodied responsory” (leibliches Responsorium) that is not limited to language (Waldenfels 2019, p. 255) but includes emotions, moods, and expressive movement. While looking at and listening to constitute a prototypical embodied-responsive stance to the world, gestures of giving and receiving hint towards an elementary normativity, an ethos of the senses and the body (Waldenfels 2019, p. 388).Footnote 23

  3.

    The situative embeddedness in the life-world: Situations constitute the frame of action and understanding. People can get accustomed to situations that occur repeatedly, following scripts and rules that help to deal with ambiguity. However, human beings can be characterized by their ability to transform this frame of action and understanding, which Waldenfels calls the “sense of possibility” or “virtuality” (Virtualität; Waldenfels 2021, p. 199). The sense of possibility inherently accompanies human perception and interaction with other beings and objects, offering potential threats or welcome chances (cp. Waldenfels 2021, p. 204). The given might not be “taken as that which it really is, but it is already viewed in light of its possibilities” (Waldenfels 2021, p. 204).Footnote 24 The sense of possibility (or virtuality) can thus be linked to the question of the human condition, expressed as a basic and fundamental situation embedded in the life-world with respect to the actualization of potential.

  4.

    The ethical dimension: The ethical dimension follows the basic conception that responding as such starts from “somewhere else”Footnote 25 (Waldenfels 2019, p. 255). The initiative arises from the other, from an “alien impulse” (Waldenfels 2019, p. 255). This line of thought is rooted in the phenomenology of alterity of Emmanuel Levinas, who argued for a priority of the ethical dimension over the self-constituting processes of the subject in traditional Western philosophy (Levinas 1969). In fact, the self is pre-ontologically dependent on the other; its social directedness, so to speak, is an alterity-oriented, i.e., responsive, one.Footnote 26

The social robot as virtual responsive other

The dimensions of human responsivity relate back to the care situation and constitute the phenomenological–anthropological foundations of the experience of being cared-for—and, as a consequence, of the capability and willingness to trust the robot. Because responsivity is the basic trait of interacting with and experiencing the world, human beings respond to the incentives of the robot out of their disposition for trust if the robot appears to be responsive.

Most of these robots are still only accessible in experimental trials and research contexts, so a full integration into the life-world of people in need of care is not a reality yet. However, video documentation, ethnographic observations, and qualitative interviews (e.g., Koren et al. 2022) can be used as source material for a preliminary phenomenological–anthropological description of the phenomenal presence of the robots and their effects on the human disposition for trust, which could be complemented by an in-person analysis in the future. In the meta-analysis of trust in human–robot interaction by Hancock et al. (2011), the “social character” of the robot is the second most important factor. This social character of the robot, we argue, is in part due to its ‘virtual responsivity’, which encompasses appearance and functionality. Virtual responsivity aims to capture the perceived potential of the robot to respond (i.e., its constructed responsiveness), which is informed by the human sense of possibility. In the following, the specific appearances that constitute the constructed responsiveness of the robot will be briefly described.

  1.

    Dialogic responsiveness: Robot systems like LIO offer dialogic communication with the patient they serve. Most prominently, social robots ask questions like ‘What can I do for you today?’ or ‘How can I help you?’ The question is a proposal of options (Waldenfels 2007, p. 191), while adhering to the conventions of polite turn-taking. Birnbaum et al. (2016a) named positive responsive speech acts like “You must have gone through a very difficult time” (Birnbaum et al. 2016a, p. 419) or “I completely understand what you have been through” (Birnbaum et al. 2016a, p. 419) as linguistic codes for responsive behavior. The robot might also comment on the activities of the human as if it were concerned (‘Did you sleep well?’)Footnote 27, offer “confirmations” (Hoffman et al. 2014) or explicit acknowledgement of the previous speech segment (Birnbaum et al. 2016b). Adding language, speech acts, and dialogically fitting reactions like summarizing, affirming, and repeating to the embodied gestures of understanding enhances the constructed responsiveness of the robot (a minimal sketch of such scripted dialogic responsiveness follows after this list).

  2.

    Embodied responsiveness: Attributing competences to the robot seems to rest on the robot’s appearance as an entity that can enact these competences physically. Primarily, it is movement, and seemingly autonomous movement in particular, that evokes qualities of a living being and animacy (Plessner 2016, pp. 179–180)Footnote 28. The materiality and surface appearance of the robot is also the zone of contact between human and machine. Enhanced sensors, cameras, and microphones create a data-sensitive interface to the environment. Understanding trust in human–robot interaction has to consider this sensual encounter with the robot’s ‘body’ and its mobile functions with respect to its sensors.Footnote 29 This dynamic physicality of the robot creates the impression that the robot has received and internally processed the environmental data and interactional cues and responds to them in an active and agentive way.Footnote 30

  3.

    Social responsiveness: Social robots also offer social affordances through their movements, which may generate interactions like play, dance, cooperation, and appreciation: The human user is invited to repeat or imitate what the robot is doing. Prompted to imitate, human and robot enter a synchronized and inter-coordinated interaction in which the robot responds to the human’s movements, giving feedback and correcting the movement if need be—or the other way around. Leading and following are the corresponding responsive social phenomena. Accompanied by communication and esthetic features that enhance familiarity, these socially oriented cues reinforce the impression of alterity and social roles.Footnote 31 Combined, these cues tap into our inherent propensity to trust based on social signals.
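To illustrate, at the level of design, what the scripted character of such dialogic responsiveness can look like, the following minimal sketch shows how acknowledgement tokens and turn-taking prompts of the kind described above can be generated from templates; the phrasings and function names are our own hypothetical examples and do not reproduce the dialog architecture of any specific system such as LIO.

```python
# Minimal sketch of template-based dialogic responsiveness (hypothetical phrasings).
# The 'responsive' speech acts (acknowledgement, summary, follow-up question) are
# selected from pre-scripted templates keyed to the previous user utterance.

ACKNOWLEDGEMENTS = [
    "I completely understand what you have been through.",
    "You must have gone through a very difficult time.",
]

FOLLOW_UP_QUESTIONS = [
    "How can I help you today?",
    "Did you sleep well?",
]

def respond(previous_utterance: str, turn_index: int) -> str:
    """Return a scripted 'responsive' reply that acknowledges the previous
    utterance and keeps the turn-taking going."""
    acknowledgement = ACKNOWLEDGEMENTS[turn_index % len(ACKNOWLEDGEMENTS)]
    follow_up = FOLLOW_UP_QUESTIONS[turn_index % len(FOLLOW_UP_QUESTIONS)]
    # A simple summary cue: repeating part of what was said signals 'understanding'.
    summary = f"You said that {previous_utterance.strip().rstrip('.').lower()}."
    return f"{summary} {acknowledgement} {follow_up}"

print(respond("I could not sleep last night", turn_index=0))
```

Such a sketch makes visible why we speak of constructed responsiveness: the acknowledgement and the follow-up question are selected and assembled by fixed rules, however convincingly they may be perceived as caring attention.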

Ethical implications of trusting robots

Following the preceding analysis, it could be said that the constructed responsiveness of the robot (generated by the social affordances it provides, its AI-enhanced speech recognition, and its physical mobility) gives the robot the appearance of ‘virtual responsivity’. Analogous to the phenomenological structure of responsivity and its dimensions in human lived experience, this raises questions about potential ethical pitfalls, tensions, and challenges of SARs, especially in the context of medical applications.

From a principlist, deontological perspective, interacting with robots may affect a patient’s personal autonomy (e.g., by carrying out actions on behalf of a patient), and we need to balance the potential positive impact of interacting with a robot (beneficence), for example, for therapeutic purposes, with the inherent risks (safety, medical, psychological) to avoid harm to patients (nonmaleficence; Beauchamp and Childress 2001). Another important principle is justice, i.e., the question of how to ensure equal access to and use of complex and costly technologies such as robots, especially in economically and/or technologically constrained settings, given that this technology often requires an advanced digital ecosystem. In addition to these classical principles, we want to highlight three ethically relevant dimensions related to responsivity that also need to be considered in human–robot interaction in medicine: vulnerability, dignity, and the ethical dimensions of the responsive capability to trust.

Vulnerability can be understood as an anthropological foundation of human existence that, in its most basic form, denotes a human’s propensity to experience harm based on certain characteristics, for example, specific medical conditions but also—importantly—group-based (e.g., gender-based) discrimination or other forms of structural inequalities and injustices (Herzog et al. 2022). In the context of biomedical ethics, vulnerability is a multidimensional concept denoting various yet specific ways in which a human being can be vulnerableFootnote 32. Vulnerability increases in situations where responsive interaction is particularly and explicitly needed, for example, when the patient’s own capability to respond is limited or impaired due to sickness, circumstances (isolation), or ageFootnote 33.

SARs are meant to facilitate and improve care situations, and they succeed insofar as their design interplays with the dimensions of responsivity shown above. There is, however, a potential for deception when the ethical and normative significance of responsivity is not appropriately reflected. The constructed responsiveness and the resulting ‘virtual responsivity’ might make it difficult to see that the robot is used as a medium and interlocutor that only allows for an indirect form of ‘virtual responsivity’ without the essential direct reciprocity of ethical responsivity.

It is clear that human beings can respond to virtual, i.e., imagined or perceived, possibilities and turn them into real possibilities, actual potential, and actions or skills. Responsivity structurally includes a sensitivity to the potential of the other, i.e., the “more” of the other. This compels us to take another important ethical consideration into account: The special quality of human responsivity seems to include a claim to dignity. Responsivity not only allows us to answer to the vulnerability of the other person, their fundamental otherness and difference, but also to the perceived potential of the other being as a fundamental capability and important prerequisite for health, wellbeing, and flourishing. Treating someone with dignity means respecting the capacity for responsivity of the person as an actual potential.

One might argue, however, that the virtual responsivity of a social robot alone does not entail specific ethical risks because the robot is not a person, i.e., it does not possess a sense of self, moral consciousness, or moral agency and other characteristics (depending on one’s understanding of what makes a person). We would agree with this point but would argue that insofar as the robot displays an operational or functional type of responsivity and interacts socially in a persuasive manner, it might be mistaken for an ‘Other’ that possesses a more elaborate moral status than that of a mere machine. This elevated moral status might then make a human treat the robot with a kind of dignity that is similar to the dignity they confer on fellow humans. This can be a source of, at least, disappointment if not psychological harm. Imagine a human who becomes so attached to a robot that they feel sad or lonely if the robot is no longer with them (e.g., because therapy has ended), or because the robot cannot, after all, reciprocate in treating the human in a dignified manner since, ultimately, it lacks the conceptual understanding and hence the agency to do so. In situations in which a robot cares for a human, in the sense of providing care work (whether physical/‘manual’ labor such as lifting, washing, etc., or social labor such as communication or other forms of interaction), it might thus indeed matter to the human whether the robot truly cares in the sense of having the capacity for the required psychological and anthropological features of caring (e.g., empathy, a sense of relationality, and full responsivity), all of which are invariably tied to consciousness and personhood, both of which robots in their current form clearly lack. An important empirical question (with ethical implications) at the level of designing and using robots could therefore be to investigate which types of responsivity displayed by a robot elicit which kinds of feelings (e.g., of trust, of feeling respected as a person) in the human ‘user’, and whether and to what degree this would be considered a form of manipulation.

Regarding the ethical implications of trust, we want to propose that trust entails the belief that the other person respects one’s own claim to dignity by virtue of their capacity to respond. It is the belief that one will be treated as a being with an inherent ‘virtuality’ (in the sense of potentiality) to flourish, which forms an intrinsic valueFootnote 34. If responsivity is the fundamental phenomenological structure in relation to the other, then it is also a prerequisite for trusting insofar as it includes an answer to the perceived potential and ‘virtuality’ of the other being as expressed in fundamental capabilities.

Setting the stakes this high for trust and trustworthiness allows us to critically examine the potential of robots with respect to the virtuality and potential of human beings. Especially in care situations, where respecting human dignity still forms an important ethical principle for evaluating actions, we need to reflect on trust as an essential human disposition and capacity to form relations, and trustworthiness as the result of responding to this disposition.

We hope to have shown that our analysis of the complex relationship between trust as a concept and phenomenon in human–human relations and the situative dimensions of care offers a nuanced answer to the question of whether social robots should be trustworthy. The robot is trustworthy if its constructed responsiveness is not used to deprive patients of their claim to dignified care and if the limitations of the robot’s ‘virtual responsivity’ are made transparent to the patient. SARs’ constructed responsiveness should be appropriate to the therapeutic intervention and should not overreach in inducing trusting and bonding behavior in patients. Informed usage should therefore perhaps include disclaimers about the human propensity to trust responsive others and the human orientation towards alterity. ‘Ethically sensitive responsiveness’ could be an important factor in the assessment of social robots. It would include boundaries on the social affordances and could explicitly state reasons why certain responsive features are therapeutically necessary and also, for example, justify the deactivation of responsive features for specifically vulnerable patients. SARs should not be introduced as harmless toys if they are in fact (or are perceived as) ‘machinic others’ with virtual responsive capacities, especially if they are designed to be placed in, and perhaps replace, a responsive human–human relationship of care.