
Dynamics of Perceptible Agency: The Case of Social Robots

Published in Minds and Machines

Abstract

How do we perceive the agency of others? Do the same rules apply when interacting with others who are radically different from ourselves, like other species or robots? We typically perceive other people and animals through their embodied behavior, as they dynamically engage various aspects of their affordance field. In second-personal perception we also perceive the social or interactional affordances of others. I discuss various aspects of perceptible agency, which might begin to give us some tools to understand interactions with agents truly other than ourselves, or “perhaps agents” like present and future social robots. Robots have various kinds of physical and behavioral presence and thus make their agency—if we want to call it that—perceptible in ways that computers and other forms of AI do not. I discuss the largely dualist assumptions behind the hidden bodies of traditional Turing tests, as well as the social affordance effects of such indirect interactions as opposed to interactions in physically shared space. The question is what role various abilities to reveal, hide and dynamically control the body and broader behavior play in heterogeneous tech- or machine-mediated interactions. I argue that the specifics and richness of perceptible agency matter to the kind of reciprocity we can obtain.


Notes

  1. See also Hakli (2014) for a recent analysis of attributions of sociality to current human-robot interactions.

  2. In Ishiguro’s conception of “android science” (Ishiguro 2007) the idea is that the inspiration and learning between cognitive science and social robotics should be “bidirectional”. I would take it a step further and say that the knowledge is not only mutually applied but generated in these transdisciplinary meetings of biological and computational brain and behavioral sciences.

  3. See here also Seibt’s (2014) call for an ontologically neutral idiom for human-robot interactions.

  4. Marley is perhaps more the disseminator than the creator of this saying. A similar expression is attributed e.g. to Lincoln: “You may fool all the people some of the time, you can even fool some of the people all of the time, but you cannot fool all of the people all the time.”

  5. See also Sterrett (2003) for an interesting analysis of the difference between what is typically called the “Turing test” and Turing’s first formulation of the “original imitation game”.

  6. An analogy is that if one adds sufficient noise to one’s behavior, a lie detector is rendered useless. Of course, a further question is whether Player B is privy to Player A’s responses, which might change the outcome. Leaving that aside, a good Player A should bring down the interrogator’s score; however, a good interrogator should randomize her guesses if they start to fall below 50%, thus effectively reaching an equilibrium.
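The equilibrium claimed in this note can be illustrated with a small simulation. This is a hypothetical toy model (not anything in Turing’s or Wheeler’s texts): `noise` is the assumed probability that Player A’s answer misleads a cue-following interrogator, and the interrogator switches to coin-flipping once her running score falls below chance.

```python
import random

def simulate(noise: float, rounds: int = 10_000, seed: int = 0) -> float:
    """Fraction of rounds in which the interrogator identifies Player A correctly.

    Toy model: with probability `noise` Player A's answer misleads the
    interrogator; otherwise it is a genuine cue, so cue-following scores
    1 - noise. If the interrogator's running score drops below chance,
    she switches to random guessing, which pins her near 50% -- the
    equilibrium suggested in the note.
    """
    rng = random.Random(seed)
    correct = 0
    randomizing = False
    for i in range(1, rounds + 1):
        if randomizing:
            correct += rng.random() < 0.5    # coin-flip guess: ~50% correct
        else:
            correct += rng.random() >= noise  # follow the cue: ~(1 - noise) correct
            if i >= 100 and correct / i < 0.5:
                randomizing = True            # score below chance: start randomizing
    return correct / rounds
```

With low noise the interrogator keeps her advantage (`simulate(0.2)` is roughly 0.8), while with heavy noise her cue-following fails and randomization pulls her score back toward 0.5 (`simulate(0.9)` is roughly 0.5) — the equilibrium the note describes.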

  7. One wonders, of course, whether an interrogator would guess that senseless, irrelevant, inflexible or otherwise subpar answers come from the man or the woman. If the interrogator’s choice is random here, note that the strategy of the computer could actually be to not make much sense, but let’s ignore this possibility for now.

  8. He writes: “In order that tones of voice may not help the interrogator the answers should be written, or better still typewritten. The ideal setting is to have a teleprinter communicating between two rooms. Alternatively the questions and answers can be repeated by an intermediary.” (Turing 1950).

  9. This is not to say that mediated interactions cannot also be intimate and real, nor that they cannot in their own way involve something like a virtually shared and thus to some extent negotiated affordance space. However, to properly explain this point we first need to look at what is meant by a concretely or perceptually shared affordance space and direct perceptible agency.

  10. Our perceptual systems have evolved special proclivities for the human shape and motion; see, for example, the massive body of neuroscience research on face recognition and the role of the fusiform gyrus in the temporal lobe.

  11. Note that the many types of movement and their different central and peripheral control circuits are highly layered and merged in our overall behavioral output, and thus categorizing them at a high functional level can be misleading. Perceptible perception, e.g., is clearly a mélange of many levels of control, intentionality and automaticity. See Brincker and Torres (forthcoming); see also Torres (2011, 2013).

  12. This is precisely what is exploited in lie detector tests, using measurable autonomic responses, such as galvanic skin responses.

  13. Note here how these three aspects of self-self, self-world and self-other mutually inform each other. I.e., the fact of your perceptible perception and perceptible autonomic reactions makes certain forms of low-level social reciprocity and entrainment possible. Conversely, such low-level social reciprocity—like in the case of infants and caregivers—can also support autonomic regulation within the individual (Trevarthen 1979).

  14. There are multiple tricky issues here, some of which go to the heart of the pivotal relationality and potentiality of the affordance concept. For various views on this thorny but important theoretical issue, see Chemero (2003), Stoffregen (2003), Turvey (1992) and Rietveld and Kiverstein (2014).

  15. See Brincker (2015a) for an analysis of distance and the freeing of the imagination in aesthetic perception.

  16. I am echoing a critique Cornel West made in a TV interview of Obama’s tendency to start with a compromising position.

  17. Emotional responses are particularly intriguing in that they are, in a sense, perceptible expressions of what one “feels”, i.e. they are typically both self- and other-directed and inherently communicative – even if only to oneself. See also Breazeal and Brooks (2005).

  18. It could here be informative to do a detailed comparison to peripheral systems and human body multiplicities in terms of aspects of centralized and decentralized agency and their perceptible nature.

  19. The laptop I am currently typing on has, e.g., a keyboard, webcam, microphone, screen and pop-up reminders that it is time to update the software. It might in a sense be seen as a perceptible interaction partner, even if it does not have a body in a traditional anthropomorphic sense, as it presents with some sensor and effector features that some might want to think of as very weakly autonomous or agentic. The typical perceptible presence of laptop computers does not generally induce a sense of social interaction but rather presents as a tool or a medium for interaction. However, note that many traditional robots are often seen as tools as well.

  20. An illustration: I ask my 8-year-old to pretend to be a robot, and she jumps into the cartoon image of a robot: she transforms her voice to eliminate the fluid intonation and spits out words individually, equally spaced like beads on a string, and she makes her movements stiffer and more jerky; again, as with the voice, the movements are produced serially rather than merging fluidly. The sorts of norms reflected in her pretense performance show something about our expectations about the appearance of a robot – but the question is whether they tell us something about our expectations about their overall functioning and capacities as well.

  21. The behavioral range is impressive, but as expected there are still many limitations to the robot’s movements as compared to those of humans, in part due to the number of individual actuators, which is high (50 in total, 13 in the face) but crude compared to human behavior (Ishiguro 2007).

  22. For a recent overview of the frame problem in AI see Shanahan (2016).

  23. Note also that Wheeler’s objective is to argue that plasticity of behavior should be seen as a necessary but not sufficient condition of thinking. Given this aim, he seems to hold that the machine’s behavior only counts towards this criterion insofar as it is “in virtue solely of its own intrinsic capacities” (Wheeler 2010). This ‘onboard’ requirement is not fulfilled by most social robots, as distal control and design are frequent. Such distal agencies might be conducive to successful interaction but, under the proposed analysis of perceptible agency, would be of a certain subspecies importantly different from the ‘onboard’ kind.

  24. A comparison of this interpretation of the uncanny valley to various discriminatory practices enforcing neat divisions based on gender, race, sexuality etc. could be of interest here.

References

  • Ansuini, C., Cavallo, A., Bertone, C., & Becchio, C. (2014). The visible face of intention: Why kinematics matters. Frontiers in Psychology, 5, 815. doi:10.3389/fpsyg.2014.00815.

  • Aunger, R., & Curtis, V. (2015). Gaining control: How human behavior evolved. Oxford: Oxford University Press.

  • Bickhard, M. H. (2000). Autonomy, function, and representation. Communication and Cognition-Artificial Intelligence, 17(3–4), 111–131.

  • Boden, M. A. (1988). Escaping from the Chinese room. In J. Heil (Ed.), Computer models of mind. Cambridge: Cambridge University Press.

  • Breazeal, C. (2003). Toward sociable robots. Robotics and Autonomous Systems, 42(3–4), 167–175.

  • Breazeal, C., & Brooks, R. (2005). Robot emotion: A functional perspective. In J.-M. Fellous & M. A. Arbib (Eds.), Who needs emotions? The brain meets the robot (pp. 271–310). New York: Oxford University Press.

  • Breazeal, C., Buchsbaum, D., Gray, J., Gatenby, D., & Blumberg, B. (2005). Learning from and about others: Towards using imitation to bootstrap the social understanding of others by robots. Artificial Life, 11(1–2), 31–62.

  • Brincker, M. (2010). Moving beyond mirroring: A social affordance model of sensorimotor integration during action perception. Dissertation, City University of New York.

  • Brincker, M. (2014). Navigating beyond “here & now” affordances—on sensorimotor maturation and “false belief” performance. Frontiers in Psychology, 5, 1433. doi:10.3389/fpsyg.2014.01433.

  • Brincker, M. (2015a). The aesthetic stance: On the conditions and consequences of becoming a beholder. In Aesthetics and the embodied mind: Beyond art theory and the Cartesian mind-body dichotomy (pp. 117–138). Netherlands: Springer.

  • Brincker, M. (2015b). Evolution beyond determinism: On Dennett’s compatibilism and the too timeless free will debate. Cognition and Neuroethics, 3(1), 39–74.

  • Brincker, M. (2015c). Beyond sensorimotor segregation: On mirror neurons and affordance space tracking. Cognitive Systems Research, 34, 18–34.

  • Brincker, M. (forthcoming). Privacy in public and the contextual conditions of agency. In T. Timan, B.-J. Koops & B. Newell (Eds.), Privacy in public space: Conceptual and regulatory challenges. Edward Elgar.

  • Brincker, M., & Torres, E. B. (2013). Noise from the periphery in autism. Frontiers in Integrative Neuroscience, 7, 34.

  • Brincker, M., & Torres, E. B. (forthcoming). Why study movement variabilities in autism? In E. B. Torres & R. J. Whyatt (Eds.), Autism: The movement-sensing perspective. CRC Press.

  • Cattaneo, L., Fabbri-Destro, M., Boria, S., Pieraccini, C., Monti, A., Cossu, G., et al. (2007). Impairment of actions chains in autism and its possible role in intention understanding. Proceedings of the National Academy of Sciences, 104(45), 17825–17830.

  • Chemero, A. (2003). An outline of a theory of affordances. Ecological Psychology, 15(2), 181–195.

  • Dautenhahn, K., Nehaniv, C. L., Walters, M. L., Robins, B., Kose-Bagci, H., Mirza, N. A., et al. (2009). KASPAR: A minimally expressive humanoid robot for human–robot interaction research. Applied Bionics and Biomechanics, 6, 369–397.

  • De Jaegher, H. (2009). Social understanding through direct perception? Yes, by interacting. Consciousness and Cognition, 18, 535–542.

  • De Jaegher, H., & Di Paolo, E. (2007). Participatory sense-making. Phenomenology and the Cognitive Sciences, 6(4), 485–507.

  • De Jaegher, H., Di Paolo, E., & Gallagher, S. (2010). Can social interaction constitute social cognition? Trends in Cognitive Sciences, 14(10), 441–447.

  • Dennett, D. C. (1984). Elbow room: The varieties of free will worth wanting. Cambridge, MA: MIT Press.

  • Dennett, D. C. (1989). The intentional stance. Cambridge, MA: MIT Press.

  • Descartes, R. (1637/2008). Discourse on method. Cosimo, Inc.

  • Dorigo, M. (Ed.) (2006). Ant colony optimization and swarm intelligence: 5th international workshop, ANTS 2006, Brussels, Belgium, September 4–7, 2006, proceedings (Vol. 4150). Berlin: Springer.

  • Farennikova, A. (2013). Seeing absence. Philosophical Studies, 166(3), 429–454.

  • Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford: Oxford University Press.

  • Gallagher, S. (2005). How the body shapes the mind. Oxford: Clarendon Press.

  • Gallagher, S. (2008). Direct perception in the intersubjective context. Consciousness and Cognition, 17, 535–543.

  • Gibson, J. J. (1979). The ecological approach to visual perception (p. 127). Boston: Houghton Mifflin.

  • Hakli, R. (2014). Social robots and social interaction. In Sociable robots and the future of social relations: Proceedings of Robo-Philosophy 2014 (pp. 105–114). IOS Press.

  • Hendriks-Jansen, H. (1996). Catching ourselves in the act: Situated activity, interactive emergence, evolution, and human thought. Cambridge, MA: MIT Press.

  • Ishiguro, H. (2007). Android science. In Robotics research (pp. 118–127). Berlin, Heidelberg: Springer.

  • Kowler, E. (2011). Eye movements: The past 25 years. Vision Research, 51(13), 1457–1483.

  • Kozima, H., Michalowski, M. P., & Nakagawa, C. (2009). Keepon. International Journal of Social Robotics, 1(1), 3–18.

  • Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to western thought. London: Basic Books.

  • Merleau-Ponty, M. (1964). The primacy of perception: And other essays on phenomenological psychology, the philosophy of art, history and politics. Evanston, IL: Northwestern University Press.

  • Pijpers, J. R., Oudejans, R. R. D., & Bakker, F. C. (2007). Changes in the perception of action possibilities while climbing to fatigue on a climbing wall. Journal of Sports Sciences, 25(1), 97–110.

  • Rietveld, E. (2008). Unreflective action: A philosophical contribution to integrative neuroscience. Dissertation, University of Amsterdam, ILLC Dissertation Series.

  • Rietveld, E., & Kiverstein, J. (2014). A rich landscape of affordances. Ecological Psychology, 26(4), 325–352.

  • Roskies, A. (2016). Neuroethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/spr2016/entries/neuroethics/

  • Sartre, J. P. (1943/1992). Being and nothingness (H. E. Barnes, Trans.). New York: Washington Square Press.

  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.

  • Seibt, J. (2014). Varieties of the ‘as if’: Five ways to simulate an action. In Sociable robots and the future of social relations: Proceedings of Robo-Philosophy 2014, 273, 97.

  • Shanahan, M. (2016). The frame problem. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. http://plato.stanford.edu/archives/spr2016/entries/frame-problem/

  • Sterrett, S. G. (2003). Turing’s two tests for intelligence. In The Turing test (pp. 79–97). Netherlands: Springer.

  • Stoffregen, T. A. (2003). Affordances as properties of the animal-environment system. Ecological Psychology, 15(2), 115–134.

  • Thelen, E. (2000). Grounded in the world: Developmental origins of the embodied mind. Infancy, 1, 3–28.

  • Tomasello, M. (2014). A natural history of human thinking. Cambridge, MA: Harvard University Press.

  • Torres, E. B. (2011). Two classes of movements in motor control. Experimental Brain Research, 215(3–4), 269–283.

  • Torres, E. B. (2013). Signatures of movement variability anticipate hand speed according to levels of intent. Behavioral and Brain Functions, 9(10), 1744–1790.

  • Torres, E. B., Brincker, M., Isenhower, R. W., Yanovich, P., Stigler, K. A., Nurnberger, J. I., Metaxas, D. N., & José, J. V. (2013a). Autism: The micro-movement perspective. Frontiers in Integrative Neuroscience, 7, 32. doi:10.3389/fnint.2013.00032.

  • Torres, E. B., Yanovich, P., & Metaxas, D. N. (2013b). Give spontaneity and self-discovery a chance in ASD: Spontaneous peripheral limb variability as a proxy to evoke centrally driven intentional acts. Frontiers in Integrative Neuroscience, 7, 46. doi:10.3389/fnint.2013.00046.

  • Trevarthen, C. (1979). Communication and cooperation in early infancy: A description of primary intersubjectivity. In M. Bullowa (Ed.), Before speech: The beginning of interpersonal communication (pp. 321–347). Cambridge: CUP Archive.

  • Tronick, E., et al. (1978). The infant’s response to entrapment between contradictory messages in face-to-face interaction. Journal of the American Academy of Child Psychiatry, 17(1), 1–13.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

  • Turkle, S. (2010). In good company? On the threshold of robotic companions. In Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 3–10). Amsterdam: John Benjamins Publishing Company.

  • Turvey, M. T. (1992). Affordances and prospective control: An outline of the ontology. Ecological Psychology, 4(3), 173–187.

  • Varela, F. G., Maturana, H. R., & Uribe, R. (1974). Autopoiesis: The organization of living systems, its characterization and a model. Biosystems, 5(4), 187–196.

  • Wheeler, M. (2010). Plastic machines: Behavioural diversity and the Turing test. Kybernetes, 39(3), 466–480.

  • Zahavi, D. (2002). Intersubjectivity in Sartre’s Being and Nothingness. Alter, 10(265), 81.


Correspondence to Maria Brincker.


Cite this article

Brincker, M. Dynamics of Perceptible Agency: The Case of Social Robots. Minds & Machines 26, 441–466 (2016). https://doi.org/10.1007/s11023-016-9405-2
