— by Mark Dingemanse & Andreas Liesenfeld, Radboud University Nijmegen
Clark & Fischer propose that people see social robots as interactive depictions and that this explains some aspects of people’s behaviour towards them. We agree with C&F’s conclusion that we don’t need a novel ontological category for these social artefacts, and that they can be seen as part of a lineage of depictions stretching from Michelangelo’s David to Mattel’s talking Barbie doll. We have two constructive contributions to make.
First, we think C&F undersell the power of their depiction account as a tool for designers: it can help us understand the work of creating simulacra of social agents (Suchman, 2007), and can help explain people’s initial responses to them. When mechanical artefacts are endowed with cues to agency like voice, movement, and likenesses of body parts, this shapes people’s perceptions of their affordances for interaction. It may also help explain the uncanny valley effect (Mori, MacDorman, & Kageki, 2012): when depictions are so lifelike as to be mistakeable for the real thing, a closer look may jolt us from as-is to as-if perception.
Second, and more critically, we note that C&F leave unexamined the notion of “social” in social robots: the question of how technologies like this become enmeshed in human sociality (Heath & Luff, 2000). In particular, C&F’s focus on robots-as-depictions risks losing sight of how exactly people make robots part of their ongoing interactional business. The transcripts C&F present provide direct evidence of this interactional work. For instance, the laughter and non-serious responses to Smooth and Aibo show how people normalise strange situations by turning to humour and playful exploration (Moerman, 1988). It is not just that people can see these objects as depictions; they treat them as “liminal members” that can barely hold up their end of a conversation (Kopp & Krämer, 2021). This is where the limits of the depiction account come into view. There are many other beings that, for better or worse, are sometimes treated in interaction as having partial or liminal membership (Sacks, 1989). Surely children and pets are not depictions, and yet they too can be treated as less than full agents, with less than full social accountability.
In sum, while the robots-as-depictions account helps make sense of the construction and perception of humanoid robots, it needs to be coupled with investigations of our interaction with them to explore how they become social.
References
- Clark, H. H., & Fischer, K. (2022). Social robots as depictions of social agents. Behavioral and Brain Sciences, 1–33. https://doi.org/10.1017/S0140525X22000668
- Heath, C., & Luff, P. (2000). Technology in action. Cambridge: Cambridge University Press.
- Kopp, S., & Krämer, N. (2021). Revisiting human-agent communication: The importance of joint co-construction and understanding mental states. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.580955
- Moerman, M. (1988). Talking culture: Ethnography and conversation analysis. Philadelphia: University of Pennsylvania Press.
- Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [From the field]. IEEE Robotics & Automation Magazine, 19(2), 98–100. https://doi.org/10.1109/MRA.2012.2192811
- Sacks, H. (1989). Extract nine: For children: A limited set of categories. Human Studies, 12(3–4), 363–364. https://doi.org/10.1007/BF00142783
- Suchman, L. A. (2007). Human-machine reconfigurations: Plans and situated actions (2nd ed.). Cambridge: Cambridge University Press.