Language between animals and computers

Language is what makes us human: one of those things, perhaps the one thing, that sets us apart. But there is an interesting asymmetry in our willingness to ascribe linguistic capacities to non-humans: animals tend to be seen as having none, whereas computers are increasingly thought to have mastered language. This asymmetry is the focus of a recent essay I co-authored with a group of colleagues, led by Marlou Rasenberg. 1

People will of course grant that other animals have communication systems, too; they’re just not language. Some people feel they have meaningful interactions with their pets; that too, consensus says, is not language. In short, scientists are generally careful, frugal even, when it comes to ascribing linguistic capacities to non-human animals.

With computers on the other hand we are not so cautious. Already in the 1960s there was the idea that a computer should be able to learn language (as reviewed in Weizenbaum 1976). Today, according to some, the advent of large language models means that computers now have “mastered language” (source) or at least can “acquire human-like grammatical language” (Contreras Kallens et al. 2023).


Terra nullius

So we are hesitant to grant non-human animals even the tiniest bit of linguistic capacity, and yet over-eager to ascribe mastery of language to computers. Why? One reason is the Cartesian history of linguistics and AI: a mentalist-rationalist project that sees language as an abstract symbol system and the mind as an abstract symbol manipulator. As Weizenbaum (1976) wrote, “we, all of us, have made the world too much into a computer”.

One effect of this is that objectively interesting features of human-animal interaction end up in a kind of interdisciplinary terra nullius. This despite the fact that there is plenty to study. How do allospecifics (agents from different species) manage to achieve interactive contingency? How do different bodies co-construct semiotic affordances in interaction? How do we manage to assign meaning to the behaviour of ostensibly non-linguistic others?

Another effect, equally striking, is that our bedazzlement with the fluid outputs of text generators leads us to overlook the very serious interactive and interpretive work it takes for people to interact with these machines. We are overeager interpreters, always ready to invest things with meaning.

In this sense, then, pet owners are not all that different from wide-eyed LLM enthusiasts. And it is one of the jobs of today’s language scientists to take a cold hard look at the interactive and interpretive processes going on in both cases. For this, essentialist Cartesian conceptions of ‘mind’ and ‘language’ are not all that useful; indeed they will frequently stand in the way, biasing us to ignore all that is flexible and co-constructed about our interaction with non-humans and inviting us to privilege the disembodied, text-bound artefacts of text generators. To really crack this nut, we will need to enrich the language sciences with insights from disparate disciplines like ethology, conversation analysis, semiotics, embodied cognitive science, and interaction studies.

The Turing test is overrated

Just as Descartes made language a criterion of mind, so Turing made it a test of machine intelligence. Much is made today of the ability of large language models (LLMs) to generate text that is sufficiently fluid and interactive to “pass” the Turing test, i.e., to fool some people some of the time into thinking they’re dealing with a person. I think this is a category mistake. As we write in our paper:

The Turing test—a closed experimental setup in which a human interpreter judges textual output—foregrounds only the tiniest and most disembodied sliver of language. Today’s large language models (next-token predictors that excel at completing text prompts in plausible ways) can pass at least some forms of this test. What do we learn from this? Whereas some have rushed to the conclusion that this means statistical learning may explain the human capacity for language, here we take a different view: it is time to rethink the disembodied, decontextualized, text-bound conception of language these models are founded on.

In short, the Turing test is overrated. One of my favourite slowly-cited 2 papers, McIlvenny (1993), mentions at least ten ways in which seeing it as proof of “machine intelligence” or “mastery of language” is less than useful, if not actively misleading. Needless to say, Turing himself wasn’t so categorical either; his original essay is well worth a careful read.
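The paper’s description of LLMs as “next-token predictors” can be made concrete with a toy sketch. The corpus and code below are invented for illustration (nothing here comes from the paper): a bigram counter that “continues” text purely from co-occurrence statistics. Real models do this with neural networks trained on billions of tokens, but the underlying move — completing a prompt in a statistically plausible way, with no grounding in bodies or interaction — is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Return the most frequent continuation of `prev`, or None if unseen."""
    counts = follows[prev]
    return counts.most_common(1)[0][0] if counts else None

print(next_token("the"))  # prints: cat ("cat" follows "the" twice, "mat"/"fish" once)
```

The point is not that LLMs are this simple, but that fluent continuation of text can arise from distributional statistics alone — which is exactly why fluent output, on its own, licenses few conclusions about “mastery of language”.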

Recalibrating “linguistics as we know it”

Anyway, the point of our short essay is to draw attention to this odd asymmetry and ask what we can learn from this about our conceptions of language and about our interactions with non-humans, both animals and computers.

One possibility is that it will not “have any bearing on linguistics as we know it” (as one skeptical reply to Cornips’ call for an ‘animal turn’ in linguistics opined recently). But of course, there is nothing sacred about “linguistics as we know it” — indeed, the very notion is question-begging. A key challenge for any science is how to escape conceptualizations that limit us to the field “as we know it” and stay open to recalibration. As semiotician, linguist and networker extraordinaire Victoria Welby wrote almost a century ago:

The living world is plastic, and it is life which has mind and language whereby to express it. Thus rigid definition (and the results obtained by it) must always be secondary, plastic definition primary. Language must mainly follow the inexhaustible subtleties of organic phenomena.

Welby 1931, reprinted in Welby 1985 (ed. H.W. Schmitz)

We think there’s something to be said for an inclusive view of the language sciences that doesn’t exclude animals from consideration by default, that is mindful of the limitations of equating language with abstract symbol systems, and that is open to the study of gradient, semiotically diverse resources for interaction. Have a read:

Rasenberg, Marlou, Azeb Amha, Marjo van Koppen, Emiel van Miltenburg, Lynn de Rijk, Wyke Stommel, and Mark Dingemanse. 2023. ‘Reimagining Language: Towards a Better Understanding of Language by Including Our Interactions with Non-Humans’. Linguistics in the Netherlands 40: 309–17. https://doi.org/10.1075/avt.00095.ras.

References cited

  • Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’ In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM. https://doi.org/10.1145/3442188.3445922.
  • Cornips, Leonie. 2019. ‘The Final Frontier: Non-Human Animals on the Linguistic Research Agenda’. Linguistics in the Netherlands 36 (1): 13–19. https://doi.org/10.1075/avt.00015.cor.
  • Heesen, Raphaela, Marlen Fröhlich, Christine Sievers, Marieke Woensdregt, and Mark Dingemanse. 2022. ‘Coordinating Social Action: A Primer for the Cross-Species Investigation of Communicative Repair’. Philosophical Transactions of the Royal Society B: Biological Sciences 377 (1859): 20210110. https://doi.org/10.1098/rstb.2021.0110.
  • McIlvenny, Paul B. 1993. ‘Constructing Societies and Social Machines: Stepping Out of the Turing Test Discourse’. Journal of Intelligent Systems 3 (2–4). https://doi.org/10.1515/JISYS.1993.3.2-4.119.
  • Turing, A. M. 1950. ‘Computing Machinery and Intelligence’. Mind LIX (236): 433–60. https://doi.org/10.1093/mind/LIX.236.433.
  • Weizenbaum, Joseph. 1976. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman.
  • Welby, Victoria. 1985. Significs and Language: The Articulate Form of Our Expressive and Interpretive Resources. Edited by H. Walter Schmitz. Foundations of Semiotics, v. 5. Amsterdam; Philadelphia: J. Benjamins Pub. Co.

Footnotes

  1. Our piece won shared second prize in an essay competition on Big Questions in Linguistics organised by LOT, the Netherlands Graduate School for Linguistics.
  2. By the way, “slowly-cited” is a term I just made up for interesting work that somehow hasn’t reached escape velocity; this paper has <5 cites as I write, two of which are ours.
