Gregory Bateson, in Mind and Nature, writes: To liken the mountain to a man and talk of its “humor” or…
I am extremely happy to announce that NWO will be funding the project Futures of Language over the next five years. We will start in September 2024; stay tuned for news about positions for postdocs, PhDs, and research software engineers.
We study artisanal and artificial ways of languaging to better understand language + technology, and to reimagine our linguistic futures.
A common trope in recreational mathematics is the grazing goat problem: for a goat tethered to some piece of rope, what is the area it can graze given the length of the rope and various other variables like the shape of the field? In my recent Annual Reviews article I argue that linguistics has something of an inverse grazing goat problem.
Out now in Annual Review of Linguistics: Interjections at the Heart of Language. This review critically considers received views of interjections as involuntary grunts and provides a number of alternative ways of thinking about interjections. I would be very happy if you read it.
Language makes us human. But there is an interesting asymmetry in our willingness to ascribe linguistic capacities to non-humans: animals are seen as having none, whereas according to many, computers may well master language. What curious conception of language makes this asymmetry possible? And what do Descartes and Turing have to do with it? Notes from a new essay about language between animals and computers.
We have a new paper out in which we argue that the robustness and flexibility of human language is underpinned by a machinery of interactive repair. Repair is normally thought of as a kind of remedial procedure. We argue its import is more fundamental. Simply put (and oversimplifying only a bit), we wouldn’t have complex language if it weren’t for interactive repair.
Interjections are, in Felix Ameka’s memorable formulation, “the universal yet neglected part of speech” (1992). They are rarely the subject of historical, typological or comparative research in linguistics, and they are notably underrepresented in descriptive grammars. As grammars are the main source of data for typologists, this is of course a perfect example of a self-reinforcing feedback loop. How can we break this trend?
There is a minor industry in speech science and NLP devoted to detecting and removing disfluencies. In some of our recent work we’re showing this adversely impacts voice user interfaces. Here I review a case where the hemming and hawing is the point — and where removing it undermines our ability to make sense of what people do in interaction.
This is the second part in a two-part series of peer commentary on a recent preprint.
With the initial excitement around ChatGPT dying down, people are catching up on the risk of relying on closed and proprietary models that may stop being supported overnight or may change in undocumented ways. Good news: we’ve tracked developments in this field and there are now over 20 alternatives with varying degrees of openness, most of them more transparent than ChatGPT.
In a recent BBS paper, Clark & Fischer propose that people see social robots as interactive depictions and that this explains some aspects of people’s behaviour towards them. We note that they leave unexamined the notion of “social” in social robots: the question of how technologies like this become enmeshed in human sociality.
Readers of this blog know that I believe serendipity is a key element of fundamental research. There is something neatly…
We have a new paper out in which we find that people overwhelmingly like to help one another, independent of differences in language, culture or environment. This is a surprising finding from the perspective of anthropological and economic research, which has tended to foreground differences in how people work together and share resources.
It’s a common misconception that iconicity or sound symbolism is universal, perpetuated in part by the almost universal success of famous experiments involving pseudowords like bouba and kiki. But iconicity in natural languages is much messier than paradigms like bouba-kiki suggest. Which raises the question: what do we really measure when we measure iconicity? This is what our new paper investigates.
It’s easy to forget amidst a rising tide of synthetic text, but language is not actually about strings of words, and language scientists would do well not to chain themselves to models that presume so. For apt and timely commentary we turn to Bronislaw Malinowski.
Perhaps only those who haven’t read Bakhtin can call themselves true Bakhtinians: the ideas have to reach you and influence you through a polyphony of other texts and people.
Been reading this paper by Damián Blasi, Jo Henrich, Lila Adamou, David Kemmerer and Asifa Majid and can recommend it…
A serendipitous wormhole into #EMCA history. I picked up Sudnow’s piano course online and diligently work through the lessons. Guess…
A preprint claims that “ideas from theoretical linguistics have played no role in [NLP]”. Outside the confines of Chomskyan linguistics folks have long been working on incorporating storage, retrieval, gating and attention in theories of language, with direct relevance to computational models. The only way to give any content to the claim is by giving the notion “theoretical linguistics” the narrowest conceivable reading.
📣New! “Interjections”, a contribution to the Oxford Handbook of Word Classes. One of its aims: rejuvenate work on interjections by…