In this paper we consider the awesome flexibility of communicative repair in human interaction and take a peek under the hood. We ask: what elementary building blocks make this possible?
We find that several of the building blocks are found across species (from gibbons apparently self-correcting to chimps & bonobos showing persistence and elaboration) and introduce a conceptual framework that we hope will foster further comparative work.
I've been interested in this topic ever since observing (in http://doi.org/10.1371/journal.pone.0136100) that ways of dealing with communicative trouble pattern within & across species in interesting ways. This year serendipity struck and we were able to get to it with an interdisciplinary team
It was great to work on this with @rapha_heesen, @MarlenFroehlich, Christine Sievers and @mariekewoe; between us, we represent (at least) psychology, anthropology, primatology, philosophy, psychobiology and the language sciences, which made things all the more fun and interesting.
Anyway, while we still seem to have a joint focus of attention, let me just drop this link here again, which (as you can read in the paper) may be a form of persistence if not elaboration http://doi.org/10.31234/osf.io/35hzt — go check it out!
One thing we found is that outside primates, research on sequentially organized social interaction is still rare — most work focuses on acoustics, song structure & ethograms rather than on contingency, sequence & interactional achievement. Lots of opportunities for exciting work!
As we point out in the paper, sequential analysis allows us to unify work on persistence & elaboration in great apes w/ work on repair in humans; and to identify possible continuities or bridging contexts, such as the freeze-look described by @elycorman and @njenfield
One risk of introducing a 'framework' is that it may be interpreted as proposing a simple matrix of ready-to-use labels for reified phenomena. Our goal here is different: we seek to make visible a space of possibilities with room for diversity & gradience
Primates 🦧🧑 are cool, but if there is one thing that I hope our paper will help contribute to it would be a broader interactive turn in communicative ethology across species 🐳🐠🐘🐦🦇 : from signals and their properties to sequential exchanges as an interactional achievement.
Always plot your data. We're working with conversational corpora and looking at timing data. Here's a density plot of the timing of turn-taking for three corpora of Japanese and Spanish. At least 3 of the distributions look off (non-normal). But why?
Plotting turn duration against offset provides a clue: in the odd-looking corpora, a striking number of turns have a negative offset exactly equal to their duration, something that happens when consecutive turns in the data share the exact same end time (very unlikely in real conversation).
Plotting the actual timing of the turns as a piano roll shows what’s up: turns are segmented and overlapped in highly improbable ways. Imagine a conversation that goes like this! (In red are the data points on the diagonal lines above.)
Fortunately some of the corpora we have for these languages don’t show this — so we’re using those. If we hadn’t plotted the data in a few different ways it would have been pretty hard to spot, with consequences down the line. So: always plot your data.
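In code, the diagnostic is simple. Here’s a minimal sketch in Python, using made-up turn data; the tuple format and tolerance are my own illustrative assumptions, not the structure of our actual corpora:

```python
# Sketch of the diagnostic described above. Turns are (speaker, begin, end)
# tuples sorted by begin time. Offset = gap between a turn's start and the
# previous turn's end (negative = overlap). The artifact to catch: a turn
# whose negative offset equals its duration, i.e. a turn that ends at
# exactly the same time as the previous turn.

def flag_suspicious_turns(turns):
    """Return turns that share their exact end time with the previous turn."""
    flagged = []
    for prev, cur in zip(turns, turns[1:]):
        _, _, prev_end = prev
        _, cur_begin, cur_end = cur
        duration = cur_end - cur_begin
        offset = cur_begin - prev_end
        # offset == -duration  <=>  cur_end == prev_end
        if abs(offset + duration) < 1e-9:
            flagged.append(cur)
    return flagged

turns = [
    ("A", 0.0, 1.2),
    ("B", 0.8, 1.2),   # starts in overlap, ends exactly with A's turn: suspicious
    ("A", 1.5, 2.4),
    ("B", 2.3, 3.0),   # ordinary terminal overlap: fine
]
print(flag_suspicious_turns(turns))  # flags only B's first turn
```

A scatter of duration against offset (or a piano roll) will surface the same artifact visually; the point of a programmatic check like this is that it can be run over every corpus before analysis.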
Few historical maps of Ghana’s Volta and Oti regions have been invested with so much political and sociohistorical meaning as Hans Gruner’s 1913 map of the Togo Plateau. Gruner, stationed for over twenty years at Misahöhe in present-day Togo, was a long-time colonial administrator known for his ethnographical and historical knowledge of the area. His name is still known in most localities depicted on the map, as I found in Akpafu myself (I’ve written about the map on this blog before). Besides Akpafu, we find the communities Santrokofi, Gbi, Alavanyo, Nkonya, and Bowiri on this map.
The map is not uncontroversial: it is first and foremost a political object, serving the double goal of documentation and geopolitical regimentation. Gruner worked with the communities bordering the Togo Plateau and saw to it that all of them received an official copy, some of which still survive. The map was accepted by most of the communities, was adopted and used by British colonial authorities in the 1920s, and has since been upheld by the Ghana High Court numerous times as the definitive demarcation for settling land claims and boundary disputes, though the Nkonya-Alavanyo border remains disputed, with conflicts flaring up every now and then (Penu & Essaw 2019).
Gruner is still a household name in part because he was a petty tyrant with a powerful grip on ‘his’ Misahöhe district. In the 1890s he played a key role in expanding the German colonial sphere and violently subjugating people who stood in the way of German commercial and political interests. He led the infamous 1894/95 Togo Hinterland expedition, which sought to extend Germany’s sphere of influence under the ostensible goal of building scientific and ethnographic collections (the latter obtained by buying or by looting). The influential Dente Bosomfo at Kete-Krachi was executed in public by firing squad under Gruner’s direction in November 1894, and Gruner subsequently oversaw the plundering of the Dente shrine (Maier 1980, Hüsgen 2020). He was stationed at Misahöhe between 1896 and 1914 and was presumptuous enough to give himself the title of “Graf von Misahöhe”.
Obscure and hard to find
Despite its historical significance and continuing local geopolitical relevance, access to the Gruner Map has been severely restricted for over a hundred years, and interested parties have been pointed to archival copies in the custody of local authorities, or to libraries in Europe that carry copies of Mitteilungen aus den deutschen Schutzgebieten, the obscure and long-defunct German colonial-era journal in which the map was originally published as a supplement. Here’s a photograph of one of the copies surviving in Ghana:
Now in the public domain
This situation is far from desirable: material of such significance should be freely available, at the highest possible quality, to anyone interested. Fortunately, digitisation puts early sources within reach of anyone with an internet connection, and it has been possible for a while now to find low-resolution copies online. But we can do better. Therefore I am making available a new high-resolution scan of the map that I made myself with the help of the librarians at the MPI for Psycholinguistics. Here it is:
If ~5000x7500px is too large for you, try this slightly more reasonably sized one at 2000×2707 pixels, sourced from the Bayerische Staatsbibliothek: Gruner map, 1913, JPG (2Mb, 2000×2707 pixels). Given that the map is from 1913 and its makers died in 1928 (Sprigade) and 1943 (Gruner), I consider it to be in the public domain.
To be clear: I take no position in any territorial disputes in which this old map may or may not be relevant. My position is that information wants to be free. For an overview of the Nkonya/Alavanyo conflicts, the contested role of the Gruner Map, and alternative ways of determining the relevant boundaries, see Penu & Essaw 2019 (PDF).
Historical & ethnographical interest
The communities featured on the map are, in clockwise order from top right and by their present-day designations: Akpafu, Santrokofi, Gbi, Alavanyo, Nkonya, and Bowiri. Gruner used a Germanized spelling, as seen in Sandrokofi and Kunja, and was somewhat erratic in keeping (Egbi) or leaving out (Lavanyo, Kunja) presyllabic vowels and nasals.
Even though the map is mostly known for its local geopolitical significance, there is another reason to share it: it has great historical-descriptive value. Just taking the Akpafu area (which I know best), it is clear that placenames have been faithfully recorded, in such a way that we can recognise and even parse many local toponyms (e.g., Eprimkato = Iprimu-kato ‘the top of Iprimu’, Klasereré = ɔkàlà-sɛrɛrɛ ‘steep sleeping mat’, and so on). Moreover, many abandoned settlements in the Kùbe mountains (of great historical and archaeological significance because of the famed local iron industry) are indicated on the map.
In short, the Gruner map offered unprecedented detail for its time and was underpinned by considerable geological, ethnographical and sociological research. The research underlying the map was amply documented in an often-overlooked 12-page treatise that accompanies the map and that is also made available digitally here, perhaps for the first time (Gruner 1913):
While I have published translations of German early sources on this blog before, and in general I try to go out of my way to make early work accessible to as many readers as possible, I don’t quite have the resources right now to commit to a translation of this 12-page treatise, which is all the more reason to make the original German available. Perhaps others will beat me to translating it.
References & further reading
Gruner, Hans. 1913. Begleitworte zur Karte des Sechsherrenstocks (Amandeto). Mitteilungen aus den deutschen Schutzgebieten 26(2). 127–139.
Gruner, Hans & Sprigade, P. & Ketzer, H. 1913. Karte des Sechsherrenstockes (bisher Kunjagebirge genannt). Nach den Aufnahmen des Regierungsrats Dr. H. Gruner unter Leitung von P. Sprigade bearbeitet und gezeichnet von H. Ketzer. Mitteilungen aus den deutschen Schutzgebieten 26 (Karte 3).
Hüsgen, Jan. 2020. Colonial Expeditions and Collecting – The Context of the “Togo-Hinterland” Expedition of 1894/1895. Journal for Art Market Studies 4(1). (doi:10.23690/jams.v4i1.100)
Maier, Donna. 1980. Competition for Power and Profits in Kete-Krachi, West Africa, 1875–1900. The International Journal of African Historical Studies 13(1). 33–50. (doi:10.2307/218371)
Penu, Dennis Amego Korbla & Essaw, David Wellington. 2019. Geographies of peace and violence during conflict: The case of the Alavanyo-Nkonya boundary dispute in Ghana. Political Geography 71. 91–102. (doi:10.1016/j.polgeo.2019.03.003)
I’ve been caught up in a few debates recently about Recognition and Rewards, a series of initiatives in the Netherlands to diversify the ways in which we recognize and reward talent in academia. One flashpoint was the publication of an open letter signed by ~170 senior scientists (mostly from medical and engineering professions), itself written in response to two developments. First, the 2019 shift towards a “narrative CV” format in grant applications for the Dutch Research Council (NWO), as part of which applicants are asked to show evidence of the excellence, originality and impact of their work using article-level metrics instead of journal level metrics like the Journal Impact Factor (JIF). Second, the recent announcement of Utrecht University (signatory of the Declaration on Research Assessment) to abandon the JIF in its hiring and promotion processes (see coverage).
Why funders in search of talent are ditching the JIF
Some background will be useful. The decision not to use the JIF for the evaluation of researchers and their work is evidence-based. There is a lot of work in bibliometrics and beyond showing that a 2-year average of a skewed citation distribution is an imperfect measure of journal quality, a driver of perverse incentives, and a misleading proxy for the quality of individual papers. Indeed, Clarivate itself, the for-profit provider of the JIF metric, has this to say about it: “In the case of academic evaluation for tenure, it is inappropriate to use a journal-level metric as a proxy measure for individual researchers, institutions, or articles”.
Despite this evidence, JIFs have long been widely used across the sciences not just as a way to tell librarians which journals are making waves (= what they were designed for) but also as a quick heuristic to judge the merits of work appearing in them or people publishing in them. As they say, ‘Don’t judge a book by its cover, but do judge scientific work by its JIF’. There is a considerable halo-effect attached to JIFs, whereby an article that ends up in a high IF journal (whether by sheer brilliance or simply knowing the right editor, or both) is treated, unread, with a level of veneration normally reserved for Wunderkinder. Usually this is done by people totally oblivious to network effects, gatekeeping and institutional biases.
It appears that the decision to explicitly outlaw the use of JIFs now has people coming out of the woodwork to protest. The first letter (and also another one by a number of junior medical scientists) is aimed specifically at the prohibition against using the JIF, which is (incorrectly) framed as a ban on all quantification. The feeling is that this deprives us of a valuable (if inexact) metric that has long been used as a quick heuristic of the ‘value’ or ‘quality’ of work.
‘Halo? What halo?’
Raymond Poot, main author of the first letter, strongly believes that the JIF, even if inexact, should not be ditched. Saying, “Let’s talk JIF”, he provides this diagram of citation distributions in support:
The diagram compares the citation distributions of Nature and PLOS ONE (an open access megajournal). Poot’s argument, if I understand it well, is that even if Nature’s JIF is skewed by a few highly cited papers, the median number of citations is still higher, at 13, than the median number of cites that PLOS ONE papers receive (which looks like 1). As Poot says in reference to an earlier tweet of mine on the halo-effect, ‘Halo? What halo?’.
We want to identify and reward good work wherever it appears
We’ll get to that halo. First things first. We’re talking about whether using the JIF (a journal’s 2-year citation average) is a good idea if you want to identify and reward good individual work. And especially whether using the JIF is better or worse than using article-level metrics. Another assumption: we care about top science so we would like to identify good work by talented people wherever it appears. Analogy: we want to scout everywhere, not just at the fancy private school where privileges and network can obscure diverse and original talent.
Let’s assume the figure represents the citation distributions reasonably well (I’m going to ignore the obvious folly of taking an average of a very skewed and clearly not unimodal distribution). Where is the JIF halo? Right in front of you, where it says, for publication numbers, “in thousands for PLOS ONE”. Publication volume differs by an order of magnitude. This diagram hides that by heavily compressing the PLOS distribution, which is never good practice for responsible visualization, so let’s fix that. We’ll lose exact numbers (they’re hard to get) but the difference is large enough for this to work whatever the numbers.
The enormous difference in sheer volume means that an OA megajournal is likely to have quite a few papers with more cites than the Nature median — high impact work that we would miss entirely if we focused only on the JIF. The flip side is where we find the halo effect: there are, in any given year, hundreds of Nature papers that underperform quite a bit relative to the IF (indeed half of them underperform relative to the median). This —the skewed distributions for both the megajournal and the glamour journal— shows why it is a bad idea to ascribe properties to individual papers based on how other papers published under the same flag have been cited.
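The volume argument is easy to verify with a toy simulation. Here’s a sketch in Python; the distribution shapes, medians and journal sizes are assumptions picked purely for illustration, not actual citation data:

```python
# Illustrative simulation (not actual citation data): two heavily skewed
# citation distributions, a small selective 'glamour' journal (~1,000
# papers/year) and a megajournal (~20,000 papers/year). Lognormal draws
# give the long right tail typical of citation counts; the parameters
# are assumptions chosen only to mirror the shapes in the argument above.
import random

random.seed(1)

glamour = [int(random.lognormvariate(2.6, 1.0)) for _ in range(1_000)]
mega = [int(random.lognormvariate(0.7, 1.2)) for _ in range(20_000)]

glamour_median = sorted(glamour)[len(glamour) // 2]

# Sheer volume: how many megajournal papers out-cite the glamour median?
above = sum(1 for c in mega if c > glamour_median)
print(f"glamour median: {glamour_median}")
print(f"megajournal papers above that median: {above}")

# The halo's flip side: glamour papers at or below their own median.
at_or_below = sum(1 for c in glamour if c <= glamour_median)
print(f"glamour papers at/below own median: {at_or_below}")
```

Even with a much lower per-paper median, the megajournal’s tail contains hundreds of papers that out-cite the glamour journal’s median, which is exactly the high-impact work a JIF-only filter would miss.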
On average, my paper is better than yours
“But still, surely on average Nature papers are…” Beside the point. I would rather give a talent grant to the bright student who made their way through the public school system (an over-performing PLOS paper) than to one who dangles at the bottom of the distribution at their privileged private school (an underperforming Nature paper). Identifying talent on the basis of JIF instead of content or impact is like handing out bonus points to private-school essays in the central exam. “But on average those elite schools do tend to do better, don’t they?” Unsurprisingly, they do, and if you think such differences are meaningful or worth further reinforcement, it’s worth reading some more sociology, starting perhaps with the diversity-innovation paradox.
There are other issues with our hyperfocus on glamour journals. These journals like to publish good work, but they also apply some highly subjective filters (selecting for ‘broad appeal’ or ‘groundbreaking’ research: phrases that will sound familiar from the desk-rejects that, statistically speaking, many readers from academia will have seen). Nature prides itself on an 8% acceptance rate, the same odds we rightly call a lottery when it concerns grant proposals. Being overly selective inevitably means that you’ll miss out on top performers. One recent study concluded that this kind of gatekeeping often leads us to miss highly impactful ideas and research:
However, hindsight reveals numerous questionable gatekeeping decisions. Of the 808 eventually published articles in our dataset, our three focal journals rejected many highly cited manuscripts, including the 14 most popular; roughly the top 2 percent. Of those 14 articles, 12 were desk-rejected. This finding raises concerns regarding whether peer review is ill-suited to recognize and gestate the most impactful ideas and research.
Gatekeepers of course also introduce their own networks, preferences and biases with regards to the disciplines, topics, affiliations, and genders they’re more likely to favour. In this context, Nature has acknowledged the sexism of how its editorial boards are constituted, and as the New York Times wrote last year, the publishing process at top journals is “deeply insular, often hinging on personal connections between journal editors and the researchers from whom they solicit and receive manuscripts”.
From smoke and mirrors to actual article-level impact
“But doesn’t my Nature paper count for anything?” I sure hope it does. And the neat thing is, under the new call for evidence-based CVs you can provide evidence and arguments instead of relying on marketing or association fallacies. Do show us what’s so brilliant and original about it. Do tell us about your contributions to the team, about the applications of your work in industry, and about the relative citation ratio of your paper. Indeed, such article-level metrics are explicitly encouraged as a direct indicator of impact and originality. To spell it out: a PLOS ONE paper that makes it to the 1% most cited papers of its field is more telling than, say, a Nature paper of the same age that has managed to accrue a meagre 30 cites. An evidence-based CV format can show this equally for any type of scientific output, without distracting readers with the smoke and mirrors of the JIF.
Scientists are people, and people are easily fooled by marketing. That is going to be the case whether we mention the JIF or not. (The concerned medical scientists writing the letters know full well that most grant reviewers will know the “top” journals and make inferences accordingly.) The purpose of outlawing the JIF is essentially a nudge, designed to make evaluators reflect on this practice, and inviting them to look beyond the packaging to the content and its actual impact. I can only see this as an improvement — if the goal is to identify truly excellent, original and impactful work. Content over silly bean counts. True impact over halo effects.
If you want to find actual impact, look beyond the JIF
I have focused so far on PLOS ONE and Nature because that’s the example provided by Raymond Poot. However, arguably these are two extremes in a very varied publishing landscape. Most people will seek more specialised venues or go for other multidisciplinary journals. But the basic argument easily generalizes. Most journals’ citation distributions will overlap more than those of PLOS ONE and Nature. For instance, let’s take three multidisciplinary journals titrated along the JIF ranks: Science Advances (14.1), PNAS (11.1), and Scientific Reports (4.4). Set up by Nature to capture some of the market share of OA megajournals, Scientific Reports is obviously less artificially selective than the other two. And yet its sheer publication volume means that a larger number of high impact papers appear in Scientific Reports than in PNAS and Science Advances combined! This means, again, that if you want to find high impact work and you’re just looking at high IF journals, you’re missing out.
Trying to find good or impactful work on the basis of the JIF is like searching for your keys under the streetlight because that’s where the light is. Looking beyond the JIF, we stand a better chance of identifying truly groundbreaking work across the board, and of fostering diversity and innovation in the process.
Caveats. I’ve used article-level citations as a measure of impact here because they most directly relate to the statistically illiterate but widespread use of the JIF to make inferences about individual work or individual researchers. However, citations come with a time lag, are subject to known biases against underrepresented minorities, and are only one of multiple possible measures of originality, reach and impact. Of course, to the extent that you think this makes actual citations problematic as indicators of article-level impact or importance, it means the JIF is even more problematic.
Worth reading: a group of junior medical scientists is protesting the marketing contest that, in their view, narrative CVs risk degenerating into; the latest contribution to the Recognition & Rewards debate. But nothing is what it seems. On evidence-based CVs, quality & quantification.
First this: the letter names the risk that narrative CVs lead to a kind of competition between stories. That can certainly happen as long as the conventions of the genre have not yet crystallized, as I wrote back in 2019, when NWO introduced the format. Nobody wants a pretty-stories contest; on that we agree. For that matter, I also agree with perhaps the most important point of the first letter, led by Raymond Poot: to measure is to know. You just have to know what it is you are measuring. That is what this piece is about, too.
The medical scientists (both these junior colleagues and the seniors led by @raymondpoot in the opening salvo) seem to object mainly to the term “narrative CV”. The term does invite suspicion, of course: are we now going to tell each other tall tales around the campfire? Surely not! According to the letter writers, a scientist in the new system must write about her background & achievements “in a distinctive way” and “without using quantitative measures”. Fact check: ❌ Quantification in the narrative CV is fine, indeed encouraged!
Let’s just pull up NWO’s call itself: here is the PDF. The relevant passage (§3.4.1, sections 1 and 2) is pasted below.
If the term “narrative CV” doesn’t sit well with you, you can also call it an evidence-based CV: instead of context-free lists & numbers, the idea is to see arguments for the excellence of the candidate & her work, backed up by qualitative and quantitative evidence of impact.
Because look: both quantitative and qualitative indicators are explicitly allowed. You would not have gathered that from the letters by Raymond Poot & fellow medical scientists. The crucial difference is that indicators must clearly relate to specific items: “All types of quality indicators may be mentioned, as long as they relate to only one output item.”
What’s good about this, 1: Where previously you could dump your complete publication list (which mainly benefits prolific writers), this format asks for a motivated selection of 10 items: the n-best method that is customary at Ivy League universities. Nothing wrong with that!
What’s good about this, 2: Where previously you could make a good impression with journal-level metrics such as the IF (statistically speaking, no more than a dressed-up halo effect), you now have to provide hard evidence of the impact & importance of your work.
What’s good about this, 3: Where previously you could parade a high h-index (not corrected for head starts due to age, co-authorship, self-citations & other biases), you may now show which of your papers really are that brilliant & original.
The claim that quantification is no longer allowed is nonsense
In my view, these are also 3 ways in which an evidence-based CV offers more opportunities precisely to the ‘vulnerable groups’ the letter mentions. (And also: 3 ways in which the head start of the traditionally privileged is somewhat evened out; might that not be part of the pain?)
In short, the claim that quantification is no longer allowed is nonsense. You just can no longer get away with the most indirect numbers (which mainly say something about privileges, connections and co-authors); instead, you now have to provide hard evidence of the impact & importance of your work.
I should say: the misunderstandings in the letters do not come out of nowhere. “Narrative CV” is not a great term, and there is apparently a lack of strong examples of responsible & nuanced quantification at the article level. Work to do for Recognition & Rewards and NWO!
Finally: all the letter writers, from @raymondpoot et al. to @DeJongeAkademie, @RadboudYA etc. to the junior medical scientists, agree that the erosion of funding is the real death blow for top science in our country: more investment in fundamental research is crucial.
Addendum, 11 May 2022:
Well, my argument in this thread, or at least the term ‘evidence-based CV’, seems to have found a receptive ear at NWO. Where on my CV should I put that? 😃
This Lingbuzz preprint by Baroni is a nice read if you’re interested in linguistically oriented deep net analysis. I did feel it’s a bit hampered by the near-exclusive equation of linguistic theory with generative/Chomskyan approaches. (I know it makes a point of claiming a “very broad notion of theoretical linguistics”, but it doesn’t really demonstrate this, and throughout, the implicit notion of theory is near-exclusively aligned with GG and its associated concerns of competence, poverty of the stimulus, et cetera.)
For instance, it notes (citing Lappin) that theoretical linguistics “played no role” in deep learning for NLP, but while this may hold for generative grammar (GG), linguistic theorizing was much broader than that right at the start of connectionism and RNNs, e.g. in Elman 1991.
In fact, just look at the bibliography of Elman’s classic RNN work and tell us again how exactly theoretical linguistics “played no role”: Bates & MacWhinney, Chomsky, Fillmore, Fodor, Givón, Hopper & Thompson, Lakoff, Langacker, they’re all there. Elman’s bibliography is a virtual Who’s Who of big-tent linguistics at the start of the 1990s. The only way to give any content to Lappin’s claim (and by extension, Baroni’s generalization) is to give the notion of “theoretical linguistics” the narrowest conceivable reading.
However, Baroni’s point may generalize: perhaps modern-day usage-based, functional, and cognitive approaches to ling theory aren’t drawing as heavily on current NLP/ML/DL work as they could either. Might a lack of reciprocity play a role? After all, the well known ahistoricism and lack of interdisciplinary engagement of NLP today does not exactly invite productive exchange. (Though some of us try.)
The theory=Chomsky equation also makes its appearance at the end, where Baroni muses about incorporating storage, retrieval, gating and attention in theories of language. Outside the confines of Chomskyan linguistics, folks have long been working on precisely such things. Work by Joan Bybee, Maryellen MacDonald, Morten Christiansen, and others would merit a mention!
In sum, Baroni’s piece provides an informative if partial review of recent work and includes bold proposals (e.g., deep nets as algorithmic linguistic theories), worth reading if you’re interested in a particular kind of linguistics. Consider pairing it with this well-aged bottle of Elman 1991!
Bybee, J. L. (2010). Language, Usage, and Cognition. Cambridge: Cambridge University Press.
Christiansen, M. H., & Chater, N. (2017). Towards an integrated science of language. Nature Human Behaviour, 1, 0163. doi: 10.1038/s41562-017-0163
Elman, J. L. (1991). Distributed Representations, Simple Recurrent Networks, And Grammatical Structure. Machine Learning, 7, 195–225. doi: 10.1023/A:1022699029236
Lappin, S. (2021). Deep learning and linguistic representation. Boca Raton: CRC Press.
MacDonald, M. C., & Christiansen, M. H. (2002). Reassessing working memory: Comment on Just and Carpenter (1992) and Waters and Caplan (1996). Psychological Review, 109(1), 35–54. doi: 10.1037/0033-295X.109.1.35
📣 New! “Interjections”, a contribution to the Oxford Handbook of Word Classes. One of its aims: to rejuvenate work on interjections by shifting focus from stock examples (ouch, yuck) to real workhorses like mm-hm, huh? and the like. Abstract:
No class of words has better claims to universality than interjections. At the same time, no category has more variable content than this one, traditionally the catch-all basket for linguistic items that bear a complicated relation to sentential syntax. Interjections are a mirror reflecting methodological and theoretical assumptions more than a coherent linguistic category that affords unitary treatment. This chapter focuses on linguistic items that typically function as free-standing utterances, and on some of the conceptual, methodological, and theoretical questions generated by such items. A key move is to study these items in the context of conversational sequences, rather than as a mere sideshow to sentences. This makes visible how some of the most frequent interjections streamline everyday language use and scaffold complex language. Approaching interjections in terms of their sequential positions and interactional functions has the potential to reveal and explain patterns of universality and diversity in interjections.
Anyone who writes about interjections has first to cut through a tangle of assumptions about marginality, primitivity, and insignificance. I think these assumptions are incoherent: linguistics without interjections is like chemistry without the noble gases.
Re-centering interjections is possible now because there’s plenty of cool new interactional work by folks like Emily Hofstetter, Elliott Hoey, Nick Williams, Kristian Skedsmo, Johanna Mesch, and many others.
A fairly standard take in linguistics is that interjections are basically public emissions of private emotions, a view that is remarkably close to folk notions about the category. However, corpus data suggest that interjections expressing emotions are actually not all that frequent: interactional and interpersonal uses are much more prominent (yet the least studied). This is why re-centering is important.
In line with this, part of the chapter focuses on some of the most frequent interjections out there: continuers, minimal particles that acknowledge a turn is underway and more is anticipated (like B’s m̀:hm seen at lines 53 and 57)
I am always impressed by the high-precision placement of these items and by their neat form-function fit, a pliable template for signalling various degrees of alignment and affiliation, with closed lips signifying ‘keep going, I won’t take the floor’.
Traditional linguistic tools are ill-suited to the nature of interjections. Perhaps this is why most grammars do little more than list a handful of them and note how they don’t fit the phonological system. Fortunately, interactional linguistics and conversation analysis offer robust methodological tools, ready to be used in descriptive & comparative work. One aim of this piece is to point folks to some concrete places to start.
In the NWO project Elementary Particles of Conversation, we undertake the comparative study of these kinds of items and their consequences for language; this chapter aims to contribute towards that goal by fostering more empirical & theoretical work.
Some further goals I set myself for this piece: 1) foreground empirical work rather than traditional research agendas; 2) elevate new work by junior & minoritized scholars; 3) treat matters in modality-inclusive & modality-agnostic ways.
I found that often, these goals converge & point to exciting new directions. For instance, including sign language data (as in this case from Norwegian Sign Language by Kristian Skedsmo, but also work by Johanna Mesch) shows the prospects of a cross-modal typology of interjections.
With the tenth World Congress of African Linguistics around the corner (June 7-12, 2021), let me draw your attention to a workshop we are organizing: Centering pragmatic phenomena on the margins in African languages. Convened by Felix Ameka and Mark Dingemanse, this workshop gathers researchers from at least 8 African universities and from around the world to report on the latest developments in this exciting research area. Workshop abstract:
In pragmatics, as in linguistics in general, various expressive devices that are indispensable in communication have been left on the margins as being non-conventional, non-lexical or non-verbal. This includes a range of interjections, particles, response cries, calls and conversational gestures, but also bodily conduct such as sighs, sniffs, coughs, and winks. Despite their ubiquity in everyday interaction, many of these devices are thought of as extra-linguistic or paralinguistic and have consequently been mostly ignored in theoretical and empirical linguistic work. There is a growing realization in the language sciences that grappling with these devices holds the key to an understanding of language, the unique feature of the human species. In this workshop, we focus on the interactional uses of linguistic elements, or more broadly semiotic resources, that are traditionally thought of as extra-grammatical, non-lexical, or para-linguistic, based on linguistic practices and norms in African communities of practice, with a view to moving them from the margins to the centre in African and general linguistics.
Felix Ameka & Mark Dingemanse, convenors
The workshop takes place online as part of WOCAL, for which registration is required. Registration is free for participants from the Global South and €50 for others. The fees are used to support the inclusiveness and diversity of the overall programme, including technical support, subtitling, live captioning, sign language interpretation and other measures.
Now tell us in earnest that only one of these contains “theoretical implications that shed light on the nature of language and the language faculty”. (That was the phrasing a handling editor at Glossa used to desk-reject Henner’s submission.)
The point here is not to hate on a published paper (though to be honest I think that paper is flawed at the very least because of its unexamined deficit-based view of autism). The point is also not to argue that a preprint should be published as is. It is to argue that desk-rejecting that 2nd paper as “mainly about language use” is incorrect, far from theoretically neutral, and problematic for a journal of general linguistics.
The difference is that paper 1 takes a disability-as-deficit approach, which is currently the status quo in linguistics/psychology/education, whereas paper 2 asks us to consider an alternative interpretation, at which point people aligned with the status quo shut down.
Figuring out the myriad ways in which the second paper interrogates, uproots, and respecifies the theoretical premises of the first is left as an exercise to the reader.