Sound symbolism in language: Does nurunuru mean dry or slimy?

Guest post by Gwilym Lockwood, a PhD student in the Neurobiology of Language Department at the Max Planck Institute for Psycholinguistics.

Picture taken from Gomi 1989 ‘An illustrated dictionary of Japanese onomatopoeic expressions’

When you hear the word dog, you understand it because you have learned that meaningless individual sounds mean dog when arranged in a specific order into a word – it’s not like d means “fluffy”, o means “four legs”, and g means “enjoys rolling in smelly things”. The sound of the word dog is unrelated to the thing it means – it just so happens that the combination of d, o, and g in English means dog. The same concept in other languages is expressed with very different sounds: hond in Dutch, inu in Japanese, sobaka in Russian. The idea that the individual sounds which make up words are unrelated to the meaning of the word that those sounds express is called arbitrariness in linguistics.

Arbitrariness seems to make sense: if the sounds of language did have specific meanings, if the sounds of words were related to the things they mean, then surely all languages would sound quite similar. Given that they rather obviously do not (for example, Hawaiian has only eight consonants, while Georgian has 28, and !Xóõ, spoken in Botswana, has at least 58), it is safe to say that there is little or no relation between sound and meaning.

However, researchers have recently started to investigate this assumption. Several languages around the world use sound symbolic words called ideophones, which are used to talk about sensory imagery. Interestingly, these words seem to be directly related to their meaning (i.e. the sounds of the words are symbolic of their meaning), and even more interestingly, there seems to be something universal about these words – several experiments have shown that people who don’t speak these languages can still understand (or accurately guess) the meanings of ideophones.

We can try that out now. See if you can guess the meanings of these Japanese ideophones (answers below):

  1. nurunuru – dry or slimy?
  2. pikapika – bright or dark?
  3. wakuwaku – excited or bored?
  4. iraira – happy or angry?
  5. guzuguzu – moving quickly or moving slowly?
  6. kurukuru – spinning around or moving up and down?
  7. kosokoso – walking quietly or walking loudly?
  8. gochagocha – tidy or messy?
  9. garagara – crowded or empty?
  10. tsurutsuru – smooth or rough?
The answers to the quiz:
  1. nurunuru – slimy
  2. pikapika – bright
  3. wakuwaku – excited
  4. iraira – angry
  5. guzuguzu – moving slowly
  6. kurukuru – spinning around
  7. kosokoso – walking quietly
  8. gochagocha – messy
  9. garagara – empty
  10. tsurutsuru – smooth

Did you guess the meanings of these words better than you would expect? Unlike the word dog, it seems that the individual sounds in these words actually do contribute to the meaning of the words, and this is called sound symbolism. Sound symbolism is the opposite of arbitrariness, but the two can coexist perfectly happily within language.
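
To get a sense of how unlikely a good score is under pure guessing, one can model each quiz item as an independent 50/50 choice. Here is a minimal sketch in Python, assuming that simple binomial model:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of getting k or more
    of n two-choice items right by guessing at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 9 correct out of the 10 ideophones above
print(f"p = {p_at_least(9, 10):.4f}")  # ~0.011: unlikely under pure guessing
```

Real experiments of this kind use many more items and many participants, but the underlying logic of comparing guesses against chance is the same.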

Speakers of languages with sound symbolic ideophones, such as Japanese, often talk about how the ideophones create a very vivid image or feeling in their minds, whereas normal words don’t. When a Japanese person hears the word kirakira, meaning sparkly, it is as if they can actually see the thing that is sparkly. How sound symbolism works, however, is not quite clear, and there have not yet been many neuroscience studies on it. The research so far suggests that hearing sound symbolic words might involve other forms of sensory perception, in a similar way to how people with synaesthesia associate colours with letters. My research at the MPI investigates why certain sounds appear to be related to certain meanings across languages and how the brain processes these sounds.

Read more

How to paint with language

Words evolve not as blobs of ink on paper but in face to face interaction. The nature of language as fundamentally interactive and multimodal is shown by the study of ideophones, vivid sensory words that thrive in conversations around the world. The ways in which these Lautbilder enable precise communication about sensory knowledge have now for the first time been studied in detail. It turns out that we can paint with language, and that the onomatopoeia we sometimes classify as childish might be a subset of a much richer toolkit for depiction in speech, available to us all and in common use around the globe.

Words

Few professions should be more familiar with the nature of words than academia. Words are the currency of our trade. They record our cumulative progress, and they measure our productivity as we disperse our ideas through articles and books. How easy it is to fall in love with the printed word: black symbols on a white page, tidy spaces separating units of thought like stars dotting the skies of conceptual clarity!

How different are our everyday ways with words. We roll them on our tongues as we speak, whisper, or exclaim. They become pliable as we perform, prolong, and repeat them. We colour them with fine shades of meaning as we exert control over pitch, intensity, and duration. We masterfully integrate them with gestures and facial expressions into what linguists call composite utterances (Enfield 2009). For a long time all of these things were cordoned off as paralanguage: not the real thing, but a side show distracting us from the incisiveness of an idealised formal language. In the philosopher Frege’s musings on language, aesthetic delight and the striving for truth are in direct opposition.

However, this view is fast becoming outdated as linguists increasingly realise that the written word is a poor model for our true communicative competence. Language evolved in a much richer environment, and it has always done more for us than just deliver disembodied information. We use language to build social relations, to relate our experiences and to express our stances. We do not just inform, we also perform. This calls for a renewed study of how words work.

Ideophones

Often the best way to develop a fresh view of one’s field of study is to radically change one’s starting point. Linguists at the Max Planck Institute for Psycholinguistics do this by collecting new data on languages far afield. In the Language & Cognition Department, researchers carry out sustained fieldwork in over twenty locations across the globe. Over the last few years, there have been several in-depth studies of a type of words known as ideophones. Characterised as “vocal images”, these words are probably the best approximations of painting in speech. They were long seen as exotic, out-of-the-way words, but new research shows that they are ubiquitous in conversations across the globe, and used in unforeseen ways.

Ideophones are words whose form is suggestive of their meaning. Familiar examples include English kerplop and boom or German holterdipolter and tick-tack. But whereas in European languages, these words tend to be small in number and limited mainly to imitating sound, many of the world’s languages have hundreds or even thousands of ideophones, covering a much broader range of sensory meanings. Take tuŋjil-tuŋjil ‘bobbing, floating’, ulakpulak ‘unbalanced scary appearance’ and c’onc’on ‘woven tightly’ from Korean; or dhdŋɔh ‘appearance of nodding constantly’, praduk pradɛk ‘noises of scattered small drops of rain’ and grɛ:p ‘crispy sound’ from Semai, a language of peninsular Malaysia; or mukumuku ‘mumbling mouth movements’, fũɛ̃fũɛ̃ ‘elastic, flexible’ and kpɔtɔrɔ-kpɔtɔrɔ ‘walking like a tortoise’ from Siwu, a language of eastern Ghana (Dingemanse 2012).

Linguists investigate the communicative uses of ideophones and gestures by making video-recordings of everyday social interaction — here, conversations during the making of palm-oil in Akpafu, Ghana. (Photo: Mark Dingemanse)

When studying video recordings of conversations in such languages, one sees that these words, with their peculiar forms and colourful meanings, are not spoken like ordinary words. They are delivered as performances. They bring to life the events in ways that ordinary words never do. Observing them in use, we get a sense of what the linguist and psychologist Karl Bühler meant when he wrote, “If there were to be a vote on who is more richly equipped with resources, the painter with colours or the painter with the voice, I would not hesitate to give the second my vote” (Bühler 1934).

Depiction

How can we paint with speech? In the ideophone systems of the world’s languages, there are three basic ways in which speech is used to depict sensory imagery — three types of iconicity (Dingemanse 2012). The first is to imitate sound with sound, as in English boom ‘sound of explosion’. This is called direct iconicity. It is the most simple way, but also the most limited. After all, not all events involve sound. What all events do have is internal temporal structure. This is where the second method comes in: the structure of words may resemble the structure of events. Bühler recognised this when he noted that words may be “Gestalt faithful” to the events they represent. Therefore we call this type Gestalt iconicity. For instance, words can be prolonged to evoke duration, closed syllables can evoke end points, and repeated syllables can evoke repetition — as in several of the examples given above. Third, and finally, sometimes similar words are used for similar events. Take the three Semai words grɛ:p ‘of chewing fruit’, gra:p ‘of chewing crisps’, grɨ:p ‘of chewing cassava’. They share a common template gr_p which we can characterise as ‘crispy sound’ (Tufvesson 2011). Because related words map onto related meanings, we call this type relative iconicity. Together, these three ways of suggesting meaning are part of the word painter’s toolkit. They allow depictive words like ideophones to form perceptual analogies of events.

But how does one know that some stretch of speech is intended as an ideophone —a vocal image— rather than an ordinary word? Comparative research shows that languages converge on remarkably similar ways to do this. For instance, across languages, ideophones sound special because they exhibit certain liberties relative to other words. They have a wider range of possible syllable structures and word forms, and they are remarkably susceptible to playful word formation processes like reduplication and lengthening. In spoken utterances, they stand out because they show a great measure of syntactic independence and they tend to be delivered as an intonation unit of their own. Last but not least, in many languages, ideophones are introduced by quotative markers or “say” or “do” verbs, emphasizing their performative nature. All of these features work together to mark ideophones as depictions, much as the frame around a painting tells us to interpret it as a painting and not as the wallpaper.

Perception

Researchers in the Language & Cognition Department are studying ideophones at a number of field sites. They use specially designed stimulus materials to study how these words encode sensory perceptions. They make high quality video and audio recordings of everyday conversations to understand how people use these words in face to face interaction — the kind of setting in which language evolved and keeps evolving. In the course of this research it has become clear that ideophones are far from the stylistic flourishes that people once took them to be. They are dedicated sensory words, on a par with industry-designed sensory vocabularies in terms of their structure and coverage of sensory spaces (Dingemanse and Majid 2012). They are used to communicate expert knowledge during joint work and to share and interpret experiences in storytelling. In languages which have thousands of ideophones, they are seen as the ultimate sign of eloquence.

Every challenge represents an opportunity. Ideophones challenge us to innovate theory and methods, encouraging us to move away from a view of language limited by ideologies or traditions to a perspective that is informed by as wide a range of data as possible. In the end, it should come as no great surprise that depiction may be just as important as description in linguistic life. Even our own findings are not communicated just in abstract words. We illustrate them with gestures and visualise them in figures and diagrams. There is truth, then, to the saying that an image says more than a thousand words. But what we may have overlooked was that our words have been peppered with images all along.

Speech tends to come together with gesture. Here, a speaker clearly makes a point. (Photo: Mark Dingemanse)

References

  • Bühler, Karl. 1934. Sprachtheorie: Die Darstellungsfunktion der Sprache. Jena: G. Fischer.
  • Dingemanse, Mark. 2012. “Advances in the cross-linguistic study of ideophones.” Language and Linguistics Compass 6 (10): 654–672. doi:10.1002/lnc3.361.
  • Dingemanse, Mark, and Asifa Majid. 2012. “The semantic structure of sensory vocabulary in an African language.” In Proceedings of the 34th Annual Conference of the Cognitive Science Society, ed. N. Miyake, D. Peebles, and R. P. Cooper, 300–305. Austin, TX: Cognitive Science Society.
  • Enfield, N. J. 2009. The Anatomy of Meaning: Speech, Gesture, and Composite Utterances. Cambridge: Cambridge University Press.
  • Tufvesson, Sylvia. 2011. “Analogy-making in the Semai Sensory World.” The Senses and Society 6 (1): 86–95. doi:10.2752/174589311X12893982233876.

Note

This piece was written for the Max Planck Jahrbuch 2013. A German version (translated by Gunter Senft) appeared here as well as on scinexx.de.

An ode to Narita Airport Resthouse

I just got back from Japan. Because of an early flight out, I booked an overnight stay at Narita Airport Resthouse, a hotel located —as the name suggests— right at the airport. My booking website asked me to review the hotel; here’s what I wrote.

Narita Airport Resthouse review

Great if you like dilapidated buildings, rooms with actual keys, antique light switches, and printed signs featuring Screen Beans characters, which reveal that the last update was over a decade ago.

The place is perfect for an overnight stay prior to departure from Japan. The room has plenty of power outlets to charge your gadgets for a long haul flight, and when you wake up, the breakfast is reasonable and the coffee is dark and fresh.

But the true charm of this place lies in its ramshackle state. You go up in an ageing, heaving elevator and pass through long hallways with water-stained khaki carpeting. The metal door of your room closes behind you with a satisfying clang.

On the bathroom door, a Screen Beans character warns you that “the steam from the shower can operate the fire alarm”. That steam seems quite versatile, as it has also executed a mould mural on the bathroom wall that puts Jackson Pollock to shame.

None of this is to detract from the virtues of this hotel. It is a more than welcome change after the sterile high tech modernity of Tokyo and Osaka. Narita Airport Resthouse is the liminal space that helps you get back from the future to the present — from Japan to your own shabby country of origin.

Scientist + Weblog

Yesterday I was at the first professional conference on science communication, held at the Van Nelle Ontwerpfabriek in Rotterdam. Together with a number of colleagues I spoke in a session on ‘what motivates scientists?’. My contribution was about Scientist + Weblog. Here is my message, originally in 79 words:

Blogging is great, cries the technofetishist. A waste of time, grumbles the technopessimist. As a blogging scientist I steer a course between these two extremes. I give an honest overview of the costs and benefits of six years of irregular blogging. How do you write about research, who do you want to reach, and what can you learn from your readers? In the end it comes down to mindful use of technology: those who understand how blogging works can turn it to their advantage — as a scientist and as a communicator.

Taal in de reageerbuis

Fond of crossovers between art and science, music and experiment? So am I. That is why, together with my colleagues Tessa Verhoef and Seán Roberts, I am organising an experiment at the Discovery Festival in Amsterdam — the festival for interesting cross-pollinations, strange music, and new experiments. Our experiment is disguised as a game and, judging by the pilots we have already run, rather addictive.

We call it “Taal in de reageerbuis” (language in the test tube), and we take that quite literally. We put a tablet in participants’ hands, give them headphones for the sound, and let them communicate with each other through an app — a virtual test tube. The catch: they cannot use any language they already know. So they have to build a new language from scratch — a miniature language that evolves over the course of the experiment. We study that process to learn more about how language evolution works in real languages.

The première is this coming Friday at the Discovery Festival in Amsterdam. A week later you can find us at the Weekend van de Wetenschap in the Universiteitsmuseum Utrecht. See taalindereageerbuis.nl for more information.

Expressiveness and system integration

Just a heads-up to let interested readers know of a newish article on the morphosyntactic typology of ideophones by yours truly: Expressiveness and system integration. On the typology of ideophones, with special reference to Siwu (PDF). Completed in May 2012, it has been peer reviewed and accepted, and is due to appear in a special issue of Language Typology and Universals, though the special issue editor tells me that it may, regrettably, take a while longer before it actually comes out.

Deideophonisation and ideophonisation on the expressiveness and system integration continuum

Anyway, because this is now being referred to in numerous places, I have decided to make the pre-preprint available here. The basic approach is to exploit corpus data on the morphosyntactic variation of ideophones within a language to shed light on some larger questions in the morphosyntactic typology of ideophones. Some of the proposals of possible interest to typologists include the following (a toy illustration follows the list):

  • an inverse correlation between morphosyntactic integration and various expressive features (the more syntactically independent an ideophone is, the more susceptible it is to the typical processes of expressive morphology and prosodic foregrounding)
  • a functional explanation of the inverse correlation (ideophones are prototypically syntactically independent to help signal their status as depictions of sensory imagery; this also explains their common occurrence at utterance edge)
  • a generalisation about ideophone morphosyntax in relation to frequency (higher frequency ideophones tend to be easier to integrate into morphosyntax, a Zipfian effect that may have to do with the erosive role of frequency)
  • a prediction with regard to the areal diffusion of ideophones (to the extent that ideophones are typically characterised by a low degree of morphosyntactic integration, this should increase their borrowability)
  • a scalar conception of the differences between ideophone systems across languages (looking at morphosyntax and expressive morphology allows us to state more explicitly what makes the ideophone system of Somali different from that of Siwu and these two different again from Semai)
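
To give a feel for the kind of corpus tabulation behind these proposals, here is a hypothetical sketch with invented counts (not data from the paper or from any particular language), relating an ideophone’s token frequency to the proportion of its tokens that are morphosyntactically integrated:

```python
# Toy illustration with invented counts; real data would come from an annotated
# corpus in which each ideophone token is coded as morphosyntactically
# integrated (e.g. used as a predicate) or held apart (e.g. as a free adjunct).
from scipy.stats import spearmanr

# placeholder ideophone label: (token frequency, number of integrated tokens)
counts = {
    "idph_a": (120, 46),
    "idph_b": (64, 15),
    "idph_c": (31, 5),
    "idph_d": (12, 1),
    "idph_e": (5, 0),
}

freqs = [f for f, _ in counts.values()]
integrated = [i / f for f, i in counts.values()]  # proportion of integrated uses

rho, p = spearmanr(freqs, integrated)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# On real corpus data, a positive rho would be consistent with the Zipfian
# point above: more frequent ideophones integrate more readily.
```

This is only a sketch of the tabulation logic; the actual coding decisions (what counts as integrated use) are where the linguistic work lies.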

Enjoy. Here’s hoping the article won’t take too long to appear in print. I’ve already been working with Kimi Akita on an exciting follow-up project testing some of these proposals quantitatively on Japanese corpus data.

  1. Dingemanse, Mark. accepted. “Expressiveness and system integration. On the typology of ideophones, with special reference to Siwu.” STUF – Language Typology and Universals (special issue).

Ideophones in Bakairi, Brazil, 1894

Last year Sabine Reiter defended an interesting PhD thesis on ideophones in Awetí, a Tupian language spoken in the Upper Xingu area of central Brazil. In the introduction, she mentions an early source on ideophones in this area. It’s a vivid description of a native of Xingu felling a tree, and it’s full of ideophones and gestures:

Wie quält sich der Bakaïrí, um einen Baum zu fällen: frühmorgens, wenn die Sonne tschischi aufgeht, – dort im Osten steigt sie – beginnt er die Steinaxt zu schwingen. Und tschischi wandert aufwärts und der Bakaïrí schlägt wacker immerzu, tsök tsök tsök. Immer mehr ermüden die Arme, sie werden gerieben und sinken schlaff nieder, es wird ein kleiner matter Luftstoss aus dem Mund geblasen und über das erschöpfte Gesicht gestrichen; weiter schlägt er, aber nicht mehr mit tsök tsök, sondern einem aus dem Grunde der Brust geholten Aechzen. Die Sonne steht oben im Zenith; der Leib – die flache Hand reibt darüber und legt sich tief in eine Falte hinein – ist leer; wie hungrig ist der Bakaïrí – das Gesicht wird zu kläglichstem Ausdruck verzogen: endlich, wenn tschischi schon tief unten steht, fällt ein Baum: tokále = 1 zeigt der Kleinfinger. Aber Du, der Karaibe, – plötzlich ist Alles an dem Mimiker Leben und Kraft – der Karaibe nimmt seine Eisenaxt, reisst sie hoch empor, schlägt sie wuchtig nieder, tsök, tsök, pum – ah …, da liegt der Baum, ein fester Fusstritt, schon auf dem Boden. Und da und dort und wieder hier, überall sieht man sie fallen. Schlussfolgerung für den Karaiben: gieb uns Deine Eisenäxte. (Steinen 1894)

Sabine Reiter translates this passage as follows: “How the Bakaïrí struggles with felling a tree: early in the morning, when the sun tschischi rises, – there in the east it rises – he begins to swing his stone axe. And tschischi rises further, and the Bakaïrí – bravely – keeps beating tsök tsök tsök. His arms are getting tired, he rubs them; they drop down. A small and feeble puff of air escapes his mouth, he runs his hand over his exhausted face; he keeps beating, no longer with tsök tsök, but with a groan from deep within his chest. The sun has reached its zenith; the belly – the hand rubs over it and falls into a deep hollow – is empty; how hungry is the Bakaïrí – he shows the most miserable face: finally, when tschischi is already low, falls a tree: tokále = 1 shows the little finger. But you, the caraiba (nonindian), – suddenly everything on the mimic becomes lively and forceful – the caraiba takes his metal axe, swings it high up, strikes it down with force, tsök, tsök, pum – ah …, a last forceful kick, and there lies the tree on the ground. And there and yonder and here again, everywhere one sees them fall. Conclusion for the caraiba: give us your metal axes.”

For African languages, it looks like the earliest clear descriptions of ideophones go back to the 1850s (Dingemanse 2011:Ch. 3). This particular instance from 1894 is one of the earliest sources I’ve seen yet for the Americas, but it would not surprise me at all to find much earlier descriptions (e.g. of ideophones in Quechua varieties?) given the linguistic interests of early colonisers (e.g. Jesuits) in the New World.

References

  • Dingemanse, Mark. 2011. “The Meaning and Use of Ideophones in Siwu”. PhD dissertation, Nijmegen: Radboud University. http://thesis.ideophone.org/.
  • Reiter, Sabine. 2012. “Ideophones in Awetí”. PhD thesis, Köln: Universität zu Köln.
  • Steinen, Karl von den. 1894. Unter den Naturvölkern Zentral-Brasiliens. Reiseschilderung und Ergebnisse der Zweiten Schingú-Expedition 1887-1888. Berlin: Dietrich Reimer.

Why do you cry ‘au’ when you feel sudden pain?

Is it really “au” and not something else? (illustration: Frank Landsbergen)

For the Kennislink Vragenboek I answered the question: “Why do you cry ‘au!’ when you feel sudden pain?”. That is apparently a question on many people’s minds, because last year Labyrint radio asked me the same thing, and this spring it came up again on Hoe?Zo! radio. So here, as a service to searchers, tweeters and other au-aficionados, is my answer.

Hidden in this question are two questions. For a clear answer, it is best to take them apart:

(1) Why do we cry out when we are in pain?

(2) Why do we cry ‘au!’ and not something else?

On the first question we are in the company of plenty of other animals. Cries of pain occur throughout the animal kingdom. Why? Darwin, who in 1872 wrote a book about emotions in humans and animals, thought it was connected to the strong muscle contractions that almost every animal shows at a stab of pain — a ritualised version of the lightning-fast withdrawal from a painful stimulus. But that does not get us much further: why would the mouth have to open in the process? Research since then has shown that cries in the animal kingdom also have communicative functions: to alert conspecifics to danger, for instance, to call for help, or to elicit caring behaviour. That last function starts in the very first seconds of our lives, when we burst out crying and our mother takes us caringly into her arms. Babies, and for that matter the young of many animals, have whole repertoires of different cries. In those repertoires the pain cry — the exclamation at an acute experience of pain — is always clearly recognisable: a sudden onset, a high intensity, and a relatively short duration. Here we already see the contours of our “au!”. And that brings us to the second part of the question.

Why au and not something else? First we should look at the question critically. Is it really never anything else? Do you say au when you hit your thumb, or is it “aaaah!”? In real life there is quite a bit of variation. Yet the variation is not endless. Nobody cries bibibibibi or vuuuuu in sudden pain. Pain cries are variations on a theme. That theme starts with an “aa” because of the shape of our vocal tract when the mouth is wide open, and it sounds like “aau” when the mouth then quickly moves back towards a closed position. The little word “au” sums up that theme nicely. And with that we have hit upon an important function of language. Language helps us to treat experiences that are never completely identical as nonetheless similar. That is handy, because if we want to talk about “someone crying au” we do not have to imitate the cry precisely. In that sense au is a linguistic word and no longer a cry. Is au then the same in all languages? Almost, but not quite, because every language uses its own inventory of sounds to describe the pain cry. In German it is “au!”, an Englishman says “ouch!”, and for someone from Israel it is “oi!” — at least, that is what Byington wrote in 1942 in one of the first comparative studies of exclamations of pain.

Each of us comes into the world with a repertoire of cries, and learns a language on top of that. That language means we can do more than just cry out — we can also talk about it. A good thing too, because otherwise nothing would have come of this answer.

I wrote this piece as a contribution to the Kennislink Vragenboek, edited by Sanne Deurloo and Anne van Kessel. You can read the published version of the piece here (PDF).

On “unwritten” and “oral” languages

The world’s many endangered languages are often characterized as “unwritten” and “oral” languages. Both of these terms reveal the language ideologies still implicit in many academic approaches to language: “unwritten” defines by negation, revealing a bias towards stable, standardized abstractions of communicative behaviour (away from a dynamic conception of situated talk-in-interaction); and “oral” defines by exclusion, revealing a bias towards the vocal-auditory channel (away from the multi-modal, fully embodied nature of face to face interaction). How much of our research today is unwittingly shaped by these implicit biases?

Better science through listening to lay people

Slides for a presentation given at the ECSITE 2013 Annual Conference on science communication. I spoke in a session convened by Alex Verkade (De Praktijk) and Jen Wong (Guerilla Science). The other speakers in the session were Bas Haring on ‘Ignorance is a virtue’, and Jen Wong on ‘Mixing science with art, music and play’.

We all have them: intellectual blind spots. For scientists, one way to become aware of them is to listen to people outside the academic bubble. I discuss examples from social media and serendipitous fieldwork. Social media helps academics to connect to diverse audiences. On my research blog ideophone.org, I have used the interaction with readers to refine research questions, tighten definitions, and explore new directions, but also to connect science and art. In linguistic and ethnographic fieldwork in Ghana, I have let serendipity shape my research. Unexpected questions and bold initiatives from locals led me in directions I would never have anticipated on the basis of expert knowledge. Ultimately the involvement of lay people led to methodological innovations, changes of perspective, and most importantly, a host of new questions.

Feedback

Thanks for the wonderful tweets — and feel free to get in touch!