Monetizing uninformation: a prediction

Over two years ago I wrote about the unstoppable tide of uninformation that follows the rise of large language models. With ChatGPT and other models bringing large-scale text generation to the masses, I want to register a dystopian prediction.

Of course OpenAI and other purveyors of stochastic parrots are keeping the receipts of what they generate (perhaps full copies of generated output, perhaps clever forms of watermarking or hashing). They are doing so for two reasons. First, to mitigate the (partly inevitable) problem of information pollution. With the web forming a large part of the training data for large language models you don’t want these things to feed on their own uninformation. Or at least I hope they’re sensible enough to want to avoid that.
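To make the "keeping the receipts" idea concrete, here is a minimal sketch of one way a provider could do it with hashing rather than full copies: store hashes of overlapping word shingles of everything generated, then later check how much of a candidate text matches the ledger. Everything here (the `GenerationLedger` class, function names, shingle size) is a hypothetical illustration, not any provider's actual method or API.

```python
import hashlib

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't evade matching."""
    return " ".join(text.lower().split())

def chunk_hashes(text: str, n: int = 8) -> set[str]:
    """Hash overlapping n-word shingles of the normalized text."""
    words = normalize(text).split()
    return {
        hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }

class GenerationLedger:
    """Toy ledger of previously generated text, stored only as shingle hashes."""
    def __init__(self) -> None:
        self.seen: set[str] = set()

    def record(self, generated_text: str) -> None:
        """Called at generation time: remember the output's shingle hashes."""
        self.seen |= chunk_hashes(generated_text)

    def overlap(self, candidate: str) -> float:
        """Fraction of the candidate's shingles that match recorded output."""
        hashes = chunk_hashes(candidate)
        return len(hashes & self.seen) / len(hashes)

ledger = GenerationLedger()
ledger.record("the quick brown fox jumps over the lazy dog every single day")
print(ledger.overlap("the quick brown fox jumps over the lazy dog every single day"))  # 1.0
print(ledger.overlap("completely unrelated human prose with no overlap at all here"))  # 0.0
```

Note that this toy version is easily defeated by paraphrase, which is exactly why robust watermarking is an active research problem; the point is only that recording generated output cheaply, without storing the text itself, is entirely feasible.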

But the second reason is to enable a new form of monetization. Flood the zone with bullshit (or facilitate others doing so), then offer paid services to detect said bullshit. (I use bullshit as a technical term for text produced without commitment to truth values; see Frankfurt 2009.) It’s guaranteed to work because as I wrote, the market forces are in place and they will be relentless.

Universities will pay for it to check student essays, as certification is more important than education. Large publishers will likely want it as part of their plagiarism checks. Communication agencies will want to claim they offer certified original human-curated content (while making extra money with a cheaper tier of LLM-supported services, undercutting their own business). Google and other behemoths with an interest in high quality information will have to pay to keep their search indexes relatively LLM-free and fight the inevitable rise of search engine optimized uninformation.

Meanwhile, academics will be antiquarian dealers of that scarce good of human-curated information, slowly and painstakingly produced. My hope is that they will devote at least some of their time to what Ivan Illich called counterfoil research:

Present research is overwhelmingly concentrated in two directions: research and development for breakthroughs to the better production of better wares and general systems analysis concerned with protecting [hu]man[ity] for further consumption. Future research ought to lead in the opposite direction; let us call it counterfoil research. Counterfoil research also has two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool-systems that optimize the balance of life, thereby maximizing liberty for all.

Illich, Tools for Conviviality, p. 92

  • Frankfurt, H. G. (2009). On Bullshit. Princeton University Press. doi: 10.1515/9781400826537
  • Illich, I. (1973). Tools for conviviality. London: Calder and Boyars.

Sometimes precision gained is freedom lost

Part of the struggle of writing in a non-native language is that it can be hard to intuit the strength of one’s writing. Perhaps this is why it is especially gratifying when generous readers lift out precisely those lines that took hard work to streamline — belated thanks!

Interestingly, the German translation for Tech Review needed twice as many words for the same point: “Ein Mehr an Präzision bedeutet manchmal ein Weniger an Freiheit” (“more precision sometimes means less freedom”). I’m still wondering whether that makes it more precise or less.

  • Dingemanse, M. (2020, August). Why language remains the most flexible brain-to-brain interface. Aeon. doi: 10.5281/zenodo.4014750