Over two years ago I wrote about the unstoppable tide of uninformation that follows the rise of large language models. With ChatGPT and other models bringing large-scale text generation to the masses, I want to register a dystopian prediction.
Of course OpenAI and other purveyors of stochastic parrots are keeping the receipts of what they generate (perhaps full copies of generated output, perhaps clever forms of watermarking or hashing). They are doing so for two reasons. First, to mitigate the (partly inevitable) problem of information pollution. With the web forming a large part of the training data for large language models, you don't want these things to feed on their own uninformation. Or at least I hope they're sensible enough to want to avoid that.
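To make the receipt-keeping idea concrete, here is a minimal sketch in Python of what hashing generated output might look like. All names here are hypothetical and purely illustrative; nothing is known about any provider's actual implementation, and a real system would need something far more robust than exact hashes.

```python
import hashlib

# Hypothetical registry of fingerprints of generated text.
# A real provider would need shingled or locality-sensitive
# hashes (or statistical watermarks) that survive light edits.
generated_fingerprints: set[str] = set()

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash the passage."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def register_output(text: str) -> None:
    """Record a fingerprint whenever the model emits text."""
    generated_fingerprints.add(fingerprint(text))

def was_generated(text: str) -> bool:
    """Check whether a passage exactly matches logged output."""
    return fingerprint(text) in generated_fingerprints

register_output("An example passage produced by the model.")
print(was_generated("an example passage  produced by the model."))  # True
```

Note how brittle the exact-match version is: a single changed word defeats it, which is presumably why the speculation above hedges between full copies, watermarking, and cleverer forms of hashing.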
But the second reason is to enable a new form of monetization. Flood the zone with bullshit (or facilitate others doing so), then offer paid services to detect said bullshit. (I use bullshit as a technical term for text produced without commitment to truth values; see Frankfurt 2009.) It's guaranteed to work because, as I wrote, the market forces are in place and they will be relentless.
Universities will pay for it to check student essays, as certification is more important than education. Large publishers will likely want it as part of their plagiarism checks. Communication agencies will want to claim they offer certified original human-curated content (while making extra money with a cheaper tier of LLM-supported services, undercutting their own business). Google and other behemoths with an interest in high quality information will have to pay to keep their search indexes relatively LLM-free and fight the inevitable rise of search engine optimized uninformation.
I believe we should resist this and hold the makers of LLMs directly accountable for the problems they create, instead of letting them hold us to ransom and extract money from us for 'solutions'. Creating a problem and then selling the solution is one of the oldest rackets in corporate profiteering, and we should not fall for it. Not to mention the fact that generic AI detection tools (an emerging market) will inevitably produce false positives and generate additional layers of confusion and bureaucratic tangles for us all.
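To see why false positives are a structural problem rather than a tuning issue, a back-of-the-envelope calculation helps. The numbers below are illustrative assumptions of mine, not figures from any actual detector: even a seemingly low false-positive rate wrongly flags many honest writers when most submitted text is human-written.

```python
# Illustrative assumptions, not measurements: suppose 10% of
# submitted essays are LLM-generated, the detector catches 90%
# of those, and it falsely flags 5% of human-written essays.
total = 10_000
llm_share = 0.10
true_positive_rate = 0.90
false_positive_rate = 0.05

llm_essays = total * llm_share          # 1,000
human_essays = total - llm_essays       # 9,000

flagged_llm = llm_essays * true_positive_rate        # 900
flagged_human = human_essays * false_positive_rate   # 450

precision = flagged_llm / (flagged_llm + flagged_human)
print(f"Wrongly flagged students: {flagged_human:.0f}")
print(f"Share of flags that are correct: {precision:.0%}")
# Under these assumptions, roughly 1 in 3 flags hits an innocent writer.
```

The base rate does the damage: because human-written essays vastly outnumber generated ones, even a small false-positive rate yields hundreds of wrongly accused students, and every one of them becomes a bureaucratic case to adjudicate.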
Meanwhile, academics will become antiquarian dealers in that scarce good, human-curated information, slowly and painstakingly produced. My hope is that they will devote at least some of their time to what Ivan Illich called counterfoil research:
> Present research is overwhelmingly concentrated in two directions: research and development for breakthroughs to the better production of better wares and general systems analysis concerned with protecting [hu]man[ity] for further consumption. Future research ought to lead in the opposite direction; let us call it counterfoil research. Counterfoil research also has two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool-systems that optimize the balance of life, thereby maximizing liberty for all.
>
> Illich, Tools for Conviviality, p. 92
- Frankfurt, H. G. (2009). On Bullshit. Princeton University Press. doi: 10.1515/9781400826537
- Illich, I. (1973). Tools for Conviviality. London: Calder and Boyars.
4 responses to “Monetizing uninformation: a prediction”
I had similar thoughts upon seeing the quite impressive performance of ChatGPT. If I can add to your dystopian view: not only will this occur on the web, but it won't be long before it also extends to physical books. Just imagine the following prompt: “write me a pop science book on X, 40,000 words, detailed… ” And there you have it, a new bestseller rearing its head.
[…] (and by implication, about ourselves and ethics), for monetary gain and hegemonic power (e.g. Dingemanse, 2020; McQuillan, 2022), I believe it is our academic responsibility to […]
[…] This kind of tool might be a future revenue stream for OpenAI and their ilk. […]
[…] The sheer breadth of topics covered evokes Bruno Latour’s sprawling Actor Network Theory framework and also shares a lot of insights with current approaches to collective intelligence (e.g., all intelligence is collective intelligence). What’s also impressive about this thesis is the way it combines a hard-hitting critique of the current state of academic workflows with concrete tools and constructive proposals to make things better. This makes it a truly excellent example of what Ivan Illich calls counterfoil research. […]