Monetizing uninformation: a prediction

Over two years ago I wrote about the unstoppable tide of uninformation that follows the rise of large language models. With ChatGPT and other models bringing large-scale text generation to the masses, I want to register a dystopian prediction.

Of course OpenAI and other purveyors of stochastic parrots are keeping the receipts of what they generate (perhaps full copies of generated output, perhaps clever forms of watermarking or hashing). They are doing so for two reasons. First, to mitigate the (partly inevitable) problem of information pollution. With the web forming a large part of the training data for large language models, you don’t want these things to feed on their own uninformation. Or at least I hope they’re sensible enough to want to avoid that.
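To make the receipt-keeping idea concrete, here is a minimal sketch of the crudest possible scheme: hash overlapping word n-grams (“shingles”) of everything generated, then score how much of a candidate text is already on record. To be clear, the ReceiptStore class, the fingerprint function, and the 8-word shingle size are my own illustrative inventions, not anything a provider has disclosed.

```python
import hashlib
import re


def shingles(text: str, n: int = 8) -> set:
    """Normalize text and split it into overlapping word n-grams ("shingles")."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if len(words) < n:
        return {" ".join(words)} if words else set()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def fingerprint(text: str, n: int = 8) -> set:
    """Hash each shingle, so the provider keeps receipts rather than full copies."""
    return {hashlib.sha256(s.encode("utf-8")).hexdigest() for s in shingles(text, n)}


class ReceiptStore:
    """Toy registry of hashes of everything a model has ever generated."""

    def __init__(self):
        self._seen = set()

    def record(self, generated_text: str) -> None:
        """Called by the provider on each piece of generated output."""
        self._seen |= fingerprint(generated_text)

    def overlap(self, candidate_text: str) -> float:
        """The paid service: fraction of a candidate's shingles already on record."""
        fp = fingerprint(candidate_text)
        return len(fp & self._seen) / len(fp) if fp else 0.0


# Hypothetical usage: the provider records its output, then sells the check.
store = ReceiptStore()
store.record("Large language models produce fluent text without any commitment to truth values.")
essay = "My essay argues that large language models produce fluent text without any commitment to truth values."
print(round(store.overlap(essay), 2))  # 0.56: over half the essay is on record
```

Shingle hashing like this is of course defeated by light paraphrase, which is presumably why the serious watermarking proposals bias the model’s token sampling itself rather than hashing the surface text.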

But the second reason is to enable a new form of monetization. Flood the zone with bullshit (or facilitate others doing so), then offer paid services to detect said bullshit. (I use bullshit as a technical term for text produced without commitment to truth values; see Frankfurt 2009.) It’s guaranteed to work because, as I wrote then, the market forces are in place and they will be relentless.

Universities will pay for it to check student essays, since certification matters more to them than education. Large publishers will likely want it as part of their plagiarism checks. Communication agencies will want to claim they offer certified, original, human-curated content (while making extra money with a cheaper tier of LLM-supported services, undercutting their own business). Google and other behemoths with an interest in high-quality information will have to pay to keep their search indexes relatively LLM-free and to fight the inevitable rise of search-engine-optimized uninformation.

Meanwhile, academics will be antiquarian dealers in that scarce good, human-curated information, slowly and painstakingly produced. My hope is that they will devote at least some of their time to what Ivan Illich called counterfoil research:

Present research is overwhelmingly concentrated in two directions: research and development for breakthroughs to the better production of better wares and general systems analysis concerned with protecting [hu]man[ity] for further consumption. Future research ought to lead in the opposite direction; let us call it counterfoil research. Counterfoil research also has two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool-systems that optimize the balance of life, thereby maximizing liberty for all.

Illich, Tools for Conviviality, p. 92

  • Frankfurt, H. G. (2009). On Bullshit. Princeton, NJ: Princeton University Press. doi: 10.1515/9781400826537
  • Illich, I. (1973). Tools for Conviviality. London: Calder and Boyars.

3 thoughts on “Monetizing uninformation: a prediction”

  1. I had similar thoughts upon seeing the quite impressive performance of ChatGPT. If I can add to your dystopian view: not only will this occur on the web, but it won’t be long before it also translates to physical books. Just imagine the following prompt: “write me a pop science book on X, 40,000 words, detailed… ” And there you have it: a new bestseller rearing its head.

  2. Pingback: Stop feeding the hype and start resisting - My Blog

  3. Pingback: Disgusted, but not surprised (14.12.2022–15.1.2023) | Around the Web « ovl
