The Lazy God

Are generative AI’s large language models more like weather forecasts or more like economic models?

So-called generative artificial intelligence has blown our minds with its apparent marvels: essays, poems, artwork all created “by machine.” Under the hood, it is built on complex probability models applied to the universes of data we’ve made freely available – including the content of this blog. What word is likely to follow another word? What sentence will follow another sentence? What emotion will be triggered by one statement, and what new emotion will that first emotion trigger?
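To make the first of those questions concrete, here is a toy bigram counter in Python. It is nothing like the transformer networks that actually power these systems, and the tiny corpus is invented for illustration, but it shows in miniature what a probability model over text means.

```python
# A toy caricature of "what word is likely to follow another word":
# count adjacent word pairs in a corpus and turn the counts into probabilities.
# Real large language models use neural networks over tokens, not raw counts.
from collections import Counter, defaultdict

corpus = "the quick brown fox jumped over the lazy dog . the dog slept .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probabilities(word: str) -> dict[str, float]:
    """Estimate P(next word | word) from the pair counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# -> roughly {'quick': 0.33, 'lazy': 0.33, 'dog': 0.33}
```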

We humans make predictions about the world using probabilities. Then we choose a course of action that best accomplishes our goals, taking into account our risk appetite. Some of the time, we make decisions like packing an umbrella given the weather forecast (“There’s a 30% chance of rain today.”). In those cases, our decision does not affect the probabilities. Only the superstitious believe that leaving the umbrella at home tempts the rain gods.

But in other situations, the decisions we make do affect the probabilities involved. If, historically, 98% of homebuyers repay their mortgages, a lender can choose an interest rate that more than compensates for the 2% default rate, and then try to sell as many mortgages as possible. But extending the pool of borrowers changes the default rate itself: the marginal borrowers drawn in by ever-easier credit are riskier than the ones who produced the historical 98%.

That is the pocket-sized explanation of the Great Financial Crisis. The financial industry built complex models of mortgage default behavior based on historical data, made lending decisions based on those models, and then discovered that the lending decisions wrecked the models’ foundations. In a real sense, the existence of the model created the conditions under which the model failed.
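A toy simulation makes the feedback loop explicit. Every number below is invented for illustration; only the shape of the failure matters: the lender prices risk off the historical default rate, the profitable-looking price encourages more lending, and the expanded, riskier pool raises the very rate the price was based on.

```python
# Invented numbers throughout; a sketch of a model that undermines itself.

def observed_default_rate(volume: float) -> float:
    """Hypothetical 'true' default rate: the more loans written,
    the riskier the marginal borrower (2% base, rising with volume)."""
    return min(0.02 + 0.03 * (volume - 1.0), 0.25)

believed_rate = 0.02   # what the historical data shows
volume = 1.0           # lending volume, normalized to the historical baseline

for year in range(1, 6):
    interest_rate = believed_rate * 1.5 + 0.03   # price to more than cover believed losses
    volume *= 1.3                                # the attractive price invites expansion
    actual_rate = observed_default_rate(volume)  # the expanded pool changes what the model measured
    print(f"year {year}: rate {interest_rate:.1%}, priced for {believed_rate:.1%} defaults, saw {actual_rate:.1%}")
    believed_rate = actual_rate                  # next year's 'history' now reflects the damage
```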

A similar dynamic occurred during Covid. When people feared contagion, they behaved more carefully, with or without required restrictions, and cases dropped. When fears ebbed, they behaved more freely, and cases rose again. I personally witnessed extremes from scrubbing every item brought home from the grocery store to finger-food buffets at a backyard party, all within the first six months of the pandemic. People’s beliefs about the risk fundamentally determined the level of risk.

The distinction between these two cases – where the world does and does not respond to our beliefs about probabilities – lies at the heart of the questions I’ve been trying to answer in the Ruminathans.

In Jorge Luis Borges’s story “The Library of Babel,” the world is an infinite library containing books of 410 pages consisting of every possible string of 22 letters, the space, the comma, and the period. Most books are gibberish, but some contain coherent passages, and some are perfectly coherent from beginning to end. In fact, the library contains every coherent human thought, past and future. The problem is finding the coherent books.
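The scale of that problem can be pinned down. In the story, each book has 410 pages of 40 lines, each line of roughly 80 characters, drawn from the 25 symbols above (the page and line counts come from Borges’s text, not from the summary here), so the number of distinct books is

\[
25^{\,410 \times 40 \times 80} \;=\; 25^{1{,}312{,}000} \;\approx\; 10^{1{,}834{,}097},
\]

a number with roughly 1.8 million digits.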

If large language models are more like weather forecast models, then they essentially generate a guide through Borges’s Library, taking you to all the coherent books and only to the coherent books. The world of human thought and action would be a system that can be modeled in terms of probabilities, but that is not shaped by our beliefs about the probabilities or by awareness of the model. Any thought could be predicted by prompting the model correctly, including any thought to resist the model’s predictions. Little effort would give us god-like knowledge.

On the other hand, if large language models are impacted by what we believe about them, then they are vulnerable to irony: believing the model’s predictions changes your behavior, and your changed behavior undermines the predictions, so that you bring about the opposite of what you intended. In that case, resistance is not futile. Resistance is simply what will happen as actual people try to game the model for their own ends, which will ultimately make the model useless.

What would resistance look like? Some resistance will be political, such as the intellectual property challenges currently before US courts. But some resistance will be sabotage, performed by human actors with a range of motivations. The large language models on which generative artificial intelligence is based are fed with enormous amounts of human labor, labor which can (and already does) intentionally distort the inputs. The quick brown flux jumped over the lazy god.

Blog writers might insert non-sequiturs, alterations of standard phrases, and purposeful misspellings that preserve the meaning of a text or add a layer of commentary to it intelligible only to the “naturally” intelligent.

Human labor is currently used to train and calibrate large language models. AI-optimists hope to replace that labor with, you guessed it, generative AI. In other words, generative AI will calibrate generative AI to behave more like humans. But that opens untold software-based possibilities for sabotage, sabotage performed by anyone interested in disrupting a society that has outsourced many of its cognitive tasks to the black box, or by anyone simply interested in protecting their intellectual property.

What will be the result of all the attempts to exploit the models or sabotage them? Probably something like Borges’s Library of Babel, an endless maze of gibberish, with no trustworthy guide to the scattered sprinkles of insight. And we’ll have to gather those insights the hard way again.
