Making machines smarter or ourselves dumber? Why I’m betting on model collapse.

If the emergence of artificial intelligence is as profound a development as both its boosters and its doomsayers predict, then we all need to make some pretty important decisions: bets about where we invest our savings, our time, and the time of our children.

In decision theory, you choose a course of action from several options based on expected value: you look at representative scenarios, value each one in whatever terms you want (monetarily or in terms of your personal utility), and weight it by its probability of occurrence. Then you choose the option with the highest expected value.
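For concreteness, here is a minimal sketch of that calculation in Python. The scenario names, probabilities, and payoff numbers are made up purely for illustration; they aren’t estimates of anything.

```python
# Toy expected-value calculation with made-up numbers.
scenarios = {"ai_superhuman": 0.1, "we_get_dumber": 0.6, "ai_fizzles": 0.3}

# Payoff of each strategy under each scenario, in arbitrary utility units.
strategies = {
    "outsource_everything_to_ai": {"ai_superhuman": 1, "we_get_dumber": -2, "ai_fizzles": -3},
    "keep_my_own_skills":         {"ai_superhuman": 0, "we_get_dumber": -1, "ai_fizzles": 5},
}

def expected_value(payoffs, probabilities):
    """Weight each scenario's payoff by its probability and sum the results."""
    return sum(probabilities[s] * payoffs[s] for s in probabilities)

# Classical decision theory: pick the option with the highest expected value.
best = max(strategies, key=lambda name: expected_value(strategies[name], scenarios))
print(best, {name: expected_value(p, scenarios) for name, p in strategies.items()})
```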

I’m acutely aware of the limitations of this approach to making decisions. But let’s take it as a given for the moment: What are the representative scenarios for how the world evolves with AI? I’ve been thinking about three:

AI becomes smarter than us: We train models that outperform us humans at every conceivable task. The emperor really has clothes.

We become dumber than AI: We train ourselves to no longer appreciate the difference between “intelligence” and its parody. The emperor has no clothes, but nobody is capable of calling him out.

AI fizzles: Whatever mechanisms have made AI appear smarter from iteration to iteration cease to deliver even marginal improvements, or even lead to deteriorating performance (aka “model collapse”). The emperor has no clothes, and we’re willing and able to call him naked.

Our next step would normally be to come up with a probability for each of these scenarios and then devise some strategies, each with its respective payoff in the three scenarios.

But this approach only makes sense when more than one scenario offers a positive outcome. It doesn’t matter what the probabilities are: if the payoff is zero (or less) in all but one scenario under any conceivable strategy, then those scenarios aren’t even worth planning for.
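A quick way to see this, continuing the toy sketch above (numbers still made up): if every strategy’s payoff is zero in all but one scenario, those terms contribute nothing to the expected value, and the ranking of strategies is decided entirely by the one scenario that remains.

```python
# Illustrative only: zero payoffs in the first two scenarios drop out of the sum,
# so only the "ai_fizzles" payoff separates the strategies.
p = {"ai_superhuman": 0.2, "we_get_dumber": 0.5, "ai_fizzles": 0.3}  # any probabilities work

strategy_a = {"ai_superhuman": 0, "we_get_dumber": 0, "ai_fizzles": 2}
strategy_b = {"ai_superhuman": 0, "we_get_dumber": 0, "ai_fizzles": 7}

ev = lambda payoff: sum(p[s] * payoff[s] for s in p)

# Each expected value collapses to p["ai_fizzles"] * payoff["ai_fizzles"],
# so the comparison no longer depends on the first two scenarios at all.
assert ev(strategy_a) == p["ai_fizzles"] * strategy_a["ai_fizzles"]
assert ev(strategy_b) == p["ai_fizzles"] * strategy_b["ai_fizzles"]
```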

I’m struggling to imagine any positive payoffs in the first two scenarios.

Society works because we have a stake in each other’s existence. If we become unnecessary to each other – if we no longer need to tie ourselves into a web of reciprocity – then we will become charity cases either for whoever controls the “Artificial General Intelligence” or for the AGI itself. Being interdependent, as we are now, is risky; being just plain dependent is a blind alley. I cannot see this as a positive outcome.

I don’t believe this first scenario is very probable. But it doesn’t matter: no matter what its probability, its outcome value is zero (or even negative). Unless all scenarios have non-positive outcomes, there’s no sense in planning for it.

If you put a gun to my head and asked which of the three scenarios was the most probable, I’d have to go for scenario two. If the overwhelming majority of people do not see a difference between real intelligence and its parody, they cannot value that difference. And so in that scenario, I will also become a charity case because I cannot sell anything anyone wants to buy. Unfortunately, that feels like the direction we’re heading in. Nor am I alone in thinking that.

That is why I don’t think the elaborate game of decision theory makes sense here. Only one scenario has positive outcomes. I’m placing my bets on AI fizzling and on the strategies that optimize for that scenario: maintaining my skill set without AI assistance, investing in a broad range of asset classes, and helping my children learn critical thinking and, hopefully, the courage to call emperors naked. And maybe, if we all take that approach, we’ll make scenario two less likely.