My previous post was an expedition into the world of generative AI. Since I published it, that world’s fault lines have been exposed more clearly to the public. OpenAI’s Sam Altman was first fired, then re-hired as CEO in a four-day quake that had me following every shockwave from Silicon Valley.
Explaining the firing, OpenAI’s original board of directors – who have stepped down in the meantime – shared that Altman “had not been consistently candid in his communications.”
It was a matter of trust.
Project Civilization is built on trust. We’ve collectively surrendered our ability to survive as individuals in return for the benefits of the division of labor, to the point where our livelihoods all depend on the actions of strangers. Day-to-day, we take for granted the network of trust required to make it all work. We think of money as a cold, inert thing, when it’s really the embodiment of our credit, our collective belief that we’ll all make good on our commitments to each other.
We trust each other to be forthright, to say things we believe. And we trust each other to be reliable, to believe things with good reason.
We’re trusting OpenAI and a handful of other companies to develop generative AI (gAI) responsibly.
Inside OpenAI, there have been two camps: the optimists, who believe that AI will help humanity, and the pessimists, who fear that AI may destroy it. The stakes are high, in other words.
Personally, I’m not worried about the Skynet scenario, with autonomous machines concluding that humanity would be better off if culled by 90%, or that biological life would be better off without humanity.
Here’s what worries me: What will happen when communication no longer takes place between people – people trusting each other to say what they believe and to believe things for good reasons? When instead much of what we read and hear is produced by a large language model that cannot really be said to believe anything at all?
Now Sam Altman has returned to the helm of OpenAI. Should we trust him?
In a recent interview, he characterized himself as “somewhere in the middle” between the optimists and the pessimists.
People who lay claim to a center – on any debate – do not give me the warm fuzzies. Claiming the center – as opposed to simply formulating your own position – is a power move. It’s a rhetorical gambit designed to convey an impression of reasonableness, not a reasoned argument.
Saying you’re “in the middle” implies that the thing you’re arguing about is a matter of spectrum and degree when it might not be. If one side holds that “2 + 2 = 4” and the other insists “2 + 2 = 5,” then someone proposing a middle ground at 4.5 is not occupying a neutral middle. He’s from the second camp, masquerading as the voice of moderation.
Even supposing there were a spectrum between helping and destroying humanity, what are the units, what is the scale, and where lies “the middle?”
There are questions of political economy where views might be mapped to a scale. With respect to inheritance taxes, some might argue that 100% of a person’s wealth should go to her designated heirs, some might argue that 100% should go to the state, and there are many points in between. But even then: What is the “middle?” Is it the numerical midpoint of 50%? Or is it what the citizen of average wealth believes? Or of median wealth? Or the average views of all voters?
When you stake out your position as the “middle” you are participating in the political negotiation, but you are negotiating in bad faith, claiming a mantle of neutrality to which you have no right.
We all say things like “I see myself in the middle on this issue,” casually, without rhetorical design. So I’ll happily give Altman the benefit of the doubt and assume he was speaking thoughtlessly.
But I can do so because Altman is a person, someone to whom I can extend trust and charity. Because – for now – the things he says come from himself, and not from gAI’s strange averaging of everyone’s and no one’s thoughts, an arbitrarily and non-transparently calculated “middle” that can neither be reasoned with nor moved.
First of all, thank you for this wonderful post and your thoughts. I think the expression “I find myself in the middle” is a – perhaps awkward – description of his own ambivalence. If we somehow hold two mutually exclusive things to be possible, then we’re not really assuming a predictable middle ground. Imagine a child standing in front of a rollercoaster for the first time: they really want to ride AND they are scared. At the same time. (We even have different brain regions for these feelings, not a region that averages them.) So Altman is not describing a position, but an ambivalence. Perhaps that is the better option, because opinions held with certainty quickly harden into ideologies.