Sawing off our own branch

An AI-bedazzled friend recently sparked some reflection on the nature of writing and the experience of creativity. “Sure,” he admitted, “ChatGPT doesn’t produce truly great writing by itself. But it can get you to a decent first draft.”

His suggestion nudged a memory of assembling a plastic Christmas tree. You set up the base, then the central shaft, then attach the branches. Finally, you decorate it with sparkly stuff. To my friend – who isn’t a writer – writing consists of constructing an outline, fleshing it out with sentences, and then festooning the result with colorful turns of phrase and metaphors.

Maybe some people write like that, and I don’t want to belittle anyone’s creative process. If that’s your approach, maybe ChatGPT saves you all but the last step, where you can still apply a creative human touch by removing its most garish flourishes and replacing them with something more tasteful.

My own experience of the creative writing process – fiction or non-fiction, short essay or lengthy tome – is very different from assembling a Christmas tree. It’s more like growing a real tree in the garden.

It seems to start with a central idea that, like a seed, contains within it a plan for the entire work, the intricate shapes of its furthest leaves just as much as its basic structure. As a writer, I feel like I’m tending the tree and optimizing its environment as the work unfolds the logic embedded in its kernel.

In great writing, there is no distinction between function and form, content and decoration. Elegant turns of phrase and evocative metaphors aren’t grafted onto a rough structure to grab the reader’s waning attention. They express the work’s central idea.

Once you have a central idea – the seed – typing the first words is the easy part. The hard part comes later, when your work has taken its basic shape. Whether it’s your own work, someone else’s, or ChatGPT’s, editing a poorly conceived first draft is much, much more time-consuming than starting afresh.

I don’t know what this means for how we will create, connect, and share big ideas with each other in the future, and whether we humans will do so at all. It’s tempting to wring every last drop out of the analogy: Plastic trees may be pretty and convenient, but they won’t feed you, provide tools, or even warm you.

But an analogy isn’t an argument. And at this stage, I’m no longer even sure what it is I should argue and if there’s room left for arguments. Maybe only for eulogies.

I see a chasm between organic writing (or music, or art) and bot-generated output. I see it in this experiment in the New York Times, where a short story writer faced off against a bot writer, each using the same “prompting.” Maybe that chasm will someday be bridged, but it hasn’t been yet, and it might never be, no matter how many data centers are devoted to the cause.

What daunts me is not the thought that the chasm might be bridged, but that our descendants might never develop the capacity to see it. My grandchildren, maybe even my children, may happen on this text and have no clue what I’m yammering on about, just as today we can’t even imagine, let alone regret, the lost spiritual dimensions and layered meanings of flint knapping.

Some say the Cistercian monastic order invented capitalism as a side-effect of their efforts to commune with God. By investing in labor-saving technology, they hoped to escape the drudgery of survival tasks and free up time to contemplate what is good, true, and beautiful. I worry that large language models are not another step toward liberating us from drudgery, but instead a tool to “save time” by getting us to settle quickly for the mediocre, the reassuring, and the distracting.

The Mirage of the Middle

Was I too harsh on the concept of an ideological middle ground in my previous post? Was I too harsh on those who lay claim to the middle ground, like Sam Altman did with respect to pessimistic and optimistic views on generative AI?

I received a couple of very insightful comments that raised these questions. In this post, I’ll explore the idea of an ideological “middle” and the rhetorical act of claiming it.

The following are very different statements about a pair of logically opposed ideological positions:

  1. “There exists a third position that reconciles the two.”
  2. “My own position represents such a reconciliation.”

Statement 1 could be the beginning of a search for a third position. It’s an open invitation to deliberate. The reconciliation may take different forms. It may turn out to be a synthesis of thesis and antithesis à la Hegel. It may be an uneasy compromise that both sides can tolerate, though not love. Or it may offer a quantitative scale on which the two opposing views take different values, and define a midpoint between them.

Statement 2 is a rhetorical power move.

In most cases it may be an inadvertent power move. I try to make a point of not judging intentions, and always assuming the best. But the effect of claiming the middle – intended or unintended – is to label the original two opposing views as unreasonable and extreme.

The “middle” has powerful moral connotations. The Aristotelian virtues lie in a moderate middle between destructive extremes. In a world characterized by the normal distribution, the arithmetic mean and its immediate surroundings enjoy a kind of democratic legitimacy. We experience the world as an alternation of intense passions and calm refractory periods, and we’ve been taught to believe – through ideology and experience – that dispassionate interpretations of a situation are closer to the truth.

One reader-friend disagreed with my argument that “staking out a position in the middle is a cop-out.” We’re not in actual disagreement: I’m not arguing it is a cop-out. It’s the opposite. It’s taking a stand while fortifying it with the moral authority of “moderation” and “reasonableness.”

We say things like Altman did all the time: “I’m in the middle on this issue.” Most of the time we are not intentionally performing the power move. Instead, we are trying to express something else. What? Here are some candidates:

  1. “I simply do not know or care enough to choose either side or to contribute to the search for a third position. So I’m withholding my judgement.” That is a very respectable position to take. I, for one, would do better to adopt it much more often. But “I don’t know/care” would be a better way to say that than “I’m in the middle.”
  2. “I see the merits of both sides and have not yet found a reconciling third position. The search is still on.” That is also a very respectable and underutilized perspective. I think the charitable interpretation of Altman’s “in the middle” is this one. But again, “I’m in the middle” is not really an accurate way to describe this position. Better would be “I’m torn” or “I’m withholding judgment because I see the merits of both sides’ arguments.”
  3. “The rhetoric of the opposing sides is unnecessarily emotional, and I want to reframe the debate using less fraught language.” This is not the same thing as establishing a third, reconciling position. It can be a respectable position to take. But “taking the emotion” out of the debate easily becomes a power move in itself. Using a word with relatively little emotional valence in an emotionally charged context is also an ideological statement. American-English “enhanced interrogation” and Nazi-German verschärfte Vernehmung mean the same thing: torture.
  4. “I have no skin in this game, so my view should be understood as a ‘middle ground.’” Sounds reasonable, but sorry, no dice. To truly have no skin in the game means you also have no view. The second you adopt a view, you’re fooling yourself, and probably others, into thinking you are not emotionally invested.

It’s a commonplace that no single person sees the world from a “neutral” nowhere, even as we imagine the idea of a neutral point of view and strive for it. But there is no reason to assume that a God’s Eye Perspective comes from any sort of “middle.” Even when opposing positions can be mapped onto a quantitative scale, the decision of how to define the numerical “middle” is itself highly ideological. See my example of inheritance tax in the previous post: Between a) total state confiscation of property at death, and b) zero inheritance tax, is “50%” the middle? Or is the middle the average of everyone’s opinion on the matter?

At what speed – including zero speed – and in which direction to develop AI is a political question. Our best, if imperfect, tools for resolving political questions are to:

  1. apply consistent, rule-based deliberation processes that are framed independently of the content of the question, and
  2. take the pragmatic leap of faith to attribute to others the same motives that you wish others would attribute to you.

The “middle” is a mirage, and claiming it as your own does nothing to improve the tone of the debate. At best, it’s a misleading way to say “my mind isn’t made up yet;” misleading because it suggests that the two opposing camps surrounding the middle are equally wrong.

At worst, claiming the middle is a power play: rolling your heavy rhetorical artillery onto the commanding heights of the debate.

Trust in/and AI

My previous post was an expedition into the world of generative AI. Since I published it, that world’s fault lines have been exposed more clearly to the public. OpenAI’s Sam Altman was first fired, then re-hired as CEO in a four-day quake that had me following every shockwave from Silicon Valley.

Explaining the firing, OpenAI’s original board of directors – who have stepped down in the meantime – shared that Altman “had not been consistently candid in his communications.”

It was a matter of trust.

Project Civilization is built on trust. We’ve collectively surrendered our ability to survive as individuals in return for the benefits of the division of labor, to the point where our livelihoods all depend on the actions of strangers. Day-to-day, we take for granted the network of trust required to make it all work. We think of money as a cold, inert thing, when it’s really the embodiment of our credit, our collective belief that we’ll all make good on our commitments to each other.

We trust each other to be forthright, to say things we believe. And we trust each other to be reliable, to believe things with good reason.

We’re trusting OpenAI and a handful of other companies to develop gAI responsibly.

Inside OpenAI, there have been two camps, the optimists who believe that AI will help humanity and the pessimists who fear AI may destroy humanity. The stakes are high, in other words.

Personally, I’m not worried about the Skynet scenario, with autonomous machines concluding that humanity would be better off if culled by 90%, or that biological life would be better off without humanity.

Here’s what worries me: What will happen when communication no longer takes place between people: people trusting each other to say what they believe and to believe things for good reasons? When instead much of what we read and hear is said by a large language model that cannot really be said to believe at all?

Now Sam Altman has returned to the helm of OpenAI. Should we trust him?

In a recent interview, he characterized himself as “somewhere in the middle” between the optimists and the pessimists.

People who lay claim to a center – in any debate – do not give me the warm fuzzies. Claiming the center – as opposed to simply formulating your own position – is a power move. It’s a rhetorical gambit designed to convey an impression of reasonableness, not a reasoned argument.

Saying you’re “in the middle” implies that the thing you’re arguing about is a matter of spectrum and degree when it might not be. If one side holds that “2 + 2 = 4” and the other insists “2 + 2 = 5” then someone proposing a middle ground at 4.5 is not in a neutral middle. He’s from the second camp, masquerading as the voice of moderation.

Even supposing there was a spectrum between helping and destroying humanity, what are the units, what is the scale, and where lies “the middle?”

There are questions of political economy where views might be mapped to a scale. With respect to inheritance taxes, some might argue that 100% of a person’s wealth should go to her designated heirs, some might argue that 100% should go to the state, and there are many points in between. But even then: What is the “middle?” Is it the numerical midpoint of 50%? Or is it what the citizen of average wealth believes? Or of median wealth? Or the average views of all voters?
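To make the ambiguity concrete, here is a minimal sketch with invented opinion data. Depending on whether you take the midpoint of the scale, the mean of everyone’s opinions, or the median opinion, you get three different “middles” from the same numbers.

```python
# Hypothetical poll: each number is one voter's preferred inheritance-tax rate (%).
opinions = [0, 0, 5, 10, 15, 40, 90, 100, 100]

scale_midpoint = (0 + 100) / 2                          # midpoint of the possible range
mean_opinion = sum(opinions) / len(opinions)            # average of all voters' views
median_opinion = sorted(opinions)[len(opinions) // 2]   # the median voter's view

print(scale_midpoint, round(mean_opinion, 1), median_opinion)
# 50.0 40.0 15 -- three candidate "middles", none of them neutral
```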

When you stake out your position as the “middle” you are participating in the political negotiation, but you are negotiating in bad faith, claiming a mantle of neutrality to which you have no right.  

We all say things like “I see myself in the middle on this issue,” casually, without rhetorical design. So I’ll happily give Altman the benefit of the doubt and assume he was speaking thoughtlessly.

But I can do so because Altman is a person, someone to whom I can extend trust and charity. Because – for now – the things he says come from himself, and not from gAI’s strange averaging of everyone’s and no one’s thoughts, an arbitrarily and non-transparently calculated “middle” that can neither be reasoned with nor moved.  

The Lazy God

Are generative AI’s large language models more like weather forecasts or more like economic models?

So-called generative artificial intelligence has blown our minds with its apparent marvels: essays, poems, artwork all created “by machine.” Under the hood, it is constructed on complex probability models applied to the universes of data we’ve made freely available – including the content of this blog. What word is likely to follow another word? What sentence will follow another sentence? What emotion will be triggered by one statement, and what new emotion will be triggered by the next?
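As a deliberately toy illustration of that “what word is likely to follow another word” idea, here is a tiny next-word model built from word-pair counts. Real large language models use neural networks over tokens rather than a lookup table like this, so treat it only as a sketch of the underlying probabilistic intuition.

```python
import random
from collections import Counter, defaultdict

corpus = "the quick brown fox jumped over the lazy dog . the lazy dog slept .".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation starting from "the".
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```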

We humans make predictions about the world using probabilities. Then we choose a course of action that best accomplishes our goals, taking into account our risk appetite. Some of the time, we make decisions like packing an umbrella given the weather forecast (“There’s a 30% chance of rain today.”). In those cases, our decision does not affect the probabilities. Only the superstitious believe that leaving the umbrella at home tempts the rain gods.

But in other situations, the decisions we make do affect the probabilities involved. If, historically, 98% of homebuyers repay their mortgages, a lender can choose an interest rate that more than compensates for the 2% default rate, and then try to sell as many mortgages as possible. But extending the pool of borrowers has an impact on the default rate.

That is the pocket-sized explanation of the Great Financial Crisis. The financial industry built complex models of mortgage default behavior based on historical data, made lending decisions based on those models, and then discovered that the lending decisions wrecked the models’ foundations. In a real sense, the existence of the model created the conditions under which the model failed.
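A stylized sketch of that feedback loop, with invented numbers: a lender prices loans against a historical 2% default rate, but widening the borrower pool shifts the true default rate, and the original pricing stops covering the losses.

```python
def breakeven_rate(default_rate):
    """One-period simplification: lend 1, collect (1 + r) from non-defaulters,
    nothing from defaulters. Returns the rate at which expected profit is zero."""
    return default_rate / (1 - default_rate)

historical_default = 0.02
priced_rate = breakeven_rate(historical_default) + 0.01   # small margin on top

# Expanding the borrower pool changes the population the model was calibrated on.
for true_default in (0.02, 0.05, 0.10):
    expected_profit = (1 - true_default) * (1 + priced_rate) - 1
    print(f"true default {true_default:.0%}: expected profit per dollar lent = {expected_profit:+.3f}")
```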

A similar dynamic occurred during Covid. When people feared contagion, they behaved more carefully, with or without required restrictions, and cases dropped. When fears ebbed, they behaved more liberally again, and cases increased again. I personally witnessed extremes from scrubbing every item from the grocery store to finger-food buffets at a backyard party, all within the first six months of the pandemic. People’s beliefs about the risk fundamentally determined the level of risk.

The distinction between these two cases – where the world does and does not respond to our beliefs about probabilities – lies at the heart of the questions I’ve been trying to answer in the Ruminathans.

In Jorge Luis Borges’s story The Library of Babel, the world is an infinite library containing books of 410 pages consisting of every possible string of 22 letters, the space, the comma, and the period. Most books are gibberish, but some contain coherent passages, and some are perfectly coherent from beginning to end. In fact, the library contains every coherent human thought, past and future. The problem is finding the coherent books.
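For a sense of scale: using the dimensions Borges gives in the story (410 pages of 40 lines, each line of about 80 characters, drawn from 25 symbols), the number of distinct books is finite but far beyond astronomical. A quick back-of-the-envelope calculation:

```python
import math

symbols = 25                          # 22 letters, the space, the comma, and the period
pages, lines, chars = 410, 40, 80     # book dimensions as described in the story
characters_per_book = pages * lines * chars          # 1,312,000 characters
digits = math.floor(characters_per_book * math.log10(symbols)) + 1

print(f"distinct books = 25 ** {characters_per_book:,}, a number with roughly {digits:,} digits")
```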

If large language models are more like weather forecast models, then they essentially generate a guide through Borges’s Library, taking you to all coherent books and only to the coherent books. The world of human thought and action would be a system that can be modeled in terms of probabilities, but that is not shaped by our beliefs about the probabilities or by awareness of the model. Any thought could be predicted by prompting the model correctly, including any thought to resist the model’s predictions. Little effort would give us god-like knowledge.

On the other hand, if large language models are impacted by what we believe about them, then they are vulnerable to irony: Believing in the model’s predictions affects your behavior, and your changed behavior undermines the model’s predictions such that your behavior brings about the opposite of your intention. In that case, resistance is not futile. Resistance is simply what will happen as actual people try to game the model for their own purposes. Which will ultimately make the model useless.

What would resistance look like? Some resistance will be political, such as the intellectual property challenges currently before US courts. But some resistance will be sabotage, performed by human actors with a range of motivations. The large language models on which generative artificial intelligence is based are fed with enormous amounts of human labor, labor which can (and already does) intentionally distort the inputs. The quick brown flux jumped over the lazy god.

Blog writers might insert non-sequiturs, alterations of standard phrases, and purposeful misspellings that preserve the meaning of a text or add a layer of commentary to it intelligible only to the “naturally” intelligent.

Human labor is currently used to train and calibrate large language models. AI-optimists hope to replace that labor with, you guessed it, generative AI. In other words, generative AI will calibrate generative AI to more closely behave like humans. But that opens untold software-based possibilities for sabotage, sabotage performed by anyone interested in disrupting a society that has outsourced many of its cognitive tasks to the black box. Or who is just interested in protecting their intellectual property.

What will be the result of all the attempts to exploit the models or sabotage them? Probably something like Borges’s Library of Babel, an endless maze of gibberish, with no trustworthy guide to the scattered sprinkles of insight. And we’ll have to gather those insights the hard way again.

Ideologies and Loyalties

What is the content of an ideology? And what competing ideologies presently dominate our discourse?

Last week, I attended a small conference loosely organized around the question “Are our civil liberties threatened?” One of the talks was given by a political scientist, who chose to frame the question from the opposite end. He asked whether totalitarianism – a form of government practically defined in terms of its negation of civil liberties – is on the rise. His approach was to take Hannah Arendt’s list of conditions under which totalitarianism is likely to unfold, and then to investigate each of the conditions to see if it describes today’s world in ways similar to what Arendt observed in the 1930s and 40s.

One of those conditions is the availability of a political ideology. You need an “ism” to get a significant number of people to move in the same direction. In Arendt’s time, fascism and communism were on offer, providing the organizing principles for the mass movements behind the totalitarian systems of her day.

At the conference, the political scientist struggled to identify clearly analogous candidate ideologies today, at least for the Western world. He allowed that China might be concocting a blend of nationalism, Marxism, and Confucianism, and that Islamic fundamentalism was clearly a potent ideology. But for the German/European/Western participants at the conference, they play at best a role in defining opposing ideologies, whatever they may be.

And that’s where the political scientist came up empty. What are the dominant ideologies around which mass movements leading to totalitarianism might be organized today, in the Western world? He did not have a clear answer, and consequently, he at least implied that not all the conditions for totalitarianism are present. For now.

I’m reluctant to put words into the gentleman’s mouth, but based on some follow-up conversation, I got the impression that he was not saying we live in a post-ideological world, but rather that we have too many ideologies on offer, with none dominant enough to launch a mass movement.

But what if ideologies now operate more subtly than they did in the 20th century? What if they do not come branded with an “ism,” and with a program that is trumpeted by self-described “ists” (fascists, communists, etc.)? Even in the 20th century, were the “isms” preceded by subtle, unlabeled ideas that got labeled only after they had turned into mass movements? Are there ideologies today that are operating under the radar for now, but that are ready to channel a mass movement?

To identify what ideologies might be exerting a dominant role, I don’t want to go down the rabbit-hole of defining what “ideology” means. At least not today. Instead, I want to investigate one dimension of the content of political ideologies: What level of our nested identities commands our highest loyalty?

Our selves are constituted in nested circles of relationships. We are individual organisms, sure. But we are born into families. Our immediate family is part of an extended family, extended by blood and marriage, and the extended family may be part of a larger clan or tribe, possibly concentrated in a polis shared with many other clans. Today, most towns or cities, along with their hinterlands, are part of a larger confederation called a nation-state, which is invented and re-created based on shared language and myths. Beyond the state, though, we share common bonds with all members of our human species, and even beyond that, we are integrated into the biosphere that sustains and eventually consumes us.

And that’s just one way of accounting for our nested identities. One could easily argue for a different nesting order, or a different collection of definitions entirely. We also constitute our identities based on professions, organizations, religions, class, etc.

A political ideology will have many different things to say. But one thing it almost always tells us is which are the “authentic” as opposed to “artificial” nested sets, and more importantly, which level ought to command our highest loyalty. Take Horace’s declaration that “it is a sweet and seemly thing to die for one’s country.” The fulfilment of your individual life lies in serving the patria, even unto death, and regardless of what impact your death has on your family.

Ideologies are rarely completely coherent, so what “highest loyalty” means may not be completely coherent either. Liberal nation-states may dedicate themselves to and base their legitimacy on individual rights. And yet they can and do make Horace’s claim on individual life. Vestiges of the priority of the family’s claim also remain, e.g., when you have not only the right to refuse to testify against yourself in a criminal trial, but can also refuse to testify against a spouse.

As an “ism,” nationalism’s position on which claims are paramount is obvious. Communism follows a kind of universal logic, which is precisely why the idea of “containment” was not all paranoia. Some (but not all) religious ideologies push the focus of loyalty beyond nature, but for practical purposes that may mean they assign the next highest rank to Humanity. St. Paul pushed Christianity to the universal human level and away from a national/ethnic focus. Separatist movements often push the focus of loyalty to a level lower than that of most current nation-states, closer to the polis or tribal level.

One way to recognize an emerging ideology might be to look for new narratives about which level reigns paramount. Are there such new narratives?

Cosmopolitanism – the idea that the supreme claims are those of humanity-at-large – is not a new ideology. In antiquity, however, cosmopolitanism likely followed a kind of bottom-up logic: A few people did travel around the Hellenic, Persian, and Roman worlds, and as far as China, and they observed that people around the known world may have had cultural differences, but their behavioral differences paled in comparison to their similarities. Since the 20th century, not only are more people exposed through travel and commerce to the inexorable bottom-up cosmopolitan logic; we also now face the top-down logic of threats to the species itself: weapons of mass destruction, climate change, global pandemics, the fragility of total economic interdependence.

Today, with both the bottom-up and top-down logics in operation, we should expect not only new ideologies with a cosmopolitan ethos to emerge, but also other ideologies that define themselves more clearly in opposition to cosmopolitanism, for instance more assertive forms of nationalism that outright deny any and all claims of humanity-at-large.

I doubt that this is news to anyone, which is why I’d like to look more closely at an entirely different ideological trend “at the opposite end” to cosmopolitanism, a trend that asserts the primacy of the individual level.

Individualism is nothing new either, of course. You might characterize liberalism as an ideology that places individual claims at its center, granting legitimacy to the nation-state level only insofar as the nation-state is the most effective guarantor of individual rights. At its origin, liberalism was preoccupied above all with issues of individual property rights. And liberalism defined its position on individual property rights less in opposition to the king – the anthropomorphized tribe or nation – than in opposition to traditional land tenure forms centered on the family. [I’m reading yet another great book on that exhaustively covered topic, Owning the Earth].

But liberalism’s commitment to individual rights is premised not on the uniqueness of the individuals involved but on their fundamental similarity. It’s precisely the fundamental similarity of individuals, with similar appetites, strengths, and failings that puts them into potential conflict with each other, requiring some kind of governance to protect their individual rights.

In my perception, we are now faced with a different notion of individualism, one that isn’t rooted in the belief in the fundamental similarity of individuals. There seems to be an ideology that goes beyond accepting or celebrating individual variation and pushes individuation as a moral imperative. For want of a better term, I’ll give it a silly one: snowflakeism. Everyone not only is but ought to be a unique snowflake. The snowflakeist says “The undifferentiated life is not worth living.” Yours will be a failed life if it can’t be told as a unique story, with a combination of experiences, relationships, and preferences that is yours and yours only. “You be you.”

Corporate marketing departments have latched onto snowflakeism – Apple is a particularly egregious case – but in doing so, they also perpetuate it and proselytize its gospel. And by offering a dizzying array of products and services, consumer capitalism offers the opportunity to realize the snowflakeist ideal, if only through a unique permutation of tastes and brand loyalties.

But it would be a mistake to think it’s purely a consumerist phenomenon. Try to find a daycare facility that advertises its pedagogical philosophy as one that does not prioritize a child’s ability to self-actualize. Few and far between are the films/series/novels in which a convention-breaking change agent is defeated by a collective of drab normies, and that is the happy end.

[A possible exception is The Lord of the Rings, in which, for thousands of years, an alliance of exceptional, powerful, heroic characters is unable to defeat an equally exceptional, powerful anti-hero, until distinctly ordinary, diminutive hobbits are able to win the day, precisely because they are unexceptional and unambitious. But LotR is from a different age, and had a throwback ethos even when it was brand new.]

My point is not to take a position on snowflakeism. Admittedly, I sometimes feel a cultural pressure to be quirky, and I experience that pressure as tiresome. And certainly, striving for the snowflakeist ideal by selecting a unique collection of brand-fetishes seems like one of the worst possible ways to tell your one and only life-story. My point is that it is a new set of ideas about how we ought to live our lives and about what it means to place the individual at the center of the political and moral universe. We’ve always had our myths about trickster Gods, our Hermeses, our Lokis, our Mauis. What’s new is that we expect all of us to create ourselves in their image.

I wouldn’t go so far as to say snowflakeism is our dominant ideology, but it is in the running. And its most powerful impact may be – just as for cosmopolitanism – as a foil for other ideologies that define themselves in opposition to it.

Both cosmopolitanism and snowflakeism must feel abstract and fantastic to many, possibly most, people. They feel unreal to me much of the time. Lived reality takes place in our families, in our local communities, and in our professional associations. We know from experience both painful and banal that too much individual differentiation strains our close-to-home relationships to the breaking point and beyond. On the other end, the nation-state has always been a strange abstraction, at best a projection of our particular local experience onto an imaginary screen. When we expect loyalty to the species level, we’re operating in a space of pure abstraction, driven by bottom-up and top-down logic, sure, but logic unsupported by sentiment.

Neither cosmopolitanism nor snowflakeism may be ideologies that channel the kinds of mass movements that lead to totalitarianism. And then again, they might be. But an ideology that defines itself in opposition to them could be extremely potent. Snowflakeism may be a particularly strong force to react against. It’s all the more powerful for being subtle, unnamed, and new. And the corollary of its main idea is that your conventional life is inauthentic. Them’s fightin’ words.   

The elephant in the room is, of course, so-called “populism” in its many global breeds. “Ism” though it may be, I don’t think “populism” is well-defined enough to provide an operating system for a mass movement; it will have to be (and arguably is being) distilled into some kind of nationalism, or into some new kind of “ism” that might well transcend traditional national boundaries.

The important thing to realize is that we should not be dismissing the people caught up in populism as irrational, with pejorative and ill-defined terms such as “bigot” and “racist.” Sadly, many attendees at the aforementioned conference did just that. Instead, we should recognize that many people may feel caught in an ideological pincer, between the pressure to take a species-wide perspective that exceeds everyone’s emotional capacity, and the pressure to individuate to a point that will alienate them from their local bonds.

We simply won’t be able to head off totalitarian trends if we don’t understand and have compassion for the people whose supreme loyalties are commanded between the extreme ends of our nested identities.

Forecasting for Fun

After a large creative project captured my time and energy, I’m returning to the Ruminathans. When appropriate, I’ll reveal more about that project! In the meantime, I’m thinking about economic forecasting.

Forecasting: Near, Medium, and Long-Term

The leadership team of a company I know intimately has been spending a good amount of time trying to read the tea leaves on the direction of the economy. Will it grow? Will it contract? What implications will that have for direct customers and for sales?

It’s inevitable that companies fret about the economy and try to make contingency plans. Of course, what actually happens to the economy in the near term is a result of everyone’s fretting and planning on the one hand. On the other, it’s a function of events – like wars and pandemics – whose timing and impact are unforeseeable. For both reasons, forecasting near-term deviations from an underlying trend is probably pointless.

If you’re a going concern with a product and paying customers, there doesn’t seem to be much you can do about near-term economic fluctuations other than prepare a financing cushion – whether through savings or through credit access – that is proportional to your cost structure: your ability as a company to dial down expenses as your volume of business goes down. What qualifies as “near term” varies based on what kind of business you’re in, probably between six months and two years.

At the same time, the long-term economic prospects are also too hard to plan for. Your mileage may vary depending on your industry, but with the rapidity of technological, social, political, and – to be brutally realistic – environmental change, I no longer find any forecasting beyond ten years to be useful for most individual businesses, and even for the economy as a whole.

In a future Ruminathan, I might come back to this topic of the limits of knowledge in the near and long terms, and how that is (or isn’t) reflected in the standard investment planning concepts like the cost of capital. For now, I’m trying to wrap my head around the medium term – say two to ten years – the range where our crystal balls aren’t clouded by near-term noise or long-term chaos.

Here’s the story I’m toying with.

Abundance and the Marketing Arms Race

At present, inflation is a significant concern. Our concern is partially shaped by a skewed perspective. We got used to inflation being so low for so long. Inflation seemed to defy all expectations given the ultra-low interest-rate policy sustained for over a decade on a near-global scale. That policy ought to have provoked inflation, with money supply growing faster than economic activity. Inflation didn’t develop, though, for a complicated set of mutually reinforcing reasons.

Possibly the biggest reason was free trade and the entry of China into the global economy. For anyone living in conditions of extreme scarcity this will sound ludicrous, but on the whole, since the 1990s, the world has enjoyed unprecedented abundance, a glut of goods. Inevitably under market competition, prices stayed low.

But pricing is not the only lever producers use to compete for demand. Marketing/advertising is another way producers try to move their output. The thing about marketing is that it’s an arms race. You can’t afford not to do it when your competitors are. My perception is that the marketing arms race has been a significant driver of economic activity and technological development over the past two decades.

Advertising’s growth rate alone easily outstrips global GDP growth, meaning it takes up an ever greater share of the economy. In 2022, advertising spending stood at about 0.8% of global GDP, which doesn’t sound like much, until you remember that it doesn’t create value for anyone besides advertising company shareholders. Google, Facebook, and co. are the most obvious beneficiaries of the marketing arms race.

Additionally, innovation in the distribution side of marketing – in particular the development of online sales/distribution platforms, your Amazons, Airbnbs, booking.coms, Netflixes, etc. – has been an equally crucial driver of change (and possibly, but not necessarily, growth).

As far as inflation – or the absence thereof – is concerned, these marketing-oriented technological innovations reinforced the downward pressure on prices. New sales/distribution channels put competitive pressure on traditional channels (retailers, movie theaters, etc.). Or cut out the middlemen entirely. And new advertising behemoths like Google offered services you’d previously have had to pay for – or didn’t have access to at all – for free.

Our rhetoric about technological change often carries a hidden assumption about its inevitability. “Change is going to happen; you can embrace it or be swept aside.” Maybe change is inevitable, but the direction of change need not be. Why do we develop certain technologies and not others? Why did the internet take off during the 1990s and not clean energy and batteries? The choice of directions in which we – as a society – innovate (or devote capital towards innovation) are guided partly by cultural preference, but also by economic constraints.

Did we choose to develop information technology rather than energy technology because for individual economic actors, the more urgent perceived problem was the glut of goods, not climate change? That’s my working hypothesis.

Scarcity and Demographic Change

Back to inflation. The proximate causes of the sudden surge in inflation were undoubtedly related to the pandemic and the responses to it: shifts in demand from services to goods, shutdowns in key sectors like semiconductors, the reduction of labor hours paired with a continued willingness to pay thanks to fiscal income supports.

But the pandemic coincidentally arrived at a demographic inflection point. Starting around the midpoint of the 2020s (i.e., now), across much of the developed world (in which we may as well include China at this point), the ratio of the working-age population to the retired population is going to shift downwards. The pandemic may have accelerated this trend, as people on the edge of retirement decided to throw in the towel during the lockdowns.

The implication is a global labor shortage, without a commensurate decline in demand. Retirees will need food, utilities, shelter, transportation, and importantly, healthcare. They will want entertainment. Their retirement portfolios will have nominal dollars to pay for these things. But there will not be enough people to produce goods and services to meet their demand. Prices will rise: the savings portfolios meant to support retirees’ consumption will lose value relative to the goods the savings were meant to pay for.

Ironically, interest rate increases, insofar as they have historically curbed inflation by throwing a bunch of people out of work, ought not to be effective against this kind of inflation. Rate increases might even exacerbate the problem: Fewer people working means fewer goods and services. Higher interest rates mean lower investment activity, which means fewer opportunities to boost output per labor-hour.

Higher interest rates also mean more nominal income generated from retirement savings. Retirees may have more nominal dollars to spend.

The total result might be an equilibrium of relatively high inflation, relatively high interest rates, full employment, and low or even negative economic growth.

In that context of chronic scarcity of goods and services, will sales and marketing activities be as important as in the context of a glut? Probably not. If your product constantly sells out, why spend any money to drive more demand?

In the broadest terms, under these assumptions, we should see a shift of labor resources away from the marketing arms race, and towards increased production of goods and services that sustain and enrich lives. Fewer influencers and software engineers working on Tweets. More nurses and tradespeople, and more software engineers figuring out how to get the most out of each unit of labor.

And in the context of a persistent labor shortage, maybe the idea of “personal branding” will disappear entirely. Hope springs eternal.

Playful history

Over the last few years, I’ve read a fair number of summaries of recent archaeological research and what it has uncovered about the dawn of civilization. David Graeber and David Wengrow’s The Dawn of Everything does us all an enormous service by compiling a ton of that research from different times and places into one large, but highly readable work, along with a new interpretive meta-narrative of history. Actually, it’s not so much that G&W propose a new narrative but that the evidence they present debunks a narrative that has been drummed into our subconscious assumptions about how the world works.

The standard story runs roughly like this: Foragers lived in small, more or less egalitarian bands for tens of thousands of years of human history. Economic pressures resulting from poor resource management and climate change encouraged, then forced, some cultures to experiment with sedentarism, cultivation, and pastoralism. These technological innovations in production led to higher yields of foodstuff, then population growth, higher-density settlements, and the need for greater social innovations, including the division of labor and a hierarchical political structure to coordinate specialists and manage collective resources like irrigation infrastructure. Whenever and wherever technological and social innovations reached some kind of stable equilibrium, societies would produce a surplus, leading to population growth, higher levels of population density, higher complexity, and again the need to innovate technologically and socially.

This cycle has corkscrewed us up through a sequence of political-economic forms, a sequence determined by economic and environmental necessity. The sequence is inevitable in an evolutionary sense. The only alternative to the screw’s next turn is a collapse to a lower level of the sequence, often by way of a Hobbesian war of all against all, always with the option of complete annihilation.

What Graeber and Wengrow’s work accomplishes is to take off the blinders of “inevitability.” The archaeological record shows us hierarchical foragers, egalitarian urbanites with non-hierarchically organized infrastructure, and dense population nodes that appear independently of the underlying economic mode of production. The (pre)historical records abound with different answers to “What is the good life?” and “What do we owe each other?”, with experiments that seem to have taken place not always out of economic necessity but often for the sheer hell of it, playfully.

Playfulness characterizes much of G&W’s account of history. The Neolithic is littered with monuments, of which only works in stone and bone survive, that we can endlessly try to interpret in functional terms – to worship gods, honor the dead, or keep time – but that might just as well have been erected for fun, no more and no less than an excuse to share an epic experience. On a recent vacation in the UK, my family visited Stonehenge and Avebury, practically neighboring sites whose use and construction phases overlapped. They are both similar and different. Did they inspire each other in friendly competition, or in friendly complementarity? G&W point out that a motive force for innovation and creativity may be nothing more than the desire to try out something different. A few days before making the pilgrimage to the British monuments, we had spent a day with friends at a Welsh beach. While two aspiring young architects constructed a sandcastle of superlatives, I decided to sculpt a model of a fishing village, to contrast both with the “competition” and with my own gargantuan sand-based replica of Minas Tirith of the previous year. Just because.

Inevitability and necessity rather than playfulness characterize so much of our political and economic thinking, particularly the branch called, ironically, game theory. Hobbes’s war of all against all prefigures the classic statement of the Prisoner’s Dilemma, wherein the possibility of win-win cooperative solutions is undermined by the mere possibility of a player exploiting others’ cooperation. Fearing shirkers, rational actors withhold cooperation preemptively, self-fulfilling their own dark prophecy. Game theory actually shows, though, that social dilemmas of the Prisoner’s Dilemma form need not inevitably lead to non-cooperation when they are repeated. And in real life, when are they not repeated? It is possible to credibly negotiate and commit to cooperative social contracts. Hobbes argued for only one stable social contract, under a unitary and necessarily unconstrained sovereign. The crucial, underreported, and playful insight of game theory is that mathematically, the number of stable social contracts – some more, some less egalitarian – is infinite.
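A minimal sketch of that insight, with illustrative payoffs: in a single round of the Prisoner’s Dilemma defection dominates, but once the game repeats, a conditional strategy like tit-for-tat can sustain cooperation, and many other cooperative arrangements become stable as well.

```python
# Payoffs (mine, theirs) for (my_move, their_move); C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did last round."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []   # each player's record of the opponent's past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(seen_by_a), strategy_b(seen_by_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (30, 30): cooperation sustained
print(play(always_defect, always_defect))  # (10, 10): the Hobbesian outcome
print(play(tit_for_tat, always_defect))    # (9, 14): exploitation limited to one round
```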

What the case studies collected by Graeber and Wengrow show is that an infinite array of social contracts is not just a mathematical possibility but a historical fact, and that the range of possibility is limited not only by economic and environmental necessity, but by the playfulness of our imagination.

FIRE as hoarding (part 3)

The two previous posts considered the question of whether FIRE – financial independence, retire early – is an ethically defensible life choice. Full disclosure: If I’m beating up on FIRE, it’s because I’m flirting with it – soul-searching, not moralizing.

In Part 2 I wondered whether FIRE-people benefit from hierarchically organized activity – living as they do off of investment income – while leaving to others the responsibility for maintaining cooperative behavior in the workplaces from which they’ve retired. In other words, FIRE is a form of shirking, shirking from a political responsibility to maintain the economic cooperation that sustains everyone, including especially those who live off the surpluses generated by the division of labor. This post investigates whether FIRE is a form of shirking or hoarding in an economic sense as well.

Part of the appeal of FIRE – to me at least – is its emphasis on frugality. You don’t win at FIRE by working yourself to death and/or placing highly risky bets. Instead, you cut consumption to the bone, and in doing so rediscover that less is often far, far more. It’s a topic for another Ruminathan, but I deeply believe that we don’t just get marginally less utility out of each additional unit of consumption, we actively destroy utility when we take “more” too far. So when it comes to the good life, FIRE attracts me from the get-go. What’s more, frugality is morally seductive. It takes discipline and tough-mindedness. Maybe it’s a self-rewarding virtue in the long run, but in the meantime, it requires a strong will to confront inner demons and the external pressure to keep up with the Joneses.

It’s heroic.

And that’s precisely why it needs special scrutiny, as do all self-flattering stories. How do you know if you’re living frugally? On the one hand, you drive a wide wedge between what you earn and what you spend. But you also benchmark your consumption against peers. Do you spend at the equivalent of the poverty line even as you earn three times more? You must be living frugally. Do your neighbors have tchotchkes and toys that you do not? Do you do without – gasp! – a car? You’re practically a modern-day Stylite, at least in parts of the US.

The trouble is that the frugal lifestyle is only possible because the “essentials” you cut your consumption down to are so cheap relative to the income you can generate. And the only reason that is possible is our societal Grand Bargain of increased labor specialization and global trade. A family of three or four can survive, even thrive, on $30,000 per year, as prominent FIRE advocate Mr. Money Mustache’s family does, because the basics are so damn cheap. Would the basics still be so cheap in a world in which large numbers of people followed MMM’s example? My fear is that frugality depends on work performed by people who do not become financially independent and retire early. That includes both those for whom FIRE is not even a remote option – particularly in resource extraction in the developing world – and those who could consider it individually, but if they did it collectively, would cause a collapse in the supply of basic goods.

One of the few areas of the global economy in which I have a sliver of expertise is telecommunications. We have organized much of our sharing of resources via markets, with freely ranging prices. An important reason that works at all is that we can quickly – at this point instantly – communicate local prices to each other, so that people not only have the motivation but the information to move goods from one place to another for profit. Maybe the world’s telecommunications infrastructure is so powerful that huge amounts of it can be devoted to “frivolous” consumption a FIRE-devotee would live without (Netflix). But that same infrastructure guarantees that solar panels, heat pumps, propane tanks, and power tools get to the corners of the world where real estate is cheap enough to make a go of FIRE, at prices affordable at $30,000 a year. And it also enables streaming the DIY videos you need to learn how to install your solar panels and heat pumps yourself. So what happens when all the telecoms engineers – who make good money and could afford to – catch FIRE?

We might already be learning how far FIRE can be pushed, and it might not be very far. The inflation we’re experiencing now has been kick-started by specific events: pandemic and war. But demographic trends – as described for example by Peter Zeihan – were set to hit a tipping point this decade without those events, with mass (but not necessarily early!) retirements hitting wide swaths of the world. Retirees may reduce their consumption, but only in the discretionary areas. Global consumption of the basics – food, water, heat, shelter, healthcare – will not drop off (in fact will rise for the last item in the list), even as the available labor pool shrinks.

I won’t pretend to know how this will end. Zeihan’s forecast is bleak for much of the world. But I worry that FIRE – as virtuous as its frugality dimension may seem – will turn out to be nothing more than a self-flattering form of free-riding. Sure, you may devote your time to carpentry and permaculture gardening, but as long as any of the inputs you use come from the network of reciprocity we call the global economy, your lifestyle may only be possible because others are not living it, because they cannot individually, or because they do not collectively. Retiring early may be no more than ostentatiously hoarding your human capital, which you can afford only insofar as others don’t.

We may know for certain sooner than we’d like.  

FIRE as hoarding (part 2)

Is FIRE a form of life in which we make good on our obligations to others? Or is it a form of shirking or hoarding? This post continues where the previous one left off, and focuses on whether retiring early from “bullshit work” is a form of shirking.

The premise of FIRE is that by constraining your consumption you can leave the workforce early. Constraining consumption works in two ways: before you retire – by generating relatively high savings – and after you retire – by living frugally on the capital returns from a relatively modest endowment. But is the FIRE-lifestyle a universalizable choice? Or is it possible only for a few, contingent on it not even being an option for the many? Can you liberate yourself from the “bullshit” dimensions of work – as prominent FIRE-devotee Mr. Money Mustache puts it – only because others cannot?

The first defense of the FIRE-person will probably be that the whole point of FIRE is to reallocate time from bullshit to meaningful activity. By definition, that entails a higher degree of productivity. The FIRE-person freely applies their human capital. While it was locked up in a particular (bullshit) job, it was being hoarded – by the capitalist. It’s a powerful argument that could apply in many individual cases. But assuming that capitalists will compete to gain access to human capital not only with a wage, but also with meaningful work, one would at least hope that it would apply less often than not, or in any case, not permanently to any given individual.

[Skippable side note: Junior employees may, understandably but incorrectly, characterize their job as involving a high degree of “bullshit”: getting coffee, etc. It’s true that some of their skills may be going to waste, i.e., be “hoarded” by their employer. But they may, also understandably, underestimate the degree to which they need to learn the ropes – the particulars of their function, their organization, their industry – to live up to their potential. You don’t know what you don’t know. Until you at least know what you don’t know, it may be hard to tell whether what you are doing is bullshit or simply the most efficient way possible to learn. What looks like bullshit might wind up being an investment. Wax on, wax off.]

What is bullshit work? David Graeber has an entire book on the subject (which I have not yet read) based on an earlier essay (which I have). But let’s look at how Mr. Money Mustache (MMM) defines the “bullshit portion of your work. The commute, the politics, the production of inferior products.”  The first factor, the commute, has nothing to do with the job per se.

[Skippable side note #2: The commute certainly is a massive waste of time and human capital. Covid and the home office may have reduced it, but whether that is a temporary state of affairs remains to be seen. For my part I have made many important (and cash-costly) life choices to avoid commuting. Although I’m not going into it here, there is a separate question about whether avoiding a commute is a universalizable life plan, or whether it’s possible only for some because others are willing to commute. What would our settlement patterns look like if we all lived a 15-minute bike-ride from our jobs, and would that settlement pattern be less livable than everyone’s current situation?]

The third factor MMM cites is working on inferior products. Having spent time on the production and/or research sides of a business, I have a lot of sympathy for the perspective that producers are sometimes constrained from doing their best work. However, having worked on the sales side as well as in upper management, I’m also aware of how difficult a judgment call it can be to choose a quality level for a viable product, in the sense of “something that someone is willing to buy at a given price.” I have worked in a business that created products that were too “superior” to have a viable long-term market. Producers, understandably, are not always aware of what level of quality the market wants, and hence cannot always tell whether their job is “bullshit” because of the constraints on quality. When that happens, it’s not the fault of producers. The fault lies squarely with management: It’s a failure of communication. But it’s not due to the intrinsic bull-shitiness of the job.

And that points naturally to MMM’s second contributor to bullshit: “politics.” Politics, in the world of business especially, is a dirty word. Has anyone ever used “office politics” with a positive connotation? But check out the subtitle of my favorite book about management: Managerial Dilemmas: The Political Economy of Hierarchy, by Gary J. Miller. Miller makes the case that management is inherently a political act: finding and sustaining a cooperative equilibrium in what otherwise would devolve into game theory’s scary, scary Prisoner’s Dilemma. Hierarchical organizations with central planning can, in principle, unlock efficiencies compared to loose affiliations of free agents. Although thousands of free agents could work together through market forces to build, say, jet engines, the transaction costs of contracting between them would be enormous. Hierarchical organization can avoid those transaction costs, if everyone within the organization contributes to common goals and foregoes opportunities to benefit from others’ contributions while withholding their own. AKA shirking. Both abstract game theory and empirical behavioral economics have shown that people can and do enjoy the fruits of cooperation and forego the individual gains from shirking, if they believe that others will act similarly. Miller argues that the essential task of management is to create and defend that belief. In practice, that’s accomplished through communication, whether explicitly through words and negotiations, or through symbolic acts such as ostentatiously refusing management perks.

That means that there may be two kinds of “politics” going on in the workplace. I assume that MMM uses politics in the sense of individuals using the organization to pursue individual goals – monetary and status-oriented alike – by taking advantage of others’ cooperation (shirking and other forms of defection). But there are also the political efforts – performed by everyone, but for which responsibility lies with management – to sustain the cooperative equilibrium. As a FIRE-devotee, you use your early career to generate savings from a salary that represents your share of the fruits of organizational cooperation – cooperation sustained by the political economy of the hierarchy. Then, in order to avoid more of the “bad” kind of politics, you retire early, presumably just at the point where your experience and reputation – regardless of what kind of role you are playing – could maximally contribute to maintaining the “good” kind.

I confess: I like the idea of FIRE. A lot. Enough to flirt with it myself. And it’s awfully tempting to regard it as a virtuous lifestyle, with its emphasis on frugality, which may be both a better way to live and a better way to meet our obligations to others with respect to sustainability. But I worry that “leaving the rat race” is a way of reneging on our obligations to others, a form of shirking: shirking the responsibility of sustaining the cooperative ethos whose fruits generated the surplus I, as a FIRE-devotee, hope to live on. And if the lifestyle is only attainable by living off the return on the accumulated surplus – the investors’ share of the fruits of cooperation – then it’s even more worrisome. The capital return is generated by the labor of those who have not yet been – and may never be – able to leave the rat race.

FIRE as hoarding (part 1)

Is FIRE – Financial Independence, Retire Early – an ethically defensible life plan?

The FIRE approach to life is to radically reduce expenses so that, even with a relatively modest income, you can accelerate retirement by decades. One of its prominent proponents, Mr. Money Moustache (MMM), “retired” at around 30 by earning well and keeping expenses to about $20,000 a year (probably closer to $30,000 in today’s money).

You could ask both whether it’s a life well led and whether it’s a life that meets our obligations to others. FIRE’s proponents emphasize that you need not stop working, but are free to do so and can choose to perform only work you find meaningful. In that sense, as long as you find meaningful activity post-retirement – paid or unpaid – the answer to the first question is an emphatic “yes”: FIRE allows you to live your life well. It’s the second question that has been puzzling me. Do the FIRE-folk meet their obligations to others? I’m specifically interested in whether they do so in “contractarian” terms, where there seem to be two sub-questions: Is the FIRE lifestyle universalizable – if everybody tried to live it, would it even be possible for anyone to do so? And if it is not universalizable, would we choose a society in which only some could live the FIRE dream, if we were in an abstract space “behind a veil of ignorance,” as John Rawls puts it, and did not know in what social position we would land once the veil is lifted?

I keep coming back to the first question especially. Is FIRE universalizable? You pay your dues for a few years doing work that has a relatively high portion of “bullshit.” For MMM that includes “commuting, … politics, and the production of inferior products.” Having saved enough money in a high-bullshit job, and invested it so that you have good hope of indefinitely extracting an inflation-adjusted $30,000 a year, you devote yourself to activities with a low-bullshit proportion, paid or unpaid. The reason the FIRE dream is within reach for many is the combination of capital returns – earning money simply by having more than you need to survive in the first place – and the “4% rule.” If it’s true that you can confidently extract 4% of your initial retirement war chest indefinitely (even while keeping pace with inflation), then “all” it takes is $750,000 at the time of retirement. MMM has plenty to say on how to achieve that level of savings. Sidestepping the concern that the current distribution of wealth and incomes would not make this universally practicable: Is it practicable in principle? If everyone tried to do it, would anyone succeed?
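
The arithmetic behind that figure is straightforward. Here is a minimal sketch, assuming the $30,000-a-year spending level and the 4% withdrawal rate quoted above (the function name is mine, and this is an illustration of the rule of thumb, not investment advice):

```python
# Back-of-the-envelope arithmetic for the "4% rule" as described above.
# The spending figure and withdrawal rate are the ones quoted in the text.

def required_nest_egg(annual_spending: float, withdrawal_rate: float = 0.04) -> float:
    """Nest egg needed so that withdrawing `withdrawal_rate` of it covers `annual_spending`."""
    return annual_spending / withdrawal_rate

print(f"${required_nest_egg(30_000):,.0f}")  # -> $750,000
```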

Superficially, any type of social retirement system looks like a low-ambition FIRE plan. Historically, social security schemes granted financial independence and early retirement relative to what came before. When your life expectancy is 70, a government-run plan that starts paying out at 67 makes you financially independent of your family and/or allows you to retire “early.” That kind of social security system is premised on the idea that the rest of the working population is productive enough to provision both itself and you, without relying on your productive input. Part of that premise, though, is that the average productive contribution of people above the retirement age would not be high to begin with, i.e., that the 67+ age group is collectively a net consumer of society’s bounty. The social security system ensures that the provisioning duty is spread across society rather than concentrated in individual families. There is an argument to be made that this may allow society to be more productive overall: 67-year-olds can devote themselves to activities they consider meaningful and manageable, paid or unpaid, rather than being forced to keep plowing fields or building roads at a lower rate of productivity. Meanwhile, their families can undertake higher-risk, higher (social) reward activities that they might not if they had to ensure the survival of their aging parents and grandparents.

FIRE is different. Its adherents try to leave the “rat race” much earlier than 67, as early as possible, in fact. At what age your productivity peaks obviously depends on the type of activities you pursue and your personal situation. Equally obviously, there is a phase, usually decades long, in which your skills and knowledge increase, and then a point at which your personal stock of human capital depletes faster than you can restock it, a circumstance of which I’m becoming painfully aware in my mid-forties. So whether you peak in your 30s, 40s, or 50s, FIRE is likely to take you out of the labor force before you have had a chance to contribute some of your best work.

Does that constitute a form of hoarding? In a series of earlier posts, I discussed what “hoarding” might mean for financial and tangible assets. I concluded that, whether “hoarding” is the best label for the concept or not, there are implicit or explicit social conventions about how society’s resources are distributed, and that it is possible to benefit individually from such a convention without personally adhering to it: the classic game-theoretical puzzle of cooperation. In the next posts, I will extend that discussion to human capital and the question of whether the FIRE lifestyle constitutes hoarding in the sense I discussed earlier: withholding your resources when the social contract is premised on everyone contributing. If FIRE is a form of hoarding – of human capital – then it is not universalizable and probably not an ethically defensible life plan.