Sawing off our own branch

An AI-bedazzled friend recently sparked some reflection on the nature of writing and the experience of creativity. “Sure,” he admitted, “ChatGPT doesn’t produce truly great writing by itself. But it can get you to a decent first draft.”

His suggestion nudged a memory of assembling a plastic Christmas tree. You set up the base, then the central shaft, then attach the branches. Finally, you decorate it with sparkly stuff. To my friend – who isn’t a writer – writing consists of constructing an outline, fleshing it out with sentences, and then festooning the result with colorful turns of phrase and metaphors.

Maybe some people write like that, and I don’t want to belittle anyone’s creative process. If that’s your approach, maybe ChatGPT saves you all but the last step, where you can still apply a creative human touch by removing its most garish flourishes and replacing them with something more tasteful.

My own experience of the creative writing process – fiction or non-fiction, short essay or lengthy tome – is very different from assembling a Christmas tree. It’s more like growing a real tree in the garden.

It seems to start with a central idea that, like a seed, contains within it a plan for the entire work, the intricate shapes of its furthest leaves just as much as its basic structure. As a writer, I feel like I’m tending the tree and optimizing its environment as the work unfolds the logic embedded in its kernel.

In great writing, there is no distinction between function and form, content and decoration. Elegant turns of phrase and evocative metaphors aren’t grafted onto a rough structure to grab the reader’s waning attention. They express the work’s central idea.

Once you have a central idea – the seed – typing the first words is the easy part. The hard part comes later, when your work has taken its basic shape. Whether it’s your own work, someone else’s, or ChatGPT’s, editing a poorly conceived first draft is much, much more time-consuming than starting afresh.

I don’t know what this means for how we will create, connect, and share big ideas with each other in the future, and whether we humans will do so at all. It’s tempting to wring every last drop out of the analogy: Plastic trees may be pretty and convenient, but they won’t feed you, provide tools, or even warm you.

But an analogy isn’t an argument. And at this stage, I’m no longer even sure what it is I should argue and if there’s room left for arguments. Maybe only for eulogies.

I see a chasm between organic writing (or music, or art) and bot-generated output. I see it in this experiment in the New York Times, where a short story writer faced off against a bot writer, each using the same “prompting.” Maybe that chasm will someday be bridged, but it hasn’t been yet, and it might never be, no matter how many data centers are devoted to the cause.

What daunts me is not the thought that the chasm might be bridged, but that our descendants might never develop the capacity to see it. My grandchildren, maybe even my children, may happen on this text and have no clue what I’m yammering on about, just as today we can’t even imagine, let alone regret, the lost spiritual dimensions and layered meanings of flint knapping.

Some say the Cistercian monastic order invented capitalism as a side-effect of their efforts to commune with God. By investing in labor-saving technology, they hoped to escape the drudgery of survival tasks and free up time to contemplate what is good, true, and beautiful. I worry that large language models are not another step to liberate us from drudgery, but instead a tool to “save time” by getting us to settle quickly for the mediocre, the reassuring, and the distracting.

2 thoughts on “Sawing off our own branch”

  1. Hi Nathan,

    let me suggest that your ruminathan fears the death of the artist (or intellectual) and the death of the art connoisseur by means of atrophy. I would not argue against such a fear.

    But let me tell you how a semiotician would come to this conclusion:

Using an incomplete 2×2 matrix derived from the binaries “similar vs. connected” and “factual vs. stated”, Charles Sanders Peirce came up with the triad of signs: icon (similar and factual), index (connected and factual), symbol (connected and stated). Roman Jakobson added a fourth, artifice, which covers the artwork. He saw it as a “stated similarity”.

    What happens when we use AI?

First, we create the sign; AI does not. AI only gives us a signifier to help us. We choose the signified and connect the two. Therefore, they are symbols or artifices.

Second, our (or the artist’s) theory of art will determine whether it is a symbol or an artifice. For sure, our willingness to do art will have a say in this. How we do it is the process of making art. I hope you don’t insist on the “old-fashioned” way, which may never change 😉 How we perceive the quality of the artwork/text needs to be answered by the educational levels of the recipients.

Therefore, the fear that ChatGPT might replace actual writers is the fear that AI might replace actual artists. QED

    What does it mean?

Mostly, I think, much the same as always in times of potent new technologies for mass production.

    Will fakes be possible? Yes.

    Will a lot of people not be able to distinguish fake from original? Yes.

    Will this make artists and intellectuals vanish? I hope not.

    Will they need to find a new way to create a market for themselves? Certainly.

Would fakes be a problem for the recipients? The recipients will need to figure this out and decide: would I rather choose atrophy, or rediscover where real artists and intellectuals display and publish their work?

    I hope, many choose wisely.

    Sebastian.

