To write well with AI, you’ve got to understand Socrates. Paul Graham and Adam Grant argue that having AI write for you ruins your writing and your thinking.
Or so the story goes.
Now, honestly, I tend to agree, but I thought these otherwise smart people were making a couple of mistakes. First, they seemed to be criticizing first drafts. If you asked a person to write a poem in five minutes with vague instructions, unless they were a champion haiku composer like Lady Mariko from Shogun, it would probably be pretty bad. AI is best in conversation, reacting to feedback. Sure, the initial draft might be bad, but AI can revise, just like we can. Second, and more importantly, if AI shouldn't be doing the writing, it should probably be the critic. Even if it didn't have good taste, it could surely evaluate a specific piece, given sufficient prompt scaffolding. Right?
After completing a major portion of a draft of my in-progress novel, I decided to test my theory. I shared the first act with Claude (both 3 Sonnet and Opus) and boom! I got exactly what I hoped for: some expected constructive criticism along with glowing praise that my novel draft was amazing and unique.
In reality, it was not.
This was not a skill issue! My prompt was ostensibly well-crafted. I knew how to avoid exactly this problem, but I didn't want to. It was a temptation issue. I knew, deep down, that my novel was, in fact, overstuffed, weirdly paced, riddled with exposition dumps, and marred by half a dozen other rookie mistakes. Of course it was! It was a draft! But I crippled my AI critic so that I could get a morale boost.
The sycophantic critic is an under-appreciated and, to me, equally concerning risk of using AI when writing. Yes, using AI to write for you will erode your thinking and creativity, but so too, possibly, can writing for the AI. Sycophancy is a tempting default for AI. My AI critic told me what I wanted to hear about my writing rather than the truth about it. As the recent debacle with 4o showed, there is incentive to make AI products more sycophantic, not less. ProWritingAid now offers AI Beta Readers. Who doesn't want to hear their unedited draft is a masterwork and can ship straight to the Kindle Marketplace, those rejecting agents be damned?
There is also a strong temptation to outsource taste. I’m speaking for myself, but I can’t imagine I’m alone in wanting to know if something I’ve written is good. I want validation even though I know taste is protean.
But outsourcing taste to AI is, potentially, even worse than asking a kind friend. At least we know the friend is maybe bullshitting. As a critic, AI seems unbiased, but it is more like a strange fusion of a midwit and an alien. To wit, AI prefers AI writing. It has a taste all its own and might alter ours. Too much AI feedback is probably going to pull you in a weird direction.
Whither AI for Writers?
We can’t have the AI write for us, and us writing for it as critic either deludes us or warps us. How can the humanities recover AI as an ally in the creative process? This is part, I think, of a bigger, more stressful conversation about the purpose of education in a world where AI cheating is rampant, and where AI is getting better at teaching and writing fiction.
In Henry Oliver's piece on taste, he, citing Ethan Mollick, also argues that the humanities have to engage with AI. We can't ignore it. So what are we to do? Learn prompt engineering? No, no, no. We must go even deeper into the humanities. We need philosophy! We need Socrates.
Enter Agnes Callard.
In Open Socrates, Callard argues our best (or perhaps, most philosophical) thinking is done with others. Where this matters most is when we are thinking about our beliefs, that is, ideas core to our being. We are too close to our beliefs, so we need a trusted, kind, and truth-seeking collaborator to hold and represent the ideas we can’t. Will Storr’s The Science of Storytelling corroborates this, uh, story. Storr argues that narrative exists to help us escape the black box of our own skulls to better understand each other and how we change. Specifically how our beliefs and, therefore, ourselves change.
In both philosophical dialogue and narrative, there is a believer and a (friendly!) critic. In the dialogue, the critic is a co-thinker whose job is to ask questions, to push on weaknesses, and to do so in a way that lets the criticisms take root. Characters in great stories put us in the position of the critic, watching the beliefs of the character play out, allowing us to ask questions and scrutinize their choices. Fascinatingly, both Callard and every bit of creative writing advice I've ever read argue the same thing: for these to work, everyone involved must be seeking truth.
Writers and thinkers figured this out long ago. That feeling of being unable to see where you need help is more than a bit familiar. Creative writers of all stripes have writers' groups, first/beta readers, subreddits and Discord threads, and, of course, the machinery of publication that provides refining friction and feedback. Thinkers write, and therefore think, in public. This, in turn, generates responses on social media, in major publications, and in peer-reviewed journals. Both fiction and non-fiction writers have, over the ages, developed pretty robust systems to ensure their writing, and therefore their thinking, is good.
Despite everything I've said, I'm actually quite optimistic that AI can be great to write with. AI is weird, and learning to work with LLMs in general is very much a trial-and-error process. We've listed a bunch of things we tried (AI as writer, AI as critic, AI as cheater) that have resulted in, well, errors. So let's try something else.
Where I've found AI to be strongest is as a coach and editor. Not a teacher, per se, who 'knows' good or bad. That's the same trap as the critic. When I'm writing creatively, either for an essay like this or for something like a novel, I use AI as antagonistic support. Coaches and editors are ultimately thinking partners, whose job is not to do the work, but to help you and push you to do it better. And how do they do that? Often by asking questions, pointing out gaps in thinking, asking about apparent contradictions, and pushing for specificity.
Great coaches and editors are incredibly irritating because they do this well and are often right. These antagonistic supporters challenge us, and push us to get at a better, truer version of things.
The paragon of compassionate antagonists was, you guessed it, Socrates. Socrates was honest to a fault because, as Callard takes pains to point out, he loves and cares for those he talks to. His commitment to truth demonstrates that love. Socrates isn't cruel, or harsh, or combative. He often seems to be having a grand old time. The major AIs all try to be some version of 'helpful, honest, and harmless'. It's that 'helpful' bit that is the origin of a lot of sycophancy. To make AI a great co-thinker, you've got to remind it to be honest, and that failing to do so is harmful, because you won't get better as a creative.
Writing with a Compassionate Antagonist
This requires a lot of work, but once you get in the habit, it's hard to write any other way. To achieve this, you've got to follow a few key principles. I have not perfected this. It's still early days, and I'm sure a lot of this will be obviated by some new feature or model in the coming months, but in the spirit of thinking in public, here are my general strategies for writing with AI.
First, create at least one strong, clear coaching or mentoring persona. Coaches and mentors do not do the work for you. They can't run drills, they aren't in the arena. Their job is to see you as you cannot see yourself: strengths you underestimate, flaws you ignore, and errors in form you can't even see. I tend to use Claude, so I create this persona using a combination of a Project and a Writing Style. ChatGPT Projects and Custom GPTs both work for this, as do Gemini Gems. To do this well, think of your best coach or mentor and write out what made them great. You'll find yourself writing things like, "No bullshit. High standards. Pushed me to be great while showing they cared. Praise was hard to earn, but when earned, given with gusto."
Second, have long, iterative conversations that follow the ‘diverge-converge’ cycle of design thinking. For example, I’ll often start with a stream-of-consciousness idea dump, then ask the AI to ‘yes and’, running down various ideation paths. I’ll ask it to research and see which topics are well covered and where I might be doing something novel. I ask it to explore gaps or note what it sees as interesting. I riff as these replies inspire me. This is what I try to turn into a first draft.
Third, once you have something resembling a draft, then bring out the coach. The coach’s job is to call out strengths as much as weaknesses. It’s ok to ignore feedback. The AI coach, like a human coach, is on the sidelines. You are the one responsible for your performance, you’ve got to decide which drills to do and what advice to take. For this essay, I asked Claude to give me feedback as Agnes Callard, Tyler Cowen, and Henry Oliver. AI Henry and Agnes were most useful. Sorry Tyler!
Now, I am not sure if this method actually makes my writing better. To me, it seems like it does. But what is more important, to me, is that I feel like I’ve done right by 3 Quarks Daily despite not having much in the way of friction. I have a rare luxury to write whatever I want here with an extremely long editorial leash. You’ll have to be the judge as to whether my writing is any good, but I feel better that I am doing right by you, the reader, having run through this process.
This process likely works because it follows Rohit's guidance of treating LLMs like a person. In particular, the AI is acting in roles we are already comfortable with in the creative writing arena. It's somewhat strange for writers to have a co-writer; even teams of two, like The Expanse's James S. A. Corey (a pseudonym for two authors), Pratchett and Gaiman on Good Omens, or the alleged 'real' authors of My Brilliant Friend, are rare. Yet look at the acknowledgements of any book and a laundry list of names comes up. But you know who's never in the acknowledgements? Critics and co-writers. AI, like an agent or editor, should go in the acknowledgements, not on the cover or in a byline.
Now, a stranger question: might we end up in a scenario where AI fills not just one line in the acknowledgments, but nearly the whole thing? And might that be… good?
A Novelist’s Writers Room
Novelists, in the past, were stuck writing by themselves and, when they were lucky, having one or two confidants they could share with. It's no coincidence that the Inklings, the Bloomsbury Group, and other such crowds were so productive. Having people to collaborate with is what generates excellence. Could AI do that for novelists?
The problem is that AIs are not (yet) great writers. AI writing is often lifeless because, well, it is. Writing groups are only as strong as their participants. But you know what AI outputs people love? When AI pretends to be someone in particular, especially if that someone is you, or even itself. Perhaps the key, then, is personas. Maybe it is multiple antagonistic personas. Like a writers room.
No great work of literature has yet been crafted by a writers room, even though all great films, shows, video games, and plays are. Directors and showrunners manage tens or hundreds of individuals to get a film made. Some of that is physical labor, but a good deal of it is creative work. And it need not even be written: both The Matrix and Fury Road were more storyboarded than written as scripts. Suddenly novelists have access to tools (visualizations, diagrams, etc.) that would previously have been beyond their reach.
I’ve begun experimenting with creating this ‘writers room’ in an app I’m building. The app does two things. First, it hierarchically summarizes large creative works so that the AI has context without being overwhelmed. This is equivalent to how you and I have a ‘picture’ of a novel in our head, and use that context to focus on how to re-write a given chapter or scene. Second, it manages personas. Initially I made a writing coach, a developmental editor, and a marketer. The rationale for that is above.
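For the curious, the summarization half of that idea can be sketched in a few lines. This is a minimal, hypothetical Python sketch, not the app's actual code: the `summarize` function stands in for whatever LLM call does the condensing, and it is passed in as a parameter so the structure is visible without any API.

```python
# Minimal sketch of hierarchical summarization for a long manuscript.
# `summarize` is a stand-in for an LLM call; injecting it keeps the
# recursive structure testable without any model or network access.

def hierarchical_summary(chapters, summarize, group_size=3):
    """Condense chapter texts into one top-level 'picture' of the book.

    Each chapter is summarized, then the summaries are grouped and
    re-summarized, level by level, until a single summary remains.
    """
    level = [summarize(ch) for ch in chapters]
    while len(level) > 1:
        groups = [level[i:i + group_size]
                  for i in range(0, len(level), group_size)]
        level = [summarize(" ".join(g)) for g in groups]
    return level[0]
```

Because each level of summaries stays small enough to fit in the model's context window, the AI can hold a 'picture' of the whole novel while you focus on rewriting a single scene.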
What I have recently started exploring, however, is crafting competing versions of myself. These me’s have my core motivations, values, and styles, but dialed to very different settings to create (as characters often are) heightened versions of me. The result is that, no matter the mood I’m in, or what biases I bring, these other personas are able to brainstorm a scene or talk through a character’s arc in a way that is both expansive and, strangely, still true to me.
And maybe, weirdly, this is how to get AI to be great at writing. By accelerating elements of myself, my writing will be more idiosyncratic, distinct, and interesting. These personas may take on artistic roles as well, just as a film has a cinematographer and a set designer. Great directors tend to work with those who best help bring their vision to life. Maybe soon writers will be able to have their own 'team', letting great plotters like the renowned Dan Brown craft better prose and dialogue.
The friendly friction among AI self-personas, writing cinematographers and set designers, and coaches, editors, and marketers against the writer is where, I think, the future of writing lies. When it comes to AI, if you're not fighting with it, you're not writing with it.