Writing As Thinking—By Proxy
In this provocation, Jon Ippolito questions what human capabilities AI extends and what capabilities it removes. In doing so, he charts the evolution of human writing processes alongside technology while speculating on what future human writing practices will look like.
Will “writing as thinking” survive the AI age?
“I write because I don’t know what I think until I read what I say,” said Flannery O’Connor. The sentiment has been echoed by authors throughout the late twentieth century, from E. M. Forster to Joan Didion to David McCullough. And today, many writing teachers I respect are waving the writing-is-thinking banner in response to the increasing temptation to outsource classroom writing to chatbots¹ (Jong; Lustbader; Mintz; Sava; Warner; Winter-Tear). Opposed to these writing defenders is a new breed of writing-with-AI advocates, who question the link between writing and thought, or at least its uniqueness² (Dasey; Pryor).
Who’s right? I believe there’s no question that the typical ChatGPT user is offloading cognition from her own brain to a bunch of matrix multiplications in a data center. But I also believe writing has always been a technology for offloading some parts of cognition to concentrate on others. As Marshall McLuhan argued, every new technology extends some human capability while simultaneously amputating another. If we accept this thesis, the question becomes not whether AI disables thinking, but which aspects it amputates and which it extends.
Past writing paradigms as cognitive offloaders
The alphabet
Let’s examine the history of writing as a story of technologies designed to offload cognition, with concomitant gains and losses. The invention of the alphabet obviated the rhetorical devices previously necessary for protracted oratory, from Simonides of Ceos’ memory palaces to Homer’s re-use of stock phrases to sustain narrative momentum (“Son of Laertes and seed of Zeus/Resourceful Odysseus”).
Since the alphabet’s arrival, however, purists of oral culture have lamented what was lost in the transition to written culture, from Socrates’ famous concern that offloading memory would encourage forgetfulness³ to McLuhan’s argument that the linear focus of the alphabet in Euro-ethnic societies prevents readers from taking in a variety of information at once, as exemplified by talking drums and other indigenous media (McLuhan and Powers). Some Native cultures were impressed by writing’s knack for storing memories in sheets made from trees yet saw their rituals lapse as a consequence; others worried that written law might make the word more important than the man---and were Jesus to return today, he might agree.
The printing press
Bi Sheng and Gutenberg made the manual labor of copying manuscripts unnecessary and dramatically increased authors’ reach to far-off audiences. In exchange for removing careful penmanship as a barrier to literacy, however, print also eliminated the job of scribes and the aesthetics of illumination. Some calligraphers I have known protest that print drained letters of their personality compared to handwritten characters, which according to them carry a trace of their writer’s soul.
Digital text
The word processor offloaded the burden of storing and managing reams of paper, adding such capabilities as cut/paste and search/replace. Augmented by the Internet, digital text spanned the globe. As media historian Matthew Kirschenbaum argues, along with these new superpowers, the ability to track changes in a Word document or share them in a Google Doc resurrected some of the features of handwritten manuscripts that disappeared in typewritten texts, such as insights derived from marginalia by authors from the Talmud to Thomas Jefferson or the ability to style text by a choice of color or font.
Yet even this technical advancement came with downsides for organizing and communicating thought. I remember as an undergrad splaying a dozen pages from my essay drafts onto nearby tables, then cutting out passages and moving them to new positions. Scissors and tape may seem antiquated compared to Command-C and Command-V, but the ability to scan multiple pages at once is lost on a laptop screen. (Scissors and tape also enabled novel literary forms in the “cut-up technique” of Brion Gysin and William Burroughs.)
Some historians have even speculated that the invention of word processing led to an unbounded growth in the complexity of legal and tax codes, which was previously limited by the amount of printed paper that a human could reasonably carry.
Spell- and grammar-checkers
Outside of classroom martinets, I’m guessing there are few who miss life before the spellchecker, and many teachers who assign essays are probably grateful to read prose with fewer sentence fragments and misplaced modifiers. But while I didn’t enjoy being called up to the blackboard to diagram direct and indirect objects, it did instill a sense of the logic of grammar that I was able to apply in learning other languages from Italian to Japanese. For its part, English spelling was a catch-as-catch-can affair until it was codified in the last couple of centuries; nonetheless, learning to spell also opened the door to the history embedded in a language’s homonyms and etymologies. So offloading orthography also meant offloading a deeper grasp of linguistics.
What does AI offload?
So, if the history of writing is a history of offloading cognition, what do large language models offload? Depending on whom you talk to, authors and academics alternately praise or lament generative AI’s ability to:
- Brainstorm
- Research
- Outline
- Draft
- Polish
- Reflect
Which, if any, of these is best to unburden from young writers is an important question. However, here I want to sidestep these controversies to draw attention to whether AI enables offloading the very craft of prose---and whatever cognitive load that may require.
While a debate rages about whether text transformers can think for us (or themselves), no one doubts they are consummate language simulators, breezily generating fluent, well-organized paragraphs of perfect English better than 99% of our students. So maybe the most obvious cognitive burden offloaded by large language models is the mechanics of language---not just minutiae like subject-verb agreement, but bigger structures like the flow of a narrative, or the cross-examination of assumptions, or the representation of multiple points of view.
The future of personal communication
It might seem this is a natural stopping point for the technological mediation of human communication. Structuring an argument in articulate sentences and paragraphs seems fundamental to what makes Homo sapiens unique; if robots take this from us, what is left? Perhaps the highest value of writing, as suggested by the tip of Maslow’s pyramid and writing guru Peter Elbow, is the development of the self by finding our authentic voice.
But it’s possible to sing without words, and even the spoken word needs no punctuation or pages to profit from vocal inflection and body language. Since AI is currently incapable of enjoying or capturing bodily experience, it is notoriously bad at “reading the room.” So perhaps the rise of AI will see humans doing more of that.
How might this evolve? Well, when you think about it, the notion that writing should have voice at all is an unnatural graft from tongue/vocal cords/diaphragm onto letter/sentence/paragraph. Even Elbow admits, “When we see nothing but a text we don’t have literal spoken tone of voice, so it is harder, but people still can’t seem to resist doing it…a sincere voice just fits the conscious mind while authentic voice fits the whole self.”
In his search for authentic voice, Elbow notes that sign language has one up on writing according to neurologist Oliver Sacks:
“Signers tend to improvise, to play with signs, to bring all their humor, their imaginativeness, their personality, into their signing, so that signing is not just the manipulation of symbols according to grammatical rules [but] a voice given a special force, because it utters itself, so immediately, with the body. One can have or imagine disembodied speech, but one cannot have disembodied Sign. The body and soul of the signer, his [sic] unique human identity, are continually expressed in the act of signing.”
So perhaps the loss of authorial personality predates AI-generated prose; perhaps the slippage began the moment the alphabet separated the speaker from what is spoken, and the gap has been widening with every introduction of a new writing medium. I believe this in part explains calls for a return to handwritten in-class essays; I myself have known the tactile pleasures of dipping a quill into India ink and drawing it across parchment.
But for all that handwriting was touted as a lesson in hand-eye coordination, it was also a marker of class. I don’t think less of Darwin just because his penmanship was so illegible that he had to dictate his books to his family members to get them published. On the other hand, handwritten essays strike advocates of AI-enabled writing as reactionary, reinstating an unnecessary friction in communication that digital media had previously removed.
While my hands practically cramp up just thinking about the blue-book exams I took in college, the presence or absence of friction isn’t the reason I don’t think we should return to handwritten essays. Today’s media are more conversational and discursive than handwriting can ever be, and any future for personal communication that seeks to favor human connection should preserve that many-to-many dynamic.
Now, the “writing-as-thinking” perspective might lead us to wonder whether a future without long-form texts—whether they are replaced by sign language or Discord banter—would devolve into superficial chatter rather than finely chiseled argument. Personal speech seems especially dominated by phatics (“How are you?”), filler words (“um, like”), and disfluencies (pauses and stutters) that convey no intrinsic meaning.
Yet these seemingly unnecessary flourishes can lubricate communication in ways prose cannot, as anyone who’s ever been “flamed” by an innocent email will understand. In delicate situations, an efflorescence of non-semantic utterances can calm tense nerves or smooth over awkward moments. In others, embodied speech can be surprisingly efficient.
When I first met Stephen Hawking, I asked him what I thought was a hard question about black holes simply to prolong the exchange with a man who had been a childhood hero. Speaking through an interpreter who translated the sounds from his mouth---so labored they sounded like a dying cat---he replied in a sentence so concise it left me with no follow-up question. Years later, I saw him again, now using the cheek-activated text-to-speech device that let him select words one at a time. Yet again, the effort it took him to compose a sentence underscored how precision can arise from constraint: speaking required too much friction for him to indulge in the verbose explanations that other physics professors might have offered. If emoji had been popular then, I suspect he’d have used them to compress emotional tone as deftly as he condensed thought.
Writing instructors, and educators in general, often valorize friction in the learning process; indeed, studies support the claim that adding mental or physical obstacles can enhance memory recall (Bjork and Bjork). Hawking’s degenerative disease added more friction than most writers will ever experience; whether that helped or hurt his publication record is a matter of debate. Regardless, it may be possible that imposing well-chosen limitations on discursive platforms could allow future communication to be both dialogic and efficient. If so, informal conversations may become the dominant way humans relate as prose is outsourced to computational proxies.
The future of impersonal communication
Of course, we’re not all Stephen Hawking; there’s only so much the rest of us can keep in our heads without scribbling something down. So it’s tempting to presume thinkers will still need prose to convey complex insights and feelings. But it’s also conceivable that how we write in 2050 may not resemble today’s prose---any more than what we thumb-type on our iPhones resembles Zaner-Bloser penmanship from 1950.
Consider this tweet from OpenAI CEO Sam Altman on 2 March 2023:
“Something very strange about people writing bullet points, having ChatGPT expand it to a polite email, sending it, and the sender using ChatGPT to condense it into the key bullet points.”
Altman shrugs the scenario off, but I wonder: what is the point of the intermediate steps? In fields where facts outweigh emotional valence, would it not be more efficient for businesses, lawyers, scientists, perhaps even doctors to communicate with bullet points? Sure, it’s nice to get a personal email asking how July is going before making a business proposition, but for run-of-the-mill transactions such niceties can come off like the extra curlicues around a handwritten signature---ingratiating but unnecessary.
Of course, bullet points existed long before AI; they’re a product of the mid-20th century, when they became a staple of advertisers hoping to save money when paying typesetters by the letter. (“The 1955 Bel Air! * V8 engine * Power steering * Glide-Ride suspension”) A 21st-century version more fitting for AI might be a machinic language like XML or JSON, in which humans write out data encoded in a structured form. Recent research suggests that prompting large language models with HTML-like tags or JSON’s property-value pairs may result in better output than English prose by reducing ambiguity and improving parsing and reliability (Improving Agents).
This may not be surprising for fields like software engineering, where devs sometimes claim their job is more about specifying requirements than actual coding. And creators experimenting with media generation have reported more refined control over outcomes by writing, say, {media_type: video, duration_in_seconds: 15, subject: kayaker, mood: serene} than “Create a 15-second cinematic video of a lone kayaker paddling through a serene landscape.” Surprisingly, JSON prompting also seems better even in emotionally coded domains such as sentiment analysis (Niimi).
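The contrast between the two prompting styles can be made concrete. The sketch below builds both versions of the kayaker request in Python; the field names come from the example above, but everything else---the variable names, the serialization step---is illustrative scaffolding, not any particular model’s API.

```python
import json

# The same request, first as free-form prose...
prose_prompt = (
    "Create a 15-second cinematic video of a lone kayaker "
    "paddling through a serene landscape."
)

# ...and then as a structured spec. Every parameter is an explicit
# property-value pair, so nothing has to be inferred from syntax or tone.
structured_prompt = {
    "media_type": "video",
    "duration_in_seconds": 15,
    "subject": "kayaker",
    "mood": "serene",
}

# Serializing to JSON yields an unambiguous, machine-parseable request
# that a pipeline can validate (or round-trip) before acting on it.
payload = json.dumps(structured_prompt, indent=2)
print(payload)
```

One practical consequence of the structured form: a program can check that `duration_in_seconds` is a number before the request is ever sent, whereas the prose version leaves “15-second” buried in a string.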
Given this apparently improved control over outcomes, could structured syntax become the post-ChatGPT version of persuasive writing? I hate to think that authentic and impassioned voices won’t continue to sway human readers in the future---but at the same time it’s important to remember the extent to which humans are no longer our predominant audience. Even when we’re not conversing with a chatbot or automated customer service agent, whatever we type into Facebook or LinkedIn is first read by an algorithm, which decides what humans get to read it next. And with companies looking for employees who can squeeze valuable insights out of large language models, prompts can be like pickup lines for robots---essential to get right if you want the best outcome.
What communication genres would be best shared in bullet points or machinic syntax? In “Bullshit Writing,” writer and educator John Warner gives some examples:
“Emails, memos, progress reports, presentations, proposals, meeting minutes, marketing materials, white papers, anything that’s required to [be] written to satisfy a client/customer demand or managerial imperative could, in theory, be outsourced to a large language model. And much of this use would be undetected because in truth, often this writing is not actually ‘read.’ Skimmed, perhaps, but mostly, it is ignored.” (Carlson)
New media practitioner John Bell suggests a future in which “writers” may link these genres in a multimodal chain to produce actionable insights, a process he calls “English (De)composition.” Bell and I explored this possibility at the “Future of Writing” conference organized by Mark Marino and Maddox Pennington in 2023⁴, where we imagined a hypothetical scenario in which the Earth was contacted by three alien races and had to decide which would be the best ally. In our example, GPT-4 summarized a series of prose reports on the three candidates in bullet points; then translated the bullets into data (JSON); visualized the data in a chart; and finally derived a decision from the chart and expressed it in tactful letters of acceptance and rejection to each of the candidates. (Spoiler: ChatGPT picked the Vulcans over the Klingons.)⁵
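A toy sketch of the chain Bell describes might look like the following, with simple Python functions standing in for the LLM call at each stage (and the chart stage omitted). The candidate reports, the crude “diplomacy” heuristic, and the letter templates are all invented for illustration; the point is only the shape of the pipeline, prose to bullets to JSON to decision to letters.

```python
import json

# Invented prose reports on two of the candidates.
reports = {
    "Vulcans": "Logical, technologically advanced, and open to diplomacy.",
    "Klingons": "Fierce warriors with a strict honor code, volatile politics.",
}

def to_bullets(report):
    """Stage 1: condense a prose report into bullet points (here, by clause)."""
    return [clause.strip() for clause in report.rstrip(".").split(",")]

def to_json(candidates):
    """Stage 2: translate the bullets into structured data."""
    return json.dumps(
        {name: {"bullets": to_bullets(text)} for name, text in candidates.items()}
    )

def decide(data):
    """Stage 3: derive a decision, preferring any candidate whose report
    mentions diplomacy (a deliberately crude heuristic)."""
    parsed = json.loads(data)
    for name, info in parsed.items():
        if any("diplomacy" in bullet for bullet in info["bullets"]):
            return name
    return next(iter(parsed))

def letters(winner, candidates):
    """Stage 4: express the decision as tactful acceptance/rejection letters."""
    return {
        name: (f"Dear {name}, we are honored to accept your alliance."
               if name == winner
               else f"Dear {name}, we regret we cannot ally at this time.")
        for name in candidates
    }

winner = decide(to_json(reports))
print(letters(winner, reports))
```

Each stage consumes only the previous stage’s output, which is what makes the chain composable: any one link could be swapped for a model call without disturbing the others.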
Conclusion
Writing orthodoxy in the 20th century focused on term papers as a proxy for intellectual rigor. It seems unlikely that long-form text will continue to serve this role in the post-ChatGPT world. As I speculate here, we may see a bifurcation between discursive, perhaps even oral or embodied formats for personal communication and condensed, machine-friendly formats for impersonal communication. It remains to be seen whether large language models will disrupt society more than earlier linguistic innovations, but they are certainly not the first to offload human cognition.
References
Bjork, Elizabeth Ligon, and Robert A. Bjork. “Making Things Hard on Yourself, but in a Good Way: Creating Desirable Difficulties to Enhance Learning.” Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society, Worth Publishers, 2011, pp. 56–64.
Carlson, Scott. “The Edge: Writing in the Age of AI.” The Chronicle of Higher Education, 5 Feb. 2025, https://www.chronicle.com/newsletter/the-edge/2025-02-05.
Dasey, Tim. “Why ‘Writing Is Thinking’ Is a Flawed Argument.” LinkedIn, Oct. 2025, https://www.linkedin.com/posts/timdasey_one-of-the-favorite-sayings-in-the-literacy-activity-7372253848461123584-CQep.
Elbow, Peter. “A Collection of Quoted Passages About Voice.” Voice Archive, 25 Aug. 2017, https://peterelbow.com/archive/index.php/2017/08/25/a-collection-of-quoted-passages-about-voice/.
Ferreira, João Batalheiro. “Shaping Sentences, Shaping Thought.” The Important Work, 20 Mar. 2025, https://theimportantwork.substack.com/p/shaping-sentences-shaping-thought.
Hepler, Reed. “Why Writing Is Essential in Graduate Education.” LinkedIn, Oct. 2025, https://www.linkedin.com/posts/reed-hepler-024648137_i-agree-with-tim-on-many-things-so-i-hope-activity-7372302723771387904-NNWu.
Improving Agents. “Which Table Format Do LLMs Understand Best? (Results for 11 Formats).” Improving Agents Blog, 30 Sep. 2025, https://www.improvingagents.com/blog/best-input-data-format-for-llms/.
Jong, Wouter de. “The Prism through Which to See the World.” Drakenvlieg, 15 Jun. 2025, https://drakenvlieg.substack.com/p/the-prism-through-which-to-see-the.
Kirschenbaum, Matthew G. Track Changes: A Literary History of Word Processing. The Belknap Press of Harvard University Press, 2016.
Lackey, Ryan. “‘Writing as Thinking’ as Opposed to What?” Art of Writing, 15 Aug. 2024, https://artofwriting.berkeley.edu/writing/writing-as-thinking-as-opposed-to-what/.
Lustbader, Wendy. “Writing Is Thinking.” Psychology Today, 8 May 2025, https://www.psychologytoday.com/us/blog/life-gets-better/202505/writing-is-thinking.
McLuhan, Marshall, and Bruce R. Powers. The Global Village: Transformations in World Life and Media in the 21st Century. New ed., Oxford University Press, 1992.
Mintz, Steven. “Writing Is Thinking.” Inside Higher Ed, 2 Nov. 2021, https://www.insidehighered.com/blogs/higher-ed-gamma/writing-thinking.
Niimi, Junichiro. “Reference Points in LLM Sentiment Analysis: The Role of Structured Context.” arXiv:2508.11454, arXiv, 15 Aug. 2025, https://doi.org/10.48550/arXiv.2508.11454.
Plato. “Phaedrus.” The Internet Classics Archive, translated by Benjamin Jowett, ca. 360 BCE, https://classics.mit.edu/Plato/phaedrus.html.
Pryor, Adam. “How AI Evolved from Grammar Zealot to Digital Writing Assistant.” LinkedIn, Nov. 2025, https://www.linkedin.com/posts/adam-pryor_ai-technologyevolution-machinelearning-activity-7379242040196468736-DYoQ.
Sava, Ashley Amber. “Reply to ‘Why Writing Is Essential in Graduate Education.’” LinkedIn, Oct. 2025, https://www.linkedin.com/feed/update/urn:li:activity:7372302723771387904/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7372302723771387904%2C7372320820414578688)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7372320820414578688%2Curn%3Ali%3Aactivity%3A7372302723771387904).
Warner, John. “Writing Is Thinking.” The Biblioracle Recommends, 24 Sep. 2023, https://biblioracle.substack.com/p/writing-is-thinking.
Winter-Tear, Stuart. “Why Writing Is Not Just about Output, but Cognition.” LinkedIn, Nov. 2025, https://www.linkedin.com/posts/stuart-winter-tear_we-wont-lose-reading-we-might-lose-writing-activity-7377668960386846720-FO6J.
Footnotes
1. For a specific argument about AI degrading student voice, see João Batalheiro Ferreira, “Shaping Sentences, Shaping Thought.” ↩
2. For a caveat about the implied instrumentalism of writing-as-thinking regardless of the AI question, see Ryan Lackey’s “‘Writing as Thinking’ as Opposed to What?” ↩
3. “This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves.” (Plato) ↩
4. More can be learned about this event at the HASTAC Teaching and Learning blog. ↩
5. Data from this exploration can be found at “GPT4 Analyzes and Visualizes Reports” (Planetary Survey). ↩
Cite this essay
Ippolito, Jon. “Writing As Thinking—By Proxy.” Electronic Book Review, 18 February 2026, https://electronicbookreview.com/publications/writing-as-thinking-by-proxy/.