Writing As Thinking: The Discussion

Wednesday, February 18th 2026

Using Jon Ippolito's essay "Writing As Thinking—By Proxy" as a vehicle for discussion, researcher-educators Anna Mills, Mark C. Marino, Maha Bali, Jeremy Douglass, Annette Vee, Marc Watkins, and Ippolito himself discuss the impact of AI's emergence in higher education and the many strategies they're employing to foster healthy writing practice with and without AI.

Anna Mills: I love it as a provocation, and I kind of want to respond in that way! I think, observationally, it makes sense that we may need to be more flexible. There have always been other ways to improve our thinking and to extend our thinking besides writing. But even with those other ways, the question of transparency comes up. I mean, you can automate any of them with AI, pretty much, except maybe in-person speech. That means you still need transparency, and I guess my provocation is that I still want my son to write essays in high school and college, because I think he needs that developmentally. I think that bullet points don’t do the same thing for getting us to make connections between ideas. So, I want him to still do that.

Fundamentally, it works as a thinking practice. In some ways, I think it’s even more valuable in this short attention span culture to protect a space for people to do that. At some point in their lives, they need to build up that capacity, even if they’re not going to do it in the workplace very often. So, I guess that’s my counterargument: why would we abandon it if it works? If the reason is because we’re concerned about misuse and detection, then I feel like that comes up with other forms of assessment, too. I just want to protect these practices that we know work to help build thinking. Like, why would we give them up? But I wonder if I’m fully understanding your argument, Jon.

Jon Ippolito: I appreciate your taking this on with such rigor. What I heard at the end was that you were wondering, is there a reason we would give up the practice of long-form writing if it works? We’ve demonstrated the value of it in cultivating thinking and structuring thought. That said, I wasn’t sure whether you’re allowing AI to be part of that process or not.

Anna Mills: What I appreciate about your piece is that it pokes us to think more flexibly about the ways that AI might encourage cognitive offloading, but then, depending on how you use it, it might also stimulate thinking. And there are a million hybrid ways, and we shouldn’t just assume that AI is anti-thinking. I do completely agree with that. I’ve been inviting students to engage with AI feedback to stimulate their writing and thinking process for a couple of years now. But I still feel a more conservative impulse to say: okay, when students are starting out, they do need some experience of writing on their own, of developing their voice over time, and of getting through the friction…

So, I’m not inviting them to use AI in a hybrid way for their writing. I’m only inviting them to use it for feedback at this point, because sometimes when it gets too complicated and there are too many options, that’s also very overwhelming for students and confusing, and maybe not what they need when they’re starting college. I just feel like we have these practices that are so powerful. And I feel this impulse to protect them, to make sure that students still get those experiences. And yes, the other hybrid stuff can come too, and it will, and it already is, because that’s our world.

Maha Bali: I agree with you, Anna, developmentally speaking, right? There’s a very big difference between us using AI for something at this point, where we’ve already developed our thinking and writing and all of that, and people who are still learning how to do this. So I feel very strongly about very young kids. I’m very worried about making sure that, even if they use AI for their own thing, there’s something that should happen in education so that, in the future, they can deal with AI in a critical way.

So while I agree with the need to protect it, I’m concerned about people who are less literate about AI than Anna, who want to protect it but don’t know how… Someone was saying at the beginning that we’re past the stage of saying, “I’m sure students can’t do this assignment with AI.” Probably a lot of us have found that even more assignments than we imagined could plausibly be done with AI. Plausibly to students, anyway. Usually not so much to us, but to students. So, the point is, do you then ask them to just do pen-and-paper writing in class? Do you know what I mean?

There are some students who, if you tell them, “Let’s not use AI for this because you’re going to learn,” will do that. Some would never use it anyway. And then there are some in the middle whom we need to help not use it, by making it easier to not use it, or harder to use it in that context, you know what I’m saying? It’s not enough to believe that we need to protect them. Beyond this room, there are a lot of people in the world who believe we need to protect them, but I don’t think everyone is taking the precautions to help students not use AI when it’s going to be harmful or it’s going to interrupt their learning. For now, anyway.

Jeremy Douglass: When I think about the term “offloading,” and I also sometimes think about the term “delegation,” and the question often sort of is to what end? On the one hand, it could be this kind of higher-order orchestration, where we’re just sort of moving up the chain, and we say, “Ah, excellent, now the fact that mere composition or grammar or spelling or penmanship is off the table, we can sort of move up the intellectual hierarchy and spend more of our time on higher-order processes.”

But there’s the sort of exercise metaphor, which says now that we have cars and e-bikes and so on, it doesn’t change the fact that to be embodied and capable in critical ways, sometimes you need to exercise your muscles, right? And if the world is affording you less natural or less needful ways of doing that, then you need to create those needful ways for there to be healthy people, right? Because, if I’m following this earlier point, just because we now have a world where we could be brains in jars doesn’t necessarily mean it’s good for us to be brains in jars, right?

And that, I think, sometimes is a pitch to the students themselves. People say, we didn’t used to have to sell children on exercise, they just got it, right? But now we have to actually convince them that it is good for them, because the world isn’t built in such a way that it just happens on its own. That’s a set of values and practices you build up around people so that they then have the capacity to not delegate their thought, because otherwise they might lose that capacity, and then they can no longer choose whether to delegate or not. They’re just not capable of doing it on their own.

Mark Marino: For some reason, I found myself reaching for analogies after reading Jon’s piece.

I was thinking about programming, and how people who are familiar with the history of programming talk about a time when programmers created algorithms, and other people were in charge of writing the code part of it. So programming wasn’t always writing individual lines of code. And we may be headed in that direction with vibe coding and maybe with writing.

But maybe that does not quite line up. Maybe it’s more like when we buy software or we use software other people made, that might be the equivalent of the output from an AI, right? That somebody else has done all the work for us, and we’re just gonna use the thing that it created and call it ours because it does what we want it to do. Not too far from wanting to write a book but buying one instead.

And then I think about these other people, like Friedrich Kittler, who felt like he wouldn’t understand the way computer vision worked, unless he wrote his own ray tracer. By hand.  And did matrix operations, transformations by hand, in assembly language. That might feel too extreme to us today, but I kind of agree with that suspicion that knowledge comes through doing processes we could offload to machines. Programming and writing might seem very different, but I consider them both forms of thinking. But again, our notion of what programming is (designing algorithms vs. coding) has shifted over the years, and maybe our sense of writing will shift as well. 

I know people say AI might help us focus on the content more than the syntax. But as a writer, there’s a part of me that, yes, does believe that writing is more than syntax but also that how you say something is the idea itself. So then, that made me leap over to cooking analogies.

And I was thinking about how much canned food we buy, how much pre-processed food we buy, how many TV dinners, and how most people don’t know how to cook. The other day, I learned how to make tahdig from a friend of Persian descent, so now I can make it myself. And that process was very different from just purchasing food. I learned something; I was able to participate in a culture, in a passing on of knowledge, that became a new practice of mine. Very different than if I had just ordered tahdig at a restaurant.

I’ve been trying this “analog sandwich” in my class, where we started with AI augmentation, but now we’re in the unplugged version, and students are handwriting their essays in class in composition notebooks and revising them in notebooks. An interesting thing has emerged in this tiny little sample of about 40 students: the students are not reporting writer’s block!

So, whatever it is that they called writer’s block in the past, which might have been procrastination or distraction, or whatever — here, I see students stopping to think, and then they continue writing. And so, again, it does feel like I’m making them learn how to make their own rope, do their own matrix operations, cook their own tahdig, but at the same time, they seem like their relationship to the thing that they’re making has that authentic quality that gives them much more control over it in a way that maybe otherwise they’re alienated from it.

Annette Vee: Related to unplugging, I’m obsessed with the idea of this “monk class” at UPenn.

Mark Marino: Oh, right, you mentioned that. Sounds incredible.  

Marc Watkins: I really love that analog sandwich idea, and I value those experiments you’re doing with an analog process alongside a new one. I think that’s something it’s reasonable for us to start asking a lot of our colleagues to explore.

The thing that’s at the top of my mind every time we talk about AI is that this has happened so rapidly, so quickly, that there’s been very little time to process these changes. I mean, this is just the second full academic year we’ve had generative AI since ChatGPT. A lot of us were playing around with the tools before ChatGPT was even launched.

But for most of the general public, this is just now the second full year. We’ll be almost three years in soon, but it takes a lot to change existing habits. And so I’m really conscious of this when I talk with my colleagues and other faculty: it’s going to take time to start thinking about how we are using these tools.

What’s appropriate, what’s helpful, what’s not.

And these types of experiments, and even Jon’s provocation, are really good for starting to think about these things and having these conversations. Having the capacity to do so, though, is the one major thing that I’m really worried about, because AI development is just not slowing down. I mean, now we’ve got browser-based stuff coming from Perplexity; you log into something and find an ad from Perplexity coming at you in any way, shape, or form. I just think the changes have been so non-stop these past two and a half, almost three years now, that it’s been hard to process.

And so, we can all give ourselves a little bit of time, a little bit of grace, to start thinking about what this means, and to make the sorts of changes that are hopefully sustainable for us, whether that’s something like Mark’s analog sandwich approach.

Or, longer term, thinking about how vibe coding works in these types of situations might be something to explore.

Anna Mills: I love that, Marc. Giving ourselves the grace to explore and slow down a bit.

Jeremy Douglass: It’s very much the case that for the students as well, this has all happened very, very fast, right? So not just from a teaching perspective: I know many of my students experienced or expressed versions of this kind of bifurcation, where they approach that terrible moment when the paper is due in half an hour, they’ve made a series of poor life choices, and there’s a big button they could push, that they know they should not push, but if they did, maybe it could just get them that B-, and they could pretend none of this ever happened, right? There’s this terrible moment where the infrastructure is building that big button you could push, where you’re not just offloading, you’re fully delegating writing, reading, and thinking to an automated language process, right? And they’re like, I shouldn’t push it, it’s bad for me, but mistakes were made, I stayed out too late last night, right?

It’s come down to this: I both feel strongly that I should not push the button and think that maybe just this one time I will, right? And I really feel sympathy for that. But I also hear them often saying: embodied knowledge, authenticity, writing and reading and speaking and thinking and handwriting, all of this stuff, this ancient humanistic idea of me as a thinking animal, that gives me value. And these systems, they’re the enemy; they’re part of labor markets trying to replace me and destroy the economy that I hoped to join.

So, I’m both maybe using these things, and sometimes regretfully, and also, I hate them.

Right? And not all of my students, by any means, but many of them are in this weird space where they view AI with incredibly deep distrust, and they kind of come to humanities classes thinking: can you tell me about a world where I have value? Right? And so, strangely, the devaluing of core skills that used to be things only people could do kind of raises the stakes for authentic knowledge production. Right? Like, tell me again, what is the thing that only I can do that can’t be done for 7 cents worth of electricity? Please, I want you to tell me.

Annette Vee: I’ll link in the chat to a study that I did at Pitt, with a lot of faculty involved. We talked to students about their choices in using AI, and they have a lot of negative feelings about it. As we think about structuring the environment in which they’re learning and helping them make good choices, I think it’s crucial to pay attention to what they’re thinking.

Also, there’s another link that I’ll put in there from Boston College. They ran dialogues between faculty and students, and they came up with very similar things to what Pitt students were experiencing.

And a lot of it is they don’t want to use it; they know it’s a bad thing in this particular context. Or they know it can be good, and they describe good ways, too. But that big button to use AI is right there: they make that choice kind of under duress, and they can justify it away. Like, I didn’t necessarily want to do it.

Mark, I would love to hear more about this analog sandwich experiment, because I do think that a lot of it is that they’re not used to struggling, and to understanding that education actually involves a lot of friction and struggle, right? And so, if they don’t have ideas as soon as they sit down and try to think about writing a paper, there’s not something wrong with them. That’s actually part of writing: how do I begin, and in what order? That’s actually difficult, right? And working through that difficulty is an essential part of writing and thinking.

Mark Marino: So, in his opening essay, Jon gave us that breakdown of all the different aspects of the writing process, from brainstorming to revising, and I do find my colleagues are very quick to say things like, “Well, I let my students use AI, but just for the brainstorming part of it.” And then I think, oh no! That’s the part I don’t want them to use it for.

I should mention the analog sandwich has an awful lot of scaffolding. The paper they’re writing in class is an APA-style paper. They’ve previously done online research before we entered the unplugged section, and then they interview one another for the content of the paper. They conduct, sort of, one-on-one interviews. And then they write it up. So, it’s a paper that’s heavily structured. They don’t have to really struggle. So, in other words, we’ve given them a lot of scaffolding. 

The thing that has gone out the window, again, just by dint of writing things in class, is procrastination, along with a lot of the distraction that comes from composing on your “everything” machine. And then I ease the pressure. First of all, I tell them, “This is not a high-stakes in-class writing exam.” I’m there in the classroom to help them at any moment, and we’ve got time to write and revise in the classroom. It’s sort of an ongoing process. I literally don’t care about things like grammar and spelling, for reasons that Jon’s article makes really clear, and that I think we would all recognize.

And there’s also this thing that I’ve been noticing for a long time (I’ve been teaching writing for 30 years): there’s something about the ability to jump into a document at any point, at any place, which word processing makes possible, that leads to lots of incoherence in writing.

Whereas if you write continuously, by hand, my sense, from what I’ve glanced at over their shoulders, is that these students are writing more coherently than they do when they type, which is beautiful. I can’t wait to read those essays. That’ll be such a novel experience.

Jon Ippolito: Mark, have you ever had the experience of writing by hand (I’m old enough to have done plenty of that) where you come to a conclusion that’s different from your introduction?

Mark Marino: 100%! And to me, that is the experience of writing as thinking. So yeah, I guess I don’t know whether that puts me in Peter Elbow’s camp. I think it does.

Jon Ippolito: But it is not the experience of writing as thinking with digital text, right? Because then you’re going back and revising and so forth. What I’m trying to reinforce in my provocation is Alan Kay’s idea that technology is what was invented after we were born, the rest is just stuff. And we take a lot of that stuff for granted. We take for granted the idea that, for example, before AI, our writing was more embodied. Well, you’re never gonna get completely rid of the body, right? But embodied might mean your eyes are constantly 18 inches from a screen, and the only other part of your body that does any exercise is your fingertips.

Right? So, previous forms of writing required more of your body. In fact, early 20th century writing penmanship programs were regimens for how you’d move your arms and your shoulder to get the right kind of strokes on the page.

And obviously, before that, you had oratory, where people like me, who are Italian, had to use their hands to talk. I really think we should interrogate assumptions like “the way we used to do things was somehow more authentic,” when the previous generation of technologies, and the people who used them, might have seen the newer technologies as reducing the amount of embodiment, or friction. You’re going to have your hand cramp up while you write a blue book essay. That’s the kind of friction I don’t miss. Right?

Mark Marino: That reminds me. When my students were writing their essays during the middle part of the Analog Sandwich, we would make sure to take breaks to stretch our hands. I thought it was important to acknowledge our bodies and to try to ensure healthy writing practices.

Jon Ippolito: But that just reminds us: Every writing technology has some friction in it. Even AI has some friction in it.

So, the question is, are we valorizing some friction over others? Are we valorizing some embodiment over others? And the reason I’m thinking about going back to conversational communication as a potential future, if AI writing takes over, is that that was our original mode. I mean, probably some people hummed to themselves when they were Neanderthals, but most of it was oral: instantaneous, in the moment, a back-and-forth with someone. And that’s lost when we move to print and then later to digital text. So, I guess I’m trying to make sure that we don’t sit too comfortably in our own assumptions about what is authentic.

Annette Vee: Yes, and that reminds me of the 18th-century gesture manuals! Here’s a great piece on writing on computers and the body, by the way.

Mark Marino: And it’s not lost on me that we’re having a conversation that’s being transcribed by an AI at this moment.

Marc Watkins: I saw many of the “Valuing the past” assumptions in this recent piece that made the rounds by James Marriott on “The Dawn of the Post-Literate Society.” 

Jeremy Douglass: For me, not fetishizing a particular configuration of our relationship to writing and thought is incredibly important. I would just add that that does mean interrogating it. I think in some ways, writing here is often being taken as a proxy for thought.

And so then, that is to say, in the values-laden conversation about minds and experience and citizenship skills and being in the world and self-determination, it is not writing per se, but in fact thought, for which writing is only a proxy. And I think this is where the car and elevator and bicycle metaphors become compelling. The mistake would be to say, now that so many people have e-bikes, they’re biking so much more. But that’s not what the metaphor meant, right? Bicycling was being taken as a proxy for the working of the calf and the quad muscles, right? And so the question is: what are the areas of productive friction between the mind and the page? That could happen with a typewriter, that could happen with a clay stylus, and there might be forms of speed, or forms of revision, or forms of random access that increase productive forms of thought at that contact area between thinking and inscribing.

At the moment where it breaks free, this is where you get the studies that ask: can you identify sentences that you wrote out of a lineup? Can you explain your own argument after you’ve generated the text? And if people are conducting these studies and students who brainstorm with AI are bombing these tests, then we can truly say: writing is happening, but thinking is possibly not happening, or else thinking is happening in a way that escapes recall, and we don’t know how to account for that as thought.

Jon Ippolito: I think that’s a really great way to sum it up. I keep returning to the word proxy as an important counter-term to the idea that writing is inherently thinking, or that AI is inherently thinking or not. It’s more like: who is using what proxy, for what purpose?

Mark Marino: All right, very good. Well, I believe that wraps up the first of our recorded Transformers discussions!

Cite this interview

Mills, Anna, et al. "Writing As Thinking: The Discussion." Electronic Book Review, 18 February 2026, https://electronicbookreview.com/publications/writing-as-thinking-the-discussion/