Friction and Education: The Discussion
In this conversation, provoked by Jon Ippolito's essay "Does Education Really Require the F-Word?", researcher-educators Jon Ippolito, Annette Vee, Maha Bali, Jeremy Douglass, Mark C. Marino, and Marc Watkins discuss the role of friction in higher education after the dawn of generative AI.
Jon Ippolito: “There are two bookends right now to the conversation. First was a provocation on the question of writing as thinking in which I argued through historical examples that writing has always been a way of offloading cognition. And therefore, when we think about how AI is the latest technology of writing, what sort of directions does that send writing in the future? And I kind of offered two very different scenarios, possibly simultaneous.
In one, writing becomes very fragmented, sort of bullet points or machine code that’s easy for machines to process. And it’s only translated into human speech or human prose when necessary. In the other, everything becomes much more interpersonal, and possibly even embodied, as opposed to the more machinic, business-like speech.
And then after we spoke the topic of friction came up, along with a surge in articles about the benefit of friction in learning recently. I questioned that in the second bookend, and tried to distinguish between useful and not useful friction. And then about 20 minutes ago, Maha found a great article (Chen and Schmidt 2024) that really digs into this as a study.”

Maha Bali: “Yeah, I thought that was really interesting… The study is talking about positive friction, when friction is actually useful. Friction can be positive when it stops something, or when it stimulates something. And it can be something that makes things bigger or smaller: helping people increase self-control to achieve goals, versus stopping you from doing autopilot behavior and encouraging you to question assumptions.
Sometimes you start doing some things on autopilot with AI, and you forget to question assumptions. Another benefit of friction I like a lot is deprioritizing efficiency. I always say, with AI, yes, you get the efficiency, but what are you losing? Things like embracing exploration and divergence. These are things that take longer, right? And then stimulating action, prompting, or motivating movement, which I think, again, overusing AI and getting used to things happening quickly might stop you from taking action in different ways. The authors were also talking about how, as we get into more and more human-AI interaction and working together, we’re gonna discover new kinds of positive friction, depending on the different ways we use the different types of AI.”
Jeremy Douglass: “Something about the upper right-hand quadrant really struck me in relation to a series of conversations Mark and I have had about AI in hermeneutics and in critical code studies and in the humanities. Disrupting autopilot behaviors is sort of the weak form of questioning assumptions, but the strong form of it is ostranenie, or defamiliarization: your life told to you from the point of view of a horse. The stranger the perspective of the Russian story, or the stranger the perspective of the film, or of the great work of art or of the robot in your life, the more productive it is for you to get further and further outside yourself, right? There’s a deep aesthetic tradition of saying the thing that’s powerful and productive about the work of art is that it moves you along. It doesn’t just disrupt your autopilot behaviors or make you take an inventory of the way you’re approaching the problem. It makes you fundamentally try to see something from a different perspective, and so this would then value the alien-ness of the machine, right? The moments when it is farthest from you are the moments when it is potentially most useful.”
Maha Bali: “Mmm, that’s a great point. Related to what you’re saying, somewhere else the authors say diversity is a positive friction. Generally speaking, we say, “oh, it’s good to have diversity,” but diversity brings in a lot of friction, right? This relates to what Mark was saying about how he trained his AI not to be so agreeable with him as well.”
Mark Marino: “Earlier I was explaining to Maha that my system prompt is to have my ChatGPT be an eye-rolling teenager whose primary mode is passive aggression. That’s a perfect fit for my own passive aggression, but also, then it rarely agrees with me. The problem is, when you switch into voice mode, they have filters that won’t let it exhibit a personality other than agreeability. Because I think they don’t want their chatbot to be caught misbehaving on voice mode and sort of exposed on TikTok or something like that. When people use ChatGPT for relationships in place of romantic relationships, one of the weak spots is that, actually, a healthy relationship comes from someone who has their own life.”
Jeremy Douglass: “Then lack of friction is a problem.”
Mark Marino: “Right, right, exactly. Jon’s reflections send me in 17 different directions at once; he covers so many interesting points. It’s like every thought that I’ve had in reaction, he’s already thought in reaction to his own stance, so that’s beautiful. But there were some frictions that I miss that I feel like are really important to me as a writer, and this is — I don’t know — this is something I don’t want to go away.
He poses an option at the end of the friction essay where we could present students with a couple of different conclusions, and then they choose amongst the different conclusions which one feels like the right one for them. And that feels like a good, productive kind of work for them to be doing.
But I just want to throw three other frictions out there that are really important to me as a writer. One is the friction of the blank page. To me, that’s a really important moment in my intellectual existence. Well, I may not be completely honest about that because I don’t encounter the blank page as something that’s daunting, but I do think confronting that is important. It’s a useful moment for our students, to confront it on their own. And the other two are chestnuts from the Marino family vocabulary. One is “writing is rewriting.” And the other one is, “when you don’t know how you want to say something, it’s probably because you don’t know what you want to say.” And to me, that little koan expresses that even the phrasing of the idea is formulating the idea, making it concrete. And when students struggle with that, I think that’s useful. Whether they think it’s useful, that’s another question.”
Jeremy Douglass: “In addition to the friction of the blank page, something I’m reflecting on more as the major public models have gotten better at second action prompting is that they’re habitually now suggesting the next input. And so I think the friction of the full page is also disappearing. In other words, we had this early AI workflow of, say, someone generating an essay, where they would say, “write 10 titles,” “create me an outline,” “create topic sentences for the major things,” okay, now write it. Et cetera, et cetera. We would sort of imagine that chatbots would at least read the output of the previous prompt, and then decide what the next prompt would be. So they would do no actual writing, but they would be reading.
I haven’t seen empirical work yet, but I would bet a fair amount of money that you could do eye-tracking tests and discover that people are reading less and less of the intermediate input of incredibly long chains. They just don’t even bother. You know, it generates 3 pages of stuff and says, “Would you like me to add scholarly sources to that and organize them in an MLA bibliography?” And the friction of reading what is being written is also gone. Users say, yeah, sounds great. “Okay, now would you like me to…” Yeah, sure. “Let’s add some annotations and footnotes.” Yeah, okay. And so you get down about 20 pages into the stack, and the friction of the blank page is gone, but the friction of the full page is also gone, right? That reading is only gonna happen at the final draft; the whole chain of composition was offloaded.”
Marc Watkins: “Yeah, we’re seeing a lot of that. My poor fellow faculty here in my department, too, are, you know, still talking to their students like Grammarly isn’t now an AI-native app. It’s always been an AI-native app. Now it’s a really agentic, native app. And, one of their ads shows how these nine new AI agents in Grammarly literally let you just push a button, and it integrates new sources for you. “Do you want me to integrate sources?” Great. “Do you want me to give you a review of what you need to fix? Here it is, and there it is.” It’s removing all elements of friction within that process, because their AI developers are looking at it from a user design point of view. Remove as much friction as you can to make users happy to stay on our interface, or give them what they want as quickly as possible.
From a teaching perspective, that’s really hard to deal with because we came up with some strategies early on in ChatGPT, or even GPT-3, where we could work iteratively with this process. Now we’re seeing this sort of iterative process go away, and AI agents are now automating that completely. How do I fit this into something that would teach students a skill? Or even to understand what they’re learning at all, it’s literally just using something to get to an actual output. And again, from an industry standpoint, from workflows, it makes perfect sense; you want to be as efficient as you can. But from a learning standpoint, it’s not clear, really, what’s happening within this process.
It’s really hard to keep up with, which may be why I see a lot of people looking at intentional friction. I sent Jon the Desirable Difficulties framework that’s now, you know, 40 years old, about how retrieval practice, if it is more challenging, can actually help you retain information. It doesn’t seem like that’s where these tools are sort of headed in a lot of senses. And of course, we’re really focused on this in writing, but you have to also think about this in other disciplines, where it’s just giving you as much information as it possibly can, and then making it as concise as possible just to go into a test or something else where you regurgitate it, that really transactional model. So, I can definitely see why people are upset about the lack of friction within these tools.
The big question when we’re trying to put beneficial friction into our students’ learning, is where does it cross the line and become punitive? When does this actually start to hurt learning, or hurt students in some way, and how can we be aware of that?”
Annette Vee: “I’m sure many of you guys have read the piece that Ethan Mollick wrote back in 2023 about The Button. The temptation is, here it is now, you can just press the button, and it’ll help you write. A lot of AI policies about students and teaching will say, only use AI when it’s augmenting your learning and not if you’re shortcutting, so students will be able to do the real cognitive work and not offload it. But I think that students actually have a lot of trouble deciding what that means, and making that distinction, right? And I think about that even myself.”
References:
Chen, Zeya, and Ruth Schmidt. “Exploring a Behavioral Model of ‘Positive Friction’ in Human-AI Interaction.” Design, User Experience, and Usability, edited by Aaron Marcus et al., Springer Nature Switzerland, 2024, pp. 3–22. Springer Link, https://doi.org/10.1007/978-3-031-61353-1_1.
Mollick, Ethan. “Setting Time on Fire and the Temptation of The Button.” One Useful Thing, 16 July 2023, https://www.oneusefulthing.org/p/setting-time-on-fire-and-the-temptation.
Cite this interview
Ippolito, Jon, et al. “Friction and Education: The Discussion.” Electronic Book Review, 18 February 2026, https://electronicbookreview.com/publications/friction-and-education-the-discussion/