Meet ChatGPT - The Dementor for Books
ChatGPT is a dementor for books, both literally and metaphorically.
We all know the literal part. The training data, the copyright debates, the ingestion of entire libraries. I want to talk about the metaphorical aspect.
My argument: reading books with ChatGPT actually leaves you worse off. It sucks away the joy, the immersion, the feeling of a book, and leaves you with cold, dry facts. Let me convince you.
We've all used AI for summarizing text. It's really good at it. I see the "summarize" button on every second AI interface and I use it several times a day. Guilty as charged. Look, we're all short on time here. Gotta go doomscroll Insta after all.
But here's the thing I've started noticing: the books I've "read" via AI don't stick. I get the crux, the key takeaways, the framework. But when I try to reference them later, in conversation or in writing, there's this faint, hollow feeling. Like I was told about the book rather than having read it.
Last week, I was reading Robert Greene's Mastery. Good book. I'd gotten through the first chapter properly, the slow build where Greene constructs his entire case through stories of historical masters, their apprenticeships, their struggles, the texture of their journeys. Then I had to travel and left the book at home.
I figured, no problem. I'll use GPT's deep research to generate a comprehensive summary. I even got fancy with it: asked for more Indian examples, specific nuances, detailed breakdowns. It produced a solid 30-page document. I finished it during the flight. Felt productive. Felt like I'd "covered" the book.
Yesterday, I was talking to a friend about the book and I noticed something striking. The only chapter that actually stuck with me, that I could discuss with any depth or feeling, was the first one. The one I'd read at my own pace, the slow read. The one where Greene builds the case story by story, where you feel Darwin's obsessive curiosity, where you sense the weight of Mozart's decade of invisible work before his breakthrough.
The other chapters? I could recall the frameworks. "The three phases of mastery." "The mentor dynamic." Surface-level labels. But the stories, the characters, the emotional weight that makes you actually believe the argument? Gone. I'd consumed the information without ever encoding it.
I dug around, and it turns out there's real science behind what I was experiencing.
Episodic vs semantic memory. When I read the book properly, my brain stored it as episodic memory: scenes, characters, moments, a felt sense of the narrative unfolding. When I read the summary, I got semantic memory: facts, labels, categories. Episodic memories are dramatically stickier. I didn't remember the definition of perseverance. I remembered the story of the person who embodied it.
Emotional tagging. The amygdala flags emotionally charged experiences for stronger memory encoding. Greene's first chapter worked on me because I felt something. The frustration of Darwin's father dismissing his interests. The quiet defiance of Leonardo choosing observation over doctrine. A summary compresses away exactly the emotional texture that would have made it memorable.
Elaborative encoding. The more connections your brain makes during intake (imagery, personal associations, emotional reactions), the stronger the recall. Reading slowly forces this. Your mind wanders, connects, disagrees, imagines. A 30-page AI summary, no matter how detailed, is optimized for information density. Which is precisely what kills elaboration.
This is also why certain smells trigger vivid memories, why places bring back nostalgia, why a song can transport you to a specific evening from years ago. Multi-sensory, emotionally rich experiences create dense neural connections. Flat text summaries create almost none.
The closest analogy I can think of: it's like swapping a trip to the movies for the film's Wikipedia page. You get the plot. You know who dies. You know the twist. But that in no way replaces sitting in IMAX, feeling the score swell, watching the actor's face in that one scene. You don't get the experience. And without the experience, you don't get the memory.
Not every movie deserves that kind of time or money. And not every book deserves a full read. But you can't substitute the ones that do and pretend it's the same thing.
People have been reading CliffsNotes for decades. What's different now?
Zero friction. CliffsNotes still required you to read 30 pages of condensed text. You had to go buy the booklet. There was a speed bump. AI summaries require one click. You can "read" five books in an afternoon and retain nothing from any of them. The effort barrier that used to protect us from over-summarizing is gone.
The illusion of depth. When I generated that 30-page Mastery document, it felt substantive. It had detail, structure, examples. It felt like I was engaging deeply. But I wasn't. I was consuming pre-digested information at high speed. The illusion of depth without the reality of encoding.
Personalization makes it worse. I asked for Indian examples, specific angles, custom framing. That made it feel even more like "my" reading. But all it did was make the summary more palatable to consume quickly. It didn't make me think more slowly. It made me think less.
Here's what I keep coming back to: if we know summaries are worse for retention, worse for understanding, worse for the ability to discuss and build on ideas, why do we keep choosing them?
Same reason we doomscroll instead of reading. Same reason we watch reels instead of films. The path of least resistance is always the summary, the shortcut, the compressed version. And AI has made that path so frictionless that choosing the longer road now requires genuine, conscious effort.
I don't think the answer is "never summarize anything." Summaries are legitimately useful for triage: deciding which books deserve your full attention. That's a valid use case.
But I think we need to be more conscious about when we reach for the summarize button. A few things I'm trying:
- Set aside time for long reads. Actual blocks of time where the goal is immersion, not information extraction. No AI in the loop.
- Stop treating "I've read the summary" as "I've read the book." They're fundamentally different experiences that produce fundamentally different levels of understanding.
- Accept that not everything needs to be consumed instantly. A book that takes you three weeks to finish will live in your head for years. A summary you finish in twenty minutes will evaporate by dinner.
The dementor doesn't force the kiss on you. You choose it every time you hit that summarize button. The facts remain, but the soul of the book, the part that actually changes how you think, is gone.
Books may survive the AI era. But only if we let ourselves actually read them.