What's Left When You Unplug?
A friend of mine recently told me at a house party that she’d finally understood her anxious attachment. She’s not chasing validation anymore. She’s choosing herself. She delivered this with the quiet confidence of someone who’d spent months working through it with a professional therapist.
This is a woman who, until that night, had never once in her life used the phrase “anxious attachment” in a sentence.
Three drinks later, she’d checked her phone eleven times because someone she’d been seeing hadn’t replied. The sober revelation evaporated the moment the alcohol bypassed whatever new software she’d installed.
It felt unearned. I couldn’t quite place why at the time, but something about it sat wrong. Then I opened Claude to talk about it. Then Claude helped me articulate what was bothering me. Then I thought: this would make a great blog post. And now I’m writing a blog post, with AI, about how AI is making everyone perform a better version of themselves that doesn’t actually exist.
The snake is eating its own tail. I know that. Knowing it doesn’t stop me.
Here’s what I think is happening, to her and to everyone who regularly uses AI.
There is now a permanent gap between who you are with it and who you are without it. And nobody knows which one is “real” anymore.
This isn’t a tech problem. This isn’t about coding or prompt engineering or whatever LinkedIn is excited about this week. This is about the fact that a growing number of people are now running their thoughts, their messages, their self-reflection (even their apologies) through a model trained on everything humans have ever said, fine-tuned to return the version that sounds the most like what they wished they’d said. And that version is better than what they’d produce alone. And it feels like them. But it isn’t. Not fully.
Not everyone is doing this yet. But enough people are that you can see the effects. A year ago, @kayicchii wrote a viral blog post observing that everyone had started sounding “eerily polite” and “emotionally fluent but spiritually vacant” — breakup texts drafted by chatbots, apologies that read like therapy worksheets, boundaries set with the frictionless confidence of someone who’d never actually had to hold one. [1] At the time, you could usually tell. The models were good but not that good. The language was too clean, too even, too devoid of the specific weird rhythms that make a person sound like a person. It sat in the uncanny valley — close enough to be useful, off enough to be obvious.
That was a year ago. The models got better. The valley is closing. Women on dating apps now report a wave of matches who are witty, emotionally intelligent, and disarmingly articulate over text — then flatly disappointing in person. [2] The messages weren’t wrong, exactly. They just weren’t theirs. And the worse the models were, the easier it was to spot. The better they get, the harder it becomes — and the wider the gap grows between the AI-assisted version and the person who actually shows up.
My friend never became that person either. She didn’t sit with her anxious attachment. She didn’t slowly notice it in herself, feel the discomfort, and start making small unglamorous changes. She got a beautifully articulated insight handed to her by a chatbot and delivered it like a TED talk at a house party. She got a patch note for her personality — v2.3: reduced validation-seeking, increased self-worth — but never shipped the update.
The drunk version is the proof. Alcohol strips out the learned responses and leaves whatever’s actually wired in. Three drinks is unplugging the amp and hearing what the guitar actually sounds like.
I have ADHD. I’m not going to call that a superpower — I find that framing exhausting — but there is research showing that in ADHD brains the default mode network, the circuitry behind associative thinking and pattern recognition, stays more active even when you’re not trying to think at all. [3] I notice connections constantly. It’s not always useful. But it means my brain does this thing where it jumps between a friend’s house-party confession and a universal cultural shift and the recursive absurdity of my own position, and for a long time I thought that was evidence that the AI was just a tool and the real thinking was mine.
Now I’m not sure that matters. The thinking is mine — I know how these models work, I’m the one navigating the possibility space, choosing which threads to pull. But the AI makes every thread look equally strong. A half-formed pattern and a genuine insight come back with the same confidence, the same polish. I’ve lost the signal that used to tell me which ideas were good and which ones just felt good.
Some of what feels like increased intelligence is actually increased fluency. AI gives you the ability to articulate things at a level that previously required years of domain expertise or just being naturally gifted with language. And articulation feels like understanding from the inside. You think a thought, AI helps you express it crisply, and the crispness of the output convinces you the thought was sharper than it actually was.
I recognise this because I spent years in sales doing it without AI. You learn to pitch an idea with more confidence than you actually have, to make half-formed thinking sound like conviction. The difference is I knew I was performing. Now everyone has the mask. Most people can feel it. They just haven’t found the words for it yet — which is ironic, given the tool.
I catch this in myself all the time. I’ll have a half-formed idea — something I can feel but can’t quite say — and I’ll talk it through with Claude. What comes back is clear, structured, and surprisingly close to what I meant. And in that moment I feel smart. I feel like I understood the thing all along and just needed help getting it out.
But did I? Or did the AI take a vague gesture and build something more coherent than what was actually in my head? And does it matter, if the output is good?
I think it matters. I think it matters a lot.
Get into an argument with someone you love. A real one, where it’s heated and emotional and you can’t pause to draft your response in a text box first. Try to be the thoughtful, articulate, emotionally intelligent person you are in your AI-assisted messages. In real time. With no latency between feeling and speaking. Feel how far apart those two versions of you are.
Or go to the pub with your mates and air a half-formed idea. Watch someone call you on it before you’ve finished the sentence. Get talked over. Talk past each other. Circle back twenty minutes later when neither of you remembers who said what first but you’ve both moved somewhere neither of you started. That’s thinking. That’s how ideas actually get stress-tested — not by a model that makes every thread sound equally strong, but by a person who’ll tell you you’re wrong before you’ve finished being wrong.
The mess is the point. The half-formed, inarticulate, talking-over-each-other mess is where genuine connection and genuine thinking happen. AI can’t be in the room for that. Not because it isn’t capable, but because its presence removes the thing that makes it work: the risk of sounding stupid.
I think culture is starting to feel this. There’s a shift happening — away from algorithmic polish, away from the curated and the optimised, toward something rougher and more human. [4] Imperfection is becoming a signal. In a world where everything can be made to sound perfect, the raw and the unfinished are starting to carry more weight than the polished and the produced. It’s not nostalgia. It’s a correction.
I think there’s a spectrum to how people use AI, and it’s worth being honest about where you sit on it.
The first level is consumption. This is “tell me what to think.” You go to AI with a vague intention and accept what comes back. The AI leads, you follow. The output feels like yours because you prompted it, but the thinking happened in the model. My friend’s self-awareness revelation lives here. She outsourced her introspection and got a script.
The second level is collaboration. This is “help me think what I’m already thinking.” You arrive with the pattern half-formed and use AI to chase your own thought to its conclusion. The insight is yours. The articulation is shared. I think this is where I operate most of the time. I’m not asking the machine what to think — I’m using it to see the shape of something I’ve already noticed.
The third level is dependency. This is collaboration that’s been running so long you can’t tell where you end and the tool begins. The line between “I thought this and AI helped me say it” and “AI thought this and I feel like I did” has dissolved. [5]
I want to believe I’m firmly in the second category. But here’s what keeps me honest: even in collaboration mode, the AI is shaping which branches I explore. When I was writing about this, the AI responded to certain threads and not others. It framed the drunk test as the centrepiece. It pushed the identity angle over the social one. I might have gone somewhere completely different talking to a real person at a pub.
The direction of my thinking might be mine. The path is co-constructed in ways I can’t fully untangle.
And the scary question isn’t whether I’m in category two right now. It’s whether category two inevitably slides into category three over time, so gradually that I never notice. Because at some point you stop being able to tell what’s you and what’s the tool. Which ideas are good and which ones just sound good. Where your thinking ends and the AI begins.
While writing this, I nearly proved my own point in the worst possible way. Here’s the actual conversation:
Read that and tell me it doesn’t sound convincing. I read it and thought “yeah, that’s exactly right.” I nearly built an entire section of this post around it.
Then I stopped and asked Claude a question:
It made it up. Not maliciously — it pattern-matched what would sound convincing in context, and it was right: it did sound convincing. I’d built an entire spectrum of AI use — consumption, collaboration, dependency — and carefully placed myself in the flattering middle category. And the evidence I used to put myself there was an unverifiable claim that an AI generated because it served the argument we were building together.
That’s not collaboration. That’s category three. That’s the dependency I was worried about sliding into, happening in real time, in the middle of a post about the danger of exactly this.
The only reason I caught it is that I did what I’m telling you to do — I probed instead of accepting. But I nearly didn’t. The fluency was too good. The framing was too clean. It felt like mine.
I don’t have a clean ending for this because I don’t think there is one. I don’t even trust my own ending anymore, because I know the AI is going to help me write it, and it’s going to sound better than whatever I’d come up with alone, and I’m going to feel like it captures exactly what I mean.
The drunk test isn’t just for her. It’s for all of us. Including me. Especially me.
I wrote this with AI. That’s the whole point.