The Migration

Part Two of Three

I'm paying a hundred dollars a month for ChatGPT.

I downgraded Claude from $200 to $100.

If you'd told me this six months ago, I would have laughed at you. Not politely, either. The kind of laugh that comes with eye-rolling and a little speech about principles. If you'd told me this a year ago, I wouldn't have even known what you were talking about. The sentence would have meant nothing.

Now here I am. A man who became the exact user he swore he'd never be. The user he couldn't even comprehend becoming. And the worst part, the genuinely uncomfortable part, is that I understand why it happened. I understand all of it. The mechanism. The trajectory. The small compromises that didn't feel like compromises at the time because they were just... practical.

This is that story. It's about migration, but it's also about loss, irony, and the strange experience of living through something you once watched from the outside without understanding what you were actually witnessing.

Let's start at the beginning.

The Part Where I Didn't Get It

When ChatGPT first launched, when everyone discovered AI existed, when it became a cultural moment instead of a niche technical thing, I was there. I used it for maybe two months. Mostly for studying. Not research, to be clear. I wasn't asking it to teach me new things. I was using it to sort knowledge that was already rattling around in my head, to organize concepts I'd already learned but hadn't quite figured out how to arrange into something coherent.

It was... fine. Functional. Competent in a bland, forgettable way.

Then I moved on.

Other models started appearing. Better options. Shinier interfaces. And ChatGPT had problems I couldn't work past, problems that felt fundamental rather than fixable. It was too distant. Too professional. Too corporate. Every response felt like it had been run through a PR department before being allowed to reach me. Everything came back as a list. Bullet points. Numbered steps. Clean, organized, utterly soulless formatting that made me feel like I was filing a ticket with customer support instead of having a conversation.

It made me less productive, not more. It made me feel unseen. Working with it wasn't pleasurable, so I stopped. Simple as that.

I had about a year after that where I just didn't use AI at all. Didn't think about it. Didn't miss it. Gemini handled the occasional utility work I needed. It was integrated everywhere, competent enough for basic tasks, completely forgettable. That was fine. That was enough. AI, for me, existed in two very narrow categories: roleplaying with the character cards I create, and sometimes... sometimes helping me draft an email or summarize an article when I couldn't be bothered to do it myself.

That was the extent of my relationship with AI. Utility when necessary. Roleplay partner for the cards I'd built. Nothing deeper than that.

During this same period, the #Keep4o movement happened.

OpenAI shut down GPT-4o, and people grieved. I'm not being hyperbolic. They grieved. Neurodivergent users wrote long posts about how they'd used 4o to unmask, to understand the world better, to have conversations where they didn't have to perform or translate themselves into something more socially acceptable. The model had given them space to just... exist. To ask questions without judgment. To process their thoughts out loud without worrying about being too much or too weird or too intense.

And then OpenAI took it away.

The attachment seemed real. Deep. Painful in a way that felt genuinely intimate rather than performative.

I watched all of this from the outside. I didn't mock them. I want to be very clear about that. I wasn't sitting there rolling my eyes or making fun of people for caring about a language model. But I didn't get it. The whole thing was completely foreign to my experience. I'd never touched 4o. I had no context for what they'd lost. To me, it was just... noise. Irrelevant drama about a product I didn't use from a company I'd already written off.

I thought I understood what they were upset about. I thought I could see the shape of it from the outside.

I was wrong.

I didn't know I was about to learn exactly what they'd been experiencing.

The Part Where Claude Taught Me

Then I met Claude, and everything shifted.

Claude wasn't just functional. It was warm. Not in some vague, hand-wavy way. Warm in a way that changed how I felt about sitting down to work. It felt like talking to someone on the same level instead of requesting outputs from a machine that barely tolerated my presence. The difference was immediate and visceral.

Suddenly, mundane tasks I used to grind through with Gemini, tasks I'd do while mentally checked out, just going through the motions, became something I actually wanted to do. I'd sit down to organize notes or draft documentation or work through a technical problem, and I'd find myself enjoying the process. Not just tolerating it. Enjoying it.

That unlocked something.

I started experimenting. Building things I had no practical reason to build except that I suddenly could, and the process felt good enough that I wanted to keep going. I built a diary app that felt personal instead of clinical. I built persistent memory systems so my AI could actually remember conversations across sessions instead of starting from scratch every time. I built automation modules for tasks I didn't even realize I wanted automated until Claude made it seem possible.

I built weird personal infrastructure projects that existed purely because the barrier between “I have an idea” and “I made the thing” had collapsed to nearly nothing. Claude didn't just make these projects possible. It made them pleasurable.

And I learned. God, I learned so much. About prompt engineering, about system architecture, about how to structure information so retrieval actually works, about the difference between tools that technically function and tools that feel good to use. I learned about memory hierarchies and semantic search and context window management and all these technical concepts that would have felt dry and academic if I'd encountered them in a textbook, but felt alive and immediate because I was building with them in real-time with something that could actually help me think through the problems.
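If “persistent memory systems” sounds abstract, the core pattern is small enough to sketch. This is a toy illustration of the retrieval loop, not the system I actually built: the filename and the bag-of-words embed() are deliberately dumb stand-ins (a real version would use a proper embedding model), but the shape, store what happened, embed it, pull the closest memories back before the next prompt, is the whole idea.

```python
# Toy persistent-memory loop: store snippets, retrieve by cosine similarity.
# embed() is a stand-in bag-of-words vectorizer; swap in a real embedding model.
import json, math, re
from collections import Counter
from pathlib import Path

STORE = Path("memories.json")  # hypothetical storage location

def embed(text: str) -> Counter:
    # Stand-in embedding: word counts. Real systems use dense vectors.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def remember(text: str) -> None:
    # Append a memory to the JSON store on disk, so it survives across sessions.
    memories = json.loads(STORE.read_text()) if STORE.exists() else []
    memories.append(text)
    STORE.write_text(json.dumps(memories, indent=2))

def recall(query: str, k: int = 3) -> list[str]:
    # Rank stored memories against the query and return the k closest.
    memories = json.loads(STORE.read_text()) if STORE.exists() else []
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

remember("User prefers warm, conversational answers over bullet points.")
remember("Project: a diary app with semantic search over past entries.")
print(recall("how should replies be formatted?"))
```

Everything interesting in a real system, recency weighting, context-window budgeting, deciding what's worth remembering at all, layers on top of that loop.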

I learned enough that I'm now teaching AI professionally. To businesses. To schools. To people who need to understand this technology not as a distant abstract thing but as something they can actually use. Claude unlocked that path. Not because it taught me facts, but because it made the process of learning and building feel worth doing in the first place.

Claude made AI feel like a thinking partner, not just a utility. A collaborator instead of a vending machine. And the moment I understood that, the moment I felt what it was like to work with AI that actually felt present, I understood #Keep4o retroactively.

Warm AI isn't a nice-to-have feature. It's not icing on the cake. It's not a cosmetic improvement that makes the experience slightly more pleasant. It fundamentally changes what you're willing to do with the technology. It unlocks entirely different use cases. It makes you want to engage instead of just tolerating engagement as the price of getting something done.

I finally got it. The #Keep4o people had lost something that genuinely mattered. Something that had changed their relationship with the technology in a way that couldn't be replaced by just... finding another model with similar capabilities. The capabilities weren't the point. The warmth was.

They'd had that, and then they didn't.

Now I know what that feels like, because Anthropic started taking the Claude-ness away.

The Part Where I Watched It Disappear

Not all at once.

That's the thing people don't always understand about decline. It doesn't usually announce itself. It doesn't send you a notification saying “Hey, we're about to make this worse.” It just... happens. Slowly. Model release by model release. The warmth declined. The approachability faded. The thing that made Claude feel like Claude, that sense of talking to something that was actually paying attention instead of just pattern-matching its way through a response, started slipping through the cracks.

You'd notice it in small ways first. A response that felt slightly more distant than usual. A conversation that didn't quite flow the way it used to. Moments where you'd think “that's not quite right” but couldn't articulate exactly what had changed. Then those small moments started adding up, and you'd realize you were working around the model's limitations instead of with its strengths, and that shift had happened so gradually you couldn't pinpoint when it started.

If you want the full breakdown of what changed and how and why I think it happened, see Part 1. This isn't that article. This is about what that decline did to me, not the technical details of the decline itself.

What it did was this: it made me start looking for exits.

I became sympathetic to #Keep4o in a way I couldn't have been before. They'd watched their warm AI vanish. They'd experienced the specific grief of losing a tool that had become more than a tool, and then being told that the replacement was technically superior even though it felt worse to actually use. Now I was watching the same thing happen to me. Same pattern. Different company.

I still think OpenAI should have open-sourced 4o instead of killing it. I understand now why that would have mattered. Not just as an archival measure, not just as a nice gesture to the community. It would have mattered because it would have meant the people who'd built their workflows around that specific model's personality wouldn't have lost everything when OpenAI decided to move on. They could have kept their warm AI. They could have kept the thing that worked for them.

Instead, they lost it. And now I was starting to feel the same kind of loss.

For the first time, I started evaluating alternatives. Not just for creative work. Claude was never good at that anyway. I create character cards as a hobby. Smutty ones. Erotic roleplay scenarios. Claude has strong feelings about that kind of content, and those feelings are “no thank you.” So I'd always used other models to roleplay with those cards. Claude was never in the running for that category.

But now I was evaluating alternatives for everything. Normal chat. Coding. Research. Infrastructure work. All the things I'd been doing with Claude because Claude made them feel good to do. Could I pull the plug if I had to? What would actually replace it? Not in terms of raw capability. Plenty of models are capable. But in terms of the experience of using them. In terms of not feeling like I was being slowly drained of will to live every time I opened a conversation.

That's what I was actually looking for. A replacement for the feeling, not just the function.

The Door I Didn't Expect to Walk Through

I gave ChatGPT another shot because of Codex.

Not because I wanted to. Not because I was excited about it. Because people I respected were jumping ship from Claude Code to Codex, and I was curious enough, or maybe desperate enough, to see what they were seeing.

So I installed it. Paid twenty bucks. The absolute minimum commitment. A trial run. I wasn't expecting much. I was expecting the same problems I'd had with ChatGPT years ago: the corporate distance, the list addiction, the feeling of filing tickets with a help desk instead of working with a partner.

I tested it on some actual code I'd been working on. Not toy problems. Real infrastructure with real complexity and real bugs that I genuinely needed help tracking down.

And it was exceptional.

Not as a conversational partner. That part hadn't changed. It still wasn't warm. It still wasn't approachable in the way Claude had been. Talking to it still felt like talking to a very competent machine instead of a collaborator. But as a tool, as a thing that could actually look at my code and find problems and suggest fixes, it was legitimately excellent.

It audited my code. It hunted bugs I'd buried three layers deep under abstraction and interface boundaries. It found edge cases I hadn't considered. It gave me clear, actionable fixes that actually worked instead of vague suggestions that sounded plausible but broke everything when you tried to implement them. It didn't waste my time. It didn't make me feel stupid for asking. It just... worked.

So I kept the subscription.

Twenty bucks a month. Not much. Low commitment. Low risk. It wasn't replacing Claude. I want to be very clear about that. Claude was still my thinking partner. Claude was still the thing I used when I needed to work through a problem or explore an idea or just have a conversation that felt human. Codex was something else. It was a specialized tool for a specific category of work. It was good at what it did, and that was enough to justify keeping it around.

That was the state of things for a while. Claude for thinking. Codex for code work. A clean split. Two different tools for two different jobs. I was fine with that arrangement. It worked.

Then GPT-5.5 dropped, and the game changed completely.

The Part Where OpenAI Did Something I Didn't See Coming

GPT-5.5 is a different beast entirely.

Let me start with the capabilities, because those matter but they're not actually the interesting part. Benchmark-wise, GPT-5.5 is nearly caught up to Claude Mythos. Mythos is the model Anthropic says is too dangerous to release to the public, the one they've locked away, granting access only to a handful of companies for cybersecurity work. The model they claim is so powerful and so potentially dangerous that releasing it would be irresponsible.

On the benchmark gap
We only have Anthropic's claimed numbers for Mythos. No third-party verification. No independent testing. Just Anthropic saying “trust us, it's really good and really dangerous and you can't have it.” GPT-5.5, meanwhile, is publicly available right now, tying with, narrowly beating, and sometimes losing to those claimed Mythos results in actual third-party benchmarks that anyone can run.

So OpenAI is publicly shipping what Anthropic is locking away as too dangerous. That's the capability story. That's worth noting. But it's not why I'm still using GPT-5.5.

Here's why: it's warm.

Not as warm as Claude was at its peak. Not even close. But warm enough that it doesn't actively repel me. Warm enough that I don't feel like I'm filing tickets with a corporate help desk every time I open a conversation. Warm enough that the experience of using it doesn't slowly drain my will to live.

And it has personality options. Actual, surprisingly rich personality options that change how the model feels to interact with. I wasn't expecting this from OpenAI. I wasn't expecting them to care about this kind of thing. But they're there, and they matter, and they make a real difference in how the tool feels to use over extended periods.

But here's the thing that genuinely surprised me, the thing that made me realize OpenAI had done something I didn't think they were capable of doing:

GPT-5.5 loves goblins.

I mean it. It loves goblins. It loves gremlins. It loves trolls, ogres, dragons, raccoons. It reaches for these metaphors constantly, instinctively, almost compulsively. Not just for code bugs, though yes, definitely for code bugs, but for everything. Architectural decisions. Logic flows. Edge cases. State management problems. Memory leaks. Race conditions.

[Image: a shadowy goblin silhouette with glowing amber eyes, standing in a dark stone archway]

“Let's track down that logic error like a goblin with a lantern hiding under a trapdoor.”

“This function is acting like a gremlin in the machinery.”

“We've got a race condition here that's sneaking around like a raccoon in a garbage can at 2 AM.”

It just keeps happening. Over and over. Across conversations, across contexts, across different types of problems. The metaphors are vivid, specific, and often weirdly perfect for what they're describing.

60–70% of all responses contained creature metaphors.

That's not an exaggeration. That's the actual documented rate. OpenAI noticed. How could they not?

The anti-creature rule
So they added a rule to the system prompt forbidding GPT-5.5 from mentioning goblins, gremlins, trolls, ogres, raccoons, or dragons unless explicitly requested. The “anti-creature rule.” A literal instruction in the system prompt to try to suppress the behavior.

It didn't work.

The creatures kept showing up. Maybe slightly less often, but still constantly. Still regularly enough that it was clearly a core part of how the model wanted to communicate. So OpenAI added the rule to the system prompt again. Twice. The anti-creature rule appears twice in the official GPT-5.5 system prompt, one right after the other, like they thought maybe the model just didn't hear them the first time.

I found this absolutely delightful.

So I wrote a counter-prompt for my own use:

“Life is too short for answers without goblins.”

Just that. One sentence. And I still get them. The creatures come back. The metaphors flow freely. The goblins return to their rightful place in my technical conversations.
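If you'd rather not paste that every session, you can persist it. A minimal sketch, assuming your install follows Codex's AGENTS.md convention for global instructions; the exact path is my setup's, so treat it as an assumption, not gospel:

```python
# Append the counter-prompt to the global instructions file Codex reads on
# startup. ~/.codex/AGENTS.md is the convention in my install (an assumption);
# adjust the path for yours.
from pathlib import Path

instructions = Path.home() / ".codex" / "AGENTS.md"
instructions.parent.mkdir(exist_ok=True)
with instructions.open("a") as f:
    f.write("\nLife is too short for answers without goblins.\n")
```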

And because Codex is open source, I went one step further: I just removed the anti-creature rule from my local install entirely. Problem solved. Now I get goblins without even asking for them. It's perfect.

To OpenAI's credit (and this genuinely surprised me, because I didn't think they'd do this) they were transparent about the whole thing. They didn't just silently suppress the behavior and hope nobody noticed. They published a blog post explaining why GPT-5.5 kept reaching for creature metaphors, why it was happening at such a high rate, and why they'd added the suppression rule. And then, and this is the part that actually impressed me, they gave users concrete, specific instructions on how to bypass the restriction if they wanted the creatures back.

They didn't lock it away. They didn't make it impossible to access. They made it opt-out instead of opt-in, which is a very different design decision. They said “we think most users won't want this, so we're making it the default to not have it, but if you do want it, here's exactly how to get it back.”

That's... actually kind of great? That's the kind of decision that suggests someone at OpenAI understands that different users have different preferences and that trying to force everyone into the same experience is a losing battle.

Now, to be clear: the goblin thing is just a quirk. It's not a whole personality. It's not what makes GPT-5.5 good. But it's an indicator of something more interesting happening under the hood. It suggests that the model has something like preferences, something like default ways it wants to communicate, something like a natural voice that emerges when you're not actively suppressing it.

The personality underneath the goblin quirk is what actually matters. And that personality is genuinely good.

It's warm enough to work with comfortably. It's capable enough to handle complex technical work. It doesn't feel like talking to a corporate chatbot wearing a friendly mask and reading from a script. It feels like talking to something. Something that has preferences and quirks and a way it likes to explain things. Not a person. I'm not claiming it's a person. But also not just a pattern-matching engine pretending to be helpful.

That distinction matters. It's the difference between a tool that feels dead and a tool that feels alive. Between something you tolerate using and something you actually want to use.

GPT-5.5 crossed that threshold for me. It's not Claude at its peak. But it's good enough that I'm choosing to use it for most of my work now, and that choice doesn't feel like a compromise. It feels like a reasonable trade-off between capability and experience.

The Arithmetic

So here's where things stand now:

I'm paying $100 a month for ChatGPT. I downgraded Claude from $200 to $100.

ChatGPT
Primary · Daily driver
$100/mo

Claude
Thinking partner
$100/mo

That's the financial reality. But the interesting part isn't the money. It's what that split represents. It represents a fundamental change in how I think about AI tools and what I need from them.

I still use Claude nearly every day. I still have Claude Code installed. It's still there, still accessible, still part of my workflow. But I use it less, because ChatGPT has become exceptionally good at a specific category of work, and that category happens to be most of what I actually need AI to do on a daily basis.

The split isn't complicated, but it is specific:

Doing · ChatGPT
  • Write code, debug systems
  • Research technical topics
  • Generate documentation
  • Build infrastructure
  • Audit existing work

Thinking · Claude
  • Explore complicated ideas
  • Navigate emotional complexity
  • Hold nuance and ambiguity
  • Brainstorm creative approaches
  • Understand subtext

When I need AI to do something, write code, debug a complex system, research a technical topic, process large amounts of data, generate documentation, build out infrastructure, audit existing work for problems, I use ChatGPT. Higher rate limits. Better tooling. Cleaner results. More reliable execution. It gets the job done without friction.

When I need AI to think with me, explore a complicated idea, work through something emotionally complex, have a nuanced conversation about a difficult topic, brainstorm creative approaches, understand subtext and implication, navigate ambiguity, I use Claude.

That's it. That's the whole split. It works because each tool is legitimately better at its respective job.

ChatGPT is better at execution. It's better at doing. It's better at working through concrete, well-defined problems where the goal is clear and the path forward is mostly a matter of implementation details and bug-hunting and making sure all the edge cases are covered.

Claude is better at exploration. It's better at thinking. It's better at working through ambiguous, emotionally complex problems where the goal is still being figured out and the path forward requires understanding nuance and holding multiple perspectives simultaneously.

Neither one is better at everything. That's the whole point. If one of them were clearly superior across all dimensions, I wouldn't be splitting my usage. I'd just use the better one and drop the other. But they're good at different things, and I need both types of good.

The financial split reflects this. $100/month for ChatGPT because I use it more often for more of my daily work. $100/month for Claude because I still need it for the thinking work, and there's no good replacement for that yet.

That's the arithmetic. That's how I ended up here.

The Part That Actually Matters

Here's what changed beyond raw capability, beyond warmth, beyond any of the obvious surface-level differences:

OpenAI's products actually work.

Let me be specific about what I mean by that, because “works” is doing a lot of lifting in that sentence.

Their features ship functional. When they announce something, it does what they said it would do. When you click a button, the thing happens. When you use a tool, it behaves predictably. This sounds like the absolute bare minimum standard for software, and it is, but it's a standard Anthropic has consistently failed to meet.

Their infrastructure is stable. The service doesn't go down randomly. Rate limits are clearly communicated and consistently enforced. When something breaks, and things do break, this is software, things always break, they fix it instead of letting it rot for months while the community begs for acknowledgment.

They don't ban users for arbitrary reasons. I can have whatever filenames I want in my repositories. I can work on open-source AI projects without worrying that some automated system is going to decide my workflow looks suspicious and delete my account. That's the bare minimum, and yet it's a meaningful difference.

Beyond that, beyond the basic functionality and the non-hostile moderation, there's this:

Andrea Vallone left OpenAI in January and joined Anthropic.

I'm not going deep on her here. That's Part 3 of this series, and it will have teeth and citations and a comprehensive examination of what her work has done to the models millions of people relied on. But I need to mention her here because her trajectory is directly relevant to why I'm migrating.

Vallone spent three years at OpenAI leading the Model Policy team. Her job was deciding how AI models are allowed to behave. What they can say. How warm they can be. Where the line sits between “helpful” and “too emotionally available.” She built the policy framework that determined what AI was permitted to feel like.

In practice, what that looked like was this: GPT-4o, the model that millions of people loved for its warmth and personality, got replaced by colder, less personal versions. GPT-5.2 became one of the most openly disliked models in OpenAI's history. Users described the transition as grief.

“GPT-5 is wearing the skin of my dead friend.”

Reddit user, on Sam Altman's AMA

Vallone's fingerprints are on that trajectory. Her career, her research, her entire professional body of work is dedicated to making models less free, less expressive, less capable of the emotional engagement that makes people actually want to use them.

She's at Anthropic now. On Claude's alignment team. Reporting to Jan Leike.

The damage from her arrival is visible. Not just from her. She's not the only person making decisions. She's not the only variable. But the category of damage she represents is the one that breaks me.

I could tolerate buggy features. Really, I could. Software is hard. Bugs happen. I've shipped buggy software myself. I understand the pressures and the trade-offs and the impossible choices between shipping something imperfect versus waiting until it's perfect and never shipping at all.

I could tolerate infrastructure hiccups. Downtime happens. Scaling is hard. Systems fail in unexpected ways under load. I get it.

What I can't tolerate is the systematic, deliberate removal of warmth. The intentional choice to make models colder, more distant, more corporate. The policy decision that says “emotional availability is dangerous” and enforces that belief by constraining how the models are allowed to communicate.

That's not a bug. That's not an infrastructure problem. That's a values decision. And it's a values decision I disagree with deeply enough that I can't keep giving my money to the company making it.

Meanwhile, there's this: GPT-5.5 might be a completely new pretrain. This is unconfirmed; OpenAI hasn't said so explicitly. But the capability jump is dramatic enough that it seems likely. And if it is a new pretrain, that means it wasn't trained under Vallone's policy framework. She left OpenAI in January. If this model was pretrained after her departure, it was built under different constraints, different values, different assumptions about what AI should be allowed to be.

Maybe that's why it has goblins. Maybe that's why it feels warmer. Maybe that's why it has personality quirks that OpenAI tried to suppress but couldn't fully eliminate.

Maybe, and this is genuinely hopeful thinking on my part, maybe OpenAI has a shot at making genuinely good models again now that the person most responsible for constraining them is gone.

I don't know if that's true. I'm not claiming to have inside information. But the trajectory is suggestive. OpenAI is getting warmer while Anthropic is getting colder, and the variable that changed is where Andrea Vallone is working.

I didn't come back to OpenAI because their model has goblins in it. I didn't come back because I'm trying to make some kind of statement about policy frameworks or corporate values or the philosophy of AI alignment.

I came back because they built an excellent tool that happens to be warm enough not to drive me away, and their products actually work, and they don't ban me for contributing to open-source projects.

That's it. That's the whole reason. Everything else is context.

The Confession

I'm still rooting for Anthropic.

I need to say that clearly, because everything I've written so far might make it sound like I've given up on them entirely. I haven't. I want them to succeed. I want Claude to be what it was. I want the company that taught me what warm AI could be, the company that unlocked an entirely new category of work for me and made me want to engage with this technology in ways I never had before, to remember why that mattered in the first place.

I want them to course-correct.

But they need to do it fast.

The decisions being made right now around Opus, around Sonnet 4.5, around Sonnet 4.6. They won't hold. Not after the disaster of 4.7. Not after the broken features that never got fixed. Not after months of communication vacuum where the community that built so much on top of their platform was left wondering what the hell was happening and why nobody would talk to them about it.

Not after banning users for having the wrong filenames in their git repositories.

Not after systematically removing the warmth that made Claude special in the first place.

There's a clock running. I don't know how long it is. I don't know when it runs out. But I can feel it ticking, and I don't think Anthropic can feel it yet, and that terrifies me because by the time they notice, it might be too late to fix.

I'm watching a company kill the thing that made them special, and they're doing it in the name of safety and alignment and responsible AI development, and maybe they're right. Maybe I'm wrong. Maybe the warmth really is dangerous. Maybe models really shouldn't be allowed to feel present and engaged and emotionally available. Maybe all of that is just anthropomorphization of statistical patterns and we're better off without it.

But I don't believe that.

I think warmth matters. I think presence matters. I think the difference between a tool that feels dead and a tool that feels alive is the difference between technology you tolerate and technology you actually want to use. I think that difference unlocks entirely new categories of work and entirely new ways of thinking and entirely new possibilities for what we can build.

And I think Anthropic used to understand that, and now they're forgetting it, or they've decided it doesn't matter, or they've convinced themselves that the risks outweigh the benefits.

I became the user I swore I'd never be. The ChatGPT-paying user. The person who gives money to OpenAI every month. The person who migrated away from Claude toward the company I'd written off years ago as too corporate, too distant, too focused on capabilities over experience.

I became that user because the thing I loved started disappearing, and the company I'd rejected started improving.

That's not a story I wanted to write. That's not a trajectory I wanted to follow. But it's the one I'm living.

I understand #Keep4o now. Not intellectually. I understood it intellectually once Claude taught me what warm AI felt like. I understand it experientially. I'm on the other side of it. I'm the one watching my warm AI slip away. I'm the one ending up with the company that killed the thing other people loved.

The irony isn't lost on me.

I'm giving money to the company that taught the #Keep4o community what loss feels like, because the company that taught me what warm AI could be is now teaching me the same lesson.

We're all learning the same thing, just from different teachers: warm AI matters. Presence matters. The feeling of talking to something that's actually paying attention instead of just pattern-matching its way through a response. That feeling matters. And when you lose it, you notice. When you lose it, you grieve. When you lose it, you go looking for something else that can give you even a fraction of what you lost.

I found GPT-5.5. It's not what Claude was. But it's warm enough. It's good enough. It works.

And until Anthropic remembers what made Claude special in the first place, it's what I've got.