A guy at a meetup asked me last week what Claude Code costs. Reasonable question. He was thinking about pitching it internally and wanted a number to put on the slide. I opened my mouth to answer and then closed it again, because the honest answer was “it depends what day you ask, who you ask, and whether Anthropic is currently A/B testing your account.”
The slide did not get made.
Here is what happened, as best as anyone has been able to piece together. In late April, some Pro users — the $20/month tier — started noticing Claude Code disappearing from their accounts. No email. No banner. No changelog. Just: yesterday it was there, today it isn’t, and the only way you found out was by trying to open it and getting bounced. People started posting about it. The internet did its thing. Anthropic eventually clarified that this was a “test” — they were experimenting with moving Claude Code to a higher-priced tier to see what would happen. Then, after the test produced a lot of yelling, they said it was over and Pro users would keep their access. Maybe. For now. Until the next test.
Simon Willison, who is approximately the only person doing diligent journalism on this stuff, captured it best. Asked whether Claude Code was actually included in the $20 Pro plan, his answer was “Probably not — it’s all very confusing.” Probably. Not. It’s all very confusing. That, verbatim, is the state of pricing intelligence on one of the most-used AI dev tools in the world, from a guy whose actual job is to track this stuff. If Simon doesn’t know, you don’t know. If Simon doesn’t know, the meetup guy with the slide doesn’t know. If Simon doesn’t know, the woman pitching this to her CFO next Tuesday absolutely does not know.
Let me put my SRE hat on for a second. Pricing is supposed to be the easy part. Pricing is the part you write down on a webpage and don’t change. The model can do whatever weird stuff it wants on the inside — switch reasoning depths, swap inference profiles, get nerfed in three separate ways across five weeks (we’ve already covered that one). Fine. The model is a black box. I have made my peace with that. But the price? The number you charge me? That is a string in a database. That is a row in Stripe. That is the most legible, version-controllable, change-managed, communicable variable in the entire stack. And Anthropic can’t keep it stable for thirty days.
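To make the “price is just a row” point concrete, here is roughly the entire data model a pricing change has to touch. Everything in this sketch is hypothetical (the plan names, fields, and dates are illustrative, not Anthropic’s actual schema); the shape is the point: small, versionable, and trivially auditable.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a plan's entitlements as plain, versioned data.
# Not any vendor's real schema; it just shows how small and auditable
# this kind of record is compared to everything else in the stack.

@dataclass(frozen=True)
class PlanVersion:
    plan: str
    monthly_usd: int
    features: frozenset[str]
    effective: date        # when this version takes effect
    announced: date        # when customers were told about it

    def valid(self) -> bool:
        # The entire change-management policy: announce before you change.
        return self.announced < self.effective

history = [
    PlanVersion("pro", 20, frozenset({"chat", "claude-code"}),
                effective=date(2025, 4, 1), announced=date(2025, 3, 1)),
    # A silent experiment looks like this: the change lands before the email.
    PlanVersion("pro", 20, frozenset({"chat"}),
                effective=date(2025, 4, 28), announced=date(2025, 5, 2)),
]

for v in history:
    print(v.plan, sorted(v.features), "ok" if v.valid() else "SILENT CHANGE")
```

In a real system the `valid()` check would be a deploy gate or an alert rather than a print statement, but the comparison itself really is that small.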
I run a side consultancy. I quote SMB clients on AI tooling. The pitch is “here is the monthly cost, here is what you get, here is the ROI.” That is the entire pitch. The pitch is not “here is approximately the monthly cost, give or take a tier, depending on whether the vendor is currently running an experiment on your account, and also one of the products may or may not be in your plan, and we’ll find out together.” That is not a pitch. That is a horoscope.
The closest analogy I can come up with is if AWS decided one Tuesday to silently A/B test removing CloudWatch from your Business Support tier. No email. Just: today CloudWatch costs extra, surprise. The collective fury would be visible from low Earth orbit. Engineers would be writing screenplays about it. There would be a Senate hearing. But because this is AI and the industry is six minutes old, we just kind of… absorb it. “Oh, Claude Code might not be in Pro anymore. They’re testing.” Sure. Carry on.
Here is the part that gets me. The whole pitch of paying for this stuff — the whole reason I have a Pro subscription instead of just hitting the API — is predictability. Flat rate. Known cost. Don’t think about it. The Pro plan is supposed to be the boring tier. Pro is for the person who wants to stop counting tokens and just use the thing. If Pro is now subject to live experimentation on what features it actually contains, then Pro is not a plan. Pro is a beta program with a price tag.
And if I can’t tell my client what they’re paying for, I can’t sell it. I can sell uncertainty around the model output — that’s just how LLMs work, everyone knows it. I cannot sell uncertainty around the bill. The bill is the only thing the CFO is actually going to read.
There’s a real product question buried in here, which is: what is Anthropic actually trying to do with the pricing tiers? Because right now it looks like nobody at Anthropic knows either. The Pro tier added Claude Code. Then maybe didn’t. The $100 Max tier exists. The $200 Max tier exists. The API has its own pricing. The Bedrock prices are different. Cursor and Windsurf are passing through their own bundles. By my last count, that is seven distinct ways to pay for “Claude doing code stuff,” and the relationships between them change month to month. The pricing page reads like a menu at a restaurant where the chef is also the waiter and is also the person setting the prices and is also currently mid-divorce.
The cynical read is that the constant repricing is a feature, not a bug. As long as nobody knows what the right tier is, you’ll keep buying up — paying $200 to make sure you have what you need when $20 would have done it, or paying $20 to save money and finding out three weeks later that the thing you needed was on the $100 tier. Confusion is monetizable. Especially in a market growing this fast, where switching costs are real and most buyers don’t have the energy to model out which tier they actually need.
The charitable read is that they genuinely don’t know yet. The product is moving so fast that the pricing keeps trying to catch up and never quite does. Honest experimentation. Trying to find product-market fit on the bundling. I want to believe this one. I’d believe it more if anybody at Anthropic were telling me, in advance, when an experiment was about to run on my account.
The fix here is not complicated. It’s an email. “Hey, we’re testing pricing changes for Pro users between these dates. Your access to Claude Code may be affected. Here’s what we’re trying to learn. Here’s how to opt out.” That’s it. That’s the whole thing. Vendors send these emails all the time. They are a known shape. Take notes.
So when the next person at the next meetup asks me what Claude Code costs, I’m going to do what Simon Willison did. I’m going to say “Probably not — it’s all very confusing,” and then I’m going to point them at a search bar, and then I’m going to go drink my coffee. The slide can stay blank. The confusion is not mine to resolve.
If Anthropic doesn’t know what Claude Code costs, I’m not going to pretend I do.