Claude Code fell off a cliff these last few weeks. Anyone actually using it felt the drop: dumber edits, lost context, contradictions, the works. No, we weren’t imagining it.
Well, Anthropic has finally spoken, confirming what many of us already knew weeks ago. From their incident post on September 8:
> Investigating - Last week, we opened an incident to investigate degraded quality in some Claude model responses. We found two separate issues that we’ve now resolved. We are continuing to monitor for any ongoing quality issues, including reports of degradation for Claude Opus 4.1.
I have lived through a few waves of tooling changes. If you write software long enough, you get comfortable with the ground moving under your feet. The latest shift is vibecoding: pointing a capable model in roughly the right direction and steering it with context, examples, and taste. Tools like Claude Code, Codex CLI, and Gemini Code make that feel effortless. This post is less about what the tools are doing and more about how to use them without losing your engineering brain.
I like good tools as much as anyone, but the last couple of weeks around Anthropic’s Claude 4 family have been a reminder that you can’t build your working life on shifting sand. Models change, limits move, and entire features wobble without much notice. Useful? Absolutely. Dependable enough to be your only plan? Not even close.
If you’ve been anywhere near Claude lately you’ve probably felt the turbulence. Some days are fine; other days you’re staring at elevated errors, partial outages, or features that feel half-broken. Claude Code in particular has run hot and cold: one session will cruise through a tricky refactor, and the next will cough, forget context, or hit a wall with token and usage limits. That volatility isn’t new in AI land, but the frequency and breadth of recent issues have been hard to ignore.