Posts tagged "Claude"

Human in the Loop: AI-Assisted Development Still Needs Developers

It is not a secret anymore. Most developers use AI tools now. If you are not using something like GitHub Copilot, Claude Code, OpenAI Codex, or even just pasting problems into ChatGPT, you are probably in the minority. The stigma has evaporated. Nobody is pretending they wrote every line by hand anymore. Using AI to write code is just what we do now, like using Stack Overflow was ten years ago, except the answers are usually better and you do not have to scroll past three people arguing about whether the question is a duplicate.

TDD Is Your Safety Net for AI-Assisted Coding

I have never been a TDD purist. The whole write-tests-first-no-exceptions religion always felt a bit much. Sometimes you are exploring. Sometimes you do not know what the code should do until you have written it. Sometimes you just need to ship the thing and circle back to tests later. I get it. I have lived it. But AI-assisted coding has changed my relationship with TDD. Not because I suddenly found religion, but because tests solve a very specific problem that AI introduces: you cannot trust the output.
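A minimal sketch of what that looks like in practice. The slugify function, its contract, and the test names here are all my own invention for illustration, not from any of the posts: the tests exist before the implementation does, and whatever the assistant generates has to earn its way past them.

```python
import re

def slugify(text: str) -> str:
    """Candidate implementation (imagine this part came from the assistant)."""
    # Keep letters, digits, whitespace, and hyphens; drop everything else.
    cleaned = re.sub(r"[^a-zA-Z0-9\s-]", "", text)
    # Collapse runs of whitespace/hyphens into single hyphens, trim, lowercase.
    return re.sub(r"[\s-]+", "-", cleaned).strip("-").lower()

# These tests were written first; they are the contract, not the model's confidence.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Claude's Code!") == "claudes-code"

def test_collapses_repeated_separators():
    assert slugify("one -- two") == "one-two"

if __name__ == "__main__":
    test_lowercases_and_hyphenates()
    test_strips_punctuation()
    test_collapses_repeated_separators()
    print("all tests pass")
```

The point is not the function; it is that the acceptance criterion lives outside the model, so "looks plausible" stops being good enough.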

Anthropic finally admits the Claude quality degradation, weeks too late

Claude Code fell off a cliff these last few weeks. Anyone actually using it felt the drop: dumber edits, lost context, contradictions, the works. No, we weren't imagining it. Well, Anthropic has finally confirmed what many of us already knew weeks ago. From their incident post on September 8: "Investigating - Last week, we opened an incident to investigate degraded quality in some Claude model responses. We found two separate issues that we've now resolved. We are continuing to monitor for any ongoing quality issues, including reports of degradation for Claude Opus 4.1."

Anthropic's Claude 4 issues & limits are a cautionary tale

I like good tools as much as anyone, but the last couple of weeks around Anthropic's Claude 4 family have been a reminder that you can't build your working life on shifting sand. Models change, limits move, and entire features wobble without much notice. Useful? Absolutely. Dependable enough to be your only plan? Not even close. If you've been anywhere near Claude lately you've probably felt the turbulence. Some days are fine; other days you're staring at elevated errors, partial outages, or features that feel half-broken. Claude Code in particular has been hot and cold: one session will cruise through a tricky refactor, and the next will cough, forget context, or hit a wall with token and usage limits. That volatility isn't new in AI land, but the frequency and breadth of issues recently have been hard to ignore.