There’s a fantasy floating around tech circles that AI is about to make software developers obsolete. The logic goes something like this: AI can write code now, therefore anyone can build software, therefore we don’t need programmers anymore. It’s a seductive idea if you’ve never actually shipped production software.
I’ve been using AI coding assistants daily for well over a year now. Claude, Copilot, Cursor, the works. And here’s what I’ve learned: AI is genuinely transformative for experienced developers. It’s also genuinely dangerous in the hands of people who don’t know what they’re looking at.
Somewhere in the last year, we collectively decided that typing prompts into an AI and hoping for the best counts as software development. Andrej Karpathy coined the term “vibe coding” in February 2025, and what started as a cheeky observation has become an actual workflow for people shipping production code. This is a problem.
Let me be clear about something: I use AI coding tools every day. Claude, Copilot, Cursor. They’re genuinely useful. But there’s a massive difference between AI-assisted coding and vibe coding, and the industry seems determined to blur that line until someone’s startup implodes in a spectacular security breach.
This site has been around for almost 16 years now. Sixteen years. I started it when I was younger, dumber, and convinced I had opinions worth sharing. Turns out I was right about one of those things.
I never studied English. I do not have a degree in writing or journalism or communications. Maths was always my weakness. Give me numbers and my brain starts looking for the exit.
But writing? Writing came naturally. Not because I am especially talented, but because I have always had things I wanted to say and writing was the cheapest way to say them. No barrier to entry. Just words on a screen and a publish button.
It is not a secret anymore. Most developers use AI tools now. If you are not using something like GitHub Copilot, Claude Code, OpenAI Codex, or even just pasting problems into ChatGPT, you are probably in the minority. The stigma has evaporated. Nobody is pretending they wrote every line by hand anymore.
Using AI to write code is just what we do now, like using Stack Overflow was ten years ago except the answers are usually better and you do not have to scroll past three people arguing about whether the question is a duplicate.
I have never been a TDD purist. The whole write-tests-first-no-exceptions religion always felt a bit much. Sometimes you are exploring. Sometimes you do not know what the code should do until you have written it. Sometimes you just need to ship the thing and circle back to tests later. I get it. I have lived it.
But AI-assisted coding has changed my relationship with TDD. Not because I suddenly found religion, but because tests solve a very specific problem that AI introduces: you cannot trust the output.
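To make that concrete, here is a minimal sketch of the workflow I mean: pin the behaviour down in tests I wrote myself, then let the AI produce the implementation. The `slugify` function and its edge cases are hypothetical stand-ins for whatever you are about to delegate, and I am assuming Vitest as the test runner:

```ts
// slugify.test.ts — written by me, before asking the AI for an implementation.
// The tests are the contract; the generated code either satisfies them or gets rejected.
import { describe, it, expect } from "vitest";
import { slugify } from "./slugify"; // hypothetical module the AI will write

describe("slugify", () => {
  it("lowercases and hyphenates plain titles", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips punctuation instead of encoding it", () => {
    expect(slugify("What's new in v2.0?")).toBe("whats-new-in-v2-0");
  });

  it("collapses repeated separators and trims the ends", () => {
    expect(slugify("  spaced --- out  ")).toBe("spaced-out");
  });

  it("never returns an empty slug", () => {
    expect(slugify("!!!")).toBe("untitled");
  });
});
```

The point is not these particular tests; it is that the AI's output gets judged against something I wrote and understand, rather than against a vibe.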
Every time an AI music app starts feeling like the future, the labels show up with lawsuits and NDAs. This month they skipped the velvet gloves and went straight to taking the keys. The goal is not safety or artist love. It is control, and they are getting it by strangling the very features that made these tools fun.
When Udio slammed the door on October 30, it killed downloads without warning while announcing its Universal deal. A few days later it tossed users a 48-hour retrieval window as a peace offering, then shut the chute again. The platform that promised you owned your outputs is now a walled garden where your own songs cannot leave. The angry Discords and refund requests did not move the needle, because the settlement terms mattered more than the people who built the hype.
We’re living through a fascinating time in software development. AI coding assistants like Claude Code, Codex CLI, and GitHub Copilot have become powerful tools that can generate code, explain complex algorithms, and even debug issues.
I’ve watched developers embrace these tools with varying degrees of success, and there’s a clear pattern emerging: the developers who truly benefit from AI are the ones who already know how to code well.
There’s a dangerous narrative floating around that we’re approaching the end of programming as we know it.
I write code for a living, but more and more I feel like my job is designing systems. Some of those systems include code I type. Some include services, models and tools that I orchestrate. The biggest shift is mental: stop thinking in files and start thinking in flows, boundaries, feedback and failure. If you have solid fundamentals, this moment can multiply your impact. If you treat every new tool like magic, it will waste your time and your client’s money.
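As a rough sketch of what thinking in flows, boundaries, feedback and failure looks like in code, here is one hypothetical step in such a system: the model call sits behind an explicit boundary, its output is checked before anything downstream trusts it, and failure has a defined path instead of an exception bubbling up from somewhere deep. The `callModel` adapter and `looksLikeSummary` check are illustrative, not any particular library's API:

```ts
// One step in a larger flow: summarise a document with a model.
// The boundary is a single function with a validated output and a defined failure shape.

type StepResult<T> =
  | { ok: true; value: T }
  | { ok: false; reason: string }; // failure is data, not an exception

// Hypothetical adapter around whatever model provider you actually use.
async function callModel(prompt: string, timeoutMs: number): Promise<string> {
  // provider-specific call goes here; stubbed for the sketch
  throw new Error("not implemented");
}

function looksLikeSummary(text: string): boolean {
  // Feedback: cheap checks on the output before it crosses the boundary.
  return text.length > 0 && text.length < 2000 && !text.includes("As an AI");
}

export async function summariseStep(doc: string): Promise<StepResult<string>> {
  try {
    const raw = await callModel(`Summarise:\n${doc}`, 15_000);
    if (!looksLikeSummary(raw)) {
      return { ok: false, reason: "output failed validation" };
    }
    return { ok: true, value: raw.trim() };
  } catch (err) {
    return { ok: false, reason: `model call failed: ${String(err)}` };
  }
}
```

Whether the step is a model, a service, or code I typed myself matters less than the fact that its inputs, outputs and failure mode are explicit.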
Meta has released version 2 of its open-source Llama AI model, and it has caught plenty of attention – but not entirely for the right reasons. Coming in a broad spectrum of sizes, from 7 billion to an impressive 70 billion parameters, Llama 2 certainly stands out.
If you’re curious, you can try the models for yourself on Perplexity, though only the 7 billion and 13 billion parameter versions are available there.
But as I’ve dug deeper into Llama 2, I’ve begun to ask myself: has Meta gone too far with safety measures?
Since OpenAI released its long-awaited Code Interpreter plugin for ChatGPT, I have been playing with it extensively: throwing everything at it, from zipping up a large repository and asking it questions, to uploading spreadsheets and generating imagery.
It appears that most people are using Code Interpreter for what it was intended for: working with data and code, performing analysis on documents, and other genuinely useful things.