TDD Is Your Safety Net for AI Assisted Coding

Published on December 31, 2025

I have never been a TDD purist. The whole write-tests-first-no-exceptions religion always felt a bit much. Sometimes you are exploring. Sometimes you do not know what the code should do until you have written it. Sometimes you just need to ship the thing and circle back to tests later. I get it. I have lived it.

But AI assisted coding has changed my relationship with TDD. Not because I suddenly found religion, but because tests solve a very specific problem that AI introduces: you cannot trust the output.

AI will confidently generate code that looks correct, reads correctly, and is completely wrong. It will call functions that do not exist. It will use APIs from three versions ago. It will solve a slightly different problem than the one you asked for. The code compiles. The syntax is fine. It just does not work.

Tests catch this. If you write tests first, you have a specification. You know what the code is supposed to do before the AI writes it. When the AI hands you something, you run the tests. Red means try again. Green means maybe it actually works.

The workflow that clicked for me

I have settled into a pattern that works. Before I ask the AI to implement something, I write the tests. Not exhaustive tests. Not every edge case. Just enough to describe what I want.

Say I need a function that validates email addresses. I write a few tests first. Valid email returns true. Invalid email returns false. Empty string returns false. Maybe one edge case I know matters. Takes five minutes.

Then I prompt the AI. Here are my tests, write the implementation. The AI generates something. I run the tests. If they pass, I review the code to make sure it is not insane. If they fail, I paste the failures back and ask for a fix. Iterate until green.
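The loop above can be sketched in a few lines. Everything here is hypothetical: the function name `is_valid_email` and the four tests are just the contract I described, and a plausible regex implementation is included only so the example is self-contained; in practice you would write the tests alone and let the AI produce the rest.

```python
import re

# Step 1: the tests, written before any implementation exists.
def test_valid_email():
    assert is_valid_email("alice@example.com") is True

def test_invalid_email():
    assert is_valid_email("not-an-email") is False

def test_empty_string():
    assert is_valid_email("") is False

def test_missing_domain():  # the one edge case I know matters
    assert is_valid_email("alice@") is False

# Step 2: the kind of implementation the AI might hand back.
# A simple regex check, deliberately not full RFC 5322 parsing;
# it only has to satisfy the contract above.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    return bool(EMAIL_RE.match(address))

# Step 3: run the tests. Red means re-prompt with the failures;
# green means it is worth reviewing the code.
for test in (test_valid_email, test_invalid_email,
             test_empty_string, test_missing_domain):
    test()
print("all green")
```

If one of the asserts fires, the traceback is exactly what I paste back into the prompt.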

This is faster than writing the implementation myself for most things. It is also safer than accepting AI output without verification. The tests are the contract. The AI fulfils the contract or it tries again.

Tests expose hallucinations

The hallucination problem with AI is real. I have seen Copilot autocomplete a function call to a method that does not exist in the library. I have seen Claude generate code using an outdated API that was deprecated years ago. The AI does not know it is wrong. It has no way to check.

Tests check. If the AI invents a function, the test fails because the function does not exist. If it uses the wrong API, the test fails because the behaviour is wrong. You find out immediately instead of discovering the bug in production three weeks later.
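Here is a toy illustration of that failure mode, with an invented scenario: suppose the AI generates a helper that calls `str.trim()`, a method that exists in JavaScript and Java but not on Python strings (the real method is `str.strip()`). The code parses fine; the first test to exercise it blows up.

```python
def normalize(s: str) -> str:
    # Hallucinated API: Python strings have no .trim() method.
    return s.trim()

# The test finds out immediately, not in production.
try:
    normalize("  hi  ")
    print("test passed")
except AttributeError as exc:
    print("test failed:", exc)
```

Nothing about this code looks wrong at a glance, which is the point: only running it against an expectation exposes the invention.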

This is not a knock on AI tools. They are incredibly useful. But they are useful the way a very fast junior developer is useful. They produce a lot of output quickly. Some of it is great. Some of it is subtly broken. You need a way to verify.

You still have to think

TDD does not mean you can turn your brain off. You still have to write good tests. If your tests are wrong, passing them means nothing. If your tests are incomplete, the AI will find the gaps and produce something that technically passes but does not actually work.

Writing tests first forces you to think about what you want before you ask for it. What are the inputs? What are the expected outputs? What should happen when things go wrong? These are questions you should answer anyway. TDD just makes you answer them upfront.

This is good discipline regardless of AI. But with AI it becomes essential. The clearer you are about what you want, the better the AI performs. Tests are a way of being very clear.

Not everything needs this

I am not suggesting you TDD every line of code. Exploratory work, prototypes, throwaway scripts: sometimes you just need to move fast and see what happens. That is fine. Use AI however you want for that stuff.

But when you are building something that matters, something that will live in production, something other people will depend on, write the tests first. Let the AI do the implementation. Verify with the tests. Review the result.

You get the speed benefits of AI without gambling on correctness. You get a safety net that catches the mistakes AI inevitably makes. You get documentation of what the code is supposed to do, which helps when you or someone else has to maintain it later.

TDD and AI are not competing approaches. They complement each other. AI makes writing code faster. TDD makes sure that faster code actually works.

I would rather ship tested code that an AI helped write than untested code I wrote myself. At least I know the tested version does what I intended.