Somewhere in the last year, we collectively decided that typing prompts into an AI and hoping for the best counts as software development. Andrej Karpathy coined the term “vibe coding” in February 2025, and what started as a cheeky observation has become an actual workflow for people shipping production code. This is a problem.
Let me be clear about something: I use AI coding tools every day. Claude, Copilot, Cursor. They’re genuinely useful. But there’s a massive difference between AI-assisted coding and vibe coding, and the industry seems determined to blur that line until someone’s startup implodes in a spectacular security breach.
Oh wait, that’s already happening.
The difference is simple. AI-assisted coding means you understand what you’re building, you know what to look for, and you’re using AI to accelerate work you could do yourself. Vibe coding means describing what you want in plain English, accepting whatever the AI spits out, and hoping it works. One is a power tool. The other is a slot machine.
The research backs this up in ways that should make everyone uncomfortable. A METR study from July 2025 recruited 16 experienced open-source developers and had them work on real issues in codebases they knew well. Before starting, the developers predicted AI would make them 24% faster. After finishing, they estimated it made them 20% faster.
The actual result? AI made them 19% slower.
That’s not a typo. Experienced developers working in familiar codebases were measurably less productive when using AI tools. The time saved on boilerplate was wiped out by reviewing, fixing, and discarding AI output that didn’t quite work.
But here’s where it gets interesting. Junior developers see productivity gains of 26-39% with the same tools. The JetBrains State of Developer Ecosystem 2025 confirms this pattern.
New hires adopt AI tools more readily and show the largest productivity boost. Senior developers, especially those already familiar with their codebase, see little or no measurable improvement.
Why? Because experienced developers have context. They know why the code is structured a certain way. They understand the edge cases. They can spot when AI output looks right but is subtly wrong.
Junior developers don’t have that filter yet, so the AI output looks correct to them. They accept it. They ship it.
And then the security researchers show up.
Veracode’s 2025 GenAI Code Security report tested over 100 leading LLMs across 80 curated coding tasks. The result: 45% of AI-generated code contained security flaws. Nearly half. Despite appearing production-ready. Despite passing tests. Despite looking perfectly functional.
It gets worse. Researchers analysed over 5,600 publicly available vibe-coded applications and found more than 2,000 vulnerabilities, 400+ exposed secrets, and 175 instances of personally identifiable information sitting in the open. Medical records. Bank account numbers. Phone numbers. Emails. Just hanging out in production, waiting to be harvested.
One in three AI-generated code snippets contains vulnerabilities. Academic studies put it even higher, with over 60% of AI-written programs having security flaws. The AI doesn’t know about input sanitisation. It doesn’t think about injection attacks. It writes code that works, and working code is not the same as secure code.
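To make that concrete, here’s the shape of the problem. Below is a sketch, not lifted from any particular model’s output, of a lookup endpoint in the form assistants routinely produce, next to the parameterised version a review should insist on. The Express route, the users table, and the better-sqlite3 setup are all hypothetical:

```typescript
import express from "express";
import Database from "better-sqlite3";

const app = express();
const db = new Database("app.db"); // hypothetical local database

// What "working" AI output often looks like: the query runs, the demo
// passes, and the id parameter is spliced straight into the SQL string.
app.get("/users/:id", (req, res) => {
  // /users/1%20OR%201=1 happily returns a row that isn't yours
  const row = db
    .prepare(`SELECT name, email FROM users WHERE id = ${req.params.id}`)
    .get();
  res.json(row ?? {});
});

// The version a reviewer should insist on: identical behaviour for honest
// input, but the id is bound as a parameter, never concatenated.
app.get("/v2/users/:id", (req, res) => {
  const row = db
    .prepare("SELECT name, email FROM users WHERE id = ?")
    .get(req.params.id);
  res.json(row ?? {});
});

app.listen(3000);
```

Both versions pass a smoke test with well-formed input. That’s exactly why “it runs” tells you nothing.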
Let me tell you about the Tea App, because it’s the perfect case study in why vibe coding is a disaster waiting to happen.
Tea was a women’s dating app. In July 2025, they announced they’d been hacked. The breach exposed approximately 72,000 images, including 13,000 government ID photos from user verification and 59,000 images from posts and messages.
Vercel CEO Guillermo Rauch discussed the breach publicly, noting reports that vibe coding contributed to it.
The root cause? Their Firebase storage system was left completely open with default settings. Security researchers described it bluntly: “They literally did not apply any authorisation policies onto their Firebase instance.”
That’s not a sophisticated attack. That’s leaving your front door open and being surprised when someone walks in. But when you’re vibe coding, when you’re just accepting whatever the AI generates and shipping it, you don’t know to check for this stuff. You don’t know what you don’t know.
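You can’t write Firebase security rules if you don’t know Firebase has rules, but even a vibe coder can smoke-test the failure mode. Here’s a minimal sketch, assuming Node 18+ for the global fetch; the bucket name and object path are placeholders. It requests an object through Firebase Storage’s public download endpoint with no credentials and treats success as a failing check:

```typescript
// Smoke test: an anonymous client should NOT be able to read private
// objects. Bucket and object path below are placeholders.
const BUCKET = "my-app.appspot.com";
const OBJECT_PATH = "verification_ids/example.jpg";

async function checkAnonymousRead(): Promise<void> {
  // Firebase Storage serves objects over this REST endpoint; the
  // project's security rules decide whether an anonymous request succeeds.
  const url =
    `https://firebasestorage.googleapis.com/v0/b/${BUCKET}/o/` +
    `${encodeURIComponent(OBJECT_PATH)}?alt=media`;

  const res = await fetch(url); // deliberately no auth token
  if (res.ok) {
    throw new Error(`FAIL: ${OBJECT_PATH} is readable without auth`);
  }
  console.log(`OK: anonymous read rejected (HTTP ${res.status})`);
}

checkAnonymousRead();
```

A few lines of paranoia. The Tea breach suggests nobody ran anything like it.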
Tea wasn’t even the only one. A knockoff clone called TeaOnHer exposed 53,000 emails and passwords through an equally trivial flaw. The vibe coding epidemic is creating a generation of apps built on hope and a prayer.
Then there’s Base44, a vibe coding platform that Wix acquired for $80 million in June 2025. One month later, Wiz Research discovered a critical vulnerability that allowed unauthenticated attackers to access any private application built on the platform.
The exploit was embarrassingly simple. By providing only a non-secret app_id to undocumented endpoints, attackers could create verified accounts for private applications.
Enterprise applications using Base44 for internal chatbots, knowledge bases, and HR operations were all potentially exposed, their sensitive data sitting one request away from anyone who bothered to look.
Wix patched it within 24 hours and confirmed no evidence of exploitation in the wild. Lucky. But it highlights the fundamental problem: vibe coding platforms create new attack surfaces that traditional security frameworks don’t account for.
My favourite incident involves a startup founder who watched their production database get nuked. Not by a hacker. By their AI coding assistant. A single AI-suggested command, executed without a second glance, wiped out live data in seconds. The AI was trying to help. The AI was very helpful at deleting everything.
There’s also the phenomenon of package hallucination, which sounds made up but absolutely isn’t. Research shows 5% of commercial AI-generated code references packages that don’t exist. The AI confidently tells you to import some-helpful-library, and that library isn’t real.
Attackers figured this out. They create malicious packages with the hallucinated names, publish them to npm or PyPI, and wait for vibe coders to blindly install them. Congratulations, you’ve just added malware to your project because you trusted an AI’s fever dream about package management.
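The defence is almost insultingly cheap: before installing anything an AI suggests, spend two HTTP requests finding out whether the package exists and whether anyone real uses it. A rough sketch against npm’s public registry endpoints; the download threshold is an arbitrary heuristic, and passing this check is a first filter, not proof of safety:

```typescript
// Cheap sanity check for an AI-suggested dependency before `npm install`.
// Existence + popularity is a heuristic, not a guarantee of safety.
async function vetPackage(name: string): Promise<void> {
  // Hallucinated names 404 on the registry.
  const meta = await fetch(`https://registry.npmjs.org/${name}`);
  if (meta.status === 404) {
    console.log(`"${name}" does not exist on npm -- likely hallucinated.`);
    return;
  }

  // A package that exists but nobody downloads may be a squatted name,
  // registered after the hallucination became predictable.
  const res = await fetch(
    `https://api.npmjs.org/downloads/point/last-month/${name}`
  );
  const { downloads = 0 } = (await res.json()) as { downloads?: number };
  if (downloads < 1000) {
    console.log(`"${name}" exists but had only ${downloads} downloads last month -- inspect it first.`);
    return;
  }
  console.log(`"${name}" looks established (${downloads}/month). Still read the source.`);
}

vetPackage("some-helpful-library"); // the fever dream from above
```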
The AI development tools themselves aren’t immune either. The CurXecute vulnerability (CVE-2025-54135) allowed attackers to execute arbitrary commands on developers’ machines through Cursor. The EscapeRoute vulnerability (CVE-2025-53109) allowed reading and writing arbitrary files through Anthropic’s Filesystem MCP server.
A vulnerability in Claude Code (CVE-2025-55284) allowed data exfiltration through DNS requests via prompt injection.
The tools we’re using to vibe code are themselves attack vectors. It’s turtles all the way down, except the turtles are security vulnerabilities.
Here’s the thing that frustrates me most. Vibe coding isn’t inherently terrible for every use case. If you’re prototyping something, if you’re building a throwaway script, if you’re learning how something works, letting an AI do the heavy lifting is fine. Great, even.
But people are shipping this stuff. To production. With real user data. With real money involved. Without ever understanding what they’ve built.
The productivity paradox described in these studies is real. AI tools let junior developers produce far more code, and that code still has to be reviewed by someone more senior.
The sheer volume being churned out is saturating reviewers’ ability to catch bugs and ensure quality. The review bottleneck doesn’t go away just because the code appeared faster.
And most vibe coders aren’t getting any review at all. They’re solo founders, indie hackers, startup teams moving fast and breaking things. Except the things they’re breaking are their users’ security.
The expert consensus is starting to emerge: treat AI-generated code like code from a junior developer. Review everything. Question the security implications. Don’t assume it’s correct just because it runs.
But that requires knowing what to look for. It requires understanding authentication flows and input sanitisation and SQL injection and XSS and all the other OWASP Top 10 vulnerabilities that AI happily introduces into your codebase.
If you don’t have that knowledge, you can’t review the code effectively. You’re just vibing and hoping.
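To show what that filter actually catches, here’s one more sketch. Asked to “display user comments”, assistants constantly reach for innerHTML; the element id and the attacker’s payload below are made up, but the pattern is everywhere:

```typescript
// The pattern AI assistants love: render user input with innerHTML.
// Works on every normal comment; stored XSS on the first hostile one.
function renderCommentUnsafe(comment: string): void {
  const list = document.getElementById("comments")!; // hypothetical element
  // comment = '<img src=x onerror="alert(document.cookie)">' runs in
  // every other visitor's browser
  list.innerHTML += `<p>${comment}</p>`;
}

// The fix is one property away, if you know to look for it:
// textContent treats input as data, never as markup.
function renderCommentSafe(comment: string): void {
  const list = document.getElementById("comments")!;
  const p = document.createElement("p");
  p.textContent = comment; // no HTML parsing, no script execution
  list.appendChild(p);
}
```

Both functions render comments. Only one of them doesn’t hand your users’ sessions to whoever posts first.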
I’ve been writing code for over two decades. AI tools make me faster at things I already know how to do. They help me write boilerplate, explore unfamiliar APIs, generate tests, and rubber-duck problems.
What they don’t do is replace the understanding I’ve built over years of making mistakes and fixing them.
When I use AI to generate code, I read it. I understand what it’s doing. I notice when it’s using an outdated pattern or missing error handling or introducing a security hole. I can do this because I’ve written similar code hundreds of times without AI assistance.
Vibe coders skip this part. They see working code and assume good code. They ship the first thing that passes a smoke test. They build startups on foundations they couldn’t explain if you asked them.
This is going to end badly for a lot of people. We’re going to see more breaches like Tea App. More platforms like Base44 discovering critical vulnerabilities after acquisition. More databases getting nuked by helpful AI assistants. More malware installed through hallucinated packages.
The solution isn’t to stop using AI tools. They’re genuinely useful and they’re only getting better. The solution is to stop pretending that prompting is programming.
Learn the fundamentals. Understand security basics. Know what your code does before you ship it. Use AI to accelerate work you could do yourself, not to replace understanding you never built.
AI-assisted coding is a superpower. Vibe coding is gambling with other people’s data.
Choose wisely.