Microservices were a mistake. Not the concept itself, which has legitimate uses at genuine scale. The mistake was convincing an entire generation of developers that their CRUD app needed to be split into 47 independently deployable services communicating over a message bus designed by someone who watched one too many conference talks.
I said what I said. Come at me.
Netflix ruined everything
This is Netflix’s fault. Not intentionally. They shared how they solved their genuinely massive scale problems, and the rest of the industry collectively lost its mind.
Suddenly every startup with twelve users and a dream needed a service mesh. Every team with five developers needed Kubernetes. Every application that could have been three database tables became a distributed system requiring a PhD to debug.
Netflix has thousands of engineers and hundreds of millions of users. They needed microservices because a monolith literally could not handle their scale.
You are not Netflix. I am not Netflix. The company you work for is almost certainly not Netflix (well, it might be, but probably not). But we all pretended we were, and now half the industry is drowning in unnecessary complexity.
The term for this is cargo culting.
Pacific islanders who saw planes land during World War II built fake runways hoping planes would come to them too. Developers who saw Netflix succeed with microservices built fake distributed systems hoping success would come to them too. It did not. The planes did not land. The services just kept multiplying.
You turned one problem into twenty
Here is what a monolith gives you. One thing to deploy. One thing to monitor. One thing to debug. One database to query. One log file to read. Function calls between components that happen in microseconds with zero network latency and zero chance of packet loss.
Here is what microservices give you. Fifty things to deploy. Fifty things to monitor. Fifty things that can fail independently at 3am. Network calls between components that can timeout, retry, fail silently, fail loudly, or succeed on the third attempt in a way that causes duplicate processing.
Distributed tracing that you set up incorrectly so you cannot actually follow a request through the system. Logs scattered across dozens of services that you have to correlate manually because your observability platform costs more than your engineering salaries.
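If that sounds abstract, here is a rough sketch of the trade in code. Everything below is invented for illustration: the service name, the endpoint, the two-second timeout. The point is what happens to a simple function call once it has to cross a network.

```python
import requests  # assuming plain HTTP between services; any client with timeouts works

# Monolith: calling the billing code is a function call. No network, no partial
# failure, no retry policy. It returns or it raises, in microseconds.
def charge_in_process(billing, customer_id: str, amount_cents: int):
    return billing.charge(customer_id, amount_cents)

# Microservices: the same call crosses the network. Now you own timeouts, retries,
# and the question of whether a timed-out request actually succeeded anyway.
def charge_over_http(base_url: str, customer_id: str, amount_cents: int, retries: int = 3) -> dict:
    last_error = None
    for _ in range(retries):
        try:
            resp = requests.post(
                f"{base_url}/charge",  # hypothetical endpoint on a hypothetical billing service
                json={"customer_id": customer_id, "amount": amount_cents},
                timeout=2.0,  # seconds; guess this number wrong and you get the debugging story below
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            # The charge may have landed before the timeout fired. Retrying without
            # an idempotency key is how you bill a customer three times.
            last_error = exc
    raise RuntimeError(f"billing unreachable after {retries} attempts") from last_error
```

One of these is a line of code. The other is a distributed systems problem wearing a line of code as a costume.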
You took one problem and made it twenty problems. Congratulations. Your architecture diagram looks very impressive in the investor deck.
The five person team running fifty services
I have seen this. Multiple times. A team of five engineers responsible for fifty microservices. They spend more time on deployment pipelines than features. They spend more time debugging network issues than writing code. Every change requires updating three services, coordinating deployments, and praying nothing breaks in between.
These teams would ship twice as fast with a monolith. They know it. Their managers know it. But nobody wants to admit that the architecture decision was wrong because admitting that means admitting they wasted two years migrating away from something that worked fine.
So they keep going.
They add more tooling. They hire a platform team to manage the complexity they created. They write internal documentation that nobody reads explaining which service talks to which other service. They build a service catalogue so people can find things. They are now maintaining infrastructure instead of building product. This is called success in some circles.
Kubernetes is not helping
You know what makes microservices even better? Running them on Kubernetes. Now you have fifty services AND a container orchestration platform that requires its own dedicated team to understand.
Kubernetes is an incredible piece of technology that solves problems you do not have. It was built by Google to run Google-scale infrastructure.
You are not running Google-scale infrastructure. You are running a web application that could comfortably live on a single server but instead spans three clusters across two regions because someone said the words high availability in a meeting once.
I have watched teams spend months getting Kubernetes right. Configuring ingress. Setting up service discovery. Debugging networking policies. Wrestling with persistent volumes. Learning Helm. Learning Kustomize. Learning that Helm and Kustomize do not play nice together. All so they can run an application that previously worked fine on a $20 VPS.
The cloud providers love this. You are paying for compute, storage, networking, load balancers, managed Kubernetes fees, and probably a service mesh license. They are thrilled. Your CFO less so, once they work out where the money is going.
Debugging distributed systems is a nightmare
Something is wrong in production. A request is failing intermittently. In a monolith, you add some logging, reproduce the issue, read the logs, find the bug. Maybe an hour of work if it is tricky.
In microservices, you start by figuring out which service is involved. Then you realise multiple services are involved. You check the logs for service A. The request looks fine. You check service B. Also fine. Service C? Fine. The message queue? Messages are flowing. The database? No errors.
You spend three hours correlating timestamps across services. You discover the issue happens when service B responds slowly, causing service A to timeout, but not always, only under certain load conditions that you cannot reproduce locally because your laptop cannot run fifty services simultaneously.
You eventually find the bug. It is a misconfigured timeout. A single number in a config file. You have lost a day. In a monolith this would have been a fifteen-minute fix, maybe an hour, not a full day.
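If you want to see why it only showed up under load, here is a toy model of that failure mode. Every number in it is invented; the shape of the problem is not.

```python
import random

A_TIMEOUT = 2.0  # seconds; the single misconfigured number from the story above

def b_latency(load: float) -> float:
    """Pretend latency for service B: a quick median plus a tail that grows with load."""
    base = random.uniform(0.1, 0.3)
    tail = random.expovariate(1.0) * load  # heavier tail when the system is busier
    return base + tail

def timeout_rate(load: float, samples: int = 10_000) -> float:
    """Fraction of simulated requests where B answers slower than A is willing to wait."""
    return sum(b_latency(load) > A_TIMEOUT for _ in range(samples)) / samples

print(f"quiet afternoon: {timeout_rate(load=0.1):.2%} of requests time out")
print(f"peak traffic:    {timeout_rate(load=0.8):.2%} of requests time out")
```

Run one request at a time on your laptop and you will never see it. At peak traffic a visible slice of requests dies, and nothing in any single service's logs looks wrong.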
This is not a hypothetical. This is every week for teams running complex microservice architectures. They have normalised the pain. They think this is just how software works now. It is not. They chose this.
Resume driven development
Let's be honest about why this happened.
Microservices look good on a resume. Kubernetes looks good on a resume. Service mesh, event sourcing, CQRS, saga patterns, all of it looks good on a resume.
A monolith does not look good on a resume. Nobody gets hired at a trendy startup by saying they maintained a well-structured Rails application that handled millions of requests without drama. That sounds boring. That sounds like you do not know modern architecture. That sounds like you are stuck in 2010.
So developers push for microservices even when they are unnecessary. Architects draw complex diagrams because simple diagrams look lazy. Teams adopt technologies they do not need because the alternative is explaining to recruiters why their stack is not cutting edge.
This is how we ended up here. Not because microservices were the right choice. Because microservices were the impressive choice. The industry optimised for looking smart instead of being effective.
The pendulum is swinging
Good news. People are starting to admit this was a mistake.
The hot take of 2020 was microservices everything. The hot take of 2024 was maybe monoliths are fine. The hot take of 2026 is that we wasted half a decade on complexity that bought us nothing.
Amazon's Prime Video team published a case study about moving part of their stack from microservices back to a monolith and cutting costs by 90%. DHH has been screaming about this for years and people are finally listening. Teams are consolidating services. The majestic monolith is making a comeback.
It turns out that a well-structured monolith, with clear module boundaries and good separation of concerns, gives you most of the benefits of microservices without the operational nightmare. You can still deploy independently if you set it up right. You can still scale horizontally. You just do not have to debug network partitions at 3am.
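None of this is exotic. Here is a rough sketch, with invented names, of what a module boundary inside one deployable can look like: the rest of the app talks to a small interface, so extracting the module later is a refactor, not a rewrite.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int

class Billing(Protocol):
    """The only surface the rest of the app is allowed to touch."""
    def create_invoice(self, customer_id: str, amount_cents: int) -> Invoice: ...

class InProcessBilling:
    """Today: a plain in-process implementation living in the same codebase."""
    def create_invoice(self, customer_id: str, amount_cents: int) -> Invoice:
        return Invoice(customer_id=customer_id, amount_cents=amount_cents)

def checkout(billing: Billing, customer_id: str, cart_total_cents: int) -> Invoice:
    # Call sites depend on the interface. If billing ever genuinely needs to be
    # its own service, you swap the implementation, not every caller.
    return billing.create_invoice(customer_id, cart_total_cents)

print(checkout(InProcessBilling(), "cust_42", 1999))
```

That is the whole trick. Boundaries in code are cheap to draw and cheap to move. Boundaries over a network are neither.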
When microservices actually make sense
I am not saying microservices are always wrong. They make sense when you have genuinely independent teams that need to deploy on different schedules. They make sense when parts of your system have radically different scaling requirements. They make sense when you are Netflix, or Uber, or Amazon, and a monolith literally cannot handle your load.
For everyone else, they are probably a mistake.
If your team is under fifty people, you probably do not need them. If your application handles under a million requests per day, you probably do not need them. If you cannot articulate a specific problem that microservices solve for you, you definitely do not need them.
Build a monolith. Structure it well. Extract services later if you actually hit the problems that services solve. Do not preemptively optimise for scale you will never reach. Do not build Netflix infrastructure for your startup that might not exist in two years.
Your investors do not care about your architecture. Your users do not care about your architecture. The only people who care about your architecture are the engineers maintaining it, and they would probably prefer something simpler.
Give them simpler. Stop making everything so complicated. The industry will thank you.