In less than a week, Australia becomes the first democracy in the world to ban under-16s from social media. On December 10th, 2025, TikTok, Instagram, YouTube, X, Snapchat, Reddit, and Twitch will be legally required to boot millions of Australian teenagers off their platforms or face fines of up to $50 million per violation.
We are about to run a massive social experiment on an entire generation. And I do not think protecting children is the real goal here.
The Protect The Children Shield
Here is the genius of this legislation: if you criticise it, you are framed as supporting online predators. Do not like mass age verification? You must want children to be groomed. Concerned about surveillance infrastructure? Why do you hate kids so much?
It is the same playbook every time. Encryption backdoors were sold as catching terrorists. Metadata retention was about stopping child exploitation. Now mandatory identity verification is about protecting teens from cyberbullying. The stated justification is always something so morally unassailable that questioning the method makes you sound like a monster.
But let us actually think about what this law does. TikTok, Instagram, YouTube, X, Snapchat, Reddit, Twitch, Kick, and Threads are all banned for under-16s. You know what is not banned? Discord. WhatsApp. Roblox. Steam Chat. Pinterest. Messenger.
Discord. The platform teenagers actually use to chat these days. The platform where private servers can be set up with zero oversight. The platform that has had actual documented problems with predatory behaviour in private channels. The eSafety Commissioner confirmed on November 21st, 2025, that Discord is exempt because its primary purpose is not social interaction. Apparently private Discord servers where teenagers congregate do not count as social interaction.
So we are not actually blocking the places where harm happens. We are blocking the public-facing platforms where behaviour is at least somewhat visible and reportable, while leaving the private, encrypted, harder-to-moderate spaces completely untouched.
If your goal was genuinely protecting children, this makes no sense. If your goal was normalising identity verification infrastructure for major platforms, it makes perfect sense.
Pushing It Underground
Here is what is going to happen. Teenagers are not going to suddenly start reading books and playing outside. That is boomer fanfic. We are talking about a generation raised on digital-first identity, friendships, and belonging. We are telling them to revert to a world that no longer exists.
They are going to migrate. Discord servers will explode. New quasi-social platforms will pop up in spaces the legislation does not cover. VPNs will become standard issue for any tech-literate 14-year-old. I was finding ways around filters and blocks when I was a teenager in the early 2000s. This will be no different.
And the things we claim to be protecting kids from? Cyberbullying, predatory behaviour, harmful content? Those will not disappear. They will just move somewhere harder to see. Right now, if a kid is being harassed on Instagram, there is at least a paper trail. Reports can be filed. Accounts can be banned. Parents can check their kids’ phones and see what is happening.
Push all of that into Discord DMs, private Telegram groups, and whatever new platform fills the void, and suddenly you have lost all visibility. The bad actors do not stop existing because you made Instagram verify ages. They just get better at hiding.
Research actually backs this up. Studies have found that strict social media restrictions and bans result in feelings of isolation, foster rebellion against authority, and contribute to underdeveloped digital literacy skills. The current evidence on bans improving adolescent mental health is weak and inconclusive. One study found that time spent on social media is not even predictive of mental health outcomes. But sure, let us ban it anyway.
What happens to kids who rely on these platforms for genuine connection? Rural teenagers. Neurodivergent kids. LGBTQ+ youth in unsupportive households. Research shows that for teenagers who struggle to connect with peers offline, whether because they are geographically isolated or because they do not feel accepted in their communities, electronic connection can be lifesaving. We are about to rip that away overnight.
I genuinely wonder if this will lead to an increase in teenage depression and suicide by isolating children from their primary social structures. We are tearing away their dominant form of connection and telling them to just go outside and make friends. Kids will not suddenly form healthier offline communities because we willed it so.
This is the blocklist situation all over again. Remember when the government started blocking torrent trackers to stop piracy? Reasonable enough, people thought. Then the blocklist expanded. And expanded. Sites started getting added with minimal oversight. The infrastructure that was built for one narrow purpose kept growing because that is what these systems do.
We went from blocking The Pirate Bay to a sprawling censorship apparatus that most Australians do not even know exists. And now we are building the identity verification equivalent. When teenagers find workarounds, and they will, the government will tighten the reins further. How do you stop workarounds without going full China? You cannot. Not even China or North Korea can completely stop people from working around their blocks. They make it hard, but never truly impossible.
The Deliberately Vague Compliance Requirements
Here is something important that is not getting enough attention: the legislation does not actually specify how platforms are supposed to comply. It just says they need to take reasonable steps to prevent under-16s from having accounts, or face massive fines.
That is it. Figure it out yourselves, platforms. Good luck.
The eSafety Commissioner has approved seven methods for age verification:
- Photo ID uploads
- Facial recognition and age estimation
- Credit card verification
- Digital ID
- AI behavioural analysis
- Third-party verification services
- Some combination of the above
The government insists that platforms cannot mandate government ID for age verification. At least for now. But here is the clever part: every other method is inherently flawed, and the government knows it.
Facial age estimation has error margins of two to three years. A 13-year-old can pass as 16. A 16-year-old can be flagged as underage. When I was 16, I looked about 12. I was getting asked for ID well into my late 20s. Not every teenager looks their age.
So what happens when you get erroneously flagged as underage? You have to prove you are not. And how do you prove your age definitively? With ID.
The government has set up a system where ID verification is not mandatory, but becomes practically necessary the moment anything goes wrong. Platforms know this. They know facial scanning will generate false positives. They know the fines for non-compliance are up to $50 million. They know the safest option is the one backed by $288 million in government funding and a ready-made API.
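To make that incentive concrete, here is a minimal sketch of the fallback logic a risk-averse platform might land on. The age limit matches the law; the margin, function, and sample ages are my own illustrative assumptions, not anything from the legislation or any platform's actual system.

```python
# Illustrative sketch only, not any platform's real pipeline. It assumes the
# roughly two-to-three-year error margin in facial age estimation noted above.

AGE_LIMIT = 16      # legal minimum age under the ban
ERROR_MARGIN = 3    # assumed worst-case estimation error, in years

def needs_id_fallback(estimated_age: float) -> bool:
    """A platform facing $50 million fines can only trust an estimate that
    clears the limit even after subtracting the error margin."""
    return estimated_age - ERROR_MARGIN < AGE_LIMIT

for age in (14, 16, 17, 18, 19, 25):
    verdict = "escalate to ID check" if needs_id_fallback(age) else "accept estimate"
    print(f"estimated age {age}: {verdict}")
```

Under those assumptions, everyone the model scores below 19 gets pushed down the ID path, which sweeps up a lot of legitimate 16-, 17-, and 18-year-olds.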
The government does not have to mandate ID verification. They just have to make it the only reliable option and let platforms choose it voluntarily. Plausible deniability built right in.
The $288 Million Coincidence
Let us talk about myID. In 2024, the Australian Government allocated $288.1 million to the Digital ID system. That is an eleven-fold increase from the previous budget. The Digital ID Act 2024 came into effect on December 1st, 2024. One year and nine days before this social media ban kicks in.
The myID system now supports 76 different online services. The Digital Transformation Agency has been building APIs to let private sector entities plug into government identity verification. Banks and Australia Post are first in line, with full private sector access opening December 2026.
This is not conspiracy thinking. This is just reading the room. The government has tried to introduce national identity systems multiple times. The Australia Card in 1986. The Access Card in 2006. Both were rejected because Australians do not like the idea of showing papers to participate in society.
So they learned. You do not introduce it all at once. You build the infrastructure piece by piece, each step justified by something nobody can argue against. Stopping fraud. Preventing terrorism. Protecting children. By the time people realise what has been built, it is already everywhere.
The irony is not lost on me that this same government has spent millions on campaigns about understanding consent while simultaneously coercing users into a system where participation requires handing over personal data. Consent is important, unless the government really wants your information.
Your Data Will Be Breached
Here is what happened in the UK. On July 25th, 2025, their age verification laws came into force for adult websites. On that same day, news broke that the Tea app, a women's dating-safety app that required a selfie and photo ID to sign up, had been breached. Tens of thousands of images, including roughly 13,000 verification selfies and ID photos, were leaked and circulated on 4chan.
Not within three months. The same day.
VPN downloads in the UK surged over 1,400% as people tried to bypass the requirements. A petition to repeal the law gathered over 420,000 signatures, triggering a formal Parliamentary debate. Privacy advocates had warned this would happen. It happened faster than anyone expected.
Australia has already had massive data breaches at Optus, Medibank, and Latitude in recent years. Millions of Australians had their identity documents exposed. And the solution is to create more centralised identity verification touchpoints?
Here is my question: when your ID gets breached, and it will, what is the recourse? If you were compelled to provide your ID to participate in basic online services, who is responsible when that data ends up on some hacker forum? The government that built the system? The platform that implemented it? The third-party verification service that stored it?
The answer is nobody. You will get a form letter apologising for the inconvenience and maybe some free credit monitoring. Meanwhile your identity documents are circulating forever.
What If Platforms Just Leave?
Here is a scenario nobody seems to be discussing: what if platforms just refuse to comply and pull out of Australia?
We are a market of 26 million people. That sounds like a lot until you compare it to the compliance costs and legal liability of operating under these laws. If the fines are up to $50 million per breach and the verification systems are unreliable, at what point does it make more business sense to just geoblock Australia and move on?
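A rough back-of-envelope version of that calculus is below. Every figure in it is an assumption I have invented for illustration, not a reported number from any platform or regulator.

```python
# Hypothetical numbers only: the revenue, breach count, and compliance cost
# are invented for illustration, not figures from any platform or regulator.

MAX_FINE = 50_000_000                 # headline maximum penalty per breach
ASSUMED_BREACHES_PER_YEAR = 3         # enforcement actions a cautious platform budgets for
ASSUMED_AU_REVENUE = 120_000_000      # annual Australian revenue for a mid-sized platform
ASSUMED_COMPLIANCE_COST = 30_000_000  # building and running age assurance at scale

worst_case_exposure = MAX_FINE * ASSUMED_BREACHES_PER_YEAR + ASSUMED_COMPLIANCE_COST

print(f"Worst-case annual exposure: ${worst_case_exposure:,}")
print(f"Assumed Australian revenue: ${ASSUMED_AU_REVENUE:,}")
print("Geoblock Australia" if worst_case_exposure > ASSUMED_AU_REVENUE else "Comply and stay")
```

Plug in numbers anywhere near those and leaving starts to look like the rational business decision. The real inputs are anyone's guess, which is rather the point.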
If that happens, the children this legislation claims to protect will be even more isolated. And the ones who really want access will find it on lesser-known platforms with even less moderation. Research consistently shows that when you push people off mainstream platforms, some percentage end up radicalising on fringe alternatives. We could literally be creating the conditions for the exact harms we claim to be preventing.
The Incremental Play
I want to be clear about what I think is happening here. The government knows it cannot roll out mandatory digital ID for everything overnight. Australians would lose their minds. So instead, they are building it incrementally.
First, you make it available for government services. Voluntary, of course. Then you expand to banking and finance. Still voluntary. Then you add age verification requirements for adult content. Then social media. Then maybe online gambling. Then maybe alcohol purchases. Then maybe political speech that needs to be verified as coming from a real person.
Each step has a reasonable justification. Each step is technically voluntary. But the more services require age or identity verification, the more impractical it becomes to not have a Digital ID. Voluntary becomes effectively mandatory through a thousand small cuts.
Senator David Shoebridge pointed out something telling about this legislation. The actual enforcement mechanisms, the specific verification requirements, the technical standards - none of that goes through parliament. It gets decided by regulators and ministers without debate. The law that passed was just the door opener. The details get filled in later, away from public scrutiny.
What Would Actually Help
Maybe the solution is not banning teenagers from the internet. Maybe it is:
- Teaching social media literacy and online safety in schools
- Parents actually watching what their kids do online and having uncomfortable conversations with them
- Holding platforms accountable for algorithmic amplification of harmful content
- Mandating design changes that reduce addictive patterns
- Funding research into what actually works instead of rushing through legislation for political points
But that requires effort. That requires nuance. That does not make for a good headline about protecting children.
I think social media is toxic in many ways. There are real harms, especially for young people. The addictive design, the algorithmic rage-bait, the comparison culture. These are legitimate problems worth addressing.
But I am not, and never will be, a fan of government surveillance and centralised identity verification systems. The cure should not be worse than the disease. And building a national identity verification apparatus that will inevitably expand beyond its original scope is not protecting children. It is laying groundwork for something else entirely.
Where This Goes
Australia already has one of the most aggressive content takedown regimes in the democratic world through the eSafety Commissioner. We have mandatory metadata retention. We have laws that let the government compel companies to build encryption backdoors. And now we are adding mandatory identity verification infrastructure.
At some point you have to look at the pattern and ask what kind of internet we are building here. Every other democracy is watching Australia to see how this plays out. If it works, if the public accepts it, this model will spread.
I keep thinking about how China introduced its Cyberspace ID system under the banner of protecting citizens from fraud and cybercrime. The justifications sound remarkably similar. I am not saying Australia is becoming China. I am saying that the infrastructure being built is dual-purpose by design, and the history of these systems is that they always expand beyond their original scope.
December 10th is less than a week away. Millions of Australian teenagers are about to become test subjects in a world-first experiment. The politicians will pat themselves on the back for protecting children. The surveillance infrastructure will quietly grow. And somewhere, a bureaucrat is already drafting the next expansion of reasonable identity verification requirements.
I hope I am wrong about where this is heading. But I have watched this playbook run too many times to believe it stops here.