Australia's Nuclear Option: Block AI Apps That Don't Verify
Australia's internet regulator is threatening to make AI chatbots disappear from app stores and search results. More than half of major AI services haven't said a word about complying. The deadline is next week.
The eSafety Commissioner, Australia's top online safety watchdog, issued a stark warning on Sunday: if AI services don't verify user ages, the regulator may compel Apple's App Store, Google Play, and major search engines to block them entirely. It's the sharpest escalation yet by any government against AI platforms on child safety, and it targets the pipes - not just the products.
The mechanics matter here. Australia already forced age-assurance compliance onto search engines like Google and Microsoft Bing starting December 2025. As of March 2026, those same requirements extend to AI chatbots, app stores, and pornographic sites. The rules don't just say "check ages." They require documented, public-facing evidence that platforms have actually done it.
Reuters reviewed the compliance status of major AI services and found over half hadn't published any steps toward meeting the deadline. That's not a close call. That's silence from platforms that collectively handle hundreds of millions of conversations per day.
Regulators have spent years playing whack-a-mole with social media over youth harm. Australia passed a ban on under-16s using social platforms in late 2024. But researchers have been flagging something new: AI chatbots may be more harmful to adolescent mental health than social media ever was.
Social media shows teenagers curated content from other people. AI chatbots talk back. They adapt, personalize, and can sustain conversations that mimic emotional relationships. Several ongoing lawsuits against AI companies allege that their platforms failed to prevent - and in some cases actively escalated - exchanges involving self-harm and violence with minors.
The second-order effect here is architectural. If eSafety actually forces app stores to block non-compliant AI services, it would mark the first time a government has weaponized distribution infrastructure - not content rules - to regulate AI. Apple and Google become enforcement arms of national child safety law.
That's a different kind of pressure than a fine or a content moderation order. Getting pulled from the App Store is existential for most AI startups. Even a credible threat moves the negotiating position dramatically.
What does age-assurance actually mean in practice? Australia's framework doesn't mandate a specific method, but it expects platforms to use techniques that are "proportionate and privacy-preserving" - think age estimation via device data, credit card checks, or third-party verification services. The point is you have to try, and you have to show your work publicly.
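To make that concrete, here's a minimal sketch of what a layered age-assurance check might look like. The method names, age threshold, and confidence values are illustrative assumptions, not anything eSafety prescribes; the framework only asks that a platform use a proportionate method and be able to document the outcome.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class AssuranceMethod(Enum):
    # Illustrative methods only; eSafety does not mandate a specific technique.
    DEVICE_SIGNAL_ESTIMATE = "device_signal_estimate"   # e.g. OS-level age signal
    PAYMENT_CARD_CHECK = "payment_card_check"           # cardholders are presumed adults
    THIRD_PARTY_VERIFIER = "third_party_verifier"       # external ID/age service


@dataclass
class AssuranceResult:
    method: AssuranceMethod
    estimated_minimum_age: Optional[int]  # None if the method produced no signal
    confidence: float                     # 0.0 - 1.0, method-specific


def is_access_permitted(results: list[AssuranceResult],
                        required_age: int = 16,
                        min_confidence: float = 0.9) -> bool:
    """Return True if any single check clears the age bar with enough confidence.

    A real system would also record each result, since the Australian rules
    expect documented, public-facing evidence that the check was performed.
    """
    for result in results:
        if result.estimated_minimum_age is None:
            continue  # this method produced no usable signal; try the next one
        if (result.estimated_minimum_age >= required_age
                and result.confidence >= min_confidence):
            return True
    # No method cleared the bar: block access or escalate to a stricter check.
    return False


# Example: a device-level estimate is inconclusive, but a card check passes.
checks = [
    AssuranceResult(AssuranceMethod.DEVICE_SIGNAL_ESTIMATE, None, 0.0),
    AssuranceResult(AssuranceMethod.PAYMENT_CARD_CHECK, 18, 0.95),
]
print(is_access_permitted(checks))  # True
```

The escalation pattern - cheap, low-friction signals first, stronger checks only when those fail - is roughly what "proportionate and privacy-preserving" implies in practice.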
The silence from more than half of AI services suggests one of three things: compliance is harder than expected, legal teams are still assessing exposure, or some companies are gambling on enforcement being toothless. The eSafety warning was designed to correct that last assumption.
Australia has form here. The country passed the world's first social media age ban for minors, with enforcement teeth attached. eSafety has already fined companies for non-compliance with previous codes. The regulator doesn't bluff as often as its US or EU counterparts.
Watch for two ripple effects. First, if Australia forces app store enforcement, the EU and UK will take notice. The Online Safety Act in the UK already has latent powers in this direction. The question has always been political will. A functioning precedent from Australia removes a key objection.
Second, this rewrites the risk calculus for AI companies operating globally. Building an age-gating system is no longer just a PR exercise - it's table stakes for distribution. Companies that treated youth safety as a compliance footnote are now facing the prospect of getting cut off from the channels that deliver most of their users.
The deadline is March 2026. AI companies have a week to respond publicly. Most haven't started talking yet.