Australia’s internet regulator has accused the world’s largest social media companies of failing to adequately implement the country’s prohibition on under-16s accessing their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting shortcomings such as allowing banned users to repeatedly attempt age verification and weak safeguards against the creation of new underage accounts. In its first compliance report since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to stop under-16s from using their services.
Compliance Failures Exposed in First Formal Review
Australia’s eSafety Commissioner has documented a worrying pattern of non-compliance amongst the world’s biggest social media platforms in her first formal review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to implement adequate safeguards to stop minors from accessing their services. Julie Inman Grant expressed particular concern about structural gaps in age verification systems, highlighting that some platforms have allowed children who initially declared themselves under 16 to subsequently claim they were older, thereby undermining the law’s intent.
The findings indicate a notable intensification of regulatory action, with the eSafety Commissioner moving beyond monitoring to direct enforcement. The regulator has emphasised that the continued presence of some underage accounts is not itself the test; rather, platforms must provide concrete evidence that they have established robust systems and processes designed to prevent under-16s from opening accounts in the first place. This shift demonstrates the government’s determination to hold tech giants accountable, with possible sanctions looming for companies that do not meet their statutory obligations.
- Permitting previously banned users to re-attempt age verification and restore account access
- Allowing multiple attempts at the same age assurance method without penalty
- Weak systems for preventing under-16s from establishing new accounts
- Inadequate complaint mechanisms for parents and the general public
- Lack of transparent data about compliance actions and account removals
The Scope of the Challenge
The substantial scale of social media activity amongst young Australians underscores the regulatory challenge confronting both the government and the platforms themselves. With millions of accounts already restricted or removed since the ban’s implementation, the figures paint a picture of widespread initial non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to implementing age restrictions have proved considerably more complex than anticipated, with platforms struggling to differentiate authentic age declarations from false claims. This complexity has left enforcement authorities wrestling with the fundamental question of whether existing age verification systems are fit for purpose.
Beyond the technical obstacles lies a broader concern about the willingness of companies to prioritise compliance over user growth. Social media companies have consistently opposed strict identity verification requirements, citing data protection worries and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be investing adequately in the legally mandated infrastructure. The move to active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance systems, or they risk facing substantial fines that could reshape their business models in Australia and possibly affect regulatory approaches internationally.
What the Figures Indicate
In the first month after the ban’s launch, Australian officials reported that 4.7 million accounts had been restricted or deleted. Whilst this figure initially appeared to demonstrate successful compliance, subsequent analysis reveals a more nuanced picture. The sheer volume of account takedowns implies that many under-16s had successfully created accounts in the first place, revealing that preventive controls were lacking. Furthermore, the data raises questions about whether the removed accounts reflect genuine enforcement or merely users voluntarily closing their accounts in response to the new restrictions.
The lack of transparency regarding these figures has frustrated independent observers trying to determine the ban’s genuine effectiveness. Platforms have revealed minimal information about their implementation approaches, success rates, or the characteristics of removed accounts. This opacity makes it challenging for regulators and the wider public to determine whether the ban is working as intended or whether young people are simply finding alternative ways to access social media. The Commissioner’s push for detailed evidence of systematic compliance measures reflects mounting dissatisfaction with platforms’ unwillingness to share full details.
Industry Response and Pushback
The social media giants have responded to the regulator’s enforcement action with a mixture of assurances of compliance and doubts regarding the ban’s practicality. Meta, which operates Facebook and Instagram, stressed its commitment to complying with Australian law whilst simultaneously arguing that accurate age determination remains a significant industry-wide challenge. The company has called for an alternative approach, suggesting that strong age verification systems and parental consent requirements implemented at the app store level would be more effective than enforcement at the platform level. This position reflects wider concerns across the industry that the existing regulatory framework places an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, announcing that it had suspended 450,000 accounts following the ban’s implementation and asserting that it continues to suspend additional accounts each day. However, industry observers dispute whether such figures reflect genuine compliance or merely reactive account management. The fundamental tension between platforms’ commercial models, which traditionally depend on maximising user engagement and growth, and the statutory obligation to actively exclude an entire age demographic remains unresolved. Companies have consistently opposed stringent age verification, citing privacy issues and technical constraints, creating a standoff between regulators and platforms over who bears responsibility for implementation.
- Meta argues age verification ought to take place at app store level instead of on individual platforms
- Snap claims to have suspended 450,000 accounts following the ban’s implementation in December
- Industry groups highlight privacy issues and technical obstacles as barriers to effective age verification
- Platforms insist they are making good-faith efforts whilst questioning the ban’s overall effectiveness
Wider Questions About the Ban’s Effectiveness
As Australia’s under-16 online platform ban moves into its implementation stage, fundamental questions remain about whether the law will achieve its stated objectives or merely push young users towards unregulated platforms. The regulatory authority’s initial compliance assessment reveals that following implementation, substantial gaps remain—children keep discovering ways to circumvent age verification mechanisms, and platforms have struggled to prevent new underage accounts from being created. Critics contend that the ban’s success depends not merely on regulatory oversight but on whether young people will truly leave mainstream platforms or simply migrate to alternative services, encrypted messaging applications, or VPNs designed to mask their age and location.
The ban’s international ramifications add another layer of complexity to assessments of its success. The United Kingdom, Canada and several European countries are observing Australia’s experiment closely, considering similar laws for their own populations. If the ban proves ineffective at reducing children’s online activity or cannot protect them from harmful material, it could undermine the case for similar measures elsewhere. Conversely, if implementation proves sufficiently strict to effectively limit underage usage, it may encourage other nations to adopt similar strategies. The outcome could shape worldwide regulatory patterns for the foreseeable future, ensuring Australia’s enforcement efforts will be scrutinised far beyond its borders.
Who Gains and Who Is Disadvantaged
Mental health advocates and child safety organisations have endorsed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators argue that taking young Australians off platforms designed to maximise engagement could lower anxiety levels, improve sleep quality, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people: maintaining friendships, accessing educational material, and engaging with online communities around shared interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families question.
The ban’s real-world effects reach beyond individual users to content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that rely on social media marketing are cut off from younger audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unexpectedly benefits large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the simple goal of child protection.
What Follows for Compliance Monitoring
Australia’s eSafety Commissioner has signalled a significant shift from hands-off observation to active enforcement, marking a critical turning point in the rollout of the under-16 ban. The regulator will now compile information to determine whether services have failed to take “reasonable steps” to block minors from using their platforms, a legal standard that goes further than simply recording that minors continue using them. This approach requires demonstrable proof that companies have introduced proper safeguards and procedures designed to prevent underage access. The regulator has indicated it will conduct enquiries carefully, building cases that could trigger substantial penalties for non-compliance. This transition from oversight to action reveals growing dissatisfaction with the platforms’ existing measures and signals that voluntary cooperation alone will not be enough.
The enforcement stage raises significant questions about the adequacy of sanctions and the practical mechanisms for holding tech giants accountable. Australia’s legislation provides enforcement mechanisms, but their effectiveness depends on the eSafety Commissioner’s readiness to pursue formal proceedings and the platforms’ capacity to respond effectively. Overseas authorities, particularly regulators in the UK and EU, will closely track Australia’s enforcement strategy and results. A successful enforcement campaign could establish a template for other nations contemplating similar bans, whilst failure might undermine the broader regulatory model. The next phase will determine whether Australia’s pioneering regulatory approach translates into real safeguards for adolescents or remains largely symbolic in its impact.
