Evaluating the Policy Effectiveness of Australia’s Social Media Restrictions for Minors

In November 2024, the Australian Parliament passed an amendment to the Online Safety Act 2021. The Online Safety Amendment (Social Media Minimum Age) Act 2024 is the first law in the world to establish a national minimum age for social media use and to place an obligation on providers of an ‘age-restricted social media platform’ to take reasonable steps to prevent age-restricted users from having an account with the platform.

Essentially, the amendment prevents children under 16 from having an account on an “age-restricted social media platform”. The Act defines an “age-restricted social media platform” as “an electronic service that…enables online social interaction between 2 or more end users, allows end-users to link or interact… or to post material on the service.” Social media companies have been given some time to transition into compliance, and implementation is expected to begin by the end of 2025.

The Minister for Communications, Ms Michelle Rowland, in her second reading speech, highlighted that “almost two-thirds of 14- to 17-year-old Australians have viewed extremely harmful content online, including drug abuse, suicide or self-harm, as well as violent material. A quarter have been exposed to content promoting unsafe eating habits.”

She said, “It is the right thing to do for the right reasons at the right time.”

Stakeholder opinions, however, have been deeply divided over this ban. While some praise it as a sound approach to dealing with an “anxious and addicted generation”, others dismiss it as a sham that creates problems of exclusion and is largely unenforceable.

This commentary will neither criticise nor praise the legislative effort. On the one hand, I understand the dire situation that has driven the Australian government to such extreme ends to protect its children. I also appreciate the boldness to think outside the box rather than skirt around talking points and endless debates that barely move the needle – a tiring pattern in global child online safety policy forums and spaces. So, I truly applaud that. On the other hand, I share the concerns many experts have voiced about enforcement and the privacy risks of mandatory age verification.

This commentary discusses the policy effectiveness of the Act, highlighting three major problems it will encounter in implementation: age verification, circumvention and enforcement. It also casts a vision for, perhaps, a smarter policy roadmap for child online safety.

Three Fundamental Challenges

Age Verification

For this policy to be effective, age verification is a precondition, and that precondition immediately raises tension around privacy protection. The age-old question remains: how do we effectively verify a user’s age without demanding intrusive data or compromising privacy? To date, the three primary age verification models have been self-declaration, third-party verification (such as government-issued ID or biometric data), and AI-driven behavioural analysis.
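To make the trade-offs concrete, here is a minimal sketch of how those three models might sit behind a single age check. It is purely illustrative – the field names, threshold and fallback behaviour are my own assumptions, not any platform’s actual implementation.

```python
# Illustrative sketch only: hypothetical age-check flow, not a real platform API.
from dataclasses import dataclass

@dataclass
class AgeSignal:
    declared_age: int | None = None      # self-declaration (trivially falsified)
    verified_dob_age: int | None = None  # government ID / third-party check (intrusive)
    inferred_age: float | None = None    # AI / behavioural analysis (probabilistic, contested)

MIN_AGE = 16  # the Australian threshold

def is_age_restricted(signal: AgeSignal) -> bool:
    """Return True if the user should be treated as under the minimum age.

    Each branch corresponds to one of the three verification models discussed
    above, and each inherits that model's weakness: intrusiveness, uncertainty
    or blind trust.
    """
    if signal.verified_dob_age is not None:   # third-party verification
        return signal.verified_dob_age < MIN_AGE
    if signal.inferred_age is not None:       # behavioural inference
        return signal.inferred_age < MIN_AGE  # subject to false positives and negatives
    if signal.declared_age is not None:       # self-declaration
        return signal.declared_age < MIN_AGE  # easily bypassed by lying
    return True                               # no signal: fail closed, which also blocks adults
```

Even in this toy form, the dilemma is visible: the only reliable branch is the one that demands the most personal data.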

All three have been criticised as highly ineffective, highly intrusive or highly controversial. For the Australian regulation to be both effective and safe, a more intrusive data collection system would need to be in place – one that somehow also preserves privacy.

Global precedents show that attempts to legislate age verification at scale face technical, legal and ethical barriers, exposing users to other risks such as data breaches, surveillance and identity fraud.

Circumvention

Very closely connected to the weakness of age verification is the possibility of circumvention. Because digital services are not monitored by physical measures, there are opportunities for workarounds and technical bypasses, especially given regulatory asymmetries across jurisdictions and platforms.

For instance, an under-16 in Australia may simply use a VPN to appear to be connecting from the United States and gain access, even where the age verification itself is accurate. To insist on this policy and make it effective, the Australian government would, by implication, have to go further and ban VPNs, proxy servers, encrypted networks and shared family accounts – which would be a complete policy disaster if done. And this is no strange direction. Utah, to ensure the effectiveness of its age verification law, requires that a “social media company shall not permit a Utah minor account holder to change or bypass restrictions on access as required by this section”, meaning that platforms would need to treat all VPN users as Utah residents, effectively blocking the use of VPNs altogether – a position that is difficult to justify!

With the risk of circumvention comes the risk of pushing minors into unregulated spaces – itself a form of circumvention. This displacement effect may cause kids to migrate to smaller, less-moderated, “less-watched” and yet higher-risk platforms, increasing their chances of encountering more harmful and predatory content.

Practical Enforceability

In addition to all of this, it is unclear who is responsible for what. Who bears the burden of enforcing the ban? Of course, the responsibilities of platforms are clear: ensure that age-verification systems, data collection measures and content access filters are in place to prevent the admission of minors. But what if children are still able to access social media despite these? Who will bear the brunt? Parents? If so, socioeconomic and literacy disparities mean some families are better equipped than others to supervise children’s digital access, potentially creating an unequal regulatory burden.

[Perhaps] a Smarter Policy Roadmap for Child Online Safety

The devil is not just in the details; he is also in the extremes. 

The Australian government has taken an extreme protectionist/prohibitionist approach to safeguarding its children. This is well-intentioned but fails to address the root causes of digital harm. The approach tackles online harms solely from the user’s end: get the user off! In a perfect world, the user would stay off and all would be well. However, the analysis above and history have shown that this is not reality. Children – perhaps in reduced numbers – would still be on social media. And if social media platforms in Australia are allowed to operate on the assumption that there are no children on their platforms, then the harms for the children who are actually there are even greater.

A better policy approach would be a mix of education, platform accountability, and a more nuanced (not absolute) form of restriction.

Education

Digital literacy must be a strong pillar in the policy approach to child safety. Harm in online spaces is a product of both access and a lack of preparedness. Restricting access alone (which we have already deemed ineffective) will not be enough. Children and parents need to be equipped with the skills to be digitally resilient. Forward-thinking regulation should therefore be geared towards a digitally literate generation that can safely navigate social platforms. Once they turn 16, they will be exposed to the ills of the online world, and they will need to be literate enough to thrive there.

Rather than delegating online safety solely to parents, policy should give schools a direct role in equipping students with practical, non-political, age-appropriate digital skills such as algorithmic awareness, privacy literacy and online etiquette.

Platform Accountability

There is no world in which platforms should be handed the bare-minimum task of simply keeping children off their services. Platforms should be required to implement child-friendly settings by default – for instance, private-by-default accounts for children below 16, or complete restriction of algorithmic content suggestions (‘For You’ pages).
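As a rough illustration of what “safe by default” could mean in practice, the sketch below shows hypothetical account defaults that tighten for under-16 users; the setting names are invented for the example and do not correspond to any platform’s real configuration.

```python
# Hypothetical sketch of safe-by-default account settings for users under 16.
# Setting names are illustrative only; no real platform API is implied.

def default_settings_for(age: int) -> dict:
    """Return account defaults, tightened automatically for under-16 users."""
    is_minor = age < 16
    return {
        "profile_private": is_minor,                   # private-by-default for minors
        "algorithmic_recommendations": not is_minor,   # no 'For You'-style feed for minors
        "direct_messages_from_strangers": not is_minor,
        "personalised_advertising": not is_minor,
    }

print(default_settings_for(14))
# -> {'profile_private': True, 'algorithmic_recommendations': False,
#     'direct_messages_from_strangers': False, 'personalised_advertising': False}
```

The point of such defaults is that safety does not depend on a child (or parent) finding and flipping the right switches.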

Nuanced Restriction

Rather than imposing absolute prohibitions, a more nuanced and proportionate regulatory model should calibrate restrictions based on risk level, age, and platform engagement dynamics.

Consider, for instance, China’s approach of limiting gaming hours for minors, as opposed to banning gaming altogether. A ‘digital curfew’ or time-based cap can allow for controlled access rather than prohibition.

Restrictions can also be graduated by age, with platforms required to implement a tiered content exposure system: younger users are progressively introduced to different digital experiences, with room for parental overrides.
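A rough sketch of what such calibration could look like follows. The age tiers, curfew window and override flag are invented for illustration and are not drawn from any existing regulation.

```python
# Illustrative sketch of graduated, time-aware access rules (hypothetical values).
from datetime import time

CURFEW_START, CURFEW_END = time(22, 0), time(6, 0)   # example overnight 'digital curfew'

def access_level(age: int, now: time, parental_override: bool = False) -> str:
    """Map age and time of day to an access tier instead of a binary allow/deny."""
    if age < 13:
        return "supervised"   # curated, heavily moderated experience only
    if age < 16:
        if not parental_override and (now >= CURFEW_START or now < CURFEW_END):
            return "curfew"   # time-based cap: access paused overnight
        return "limited"      # no algorithmic feed, stricter contact settings
    return "full"

print(access_level(14, time(23, 30)))                         # 'curfew'
print(access_level(14, time(23, 30), parental_override=True)) # 'limited'
print(access_level(17, time(23, 30)))                         # 'full'
```

Even this toy version shows the shift in philosophy: the regulatory question becomes “how much, when, and under whose supervision” rather than a single yes/no gate at the front door.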

Conclusion

A regulatory measure is only as effective as it is enforceable. For a policy to be effective, it must be feasible.

Even the Australian government is seeing loopholes in its strict ban and has indicated that YouTube may be given an exemption because of its educational uses. Other platforms, such as TikTok and Meta’s services, have protested this, arguing that YouTube carries all the purported risks the policy was made to address – despite its apparent educational value.

The reality of this easing of the ban reveals a fundamental flaw in [strict] access-based regulations: digital ecosystems do not operate in binary categories of “safe” and “unsafe”. Selective enforcement, ipso facto, creates regulatory instability and policy fragmentation. It raises questions about the criteria for platform exemption: what is ‘educational content’, and why is YouTube the only social media platform that houses it?

The Australian government has done something bold and has widened the conversation on child online safety, allowing for more deliberation on what is effective and what is not. But there is a need for balance.

And this balance will require that policymakers recognise that:

  • Risk is platform-agnostic. If children are kept off social media, they will encounter harm on gaming platforms or other digital services.
  • Balance matters. Restrictions are not evil; they just have to be proportionate. 
  • Empowerment is powerful. Educating parents, caregivers and children is not a weak approach to online safety. 
