The Australian Government’s Online Safety Amendment (Social Media Minimum Age) Bill 2024 has sparked significant backlash from major social media and tech platforms, with concerns raised over human rights, privacy risks, a rushed inquiry and vague definitions.
The inquiry into the government’s proposed laws to ban children and young teenagers from social media accepted submissions for just 24 hours and was still inundated with around 15,000 of them. The Bill was referred to a Senate committee on Thursday, with submissions closing on Friday; a half-day public hearing took place on Monday, and the committee is due to release its report on Tuesday.
The Albanese government’s proposed changes include sweeping reforms to shield children under 16 from the perceived harms of social media, citing growing concerns about the mental health effects on young Australians. Under the new laws, platforms such as X, TikTok, Facebook, Instagram, Snapchat, and Reddit would be required to block under-16s from creating accounts, while educational platforms like YouTube would be exempt.
While the Bill seeks to safeguard young Australians online, industry leaders have raised a host of concerns.
Snap
Snap Inc. raised significant concerns over the Bill, emphasising the potential unintended consequences of the legislation. In its submission, Snap argued that while safeguarding children online is a critical priority, the Bill’s rushed process, vague guidelines, and reliance on incomplete technologies could harm young people and fail to achieve its intended goals.
Snap highlighted the lack of consensus among experts about banning minors from social media, with “significant criticism and widespread opposition among experts both in Australia and globally”.
- Over 100 academics, child rights organisations, and mental health advocates have opposed such bans, warning they could isolate young people from essential mental health and wellbeing resources.
- Additionally, the Government-established bipartisan Joint Committee on Social Media and Australian Society refrained from recommending a ban in its final report.
- International regulators, including Norway’s Committee on Screen Time, have similarly found no evidence to justify age-based bans on social media access.
The company warned that driving minors away from mainstream, regulated platforms could push them toward less safe, unregulated digital spaces.
Snap also criticised the Bill’s heavy reliance on unproven age verification technologies. The government had previously acknowledged in its Age Assurance Roadmap that the age verification market remains immature, with significant privacy, security, and implementation challenges.
Despite this, and in the absence of significant technological advancement since, the Bill requires platforms to implement unspecified “reasonable steps” for age assurance, Snap noted, leaving companies to navigate unclear requirements and potential penalties.
Snap pointed to recent data breaches as evidence that current verification systems are not equipped to handle the sensitive data required under the Bill, and argued that, by shifting responsibility onto platforms, the legislation fails to provide a workable or privacy-preserving framework.
As a more effective alternative, Snap proposed device-level age verification. This approach leverages the age data already collected during device setup (e.g., iPhone or Android registration) and makes it available to apps and platforms. Snap argued this solution would reduce the need for repeated data submissions, minimise privacy risks, and streamline age assurance for users and parents. The company likened the approach to real-world systems such as alcohol sales, where age verification is conducted at the point of sale rather than by individual product manufacturers.
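To make the device-level model concrete, here is a minimal sketch, in Kotlin, of how a platform might consume such a signal. It is purely illustrative: no operating system exposes an API like this today, and every name in it (AgeBracket, DeviceAgeSignal, canCreateAccount) is invented for the example.

```kotlin
// Hypothetical sketch only: no such OS-level API exists today. It illustrates
// the device-level model Snap describes, where the OS shares a coarse age
// signal gathered at device setup instead of each app collecting documents.
enum class AgeBracket { UNDER_13, UNDER_16, SIXTEEN_PLUS, UNKNOWN }

// Stand-in for an operating-system service exposing the age bracket
// entered during device registration, without revealing a birth date.
interface DeviceAgeSignal {
    fun ageBracket(): AgeBracket
}

class FakeDeviceAgeSignal(private val bracket: AgeBracket) : DeviceAgeSignal {
    override fun ageBracket(): AgeBracket = bracket
}

// A platform gates account creation on the shared signal; the user never
// re-submits identity data to each individual app.
fun canCreateAccount(signal: DeviceAgeSignal): Boolean =
    when (signal.ageBracket()) {
        AgeBracket.SIXTEEN_PLUS -> true
        AgeBracket.UNDER_13, AgeBracket.UNDER_16 -> false
        AgeBracket.UNKNOWN -> false // fall back to the platform's own checks
    }

fun main() {
    println(canCreateAccount(FakeDeviceAgeSignal(AgeBracket.UNDER_16)))     // false
    println(canCreateAccount(FakeDeviceAgeSignal(AgeBracket.SIXTEEN_PLUS))) // true
}
```

The point of the design is that the sensitive check happens once, at device setup, and apps only ever see a coarse bracket rather than documents or a birth date.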
Snap’s position on this matter, it stated, is backed by organisations like the International Centre for Missing & Exploited Children (ICMEC), which have called for device-level solutions as a scalable and secure method for protecting children online.
Snap urged the government to delay the Bill’s implementation and conduct a more thorough consultation process. It emphasised the need for collaboration between platforms, regulators, device manufacturers, and international partners to create a balanced framework on a more realistic timeline.
“Given the Bill’s significant implications for all Australians using these services, it warrants thorough consultation and scrutiny. While it is welcome that the Environment and Communications Legislation Committee is reviewing the Bill, the extremely compressed timeline, allowing stakeholders little more than 24 hours for submissions and conducting the entire inquiry in less than a week, severely constrains the possibility for thorough analysis and informed debate,” the company said in its submission.
TikTok
While reiterating its commitment to online safety, backed by a global $2 billion investment in trust and safety measures that saw it remove more than 20 million suspected underage accounts between April and June this year, TikTok criticised the Bill’s vague definitions, reliance on untested mechanisms, and potential conflicts with existing privacy laws. It urged the government to undertake a more detailed consultation process to ensure the legislation is both effective and fair.
TikTok identified several issues with the Bill’s definitions, arguing that they are overly broad and poorly articulated. For example, the term “age-restricted social media platform” could be interpreted to include platforms far beyond traditional social media, such as fitness apps and music streaming services, creating confusion for businesses and users alike.
The submission highlighted unclear terms such as “social purposes,” “significant purpose,” and “enable online social interaction,” which lack precedent in existing legislation or standards. Without clarification, TikTok warned, the legislation risks being unenforceable and could unfairly target a wide range of digital services.
Furthermore, TikTok questioned the scope of the Bill, particularly whether platforms without registered user accounts would fall under its purview. This ambiguity, it argued, leaves platforms guessing about their obligations, potentially leading to inconsistent enforcement.
TikTok raised alarm over the Bill’s implications for user privacy, particularly its data deletion requirements, and pointed out scenarios where deletion could undermine safety. If a user provides false information after their previous data has been deleted, platforms lack the historical context to flag the discrepancy, potentially allowing underage users to bypass restrictions. Additionally, the repeated collection of age data could create inefficiencies and increase the likelihood of privacy breaches.
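The failure mode TikTok describes can be shown with a toy example. The sketch below, using entirely invented types rather than any real platform API, contrasts a platform that may retain verification history with one operating under a mandatory-deletion rule, where a later false claim has nothing to be checked against.

```kotlin
// Toy illustration of the deletion problem TikTok describes; all names
// here are invented for the example.
data class AgeClaim(val userId: String, val claimedBirthYear: Int)

class AgeClaimLog(private val retainHistory: Boolean) {
    private val history = mutableMapOf<String, MutableList<Int>>()

    fun record(claim: AgeClaim) {
        // Under a mandatory-deletion rule nothing survives verification,
        // so there is no history to keep.
        if (retainHistory) {
            history.getOrPut(claim.userId) { mutableListOf() }.add(claim.claimedBirthYear)
        }
    }

    // A new claim is suspicious if it contradicts anything recorded earlier.
    fun contradictsHistory(claim: AgeClaim): Boolean =
        history[claim.userId].orEmpty().any { it != claim.claimedBirthYear }
}

fun main() {
    val log = AgeClaimLog(retainHistory = false)
    log.record(AgeClaim("u1", 2012))        // verified as under-16, then deleted
    val retry = AgeClaim("u1", 1998)        // later retry claiming to be an adult
    println(log.contradictsHistory(retry))  // false: nothing left to flag
}
```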
A significant point of contention for TikTok is the Bill’s reliance on the government’s yet-to-be-completed age assurance trial. TikTok criticised the approach as premature and warned that requiring platforms to verify the age of every Australian user could effectively create a “license to be online.” This would impose significant burdens on users and platforms alike, with no guarantee of the trial’s success or the practicality of its outcomes. TikTok emphasised the need for greater transparency and evidence-based decision-making before implementing such sweeping changes.
Google
Google outlined several critical concerns in its submission regarding the Bill, cautioning that the proposed legislation, while well-intentioned, contains fundamental ambiguities and practical challenges that could undermine its effectiveness.
Google emphasised that the Bill’s definitions are overly broad, which risks capturing a wide range of digital services beyond traditional social media platforms. For instance, the definition of “age-restricted social media platform” could extend to platforms such as productivity tools, educational services, or other online services that were not the legislation’s intended targets.
Google also highlighted the lack of clarity in terms such as “significant purpose” and “online social interaction,” which remain undefined within the context of the legislation. These ambiguities could lead to inconsistent application and enforcement, creating unnecessary compliance burdens for businesses.
The age assurance trials were also a huge sticking point for Google. Like other stakeholders, it expressed concerns about proceeding with legislation that hinges on untested technologies and methodologies. The company noted that age verification at scale introduces significant technical challenges and could inadvertently exclude users who lack the means to verify their age digitally.
Similarly to TikTok, Google raised concerns over the requirement to delete user age data after verification, which could prevent platforms from using this information to provide age-appropriate content or services in the future.
Google urged the government to refine the legislation through broader consultation with stakeholders, including industry leaders, privacy experts, and child safety advocates. The company advocated for a balanced approach that safeguards young users while avoiding unintended consequences for businesses and users.
X
It is no secret that X has voiced strong opposition to the Bill, highlighting its potential to undermine fundamental human rights while failing to align with the realities of platform usage. Only last week, X owner and billionaire Elon Musk called the plan a “backdoor to internet control” in a post on X.
In its submission, X stressed that minors do not widely use its platform and that it does not actively target young audiences or allow advertisers to do so. The platform requires users to be at least 13 years old and is already exploring age assurance options in line with global privacy standards.
Given its limited engagement with underage users, X questioned the necessity of subjecting its operations to the Bill’s sweeping requirements, particularly when its impact on child safety within the X ecosystem would likely be minimal.
More broadly, X raised alarm over the Bill’s implications for children’s and young people’s rights, specifically their rights to freedom of expression and access to information. These rights, enshrined in international agreements such as the UN Convention on the Rights of the Child and the International Covenant on Civil and Political Rights, risk being subordinated to the Bill’s measures. The submission argued that banning minors from platforms like X could isolate them from public discourse, support networks, and vital resources, ultimately doing more harm than good.
X also criticised the Bill’s foundational assumptions, arguing that banning young people from social media lacks empirical support and could push them toward less regulated, more dangerous platforms. Additionally, the company highlighted the Bill’s vague definitions and reliance on incomplete age assurance trials, which have yet to deliver practical, scalable solutions.
X urged the Australian Government to adopt a more balanced, collaborative approach that protects children without infringing on fundamental rights, pointing to device- or app-store-level age verification as a more privacy-preserving alternative to platform-specific mandates.
Meta
While Meta affirmed its commitment to online safety, particularly for young people, the company criticised the Bill as “inconsistent and ineffective”, arguing that, in its current form, it introduces unnecessary complexity and ambiguity into the regulatory framework and could leave companies vulnerable to significant penalties.
Moreover, Meta highlighted that the Bill’s scope excludes platforms like YouTube and online gaming services, which are among the most popular digital spaces for young Australians. Such exclusions, it argued, render the Bill ineffective in addressing the full spectrum of online risks young people face.
Meta also argued that the Bill contradicts what parents have told the social media giant they want: simple and effective ways to set controls and manage their teens’ online experience.
In place of the proposed model, Meta advocated a “whole-of-ecosystem” approach that leverages age verification systems already integrated into app stores and operating systems, such as those managed by Apple and Google. These platforms already collect age-related data and offer parental controls, which could be extended to provide a more consistent, privacy-protective solution. By centralising age verification at the app store level, parents would be spared the burden of navigating different age assurance measures across multiple apps.
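As a rough illustration of that centralised model, the hypothetical sketch below gates app installs against a single store-account age and parental-control setting; none of these names correspond to an actual Apple or Google API.

```kotlin
// Hypothetical model of app-store-level gating; names are invented for
// illustration and do not reflect any existing store API.
data class StoreAccount(val age: Int, val parentalControlsEnabled: Boolean)

data class AppListing(val name: String, val minimumAge: Int)

// One central decision at install time replaces per-app age checks.
fun canInstall(account: StoreAccount, app: AppListing, parentApproved: Boolean = false): Boolean =
    when {
        account.age >= app.minimumAge -> true
        account.parentalControlsEnabled -> parentApproved // parent decides per app
        else -> false
    }

fun main() {
    val teen = StoreAccount(age = 14, parentalControlsEnabled = true)
    val social = AppListing("ExampleSocial", minimumAge = 16)
    println(canInstall(teen, social))                        // false
    println(canInstall(teen, social, parentApproved = true)) // true
}
```

Under this model, the age check and the parental decision happen once, at the store, rather than separately inside every app.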
Contrary to government claims of extensive stakeholder engagement, Meta asserted that consultation with key experts and organisations has been minimal. It pointed to the same evidence as Snap: warnings from more than 100 academics, child rights organisations, and mental health advocates that a ban of this kind could isolate vulnerable young people from online support networks.
Furthermore, Meta strongly recommended that the government delay the Bill’s implementation until the results of the ongoing age assurance trials are available and emphasised the importance of aligning the Bill with existing regulatory frameworks, such as the Online Safety Act and the Privacy Act, to avoid unnecessary duplication and confusion.
Meta also highlighted its own investments in creating safer online environments for young users. Initiatives like Instagram Teen Accounts, which feature built-in protections such as private default settings, restricted messaging, and sensitive content controls, exemplify the company’s commitment to online safety. These measures, developed through consultations with parents, teens, and safety experts, aim to strike a balance between providing a secure digital space and fostering meaningful online interactions.