Meta has warned that Australia’s incoming social media age ban, which will prevent under-16s from accessing platforms, creates “significant new engineering and age assurance challenges”, as the company races to comply before the December 10 enforcement date.
Appearing before the Joint Select Committee for Search Engine Services and the Social Media Minimum Age (SMMA) Obligation, Meta’s regional director of policy, Mia Garlick, said the company supports the government’s goal of protecting young Australians but faces unique technical hurdles.
With most global age verification frameworks marking 13 as the minimum age for social media access and 18 as legal adulthood, Australia’s move to introduce a 16-year threshold represents uncharted territory for the industry.
Meta’s systems, she said, are designed around these two existing milestones, determining whether a user is a child or an adult, not the nuanced space in between.
“Sixteen is a globally novel age boundary that presents significant new engineering and age assurance challenges,” she told the Committee. “This is because Meta’s existing technologies are built to identify the significant age milestones of 13 and 18.”
The Final Report of the Age Assurance Technology Trial, commissioned by the federal government, found that distinguishing between users aged 13 and 16 is inherently unreliable because younger teens often lack digital footprints, financial credentials or identifiable online behaviours. Garlick cited the report directly, noting that:
“Adolescents often have limited public records, payment credentials or distinct online habits, making fine-grained inference across age bands — for example, distinguishing 13 from 16 — inherently less reliable,” the report said.
The same report also found “greater challenges at the 16 age boundary with age estimation technologies”.
Garlick drew a key distinction between age verification, using government IDs, and age assurance, which relies on multiple privacy-preserving methods.
“Age verification, confirming age with reference to an authoritative source such as a government ID, is a subset of age assurance, which refers to a range of methods and processes to establish a person’s age,” she said. “At Meta, we have built up the approach to age assurance over many years.”
“Whilst we adopt a data minimisation, waterfall approach, when the age assurance solution is left to each individual app to adopt, there will inevitably be an increased risk of inconsistent outcomes and privacy and security challenges. A more privacy-preserving and consistent way to assure age across industry would be for OS and app stores to confirm age and share an age band signal with apps, who can then apply their age assurance solution suitable for that age band,” she said.
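The “waterfall” approach Garlick describes can be illustrated in pseudocode. The sketch below is purely hypothetical (all names and signal sources are invented for illustration, not Meta’s actual systems): checks run from least to most intrusive, and the process stops at the first signal that yields a confident age band, which is the data-minimisation property she refers to. The OS/app-store check models her suggestion that platforms receive only a coarse age band rather than a birthdate or ID.

```python
from enum import Enum

class AgeBand(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT_18_PLUS = "18_plus"

def waterfall_assurance(checks):
    """Run age checks in order of increasing intrusiveness;
    stop at the first one that returns a confident age band."""
    for check in checks:
        band = check()
        if band is not None:
            return band
    return None  # no method could establish an age band

# Hypothetical signal sources, ordered least to most intrusive.
def self_declared_age():
    # Declared birthday missing or flagged as untrustworthy here.
    return None

def os_age_band_signal():
    # An OS or app store could share only a coarse band with apps,
    # never the underlying birthdate or identity document.
    return AgeBand.TEEN_13_15

def document_verification():
    # Most intrusive step; only reached if earlier checks fail.
    return AgeBand.ADULT_18_PLUS

band = waterfall_assurance(
    [self_declared_age, os_age_band_signal, document_verification]
)
print(band)  # → AgeBand.TEEN_13_15 (first confident signal wins)
```

Because the declared age yields nothing, the flow falls through to the OS-level band signal and never reaches document verification, mirroring the privacy-preserving ordering described above.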
Meta “Shares the Government’s Intent”
Garlick told the Committee that Meta continues to explore how to make its systems more accurate and compliant, but stressed that achieving certainty at this boundary would require industry-wide coordination and possibly new data-sharing standards between platforms, app stores and operating systems.
“Meta shares the Australian Government’s intent that young people should have safe and age-appropriate experiences online, which we understand to be the underlying goal of both the Industry Codes and the social media age ban law,” she said.
She added that Meta has been in “constructive dialogue with all levels of government” to meet its new obligations but acknowledged “numerous challenges” in doing so.
“Notwithstanding these challenges, we continue to work through the reasonable steps we will take when the law comes into effect on December 10.”
Garlick said Meta has made “considerable investments” to build safer experiences for teens, including the rollout of Teen Accounts across Instagram, Facebook and Messenger.
“This reimagined experience has built-in protections that limit who can contact teens, the content they see, and the time they spend online,” she said.
“Globally, hundreds of millions of teens have been placed in Teen Accounts, with 9 out of 10 accounts choosing to stay in the more restrictive settings, indicating to us that young people find this to be a positive experience.”
She added that Instagram Teen Accounts now follow PG-13 movie ratings by default, and Meta is using AI to identify users who may be under 18, even if they list an adult birthday.
“We’ve issued notifications to parents on Instagram in Australia with information on how to have conversations with teens about the importance of listing the correct birthday on their accounts to ensure they have safe, age-appropriate experiences.”
Garlick closed by saying Meta is refining its compliance approach following the registration of the Phase 2 Industry Codes and new eSafety guidance.
“With the codes having been registered by the eSafety Commissioner last month and the relevant guidance with respect to key aspects of the social media ban law also having been released in the last month, we are refining our compliance approach and look forward to sharing more details in due course.”
The Internet Search Engine Services Online Safety Code, registered earlier this year, sets new standards for how digital platforms protect young users, including limits on harmful content and stronger age-assurance measures.
While advocates have praised the move as overdue, critics have warned that the technology to reliably verify age without compromising privacy is still not ready. With the deadline looming, tech companies are racing to ensure they are compliant when the ban takes effect on December 10.

