Just one week remains until Australia’s under-16 social media ban comes into force, and the real test is only just beginning.
From December 10, social media platforms such as TikTok, Instagram, Facebook, Snapchat, Threads, Reddit, and YouTube will be legally required to lock out any Australian user under 16. Fail to take “reasonable steps” to keep children off their platforms, and the companies face fines of up to $49.5 million under the Online Safety Amendment (Social Media Minimum Age) Act.
The Albanese government has been crystal clear that the burden of enforcement sits squarely with the tech giants, not parents. But with one week to go, the picture that is emerging is less a neat switch-off and more a messy, six-month stress-test of age-assurance technology and platform design.
Platforms scramble to flick the off switch
Most of the large platforms have begun quietly or belatedly telling young users the party is almost over.
Meta: video selfies and ID checks

Meta, which owns Instagram, Facebook and Threads, has started pushing notifications to any user whose declared birthdate puts them under 16 or who it has otherwise identified as a child.
“Due to laws in Australia, soon you won’t be able to use social media until you’ve turned 16,” reads one in-app message. “You will not be able to use your Instagram account until you’ve turned 16. This means you can’t use Instagram and your profile won’t be visible to you or others until then. We’ll let you know when you can use Instagram again.”
Users are given two options: download their data and accept that the account will go dark, or ask Instagram to review their date of birth. That review requires a “video selfie”, which uses facial analysis to estimate age. The app prompts the user to slowly move their head side to side, then promises a result within minutes, but warns it can take up to 48 hours.
In testing conducted by The Guardian, an adult with thick facial hair was quickly declared over 16 and the Instagram account’s age was updated. A separate test with a 13-year-old who tried to change their date of birth to appear older triggered the same selfie process, but this time the system reported “we couldn’t confirm your age” and escalated to a demand for government ID.
Meta said the experiment showed the system working as intended, but conceded that the Age Assurance Technology Trial has already flagged edge-case problems at the 16-year boundary, particularly for younger teens and minority groups.
“We must also acknowledge the findings of the Age Assurance Technology Trial, which recognises the particular challenges of age assurance at the 16-age boundary, and we anticipate that at times the process may not be perfect,” a spokesperson for the tech company said.
The company uses age-estimation provider Yoti and says facial images are deleted once the check is complete.
Snapchat: reluctant but compliant
Snapchat has been blunt that it “strongly disagrees” with its inclusion in the ban, but will comply.
The platform will remove self-declared under-16s and lean on behavioural models to identify any other Australian users who fall under the age threshold. In Senate estimates, Snapchat estimated it has around 440,000 underage Australian users. Those accounts will be removed or blocked unless their owners can verify they are older.
Age checks will run through photo ID, ConnectID via a bank, or facial age estimation provided by k-ID.
TikTok, X, Reddit and Kick
TikTok’s parent company, ByteDance, also argued against the ban but has committed to enforcing it. It will deactivate accounts owned by under-16s and require selfie videos plus government ID to verify borderline ages.
Reddit and livestreaming platform Kick were added to the ban list late in the negotiation process and will both deactivate accounts held by under-16s. X (formerly Twitter), which is not widely used by younger teens but is legally covered, will also shut down any accounts owned by children and offer a live selfie age check powered by AI.
YouTube: “protecting kids in the digital world, not from it”
The most vocal critic has been Google’s YouTube. The company fought to be exempt, arguing it is a video streaming platform rather than a social network, and even threatened legal action. When the government ultimately included YouTube, it published a blog post that could fairly be described as a reluctant compliance notice.
From December 10, viewers in Australia must be 16 or older to sign in to YouTube. Anyone under 16 will be automatically signed out and will lose access to any feature that requires an account, including subscriptions, playlists, likes and default wellbeing tools such as “Take a break” and bedtime reminders. Under 16s will still be able to passively watch videos while signed out, and YouTube Kids will remain unchanged.
Creators under 16 will not be allowed to upload videos or manage their channels, and existing youth-run channels will effectively be frozen until the owner turns 16 or downloads and deletes their data.
YouTube said it determines age using the date of birth attached to a Google account and “other signals,” and will continue exploring new age-assurance methods. But the company has been scathing about the law itself, arguing it is “rushed legislation” that “misunderstands our platform and the way young Australians use it”.
“We believe in protecting kids in the digital world, not from the digital world,” the blog post said, warning that the law will in fact “make Australian kids less safe on YouTube” by stripping away supervision and pushing use into unsupervised, signed-out modes.
Yope, Lemon8 & whatever comes next
Australia has around 2.5 million eight to 15-year-olds, and the government estimates 86 per cent of them use social media. Cutting off that many young users from mainstream platforms creates an enormous incentive to find new online spaces.
Apps such as photo-sharing service Yope and TikTok-linked platform Lemon8 are already surging up Australian download charts as children and teens look for a plan B.
Communications Minister Anika Wells has flagged both by name and warned that if “migratory patterns” show children simply shifting from Snapchat, Instagram and TikTok to these new platforms, they will be added to the ban list.
“Should any particular platform like Lemon8 … become the new source, I will not hesitate to act,” she said, adding that the government would “have more to say about Lemon8 this week”.
Regulators promise to be patient, but not lenient
With days to go, eSafety Commissioner Julie Inman Grant has described it as “disappointing” that some platforms have waited until the last minute to tell affected users what is happening.
“We suggested the best timeframe was two weeks in advance. My powers do not come into effect until December 10, and that is when we will start gathering information notices,” she said. “We have been working on compliance plans. We have been engaging. We have been telling companies what best practice is. We have been talking to them for over a year. So this should be a surprise to none of them.”
But she said that until the ban officially comes into effect, “there’s nothing I can do, from a regulatory perspective, to light a fire under them, only that this is the right thing to do for their young users that are going to use the platform.”
From next week, the eSafety Commission will send notices to the ten covered tech companies, asking how many underage accounts they had registered on December 9 and how many remained on December 11, the day after the ban starts. Those platforms will then need to provide updated figures every month for six months.
Wells will use a National Press Club speech to warn that companies should be prepared for fines of up to $49.5 million if systemic breaches are uncovered over those six months. But she is also trying to temper expectations of instant, spectacular penalties.
“The question you’re all desperate to know the answer to – who is getting slapped with the first $50 million fine on 10 December? The bureaucrats in the room will back me up here – but regulation rarely acts fast, and certainly not that fast,” her prepared remarks say.
“The government recognises that age assurance may require several days or even weeks to complete fairly and accurately. However, if eSafety identifies systemic breaches of the law, the platforms will face fines of up to $49.5 million.”
Wells has already conceded that there will still be kids on social media on December 10, and probably for some time after that.
“Our expectation is clear: any company that allows this is breaking the law. Kids are clever and inherently seek to circumvent systems. We know it will not be perfect from day one, but we will not give up – and we will not let the platforms off the hook.”
Strong support but weak faith
New research from data and insights firm Pureprofile shows that Australians are broadly on board with the idea of an under-16 social media ban but deeply doubtful it can be made watertight.
Across 820 respondents, including parents, teachers, children and 16 to 24-year-olds, 73 per cent support the ban in principle. Support is highest among teachers at 84 per cent and among high school teachers at 91 per cent. Three in four parents back it.
Confidence is another story. Only 26 per cent of Australians believe the ban will actually work. Among high school teachers, just 13 per cent think it will be effective, even though they are its strongest supporters. Some 67 per cent of parents and nearly 80 per cent of high school teachers expect children to find ways around it, contributing to a national figure of 68 per cent of Australians who say kids will be able to circumvent the rules.
Children themselves told Pureprofile they plan to use VPNs, migrate to gaming and messaging platforms, or simply use their parents’ accounts. More than a third of eight to 15-year-olds say they will spend more time with friends or playing sport once the ban hits, but almost as many say they will just replace social media with more video games, streaming or other online activities.
The study also reveals a sharp disconnect between the government’s messaging and public expectations. While Canberra insists enforcement will sit with platforms and regulators, 42 per cent of Australians see parents as the first line of defence, with social media companies and government expected to share the load in a sort of “shared guardianship” model. Only 23 per cent think the government should be primarily responsible, and 21 per cent believe platforms alone should carry the burden.
Crucially, Australians are not calling for a pure prohibition model. When asked what would most effectively address social media harms for under-16s, just 18 per cent chose a complete ban. Almost half, 46 per cent, favoured a hybrid approach that combines restrictions with education on safer, more moderate use. Young adults were the most likely group to back an education-first strategy.
In seven days, millions of Australians under 16 will try to log on and find their favourite apps have logged them out, locked them, or demanded a selfie and a passport.
For tech platforms, the next six months will be a rolling compliance exam, with eSafety watching closely for signs of systemic failure. For parents, schools and young people, it will be an experiment in how far regulation can reshape behaviour in a digital environment where workarounds are often only a VPN or a borrowed device away.
Even the law’s architects concede it will not be perfect. The question now is whether this world-first ban becomes a catalyst for better tools, better education and safer design, or whether it simply pushes risky behaviour further underground.
Either way, the countdown is on.