Meta is expanding its Teen Accounts protections to Facebook and Messenger and introducing new safeguards on Instagram in a move the company says will create a safer, more age-appropriate experience for teens—while offering greater peace of mind for parents.
The features build on the Teen Accounts experience first launched in Australia, the US, Canada, and the UK in September 2024, and now used by more than 54 million teens globally. The system places all users under 18 into the most protective settings by default, with tighter restrictions for younger teens aged 13 to 15.
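To make the tiering concrete, here is a minimal illustrative sketch of how accounts might be bucketed into default protection levels by age band. The setting names here are hypothetical; only the age bands come from Meta's public description.

```python
from dataclasses import dataclass

@dataclass
class SafetySettings:
    # Hypothetical setting names, for illustration only.
    private_account: bool
    dms_limited_to_connections: bool
    sensitive_content_limit: str          # "high" or "standard"
    parental_approval_to_relax: bool

def default_settings(age: int) -> SafetySettings:
    """Bucket an account into default protections by age band.

    Mirrors the behaviour described publicly: all under-18s start in
    the most protective settings, and 13-15-year-olds additionally
    need a parent's approval to loosen them.
    """
    if age < 13:
        raise ValueError("accounts require a minimum age of 13")
    if age < 18:
        return SafetySettings(
            private_account=True,
            dms_limited_to_connections=True,
            sensitive_content_limit="high",
            parental_approval_to_relax=age <= 15,
        )
    # Adults keep standard defaults and can change settings themselves.
    return SafetySettings(False, False, "standard", False)
```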
Teens under 16 on Instagram will now need parental approval to go Live or to disable a DM protection feature that automatically blurs images containing suspected nudity. Meta says these additional safeguards were shaped by feedback from parents and experts, along with what was technically feasible to build.
Tackling Nudity in Direct Messages
A key addition tightens controls around potentially inappropriate images shared via direct messages. Instagram already uses AI to blur images flagged as suspected nudity, but teens under 16 will now be unable to turn off that blur without a parent's approval.
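In plain logic, the change gates one specific toggle behind parental consent. Below is a minimal sketch of how such a gate might work; it is purely illustrative, and the function and parameter names are invented for this example, not Meta's actual code.

```python
def can_disable_dm_blur(age: int, parent_approved: bool) -> bool:
    """Decide whether a user may turn off the suspected-nudity blur.

    Under-16s need explicit parental approval; older users can
    toggle the protection themselves. Illustrative logic only.
    """
    if age < 16:
        return parent_approved
    return True

# A 14-year-old's request is held until a parent approves it.
assert can_disable_dm_blur(14, parent_approved=False) is False
assert can_disable_dm_blur(14, parent_approved=True) is True
assert can_disable_dm_blur(17, parent_approved=False) is True
```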
While the system may sometimes flag benign images, such as someone at the beach or in the bath, Meta says it is taking a precautionary approach. Tara Hopkins, Meta's Global Director of Public Policy for Instagram, noted that direct messages are treated as a more private space on the platform, so extending protections into this context was both a technical and a policy milestone.
The feature reflects broader concerns around the types of content young people may encounter, especially unsolicited imagery. Although Meta has long removed explicit content that violates its policies, this extra layer of defence is aimed at shielding teens from borderline or ambiguous content before it’s even viewed.
Questions remain about the filter’s efficacy across diverse cultural contexts, as perceptions of appropriateness can vary widely. There’s also the risk that some teens may perceive the protections as overly cautious or even invasive. Nevertheless, Meta maintains that the majority of teens value these safeguards—or at the very least, are not actively opting out of them.
Expanding to Facebook and Messenger
Meanwhile, Teen Accounts are being extended to Facebook and Messenger. Similar restrictions will apply, including limits on who can contact teens, what content they’re exposed to, and new nudges encouraging time away from screens overnight.
Hopkins said 97% of 13- to 15-year-olds remain in the most restrictive settings, suggesting teens are either comfortable with the extra protection or possibly unaware the settings are even active.
While some may question why features like live-stream restrictions and the nudity filter weren't part of the original rollout, Meta says the phased approach allowed it to test and refine the experience without overwhelming users or introducing technical issues.
What’s Next: AI Age Detection
The company also revealed it is working on enhanced AI tools to detect users who may have misstated their age when signing up.
Accounts flagged as likely belonging to teens will be automatically moved into restricted settings, an approach Meta describes as "industry leading," with trials starting in the US this year.
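Conceptually, that flow pairs an age-estimation model with an enforcement step. The sketch below is an assumption-laden illustration rather than Meta's disclosed system; the classifier, threshold, and all names are invented for this example.

```python
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    stated_age: int
    restricted: bool = False

def estimated_teen_probability(account: Account) -> float:
    """Placeholder for an age-estimation model. A real system would
    score behavioural signals; here we return a fixed dummy value."""
    return 0.95  # dummy score for illustration

def enforce_age_check(account: Account, threshold: float = 0.9) -> None:
    """Move suspected under-18 accounts into restricted settings.

    An account whose stated age is adult but whose estimated
    probability of being a teen exceeds the threshold is switched
    into the Teen Account defaults.
    """
    if account.stated_age >= 18 and estimated_teen_probability(account) >= threshold:
        account.restricted = True

acct = Account(user_id="u123", stated_age=21)
enforce_age_check(acct)
print(acct.restricted)  # True: the dummy score exceeds the threshold
```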
With growing global concern about the impact of social media on youth mental health, Meta’s announcement appears to show a willingness to offer more structured digital boundaries. But whether this marks a genuine turning point in how platforms prioritise young users—or merely a tactical response to mounting scrutiny—remains uncertain.