Meta is unwittingly continuing to serve ads for deepfake nude apps, which allow people to upload photos of women without their consent and then use AI technology to create sexually explicit content.
B&T has been exposed to these ads and understands that the forces behind them are highly sophisticated global criminal syndicates that operate in parts of South East Asia, Africa and Eastern Europe.
Meta is one of several social media companies that these syndicates target, and it says the overwhelming majority of inappropriate content is detected and blocked before it can be distributed.
Nonetheless, not all ads are proactively picked up, and senior executives at Meta are aware of the problem.
“The safety of our users is of utmost importance, and we continue to work with industry, the government and law enforcement to protect Australians,” Meta ANZ managing director Will Easton told B&T.
“In recent years, the epidemic of scams and inappropriate apps has grown in scale and complexity, driven by ruthless cross-border criminal networks that operate on a global scale. As this activity has become more persistent and sophisticated, so have our efforts.
“We have strict rules against ads for ‘nudify’ apps. We remove these ads whenever we become aware of them, disable the accounts responsible, and block links to websites hosting these apps.
Easton said that Meta has made “significant investments” to develop industry-leading brand safety and suitability tools, including inventory filters, topic exclusion lists, content-type exclusions and third-party block lists.
“These tools give advertisers control over where their ads appear in Feed and Reels,” he added.
Last year, Crikey ran a story about Meta serving ads for deepfake nude apps in Australia. Meanwhile, a report by AI Forensics identified more than 3,000 pornographic ads that were approved and distributed through Meta’s advertising system in 2024.
The report found that the “pornographic ads” generated over 8 million impressions in the EU.
AI Forensics claims that Meta has the technology to detect pornographic content but selectively applies this capability, exempting paid advertisers from the same rules regular users must follow.
B&T has not seen evidence that Meta selectively exempts paid advertisers from rules that apply to users.
Meta has faced questions about the effectiveness of its detection technology in preventing inappropriate ads and scams from circulating on its platform.
This has previously included fake ads in which celebrities appear to promote cryptocurrency scams and provide dodgy financial advice.