Meta has taken legal action against a Hong Kong company that runs ‘nudify’ app ads on its platforms.
The Silicon Valley giant is suing Joy Timeline HK Limited, the entity behind the CrushAI apps, which allow people to create AI-generated nude or sexually explicit images of individuals without their consent.
In a blog post, Meta said it has filed a lawsuit in Hong Kong, where Joy Timeline HK Limited is based, to prevent the company from advertising CrushAI apps on Meta platforms.
The lawsuit follows “multiple attempts” by Joy Timeline HK Limited to circumvent Meta’s ad review process and continue placing ads that break Meta’s rules.
“This legal action underscores both the seriousness with which we take this abuse and our commitment to doing all we can to protect our community from it. We’ll continue to take the necessary steps – which could include legal action – against those who abuse our platforms like this,” Meta wrote.
Meta said that nudify apps are advertised across the internet and available in app stores themselves, which puts removing them beyond the reach of any single platform.
Since March, it has shared more than 3,800 unique URLs with other tech companies, flagging nefarious sites that promote ‘nudify’ apps and others engaged in child safety violations, including sextortion.
The world’s largest social media company said it has also upgraded its detection technology to identify ads that promote nudify apps, even if the ads don’t include nudity.
Meta has been battling to stop the spread of nudify app advertising across Facebook and Instagram.
B&T recently highlighted the problem after it was served ads on Facebook depicting AI-generated women undressing.
At the time, Meta ANZ managing director Will Easton told B&T the company was taking serious action to try to prevent criminals and fraudsters from running ads on its platform, but that the issue was complicated and akin to playing whack-a-mole, the arcade game in which players hit moles as they pop up randomly from holes.
In a LinkedIn post today, Easton said: “Deep fakes across the internet cause significant harm and I’m glad we can take this important step to help protect people across Australia and worldwide from this kind of abuse.
“Meta has long had policies against non-consensual intimate imagery (NCII), and we’ve strengthened them to address new threats powered by AI. Now, we’re going even further by introducing new detection technology, removing violating content faster, and working with other companies through the Tech Coalition to share intelligence.
“There’s more work to do, and we’re committed to staying one step ahead to keep our platforms safe and respectful spaces for everyone.”
Meta has also said it supports legislation to combat intimate image abuse across the internet, whether the imagery is real or AI-generated.
“In recent years, the epidemic of scams and inappropriate apps has grown in scale and complexity, driven by ruthless cross-border criminal networks that operate on a global scale,” Easton previously told B&T. “As this activity has become more persistent and sophisticated, so have our efforts.”