Google’s latest report reveals that 8.3 billion ads were blocked or removed in 2025 for policy violations, with AI systems catching more than 99 per cent before users ever saw them.
The company said sexual content remained the largest source of violations, accounting for more than 409 million policy breaches across its publisher network. It was followed by dangerous and derogatory content (20.5 million), shocking content (15 million), weapons promotion (12.8 million), online gambling (9.7 million), and both alcohol and tobacco-related violations, each exceeding 5 million.
Google’s 2025 Ads Safety Report highlighted that the scale of abuse is increasingly driven by generative AI, which bad actors are using to automate deceptive advertising campaigns.
“One of the key trends we have been combating is bad actors using generative AI to automate deceptive content at scale,” said Keerat Sharma, VP and General Manager of Ads Privacy and Safety at Google.
“Our safety teams work around the clock to stop bad actors that use increasingly sophisticated, malicious ads. In 2025, Gemini-powered tools dramatically improved our ability to detect and stop bad ads: our systems caught over 99 per cent of policy-violating ads before they ever served, and we’re continuing to evolve our defences to stay ahead of even the most advanced schemes.”

Sharma said Google’s systems now analyse hundreds of billions of signals — including account age, behavioural cues and campaign patterns — to identify threats earlier and more accurately.
“Gemini allows our defences to detect and combat adversarial networks, and to better understand the risks across these deceptive campaigns,” Sharma said.
“Our latest models are better at identifying the underlying patterns of these automated attacks, allowing us to neutralise them even as they grow in complexity and volume.”
Globally, Google said it blocked or removed more than 8.3 billion ads and suspended 24.9 million advertiser accounts in 2025. Scam-related enforcement alone accounted for more than 602 million blocked or removed ads.
Sharma said the company’s improvements have also reduced enforcement errors.
“It’s important in an environment like this, where scammers are adapting quickly, to have a defensive strategy that is layered and not reliant on one silver bullet,” Sharma said.
He said Google’s advertiser verification systems also play a key role in preventing bad actors from entering the ecosystem in the first place.
While AI handles the vast majority of enforcement, Sharma said human review and feedback loops remain essential when complex cases slip through automated systems.
