Google has said that while it is trying to ensure there are “zero” ad violations on its platform, it is simply not possible to stop bad actors from abusing its ads products.
Lead image: Duncan Lennox, VP/GM, ads privacy and safety, Google.
Announcing its 2023 Ads Safety Report, which details Google’s efforts to prevent malicious activity on its various ads platforms, covering everything from trademark infringements to adult content, Duncan Lennox, the company’s VP/GM of ads privacy and safety, told B&T that while Google’s enforcement efforts may seem like a game of “cat and mouse”, the company was making real headway.
“Our goal is always zero violative ads ever running or ever seen by a user on the system and that includes everything from scam ads to everything else. We’re always trying to do better.
“However well you work to seal up your house, sometimes it seems inevitable that a mouse will find a way in somewhere. And what we find is that these bad actors are very adept at adapting to the various defences that we have, and it’s our job to be nimble and stay ahead of them.”
Preventing malicious ads across its platforms has involved a substantial amount of work for Google. In fact, as the company revealed during a press briefing, the number of enforcement actions it has taken has grown significantly.
However, when pressed by B&T on whether this meant that the problem was getting worse or Google was getting better at spotting bad actors, Lennox remained coy. He did note that its use of large language models, powered by the company’s Gemini AI model, had enabled a “big leap forward” in its prevention efforts.
“We’ve had a tremendous amount of success with our existing language models,” he said.
“For example, in doubling the number of suspended advertiser accounts. A lot of that is contributions from some of these newer, more advanced AI technologies that are able to adapt even faster than we’ve been able to train more traditional machine learning models in the past.”
The most notable change in malicious advertising over the last 12 months, according to Google, was an increase in ads featuring the likenesses of public figures.
“Toward the end of 2023 and into 2024, we faced a targeted campaign of ads featuring the likeness of public figures to scam users, often through the use of deepfakes. When we detected this threat, we created a dedicated team to respond immediately. We pinpointed patterns in the bad actors’ behaviour, trained our automated enforcement models to detect similar ads, and began removing them at scale. We also updated our misrepresentation policy to better enable us to rapidly suspend the accounts of bad actors,” it wrote in its report.
To be sure, Google is not alone in facing this kind of threat. Meta has faced the wrath of no less than Gina Rinehart for failing to stop scammers using her likeness in ads.
However, the most concerning figure in the report for agencies and marketers is the sharp rise in publisher enforcements: Google took action against 600 million more publisher pages where ads may have been placed in 2023 than in the year before.
While it is clear that Google’s efforts in this arena are noble and doubtless appreciated by agencies and marketers alike, one does wonder quite how many pages slip through the cracks.