Will Tech Save Us? How Adland Created & Is Trying To Solve Ad-Funded Disinformation

This is the second part of a three-part series investigating the complicated world of programmatic advertising. The first part looked into the scale and opacity of the industry and the frequency with which it sends ad dollars to sites touting disinformation and harmful content. This part looks at why those problems persist within adtech and at the efforts underway to stop them. Read part one here.

“If I could, I would build it from the ground up,” Jessica Miles, country manager ANZ of Integral Ad Science (IAS), told B&T.

“From the beginning, I feel that we had a focus on metrics that didn’t integrate quality. What I mean by that is some of those performance metrics, maybe it’s a click-through rate, maybe it’s lands on the page, whatever those metrics were, especially in the early days, there was not a focus on the quality of those metrics. The quality of those metrics really drives transparency and cleans up the ecosystem.

“When I say quality, I’m talking about an ad that is viewable, I’m talking about an ad that is in the right context that is suitable for an advertiser, is viewable by a human, that is not fraud.”

If one steps back from the all-consuming world of advertising and considers the situation as a layperson might, it becomes immediately clear that something has gone horribly wrong with digital advertising.

How can we have created a system whose express purpose is to spruik brands, products and services to people, but which often fails to deliver those adverts — which, don’t forget, may have taken hundreds or thousands of man-hours to develop — to actual people?

How can we have created a system that sends legitimate funds to illegitimate businesses but leaves legitimate businesses no recourse to recoup those funds? And is there anything we can do about either of those problems?

Nothing New Under The Sun

“I always start by thinking that misinformation and disinformation have been around long before computers existed,” said Lisa Given, professor of information sciences and director of social change at RMIT University.

“But computers and, especially as we move into AI-enabled computing, it means that things can move a lot faster and at a greater scale… It’s a lot more challenging to combat because you’ve got a deeper reach into people’s homes through multiple devices and with a lot of different people in the home. Some of the information coming from people reading things online but not being very familiar with what’s happening under the surface of the technology.”

Jessica Miles, Integral Ad Science’s country manager

While it is easy and often convenient to blame gullible consumers for falling for misinformation, their lack of technical understanding is not a personal failing. Our lives are dominated by the internet, and the great conductor at its heart is Google, with its almost two-thirds share of the internet browser market and 92 per cent control of internet search. With such a position of dominance, one would expect it to be in the business of doing the right thing — after all, its corporate motto was “Don’t be evil.”

But wasn’t it ever thus? Newspapers have been accused of knowingly publishing falsehoods since the inception of the medium in the 17th century. But, while newspapers of old were the gatekeepers of information, now internet users are responsible for the dissemination of information and, should they not know what they’re looking at, misinformation can spread like wildfire.

Lisa Given, professor of information sciences and director of social change at RMIT University

Richard Sinnott, professor of applied science and director of e-research computing and information systems at the University of Melbourne, runs and advises on several projects investigating the spread of misinformation online.

“We’re looking into the propagation speeds and the networks that are sending information using graph-based approaches. Let’s say you have a Donald Trump supporter, for argument’s sake, he’s saying a random, stupid thing and everyone agrees with him and it becomes viral. It seems true but it was complete hogwash.

“It’s the networks themselves that are being used to propagate the information… We can identify news as being fake or true after the fact but that’s too late. It’s not really useful unless it’s going to stop it before it spreads too far.”

Richard Sinnott, professor of applied science and director of e-research computing and information systems at the University of Melbourne
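
To give a rough sense of what such graph-based propagation analysis involves, the sketch below builds a toy share network in Python and estimates how quickly a claim spreads and which accounts amplify it most. It is purely illustrative: the networkx library, the invented accounts and the share timings are assumptions made for the example, not code or data from Sinnott’s projects.

```python
# Illustrative sketch only: a toy graph-based look at how a single claim
# spreads through a share network. All accounts and timings are invented.
import networkx as nx

# Each tuple: (account that posted, account that reshared, hours after the first post).
shares = [
    ("account_a", "account_b", 0.2),
    ("account_a", "account_c", 0.5),
    ("account_b", "account_d", 1.0),
    ("account_c", "account_e", 1.1),
    ("account_c", "account_f", 1.4),
    ("account_d", "account_g", 2.0),
]

graph = nx.DiGraph()
for source, target, hours in shares:
    graph.add_edge(source, target, hours=hours)

# Rough propagation speed: accounts reached per hour since the first post.
reach = graph.number_of_nodes()
window = max(hours for _, _, hours in shares)
print(f"Reached {reach} accounts in {window} hours ({reach / window:.1f} per hour)")

# Accounts doing the most to push the claim onward (highest out-degree).
amplifiers = sorted(graph.out_degree(), key=lambda pair: pair[1], reverse=True)
print("Top amplifiers:", amplifiers[:3])
```

In a real study the same idea scales to millions of posts, with the aim of flagging unusually fast, dense propagation patterns before a claim has spread too far.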

When the sites generating that misinformation have a revenue source that rewards virality by paying fractions of a cent for each ad impression, there is a clear incentive to produce high-churn, low-quality stories purporting to be news. Married with artificial intelligence, that system becomes an even greater problem.

That’s not to say that all publishers that rely on programmatic advertising are bad. Advertising support has kept much of the internet free and open for people around the world to use. They can access information in more ways than ever before and connect with friends and family across the globe as a result.

But while there is a clear hesitancy among the major adtech platforms to overhaul a system that has generated huge amounts of money, a host of ad verification firms has emerged to ensure that, at the very least, ads from major brands get served in the right places.

Scams, Frauds And Brand Safety

Brand safety is nothing new and neither is the discussion around the concept in relation to the internet.

“To the online consumer, a trusted brand represents credibility, safety and security. At the same time, the Internet has expanded the power of brands and opened up new markets and business opportunities, however, it has also dramatically increased the scope and impact of brand misuse, disparagement, infringement and fraud,” read ‘The Dirty Dozen,’ a 2004 report by Corporation Service Company into the different forms of online brand abuse.

In the same way that a brand’s presence on a site can lend the information it publishes credence, bad content on the site can have a corresponding negative impact on the brand. But, with the sprawling nature of online content, managing brand safety is incredibly and increasingly difficult. What’s more, the idea of what constitutes brand-safe context is neither fixed nor universal.

“For years we were talking about the fragmentation of consumers with devices and different types of media and it being about whether you were trying to engage them in display versus videos or social platforms or premium publisher content. You were trying to understand the audience everywhere and put an ad in front of them,” said Imran Masood, country manager AUNZ at DoubleVerify.

“What we’re seeing now is even more disruption on top of that. You’re trying to find a consumer but then you’re trying to ensure that wherever the consumer is, the content that they’re aligned with is real.”

Imran Masood, country manager AUNZ at DoubleVerify

Parsing online content has been a perennial challenge for internet businesses. Understanding individual words is fine, but wrangling with a double entendre or sussing out the irony within copy is incredibly difficult for an automated system. That has not stopped the major ad verification firms from trying. And, ironically, AI has almost as much potential to spot fake news and hateful content as it has to produce it.

“We use tag-based measurement. Our tag can measure the page across the multiple verification metrics that we have. When it comes to something like safety, suitability, fake news or misinformation, our tags can identify that and they block the ads from showing,” explained Miles.

IAS’ system relies on AI to read and understand the information on a page and decide whether the site propagates misinformation or is brand-safe. If the robots deem the content brand-unsafe or misinformation, they fire a signal into the programmatic supply chain that stops ads from appearing next to the content and the page from being monetised.
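
Neither IAS nor its competitors publish their internals, but the flow Miles describes (a tag classifies the page, then a signal tells the supply chain not to serve or monetise the ad) can be sketched roughly as follows. Every function name, label and threshold below is hypothetical; this is an illustration of the general pattern, not IAS code.

```python
# Hypothetical sketch of a verification tag's blocking decision.
# The classifier, labels and threshold are invented for illustration.
from dataclasses import dataclass

BLOCKED_LABELS = {"misinformation", "hate_speech", "adult"}

@dataclass
class PageVerdict:
    label: str         # what the model thinks the page contains
    confidence: float  # 0.0 to 1.0

def classify_page(page_text: str) -> PageVerdict:
    # Stand-in for the AI model that reads and scores the page content.
    if "miracle cure" in page_text.lower():
        return PageVerdict(label="misinformation", confidence=0.92)
    return PageVerdict(label="news", confidence=0.80)

def should_block(verdict: PageVerdict, threshold: float = 0.7) -> bool:
    """The signal passed into the supply chain: True means do not serve the ad."""
    return verdict.label in BLOCKED_LABELS and verdict.confidence >= threshold

verdict = classify_page("Doctors hate this miracle cure for everything")
print("block ad:", should_block(verdict))  # -> block ad: True
```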

“There are a couple of ways that we are able to identify that it’s misinformation,” Miles explained.

“We partner with industry bodies such as the Global Disinformation Index (GDI). We pass information between ourselves and the GDI to identify if a page has misinformation, or is trying to disseminate disinformation online. That enables us to block and reduce the monetisation of that page and protect our advertisers.

“The other industry body that we partner with is the Global Alliance for Responsible Media (GARM). Under their Brand Safety Floor and Suitability Framework, they do have misinformation guidelines. It’s via this partnership that we’re able to ensure that our advertisers aren’t appearing against those articles and that we’re not monetising them.”

Despite the millions of dollars and hours that have been invested in creating these tools, they are not perfect.

DoubleVerify also offers a similar tool, but it uses a unique deterministic algorithm that Masood said is six to eight times more accurate than comparable probabilistic technologies.

“That algorithm can get to the very end point of where that ad call is coming from and understand in a deterministic way — i.e. the methodology is deterministic and that’s a one-to-one data share — that it is a safe and secure environment.

“With a probabilistic model, you can understand the signals that are coming through across a specific ad call or impression and you extrapolate those types of signals out across a model that you build that says ‘I’ve seen that across x per cent of inventory, so those signals are telling me that it is safe or suitable, or that it’s fraud or not fraud. I’m going to extrapolate that across the entire internet.’

“A deterministic methodology in the space that we operate in can be six to eight times more accurate than a probabilistic one. When you’re talking tiny percentages, it sounds minimal, like one or two percentage points of difference. But when you talk about an advertiser or a marketplace buying or transacting billions upon billions of ad impressions worth millions or billions of dollars, those few per cent may turn into millions of dollars worth of improved or more effective advertising placements.”

Masood said that DoubleVerify’s tool operates with an accuracy in the “very, very, very high 90s.” As a result, the accuracy boost is “a per cent of a fraction, but it’s worth millions of dollars.”
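
To put rough numbers on that point, the back-of-the-envelope sketch below assumes a marketplace transacting 200 billion impressions at a $5 CPM. Every figure is an assumption chosen to make the arithmetic easy, not a DoubleVerify number.

```python
# Back-of-the-envelope illustration of why small accuracy gaps matter at scale.
# All figures are assumptions, not DoubleVerify data.
impressions = 200_000_000_000      # impressions transacted across a marketplace
cpm = 5.00                         # assumed price per thousand impressions, in dollars
spend = impressions / 1000 * cpm   # total spend: $1,000,000,000
accuracy_gap = 0.01                # a one-percentage-point gap in classification accuracy

exposed_spend = spend * accuracy_gap
print(f"Total spend: ${spend:,.0f}")
print(f"Spend riding on that one-point gap: ${exposed_spend:,.0f}")  # $10,000,000
```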

And yet, the problem persists. One reason is that these verification systems are not ubiquitous.

“Not every advertiser uses a tool like IAS, not every publisher leverages a verification tool. There is still space for bad actors to create fake news,” said Miles.

“If we think about the spectrum of advertisers, you have your tier ones with big budgets like McDonald’s and Coca-Cola. Their brand value is very high and, for them, any hit to that brand value is significant. Then, if you go towards the emerging advertisers, they’re just trying to get a share of voice. There are tools out there that are free; if you’re running across the Google stack, they’ve got some free tools. That is generally what we see from the smaller advertisers with smaller budgets that are quite new.

“Free tools are not going to be the best tools. For those advertisers, they would see a much higher chance of inadvertently funding fake news and misinformation.”

But, for Miles and Masood alike, the problem is also the result of human nature and the inherent incentive to find loopholes to exploit within systems for material gain.

“Like in any industry, if people can make a quick buck, they’ll do it,” said Miles.



