An ABC investigation revealed that a Russian influence campaign had targeted the Australian election through a fake news website: Pravda Australia.
The website published stories attacking Labor and Greens members, as well as left-leaning policy positions, with the aim of training chatbots such as ChatGPT, Google’s Gemini and Microsoft’s Copilot on pro-Kremlin narratives.
According to the ABC, ‘Pravda Australia presents itself as a news site, but analysts allege it’s part of an ongoing plan to retrain Western chatbots such as ChatGPT, Google’s Gemini and Microsoft’s Copilot on the Russian perspective and increase division amongst Australians in the long-term.’
One post after the election, from ‘Aussie Cossack’ Simeon Boikov, a pro-Kremlin troll holed up in the Russian consulate in Sydney, suggested that Western Sydneysiders deserve “misery” for voting for Labor.
“Those residents of Western Sydney who voted for Chris Bowen & Labor deserve to be locked down, force jabbed and oppressed by the Labor Government. They deserve misery because that’s what they are going to get by voting Labor,” Boikov posted.
The ABC investigation, which was conducted in partnership with NewsGuard, found that around 16 per cent of the chatbots’ answers repeated Pravda Australia narratives.
Clearly the campaign did not work for this election: Labor won in a landslide, a result that has been disputed on Pravda Australia. But the operation is a longer-term play. Pravda Australia is one of some 180 automated websites in the global Pravda Network, which spreads pro-Russian propaganda for AI models to consume and feed back to Western users.
Henry Innis, the founder of Mutinex, which uses a large language model to provide marketers with insights into media planning, told B&T that the Pravda Australia operation is known in the industry as a “poison pill attack”.
“I think that chatbots are increasingly becoming the default way in which a lot of people are consuming or skimming over information, or at least a significant minority of people are using them that way,” he said.
“And as a result, we’re going to see more and more bad actors come onto them, and I also think that largely we’ll have challenging situations emerging where very smart state actors figure out vectors of attack onto large language models, particularly ones that are not highly configured and in protected environments.
“There’s a symptomatic problem with LLMs doing far deeper queries than humans ever would do, and so a lot more random information can get brought in and consumed.”
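Neither Innis nor the ABC published technical details of how such an attack works. The sketch below is a minimal, hypothetical illustration of the dynamic he describes: a retrieval pipeline that pulls in far more search results than a human would ever read, so that a single high-volume propaganda domain can dominate the long tail of what the model consumes. Every domain, page and ranking in it is invented.

```python
# Hypothetical sketch: a retrieval-augmented chatbot ingests far more
# search results than a human reader would skim, so a domain publishing
# automated stories at volume can dominate the long tail of what the
# model consumes. All domains, pages and rankings here are invented.

from dataclasses import dataclass


@dataclass
class Page:
    domain: str
    text: str
    rank: int  # position in the simulated search results


SEARCH_RESULTS = [
    Page("reliable-news.example", "Election coverage...", 1),
    Page("public-broadcaster.example", "Fact-checked report...", 2),
    Page("reliable-news.example", "Analysis piece...", 3),
    # A propaganda network publishing at volume tends to saturate the
    # lower-ranked results that only machines ever read.
    *[Page("pravda-clone.example", f"Narrative #{i}...", 3 + i) for i in range(1, 40)],
]


def retrieve(results, depth):
    """Return every page ranked within the top `depth`, as a pipeline might."""
    return [page for page in results if page.rank <= depth]


def poisoned_share(pages, bad_domain="pravda-clone.example"):
    """Fraction of retrieved pages that come from the poisoned domain."""
    return sum(page.domain == bad_domain for page in pages) / len(pages)


# A human skims roughly the top 3 results; an automated pipeline may read 40.
for depth in (3, 10, 40):
    pages = retrieve(SEARCH_RESULTS, depth)
    print(f"depth={depth:>2}: {poisoned_share(pages):.0%} of pages are poisoned")
```

The toy numbers only make one point: a human skimming the top three results never encounters the poisoned domain, while an automated pipeline reading forty results ingests it dozens of times.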
Innis said it is important for chatbot users to interrogate an answer’s sources to check that the information is credible.
“Checking the sourcing of information and how the answer is constructed is increasingly important and should be a part of your due diligence,” he said.
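Innis did not spell out what that due diligence looks like in practice. As a rough, hypothetical sketch of one form it could take, the snippet below checks a chatbot answer’s cited domains against a ratings table, in the spirit of NewsGuard’s reliability scores; the domains, scores and threshold are all invented for illustration.

```python
# Hypothetical due-diligence check: flag cited domains whose credibility
# rating falls below a threshold. The ratings are invented stand-ins for a
# NewsGuard-style score; a real check would query a maintained source.

from urllib.parse import urlparse

# Invented ratings on a 0-100 scale (higher = more credible).
RATINGS = {
    "abc.net.au": 95,
    "reuters.com": 95,
    "pravda-clone.example": 5,
}


def audit_citations(urls, threshold=60):
    """Sort an answer's cited URLs into trusted, suspect and unknown."""
    report = {"trusted": [], "suspect": [], "unknown": []}
    for url in urls:
        domain = urlparse(url).netloc.removeprefix("www.")
        score = RATINGS.get(domain)
        if score is None:
            report["unknown"].append(url)  # unrated: verify by hand
        elif score < threshold:
            report["suspect"].append(url)
        else:
            report["trusted"].append(url)
    return report


citations = [
    "https://www.abc.net.au/news/some-report",
    "https://pravda-clone.example/narrative-42",
    "https://unrated-blog.example/post",
]
print(audit_citations(citations))
```

An answer leaning on suspect or unknown domains is not necessarily wrong, but it is exactly the case where Innis’s advice to dig into the sourcing applies.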
‘There’s no easy fix’
Michael McConville, the founder of free.studio and former leader of Cummins&Partners, last week called for more honesty and accountability in political advertising.
He told B&T that although this issue is different, and apparently the work of external actors, it highlights the need for “ground rules that we hold ourselves to account”.
McConville said it plays into broader issues around fakery that have emerged with AI technology, and that regulators are struggling to keep up.
“The one thing I’m fearful of, and you can see emerging with anyone who’s got kids, is that they’re seeing things on TikTok as if they’re facts when they aren’t,” he said.
“Previously, there might be some evasiveness, there might be some awkwardness, but you felt on the whole you could identify a scam, or you could identify if a politician was lying. You felt like you had a decent enough radar for that. I think the scary bit now is that it is far more complicated for any of us professionals to work out. We’re routinely looking at our LinkedIn feeds filled with people getting excited about having created images of things that just aren’t real.”
Mat Baxter, the former global leader of IPG’s Initiative and Huge, said that Russia has a history of deploying such tactics to learn about overseas populations and to influence their elections.
“I’m not overly surprised at it…but the manifestation of it in a news site is quite an interesting development,” he said. “Using content to analyse an audience and their opinions and persuasions has been used by brands for years. What is different here is that this is being used for nefarious purposes rather than legitimate or more ethically acceptable reasons.”
He said that state media operations in Russia and China have been used as propaganda tools in the past.
“Many people in America argue Fox News is a Republican persuasion vehicle. A lot of people argue MSNBC and CNN are Democratic persuasion vehicles,” he added. “They call them by different labels, but that’s what they might be perceived as at their core.”
Pravda Australia may not have succeeded this time around, but it is still operating, and it represents a new way for foreign state actors to try to influence Australian voters.