A new report released by independent tech research organisation Reset.Tech Australia has exposed serious flaws in social media platforms’ capacity to enforce their content moderation policies.
Entitled ‘How do Platforms Respond to Electoral Process Misinformation? An experimental evaluation from the lead-up to Australia’s referendum’, the report investigates the way TikTok, Facebook, and X (formerly Twitter) deal with misinformation related to Australian electoral processes.
The study monitored 99 pieces of content containing clear electoral misinformation across TikTok, Facebook, and X for one week before the content was reported and for two weeks afterwards, to compare organic and reported moderation rates.
The misinformation included claims that Australian elections had been or would be rigged, that ballots had been or would be stolen, or that the Voice referendum vote was invalid or illegal.
While each platform’s community guidelines claimed to address this type of content — TikTok pledged to remove it, Facebook said it would reduce its prevalence, and X said it would either remove or label it — the reality is starkly different.
The analysis revealed some concerning trends in the way platforms addressed misinformation:
Organic content moderation largely ineffective: The study found that platforms inadequately enforced their community guidelines and did not fulfil their commitments under the Australian Code of Practice on Disinformation and Misinformation. Without user reporting, content removal or labelling rates were a mere 4 per cent for TikTok and Facebook, and zero for X.
Reporting made little difference: Although reporting improved TikTok’s removal or labelling rate to 33 per cent, it had no effect on Facebook and X.
Growing reach of misinformation: Electoral process misinformation continued to gain traction at alarming rates even after reporting. The rate of growth on Facebook did decelerate after reporting but accelerated on TikTok and X. This suggests platforms are failing to de-amplify misinformation.
Random approach to moderation leaves significant gaps: The moderation process appears to be ‘whack-a-mole’ rather than systematic, with no substantive difference between the type of content that was removed or labelled and the content that was left up.
Reset.Tech Australia executive director Alice Dawkins said: “Our findings are a major wake-up call for both the tech industry and regulators. Electoral process misinformation poses a serious threat to democratic processes, and platforms need to take immediate action to improve their content moderation practices.”
According to the Australian Code of Practice on Disinformation and Misinformation, platforms have an obligation to protect Australian users from content that poses “a credible and serious threat to democratic, political and policy-making processes”.
However, the Code does not place specific obligations on platforms, such as requiring them to remove or demote content relating to electoral process misinformation and disinformation. As such, how platforms choose to fulfil their responsibilities is largely up to them.
“Our report highlights the negligible capacity of major platforms to respond to obvious threats to electoral integrity in Australia. It calls into question the effectiveness of the existing self-regulatory framework and highlights the urgent need for stronger regulatory oversight,” said Dawkins.
“We intend to run this research again before the referendum vote, with a larger sample size. We hope platforms significantly step up their performance in the interim.”
You can access the report here.