In this op-ed, Justine Sywak, director of the public relations & misinformation unit at Shannon Behaviour Change, writes about the need for systemic action to address misinformation across social media platforms, following the abandonment of Australia’s proposed misinformation legislation. The Bill would have ushered in an unprecedented level of transparency, holding big tech to account to prevent and minimise the spread of harmful misinformation and disinformation online.
The abandonment of Australia’s proposed misinformation legislation highlights a pivotal societal issue: the unchecked spread of false information, especially on social media, poses severe risks to democracy, public health, and social trust. While critics labelled the bill as a potential threat to free speech, its rejection leaves platforms with little accountability, perpetuating an environment where misinformation thrives. The debate underscores the urgent need for systemic solutions to address this pervasive problem.
Social media platforms, by their very design, prioritise engagement over accuracy. Algorithms amplify sensationalist and divisive content, often privileging emotionally charged misinformation over nuanced truths. Studies consistently show that misinformation spreads faster than fact-based content, gaining traction before corrections can mitigate the damage.
The State of Change Report revealed the societal impact of misinformation, with 80 per cent of Australians agreeing that misinformation and disinformation erode public trust, and three out of four Australians believing there should be legislation to combat its spread. Meanwhile, a significant proportion of people report frequently encountering information they suspect to be misleading or false. These findings emphasise the critical need for interventions to restore faith in public discourse and institutional credibility.
Misinformation does not exist in a vacuum; social media amplifies its reach and deepens its impact in several key ways.
Personalisation algorithms foster echo chambers, exposing users to content that aligns with their existing beliefs while excluding diverse perspectives or factual corrections.
This dynamic intensifies polarisation, as seen during the COVID-19 pandemic when anti-vaccine misinformation thrived in these silos, undermining public health efforts. Additionally, false information during elections poses a direct threat to democracy, with 47 per cent of Australians believing social media misinformation influences electoral outcomes. Such narratives, often propagated by foreign actors and political operatives, distort voter confidence and public perception.
Moreover, misinformation erodes trust in credible sources, a concern reflected in the 80 per cent of Australians who recognise its corrosive effect on public trust, highlighting the urgent need for intervention.
The rejection of the misinformation bill underscores a broader tension between regulating harmful content and preserving free expression. Critics of the bill argued that granting the Australian Communications and Media Authority (ACMA) power over digital platforms risked excessive censorship. Opposition Leader Peter Dutton hailed the bill’s withdrawal as a “win for free speech,” framing it as a victory for democratic principles.
Yet, this argument overlooks the distinct responsibility of platforms to balance freedom of expression with harm reduction.
Social media companies have consistently resisted meaningful reform, prioritising user engagement—and, by extension, profit—over public safety. Voluntary measures, such as community guidelines and fact-checking partnerships, have proven inadequate. The absence of enforceable standards enables platforms to shirk accountability, perpetuating a cycle of inaction.
Recognising the urgency of this issue, Shannon Behaviour Change has established Australia’s first agency misinformation unit to assist corporations, governments, and non-profits in combating the spread of false narratives. This innovative approach focuses on equipping organisations with the tools to identify, address, and mitigate the impact of misinformation. The unit exemplifies the proactive measures needed to counteract misinformation in the absence of legislative solutions.
The failure of Australia’s proposed legislation should not signal the end of efforts to address online misinformation but rather underscore the need for more nuanced strategies that balance free speech concerns with platform accountability.
Future initiatives should focus on establishing collaborative oversight bodies that bring together government, industry, and civil society to ensure fair and balanced regulation.
Platforms must also be required to disclose how their algorithms prioritise content, fostering transparency and public accountability. Equally important is enhancing media literacy through education campaigns that empower citizens to critically evaluate the information they encounter online.
Additionally, accountability should be incentivised by imposing penalties on platforms that fail to address harmful content while rewarding proactive efforts that prioritise accuracy. These measures collectively offer a constructive path forward in combating misinformation without compromising democratic principles.
The abandonment of the misinformation bill reflects the complexities of legislating in the digital age, where the line between harmful content and free expression is often blurred.
However, the status quo, leaving platforms unregulated, poses an even greater threat to societal well-being. Combating misinformation requires multi-faceted strategies beyond government action.
As misinformation continues to spread unchecked, the onus lies on policymakers, platforms, and citizens to tackle this pervasive challenge. Social media may have democratised communication, but without accountability, it risks becoming democracy’s undoing.