In a move aimed at addressing growing concerns over online hate speech, Meta, Google, TikTok, and X have each pledged to European lawmakers that they will ramp up their efforts to prevent and remove illegal hate speech on their platforms.
This comes as part of the European Commission’s revised “Code of Conduct on Countering Illegal Hate Speech Online Plus,” which was integrated into the Digital Services Act (DSA) on Monday.
The Code of Conduct update marks a renewed commitment from major tech companies to tackle illegal content, providing a framework for platforms to “demonstrate their compliance” with the DSA’s obligations on content moderation.
Among the signatories of the updated code are some of the largest names in social media, including Facebook, Instagram, TikTok, YouTube, Twitch, LinkedIn, Snapchat, and Microsoft-hosted services. They have all agreed to various measures that include greater transparency around how hate speech is detected, allowing third-party monitors to evaluate how content is flagged, and ensuring that “at least two-thirds of hate speech notices” are reviewed within 24 hours.
“Hatred and polarisation are threats to EU values and fundamental rights and undermine the stability of our democracies. The internet is amplifying the negative effects of hate speech,” EU Commissioner Michael McGrath said in a statement. “We trust this Code of conduct+ will do its part in ensuring a robust response.”
While the goal of reducing hate speech online is one that many can agree on, the irony lies in the fact that these same companies have long faced criticism for not doing enough to combat the issue. Facebook, Instagram, and X (formerly Twitter) have been accused of fostering environments that allow hate speech to flourish, often responding too slowly or inadequately to reports of illegal content.
Meta recently came under fire after announcing it would scrap its small army of independent fact-checkers and replace them with X-style ‘community notes’, a move it says will “dramatically reduce the amount of censorship” on its platforms.
In a video, Meta boss Mark Zuckerberg said that Facebook and Instagram would prioritise free speech and that third-party fact checkers “had become too politically biased and destroyed more trust than they created”.
The community notes system shifts the responsibility for flagging lies and other harmful content to users of Meta’s platforms (Facebook, Instagram, Threads and WhatsApp), and has been heavily criticised by online safety advocates. There is no timeline for its introduction in Australia.
Meta’s global affairs chief Joel Kaplan said in a blog post that Meta had “seen this approach work on X”, adding: “We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing—and one that’s less prone to bias.”
As European lawmakers work to hold these platforms to higher standards, the question remains: will these voluntary pledges be enough to reduce the spread of illegal hate speech online, or will they be an instance of big tech signing up to a code they ultimately have no incentive to follow through on?