Meta is ditching its small army of independent fact checkers and replacing them with X (formerly Twitter)-style ‘community notes’, a move that aims to “dramatically reduce the amount of censorship” on its platforms, founder Mark Zuckerberg has announced.
In a video, Meta boss Zuckerberg said that Facebook and Instagram would prioritise free speech and that third-party fact checkers “had become too politically biased and destroyed more trust than they created”.
The community notes system shifts the responsibility for flagging lies and other harmful content to users of Meta’s platforms (Facebook, Instagram, Threads and WhatsApp) and has been heavily criticised by online safety advocates. There is no timeline for its introduction in Australia.
Meta’s global affairs chief Joel Kaplan said in a blog post that Meta had “seen this approach work on X”, adding: “We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing—and one that’s less prone to bias.”
Angie Drobnic Holan, director of the International Fact-Checking Network (IFCN) at Poynter, told The Verge that community notes-style moderation is merely “window dressing so that platforms can say they’re doing something” and that the real losers will be users “overwhelmed with false information”.
Meta said it also plans to overhaul its content policies, which govern what topics are sensitive, illegal or fair game, and will remove restrictions around issues such as gender and immigration that Zuckerberg believes “are out of touch with mainstream discourse”.
It will also dial down its content filters, a change that Zuckerberg admitted will lead to the platforms catching “a lot less bad stuff”.
“Finally, we’re going to work with President Trump to push back on governments around the world that are going after American companies and pushing to censor more,” Zuckerberg added.
Meta’s change of tack arrives only days after global affairs chief Nick Clegg announced he was leaving, to be replaced by his deputy Kaplan, a former deputy chief of staff for President George W. Bush, and a little more than a week before Donald Trump is inaugurated as the 47th US President. Meta donated $US1 million ($AU1.61 million) towards Trump’s inauguration.
The news was welcomed by Trump, who said that Meta “had come a long way”, describing Zuckerberg as “very impressive”. When asked if the new approach was due to his previous threats against Meta, Trump said: “probably”.
Advertisers & Brand Safety
Marketers are less likely to be enamoured with a content moderation policy that follows in the footsteps of X. Since Elon Musk bought Twitter, he has slashed its safety teams and reinstated accounts that were previously banned for hate speech.
Advertisers have since left that platform in droves, wiping out nearly half of Twitter’s advertising revenue, though some have returned following US Congressional hearings into advertiser boycotts.
That scenario is unlikely to play out with Meta, argues Brian Wieser, the CEO of advisory and consulting firm Madison and Wall.
“While it’s very likely that some marketers will shift spending towards other social media or digital platforms which are less prone towards including mis- and disinformation (let alone sponsoring content that they or their consumers might consider hateful), as a practical matter, most marketers will probably continue to support Meta with their budgets much as they would have had this change not been made, at least so long as consumer usage and advertising performance remain intact,” he wrote in a note to the firm’s subscribers.
Wieser believes the change may offer a limited opportunity for publishers to promote their brand safety credentials and claw back some of the advertising dollars that have been siphoned off to social media platforms in recent years.
Another beneficiary, he argues, may be the very person Zuckerberg is following down the ‘free speech’, ‘anti-censorship’ path.
“Interestingly, by conveying how Meta will take the same approach that X has around brand safety, this action will probably have (whether intended or not) the effect of helping X in its efforts to capture share of marketer budgets,” Wieser wrote.
Regulation & Global Reaction
Meta’s new direction throws down the gauntlet to regulators and governments that have been cracking down on hate speech, harmful content and misinformation distributed on social media platforms.
Last year, Australia’s eSafety Commissioner Julie Inman Grant had a high-profile legal battle with X’s Elon Musk over the removal of videos of a stabbing in Sydney’s west. Inman Grant had demanded that X take down the footage for users around the world. However, Federal Court Justice Geoffrey Kennett ruled that the eSafety Commissioner could not control what people in other countries see.
Although Zuckerberg did not call out any specific government, Meta’s move to relax content moderation flies in the face of the direction Australia is heading, where policymakers are trying to place greater responsibility on platforms to police harmful content and hate speech.
Australia’s Minister for Communications Michelle Rowland, without naming Meta specifically, expressed concern about misinformation.
In a statement shared with B&T, she said: “Misinformation can be harmful to people’s health, wellbeing, and to social cohesion. Misinformation in particular is complex to navigate and hard to recognise.
“Access to trusted information has never been more important. That’s why the Albanese Government is supporting high quality, fact-checked information for the public through ongoing support to ABC, SBS and AAP.”
Australia’s communications and media regulator, the ACMA, said that Meta’s announced changes to its fact-checking processes relate only to the United States.
“The ACMA understands on advice from Meta that there is no immediate plan to make changes to the third-party fact checking program in Australia,” an ACMA spokesperson said.
“Meta is a signatory to the voluntary Australian Code of Practice on Disinformation and Misinformation, which is administered by DIGI. Under that voluntary code, Meta has committed to a range of measures in its latest transparency report including initiatives with third-party fact-checking organisations to inform its processes to combat misinformation.”
Another government keenly observing developments is the UK, whose Department for Science, Innovation and Technology said of social media platforms: “The UK’s Online Safety Act will oblige them to remove illegal content and content harmful to children here in the UK, and we continue to urge social media companies to counter the spread of misinformation and disinformation hosted on their platforms.”
Zuckerberg said that fact checking and “censorship” on Meta platforms had “gone too far”.
“Governments and legacy media have pushed to censor more and more. A lot of this is clearly political, but there’s also a lot of legitimately bad stuff out there: drugs, terrorism, child exploitation. These are things that we take very seriously, and I want to make sure that we handle [them] responsibly,” he said.
“So we built a lot of complex systems to moderate content, but the problem with complex systems is they make mistakes. Even if they accidentally censor just 1 per cent of posts, that’s millions of people, and we’ve reached a point where it’s just too many mistakes and too much censorship. The recent elections also feel like a cultural tipping point towards, once again, prioritising speech.”