Meta has long positioned itself as a platform that champions free expression, but yesterday’s reversal of its content moderation policies and elimination of its third-party fact-checking services have reignited debates about the balance between free speech and platform safety. B&T has spoken to a range of industry leaders about Meta’s new direction and whether it changes anything for them, the internet and ordinary people.
Meta CEO Mark Zuckerberg has defended the move as a “return” to Meta’s core values of free expression, emphasising the importance of open dialogue in fostering societal progress. However, this policy shift has raised concerns about the spread of misinformation and the safety of marginalised communities.
By transitioning to a Community Notes system, modelled after X’s, Meta aims to empower users to moderate content collaboratively. Although it is not clear when Community Notes will replace fact checkers in Australia, critics have argued that placing the burden of fact-checking on users could lead to unchecked harmful content, impacting brand safety and user trust.
Mary Proulx, co-founder of Bread Agency, highlighted both the positive and negative potential outcomes of this policy shift. She acknowledged that “freedom of expression is integral to society”, but cautioned that reduced moderation could allow harmful content to proliferate unchecked.
“Less moderation on social media can quickly take an ugly turn,” Proulx stated, particularly for marginalised communities, who may face increased harassment.
“Just take a look at the comment sections on certain Meta posts—community conversations can easily spiral out of control. With the removal of third-party fact-checking and reduced moderation tools, this is bound to escalate. Whether this will be for better or worse remains to be seen.”
Proulx warned brands that “community moderation, management and engagement are about to become even more critical components of social media strategies”.
“This responsibility can no longer fall to the social media intern,” she said. “It’s a vital, strategic role that ensures conversations remain constructive and brand reputation stays intact.”
Similarly, We Are Social Australia CEO Suzie Shaw expressed concerns over the increased risks for brand safety under the new Community Notes-based approach.
“This means doubling down on social listening and community management to identify potential risks early,” Shaw said.
She also highlighted the need for proactive moderation to ensure misinformation does not take hold, emphasising that certain industries, such as government and health, might be more affected than others.
“We expect these changes to affect clients differently, based on the industry they’re in. Regardless, we’ll work closely with our clients to help them navigate these changes with confidence,” she said.
“This means sharing insights into what Community Notes means for their campaigns and audiences. Our teams will work to minimise risks while taking advantage of new opportunities to engage.
“Meta needs to have an open dialogue with advertisers to ensure concerns around brand safety are addressed, and offer solutions to pre-empt and manage issues arising with campaigns or within communities. For brands, it’s important that Meta offers clear ways to address incorrect or unfair flags, ensuring campaigns aren’t unfairly impacted.”
‘Spreading lies faster’
Kate Smither, a leading strategist and owner of The Tall Planner, was more scathing of the policy shift, likening it to allowing misinformation to spread unchecked.
“What Mark Zuckerberg has done in announcing the removal of moderation from the Meta platform is simply ensure that the lie can travel even faster,” she argued.
Smither criticised the removal of safeguards, suggesting it compromises the platform’s mission of “bringing the world closer together”. She warned brands that without moderation, Meta could become a platform where “marketers are essentially having to buy unpredictability for their brand”.
Vanessa Lyons, CEO of ThinkNewsBrands, echoed these concerns, emphasising that the decision could lead to a “torrent” of misinformation.
“For advertisers, social media will become an even less secure place for brands with even lower trust levels than it had previously,” she said.
“Brand suitability is now more important than ever, and marketing and media communities must begin to divert advertising spend to credible media environments. In doing so, they can distance themselves from the rising tide of harmful content while driving greater engagement and improving ROI in their campaigns.”
Former Danish Prime Minister Helle Thorning-Schmidt, who sits on Meta’s oversight board, has also expressed concerns about the proliferation of harmful content.
“What we are very concerned about is gender rights, LGBTQ+ rights and trans people’s rights on the platforms,” Thorning-Schmidt told the BBC’s Today programme.
“We are seeing many instances where hate speech can lead to real-life harm, so we will be watching that space very carefully.”
Media buyers wade in
Sam Terminelli, head of strategy at indie media agency Magic, said from a media buying perspective, the impact of Meta’s decision largely depends on the type of content that emerges from the change.
“We’ve seen brands pull spend from platforms due to brand safety concerns before, so if the content quality declines, we could see a similar shift away from Meta,” he said.
Other media buyers and social media leads from holding companies spoke to B&T on the condition of anonymity.
One practice lead said they would closely monitor how the user experience is affected by Meta’s relaxation of content moderation rules.
“I think it is likely to impact the user experience, and that’s what we’re looking at for the advertisers,” the practice lead said.
“You’ve got other platforms that use Community Notes. Linda Yaccarino over at X has come out and said, ‘welcome to the fold’, but I don’t know if that move went so well for them.”
Since Elon Musk bought X, the platform has taken decisive steps to slash its safety teams, which has impacted its advertising offering. In just two years, half of the platform’s advertising dollars have been pulled, and very few large brands maintain long-term relationships with it.
A senior media planner told B&T that this is the gamble Meta has taken when it comes to large clients.
“Today, many, if not all, of our tier one, two and three advertisers have long-term relationships with Meta. If these changes impact the user experience, drive people away or change the makeup of who stays on the platform, our clients will be asking whether or not a long-term relationship with Meta is right,” the media planner said.
“When you bucket this in with the age-gating legislation and other challenges, advertisers will now be thinking about where Meta fits in their media mix.”
As Meta’s announcements only apply to the US at this stage, Australian planners and buyers are forming an internal view on Meta’s direction of travel and will discuss it with clients. Until the changes roll out in Australia, they will adopt a ‘wait and see’ approach to whether media plans need adjusting.
On a positive note
Not all reactions were negative. Julien Dupuche, director of client solutions at Hello Social, viewed the shift as a positive step toward democratising moderation.
“It means more power sits with real people to manage their own communities,” he explained, suggesting that less corporate control could lead to a more representative online discourse.
“What we’re seeing is a slight winding back of heavy AI-led moderation towards a more democratic people power moderation model,” said Dupuche.
“Big corporations and their robots will have less influence, allowing communities to flourish naturally… It’s really interesting to see one of the world’s biggest tech companies saying that too much tech isn’t the answer.”
James Arvanitakis, director of the Forrest Research Foundation and former pro-vice-chancellor of research and graduate studies at Western Sydney University, said that the decision by Meta to remove fact-checkers is one that “should hardly surprise us”.
“We have seen a significant backlash against mediated content as many believe there is a liberal or progressive bias by the universities and not-for-profits who moderate the content,” Arvanitakis said.
“The move against fact-checkers is one that should not be seen as anything sinister (even if it is a concern). Rather, it should be understood from the perspective that many believe the more information that is out there, the better. This approach to a ‘marketplace’ of ideas is very much the tradition from where much of the internet has emerged – and this is Mark Zuckerberg’s justification.”
‘Thomas Jefferson of the digital age’
As B&T reported yesterday, Meta’s move is widely being viewed as a response to Donald Trump’s political preferences and the approach embraced by Elon Musk and X.
“Many are lamenting that this will lead to a new wild west of the virtual world. I somewhat differ because it assumes that the current system is effective. Unfortunately, there is little evidence that this is the case,” Arvanitakis said.
“What this means is that the contemporary education system must evolve to redefine what ‘literacy’ means to develop knowledge amongst the next generation of users on how to decipher online content. Banning access to these platforms does not promote literacy, only more vulnerability in the long term.”
Emma Briant, associate professor, news and political communication, Monash University, said that anyone surprised by these developments learned nothing from Zuckerberg’s role in the Cambridge Analytica affair.
“While they may pay lip service to the policy concerns of the moment, tech oligarchs run their companies to maximise profits and minimise costs, not to be society’s protector or mediate a neutral, democratic town hall,” she said.
“This applies to all of them, not just Elon Musk. There is nothing to stop tech oligarchs weaponising their platforms to suit political objectives when the moment is right.”
Briant believes that fact-checking is only part of tackling contemporary propaganda and that policymakers place too much faith in labelling false claims.
“In so doing they miss an opportunity to take on the larger problems of a manipulative technology infrastructure hiding behind claims of neutrality and free speech,” she said, adding that the rise of tech oligarch influence in the White House, including Zuckerberg, should leave ordinary citizens “very concerned”.
One of the world’s leading tech journalists, authors and podcasters, Kara Swisher, described Zuckerberg’s announcement as “the most cynical suck up move I’ve ever seen him make, and he has made a lot of them over the years”.
In an interview with the BBC, Swisher pointed out that only four years ago Zuckerberg used all sorts of excuses to remove Donald Trump from Meta’s platforms because he thought Trump was finished, but now he “wants to kiss up to Donald Trump and catch up to Elon Musk in that act”.
She said the announcement was riddled with “mendacious” assertions that fact checkers are liberal because there were plenty who were conservative, and that using X’s Community Notes as an exemplar was “laughable” as “Twitter has become a cesspool because they don’t have enough moderation”.
“Facebook does whatever is in its self-interest… [Zuckerberg] always pretends and acts like the Thomas Jefferson of the digital age and he’s not,” she said on the BBC’s Today programme. “These people only use the free speech excuse when it suits their business interest.”
Meta’s new direction raises essential questions about balancing free speech with the need for responsible content management. As the company phases out third-party fact-checking and shifts toward a user-driven moderation model, the industry will be closely watching how these changes impact both platform safety and the broader digital landscape.
Reporting by Aimee Edwards and Arvind Hickman.