In the wake of Meta’s decision to abandon fact checkers and dial down content moderation, Australia’s eSafety commissioner has warned tech companies that operate in Australia that they are bound by Australian law and will be held to account.
First it was Elon Musk; now it is Mark Zuckerberg who has caught the attention of Australia’s eSafety commissioner, Julie Inman Grant.
Less than a day after Meta CEO Zuckerberg said the company was scaling back efforts to police hate speech and dialling down its content restriction policies for Facebook, Instagram, WhatsApp and Threads, Inman Grant fired off a reminder to technology companies about their responsibilities to police online harm.
“eSafety is aware of Meta’s recent statement regarding changes to its Community Standards, particularly its Hate Speech Policy,” she said.
“Details of these are a matter for Meta, however I reiterate that, as an entity operating within Australia, the company is required to comply with Australian law, including the Online Safety Act.
“Under the Act, eSafety is empowered as the national online safety educator and coordinator to protect Australians from online harms including child sexual abuse, terrorism, cyberbullying, image-based abuse and serious adult cyber abuse directed against individuals.”
Inman Grant is encouraging Australians to report potentially harmful content to the platforms where it is being distributed. If social media companies fail to respond, they may fall foul of the Online Safety Act.
“We have a strong track record – close to 90 per cent – in securing removal of violative content and conduct and are committed to maintaining these levels of harm remediation for Australians, regardless of the platform concerned,” Inman Grant said.
“eSafety’s reporting schemes are complemented by systemic powers, including Australia’s world-first enforceable industry codes and standards, which clearly require global tech giants to tackle the most harmful online content, like child sexual abuse material and pro-terror content.”
‘Automation no substitute for people’
Part of Meta’s planned change is to focus its automated content moderation systems on removing “high-severity violations” of its terms and illegal content.
In announcing the changes via video, Zuckerberg said that Meta’s automated moderation efforts had been too broad and were making too many mistakes, with around one per cent of posts being removed erroneously and users being suspended unjustly.
Inman Grant said she would welcome further information on the long-term efficacy of Meta’s new automated approach.
“While we welcome any move by tech companies to make their platforms safer and encourage investment in innovative solutions that reduce risk, it is important to note that automated systems are not a substitute for real people who can evaluate content contextually,” she said.
“eSafety is driven by its core corporate values and takes principled, balanced and fair regulatory actions to protect Australians. We will continue holding all technology companies to account for online harms and safety inadequacies. For all the change that is happening in the world, our commitment to these values remains consistent.”
ACMA & Government’s response
The eSafety Commissioner’s remit is to protect Australians from cases of online harm, but this does not extend to hate speech or misinformation.
That falls under the purview of Australia’s communications and media regulator, ACMA.
Earlier today (8 January) ACMA told B&T that it understood Meta’s announcement about changes to its fact-checking processes relates only to the US.
“The ACMA understands on advice from Meta that there is no immediate plan to make changes to the third-party fact checking program in Australia,” ACMA said.
“Meta is a signatory to the voluntary Australian Code of Practice on Disinformation and Misinformation, which is administered by DIGI. Under that voluntary code, Meta has committed to a range of measures in its latest transparency report including initiatives with third-party fact-checking organisations to inform its processes to combat misinformation.”
The Australian government has also expressed concerns about misinformation.
“Misinformation can be harmful to people’s health, wellbeing, and to social cohesion. Misinformation in particular is complex to navigate and hard to recognise,” Minister for Communications Michelle Rowland said.
“Access to trusted information has never been more important. That’s why the Albanese Government is supporting high quality, fact-checked information for the public through ongoing support to ABC, SBS and AAP.”