Facebook’s regional vice president says the company is making “good progress” on identifying hate speech before it is reported. But reports show the company is also undertaking a major overhaul of the algorithms it uses to detect hate speech.
Flagged by Facebook’s vice president for APAC, Dan Neary, as one of the American social media conglomerate’s major social issues, the prevalence of hate speech was recently added as a new metric in the company’s quarterly community standards enforcement report.
In third-quarter 2020, Facebook said hate speech accounted for 0.1 to 0.11 per cent of all posts. Neary explained that this means the average user experiences 10 to 11 views of hate speech that violates Facebook’s content policies for every 10,000 views of content.
Furthermore, the detection rate for hate speech identified by Facebook’s AI before it is reported to the company is now at 94.7 per cent—up from 80.5 per cent a year ago and just 24 per cent in 2017.
The detection rate on Instagram currently sits slightly higher at 94.8 per cent, Neary said at Facebook’s APAC Press Day, describing the increase in detection as “good progress” for the social media conglomerate.
According to information provided to B&T by Facebook, the social media company’s proactive detection rates for violating content are up from Q2 across most policies, including hate speech, due to improvements in AI and “expanding our detection technologies to more languages”.
“On Instagram in Q3, we took action on 6.5 million pieces of hate speech content (up from 3.2 million in Q2), about 95 per cent of which was proactively identified (up from about 85 per cent in Q2),” a Facebook spokesperson told B&T.
The increase in Facebook’s proactive detection rate for hate speech on Instagram was driven in part by improvements made to its proactive detection technology for English, Arabic and Spanish languages, and expanding automation technology.
The spokesperson said Facebook expects fluctuations in these numbers “as we continue to adapt to COVID-related workforce challenges”.
Algorithm overhaul in the works
The news comes as The Washington Post reports Facebook is embarking on a major overhaul of its algorithms that detect hate speech, which would reverse so-called “race-blind” practices.
The Post reports these practices have resulted in Facebook being more vigilant about removing slurs aimed at White users, while flagging and deleting lower-risk posts by people of colour on the platform.
According to internal documents obtained by The Post, the overhaul, known as the ‘WoW Project’, is in its early stages and involves re-engineering the company’s automated moderation systems.
This reportedly aims to improve Facebook’s ability to detect and automatically delete hateful language deemed “the worst of the worst”, which includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents.
The changes also affect the policing of contemptuous comments about “Whites”, “men” and “Americans”, which are now treated as “low-sensitivity”, The Post reports.
Facebook (and now Instagram) has long banned hate speech, which it defines as “a direct attack on people based on what we call protected characteristics”.
These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. Facebook defines attack as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation.
Facebook says it has “thousands of people” in its community operations team who provide all-hours support across the globe and in more than 40 languages. Both automatic and manual systems work to flag and block accounts used for spam and inappropriate content.
According to The Post, before the overhaul the company’s algorithms and policies did not make a distinction between groups that were “more likely to be targets of hate speech” versus those that have not been historically marginalised.
Comments like “White people are stupid” were treated the same as anti-Semitic or racist slurs, The Post reports.
The move reportedly comes in response to internal pressure from employees within Facebook, and after years of criticism by civil rights advocates that content from Black users is “disproportionately removed”, particularly, The Post reports, when they use the platform to describe experiences of discrimination.
B&T has reached out to Facebook for further comment on the WoW Project.