Facebook Making “Good Progress” On Detecting Hate Speech, Says APAC VP

Edited by B&T Magazine



Facebook’s regional vice president says the company is making “good progress” on identifying hate speech before it is reported. Meanwhile, reports indicate Facebook is undertaking a major overhaul of the algorithms it uses to detect hate speech.

Dan Neary, Facebook’s vice president for APAC, has flagged hate speech as one of the American social media conglomerate’s major social issues, and its prevalence was recently incorporated as a new metric in the company’s quarterly community standards enforcement report.

In the third quarter of 2020, Facebook said hate speech accounted for 0.10 to 0.11 per cent of content views. Neary explained that this means the average user sees 10 to 11 views of hate speech that violates Facebook’s content policies for every 10,000 views of content.

Furthermore, the share of hate speech that Facebook’s AI identifies before it is reported to the company is now 94.7 per cent, up from 80.5 per cent a year ago and just 24 per cent in 2017.

The detection rate on Instagram currently sits slightly higher at 94.8 per cent, Neary said at Facebook’s APAC Press Day, describing the increase in detection as “good progress” for the social media conglomerate.

According to information provided to B&T by Facebook, the social media company’s proactive detection rates for violating content are up from Q2 across most policies, including hate speech, due to improvements in AI and “expanding our detection technologies to more languages”.

“On Instagram in Q3, we took action on 6.5 million pieces of hate speech content (up from 3.2 million in Q2), about 95 per cent of which was proactively identified (up from about 85 per cent in Q2),” a Facebook spokesperson told B&T.

The increase in Facebook’s proactive detection rate for hate speech on Instagram was driven in part by improvements to its proactive detection technology for English, Arabic and Spanish, and by expanded automation technology.

The spokesperson said Facebook expects fluctuations in these numbers “as we continue to adapt to COVID-related workforce challenges”.

Algorithm overhaul in the works

The news comes as The Washington Post reports Facebook is embarking on a major overhaul of its algorithms that detect hate speech, which would reverse so-called “race-blind” practices.

The Post reports these practices have resulted in Facebook being more vigilant about removing slurs thrown at White users, while it flagged and deleted lower-risk posts by people of colour on the platform.

According to internal documents obtained by The Post, the overhaul is known as the ‘WoW Project’, is in its early stages, and involves re-engineering the company’s automated moderation systems.

It reportedly aims to improve Facebook’s ability to detect and automatically delete hateful language, called “the worst of the worst”, which includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents.

The overhaul also covers the policing of contemptuous comments about “Whites”, “men” and “Americans”, which are now treated as “low-sensitivity”, The Post reports.

Facebook (and now Instagram) has long banned hate speech, which it defines as “a direct attack on people based on what we call protected characteristics”.

These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. Facebook defines an attack as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation.

Facebook says it has “thousands of people” in its community operations team who provide all-hours support across the globe and in more than 40 languages. Both automatic and manual systems work to flag and block accounts used for spam and inappropriate content.

According to The Post, before the overhaul the company’s algorithms and policies did not make a distinction between groups that were “more likely to be targets of hate speech” versus those that have not been historically marginalised.

Comments like “White people are stupid” were treated the same as anti-Semitic or racist slurs, The Post reports.

The move reportedly comes in response to internal pressure from Facebook employees, and after years of criticism from civil rights advocates that content from Black users is “disproportionately removed”, particularly, The Post reports, when they use the platform to describe experiences of discrimination.

B&T has reached out to Facebook for further comment on the WoW Project.



