Facebook’s regional vice president says the company is making “good progress” on identifying hate speech before it is reported, but reports indicate the company is also undertaking a major overhaul of the algorithms it uses to detect hate speech.
Dan Neary, Facebook’s vice president for APAC, has flagged the prevalence of hate speech as one of the American social media conglomerate’s major social issues, and it was recently incorporated as a new metric in the company’s quarterly community standards enforcement report.
In third-quarter 2020, Facebook said hate speech accounted for 0.1 to 0.11 per cent of all posts. Neary explained that this means the average user experiences 10 to 11 views of hate speech that violates Facebook’s content policies for every 10,000 views of content.
Furthermore, the detection rate for hate speech identified by Facebook’s AI before it is reported to the company is now at 94.7 per cent—up from 80.5 per cent a year ago and just 24 per cent in 2017.
The detection rate on Instagram currently sits slightly higher at 94.8 per cent, Neary said at Facebook’s APAC Press Day, describing the increase in detection as “good progress” for the social media conglomerate.
According to information provided to B&T by Facebook, the social media company’s proactive detection rates for violating content are up from Q2 across most policies, including hate speech, due to improvements in AI and “expanding our detection technologies to more languages”.
“On Instagram in Q3, we took action on 6.5 million pieces of hate speech content (up from 3.2 million in Q2), about 95 per cent of which was proactively identified (up from about 85 per cent in Q2),” a Facebook spokesperson told B&T.
The increase in Facebook’s proactive detection rate for hate speech on Instagram was driven in part by improvements made to its proactive detection technology for English, Arabic and Spanish languages, and expanding automation technology.
The spokesperson said Facebook expects fluctuations in these numbers “as we continue to adapt to COVID-related workforce challenges”.
Algorithm overhaul in the works
The news comes as The Washington Post reports Facebook is embarking on a major overhaul of its algorithms that detect hate speech, which would reverse so-called “race-blind” practices.
The Post reports these practices have resulted in Facebook being more vigilant about removing slurs thrown at White users while flagging and deleting lower-risk posts by people of colour on the platform.
According to internal documents obtained by The Post, the overhaul is known as the ‘WoW Project’, is in its early stages, and involves re-engineering the company’s automated moderation systems.
This reportedly aims to improve Facebook’s ability to detect and automatically delete hateful language, called “the worst of the worst”, which includes slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community and Jews, according to the documents.
The overhaul also revisits the policing of contemptuous comments about “Whites”, “men” and “Americans”, which are now treated as “low-sensitivity”, The Post reports.
Facebook (and, in turn, Instagram) has long banned hate speech, which it defines as “a direct attack on people based on what we call protected characteristics”.
These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. Facebook defines attack as violent or dehumanising speech, statements of inferiority, or calls for exclusion or segregation.
Facebook says it has “thousands of people” in its community operations team who provide all-hours support across the globe and in more than 40 languages. Both automatic and manual systems work to flag and block accounts used for spam and inappropriate content.
According to The Post, before the overhaul the company’s algorithms and policies did not make a distinction between groups that were “more likely to be targets of hate speech” versus those that have not been historically marginalised.
Comments like “White people are stupid” were treated the same as anti-Semitic or racist slurs, The Post reports.
The move reportedly comes in response to internal pressure from Facebook employees, and follows years of criticism from civil rights advocates that content from Black users is “disproportionately removed”, particularly, The Post reports, when they use the platform to describe experiences of discrimination.
B&T has reached out to Facebook for further comment on the WoW Project.