AFTER several months of being taken to task over its handling and policing of hate speech, Facebook has promised transparency and accountability in enforcing its policies against such malicious content, an Asia-Pacific executive of the social media giant said.

“One of the things that we’ve done for the last seven quarters is we released a community standards enforcement report… and one of the things that we did as part of that report is we announced a new metric: and the new metric is around the prevalence of hate speech,” Dan Neary, vice president of the global business group at Facebook Asia Pacific, said in a Dec. 8 press conference.

“Now, keep in mind the overall goal of this report is to just make sure that we are being transparent and accountable about our efforts to enforce policies,” he added.

In November, Facebook published its first report on the prevalence of hate speech, saying that out of every “10,000 views of content” in the third quarter of 2020, there were about “10 to 11 views of hate speech.”

A Reuters report quoted Facebook as saying that the company “took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified, compared to 22.5 million in the previous quarter.”

Taking action means removing content, putting a warning over it, disabling accounts, or reporting it to external agencies.

Mr. Neary said the company has made strides in proactively identifying hate speech, with its detection rate rising from 23.6% in 2017 to just “under 95%” now.

“So really good progress and this is where a lot of the AI really comes into play in terms of our ability to capture this or identify this and remove it from the platform at scale,” he said, adding that the company has recently started applying the same metrics to Instagram.

Facebook is also said to have been “embarking on a major overhaul of its algorithms that detect hate speech,” according to a Dec. 5 story by The Washington Post, citing internal documents.

The overhaul involves re-engineering the social media company’s automated moderation systems to get better at detecting and automatically removing hateful language and content considered “the worst of the worst,” which includes “slurs directed at Blacks, Muslims, people of more than one race, the LGBTQ community, and Jews,” according to the documents obtained by the media outlet.

While Mr. Neary emphasized the company’s accountability and transparency in its efforts to curb hate speech on the network, some Facebook employees may not consider those efforts enough. A BuzzFeed News article posted on Dec. 11 pointed to the recent resignation of a Facebook data scientist who wrote in their farewell note: “With so many internal forces propping up the production of hateful and violent content, the task of stopping hate and violence on Facebook starts to feel even more Sisyphean than it already is.”

“It also makes it embarrassing to work here,” the note read.

The same scientist presented internal Facebook data and projections suggesting that, contrary to Facebook’s figure of 10 to 11 views of hate speech per 10,000 views of content, “roughly 1 of every 1,000 pieces of content violates the company’s rules on hate speech.”

When it comes to deleting such content, the data scientist said Facebook was “deleting less than 5% of all the hate speech posted on Facebook.”

The BuzzFeed News article noted that Guy Rosen, the vice president of integrity at Facebook, disputed those calculations. — Zsarlene B. Chua