For the first time, Facebook, the world's largest social media company, on Thursday, November 19 disclosed figures on hate speech on its platform, stating that out of every 10,000 content views in the third quarter, 10 to 11 were of hate speech.
The social media giant has faced incessant criticism over its policing of abuses, particularly in the run-up to the November US presidential election. This summer, several civil rights groups organised an advertising boycott to increase pressure on Facebook to act against hate speech.
In its quarterly content moderation report, Facebook said it took action on 22.1 million pieces of hate speech content in the third quarter, about 95% of which was proactively identified, compared with 22.5 million in the previous quarter.
The company defines 'taking action' as removing content, covering it with a warning, disabling accounts, or escalating it to external agencies.
The company agreed to reveal the hate speech figures, calculated by examining a sample of content seen on the platform, and to submit itself to an independent audit of its enforcement record.
The Anti-Defamation League, one of the groups behind the boycott, said that Facebook's new metric still lacked sufficient context for a proper assessment of its performance.
"We still don't know from this report exactly how many pieces of content users are flagging to Facebook ~CHECK~ whether or not any action was taken," ADL spokesman Todd Gutnick said, adding that data matters. He said as "there are many forms of hate speech that are not being removed, even after they're flagged."
Facebook's head of safety, Guy Rosen, said that between March 1 and election day, November 3, the company took down more than 265,000 pieces of content from Facebook and Instagram in the United States for violating its voter interference policies. In October, Facebook said it was updating its hate speech policy to ban content that denies the Holocaust.
Facebook said it took action against 19.2 million pieces of violent and graphic content in the third quarter, up from 15 million in the second. On Instagram, it took action on 4.1 million pieces of violent and graphic content.
Earlier this week, Facebook CEO Mark Zuckerberg, along with Twitter's CEO, Jack Dorsey, was questioned by the US Congress on the companies' content moderation practices, political bias, and decisions about violent speech.
The company has also come under scrutiny recently for permitting large Facebook groups to share false election claims and violent rhetoric.
In a blog post, Facebook said that the COVID-19 pandemic continued to disrupt its content-review workforce.
More than 200 Facebook content moderators recently said that the social media giant is forcing them to return to the office during the ongoing COVID-19 pandemic because the company's attempt to rely on automated systems has "failed".
The workers wrote an open letter to Facebook's CEO, Mark Zuckerberg, its Chief Operating Officer, Sheryl Sandberg, and the heads of Accenture and CPL, two companies to which Facebook subcontracts content moderation.
"Without our work, Facebook is unusable," the letter read. "Your algorithms cannot spot satire. They cannot sift journalism from misinformation. They cannot respond quickly enough to self-harm or child abuse. We can."