To make Facebook a platform free of online abuse, hate speech, and graphic photos and videos, the social media giant took down more than 3 billion fake accounts between October and March, The Sydney Morning Herald reported.
California-headquartered Facebook has been on the receiving end of criticism for more than a year for allegedly peddling fake news. The company drew especially harsh condemnation after the Christchurch terror attack in New Zealand, where the killer live-streamed 17 minutes of his shooting rampage at two mosques.
In its latest report, the company said it took down these accounts before they had a chance to become "active" users of the social network. It acknowledged a "steep increase" in the creation of abusive fake accounts over the last couple of months. Amid the allegations of rising fake accounts, the company added that most fake accounts were taken down "within minutes" of their creation, though a few slipped through the cracks.
Of the 2.4 billion monthly active users the company boasts, around 5% are fake accounts, up from 3-4% in the previous six-month report. This is clear evidence that, despite Facebook's push to bring down fake accounts, it has so far fallen short. While the company has ramped up its efforts to detect fake accounts with the help of Artificial Intelligence (AI), online abuse and hate speech have increased. To curb fake news, spam, and other objectionable material, Facebook's detection has to improve.
This rising number is haunting the company at a time when it has been struggling with challenge after challenge, from peddling fake news to its role during elections, which has been linked to violence in India, the US, and Myanmar, among other countries.
Facebook has thousands of people assigned to review every piece of content that goes on the platform, including photos, status updates, comments, and videos. However, some content escapes both the AI and human eyes, which has fostered the impression that the company is politically biased.
The company's co-founder Chris Hughes has criticised Facebook for failing to thwart the disinformation spread through the platform. Hughes has further called on federal antitrust regulators to investigate the giant and penalise it if found guilty.
The company recently announced the introduction of a "One Strike" policy for its Facebook Live service, aimed at users who break its rules. Under the policy, a user who violates the company's most serious policies will be barred from using Facebook Live for 30 days, starting from their first offence. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time.