After Failing To Take Down Videos Of Christchurch Mosque Shooting, Facebook Implements “One Strike” Policy For Live Videos
Months after the Christchurch mosque shootings in New Zealand, in which a gunman live-streamed the massacre for 17 minutes, Facebook has introduced a new set of rules to tighten its Live feature.
In a blog post on May 14, the company announced a “one strike” policy for Facebook Live. Under the new rules, any user who violates the company’s most serious policies will be barred from using Facebook Live for a set period, such as 30 days, starting from their first offence. For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live.
The company further explained that the restriction on Facebook Live will apply to users who violate its Dangerous Organisations and Individuals policy, which covers organised hate, mass or serial murder, terrorist activity, human trafficking, and organised violence.
The social media giant has also started using artificial intelligence (AI) to tackle violent and dangerous content on the platform. However, the system has performed poorly in non-English-speaking countries; when tested in Myanmar, the detection system did not work. The AI system also failed to detect the live video of the Christchurch mass shooting: the broadcast was first reported to the company only after 12 minutes, and Facebook failed to block roughly 20% of the copies of the video that were subsequently uploaded.
Acknowledging these failures, the company said it would invest US$7.5 million in research to detect edited versions of the original video, which continued to circulate hours after the original was taken down. It has also expressed interest in collaborating with more research partners in future to combat fake news. Guy Rosen, Facebook’s VP of integrity, noted in the blog post that despite the company’s various video- and audio-matching techniques, more research was needed to rid the platform of violent content.
Stressing the high number of edited versions of the live stream that surfaced on the platform in the aftermath of the attack, Facebook admitted it had difficulty detecting these videos.
Facebook has faced severe criticism for its inability to take down the 17-minute live stream of the Christchurch attack, which later went viral. Notably, the company’s move came a day before leaders of various countries, along with tech giants such as Microsoft, Twitter, Google, Amazon, and Facebook, signed the Christchurch Call to Action to crack down on terrorist content online. New Zealand Prime Minister Jacinda Ardern chaired the meeting in Paris.