Facebook, Twitter and YouTube are under fire for failing to stop the spread of the Christchurch mosque attack videos. The platforms have been blamed for allowing the proliferation of extremist content that incites hate and violence.
Facebook, in particular, has been blamed for missing the live stream of the attack on Friday and for beginning to take down the already widespread videos only after alerts from New Zealand police. The live feed, which ran for at least 17 minutes, has drawn outrage across countries, with users blaming Facebook for providing a fertile environment in which extremist subcultures can thrive and organize.
This comes not long after Mike Schroepfer, Facebook's Chief Technology Officer, boasted about the platform's investment in artificial intelligence to moderate content. One example Schroepfer gave was that its AI systems can differentiate between pictures of broccoli and pictures of marijuana.
The example has become a source of mockery, even though Facebook maintains that it has proper policing standards in place and is proactive in finding posts that violate its terms of service. The platform also says it has hired tens of thousands of human moderators to help police its sites.
Facebook has also said that it removed 1.5 million videos of the attack in the first 24 hours, 1.2 million of which were blocked at upload. New Zealand Prime Minister Jacinda Ardern has asked social media companies to take responsibility for how their platforms were used in both the lead-up to and the aftermath of the Christchurch attack.
Ardern, in a Monday press conference, said that Facebook's ability to block 1.2 million videos at upload shows that the company has the power to take a very direct approach to speech that incites violence or hate. As of the time of publishing, both Twitter and Google, which owns YouTube, had not commented on the New Zealand attack.