Facebook claims that its most recent attempts to remove terrorism-related content from its platform using artificial intelligence have begun to “bear fruit.”
BBC News reports that Facebook’s new content-filtering artificial intelligence software is making great progress in removing terrorism-related posts from the social network. Facebook claims that 99 percent of ISIS- and Al Qaeda-related terrorism content is now removed automatically, before being flagged by users. The company’s efforts to fight extremist content using artificial intelligence were first announced by CEO Mark Zuckerberg in June.
“We want to find terrorist content immediately, before people in our community have seen it. Already, the majority of accounts we remove for terrorism we find ourselves,” wrote Monika Bickert, Facebook’s director of global policy management, and Brian Fishman, Facebook’s counterterrorism policy manager, in a post. “But we know we can do better at using technology — and specifically artificial intelligence — to stop the spread of terrorist content on Facebook. Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course.”
Facebook still employs human content moderators to review posts on its platform, but states that the removal of terrorist content is now done “primarily” by automated systems. Facebook’s content filter now uses photo- and video-matching software to detect frequently reposted terrorism-related content, automatically removing these posts. Facebook also uses text-based machine learning to detect posts that may pose a risk, analyzing how frequently certain words and phrases appear in a post.
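The two techniques described above can be illustrated with a minimal sketch. This is not Facebook’s actual system (which the article does not detail); it assumes a hypothetical set of hashes of previously removed media and a hypothetical list of flagged terms, and uses an exact cryptographic hash where a production system would use a perceptual hash that survives re-encoding:

```python
import hashlib

# Hypothetical hashes of media already removed as terrorist propaganda
# (illustrative stand-in for a real media-matching database).
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously removed propaganda video").hexdigest(),
}

# Hypothetical flagged terms for the text-frequency check (illustrative only).
FLAGGED_TERMS = {"attack", "martyr", "recruit"}

def matches_known_media(media_bytes: bytes) -> bool:
    """Flag a re-upload whose hash matches previously removed content.

    Real systems use perceptual hashing so re-encoded copies still match;
    an exact SHA-256 comparison is used here purely for illustration.
    """
    return hashlib.sha256(media_bytes).hexdigest() in KNOWN_BAD_HASHES

def text_risk_score(post: str) -> float:
    """Score a post by the frequency of flagged terms among its words."""
    words = post.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word.strip(".,!?") in FLAGGED_TERMS)
    return hits / len(words)

# An exact re-upload of known content is caught immediately,
# while novel text gets a frequency score that a real pipeline
# could compare against a review threshold.
print(matches_known_media(b"previously removed propaganda video"))  # True
print(text_risk_score("come join us"))  # 0.0
```

In practice the text score would feed a trained classifier rather than a fixed threshold, and matched media would be blocked before the post is ever published, consistent with the article’s description.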
Facebook claims that once a post is flagged as being related to terrorism, 83 percent of such content is removed along with any reposting of the content that may follow; in some cases, terrorism-related posts were deleted before they were ever published to the website. Facebook has focused on Al Qaeda and ISIS recently because they pose the current “biggest threat globally,” but says that including other groups in its content-filtering system is not as simple as some may think.
“A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda,” wrote Bickert and Fishman in another blog post. “[But] we hope over time that we may be able to responsibly and effectively expand the use of automated systems to detect content from regional terrorist organizations too.”
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan_ or email him at [email protected].