How Facebook is stopping ISIS and Al Qaeda from posting extremist content

Facebook is using artificial intelligence to help with a number of different issues, the latest being spotting posts of people who may be suicidal in an effort to prevent them from harming themselves.

The company is also using machine learning and other automated techniques to help spot and delete terrorist-related content from ISIS and Al Qaeda, claiming that 99 percent of the ISIS and Al Qaeda content it removes is detected by its own systems before any user reports it.

"We do this primarily through the use of automated systems like photo and video matching and text-based machine learning," wrote Monika Bickert, Facebook's Head of Global Policy Management, and Brian Fishman, head of counterterrorism policy, in a post.
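Facebook has not published implementation details, but "photo and video matching" of this kind generally means hashing media that reviewers have already removed and checking new uploads against that hash database. A minimal sketch, assuming exact-match hashing (production systems use perceptual hashes that tolerate re-encoding and cropping); the byte strings and function name are placeholders, not Facebook's actual data or API:

```python
import hashlib

# Hypothetical database of hashes of previously removed extremist media.
known_hashes = {
    hashlib.sha256(b"previously-removed-image-bytes").hexdigest(),
}

def is_known_match(upload_bytes: bytes) -> bool:
    """Return True if an upload exactly matches previously flagged media."""
    return hashlib.sha256(upload_bytes).hexdigest() in known_hashes

# A byte-for-byte re-upload of flagged content is caught immediately;
# unrelated media passes through for other checks.
print(is_known_match(b"previously-removed-image-bytes"))  # True
print(is_known_match(b"unrelated-photo-bytes"))           # False
```

Because matching happens at upload time, a hit can block the copy before it is ever visible, which is how systems like this remove content "before it goes live."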


Al Qaeda and ISIS have been a priority for the social networking giant because its detection techniques have to be tailored to specific groups.

"A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda," Bickert and Fishman wrote. "Because of these limitations, we focus our most innovative techniques on the terrorist groups that pose the biggest threat globally, in the real-world and online. ISIS and Al Qaeda meet this definition most directly, so we prioritize our tools to counter these organizations and their affiliates."

Once a piece of content is flagged as potentially terror-related, Facebook said, 83 percent of subsequently uploaded copies of that content are removed within one hour of being posted.

Facebook also noted that there have been instances where the content was removed "before it goes live on the site," thanks to systems like photo and video matching and text-based machine learning.
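The article does not describe Facebook's classifiers, but "text-based machine learning" typically means training a supervised model on posts that human reviewers have already labeled. A toy sketch, assuming a naive Bayes classifier over word counts; the training examples are invented placeholders, and in practice such a model only flags posts for human review rather than deciding removal on its own:

```python
import math
from collections import Counter

# Hypothetical reviewer-labeled training data (placeholder text):
# 1 = policy-violating, 0 = benign.
train = [
    ("join our violent cause today", 1),
    ("support the violent cause", 1),
    ("family photos from our holiday", 0),
    ("great recipe for dinner tonight", 0),
]

def fit(examples):
    """Count word frequencies and class priors for naive Bayes."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.split())
    return counts, priors

def predict(text, counts, priors):
    """Score both classes with Laplace smoothing; return the likelier label."""
    vocab = set(counts[0]) | set(counts[1])
    best, best_score = None, float("-inf")
    for label in (0, 1):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for word in text.split():
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts, priors = fit(train)
print(predict("violent cause propaganda", counts, priors))  # 1 (flag for review)
print(predict("holiday dinner photos", counts, priors))     # 0
```

This also illustrates why, as the post notes, a model trained on one group's propaganda may not transfer to another: the word statistics it learns are specific to its training data.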

Not working alone

Facebook has acknowledged that it cannot combat terrorist content by itself and has teamed up with experts from around the globe.

Over the summer, Facebook partnered with Microsoft, Twitter and YouTube to form the Global Internet Forum to Counter Terrorism (GIFCT) in an effort to battle terrorist content on their respective platforms.

The company also said it has expanded its partnerships with several anti-terrorism and cyber intelligence organizations around the world, including Flashpoint, the Middle East Media Research Institute (MEMRI), the SITE Intelligence Group and the University of Alabama at Birmingham’s Computer Forensics Research Lab in an effort to "flag Pages, profiles and groups on Facebook potentially associated with terrorist groups for us to review." 


More work to be done

Still, the company has more work to do and noted that deploying artificial intelligence in this manner is "not as simple as flipping a switch." Bickert and Fishman wrote that, depending on the technique used, databases have to be carefully curated or humans have to label the data in order to train the machine-learning models. 

The Global Intellectual Property Enforcement Center (GIPEC), a software company that monitors illegal activity and terror-related social media accounts, found at least one pro-ISIS Facebook account that had not been removed, providing screenshots of the account to Fox News.

For its part, Facebook has acknowledged that it can and will do more. "As we deepen our commitment to combating terrorism by using AI, leveraging human expertise and strengthening collaboration, we recognize that we can always do more," Bickert and Fishman added.

CEO Mark Zuckerberg has also talked about Facebook's strategy for using AI, saying in February it would take "many years" for the system to be fully developed.

Follow Chris Ciaccia on Twitter @Chris_Ciaccia