Google has announced a new effort to clamp down on terrorist content on its YouTube video sharing service.
In a blog post that also appeared as a Financial Times op-ed, Google’s General Counsel Kent Walker wrote that the tech giant is taking a multipronged approach to tackling terrorist content. This includes ramping up the use of technology to find vile videos. Walker explained that Google has used video analysis models to find and assess more than 50 percent of the terrorism-related content it has removed over the past six months. “We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content,” he added.
Additionally, Google is increasing the number of independent experts involved in its “Trusted Flagger” program of partners that help find inappropriate content. “We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants,” Walker said. “This allows us to benefit from the expertise of specialised organisations working on issues like hate speech, self-harm, and terrorism.”
Google also will increase its work with counter-extremist groups to identify content that may be used to radicalize and recruit extremists.
The search giant will also take a harder stance on videos that do not clearly violate its policies but contain inflammatory religious or supremacist content. These videos will appear behind a warning and will not be monetized, recommended or eligible for comments or user endorsements, Walker said.
Google said that it is also increasing its counter-radicalization efforts and is working with tech incubator Jigsaw, formerly Google Ideas, to redirect potential ISIS recruits to anti-terrorist videos via targeted online ads.
Social media expert and President of JRM Comms Jason Mollica told Fox News that Google is smart to take a multifaceted approach to the problem of terrorist content. “It can be very effective,” he said. “I believe that what we will see, maybe six months to a year down the road, is Google saying ‘This is what we were able to do, statistically, with videos, this is what we were able to help the government with.’”
Last week Facebook announced that it had increased its use of artificial intelligence to block terrorist propaganda on the social network.
Follow James Rogers on Twitter @jamesjrogers