
If you try posting copyrighted material on Facebook, such as a music video that isn't yours, odds are good that the service's systems will detect it based on the video file's unique fingerprint. If that fingerprint, or "hash," matches an entry on a known list of copyrighted material, that's it: your video is flagged, and off it goes into the digital ether.
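To make the idea concrete, here's a minimal sketch of hash-based lookup in Python. This is an illustration, not any platform's actual implementation: real systems use perceptual hashes that survive re-encoding, while a cryptographic hash like SHA-256 only catches byte-identical copies. The blocklist entry and file path below are hypothetical.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests for known flagged files.
# Real platforms use perceptual hashes that tolerate re-encoding;
# a cryptographic hash only matches byte-identical copies.
BLOCKLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(path: str) -> str:
    """Compute a SHA-256 digest of the file's bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_flagged(path: str) -> bool:
    """Return True if the upload's fingerprint is on the blocklist."""
    return fingerprint(path) in BLOCKLIST

if __name__ == "__main__":
    # "upload.mp4" is a hypothetical path standing in for a new upload.
    print(is_flagged("upload.mp4"))
```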

And the same is true on a number of other platforms. Most people have heard of YouTube's Content ID program, for example, which scans uploads (including their audio tracks) against an existing database of copyrighted work and flags anything that matches too closely.
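Because a re-encoded or trimmed copy won't hash identically, systems like this depend on fuzzy rather than exact matching. As a toy illustration only (not Content ID's actual algorithm), near-match detection can be as simple as comparing fixed-size fingerprints by Hamming distance, so a slightly altered copy still lands within a threshold of a known entry. The fingerprint values and threshold here are made up.

```python
# Toy near-match lookup: compare 64-bit fingerprints by Hamming
# distance, so slightly altered copies can still match a known entry.
KNOWN_FINGERPRINTS = {0xDEADBEEFCAFEF00D}  # hypothetical database

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def matches_too_closely(fp: int, threshold: int = 5) -> bool:
    """Flag if any known fingerprint is within `threshold` bits."""
    return any(hamming(fp, known) <= threshold for known in KNOWN_FINGERPRINTS)

print(matches_too_closely(0xDEADBEEFCAFEF00F))  # True: differs by one bit
```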

However, new reports suggest that some of the bigger players in social media and content hosting are turning those matching tools from copyright enforcement to content moderation, specifically extremist or exceedingly violent material. While these techniques can't stop a brand-new video from being posted, they can help catch anyone attempting to re-share a video that has already been flagged or removed.

As Reuters reports, Facebook and Google are two companies that are allegedly tuning their content-matching systems to eliminate extremist content. However, neither company is talking about it, or about how, exactly, it decides that this kind of content fits the criteria for removal. Some content is obvious: a beheading, for example. But where does one draw the line between, say, encouraging violence and passionate rhetoric?

It's also unclear whether these companies are relying exclusively on their matching systems to find and remove this kind of content or whether some human review process is used to separate permissible from undesirable content.

It's not that the companies now doing this kind of content matching are shy about policing their platforms. Rather, discussing the methods they use to flag and remove extremist content might give those posting it insight into how to beat the system. And these companies, which all likely have different standards for what's acceptable on their networks, would probably prefer to keep the cat-and-mouse game to a minimum.

The Counter Extremism Project, a non-profit organization, announced earlier this month that it had created a technology specifically designed to help organizations police extremist content on their platforms. While some of the companies now doing this kind of policing have discussed the Counter Extremism Project's tools, we don't know whether any have taken the organization up on its offer.

"President Obama is correct, ISIS videos and postings on the Internet are much too pervasive and accessible. We have known this for a long time, and despite the good intentions of social media companies, the problem has only gotten worse. We believe we now have the tool to reverse this trend, which will significantly impact efforts to prevent radicalization and incitement to violence. The technology now exists to quickly and efficiently remove the most horrific examples of extremist content. We hope everyone will embrace its potential," said Ambassador Mark Wallace, the Counter Extremism Project's CEO, in a statement.

This article originally appeared on PCMag.com.