
Small doses of nudity and graphic violence still make their way onto Facebook, even as the company is getting better at detecting some objectionable content, according to a new report.

Facebook today revealed for the first time how much sex, violence, and terrorist propaganda has infiltrated the platform, and whether the company has successfully taken that content down.

"It's an attempt to open up about how Facebook is doing at removing bad content from our site, so you can be the judge," VP Alex Schultz wrote in a blog post.

The report looks at Facebook's enforcement efforts in Q4 2017 and Q1 2018, and shows an uptick in the prevalence of nudity and graphic violence on the platform. Still, bad content was relatively rare. For instance, content containing nudity received only seven to nine views out of every 10,000 content views.


The prevalence of graphic violence was higher, at 22 to 27 views per 10,000, an increase from the previous quarter that suggests more Facebook users are sharing violent content on the platform, the company said.
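For context, a prevalence figure like this is just a ratio of estimated violating views to total views, scaled to a 10,000-view baseline. The sketch below illustrates that arithmetic only; Facebook's actual estimates come from statistical sampling of views, and the function name and numbers here are hypothetical.

```python
# Minimal sketch of the prevalence arithmetic described above.
# NOTE: hypothetical illustration; Facebook's real figures are derived
# from sampled views, not exact counts.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Estimated views of violating content per 10,000 content views."""
    return violating_views / total_views * 10_000

# Example: 25 views of violating content in a sample of 10,000 total views
print(prevalence_per_10k(25, 10_000))  # 25.0, within the reported 22-27 range
```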

Facebook uses computer algorithms and content moderators to catch problematic posts before they can attract views. During Q1, the social network flagged 96 percent of all nudity before users reported it.
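The 96 percent figure is a proactive-detection rate: of all the violating content Facebook acted on, the share its own systems flagged before any user report. A minimal sketch of that calculation, with hypothetical names and numbers, might look like this:

```python
# Minimal sketch of a proactive-detection rate; names and figures hypothetical.

def proactive_rate(flagged_by_systems: int, flagged_by_users: int) -> float:
    """Percentage of actioned content flagged before any user report."""
    total_actioned = flagged_by_systems + flagged_by_users
    return flagged_by_systems / total_actioned * 100

# Example: 96 of every 100 actioned nudity posts were machine-flagged first
print(proactive_rate(96, 4))  # 96.0
```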

However, Facebook's enforcement data isn't complete. The company refrained from showing how prevalent terrorist propaganda and hate speech are on the platform, saying it couldn't reliably estimate either.

Nevertheless, the company took down almost twice as much content in both categories during this year's first quarter as it did in Q4. But the report also indicates Facebook is having trouble detecting hate speech, and becomes aware of the majority of it only when users report it.

"We have lots of work still to do to prevent abuse," Facebook VP Guy Rosen wrote in a separate post. The company's internal "detection technology" has been efficient at taking down spam and fake accounts, removing them by the hundreds of millions during Q1. However for hate speech, Rosen said, "our technology still doesn't work that well."

The company has been using artificial intelligence to help pinpoint bad content, but Rosen said the technology still struggles to distinguish between a Facebook post pushing hate and one simply recounting a personal experience of it.

Facebook plans to continue publishing enforcement reports, and will refine its methodology for measuring how much bad content circulates on the platform. However, the data should be taken with a grain of salt. One potential flaw is that it doesn't account for any bad content the company may have missed, a problem perhaps most salient in non-English-speaking countries.

Last month, Facebook CEO Mark Zuckerberg faced criticism from civil society groups in Myanmar over the company's failure to stop violent messages from spreading across Facebook Messenger. To address the complaints, Facebook is adding more Burmese-language reviewers to its content moderation efforts.

This article originally appeared on PCMag.com.