Facebook removed 8.7 million user images of child nudity with the aid of software that automatically flags such photographs, the company disclosed on Wednesday.

The company's machine learning tool can identify images that contain both nudity and a child, strengthening enforcement of the social network's ban on photos that show minors in a sexual context.

"We're using artificial intelligence and machine learning to proactively detect child nudity and previously unknown child exploitative content when it's uploaded," Antigone Davis, Facebook's global head of safety, said in a blog post.

"We're using this and other technology to more quickly identify this content and report it to [the National Center for Missing and Exploited Children], and also to find accounts that engage in potentially inappropriate interactions with children on Facebook so that we can remove them and prevent additional harm," Davis added.

Over the last three months, the social network removed 8.7 million pieces of content that violated its policies on child nudity and the sexual exploitation of children, 99 percent of which was taken down before any users had a chance to report it.

The Menlo Park, Calif.-based company, which employs moderators who have backgrounds in online safety, analytics and law enforcement, also collaborates with other organizations and NGOs to stop online child exploitation.

Facebook, which has banned even family photos of lightly clothed children uploaded with "good intentions," is also creating tools for smaller companies to "prevent the grooming of children online," according to the blog post.