“It took a while for our systems to flag that and for fact-checkers to rate it as false. Once the fact-checkers saw it, they were able to rate it within an hour, but it took more than a day for our systems to flag it,” Zuckerberg said during an interview at the Aspen Ideas Festival in Colorado on Wednesday. “That was an execution mistake,” he added.
During the time it took for Facebook's systems to flag the video as false, Zuckerberg explained, the video became more widely distributed than Facebook’s policies should have allowed.
Once the video was identified as fake, Facebook heavily reduced its appearance in users’ News Feeds. However, the tech giant was slammed by Democrats, including former Secretary of State Hillary Clinton, because the video remained visible on a conservative Facebook page.
At the Aspen Ideas Festival, Zuckerberg said that while Facebook wants to improve its ability to identify content such as the doctored Pelosi video, it needs to be careful about how it handles that content.
“I think that what we want to be doing is improving execution, but I do not think we want to go so far towards saying a private company prevents you from saying something that it thinks is factually incorrect, to another person,” he said. “That for me just feels like it’s too far and goes away from the tradition of free expression and being able to say what your experience is through satire and other means.”
Zuckerberg noted that the company is also evaluating how it should handle “deepfake” videos, which are created with artificial intelligence and high-tech tools to yield false but realistic clips.
The Facebook CEO said it might make sense to treat such videos differently from other misinformation such as false news. Facebook has long held that it should not decide what is and isn't true, leaving such calls instead to outside fact-checkers.
Zuckerberg noted that it's worth asking whether deepfakes are a "completely different category" from regular false statements. He said developing a policy on these videos is "really important" as AI technology grows more sophisticated.
Facebook, like other social media companies, does not have a specific policy against deepfakes, whose potential threat has emerged only in the last couple of years.
Fox News’ Christopher Carbone and the Associated Press contributed to this article.