Facial recognition algorithms developed before the outbreak struggle to identify people wearing masks or face coverings, according to a new study from the U.S. Commerce Department’s National Institute of Standards and Technology (NIST).
NIST looked at 89 commercial algorithms and found that even the best had error rates of around 5 percent when trying to match a masked individual’s face with their unmasked appearance – up from a normal rate of 0.3 percent when matching two unmasked photos of the same person. Otherwise “competent” algorithms failed up to 50 percent of the time.
“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said Mei Ngan, an NIST scientist and an author of a report on the study. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks.”
The study found, unsurprisingly, that masks that cover more of the face increase the likelihood that an algorithm will fail – and in some cases the artificial intelligence (AI) could not even determine that a masked face was actually a face.
But the study looked only at “one-to-one” matching – comparing two photos of the same person, as when matching an individual with their passport photo, driver’s license or mugshot. That is also the type of facial recognition technology used to unlock a smartphone – which gave headaches to Apple Face ID users earlier this year.
It did not take into account “one-to-many” matching, which would attempt to see whether a single image matched anyone in a known database of individuals.
The study, however, noted that the one-to-one results are likely indicative of a one-to-many algorithm’s performance. That could mean facial recognition technology deployed by law enforcement to monitor protests and other gatherings is having a reduced effect – although investigators used a number of other techniques to apprehend dozens of suspected rioters and arsonists around the country earlier this summer. In some of those cases, it was easily recognizable tattoos and social media accounts, not computer algorithms, that helped authorities identify suspects.
The researchers also digitally applied masks to their existing database of portraits, rather than using new photographs of people wearing actual masks.
And they said that, due to time constraints, they did not do enough testing to determine how much impact a mask’s color had on the AI – but they noted that black masks appeared to have more of an effect than light blue ones.
But newer algorithms may be better at identifying masked individuals, according to researchers.
“Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind,” Ngan said.
Those newer algorithms were designed with masks in mind, according to NIST, and they may focus more on an individual’s eyes and eyebrows than on the face as a whole.
The Associated Press contributed to this report.