Face masks are one of the best defenses against the spread of COVID-19, but their growing adoption is having a second, unintended effect: breaking facial recognition algorithms.
Wearing face masks that adequately cover the mouth and nose causes the error rate of some of the most widely used facial recognition algorithms to spike to between 5 percent and 50 percent, a study by the US National Institute of Standards and Technology (NIST) has found. Black masks were more likely to cause errors than blue masks, and the more of the nose covered by the mask, the harder the algorithms found it to identify the face.
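The error rates NIST reports are, in essence, failure-to-match rates: the share of genuine, same-person comparisons that an algorithm refuses to accept at a fixed decision threshold. A minimal sketch of that calculation is below; the similarity scores are made up for illustration and are not drawn from the NIST data.

```python
import numpy as np

def false_non_match_rate(genuine_scores: np.ndarray, threshold: float) -> float:
    """Fraction of genuine (same-person) comparisons scoring below the match
    threshold -- the kind of error rate the NIST study reports."""
    return float(np.mean(genuine_scores < threshold))

# Purely illustrative scores, NOT NIST data: masked probes tend to produce
# lower similarity scores against the enrolled photo, so more of them fall
# under the fixed operating threshold and the error rate climbs.
rng = np.random.default_rng(0)
unmasked = rng.normal(loc=0.85, scale=0.05, size=10_000)
masked = rng.normal(loc=0.65, scale=0.12, size=10_000)

threshold = 0.72  # arbitrary operating point for this sketch
print(f"FNMR without masks: {false_non_match_rate(unmasked, threshold):.1%}")
print(f"FNMR with masks:    {false_non_match_rate(masked, threshold):.1%}")
```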
“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” said Mei Ngan, an author of the report and NIST computer scientist. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”
Facial recognition algorithms such as those tested by NIST work by measuring the distances between features in a target’s face. Masks reduce the accuracy of these algorithms by removing most of these features, although some still remain. This is slightly different to how facial recognition works on iPhones, for example, which use depth sensors for extra security, ensuring that the algorithms can’t be fooled by showing the camera a picture (a danger that is not present in the scenarios NIST is concerned with).
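To make the "measuring distances between features" idea concrete, here is a toy sketch, not NIST's or any vendor's actual method: each face is reduced to a handful of 2D landmark points, the template is the set of pairwise distances between them, and two faces are compared only over the measurements they share. The landmark names and coordinates are invented for the example.

```python
import numpy as np

def pairwise_distances(points: dict[str, tuple[float, float]]) -> dict[tuple[str, str], float]:
    """Distance between every pair of detected landmarks."""
    names = sorted(points)
    out = {}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pa, pb = np.array(points[a]), np.array(points[b])
            out[(a, b)] = float(np.linalg.norm(pa - pb))
    return out

def similarity(face_a: dict, face_b: dict) -> tuple[float, int]:
    """Compare the measurements both faces have in common, and report how many
    there were -- fewer shared measurements means a weaker comparison."""
    da, db = pairwise_distances(face_a), pairwise_distances(face_b)
    shared = sorted(set(da) & set(db))
    if not shared:
        return 0.0, 0
    va = np.array([da[k] for k in shared])
    vb = np.array([db[k] for k in shared])
    score = float(1.0 - np.mean(np.abs(va - vb) / np.maximum(va, vb)))
    return score, len(shared)

enrolled = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 62),
            "mouth_left": (38, 82), "mouth_right": (62, 82), "chin": (50, 100)}
masked = {"left_eye": (31, 41), "right_eye": (69, 39)}  # lower face occluded

print(similarity(enrolled, enrolled))  # all 15 measurements available
print(similarity(enrolled, masked))    # only 1 measurement left to compare
```

The point of the sketch is the last line: once the mask removes the nose, mouth, and chin, almost nothing is left to measure, so whatever score comes out carries far less information.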
Although there’s been plenty of anecdotal evidence about face masks thwarting facial recognition, the study from NIST is particularly definitive. NIST is the government agency tasked with assessing the accuracy of these algorithms (along with many other systems) for the federal government, and its rankings of different vendors are extremely influential.
Notably, NIST’s report only tested a type of facial recognition known as one-to-one matching. This is the procedure used in border crossings and passport control scenarios, where the algorithm checks to see if the target’s face matches their ID. This is different to the sort of facial recognition system used for mass surveillance, where a crowd is scanned to find matches with faces in a database. This is called a one-to-many system.
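The distinction is easy to see in code. The sketch below assumes faces have already been converted into fixed-length embedding vectors by some model; the cosine-similarity comparison and the 0.6 threshold are illustrative choices, not anything specified by NIST.

```python
import numpy as np

def _cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.6) -> bool:
    """Border-control style verification: does this face match this one ID photo?"""
    return _cosine(probe, enrolled) >= threshold

def one_to_many(probe: np.ndarray, gallery: dict[str, np.ndarray],
                threshold: float = 0.6) -> str | None:
    """Surveillance-style identification: which enrolled identity, if any,
    best matches this face? Every gallery entry is another chance to err."""
    best_id, best_score = None, threshold
    for identity, enrolled in gallery.items():
        score = _cosine(probe, enrolled)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```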
Although NIST’s report doesn’t cover one-to-many systems, these are generally considered more error prone than one-to-one algorithms. Picking out faces in a crowd is harder because you can’t control the angle or lighting on the face, and the resolution is generally reduced. That suggests that if face masks are breaking one-to-one systems, they’re likely breaking one-to-many algorithms with at least the same, and probably greater, frequency.
This matches reports we’ve heard from inside government. An internal bulletin from the US Department of Homeland Security earlier this year, reported by The Intercept, said the agency was concerned about the “potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems.”
For privacy advocates, this will be welcome news. Many have warned about the rush by governments around the world to embrace facial recognition systems, despite the chilling effects such technology has on civil liberties and the widely recognized racial and gender biases of these systems, which tend to perform worse on anyone who is not a white male.
Meanwhile, the companies that build facial recognition tech have been rapidly adapting to this new world, designing algorithms that identify faces using just the area around the eyes. Some vendors, like leading Russian firm NtechLab, say their new algorithms can identify individuals even if they’re wearing a balaclava. Such claims are not entirely trustworthy, though: they usually come from internal data, which can be cherry-picked to produce flattering results. That’s why third-party agencies like NIST provide standardized testing.
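The vendors don’t publish the details of these mask-aware algorithms, but one plausible sketch of the “area around the eyes” approach is to crop a periocular band before computing the face template, so nothing below the eyes ever enters the comparison. The eye coordinates here are assumed to come from a separate detector, and the margin factor is an arbitrary choice for the example.

```python
import numpy as np

def periocular_crop(image: np.ndarray, left_eye: tuple[int, int],
                    right_eye: tuple[int, int], margin: float = 0.6) -> np.ndarray:
    """Crop a band around the eyes, sized relative to the distance between them,
    so the resulting template never depends on the (possibly masked) nose and mouth."""
    (lx, ly), (rx, ry) = left_eye, right_eye      # eye positions as (x, y) pixels
    eye_dist = max(abs(rx - lx), 1)
    pad = int(margin * eye_dist)
    top = max(min(ly, ry) - pad, 0)               # rows span the eye line +/- pad
    bottom = min(max(ly, ry) + pad, image.shape[0])
    left = max(min(lx, rx) - pad, 0)              # columns span both eyes +/- pad
    right = min(max(lx, rx) + pad, image.shape[1])
    return image[top:bottom, left:right]
```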
NIST says it plans to test specially tuned facial recognition algorithms for mask wearers later this year, along with probing the efficacy of one-to-many systems. Despite the problems caused by masks, the agency expects the technology will persevere. “With respect to accuracy with face masks, we expect the technology to continue to improve,” said Ngan.