Microsoft claims its facial recognition technology just got a little less awful.
Earlier this year, a study by MIT researchers found that tools from IBM, Microsoft, and Chinese company Megvii could correctly identify light-skinned men with 99-percent accuracy. But they misidentified darker-skinned women as often as one-third of the time.
That’s extremely troubling. Women of color already face discrimination. Now imagine a computer incorrectly flagging an image at an airport or in a police database, and you can see how dangerous those errors could be.
Microsoft’s software performed poorly in the study. On Tuesday, the company said it was closer to fixing the problem.
New improvements cut error rates for men and women with darker skin by a factor of 20, Microsoft said in a blog post, and cut error rates for all women by a factor of nine.
If Microsoft’s numbers hold up, that’s a step toward less biased facial recognition.
But that won’t quell other privacy concerns. Recently, the ACLU delivered a petition asking Amazon to stop providing facial recognition tools to the government — an especially pressing concern considering ICE’s efforts to track down immigrants.
The ultimate dystopian scenario? China, where officials want to build a nationwide surveillance system called “Sharp Eyes,” according to the Washington Post. The goal is to “track where people are, what they are up to, what they believe and who they associate with.” In April, the Chinese government used facial recognition technology to pick one man out of 60,000 people at a concert.
But hey, at least we can unlock our phones with our faces.