1 like 0 dislike
ago in Climate Change by (170 points)

The claim that facial recognition technology misidentifies people of color more frequently is accurate. The issue came to light after multiple studies examined how AI security systems identify people of different appearances; researchers then expanded their scope to the facial recognition systems in our phones, which borrow from those security systems.

The issue gained steam after MIT Media Lab's Gender Shades project in 2018, which tested facial recognition systems from IBM, Microsoft, and Face++ across genders and skin tones; the study found error rates of 0.8% for light-skinned men but up to 34% for dark-skinned women.

More evidence comes from the National Institute of Standards and Technology (NIST), which states that it "found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied."

I feel that this issue may stem from the data sets these AI systems are trained on, which would point to human oversight failures and major biases among the leadership of current AI companies.

1 Answer

0 like 0 dislike
ago by (150 points)

I also found this claim to be true, and came across further implications that make facial recognition's racial bias and inaccuracy even more concerning.

Beyond inconveniencing and alienating users of color when it fails at everyday tasks like unlocking a phone or applying a funny filter, facial recognition's bias also puts people of color at greater risk of being misidentified by surveillance employed by law enforcement. (It also allows agencies like ICE to automatically track and target people of color.)

According to "Biased Technology: The Automated Discrimination of Facial Recognition Technology," an article by the ACLU of Minnesota, "Technology does not exist outside of the biases and racism that are prevalent in our society. Studies show that facial recognition is least reliable for people of color, women, and nonbinary individuals. And that can be life-threatening when the technology is in the hands of law enforcement."

This excerpt tracks with the original poster's response and adds another layer of urgency, as it touches on real-world implications. 

https://www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition 

Another article, from the University of Calgary, also supports this claim, quoting law professor Christian:

“There is this false notion that technology unlike humans is not biased. That’s not accurate,” says Christian, PhD. “Technology has been shown (to) have the capacity to replicate human bias. In some facial recognition technology, there is over 99 per cent accuracy rate in recognizing white male faces. But, unfortunately, when it comes to recognizing faces of colour, especially the faces of Black women, the technology seems to manifest its highest error rate, which is about 35 per cent.”  

This quote directly reinforces the statistics given in the original claim's explanation, strengthening its validity.

https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology 

True
