The claim that facial recognition technology misidentifies people of color more frequently is accurate. The issue came to light after multiple studies examined how AI security systems identify people of different demographic groups, and researchers then expanded beyond that to the facial recognition systems in our phones, which borrow from those security systems.
The issue gained momentum after MIT Media Lab’s Gender Shades project in 2018, which tested commercial facial analysis systems from IBM, Microsoft, and Face++ across gender and skin tone; the study found error rates of 0.8% for lighter-skinned men but up to 34% for darker-skinned women.
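To make the comparison concrete, here is a minimal Python sketch of how per-group error rates like these are computed; the records, group names, and labels below are hypothetical toy data, not the actual Gender Shades benchmark:

```python
# Minimal sketch: per-group error rates for a classifier's predictions.
# The records are hypothetical; Gender Shades used the Pilot Parliaments
# Benchmark, not this toy data.
from collections import defaultdict

# Each record: (demographic_group, true_label, predicted_label)
results = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, predicted in results:
    totals[group] += 1
    if truth != predicted:
        errors[group] += 1

# Report the error rate for each demographic group separately,
# which is what exposes disparities a single aggregate rate hides.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.1%} error rate over {totals[group]} samples")
```

Disaggregating by group is the key step: a system can report a low overall error rate while performing far worse on a minority of its test subjects.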
More evidence comes from the National Institute of Standards and Technology (NIST), which states that it “found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied”.
I believe this issue likely stems from the datasets these AI systems are trained on: if the training data underrepresents darker-skinned faces, the resulting models will perform worse on them, which in turn points to lapses in human oversight and to biases among the decision-makers at current AI companies.
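One way to check for that kind of skew is to audit a training set’s demographic composition before training. A minimal sketch follows; the file name `train_metadata.csv` and the `skin_tone` field are assumptions for illustration, since real datasets label demographics differently, if at all:

```python
# Minimal sketch: audit the demographic composition of a training set.
# The metadata file and "skin_tone" column are hypothetical assumptions.
import csv
from collections import Counter

def composition(metadata_path: str) -> Counter:
    """Count training examples per skin-tone category."""
    counts = Counter()
    with open(metadata_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["skin_tone"]] += 1
    return counts

if __name__ == "__main__":
    counts = composition("train_metadata.csv")  # hypothetical file
    total = sum(counts.values())
    for category, n in counts.most_common():
        print(f"{category}: {n} images ({n / total:.1%} of training set)")
```

A heavily lopsided distribution in a report like this is exactly the sort of signal that human reviewers would need to catch before the model ships.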