I also found this claim to be true, and discovered further implications that make facial recognition's racial bias and inaccuracy even more concerning.
Beyond inconveniencing and alienating users of color when the technology fails at everyday tasks like unlocking a phone or applying a filter, AI facial recognition's bias also puts people of color at greater risk of being misidentified by surveillance systems used by law enforcement. (It also allows agencies like ICE to automatically track and target people of color.)
According to an article titled "Biased Technology: The Automated Discrimination of Facial Recognition Technology" by the ACLU of Minnesota, "Technology does not exist outside of the biases and racism that are prevalent in our society. Studies show that facial recognition is least reliable for people of color, women, and nonbinary individuals. And that can be life-threatening when the technology is in the hands of law enforcement."
This excerpt tracks with the original poster's response and adds another layer of urgency by pointing to the real-world stakes.
https://www.aclu-mn.org/en/news/biased-technology-automated-discrimination-facial-recognition
Another article, from the University of Calgary, also supports this claim:
“There is this false notion that technology unlike humans is not biased. That’s not accurate,” says Christian, PhD. “Technology has been shown (to) have the capacity to replicate human bias. In some facial recognition technology, there is over 99 per cent accuracy rate in recognizing white male faces. But, unfortunately, when it comes to recognizing faces of colour, especially the faces of Black women, the technology seems to manifest its highest error rate, which is about 35 per cent.”
This quote directly reinforces the statistics given in the original claim's explanation, strengthening its validity.
https://ucalgary.ca/news/law-professor-explores-racial-bias-implications-facial-recognition-technology