AI researchers from Google, Microsoft, Facebook, and several top universities have called on e-commerce giant Amazon to halt the sale of its facial recognition technology to law enforcement.
The call came in the form of an open letter. The researchers argue that the algorithm behind Amazon’s facial recognition technology is flawed: the data shows the system is significantly less accurate on darker-skinned and female faces. They warn that putting such technology in the hands of law enforcement will lead to higher instances of racial discrimination, cases of mistaken identity, and increased surveillance of marginalized groups.
“Flawed facial analysis technologies are reinforcing human biases,” according to Morgan Klaus Scheuerman, a PhD student at the University of Colorado Boulder and one of the 26 signatories to the letter, which was published on Medium. In an email to The Verge, Scheuerman said such technology “can be appropriated for malicious intent … in ways that the companies supplying them aren’t aware of.”
Other prominent signatories include Timnit Gebru, a Google researcher who was among the first to highlight the flaws in the system, and Yoshua Bengio, a recipient of the Turing Award. The list also includes a researcher from IIT Kharagpur and, interestingly, a researcher who has worked at Amazon’s AWS subsidiary.
The researchers want an open dialogue on the use of AI in facial recognition. They are pushing for a technical framework that would carefully scrutinize how AI technologies are developed and deployed across projects, in this case, facial recognition.
Amazon’s Defence for Its Facial Recognition Tech
Amazon has fiercely defended its technology. The open letter, however, refutes the e-commerce giant’s claims one by one. A major argument put forward by Amazon is that there have been no reports of its AI facial recognition technology being misused, let alone any racially discriminatory incident. The researchers are quick to point out that no audit system is in place to detect such cases, which makes it highly irresponsible of Amazon’s executives to make such a claim.