CAMBRIDGE, Mass. (AP) – Facial recognition technology was seeping into everyday life – from your photos on Facebook to police scans of mugshots – when Joy Buolamwini noticed a serious glitch: some of the software couldn't detect dark-skinned faces like hers.
That revelation spurred the Massachusetts Institute of Technology researcher to launch a project that is having an outsized influence on the debate over how artificial intelligence should be deployed in the real world.
Her tests of software created by brand-name tech companies such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than of lighter-skinned men.
Along the way, Buolamwini has prodded Microsoft and IBM to improve their systems and irked Amazon, which publicly attacked her research methods. On Wednesday, a group of AI scholars, including a winner of computer science's top prize, launched a spirited defense of her work and urged Amazon to stop selling its facial recognition software to police.
Her work has also caught the attention of political leaders in statehouses and Congress, and led some to seek limits on the use of computer vision tools to analyze human faces.
"There needs to be a choice," said Buolamwini, a graduate student and researcher at MIT's Media Lab. "Right now, these technologies are being deployed widely without oversight, often covertly, so that by the time we wake up, it's almost too late."
Buolamwini is hardly alone in urging caution about the fast-moving adoption of facial recognition by police, government agencies and businesses from stores to apartment complexes. Many other researchers have shown how AI systems, which look for patterns in huge troves of data, will mimic the institutional biases embedded in the data they learn from. For example, if AI systems are developed using images of mostly white men, the systems will work best at recognizing white men.
Those disparities can sometimes be a matter of life or death: one recent study of the computer vision systems that enable self-driving cars to "see" the road found they have a harder time detecting pedestrians with darker skin tones.
What sets Buolamwini's work apart is her method of testing the systems created by well-known companies. She applies the systems to a skin-tone scale used by dermatologists, then names and shames those that show racial and gender bias. Buolamwini, who has also founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scientific research with activism.
"It adds to a growing body of evidence that facial recognition affects different groups differently," said Shankar Narayan of the American Civil Liberties Union of Washington state, where the group has sought restrictions on the technology. "Joy's work has been part of building that awareness."
Amazon, whose CEO Jeff Bezos she emailed directly last summer, has responded aggressively, taking aim at her research methods.
A Buolamwini-led study published just over a year ago found disparities in how facial analysis systems built by IBM, Microsoft and the Chinese company Face++ classified people by gender. Darker-skinned women were the most misclassified group, with error rates of up to 34.7%. By contrast, the maximum error rate for lighter-skinned men was less than 1%.
The study called for "urgent attention" to address the bias.
"I responded pretty much right away," said Ruchir Puri, chief scientist of IBM Research, describing an email he received from Buolamwini last year.
Since then, he said, "it's been a very fruitful relationship" that informed IBM's unveiling this year of a new database of 1 million images for better analyzing the diversity of human faces. Previous systems have been overly reliant on what Buolamwini calls "pale male" image repositories.
Microsoft, which had the lowest error rates, declined to comment. Messages left with Face++ were not immediately returned.
Months after her first study, when Buolamwini worked with University of Toronto researcher Inioluwa Deborah Raji on a follow-up test, all three companies showed major improvements.
But this time they also tested Amazon, which has sold the system it calls Rekognition to police agencies. The results, published in late January, showed Amazon's system badly misidentifying women.
"We were surprised to see that Amazon was where their competitors were a year ago," Buolamwini said.
Amazon dismissed what it called Buolamwini's "erroneous claims" and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.
"The answer to anxieties over new technology is not to run 'tests' inconsistent with how the service is designed to be used, and to amplify the test's false and misleading conclusions through the news media," Matt Wood, general manager of artificial intelligence for Amazon's cloud computing division, wrote in a January blog post. Amazon declined an interview request.
"I didn't know their reaction would be quite so hostile," Buolamwini said in an interview at her MIT lab.
Coming to her defense Wednesday was a coalition of researchers, including AI pioneer Yoshua Bengio, a recent winner of the Turing Award, considered the tech field's version of the Nobel Prize.
They criticized Amazon's response, especially its distinction between facial recognition and facial analysis. "In contrast to Dr. Wood's claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people's lives, such as law enforcement applications," they wrote.
Its few publicly known clients have defended Amazon's system. Chris Adzima, a senior information systems analyst for the Washington County Sheriff's Office in Oregon, said the agency uses Amazon's Rekognition to identify the most likely matches among its collection of roughly 350,000 mugshots. But because a human makes the final decision, "the bias of that computer system is not transferred over into any results or any action taken," Adzima said.
But regulators and legislators are increasingly having doubts. A bipartisan bill in Congress seeks limits on facial recognition. Legislatures in Washington state and Massachusetts are weighing laws of their own.
Buolamwini said a major takeaway from her research is that AI systems need to be carefully reviewed and consistently monitored if they are going to be used on the public: not just audited for accuracy, she said, but checked to ensure facial recognition isn't abused to violate privacy or cause other harms.
"We can't just leave it to companies alone to do these kinds of checks," she said.
Associated Press writer Gillian Flaccus contributed to this report from Hillsboro, Oregon.
Copyright © Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.