Artificial Intelligence (AI) is being used to violate the privacy of some incarcerated people, a recent study concludes.
The AI research is financed by an $81 billion congressional spending bill, Reuters reported Aug. 9.
Researchers at Stanford and Georgetown universities demonstrated a particularly high error rate when AI technology transcribes Black voices. The Sentencing Project estimates that Black men are six times more likely to be behind bars than White men in American jails and prisons.
“This Congress should be outlawing racist policing tech … It shouldn’t be funding it,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP).
“I think the idea that a machine can hear and understand what a person is saying, and that becomes some kind of tool in court, is ridiculous,” said Bianca Tylek, founder of the nonprofit social justice organization Worth Rises.
“Speech-to-text technology is not in a place where it can be used to make these kinds of criminal justice decisions,” said Allison Koenecke, the lead author of the Stanford and Georgetown study.
Koenecke and her research team ascertained that Amazon’s AI speech recognition programs had twice the rate of error for Black speakers as for White speakers.
In Oxford, Alabama, however, Chief of Police Bill Partridge said local forces have solved cold case homicides when prisoners were flagged talking about “actually committing the murder.”
Partridge’s department is one of several law enforcement agencies that use LEO Technologies software, which relies on Amazon Web Services (AWS) language processing to monitor inmate calls for near real-time analysis.
Partridge said the AI technology is also helpful in preventing suicides. “I think if the federal government starts using it, they are going to prevent a lot of inmate deaths.”
The chief executive officer of LEO Technologies, Scott Kernan, is a former Secretary of the California Department of Corrections and Rehabilitation. He said AI “is saving lives both inside and outside of the correctional environments we monitor.”
Kernan added, “Because we listen to all communications, we do not target a race, gender or protected group.”
An AWS spokesperson said the Amazon Transcribe service is “highly accurate,” but recognized that heavy accents and poor audio can lead to variations in individual words.
In 2020, STOP examined Securus, a platform currently using AI voice recognition in New York state. Cahn said the system has the potential to “automate racial profiling.”
Cahn said the software violates the privacy rights of prisoners and their families. People in the criminal justice system “are always turned into the subjects of experimentation for new technology systems,” he stated.
University of Washington computer scientist Kentrell Owens stressed how important proper oversight is for AI systems.
“Before you implement tech that can control people’s freedom, you need an independent assessment and audit of the tool to determine if the tech even helps you achieve your goals,” said Owens.
Heather Boland, the fiancée of an incarcerated man in Texas, calls him three times daily.
“We are never able to communicate without being under surveillance,” she told Reuters.
“We are supposed to be free people; we are not incarcerated,” Boland said. “But it feels like my rights are constantly being violated.”