Could artificial intelligence violate human rights?
Every day, humans interact with some form of artificial intelligence. But could AI actually violate our human rights? Some world leaders are already sounding the alarm.
NEW YORK – Stanley Kubrick brought us the HAL 9000, an artificial intelligence character from his classic science fiction film 2001: A Space Odyssey. Steven Spielberg brought us a robotic boy programmed to love in A.I. Artificial Intelligence.
“Everything I knew about artificial intelligence sort of came through the imagination of Stanley Kubrick or Steven Spielberg,” said Shalini Kantayya, a filmmaker and director herself. That is, until Kantayya saw a TED talk from Massachusetts Institute of Technology researcher Joy Buolamwini.
“Algorithmic bias, like human bias, results in unfairness,” Buolamwini said in the video.
That video inspired Kantayya to put together Coded Bias, a documentary viewable on Netflix that explores the inherent biases in facial recognition. The film begins with a very specific and very real example from Buolamwini’s experience.
“I got computer vision software that was supposed to track my face,” Buolamwini says in the documentary. “It didn’t work that well, until I put on this white mask. When I put on this white mask — detected. I take off the white mask — not so much.”
“The computer system recognizes the white mask as a human being and doesn’t recognize Joy’s face as a human being,” Kantayya told FOX 5 NY.
The problem is that the software relies on data from the past, which is riddled with historical injustices and inequalities.
“These data sets were predominantly white, predominantly male, predominantly middle-aged, and so facial recognition technology began to be biased. And so it actually was not as accurate on young women of color for all of those reasons,” Kantayya said. “And so even sometimes in spite of the best intentions of the programmers, these injustices, these systems can replicate historic inequalities.”
There is growing consensus among those who study AI that this technology could significantly curtail human rights. Take, for example, the issue of biometric surveillance — cameras that scan the faces of everyone who walks down a public sidewalk 24 hours a day, 7 days a week.
“Imagine that we put a literal police officer at every corner — that would make it easier to catch ‘the bad guy,’ quote-unquote, right? But those police officers are humans and they are prone to error from time to time,” Harvard Law School Cyberlaw Clinic Associate Director Jessica Fjeld said. “Having them there all the time would really change the experience of walking through a street in New York City if every time you cross a corner, you’re also waving nervously at the cop.”
But the United Nations is now calling for a moratorium on that type of AI surveillance.
In a September report, U.N. High Commissioner for Human Rights Michelle Bachelet demanded “urgent action.”
“We cannot afford to continue playing catch-up regarding AI,” she wrote.
Her report urges companies as well as countries to hit the pause button on the rollout of new AI technologies until the systems are properly tested and assurances regarding human rights are provided.
“States should place moratoriums on the sale and use of artificial intelligence systems until adequate safeguards are put in place.”
And that software that wouldn’t recognize MIT researcher Buolamwini’s face until she put on the white mask?
“What is so shocking and alarming about this is that this was not a technology that was being beta-tested on a shelf somewhere,” Kantayya said. “This was technology that was actively being sold to our FBI, actively being sold to ICE or immigration officials, actively being sold and deployed by law enforcement all across the country — oftentimes in secret with no one that we elected, no one that represents ‘we the people,’ giving any kind of oversight.”
But many experts believe that oversight is coming. In April, the European Union issued a proposal called the Artificial Intelligence Act, a legal framework for the technology, according to E.U. digital affairs spokesperson Johannes Bahrke.
“Consumers need to be sure that AI is checked if it’s risky,” Bahrke said, “checked as thoroughly as any other technology and before they are confronted [with] it.”
If passed, it could set global standards for AI regulation.
But in the absence of legislation in the United States, will Silicon Valley actually heed the U.N.’s warning and go through the proper checks before new software rolls out?
There are zero guarantees. And that is why some experts believe the era of simply “asking nicely” needs to come to an end now.