The Push to Perfect Artificial Intelligence

Wednesday, December 8, 2021

Humans are working artificial intelligence (AI) programs into business, government and daily life. As with any new tool or technology, the more we are exposed to it, the more of its initial flaws we see. So we are now in the midst of a moment where AI is under the microscope, with policymakers picking apart AI contributions and demanding that AI meet high standards of performance and social consequence.

This is a healthy process. Society should always examine impactful tools and push for them to work better. However, I fear that in the drive to make AI better, the perfect may become the enemy of the good. Important AI solutions may be shunted aside because they do not meet all the social requirements placed on them, and our society will suffer without important, if imperfect, AI tools.

As frequently noted here and elsewhere, humans have not produced – and seem far from producing – general AI that can handle many and varied tasks. Instead, we are beginning to develop some excellent specific AIs – computer programs that are 1000 times better than trained medical specialists at spotting lung cancer on X-rays, or incalculably better than any human at predicting weather patterns 7-10 days out.

And the “much better than human alternatives” part is important. If our current tool performs a job efficiently and effectively at level 3, and the new tool can painlessly do the job at level 25, then why fight it? Just use the tool. We are not talking about gas-powered leaf blowers here, which do the job much better than rakes and brooms but come at the environmental cost of burning petroleum plus horrible noise pollution. We are talking about letting a computer do a job much better than people can, at likely less cost and environmental impact.

I fear that in the drive to make AI better, the perfect may become the enemy of the good.

Or we are talking about asking AI to do a job that humans are otherwise incapable of performing – code decryption, for example. Computers can devise codes that no human could decrypt but that other computers might be able to break. The real social issue for using AI, though, comes with tasks that have been performed by humans in the past but are now passing to more effective machines.

Autonomous vehicles are a perfect example. People stink at driving. They drink alcohol. They text on the road. They take shocking risks. They go crazy. They fall asleep at the wheel. People are simply untrustworthy drivers. In 2020, Americans drove 13% fewer miles due to the pandemic, yet traffic fatalities rose to 38,600 people. That number of deaths is three times the number of dead and wounded in the American Revolution.

It is widely expected that autonomous vehicles, powered by AI, will cause less than 10% of the accidents and fatalities that human drivers cause. And yet, when the press and public talk about the safety of autonomous driving, we don’t hear about the 30,000 lives AVs could have saved if they had been ruling the roads last year; we talk about the fewer than five instances where an AV has actually hurt or killed someone on US roads.

Unfortunately, much of this is human nature. We have accepted the risk of human drivers killing themselves and others – just as people in the 1940s, ’50s and ’60s accepted the horrific rate of traffic deaths caused by people driving without seatbelts. But risks associated with a new paradigm frighten us. Even though autonomous vehicles will kill far fewer people than our current set of human drivers, we will still obsess over each person harmed by a self-driving car and not be affected at all by most of the human-caused fatalities.

Risks associated with a new paradigm frighten us.

In short, we are demanding perfection of AI in this regard, when we know for a fact that whoever or whatever is controlling a thousand-pound object moving at 40 miles per hour will occasionally harm an animal – human or otherwise – in the road. We cannot modify the laws of physics, yet we seem to demand that AI do so before it is allowed on the road.
 
The Biden Administration is calling for an AI Bill of Rights, and we expect to see such a document soon. In a recent column in Wired, White House science advisors Eric Lander and Alondra Nelson run through a list of potential problems with AI as it is currently used. They point out that hiring tools that learn the features of an existing workforce could reject applicants who are dissimilar from current employees, and that AI can recommend medical support for groups that regularly access healthcare rather than for those who may need it more. In other words, like human logic, AI logic may carry biases and lead to unintended and problematic results.
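To make the hiring example concrete, here is a minimal, hypothetical sketch in Python. The data, the "skill" score and the "same school" feature are all invented for illustration; the point is only to show the mechanism Lander and Nelson describe, where a model trained on a biased hiring history learns to penalize applicants who merely differ from the current workforce.

```python
# Hypothetical sketch: a hiring model trained on a biased history
# learns to penalize applicants who differ from current employees.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented training data: past applicants. Feature 0 is genuine skill;
# feature 1 is an irrelevant trait (say, attended the same school).
# Past hiring favored the irrelevant trait, so the "hired" labels
# correlate with it.
n = 1000
skill = rng.normal(size=n)
same_school = rng.integers(0, 2, size=n)
hired = ((skill + 2 * same_school) > 1.0).astype(int)  # biased history

X = np.column_stack([skill, same_school])
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants; one differs on the irrelevant trait.
probs = model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1]
print(probs)
# The model reproduces the historical bias: the dissimilar applicant
# scores far lower despite identical skill.
```

Nothing in the model is malicious; it simply learned the pattern in the data it was given, which is exactly why the authors worry about codified unfairness.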

But Lander and Nelson take from these examples that “Powerful technologies should be required to respect our democratic values and abide by the central tenet that everyone should be treated fairly. Codifying these ideas can help ensure that.” This sounds like a call to pass laws that restrict the use of AI in business and government unless the AI perfectly meets our social expectations. As with the autonomous vehicle example above, I am concerned that this thinking requires perfection of the new paradigm, demanding that AI meet some social ideal before we can use it, even if the AI can provide results that are much fairer than human choices. AI needs to be compared to the current system, not the ideal system. Otherwise we will never leave our current sets of flaws behind.

AI needs to be compared to the current system, not the ideal system.  Otherwise we will never leave our current sets of flaws behind.

In addition, we already have laws to address exactly the kinds of flaws that Lander and Nelson find in AI. If the choices humans make end up discriminating against disempowered groups in housing, lending or employment, then that discrimination is illegal under disparate impact theories. The same would be true of AI-generated decisions. The fact that an AI could make the wrong choices is not a reason to preclude its use. We make wrong decisions, and so will AI. The current US system is organized to catch and correct some of those problems.
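For illustration, a minimal sketch of how one common disparate impact check works in practice: the EEOC's "four-fifths" guideline compares selection rates across groups, and it applies to the outcomes of a decision process whether a human or an AI made the selections. The applicant and selection numbers below are invented.

```python
# Sketch of the EEOC "four-fifths rule," a standard screen for
# disparate impact. It looks only at outcomes, so it catches biased
# decisions regardless of whether a person or a model made them.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical outcomes from any decision process, human or AI.
rate_group_a = selection_rate(selected=48, applicants=100)  # 0.48
rate_group_b = selection_rate(selected=30, applicants=100)  # 0.30

impact_ratio = rate_group_b / rate_group_a  # 0.625
print(f"impact ratio = {impact_ratio:.3f}")

# A ratio below 0.8 is generally treated as evidence of adverse
# impact, triggering legal scrutiny of the selection process.
print("adverse impact flagged" if impact_ratio < 0.8 else "ok")
```

This is the sense in which the existing system can "catch and correct" AI errors: the legal test targets discriminatory results, not the technology that produced them.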

The European Union already has a law on the books restricting significant decisions about people’s lives from being made solely by machines. This legal regime treats the use of AI as an evil in itself, even if the AI makes much better and more equitable decisions than a person would make. Are the Europeans really so afraid of new technology that they would label AI a societal evil with no regard for the actual job it performs, or for whether that performance is better for people than the old system? Apparently so. Now, as the Council of the EU works toward enacting an Artificial Intelligence Act, this clear prejudice against AI and machine learning could produce more illogical legislation.

Recent press reports claim that the US and Europe are falling further behind China in the development and implementation of AI. It is clearly important that Western democracies, unlike China, promote the use of AI for moral purposes rather than for population control. But Western governments should also avoid being too restrictive of AI, and should build AI rules that compare its value against the systems it is replacing rather than against some perfect system we aspire to.


Copyright © 2021 Womble Bond Dickinson (US) LLP All Rights Reserved.
National Law Review, Volume XI, Number 342

Source: https://www.natlawreview.com/article/how-perfect-will-ai-need-to-be