An elderly couple I know recently sold their home and became renters. They thought they had sold their house for a good price, but I am not so sure. I am pretty sure they were swindled by a smooth sales pitch from one of the companies that are buying up so many homes. They had been homeowners, paying off their mortgage and benefiting from the rising value of their home. Now they are renters, helping to pay their landlord’s mortgage and increase his net worth.
All this got me thinking about universal design and artificial intelligence (AI). Universal design is the discipline of designing products and services for as large a part of the population as possible. It arose from a desire to support the needs of people with disabilities. The idea is that rather than being confined to specially designed products, people with disabilities should be able to use mainstream products. Universal design was motivated by the generally bad experience people with disabilities had with products designed specifically for them, which were too often of inferior quality and had limited availability and capability compared to their mainstream counterparts. So the idea arose that, with only a little extra effort, mainstream products could be made usable by people with disabilities.
According to the Centre for Excellence in Universal Design (CEUD),
Universal Design is the design and composition of an environment so that it can be accessed, understood and used to the greatest extent possible by all people regardless of their age, size, ability or disability. An environment (or any building, product, or service in that environment) should be designed to meet the needs of all people who wish to use it. This is not a special requirement, for the benefit of only a minority of the population. It is a fundamental condition of good design.
Quoted from the CEUD website
What Does Universal Design Mean for AI Systems?
AI systems trigger a renewed need to address cognitive disabilities and limitations. Universal design has always included cognitive disabilities, but the emphasis has been on physical disabilities. There are several reasons for this. First, we know more about designing for physical disabilities. Generally, if every user input or output is redundant, meaning it can be given or received in multiple ways, then the design will be accessible. If someone cannot do something one way, they have the option of doing it in a different way. If they cannot type, they can speak the commands. If they cannot read the screen, a screen reader will read the screen to them. Because we know more about how to design systems to be physically accessible, that aspect of accessibility is more prominent.
Another reason accessibility focuses on physical disabilities is that many of the things done for physical disabilities also help people with cognitive disabilities. Having redundant inputs and outputs means that a person with a cognitive disability receives each message in multiple ways, increasing the chance that they will understand it. Having multiple ways to control a device allows people to find the way that works best for them. Good physically accessible design, in general, also helps with cognitive accessibility.
AI-enabled systems that are designed to get us to do things create the need to address cognitive accessibility in new ways. Especially with the emerging metaverse, these systems are designed to get as many of us as possible to buy a product, subscribe to a service, vote for a particular candidate or party, or take some other action desired by the developers of that AI. I suspect my elderly friends who sold their home were convinced to do so by an AI-enabled system. Confidence schemers and manipulators have always preyed on the more vulnerable among us, but now their efforts are assisted and amplified by AI-enabled systems. It is, and will increasingly be, possible for them to create a reality in which the only reasonable choice is to do what they want us to do. Is that the kind of future society we want to live in?
Our Inclusive Society: A Legal Overview
Our society has consistently chosen to include the largest percentage of the population possible. This can be seen in the series of laws supporting the needs of people with disabilities. The Americans with Disabilities Act (ADA) of 1990 is perhaps the most famous accessibility law. Before that there was the Hearing Aid Compatibility Act of 1988. In 1996, accessibility to telecommunications was addressed in Section 255 of the Communications Act. Following that, Section 508 of the Rehabilitation Act, as amended in 1998, made accessibility of information technology a requirement for all US federal agencies. A review of the laws promoting accessibility reveals that about every two years there is some action to secure accessibility to some part of our society. These actions have been promoted by both political parties and often receive bipartisan support. These laws represent a social consensus. Most of us want to live in a society that is as inclusive as possible.
The Ethics of AI
What are the ethics that should guide us, as a society, as AI systems are increasingly capable of manipulating us to make decisions that are not in our best interest? I have argued that we need to develop a theology of AI. As I am using that term here, theology is the basis from which our enduring ethics emerge. In my view, and I think in most people’s view, all humanity is united by common bonds. Because of those shared bonds, all people should be respected and protected. We are one another’s keepers. We have a responsibility to create a society that is equitable for all of us.
As AI systems become more sophisticated, we will need to develop our societal norms to address their new capabilities. As virtual reality enables AI systems to manipulate us with alternate realities, we will need safeguards. Perhaps the first ethical premise should be this: It is wrong to take advantage of someone with a cognitive disability. Just because we can do something doesn’t mean it is right to do it. I, for one, do not want to live in a society where elderly people are cheated out of their homes, and I expect a lot of people share that opinion. Just because a sales system can convince people to make a decision that will badly damage their future doesn’t mean such a system should be allowed to do it.
Mapping out the ethical use of AI systems will be difficult work. Even more complex will be developing the mechanisms to implement and enforce these ethical constraints. The last thing we want is AI systems that prey on the weak and vulnerable. Those who are developing these systems should be guided in their work by an ethical creed appropriate to the technology. Our goal should be to create a future in which AI systems serve us all well, including those who are weak or particularly vulnerable.