This WHO policy brief shares ways of preventing ageism and explains how to make AI technology more equitable.
Ageism is stereotyping of and discrimination against individuals or groups based on their age, such as losing a job because of one's age. It can affect confidence, job prospects, financial security, and quality of life. It also includes the way older people are represented in the media and in public life, which can shape wider public attitudes.
Ageism in AI is a new dimension of AI ethics. The WHO policy brief, Ageism in artificial intelligence for health, examines the use of artificial intelligence in medicine and public health for older people. It describes legal, non-legal, and technical measures that can be used to minimize ageism in AI and maximize AI's benefits for older people. Ageism must be tackled to ensure that nobody loses out because of their age.
The WHO policy brief shares the following suggestions for making AI technology equitable:
For AI technology to play a beneficial role, ageism must be identified and eliminated from AI's design, development, use, and evaluation. AI is a product of its algorithms, which can draw ageist conclusions if the data that feeds them is skewed toward younger individuals. Examples include telehealth services, tools used to predict illness or major health events in a patient, and data used for drug development, all of which could produce inaccurate results for older people.
1. Include older consumers in the design of AI technologies:
When developing any AI technology, make sure older people participate in focus groups and in giving product feedback. An example is Adopt, an older-adult-centered design process that considers disabled and aging populations.
2. Hire age-diverse individuals for data science teams:
Diversity in hiring doesn't happen by simply wishing for it: hire and train data scientists of all ages for your team. Older employees are more likely to recognize and identify forms of ageism in data collection or in a product's design.
3. Conduct age-inclusive data collection:
Age-inclusive data collection is crucial. When choosing demographic data to feed into AI algorithms, as with other personal identifiers such as race or gender, make sure that people of all ages are accounted for.
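One way to put this into practice is to audit how well each age group is represented before training on a dataset. The sketch below is illustrative only, not from the WHO brief; the age bands, the `min_share` threshold, and the function names are all assumptions chosen for the example.

```python
from collections import Counter

# Hypothetical age bands used to audit representation in a training set.
AGE_BANDS = [(0, 17), (18, 39), (40, 59), (60, 74), (75, 120)]

def band_label(age):
    """Map a raw age to its band label, e.g. 70 -> '60-74'."""
    for lo, hi in AGE_BANDS:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "unknown"

def audit_age_coverage(ages, min_share=0.10):
    """Return each band's share of records and flag under-represented bands."""
    counts = Counter(band_label(a) for a in ages)
    total = sum(counts.values())
    shares = {band: n / total for band, n in counts.items()}
    flagged = [band for band, share in shares.items() if share < min_share]
    return shares, flagged

# Example: a sample heavily skewed toward younger people.
ages = [25] * 70 + [45] * 20 + [70] * 8 + [82] * 2
shares, flagged = audit_age_coverage(ages)
# `flagged` lists the age bands holding less than 10% of the records,
# a signal to collect more data from those groups before training.
```

The 10% threshold is arbitrary; in practice the target shares would depend on the population the model is meant to serve.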
4. Invest in digital infrastructure and digital literacy:
Investing in digital literacy and digital infrastructure pays off in broader adoption. After a product that incorporates artificial intelligence is developed, it's important to invest in education and accessibility initiatives so that older consumers and their health care providers are more likely to benefit from the technology.
5. Give older consumers the right to consent and contest:
Technology should benefit humans, not the other way around. Make sure it is easy and clear for older people to choose whether to participate in data collection or to provide any personal information, and to contest how that information is used.
6. Work alongside governance frameworks and regulations:
The policy brief recommends that government agencies help create frameworks and procedures to prevent ageism. It also calls on private businesses to comply with existing regulations.
7. Stay up to date on the new uses of AI and how to avoid bias:
With the rapid development of new technologies, it's important to keep researching how artificial intelligence can create new and unintended biases. This includes choosing the right learning model for the problem, using a representative training data set, and monitoring performance with real-world data.
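Monitoring performance with real data can be as simple as breaking a model's accuracy down by age group and watching for gaps. The following is a minimal sketch, not a prescribed method from the brief; the group labels and sample values are invented for illustration.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Compute per-group accuracy to spot age-related performance gaps."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical predictions from a deployed health model.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["under 60", "under 60", "under 60", "under 60",
          "60 plus", "60 plus", "60 plus", "60 plus"]

acc = accuracy_by_group(y_true, y_pred, groups)
# A large accuracy gap between age groups is a signal to revisit
# the training data or the choice of model.
```

In production the same breakdown would be computed on an ongoing stream of real outcomes, since a model that looked fair at launch can drift as the population it serves changes.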
8. Create robust ethics processes:
In the development and application of AI, it’s important to formalize processes like the ones above to maintain accountability in creating equitable and inclusive products.