Discussion

This report's findings show that participants both with and without disabilities found AI tools highly useful. In many cases, frequency of AI tool use did not vary by disability status. However, in some contexts, such as VAI and AI in the workplace, participants with disabilities used AI tools for a greater variety of tasks than their nondisabled counterparts. Some disabled participants also used mainstream AI tools as assistive technology; for example, by converting images or audio into text, or by using AI as an executive functioning or writing aid.

In some instances, such as learning or therapy contexts, participants felt that the AI did not discriminate against them and was more patient or accommodating than a human supporter. In other cases, however, such as automated job tests and interviews, several participants with disabilities encountered accessibility barriers that they had to work around with other AI tools or through human support. Such systems risk excluding or disadvantaging job seekers who do not have this human support readily available. Furthermore, AI tools used for mental health support can make recommendations that are not disability-informed and can be harmful. Though participants praised AI for increasing efficiency and independence, they still relied on humans to double-check AI's interpretations; for example, BLV participants asked sighted people to verify image descriptions. Finally, among AI's greatest challenges are trust and privacy: though participants with disabilities valued the independence AI provided, participants both with and without disabilities expressed concerns about data breaches and the use of their personal information.

Artificial intelligence has the potential to increase efficiency and independence for people both with and without disabilities, but it needs further development to mitigate some of its greatest harms. Developers of AI tools should respect their users' boundaries, wants, and needs, and should include diverse populations in the testing and evaluation of new technologies so that marginalized groups are not left behind or harmed.