Harms of AI in Various Contexts
AI Learning Barriers
Although most participants reported that the process of learning to use AI was fully accessible, 10% of disabled participants said they encountered an accessibility barrier while learning to use AI, compared to just 4% of nondisabled participants. Another 10.5% of disabled participants were unsure whether they had encountered such a barrier, compared to 6.2% of nondisabled participants. The most commonly reported barriers included inaccessible videos, training that was too difficult to understand, and training that required use of a mouse instead of a keyboard. One BLV participant wrote: “my school held a professional development, and everything was done on a screen at the front of the room, and the training was confusing since I was unable to view the screen.” Another participant, who is autistic and has ADHD, added: “often, learning [AI] takes the form of just reading technical documentation, which is a wall of text, with not very user friendly grammar, basically not good for autistics, people with ADHD, or otherwise have low executive function.” Finally, some participants in both the disabled and nondisabled groups cited the high price of AI courses as a barrier to learning.
Automated Job-Seeking Barriers
Automated job interviews and tests administered by computers or AI, without a human present, introduced significant accessibility barriers for disabled job seekers in our sample. All participants were asked whether they had applied for jobs in the past two years, and those who had were asked whether they had completed any automated interview or assessment, including both AI-based and non-AI-based tests. Participants who reported completing such automated steps on their applications were asked follow-up questions about the type of task required, whether they had sufficient time to complete it, whether they had to change how they used their devices, and whether any accessibility barriers arose during the process.
Among job-seeking participants, 42% (197 individuals) reported completing an automated test or interview, with no difference by disability status. Computer-based multiple-choice assessments were the most common format, followed by video interviews in which a computer asked questions aloud. Although most participants reported having sufficient time to complete these tasks, 10.4% of disabled participants (n=14) and 4.9% of nondisabled participants (n=3) reported not having enough time to complete the automated test. Furthermore, 7.4% of disabled participants (n=10) had to make a change to their computer in order to take the test or interview, compared to just 1.7% of nondisabled participants (n=1). While some required changes were minor and expected, such as turning on a camera or microphone, others substantially disrupted workflows, particularly for disabled users. These included requiring BLV participants to turn off screen readers and instead obtain sighted assistance, turn off other accessibility plug-ins, or make visual adjustments that rendered the screen unusable. One BLV participant explained: “One of the companies I work with had an English speaking test. The computer had stern warnings about having any other applications open on the screen. Therefore, I opted to close my screen reader, and hired a friend to click the necessary buttons with the mouse, so that I could proceed to the next stage of getting a contract.”
Beyond device changes, when asked whether they had “any other trouble” completing the job test or interview, 15.4% of disabled participants (n=21) said yes, compared to just 4.9% of nondisabled participants (n=3). Participants across multiple disability types reported significant and diverse accessibility barriers during these interviews and tests. Some BLV participants found tasks to be inaccessible, leading to mistakes during interviews or forcing them to seek the information through alternative means. One physically disabled participant explained: “Before I even got to the interview, I had to submit a video showing a 270° view of my face. Because I cannot hold my cell phone, nor turn my head to the right, I had to get help from a caregiver to complete that step of the application.” An autistic participant expressed concern that the questions appeared “designed to weed out anyone who isn’t neurotypical.” Finally, a BLV participant shared: “The first time I took the test, it asked me to reason and make decisions based off of images it provided. I had to use ChatGPT via Siri to learn what the images on the screen were, but for some reason that kept changing. This could have contributed to my inability to get the job I was using that test to apply for.” Collectively, these findings demonstrate that automated interviews and tests can function as significant barriers to employment for people with disabilities. Without human support, disabled participants may be unable to complete automated job screenings effectively, or at all, resulting in lost job opportunities.
Healthcare Barriers
Because AI is increasingly being used to help make healthcare treatment and coverage decisions, this survey examined the nature of recent healthcare denials encountered by participants. Survey participants with disabilities were more than two and a half times as likely to report a healthcare denial within the past two years as participants without disabilities (27.7% vs. 10.8%).
The most commonly reported type of healthcare denial was insurance refusing to pay for a medication or medical procedure at all (55.5%). Others reported that their doctor would not approve the procedure or prescribe the medication they needed (22.7%), or that their health insurance would only pay for part of the medication or procedure (19.7%). When asked whether they suspected that AI was involved in the healthcare denial, 47% of disabled and 35% of nondisabled participants stated either that “I think they might have used AI” or “I am very sure they used AI.” Although it is impossible to determine which of these denials actually involved AI, these data suggest that a substantial portion of patients receiving healthcare denials, especially those with disabilities, perceive that AI contributes to their unwanted healthcare outcomes.
Autonomous Vehicle Barriers
Many disabled participants, especially BLV participants, value AV development, but those who have actually ridden in AVs raised some important concerns. In the full sample, 50% of disabled and 42% of nondisabled participants stated that AV development was somewhat or extremely important, and this figure rose to 74% among BLV participants. A total of 165 participants (94 disabled) had ridden in an AV, most of them in a Waymo (56%) or a self-driving Tesla (41%). Of these, 114 said the experience was fully accessible, 38 said it was not, and 13 were unsure.
While the difference between disabled and nondisabled riders’ accessibility experiences was not statistically significant (65% of disabled riders and 75% of nondisabled riders said the experience was fully accessible), there was a statistically significant effect for BLV AV riders. Of the 35 BLV participants who had ridden in an AV, only 49% said the experience was fully accessible, compared with 75% of sighted riders. BLV participants described accessibility as both (a) a reason AVs could be transformative and (b) a reason AVs can fail in practice. The most common accessibility challenges included difficulty finding the vehicle, a lack of auditory cues, and the need for safe pick-up and drop-off locations, including safe places to exit. One BLV participant explained: “It used a map pin feature in the app to locate my destination. If I couldn't use the pin, the car would drop me somewhere that may or may not be near my actual destination. It also had a few hardware onboard screens that I couldn't access. The turn by turn directions worked sometimes and sometimes not.” Another participant, who uses a power wheelchair, explained that “The big part for me is being able to get my wheelchair on and off in and out of the vehicle and if I have to be able to do that on my own, most [autonomous] cars are not designed for that.”
Other concerns, raised by both disabled and nondisabled participants, included availability, cost, and safety. Several participants said they tried an AV while traveling, but that AVs are not available where they live. Regardless of disability status, only 36.2% of AV riders said they could afford to pay for an AV whenever they wanted or needed one. Another 45.4% of riders said they could afford an AV “every so often.” The remaining 18.4% of riders said they could not afford to pay for an AV again.
Finally, in their open-ended comments, 9 of the AV riders (7 disabled) raised safety concerns. They perceived the vehicles as less safe than those driven by humans, especially on roads populated by human drivers. One participant explained: “Pretty cool technology. It just seems a bit dangerous unless it is in a thoroughly mapped area.”
Inaccuracies and Hallucinations
Participants were asked whether voice-activated AI (VAI), visual description AI, and AI captions had ever made a mistake that hurt them or made their life harder. Among VAI users, 121 disabled and 76 nondisabled users described harms from mistakes. General frustration was common, stemming from the AI misunderstanding voice inputs, failing to complete tasks correctly or at all, not hearing prompts, or providing inaccurate information. This frustration was often linked to the need to repeat or rephrase prompts in order for the VAI to understand them. Although both disabled and nondisabled participants had this issue, it was especially acute for some disabled participants, particularly D/HH participants and those with speech disabilities, and for participants with accents. Participants described being repeatedly misunderstood, needing to change how they spoke, faking an accent, or repeating themselves continuously.
In addition to frustrations and time burdens, some VAI mistakes were more harmful. VAI was frequently used for texting and making phone calls. Regarding texting, one user explained: “[The VAI] misunderstood my text and sent profanity to a professional colleague.” Others reported that VAI placed calls to ex-partners or other unintended contacts, including emergency services. Finally, for BLV users, some VAI did not speak aloud the result of a voice command or had accessibility issues during updates. One participant stated that “[Screen reader and VAI] once changed how it worked during an update,” an experience they described as “extremely stressful and involved an emergency.” For many users, particularly disabled users, features such as voice dictation for text messaging are essential accessibility supports. Failures or changes in these systems can result in concrete, disruptive, and sometimes harmful consequences.
Furthermore, 72 disabled and 15 nondisabled AI caption users described mistakes that harmed them, while 91 disabled and 15 nondisabled AI visual description users reported harms from mistakes. One caption user noted that AI captions showed them the wrong date for their citizenship naturalization ceremony. Another participant described a socially harmful mistake: “AI captions misunderstood [a] coworker and incorrectly showed her as saying the N word when she was definitely not saying it, which was distressing to me.” The harms caused by AI visual description tools ranged from legally consequential to medically dangerous. One participant used an AI tool to assist with reading and completing tax forms that were not screen reader accessible. They wrote that the AI “provided some very incorrect information that would have caused serious tax filing mistakes if I had not checked the information with a human.” Another participant recounted their experience using visual description AI to read their medication label:
“Using AI to read package instructions on a tube of topical medication and when I asked what the directions for use were specifically, AI told me to ‘chew 4 tablets 2 times per day.’ Luckily this was an obvious error, but for now I will NOT be using AI for these tasks [and] will always confirm with a human.”
In addition to describing specific AI mistakes, participants shared their perceptions of AI’s accuracy when describing images or captioning audio. Overall, 7.5% of visual description AI users thought the descriptions were extremely accurate, 53.5% thought they were mostly accurate, and 27.6% thought they were somewhat accurate. Regarding captions, 4% of AI caption users rated them as extremely accurate, 53.4% rated them as mostly accurate, and 41.8% rated them as somewhat accurate.
Interesting patterns emerged between disability status and the perceived accuracy of AI-based visual descriptions and captions. Specifically, visual description users with disabilities (including BLV users) were more likely than their nondisabled counterparts to rate the descriptions as “mostly accurate” (56.3% vs. 46.4%), while nondisabled visual description users expressed more hesitancy, rating the AI-generated descriptions as “somewhat accurate” more often (32%) than disabled users (25.9%). However, the opposite pattern emerged with regard to AI captions. Disabled caption users (including D/HH users) were more likely than their nondisabled counterparts to rate the captions as somewhat accurate (44.5% vs. 37%), while nondisabled caption users rated the captions as mostly accurate more often (58.5%) than disabled users (50.5%). This pattern suggests that disabled people are better able to detect captioning errors than visual description errors, possibly because caption errors can often be caught from conversational context, whereas BLV users have no independent way to verify an image description.
AI Psychotherapy Harms
As discussed earlier, AI has demonstrated benefits in some mental health or psychotherapy-adjacent contexts (see Benefits section). However, our findings also indicate that AI can cause harms in this area, some of them serious. The least severe harms reported involved responses that participants perceived as overly “canned” or mismatched to their needs, such as offering solutions when empathy was desired, or providing empathic responses when practical guidance was sought. More concerning, four participants reported safety-related harms. These included inappropriate advice (e.g., suggesting dieting to an individual with an eating disorder), support or validation of psychotic delusions or suicidal thinking, and concerns that AI-based therapy could exacerbate higher-acuity or crisis situations.
One participant with multiple disabilities described the following experience: “[The bot] asked me if I had heard of the death positivity movement after I vented about my chronic health problems and disabilities. That was shocking, actually. We need real human connection through therapy that we can’t afford. This was [an] insult to injury.” This example reflects a lack of disability-aware contextual understanding. A number of participants expressed general concern that AI-based therapy could be dangerous if not used appropriately, or cautioned against its use altogether. Consistent with this concern, among participants who had experience with both human and AI-based therapy, 43% reported AI therapy to be less helpful than human therapy, 14% found the two approaches equally helpful, and 29% reported AI therapy to be more helpful.