Episode Notes

Welcome back to another episode of AccessWorld, a podcast on digital inclusion and accessibility. In this episode, we welcome AFB's chief public policy and research officer Stephanie Enyart and director of research Dr. Arielle Silverman, who share the latest study from AFB's Public Policy and Research Institute on the pros and cons of AI for people with disabilities. The study drew consensus from experts spanning industry, government, and advocacy, who mapped out the key areas worthy of focus in this rapidly expanding technology. You can access the study at www.afb.org/AIResearch.

About AccessWorld

AccessWorld is a monthly podcast from the American Foundation for the Blind. Aaron Preece is editor-in-chief of AccessWorld and Tony Stephens leads communications for AFB. Together, they take a deep dive into all things digital inclusion and accessibility. It's a companion to the quarterly e-zine AccessWorld, now in its 25th year of publication. To access the latest issue, or to browse all of our back-issues for free, visit www.afb.org/aw.

To learn more about the American Foundation for the Blind or to support our work, visit us online at www.afb.org.

Produced and edited by Tony Stephens at the Pickle Factory in Baltimore, Maryland. Digital media support by Kelly Gasque and Breanna Kerr. Theme music is by CauseMonkey, compliments of ArtList.io.


AccessWorld Podcast, Episode 17 Transcript

Intro (00:00):

AFB. You are listening to AccessWorld, a podcast on digital inclusion and accessibility. AccessWorld is a production of the American Foundation for the Blind. Learn more at www.afb.org/aw.

Aaron Preece (00:29):

Hello everyone. Welcome to the 17th episode of the AccessWorld podcast. I'm here with my co-host Tony Stephens. Hey Tony.

Tony Stephens (00:39):

Hey Aaron.

Aaron Preece (00:41):

And we are also joined today by Arielle Silverman and Stephanie Enyart from our Public Policy and Research Institute to talk about some very recent research we released about artificial intelligence and its impact on people with disabilities, specifically people who are blind or have low vision.

Tony Stephens (00:59):

That's right, Aaron. We were really excited that both Stephanie and Arielle could join us. Arielle leads the research for the American Foundation for the Blind, and Stephanie is our chief public policy and research officer here at AFB. They have been working hard over the past year on this research project, and it was so exciting. We had a webinar back in February and we had a significant turnout for that, with a cross section of people from academics, from industry, and from advocacy all joining in. Stephanie and Arielle, welcome first off to the podcast. But I think, is it fair to say that this is one of the first major looks into AI within the disability community?

Stephanie Enyart (01:40):

Well, it is from our perspective. I mean, we understood that we wanted to move into this space of looking at AI and the experiences that people have using it, but we embarked on the kind of study that we did, and I know Arielle's going to give a little bit more of a description of what that means. We started this kind of study because we really needed to get our grounding in this space. We do literature reviews before we embark on any kind of research, but beyond that, in such a vast field, we really wanted to get a sense of what experts who work within the domains that you listed off, people who are creating AI, people who are studying it in an academic setting, people who are looking at it from a policy lens, could find consensus around as areas that would warrant further investigation.

(02:40):

And so that's the type of study that we embarked on, because we really wanted to set the table to dive into areas that made a lot of sense to unpack further in our future research, and we wanted the expert consensus that helps us home in and drive towards those areas that we want to deepen in our research work at AFB. So that's why we did this kind of study, which is a very different kind of study. Most of our studies are really centered in that lived experience element, but before doing those kinds of studies, we wanted to do something that could really ground us.

Tony Stephens (03:21):

So what I'm hearing then, it sounds like this is kind of a study to start studying the issue. I mean, we had to figure out what are the causes of concern, what are the areas that are positive, what are the areas that are risks? And it's exciting that you all really dove into this in this study. Share with us the title, but then too, this unique approach, I would imagine, required a unique strategy and methodology, since there's not a lot out there that exists. Tell us what the study is at the top, and then how it was set up and what makes it unique.

Arielle Silverman (03:58):

So we chose a method for this study called the Delphi method, and the Delphi method is a really unique way of getting consensus predictions from experts about what might happen in the future. It was actually created initially by economists, and you can think of a Delphi study as kind of like an asynchronous focus group where you're getting feedback from people multiple times, but they're not interacting with each other. They are learning about what the other group members think in a systematic way, but they're not actually sitting together or sitting in a meeting together online talking to each other. And so that helps us get rid of a lot of sources of bias that can come up in regular focus groups. People aren't influenced by knowing what a particularly well-known or influential expert had to say. All they know is what the pool in general thought about an issue, and the reasons that people gave for feeling the way that they did about an issue, but they don't know who said what.

(05:06):

The way that we did this, which is a little unconventional, is that we started off with interviews. We interviewed 32 experts, including 13 who worked in industry, either developing AI or working for companies that deploy AI. Then we had some who were academics, I think there were nine, and then there were several who were policy experts at disability advocacy nonprofit organizations and other kinds of policy analysis organizations. And then we had three who were connected with the federal government. We interviewed them and we asked them, where do you think AI is going, and how do you think AI is going to impact people with disabilities? They talked about things that are happening right now, and they also talked about what they think is going to happen, or going to continue or intensify, in the next five to 10 years, and how that's going to impact people with disabilities.

(06:05):

Then we took those interviews, we looked at our notes from the interviews and we extracted 72 opinion statements. So these were opinions that at least one person in the group had expressed that seemed interesting to probe for further investigation. We put those opinions into a survey and we asked all the experts to go back and fill out a survey and we asked them, how much do you agree or disagree with these opinions and why? Then we tallied up the level of agreement between the experts on their endorsement of the opinions and some of the opinions we put back to the group a third time, and we said, the average agreement score for this opinion was 5.6. What do you think? Do you agree with this or not and why? And after that third round, we were able to extract 32 opinions where the entire expert panel agreed that these opinions were true, and they felt that those opinions were true to a similar extent. And so this is what we considered our consensus findings that we're going to be talking about next.

Tony Stephens (07:10):

So what were some of the key findings then you were able to uncover that they were able to garner consensus around?

Arielle Silverman (07:16):

Yeah, so there were a few areas of really clear consensus. One area was concern about the accessibility of mainstream AI. So for example, experts agreed that some of the AI-based educational technology that's going to be coming to classrooms, or maybe that is coming to classrooms already, is not going to be accessible to everybody. It's going to continue the trend that we've already seen of educational technology excluding people with disabilities in different ways. The experts also agreed that there are concerns about AI bias, particularly in the employment area, that employment screenings where AI might be reviewing resumes could discriminate against people with disabilities, and perhaps already are doing so. There's also concern about bias and discrimination in healthcare, because people with disabilities often need unusual healthcare, and so AI-based systems might be more likely to deny care. If AI is being used by an insurance company, for example, it might decide that people with disabilities shouldn't get certain types of care merely because they are unusual, or because they're not part of the training data set that the AI uses.

(08:40):

Another area where we had a lot of consensus was around the need for human oversight and transparency when AI is being used. So two of the statements that got the highest consensus scores had to do with AI that's being used, again, in employment screenings. They agreed that a human in the loop is necessary for candidate screening, so humans should review decisions made by AI. And they also agreed that when AI is being used to screen applications, all the job applicants should be notified that AI is being used. So there should be that transparency. They also agreed that if AI is being used by teachers to supplement education, AI could be a great partner or supplement, but it cannot replace humans in the act of teaching. Finally, there were a lot of recommendations that the experts agreed with, and we'll go into some of our recommendations and our principles in a little bit, but there were a lot of recommendations related to ensuring that people with disabilities are involved in all aspects of AI development, not just brought in at the end to be testers, but actually involved in the entire process, and that people with disabilities are employed and better represented in the tech sector generally.

(10:01):

So it's important to make STEM curricula accessible as we'll discuss, and that regulations need to be proactive, so they need to be created ahead of time before something bad happens with ai, and they need to very specifically protect the rights of people with disabilities and not just generally the rights of minorities, but very specifically talk about people with disabilities.

Aaron Preece (10:26):

So when it comes to accessibility concerns, it seems like some of these are going to be AI specific and not necessarily based on the actual usability of the interfaces or things like that. It seems like that's where the issues arise: in the output of the AI or the decisions made by the AI.

Arielle Silverman (10:53):

So I think the accessibility concerns, I mean, we asked the experts to focus on the AI part, but they could bring up the interfaces more generally too. If AI is being relied on, as we've seen with overlays, if AI is being relied on to try to remediate an inaccessible system, that is not necessarily going to work. So it could be the interfaces too. And then I think for the decision-making, what we're concerned about there is bias in terms of how the algorithm responds to behaviors or profiles that are unusual. So for example, if a disabled person is applying for a job and they had to take maybe two years off from work to adjust to their disability, go get training, whatever, a human might see that and understand that the resume gap isn't necessarily a weakness of the individual, but AI might just see, oh, they had a gap in their resume, they're not going to be as qualified. That's an example.

Aaron Preece (12:01):

And so with this, you want to focus on the training data. I'm thinking, because I know at least with generative AI now, if you let it know the potential irregularities in what it's going to be asked, so I'm thinking of image or object recognition AIs, or the Gemini screen share AI, things like that, you can, at least in my experience, mitigate a lot of the issues with prompt engineering and making sure it knows what outliers to expect in the data. But it seems like for these sorts of systems there's going to be a difference in how they are run, and whether there are even prompts to begin with. So it seems like it's going to be necessary to be thoughtful about the data presented to the AI during training. And this might be beyond the scope of the research, but is there enough of that kind of data in the general data set, or would it have to be overemphasized to make sure the AI is aware of it, because these are lower-incidence cases, outliers by definition?

Arielle Silverman (13:16):

So I don't know if our study really probed that question in detail, but I think because we were seeing this kind of bias, it's reasonable to expect that people with disabilities are underrepresented in these data sets. And so bringing in more people with disabilities from various racial, gender, ethnic, and socioeconomic backgrounds is really important. I mean, we also see that when AI, for example, is instructed to draw an image of a blind person, it might draw a person with a blindfold, which is not really how a blind person looks, or someone who's using a cane and a guide dog at the same time. So it's drawing images of blind people, and I'm sure people with other disabilities too, that are not accurate a lot of the time.

Tony Stephens (14:10):

Well, I think your recommendations kind of lead us towards the directions to address this. I'm sure this is in the back of the minds of some of the people you spoke to. And while it wasn't something that percolated up through the Delphi study, it definitely is sort of an actor on the sidelines, with these recommendations about making sure that there's more diversity, that people with disabilities are more visibly represented both in the datasets and also among the practitioners. And so I thought those were really good recommendations to drive home the idea that, look, we are an outlier, and we need greater representation to make sure there are corrections to some of these models that could otherwise be set up and do damage. With all that's going on in the world in terms of AI, I'm trying not to feel like science fiction is going to take over and control us like robots from some science fiction film.

(15:14):

I'm trying to be positive. And one of the things I also liked a lot in the report was that, yes, it surfaces the risks that are involved here, and you shared a lot of those around employment and other areas, but what opportunities, and Stephanie, I want to bring you in as well. I mean, it's easy for us to think in the blindness space, but cross-disability as well, what opportunities are on the horizon? I love that the report also talks about the need to focus not just on the risks, but on how the world can become more inclusive and accessible.

Stephanie Enyart (15:49):

Absolutely. I mean, I think AI has tremendous potential and promise for people with disabilities. It certainly has the potential to make accessibility tools and assistive technology more widely available. And so I'll give just a couple of examples. It could be image and video descriptions, which is of course something the blind community knows a lot about, but also automated captions, writing assistance and note taking, and I'm sure many people have seen a proliferation of AI note takers, automated vehicles, which could be a game changer for so many people with disabilities who are non-drivers, and wayfinding tools. So those are just some of the areas where the rapid development and deployment of AI, and its embedding into many other kinds of technology development cycles, is going to really help accelerate and make some of the things that we need in the disability community far more possible, and probably at a faster pace than if AI were not an element in making it possible.

(17:07):

I also want to just note that there is a lot of potential where there's supply and demand problems. So for example, in areas where people are using object identification through visual interpreting services, and we have to rely heavily on the fact that there are many, many humans needed to be able to do this kind of identification and description or something like that where we're using a service that is heavy on people, there is a future where humans could still be taking the most complex or unique cases, but we could have AI running things that are far more routine or simple or maybe even lower risk. And so that kind of splitting of the labor force a little bit can disrupt the supply demand problems that we see that kind of trip up some assistive AI. So that's something else that's on the horizon is that AI is a role player and that's just a starting place. There's so many different directions we could go when talking about the potential of AI.

Tony Stephens (18:20):

So hearing that enthusiasm and excitement about a lot of these, I mean, we don't have flying cars, but I'll take an autonomous car when we think back to what the future would hold for us. Is that energy and enthusiasm what you felt, Arielle and Stephanie, when you were talking with these experts in the field? Was there more excitement, or a balance of excitement versus the doom and gloom kind of thing?

Arielle Silverman (18:48):

There was definitely a lot of excitement, but I would say, not surprisingly, it was higher in some sectors than in others. There was definitely more optimism among the people working in industry, or the academic researchers who were affiliated directly with industry or with creating AI. And then there were more reservations amongst the policy analysts. So we had some opinion statements about the benefits of AI that were voiced by some individuals on the panel but did not reach consensus, which is kind of interesting, especially in the workplace. So there were some folks who believed that AI could help level the playing field in the workplace and enable people with disabilities to be more productive, to have output that's equal to that of people without disabilities, or even maybe to help people stay in the workforce longer if they acquire a disability, by being able to compensate with AI.

(19:53):

But those statements did not reach consensus, because I think there were others on the panel who were more skeptical about those benefits, and there were a couple who wrote in comments about how AI cannot mitigate all of the barriers in the workplace. Obviously, AI cannot build a ramp so that someone can physically enter a building. If the building is inaccessible, you can have the greatest AI in the world and they're not going to be able to get in with their wheelchair. And attitudinal barriers are another big one that maybe AI can help with indirectly a little bit, but it's not going to compensate for negative attitudes that employers or colleagues might have about people with disabilities. So we definitely want to contextualize AI and its possible benefits within the broader system-level problem of workplace discrimination and inaccessibility.

Tony Stephens (20:56):

Well, thanks so much for sharing that, Arielle. I mean, looking ahead, Stephanie, where does it look like we'll be going from here? What steps are next for the research on AI?

Stephanie Enyart (21:07):

Well, as I mentioned at the beginning, the purpose of doing this kind of study is really to create an educated jumping-off place for deepening research that could be meaningful and impactful. So we are in the middle of shaping our second study, which is going to center on the lived experiences of people with disabilities, and we will always have a very detailed focus on the blindness element of anything. So we will have a standalone blindness-specific report that people can look at, but we will also be studying the breadth of experiences across the disability fold, and we will even have a non-disabled control group. So we'll hopefully be able to get a richer contrast of these sorts of experiences using AI. And because of the research that we've done so far, some of the areas that we will focus on are probably not surprising: education, healthcare, employment, transportation.

(22:09):

And then we have a category that's assistive AI, which is broad because there's a lot of different kinds of assistive AI as well. So we hope to shape this study with the assistance of many in the disability community. We're working with an advisory panel of several leaders in different parts of the community to help shape meaningful questions. And then we'll use this same group to also help us recruit across the disability fold and of course in the blindness fold. So if you're interested, please stay tuned. We want you to participate and engage and share your experiences through this study and hopefully have a richer understanding of the real use and experiences that people have with AI that could potentially lead to some really meaningful change.

Tony Stephens (23:03):

Awesome. Thanks so much, Stephanie, and thanks, Arielle, as well for the work you all have been doing on this project. For folks who want to check it out on our website, they can go to afb.org and go to our public policy and research page. And the title of the project, correct me, it's Empowering and Excluding, is that right?

Arielle Silverman (23:24):

It's Empowering or Excluding, and you can get directly to it by going to afb.org/AIresearch.

Tony Stephens (23:32):

And that's all one word, AIresearch, at afb.org/AIresearch. As for things coming up in the future here on the AccessWorld podcast, we will be tracking some of this stuff in real time. Looking forward, some future guests we'll have will be dipping their toes in the AI space as well. We had the most recent issue of AccessWorld drop at the end of February. Aaron, what was in this month's issue?

Aaron Preece (23:56):

This month, we focused on low vision and people new to vision loss. And so we have a review of a very cool iPhone, or smartphone, case that can turn your phone into a video magnifier with an accompanying app. We looked at the Micro Speak, which is a good device for people who are brand new to vision loss and still learning their technology, but need ways to take information down. And speaking of AI, I covered a music generation app called Suno, which uses a web app kind of interface. So I did an accessibility review of that, focusing not really on the AI itself, but on the usability of the interface. A couple of others: one on low vision and video games, and then we did an employment journeys piece.

Tony Stephens (24:48):

Cool. And folks can check that out by going to afb.org/aw, for AccessWorld, along with 25 years of back issues, all online, as we celebrate the 25th anniversary of AccessWorld. Be sure to like and subscribe to this podcast. And Stephanie and Arielle, thank you so much. Any final thoughts and words of wisdom as we venture into this great new horizon of a world that hopefully will not be robots taking us over?

Arielle Silverman (25:17):

Oh, we appreciate the opportunity to share our findings and stay tuned for the next stage.

Tony Stephens (25:25):

In all seriousness, the science fiction metaphors are growing stale. I know there is so much out there about AI taking charge of our lives, and thank you so much for your team's research and for really unpacking that, going beyond just the stereotypes of what we think about AI to just how many different ways it can touch our lives. So hats off to you, and Stephanie as well, and to your team for an outstanding job on this research, and we can't wait to bring folks back for phase two of this research study. So in the meantime, thanks, everybody. Like and subscribe, visit us online at afb.org, and we will talk to you again next month.

Outro (26:57):

You've been listening to AccessWorld, a podcast on digital inclusion and accessibility. AccessWorld is a production of the American Foundation for the Blind, produced at the Pickle Factory in Baltimore, Maryland. Our theme music is by CauseMonkey, compliments of ArtList.io. To email our hosts Aaron and Tony, write to communications@afb.org. To learn more about the American Foundation for the Blind, or to help support our work, go to www.afb.org.

AFB.