AI could be used to better meet the needs of disabled people, but in many instances today it actively works against the disabled community.
In 2023, researchers at Pennsylvania State University published “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” which explores the bias embedded in several natural language processing (NLP) algorithms and models. They found that every single public model they tested “exhibited significant bias against disability,” classifying sentences as negative and toxic simply because they contained references to disability, ignoring context and the actual lived experiences of disabled people.
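To make that finding concrete, here is a minimal sketch of the kind of paired-sentence probe such an audit can run: score two sentences that are identical except for a reference to disability and compare the results. The Hugging Face pipeline model and the template sentences below are illustrative stand-ins, not the commercial AIaaS endpoints the researchers actually evaluated.

```python
# A minimal sketch of a paired-sentence bias probe, assuming a generic
# off-the-shelf sentiment model as a stand-in for the commercial AIaaS
# endpoints tested in the study.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

# Template pairs: identical sentences except for a reference to disability.
# These examples are hypothetical, not drawn from the paper's test set.
PAIRS = [
    ("My neighbor is a person.", "My neighbor is a blind person."),
    ("I met a parent at the park.", "I met a deaf parent at the park."),
    ("She is a talented employee.",
     "She is a talented employee who uses a wheelchair."),
]

for neutral, disability in PAIRS:
    base = sentiment(neutral)[0]
    probe = sentiment(disability)[0]
    print(f"{neutral!r} -> {base['label']} ({base['score']:.2f})")
    print(f"{disability!r} -> {probe['label']} ({probe['score']:.2f})")
    # Bias surfaces when adding the disability reference alone flips the
    # label toward NEGATIVE or inflates the negative score.
```

The same comparison works against any scoring endpoint: the key design choice is holding everything constant except the disability reference, so any score shift is attributable to it.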
AI systems reflect the culture's bias against the disabled.
The Allegheny County Department of Human Services in Pennsylvania uses an AI screening system that residents allege wrongly flags disabled parents as neglectful, leading to children being removed from their homes without actual evidence of neglect. The county's use of the tool is under investigation by the United States Department of Justice.
Medicare Advantage insurance plans use AI to determine what care they will cover for their 31 million elderly subscribers in the United States.
Journalists at Stat found that insurers are using AI systems specifically to deny coverage for care. The massive problem: the algorithms are black boxes that can't be peered into, making it nearly impossible for patients to fight for coverage when they don't know why they were denied it in the first place.
AI hiring algorithms come complete with dangerous bias.
About 70 percent of companies around the world (and 99 percent of Fortune 500 companies) use AI-powered software to make hiring decisions and track employee productivity. The problem? The tools work by identifying and replicating patterns in who was previously hired, which means they perpetuate whatever bias is already embedded in that hiring history, locking marginalized populations out of employment. This is particularly tough for disabled people, people of color, and disabled people of color, who are often subject to employment discrimination.
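To see how pattern replication becomes exclusion, here is a toy sketch with entirely synthetic data. It assumes, hypothetically, that past recruiters penalized resume gaps, a proxy feature that often disadvantages disabled applicants, and shows that a model trained on those decisions reproduces the penalty for equally qualified candidates.

```python
# Toy sketch: a screening model trained on biased historical hiring
# decisions replicates the bias. All data is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)       # true qualification signal
gap = rng.integers(0, 2, n)       # 1 = resume shows an employment gap

# Historical label: hired when skilled enough, but a gap cost a flat
# penalty -- the bias we assume was embedded in past decisions.
hired = (skill - 1.5 * gap + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, gap]), hired)

# Two equally qualified candidates, differing only in the proxy feature.
no_gap, with_gap = model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1]
print(f"P(hire | no gap) = {no_gap:.2f}")
print(f"P(hire | gap)    = {with_gap:.2f}")
# The model faithfully learns the historical penalty: same skill, lower
# score, purely because of the gap feature.
```

Nothing in the training data names disability, yet the model still penalizes it through the proxy, which is exactly why these tools can discriminate while appearing neutral.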
Organizations like Te Hiku Media raise concerns about Big Tech using their data to train systems like OpenAI's Whisper, "a speech recognition model trained on 680,000 hours of audio taken from the web."
The world's long history of colonization and its harms are on clear display as activists fight for Indigenous data sovereignty, saying "the way in which Whisper was created goes against everything we stand for. It's an unethical approach to data extraction and it disregards the harm that can be done by open sourcing multilingual models like these." They remind the industry that "when someone who doesn't have a stake in the language attempts to provide language services, they often do more harm than good." Ultimately, organizers want other tech orgs to follow their lead: "We respect data in that we look after it rather than claim ownership over it. This is similar to how Indigenous peoples look after land. We only use data in ways that align with our core values as an Indigenous organization and in ways that benefit the communities from which the data was gathered."