Organizations like Te Hiku Media raise concerns about Big Tech using their data to train systems like OpenAI's Whisper, "a speech recognition model trained on 680,000 hours of audio taken from the web."

Issue

Justice Area(s): Community Health and Collective Security, Racial Justice

AI Impact(s): Decolonization

Location(s): Oceania, New Zealand

Year: 2023

The world's extensive history of colonization and its harm are clear as activists fight for Indigenous data sovereignty, saying "the way in which Whisper was created goes against everything we stand for. It's an unethical approach to data extraction and it disregards the harm that can be done by open sourcing multilingual models like these." They remind the industry that "when someone who doesn't have a stake in the language attempts to provide language services, they often do more harm than good." Ultimately, organizers want other tech orgs to follow their lead: "We respect data in that we look after it rather than claim ownership over it. This is similar to how Indigenous peoples look after land. We only use data in ways that align with our core values as an Indigenous organization and in ways that benefit the communities from which the data was gathered."

Related Issues by Justice Area

Issue 2018

Discriminatory targeted ads focusing on housing, jobs, and credit persist at Facebook, even after changes.

Facebook was allowing advertisers to exclude certain groups when advertising within federally regulated markets like housing and jobs. Facebook often sidestepped the concerns using technicalities, such as classifying users not by race but by so-called "ethnic" or "multicultural" affinities, or self-selecting identity groups consistent with what a majority of Black, Latine, or other minoritized people might statistically like on the platform. Only after rigorous coverage from media outlets, most prominently ProPublica, did Facebook eventually disable ad targeting options for housing, job, and credit ads as part of a legal settlement with civil rights groups (National Fair Housing Alliance, American Civil Liberties Union, Communications Workers of America). However, even when Facebook changed its ad platform to prevent advertisers from selecting attributes like "ethnic affinity," it was determined that the platform still enabled discrimination by allowing advertisers to target users through proxy attributes.

Racial Justice
Issue 2017

AI systems reflect society's bias against the disabled.

The Allegheny County Department of Human Services in Pennsylvania, United States, uses an AI system that residents allege incorrectly flags disabled parents as neglectful, leading to their children being removed from their homes with no actual evidence of neglect. The county's use of the system is currently under investigation by the United States Department of Justice.

Community Health and Collective Security, Disability Justice, Human Rights
Issue 2023

Medicare Advantage insurance plans use AI to determine what care they will cover for their 31 million elderly subscribers in the United States.

Journalists at Stat found that companies are specifically using AI systems to deny coverage for care. The massive problem: the algorithms are black boxes that can't be peered into, making it nearly impossible for patients to fight for health care coverage when they don't know why they were denied it in the first place.

Community Health and Collective Security, Disability Justice, Economic Justice
Issue 2023

AI could be used to better meet the needs of the disabled, but there are currently many instances where it actively works against the disabled community.

In 2023, researchers at Pennsylvania State University published “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” which explores the bias embedded in several natural language processing (NLP) algorithms and models. They found that every single public model they tested “exhibited significant bias against disability,” classifying sentences as negative and toxic simply because they contained references to disability, ignoring context and the actual lived experiences of disabled people.

Community Health and Collective Security, Disability Justice
