Multilingual inconsistencies in AI systems reinforce linguistic hegemony.
AI can create language hegemony, where some languages are granted superior status and others are deemed inferior. Studies show that "language modeling bias can result in systems that, while being precise regarding languages and cultures of dominant powers, are limited in the expression of socio-culturally relevant notions of other communities." In extreme cases, this can even violate people's right to practice and preserve their native non-English languages.
Related Issues by Justice Area
AI models, without ecological awareness, can perpetuate and amplify environmentally damaging narratives, exacerbating ecological crises.
The integration—or lack thereof—of ecological awareness in AI systems manifests significantly in how AI influences public and private sector decisions. For instance, without ecological consideration, AI-driven recommendations in urban planning and resource management could prioritize economic gains over sustainability, leading to increased carbon footprints and depletion of natural resources. The H4rmony Project addresses this by embedding ecolinguistic principles into AI to ensure its outputs promote sustainability.
Environmental Justice Human Rights Issue 2017

AI systems reflect the culture's bias against the disabled.
The Allegheny County Department of Human Services in Pennsylvania, United States, uses an AI system that residents allege incorrectly flags disabled parents as neglectful, leading to children being removed from their homes without actual evidence of neglect. The system is currently under investigation by the United States Department of Justice.
Community Health and Collective Security Disability Justice Human Rights Issue 2023

Organizations like Te Hiku Media raise concerns about Big Tech using their data to train systems like WhisperAI, "a speech recognition model trained on 680,000 hours of audio taken from the web."
The world's extensive history of colonization and its harm are clear as activists fight for Indigenous data sovereignty, saying "the way in which Whisper was created goes against everything we stand for. It's an unethical approach to data extraction and it disregards the harm that can be done by open sourcing multilingual models like these." They remind the industry that "when someone who doesn't have a stake in the language attempts to provide language services, they often do more harm than good." Ultimately, organizers want other tech orgs to follow their lead: "We respect data in that we look after it rather than claim ownership over it. This is similar to how Indigenous peoples look after land. We only use data in ways that align with our core values as an Indigenous organization and in ways that benefit the communities from which the data was gathered."
Community Health and Collective Security Racial Justice Issue 2022

In 2022, Brazil’s facial recognition system placed African American actor Michael B. Jordan on Brazilian police’s most-wanted list.
This high-profile case of algorithmic racism occurred because the facial recognition program struggles to distinguish between Black faces and incorrectly identified him as a murder suspect in a mass shooting. Experts say this technology negatively impacts millions of people of color in Brazil, where facial recognition remains in use despite its failures and the harm it causes to the community.
Community Health and Collective Security Racial Justice