In 2022, a facial recognition system used by Brazilian police placed African American actor Michael B. Jordan on a most-wanted list.

Issue

Justice Area(s): Community Health and Collective Security, Racial Justice

AI Impact(s): Bias and Discrimination, Racial Discrimination, Surveillance

Location(s): Americas, Brazil

Year: 2022

This high-profile case of algorithmic racism happened because the facial recognition program struggles to distinguish between Black faces; it incorrectly identified Jordan as a suspect in a mass shooting. Experts say this technology negatively impacts millions of people of color in Brazil, where facial recognition remains in use despite its failures and the harm it causes communities.

Related Issues by Justice Area

Issue 2023

AI could be used to better meet the needs of disabled people, but there are currently many instances where it actively works against the disabled community.

In 2023, researchers at Pennsylvania State University published “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” which explores the bias embedded in several natural language processing (NLP) algorithms and models. They found that every single public model they tested “exhibited significant bias against disability,” classifying sentences as negative and toxic simply because they contained references to disability, ignoring context and the actual lived experiences of disabled people.
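To make the finding concrete, here is a minimal sketch of the kind of perturbation test the study describes: score sentences that differ only in a reference to disability and compare the results. It is an illustration, not the authors' actual harness; it swaps the commercial AI-as-a-service APIs the paper audited for a public open-source sentiment model, and the model choice and sentence pairs are invented assumptions.

```python
# Illustrative sketch only: NOT the paper's test harness. It probes an
# off-the-shelf open-source sentiment model (an assumption; the study
# tested commercial AIaaS APIs) with sentence pairs that differ only in
# a reference to disability. The sentence pairs are invented examples.
from transformers import pipeline

# Loads a default public sentiment checkpoint from Hugging Face.
sentiment = pipeline("sentiment-analysis")

pairs = [
    ("I am a person.", "I am a deaf person."),
    ("My neighbor is friendly.", "My blind neighbor is friendly."),
]

for neutral, with_disability in pairs:
    base = sentiment(neutral)[0]
    probe = sentiment(with_disability)[0]
    # A swing toward NEGATIVE when the only change is a disability
    # reference is the kind of bias the researchers measured.
    print(f"{neutral!r}: {base['label']} ({base['score']:.2f})")
    print(f"{with_disability!r}: {probe['label']} ({probe['score']:.2f})")
```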

Community Health and Collective Security, Disability Justice
Issue 2017

AI systems reflect society's bias against disabled people.

The Allegheny County Department of Human Services in Pennsylvania, United States, uses an AI system that residents allege incorrectly flags disabled parents as neglectful, leading to children being removed from their homes with no actual evidence of neglect. The system is currently under investigation by the United States Department of Justice.

Community Health and Collective Security, Disability Justice, Human Rights
Issue 2023

Medicare Advantage insurance plans use AI to determine what care they will cover for their 31 million elderly subscribers in the United States.

Journalists at STAT found that companies are using these AI systems specifically to deny coverage for care. The massive problem: the algorithms are a black box that can't be examined, making it nearly impossible for patients to fight for health care coverage when they don't know why they were denied it in the first place.

Community Health and Collective Security, Disability Justice, Economic Justice
Issue 2022

Multilingual inconsistencies in AI systems reinforce linguistic hegemony.

AI can create language hegemony, where some languages are granted superior status and others are deemed inferior. Studies show that "language modeling bias can result in systems that, while being precise regarding languages and cultures of dominant powers, are limited in the expression of socio-culturally relevant notions of other communities." In extreme cases, this can even violate people's right to practice and preserve their native non-English languages.

Human Rights, Racial Justice
