
Discriminatory targeted ads for housing, jobs, and credit persisted on Facebook even after platform changes.

Issue

Justice Area(s)

Racial Justice

AI Impact(s)

Bias and Discrimination

Location(s)

Americas

United States

Year

2018

Facebook allowed advertisers to exclude certain groups when advertising in federally regulated markets like housing, employment, and credit. The company often sidestepped concerns through technicalities, such as classifying users not by race but by so-called "ethnic" or "multicultural" affinities, or by grouping users according to interests that a majority of Black, Latine, or other minoritized users were statistically likely to share on the platform. Only after sustained investigative coverage, most prominently by ProPublica, did Facebook disable these ad targeting options for housing, job, and credit ads as part of a legal settlement with civil rights groups (the National Fair Housing Alliance, the American Civil Liberties Union, and the Communications Workers of America). Even after Facebook changed its ad platform to prevent advertisers from selecting attributes like "ethnic affinity," however, researchers determined that the platform still enabled discrimination by letting advertisers target users through proxy attributes.


Related Issues by Justice Area

Issue 2023

AI is poised to exacerbate the Black-white wealth gap in the United States.

The median wealth of Black households in the United States is just 15 percent of that of white households: about $44,900 USD compared with $285,000 USD. This gap is the result of many systemic factors stretching all the way back to the time of chattel slavery. The McKinsey Institute for Black Economic Mobility predicts that AI will add about $7 trillion USD to the global economy each year, with nearly $2 trillion USD of that concentrated in the U.S. But it also warns that if generative AI development continues on its current trajectory, it will widen the wealth gap; by 2045, the gap is predicted to grow by a whopping $43 billion USD every year.

Economic Justice, Racial Justice
Issue 2022

Multilingual inconsistencies in AI systems reinforce linguistic hegemony.

AI can create language hegemony, where some languages are granted superior status and others are deemed inferior. Studies show that "language modeling bias can result in systems that, while being precise regarding languages and cultures of dominant powers, are limited in the expression of socio-culturally relevant notions of other communities." In extreme cases, this can even violate people's right to practice and preserve their native non-English languages.

Human Rights, Racial Justice
Issue 2023

Organizations like Te Hiku Media raise concerns about Big Tech using their data to train systems like OpenAI's Whisper, "a speech recognition model trained on 680,000 hours of audio taken from the web."

The world's long history of colonization and its harms are clear as activists fight for Indigenous data sovereignty, saying "the way in which Whisper was created goes against everything we stand for. It's an unethical approach to data extraction and it disregards the harm that can be done by open sourcing multilingual models like these." They remind the industry that "when someone who doesn't have a stake in the language attempts to provide language services, they often do more harm than good." Ultimately, organizers want other tech organizations to follow their lead: "We respect data in that we look after it rather than claim ownership over it. This is similar to how Indigenous peoples look after land. We only use data in ways that align with our core values as an Indigenous organization and in ways that benefit the communities from which the data was gathered."

Community Health and Collective Security, Racial Justice
Issue 2022

In 2022, a facial recognition system used by Brazilian police placed African American actor Michael B. Jordan on a most-wanted list.

This high-profile case of algorithmic racism occurred because the facial recognition program struggles to distinguish between Black faces and incorrectly identified the actor as a murder suspect in a mass shooting. Experts say this technology negatively impacts millions of people of color in Brazil, where facial recognition remains in use despite its failures and the harm it causes communities.

Community Health and Collective Security, Racial Justice
