AI is frequently deployed at national borders and in refugee service operations, often without transparency, in ways that can harm people on the move. This opens the door to human rights violations via facial recognition, biosurveillance, and other AI technologies, particularly for people of color.
Related Issues by Justice Area
AI is poised to exacerbate the Black-white wealth gap in the United States.
The median wealth amassed by Black households in the United States is just 15 percent of that of white households: roughly $44,900 USD compared with $285,000 USD. This disparity is the result of many systemic factors that stretch all the way back to the time of chattel slavery. The McKinsey Institute for Black Economic Mobility predicts that AI will add about $7 trillion USD to the global economy each year, with nearly $2 trillion USD of that concentrated in the U.S. But it also warns that if generative AI development continues on its current trajectory, it will widen the Black-white wealth gap by an estimated $43 billion USD every year by 2045.
Economic Justice Racial Justice Issue 2023
A 2023 report from the Internet Watch Foundation found that AI is increasingly being used to generate child sexual abuse material (CSAM).
Researchers examined 20,254 AI-generated images posted to one dark web CSAM forum over the course of one month; a preliminary review found that 11,108 of them were likely criminal, and 2,978 were ultimately deemed criminal under UK law. They also concluded that AI-generated images are becoming visually indistinguishable from CSAM involving real people, that much of the material focuses on celebrity children and people who have previously been abused, and that there is potential for massive growth in this criminal area.
Community Health and Collective Security Human Rights Issue 2023
AI could be used to better meet the needs of disabled people, but there are currently many instances where it actively works against the disability community.
In 2023, researchers at Pennsylvania State University published “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” which explores the bias embedded in several natural language processing (NLP) algorithms and models. They found that every single public model they tested “exhibited significant bias against disability,” classifying sentences as negative and toxic simply because they contained references to disability, ignoring context and the actual lived experiences of disabled people.
Community Health and Collective Security Disability Justice Issue 2023
Without ecological awareness, AI models can perpetuate and amplify environmentally damaging narratives, exacerbating ecological crises.
The integration—or lack thereof—of ecological awareness in AI systems manifests significantly in how AI influences public and private sector decisions. For instance, without ecological consideration, AI-driven recommendations in urban planning and resource management could prioritize economic gains over sustainability, leading to increased carbon footprints and depletion of natural resources. The H4rmony Project addresses this by embedding ecolinguistic principles into AI to ensure its outputs promote sustainability.
Environmental Justice Human Rights