The tech workers who perform the invisible maintenance of AI are particularly vulnerable to exploitation and overwork.
Companies building AI-powered services rely on a vast network of on-demand workers to clean and label datasets, and to train and improve models. Workers who perform content moderation for platforms like Facebook and Twitter, for instance, are routinely exposed to disturbing imagery, audio, and language, and many suffer serious mental health problems, including secondary trauma, as a result.