AI opacity can affect societal issues such as elections and school assignment.
AI systems are commonly the opposite of transparent. Developers keep their inputs and methods secret for several reasons: to protect a competitive edge, to prevent gaming of the algorithm (as with Google Search), and often because they themselves aren't entirely sure how the AI arrives at its outcomes. This is a concern for commercial products, but it's even more problematic when those products affect societal issues. For example, The Markup found that Gmail's black-box algorithm controls which political emails arrive in subscribers' main inboxes, and different 2020 presidential candidates had starkly different deliverability rates. The opacity of AI systems is also troubling when governments employ AI to determine which school a child attends or whether someone qualifies for social benefits. Major resource-allocation decisions like these need to be auditable and accountable, and secret algorithms don't meet those standards.