State actors and others can readily repurpose AI to violate human rights.
The same AI technology that powers convenient consumer features can be turned by bad actors toward horrific ends. Technology from iFlytek, a company that sponsored research at MIT, was found to have been sold to the Chinese government and used to surveil and oppress ethnic Uyghurs in China's northwest, where the state has forced the ethnic and religious minority into camps. MIT cut ties with the company only after a broader ban on certain foreign funding took effect. The incident demonstrates both how state actors and others can readily repurpose AI to violate human rights, and how deeply international relations are already intertwined with the development of AI.