Position: Political Neutrality in AI Is Impossible — But Here Is How to Approximate It

Jillian Fisher, Ruth Elisabeth Appel, Chan Young Park, Yujin Potter, Liwei Jiang, Taylor Sorensen, Shangbin Feng, Yulia Tsvetkov, Margaret Roberts, Jennifer Pan, Dawn Song, Yejin Choi

International Conference on Machine Learning 2025 · Oral

As artificial intelligence increasingly shapes decision-making across domains, **political neutrality** in AI has emerged as a critical yet elusive goal. This talk, presented by Jillian Fisher at ICML 2025, challenges the premise that true political neutrality is achievable in AI models. Drawing on insights from philosophy, political science, and computer science, the work argues that such neutrality is both theoretically impossible and technically unattainable, given the human-centered nature of AI development.

AI review

This position paper argues that political neutrality in AI is theoretically and technically impossible and proposes a taxonomy of "approximations" as a substitute goal. The core philosophical observation is not wrong, but neither is it new: anyone familiar with the political philosophy literature on neutrality (Rawls, Raz, Dworkin) or the ML fairness literature (which has relitigated these tensions for a decade) will find the main claim familiar. The contribution is a classificatory framework, not a theorem, and the empirical component is described so thinly in the talk as to be nearly…