Position: AI Safety should prioritize the Future of Work
Sanchaita Hazra, Bodhisattwa Prasad Majumder, Tuhin Chakrabarty
International Conference on Machine Learning 2025 · Oral
This talk, presented by Tuhin Chakrabarty on behalf of co-authors Sanchaita Hazra and Bodhisattwa Prasad Majumder, delivers a position statement arguing for a fundamental reorientation of the AI safety paradigm. Moving beyond the often-polarized debate between existential risk and unbridled innovation, the authors assert that **AI safety** must prioritize the immediate, tangible impacts of AI on the future of work, human labor, and societal equity. Their core argument is that current AI research practices and governance frameworks inadequately address critical issues such as widespread job displacement, skill disparity, cognitive debt, and the erosion of creative industries, thereby jeopardizing meaningful human labor.
AI review
A position paper arguing that AI safety research should reorient toward near-term labor market harms rather than speculative existential risks. The motivating concern is legitimate and the observations are timely, but the talk offers no formal framework, no original results, and no falsifiable theoretical claims; it is a curated synthesis of external findings carried along by rhetorical momentum. As a position paper at ICML, it earns some credit for redirecting a real research conversation, but the gap between the scope of its claims and the rigor of their support is too wide to overlook.