Adaptive Alignment: Designing AI for a Changing World - Frauke Kreuter
International Conference on Machine Learning 2025 · Invited Talk
In her talk at ICML 2025, social scientist Frauke Kreuter made a compelling case for a more nuanced, data-centric approach to **AI alignment**, emphasizing the importance of understanding *what* models are being aligned with. While the machine learning community often focuses on the technical mechanisms of alignment—how to make models behave in desired ways—Kreuter zoomed out to the underlying societal values, norms, and preferences that should inform those technical efforts. Her presentation highlighted how difficult it is to capture these dynamic and diverse human elements, particularly across different subgroups and over time.
AI review
Kreuter delivers a well-informed interdisciplinary argument that the ML community's alignment efforts are methodologically underspecified — not in the formal sense, but in the measurement sense. The core contribution is a transfer of the Total Survey Error framework into the RLHF/annotation context, illustrated with empirical evidence on response rate decay, annotator demographic skew, and interface-driven label noise. The talk is honest about its scope: this is a call for methodological borrowing, not a new theorem or algorithm. It earns its place at ICML as a corrective voice from a…
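The annotator-demographic-skew point maps onto a standard survey-methodology remedy: post-stratification weighting, where each annotator's label is reweighted so the annotator pool matches the target population's composition. The sketch below is purely illustrative—the talk presents no code, and the group names, shares, and labels are invented for the example.

```python
from collections import Counter

def poststratification_weights(annotator_groups, population_shares):
    """Weight each annotator by population_share / sample_share of their group."""
    n = len(annotator_groups)
    sample_shares = {g: c / n for g, c in Counter(annotator_groups).items()}
    return [population_shares[g] / sample_shares[g] for g in annotator_groups]

def weighted_preference_rate(labels, weights):
    """Weighted share of annotators preferring response A (label == 1)."""
    return sum(l * w for l, w in zip(labels, weights)) / sum(weights)

# Toy example: the annotator pool skews toward a hypothetical "young" group
# (75% of annotators) while the target population is an even 50/50 split.
groups = ["young"] * 6 + ["old"] * 2
labels = [1, 1, 1, 1, 1, 1, 0, 0]   # young annotators prefer A, old do not
pop = {"young": 0.5, "old": 0.5}

w = poststratification_weights(groups, pop)
raw = sum(labels) / len(labels)            # 0.75 — driven by the pool's skew
adj = weighted_preference_rate(labels, w)  # 0.5  — after reweighting
```

The adjusted estimate only corrects for *measured* group membership; it cannot fix coverage problems (groups with zero annotators), which is exactly the kind of gap the Total Survey Error framework makes visible.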