Expected Variational Inequalities

Brian Zhang, Ioannis Anagnostides, Emanuel Tewolde, Ratip Emin Berker, Gabriele Farina, Vincent Conitzer, Tuomas Sandholm

International Conference on Machine Learning 2025 · Oral

Variational Inequalities (VIs) are a fundamental and highly expressive framework in mathematics and computer science, capable of modeling a vast array of problems from optimization to game theory. At its core, a VI seeks a point `x` in a set `X` such that `⟨F(x), x' - x⟩ ≥ 0` for every `x'` in `X`, where `F` is a given operator. This general formulation elegantly captures first-order optimality conditions for convex optimization and, notably, Nash equilibria in game theory. However, the very generality that makes VIs so powerful also renders them notoriously difficult to solve: the problem is **PPAD-hard** even in seemingly simple settings such as multilinear games.
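To make the VI condition concrete, here is a minimal numerical check (my own illustration, not code from the paper): for a convex objective, the constrained minimizer solves the VI with `F` equal to the gradient. The objective `f(x) = (x - 2)^2` and feasible set `X = [0, 1]` below are hypothetical examples chosen for simplicity.

```python
import numpy as np

# For convex f, the minimizer of f over X solves the VI with F = grad f.
# Example: f(x) = (x - 2)^2 on X = [0, 1]; the constrained minimizer is x* = 1.
def F(x):
    return 2.0 * (x - 2.0)  # gradient of f(x) = (x - 2)^2

x_star = 1.0
X_samples = np.linspace(0.0, 1.0, 101)  # candidate deviation points x' in X

# VI condition: <F(x*), x' - x*> >= 0 for every x' in X.
# Here F(x*) = -2 and x' - x* <= 0 for all x' in [0, 1], so each product is >= 0.
vi_holds = all(F(x_star) * (xp - x_star) >= 0 for xp in X_samples)
print(vi_holds)  # True
```

An interior minimizer would instead satisfy `F(x*) = 0`, making the condition hold trivially; the inequality only bites at the boundary, which is exactly what the first-order optimality condition encodes.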

AI review

Zhang et al. introduce Expected Variational Inequalities (EVIs) as a tractable relaxation of the generally PPAD-hard VI problem, proving, via a dual ellipsoid argument, polynomial-time computability when the class of deviation functions is restricted to linear maps. The framework cleanly unifies correlated equilibrium computation and zero-sum polymatrix games, and — most strikingly — provides the first efficient algorithm for quasi-convex optimization viewed as finding distributions over minima. This is honest, rigorous theory that opens a new question class rather than closing an old one.
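A hedged sketch of the relaxation, in my own notation (the paper's exact definition may differ in sign conventions or generality): rather than a single point, an EVI asks for a distribution `μ` over `X` under which no deviation function `φ` from a fixed class `Φ` helps in expectation.

```latex
% EVI condition (paraphrased; notation is illustrative, not verbatim from the paper):
% find a distribution \mu over X such that
\mathbb{E}_{x \sim \mu}\big[\langle F(x),\, \phi(x) - x \rangle\big] \;\ge\; 0
\quad \text{for all } \phi \in \Phi.
% Taking \mu to be a point mass at x and \Phi the constant maps \phi \equiv x'
% recovers the ordinary VI; restricting \Phi to linear maps is the case the
% review describes as polynomial-time solvable.
```

The averaging over `μ` is what buys tractability: the expectation is linear in `μ`, so for a suitably restricted `Φ` the problem becomes amenable to convex-programming machinery even though the pointwise VI is PPAD-hard.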