Conformal Prediction as Bayesian Quadrature

Jake Snell, Thomas Griffiths

International Conference on Machine Learning 2025 · Oral

In this talk from ICML 2025, Jake Snell presents work co-authored with Thomas Griffiths: a theoretical framework that bridges two seemingly disparate areas of machine learning, **conformal prediction (CP)** and **Bayesian quadrature (BQ)**. The core of the work reformulates conformal prediction through a probabilistic lens, offering a new justification for its efficacy and opening avenues for more robust uncertainty quantification. Snell highlights the critical need for reliable uncertainty estimates as AI systems grow more complex and are deployed in high-stakes settings, from autonomous robots to large language models.

AI review

Snell and Griffiths present a genuinely interesting theoretical reframing: conformal prediction and conformal risk control, typically motivated by frequentist exchangeability arguments, can be derived as the posterior mean of a Bayesian quadrature problem over the quantile function of the loss distribution. The key technical move — recognizing that (1) risk equals the area under the quantile function, (2) the spacings induced by n draws from any continuous distribution jointly follow a flat Dirichlet distribution over the n+1 resulting cells, and (3) these two facts together yield a tractable posterior whose mean recovers standard CP/CRC thresholds — is…
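
To make the review's summary concrete, here is a minimal numerical sketch (not the authors' code) of the identity it describes. The risk is the area under the quantile function, $R = \int_0^1 Q(u)\,du$; the $n$ calibration losses pin down $Q$ at $n$ unknown points whose $n+1$ spacings in $[0, 1]$ are jointly Dirichlet$(1, \ldots, 1)$; bounding $Q$ on each cell by its right endpoint, and by an assumed loss bound $B$ on the last cell, gives a posterior over the risk whose mean is the familiar conformal-risk-control quantity $\big(\sum_i l_i + B\big)/(n+1)$. The variable names and synthetic losses below are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration losses in [0, B]; B is the known loss bound
# that conformal risk control assumes.
B = 1.0
losses = np.sort(rng.uniform(0.0, B, size=20))
n = len(losses)

# Standard CRC-style risk bound: treat each of the n+1 quantile cells as
# equally wide (expected Dirichlet spacing = 1/(n+1)) and bound the
# unobserved last cell by B.
crc_bound = (losses.sum() + B) / (n + 1)

# Bayesian-quadrature view: the CDF values of the sorted losses split
# [0, 1] into n+1 spacings that are jointly Dirichlet(1, ..., 1) for any
# continuous loss distribution. Risk = area under the quantile function,
# so bounding the quantile function on each cell by its right endpoint
# turns each Dirichlet draw into a draw of the risk bound.
m = 100_000
spacings = rng.dirichlet(np.ones(n + 1), size=m)   # shape (m, n+1)
cell_upper = np.append(losses, B)                  # shape (n+1,)
risk_samples = spacings @ cell_upper               # Monte Carlo risk draws

print(f"CRC-style bound:         {crc_bound:.4f}")
print(f"BQ posterior mean:       {risk_samples.mean():.4f}")
print(f"BQ 95% credible bound:   {np.quantile(risk_samples, 0.95):.4f}")
```

Because each Dirichlet spacing has expected width $1/(n+1)$, the Monte Carlo posterior mean matches the closed-form CRC-style bound; the payoff of the Bayesian reading is the rest of the posterior, e.g. the credible bound printed last, which a single frequentist threshold does not provide.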