Fundamental Bias in Inverting Random Sampling Matrices with Application to Sub-sampled Newton

Chengmei Niu, Zhenyu Liao, Zenan Ling, Michael Mahoney

International Conference on Machine Learning 2025 · Oral

This talk, presented by Liam Hutchinson on behalf of authors Chengmei Niu, Zhenyu Liao, Zenan Ling, and Michael Mahoney, addresses a central theoretical and practical challenge in **randomized numerical linear algebra (RNLA)**: the inherent bias incurred when inverting matrices formed via random sampling. The presentation characterizes this **fundamental bias in inverting random sampling matrices** and applies the resulting correction to **sub-sampled Newton methods** in large-scale optimization. The core problem is that while a sketched Gram matrix can be an unbiased estimate of the original Gram matrix, its inverse, the quantity actually needed to solve normal equations and to drive second-order optimization, is in general a biased estimate of the true inverse: matrix inversion is nonlinear, so expectation does not commute with it, and this bias has largely gone unaddressed.
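The phenomenon is easy to reproduce numerically. The following minimal numpy sketch (illustrative only, not the paper's code; the matrix sizes, uniform-with-replacement sampling scheme, and trial count are arbitrary choices) averages many sub-sampled Gram matrices and their inverses: the former converges to the true Gram matrix, while the latter stays systematically far from its inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, trials = 2000, 10, 50, 1000

A = rng.standard_normal((n, d))
G = A.T @ A                         # true Gram matrix
G_inv = np.linalg.inv(G)

gram_avg = np.zeros((d, d))
inv_avg = np.zeros((d, d))
for _ in range(trials):
    idx = rng.integers(0, n, size=m)       # uniform sampling with replacement
    A_s = A[idx] * np.sqrt(n / m)          # scaled so that E[A_s.T @ A_s] = G
    G_s = A_s.T @ A_s                      # sub-sampled Gram matrix
    gram_avg += G_s / trials
    inv_avg += np.linalg.inv(G_s) / trials

# Relative errors: the Gram estimate is (nearly) unbiased, its inverse is not.
print(np.linalg.norm(gram_avg - G) / np.linalg.norm(G))         # small, pure sampling noise
print(np.linalg.norm(inv_avg - G_inv) / np.linalg.norm(G_inv))  # large, systematic bias
```

By operator convexity of the matrix inverse (Jensen's inequality for matrices), the averaged inverse systematically inflates the true inverse. For Gaussian sketches this inflation is the classical inverse-Wishart scalar factor; precisely characterizing and correcting it for random sampling sketches is the contribution summarized above.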

AI review

This paper delivers a rigorous and non-trivial theoretical result: a precise characterization of the inversion bias for general random sampling sketches, together with a matrix-level debiasing procedure that yields an (ε, δ)-unbiased estimator of the inverse Gram matrix. The application to sub-sampled Newton methods is clean and consequential, and the claimed problem-independent convergence rate, if the proof holds, is a meaningful advance over the state of the art. The work sits squarely in a tradition that goes from classical Stein-type shrinkage results through modern…
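For context on where the inversion bias bites in this application, here is a generic sub-sampled Newton loop for L2-regularized logistic regression, a minimal numpy sketch under standard assumptions (uniform row sampling, exact gradients, labels in {0, 1}); it is not the authors' debiased algorithm, and the function name and hyperparameters are illustrative. Each iteration solves against a sub-sampled Hessian `H_s`, and since the expected value of `inv(H_s)` differs from the inverse of the expected `H_s`, the resulting Newton directions are biased, which is precisely what the paper's correction targets.

```python
import numpy as np

def subsampled_newton_logreg(X, y, lam=1e-3, m=256, iters=20, seed=0):
    """Generic sub-sampled Newton for L2-regularized logistic regression.

    Exact gradient; Hessian estimated from m uniformly sampled rows.
    (Illustrative sketch, not the paper's debiased variant.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w      # exact regularized gradient
        idx = rng.integers(0, n, size=m)        # uniform row sub-sample
        X_s, p_s = X[idx], p[idx]
        weights = p_s * (1.0 - p_s)             # per-sample curvature
        H_s = (X_s * weights[:, None]).T @ X_s / m + lam * np.eye(d)
        w -= np.linalg.solve(H_s, grad)         # Newton step with sub-sampled Hessian
    return w
```

The sub-sampled Hessian here is an unbiased estimate of the full Hessian, yet the step `solve(H_s, grad)` implicitly applies its inverse, so the bias analyzed in the paper enters at every iteration.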