Rényi Neural Processes
Xuesong Wang, He Zhao, Edwin V. Bonilla
International Conference on Machine Learning 2025 · Oral
Neural Processes (NPs) represent a powerful paradigm in machine learning, offering a flexible framework for context-based prediction and robust uncertainty estimation. At ICML 2025, Xuesong Wang, along with co-authors He Zhao and Edwin V. Bonilla from CSIRO Data61, presented their work on **Rényi Neural Processes (RNPs)**, a novel approach designed to overcome a critical limitation in conventional NPs: prior misspecification. The core idea behind NPs is to learn a mapping from a set of observed data points (context) to a distribution over unobserved target points, enabling rapid adaptation to new tasks or environments without requiring retraining. This capability is highly valuable across diverse applications such as meta-learning, multi-task learning, retrieval-augmented generation (RAG) for large language models (LLMs), and simulation-to-real transfer learning.
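To make the context-to-target mapping concrete, here is a minimal sketch of a Conditional-NP-style forward pass. All weights and dimensions here are hypothetical and untrained (this is not the paper's architecture): an encoder maps each (x, y) context pair to a representation, a permutation-invariant mean aggregation summarizes the context set, and a decoder produces a predictive Gaussian for each target input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, untrained weights for illustration only.
D_REP = 8
W_enc = rng.normal(size=(2, D_REP))          # encoder: (x, y) -> r_i
W_dec = rng.normal(size=(D_REP + 1, 2))      # decoder: (r, x*) -> (mu, log sigma)

def np_predict(x_ctx, y_ctx, x_tgt):
    """Map a context set to a predictive Gaussian over each target point."""
    pairs = np.stack([x_ctx, y_ctx], axis=-1)     # (N_ctx, 2)
    r = np.tanh(pairs @ W_enc).mean(axis=0)       # permutation-invariant aggregate
    inp = np.concatenate(
        [np.tile(r, (len(x_tgt), 1)), x_tgt[:, None]], axis=-1
    )                                             # (N_tgt, D_REP + 1)
    out = inp @ W_dec
    mu, sigma = out[:, 0], np.exp(out[:, 1])      # exp keeps sigma positive
    return mu, sigma

x_ctx = np.array([0.0, 0.5, 1.0])
y_ctx = np.sin(x_ctx)
mu, sigma = np_predict(x_ctx, y_ctx, np.array([0.25, 0.75]))
print(mu.shape, sigma.shape)  # one predictive Gaussian per target: (2,) (2,)
```

Because the context is reduced by a mean, the prediction is invariant to the ordering of the context points, and new tasks are handled by swapping in a new context set rather than retraining.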
AI review
Rényi Neural Processes is a competent, honest paper that identifies a real failure mode in Neural Process training — prior misspecification induced by parameter coupling — and proposes replacing the KL divergence in the ELBO with Rényi divergence as a corrective. The unification of MLE and VI objectives under a single alpha-parameterized family is a clean theoretical observation, and the transfer learning experiments show a meaningful performance improvement. However, the core technical move is a fairly direct application of known properties of Rényi divergence to an existing framework; the…
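The α-parameterized family the review refers to can be illustrated with the closed-form Rényi divergence between two univariate Gaussians; as α → 1 it recovers the KL term used in the standard ELBO. This is a minimal numerical sketch of that limit, not the paper's training objective (the function names and tolerances here are our own):

```python
import math

def renyi_div_gauss(mu1, s1, mu2, s2, alpha):
    """Closed-form Rényi divergence D_alpha(N(mu1, s1^2) || N(mu2, s2^2)).

    Valid when the interpolated variance (1-alpha)*s1^2 + alpha*s2^2 > 0.
    """
    var_a = (1 - alpha) * s1**2 + alpha * s2**2
    assert var_a > 0, "Rényi divergence undefined for this alpha"
    quad = alpha * (mu1 - mu2) ** 2 / (2 * var_a)
    log_term = math.log(var_a / (s1 ** (2 * (1 - alpha)) * s2 ** (2 * alpha)))
    return quad - log_term / (2 * (alpha - 1))

def kl_gauss(mu1, s1, mu2, s2):
    """KL(N(mu1, s1^2) || N(mu2, s2^2)) -- the alpha -> 1 limit above."""
    return math.log(s2 / s1) + (s1**2 + (mu1 - mu2) ** 2) / (2 * s2**2) - 0.5

# Near alpha = 1 the Rényi divergence approaches the KL divergence.
print(abs(renyi_div_gauss(0.0, 1.0, 1.0, 2.0, 0.999)
          - kl_gauss(0.0, 1.0, 1.0, 2.0)) < 1e-2)
```

Choosing α away from 1 reweights how strongly the variational posterior is penalized for mass-covering versus mode-seeking behavior, which is the lever RNPs use against a misspecified prior.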