Flow Through Generative Modeling: A Tutorial
Qiang Liu
International Conference on Machine Learning 2025 · Tutorial
This tutorial, presented by Qiang Liu at ICML 2025, offers a comprehensive and systematic exploration of **continuous-time generative models**, with a particular focus on **Rectified Flow (RF)**. The core problem addressed is turning noise into meaningful data, a fundamental challenge in machine learning with applications spanning text, image, and video generation. Liu highlights a significant paradigm shift in generative modeling: from traditional "one-step" models like **Generative Adversarial Networks (GANs)** and **Variational Autoencoders (VAEs)** to more recent "iterative process models" such as **diffusion models**, **flow models**, and **autoregressive models** like GPT. This shift matters because it decomposes the complex generation task into many simpler steps, distributing the difficulty and leading to higher-quality results.
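To make the Rectified Flow objective concrete: sample a noise/data pair (x0, x1), form the linear interpolation x_t = (1 − t)·x0 + t·x1, and regress a velocity model onto the constant target x1 − x0. The sketch below is a minimal 1-D illustration with toy distributions and a deliberately crude velocity model — none of the specific numbers or model forms come from the tutorial itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup (illustrative choices, not from the tutorial):
x0 = rng.standard_normal(10_000)               # noise samples ~ N(0, 1)
x1 = 3.0 + 0.5 * rng.standard_normal(10_000)   # "data" samples ~ N(3, 0.25)

def rf_loss(v_fn, x0, x1, rng):
    """Monte-Carlo Rectified Flow loss: E || v(x_t, t) - (x1 - x0) ||^2."""
    t = rng.uniform(size=x0.shape)
    xt = (1.0 - t) * x0 + t * x1               # linear interpolation path
    target = x1 - x0                           # constant velocity along the path
    return np.mean((v_fn(xt, t) - target) ** 2)

# A crude affine velocity model family: v(x, t) = a*x + b*t + c.
def make_v(a, b, c):
    return lambda x, t: a * x + b * t + c

# The zero model gives the trivial baseline E[(x1 - x0)^2];
# merely guessing the mean displacement c = 3 already does much better.
baseline = rf_loss(make_v(0.0, 0.0, 0.0), x0, x1, rng)
better = rf_loss(make_v(0.0, 0.0, 3.0), x0, x1, rng)
print(baseline, better)
```

In practice the velocity model is a neural network and the same regression is run over minibatches, but the loss structure is exactly this simple least-squares form — which is part of why the "iterative process" decomposition is easier to train than a one-step GAN objective.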
AI review
This is a competent and well-organized tutorial by Qiang Liu on Rectified Flow as a unifying framework for continuous-time generative models. It covers substantial ground — marginal preservation via the continuity equation, transport cost reduction via Jensen's inequality, connections to optimal transport through Helmholtz decomposition, Tweedie's formula in the Gaussian case, ODE-SDE conversion, distillation, and reward alignment. The mathematical scaffolding is real and the unifying perspective is genuinely useful. But as a tutorial rather than a research contribution, the standard for…
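One of the ingredients the review lists — marginal preservation via the continuity equation — can be checked numerically in a case where the RF velocity field has a closed form: 1-D Gaussian endpoints with an independent coupling. The sketch below (illustrative parameters, not taken from the tutorial) Euler-integrates the ODE dx/dt = v(x, t) and verifies that the source marginal N(0, 1) is transported to approximately the target N(3, 0.25):

```python
import numpy as np

rng = np.random.default_rng(0)
m, s = 3.0, 0.5  # target N(m, s^2); source is N(0, 1)

def v(x, t):
    """Closed-form RF velocity E[x1 - x0 | x_t = x] for independent
    Gaussian endpoints x0 ~ N(0, 1), x1 ~ N(m, s^2), with
    x_t = (1 - t) * x0 + t * x1 (a standard Gaussian conditioning)."""
    var_t = (1.0 - t) ** 2 + (t * s) ** 2          # Var(x_t)
    return m + (t * s**2 - (1.0 - t)) * (x - t * m) / var_t

# Euler-integrate dx/dt = v(x, t) from t = 0 to t = 1.
x = rng.standard_normal(20_000)                    # start at the source marginal
n_steps = 200
dt = 1.0 / n_steps
for k in range(n_steps):
    x = x + dt * v(x, k * dt)

print(x.mean(), x.std())   # should land close to (3.0, 0.5)
```

This is the marginal-preservation property in miniature: the ODE driven by the conditional-expectation velocity field reproduces the interpolation's marginals at every t, which is exactly what the continuity-equation argument in the tutorial guarantees.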