AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization

Haibo Chen, Xin Wang, Zeyang Zhang, Haoyang Li, Ling Feng, Wenwu Zhu

International Conference on Machine Learning 2025 · Oral

This article examines "AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization," presented as an oral at ICML 2025 by Haibo Chen and co-authors. The paper addresses a critical limitation in the rapidly evolving field of **Graph Foundation Models (GFMs)**: the reliance on fixed, hand-designed Graph Neural Network (GNN) architectures. While GFMs are designed to generalize across diverse graph datasets and tasks by pre-training on a broad range of graph knowledge, their underlying GNN architectures often fail to adapt optimally to the distinct characteristics of different downstream applications. This architectural inconsistency (no single architecture is optimal across varying data) limits the full potential of GFMs.
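To make the "no single optimal architecture" point concrete, here is a minimal, purely illustrative sketch (hypothetical toy code, not AutoGFM's actual method): a tiny search over one architectural choice, the neighbor-aggregation operator, run separately on two synthetic datasets. The function names, the two-operator search space, and both datasets are invented for illustration.

```python
# Hypothetical toy example: a two-operator "architecture search" showing that
# the best GNN aggregator can differ between datasets. Not the paper's method.

def aggregate(features, neighbors, op):
    """One message-passing step: combine each node's feature with its
    neighbors' features using the chosen aggregation operator."""
    out = []
    for i, nbrs_i in enumerate(neighbors):
        vals = [features[j] for j in nbrs_i] + [features[i]]
        out.append(sum(vals) / len(vals) if op == "mean" else max(vals))
    return out

def fit_error(features, neighbors, targets, op):
    """Squared error of the aggregated features against node-level targets."""
    preds = aggregate(features, neighbors, op)
    return sum((p - t) ** 2 for p, t in zip(preds, targets))

def best_architecture(features, neighbors, targets, ops=("mean", "max")):
    """Pick the operator with the lowest error on this dataset."""
    return min(ops, key=lambda op: fit_error(features, neighbors, targets, op))

# A 3-node path graph with scalar features.
feats = [1.0, 3.0, 5.0]
nbrs = [[1], [0, 2], [1]]

# Dataset A's targets happen to match mean aggregation exactly...
targets_a = [2.0, 3.0, 4.0]
# ...while dataset B's match max aggregation, so the two searches disagree.
targets_b = [3.0, 5.0, 5.0]
```

Running `best_architecture` on the two datasets returns `"mean"` for the first and `"max"` for the second, so even this toy setting has no single operator that a foundation model could fix in advance for both.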

AI review

AutoGFM proposes to automate GNN architecture selection within a graph foundation model via a disentangled encoder and an invariant-guided architecture predictor, framed around a theoretical argument that mainstream graph neural architecture search (GNAS) methods produce optimization conflicts when applied to multi-dataset GFM settings. The motivation is reasonable and the architecture is technically elaborate, but the work falls short of the standard I would hold for a 'theoretical contribution with empirical support': the central theoretical claims are underpowered (an Assumption plus a Proposition does not constitute a…