Breaking the Layer Barrier: Remodeling Private Transformer Inference with Hybrid CKKS and MPC
Tianshi Xu
34th USENIX Security Symposium (USENIX Security '25) · Day 2 · Privacy 1: Differential Privacy and Audit
This talk, presented by Tianshi Xu from Peking University, introduces BB (presumably "Breaking the Barrier"), a framework that significantly advances **private transformer inference**. In an era where large language models (LLMs) and transformer-based architectures are ubiquitous across sensitive domains like healthcare, finance, and personalized assistance, protecting both the user's input data and the proprietary model parameters is paramount. Cryptography-based private inference offers provable security guarantees: the server learns nothing about the client's input, and the client learns nothing about the model's parameters beyond the inference result.
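As background for the hybrid CKKS/MPC setting the talk operates in, the sketch below illustrates the basic MPC primitive: 2-party additive secret sharing, where each value is split into two random-looking shares and linear operations compose locally on shares. This is a generic toy example, not the BB protocol; the modulus and helper names are illustrative.

```python
import secrets

# Toy 2-party additive secret sharing over a prime field. Illustrates the
# MPC side of hybrid private inference; NOT the paper's actual protocol.
P = 2**61 - 1  # a Mersenne prime serving as the toy field modulus

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares; neither share alone reveals x."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def reconstruct(s0: int, s1: int) -> int:
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % P

def add_shares(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Each party adds its local shares; no communication is needed."""
    return ((a[0] + b[0]) % P, (a[1] + b[1]) % P)

# Client secret-shares its input; server secret-shares a model value.
x = share(42)         # client's private activation (illustrative)
w = share(7)          # server's private parameter (illustrative)
y = add_shares(x, w)  # linear ops compose locally on shares
assert reconstruct(*y) == 49
```

Nonlinear operations (e.g. softmax, GELU) are the expensive part under such sharing, which is why hybrid designs hand off between homomorphic encryption like CKKS and MPC, and why conversion costs between the two representations matter.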
AI review
Solid, technically credible contribution to private ML inference that solves a real bottleneck — the operator-wise fusion insight is genuinely non-obvious and the CKKS-to-MPC security fix addresses an actual vulnerability in prior work. The 21x communication reduction and rotation efficiency gains are backed by concrete benchmarks against named prior systems, which is exactly what you want to see.