CryptPEFT: Efficient and Private Neural Network Inference via Parameter-Efficient Fine-Tuning

Saisai Xia

Network and Distributed System Security (NDSS) Symposium 2026 · Day 2 · Network Security

Saisai Xia presents CryptPEFT, a system that dramatically accelerates **private neural network inference** by redesigning parameter-efficient fine-tuning (PEFT) architectures specifically for encrypted computation. The core insight is that in a PEFT model, the large backbone is public and can run in plaintext on the user's device, while only the small adapter layers contain proprietary model IP. By enforcing **one-way communication** -- data flows from backbone to adapter but never back -- CryptPEFT ensures that only the tiny adapter component needs to operate on encrypted data. This approach achieves up to **238.76x speedup** over traditional PEFT private inference and **20.85x speedup** over simple fine-tuning, while maintaining or improving model accuracy. The work combines a novel adapter architecture with a **Neural Architecture Search (NAS)** framework tailored for MPC-friendly operations.
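The split described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names, dimensions, and aggregation scheme are assumptions, not CryptPEFT's actual architecture): a frozen public backbone runs in plaintext and exposes its intermediate features, while a small adapter consumes those features without ever feeding anything back, so only the adapter would need to be evaluated under MPC.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, depth=4, dim=8):
    """Public frozen backbone: runs in plaintext on the user's device.
    Returns every layer's intermediate features (a hypothetical stand-in
    for a transformer's per-block outputs)."""
    feats = []
    h = x
    for _ in range(depth):
        W = rng.standard_normal((dim, dim)) / np.sqrt(dim)  # frozen public weights
        h = np.maximum(h @ W, 0.0)  # ReLU block
        feats.append(h)
    return feats

def adapter(feats, out_dim=3, dim=8):
    """Private adapter: the only component holding proprietary weights,
    so the only part that must operate on encrypted data.
    One-way communication: it reads backbone features but never writes
    anything back into the backbone."""
    A = rng.standard_normal((dim, out_dim)) / np.sqrt(dim)  # proprietary weights
    pooled = sum(feats) / len(feats)  # aggregate intermediate features (assumed scheme)
    return pooled @ A

x = rng.standard_normal((1, 8))
feats = backbone(x)      # plaintext, on-device
logits = adapter(feats)  # would run under MPC in the real system
print(logits.shape)      # (1, 3)
```

Because the adapter only reads features, the expensive backbone never touches ciphertext, which is where the reported speedups come from: the encrypted computation shrinks from the full model to the tiny adapter.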

AI review

Solid PETs research that redesigns PEFT adapters for efficient MPC computation, achieving impressive speedups (up to 238.76x). However, this is pure cryptographic engineering: there is no offensive application, no vulnerability discovery, and no real-world exploitation. The one-way communication insight is clean, but the practical relevance to security operations is minimal.

Watch on YouTube