FCert: Certifiably Robust Few-Shot Classification with Foundation Models

Yanting Wang, Wei Zou, Jinyuan Jia

IEEE Symposium on Security and Privacy 2024 · Day 2 · Continental Ballroom 5

The proliferation of powerful **Foundation Models** (FMs) has revolutionized machine learning, enabling rapid development of high-performing downstream classifiers even with limited labeled data, a paradigm known as **few-shot learning**. This talk, "FCert: Certifiably Robust Few-Shot Classification with Foundation Models," presented by Yanting Wang, Wei Zou, and Jinyuan Jia at IEEE S&P, addresses a critical vulnerability in this promising domain: **data poisoning attacks**. While FMs offer unprecedented efficiency and accuracy, the reliance of few-shot learning on small "support sets" makes it highly susceptible to malicious data injection: a handful of poisoned support samples, hard to spot under human inspection, can induce misclassifications at test time.
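To make the setting concrete, here is a minimal sketch of few-shot classification on top of a frozen encoder, with a trimmed-mean aggregation over per-class similarities. This is an illustrative robust-aggregation idea in the spirit of certified defenses against support-set poisoning, not FCert's exact algorithm; the encoder is replaced by toy feature vectors, and the names `robust_class_score`, `classify`, and `k_trim` are ours.

```python
import numpy as np

def robust_class_score(test_feat, support_feats, k_trim=1):
    """Score one class: cosine similarity between the test feature and each
    of the class's support features, then a trimmed mean that discards the
    k_trim largest and k_trim smallest similarities. A single poisoned
    support sample can only land in the trimmed-out extremes, which bounds
    its influence on the score (the intuition behind certified guarantees)."""
    sims = support_feats @ test_feat / (
        np.linalg.norm(support_feats, axis=1) * np.linalg.norm(test_feat))
    sims = np.sort(sims)
    assert len(sims) > 2 * k_trim, "need more support samples than trimmed"
    return sims[k_trim:len(sims) - k_trim].mean()

def classify(test_feat, support_sets, k_trim=1):
    """Predict the class whose trimmed-mean similarity to the test feature
    is highest. support_sets maps class label -> (n, d) feature matrix."""
    scores = {label: robust_class_score(test_feat, feats, k_trim)
              for label, feats in support_sets.items()}
    return max(scores, key=scores.get)

# Toy 5-shot example: class "a" clusters near [1, 0], class "b" near [0, 1].
rng = np.random.default_rng(0)
support = {
    "a": np.array([1.0, 0.0]) + 0.05 * rng.standard_normal((5, 2)),
    "b": np.array([0.0, 1.0]) + 0.05 * rng.standard_normal((5, 2)),
}
# Poison one support sample of class "b" to look exactly like the test input.
support["b"][0] = np.array([1.0, 0.0])
test = np.array([1.0, 0.0])
print(classify(test, support, k_trim=1))
```

With `k_trim=1`, the poisoned sample's near-perfect similarity is discarded as an extreme, so the prediction stays with class "a"; a plain mean over all five similarities would be pulled noticeably toward "b".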

AI review

This work introduces FCert, a novel certified defense for few-shot classification using Foundation Models, directly addressing critical data poisoning vulnerabilities. It provides provable robustness guarantees where existing methods fall short, making it highly impactful for deploying secure AI systems. The research is technically sound and demonstrates clear superiority over current baselines.

Watch on YouTube