So you think you can airgap? (No.)
Ziyad Edher
BSidesSF 2026 · Day 1 · AMC IMAX
Securing the compute clusters that train and serve large language models poses challenges that conventional network security does not cover. In this BSidesSF talk, "So you think you can airgap? (No.)," Ziyad Edher, an infrastructure and security engineer at Anthropic, describes how Anthropic protects its most valuable asset, the multi-terabyte model weights behind systems like Claude, from sophisticated attackers in a setting where a true air gap is impractical because researchers need remote access to the cluster.
AI review
Edher takes a genuinely novel problem — securing multi-terabyte AI model weights against exfiltration in a hostile research environment where you can't trust your own compute nodes — and solves it with elegant, physics-grounded engineering rather than the usual DLP theater. The asymmetry insight (TB assets, MB/s legitimate traffic) is simple but the implementation details are real and hard-won. Not groundbreaking security research, but clearly operational work from someone who actually built and rolled it out.
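The asymmetry insight lends itself to back-of-the-envelope arithmetic: if egress from the cluster is capped near legitimate traffic rates, moving terabyte-scale weights out takes days, giving defenders a long detection window. A minimal sketch of that calculation, with illustrative numbers that are assumptions rather than figures from the talk:

```python
# Illustrative arithmetic for the TB-assets / MB/s-traffic asymmetry.
# The asset size and egress cap below are hypothetical examples,
# not values stated in the talk.

def exfiltration_time_days(asset_bytes: float, egress_bytes_per_s: float) -> float:
    """Days needed to move `asset_bytes` out at a sustained egress cap."""
    return asset_bytes / egress_bytes_per_s / 86_400  # 86,400 seconds per day

TB = 10**12
MB = 10**6

# e.g. 2 TB of model weights squeezed through a 10 MB/s egress cap:
days = exfiltration_time_days(2 * TB, 10 * MB)
print(f"{days:.1f} days")  # roughly 2.3 days of sustained, anomalous traffic
```

The point of the arithmetic is that an exfiltration attempt cannot hide inside normal traffic patterns: at rates comparable to legitimate use, the transfer is slow enough to be noticed and cut off long before it completes.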