Reinventing Agentic AI Security With Architectural Controls
Black Hat USA 2025 · Day 1 · Briefings
David Brockler III of NCC Group argues that AI systems are being secured the same way the early web was secured, with heuristic guardrails as the primary defense, and that this guarantees the same outcome: persistent exploitation. Drawing on real-world penetration testing at NCC Group, he presents a framework of architectural controls that enforce hard security guarantees independent of guardrail effectiveness: dynamic capability shifting, trust binding, trust tagging, I/O synchronization, and intent-based segmentation.

---
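As an illustration of what such a control might look like in practice, here is a minimal sketch of dynamic capability shifting: once an agent session ingests any untrusted content, high-privilege tools are revoked for the remainder of the session, enforced outside the model so guardrail quality is irrelevant. This is not code from the talk; the tool names and `AgentSession` class are hypothetical.

```python
# Hypothetical sketch of dynamic capability shifting (not from the talk).
# High-privilege tools are revoked once untrusted content enters the session.

PRIVILEGED = {"send_email", "execute_shell"}
READ_ONLY = {"search_docs", "summarize"}

class AgentSession:
    def __init__(self) -> None:
        self.tainted = False  # flips to True after any untrusted input

    def ingest(self, content: str, trusted: bool) -> None:
        if not trusted:
            self.tainted = True  # irreversible downgrade for this session

    def allowed_tools(self) -> set[str]:
        # Enforced in the orchestration layer, not by the model:
        # a prompt-injected request for a revoked tool simply cannot succeed.
        return READ_ONLY if self.tainted else READ_ONLY | PRIVILEGED

session = AgentSession()
assert "send_email" in session.allowed_tools()
session.ingest("<attacker-controlled web page>", trusted=False)
assert "send_email" not in session.allowed_tools()
```

The key property is that the capability shift is a one-way, out-of-band state change: no model output, however cleverly injected, can re-grant the privileged tools within the session.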
AI review
A practitioner-friendly synthesis of defensive architecture for agentic AI that names the right problems and proposes coherent solutions. Dynamic capability shifting is a clean, implementable idea. But this is a framework talk from a consultant, not primary research — it's telling engineers what to build, not showing them something they've never seen.