SAGA: A Security Architecture for Governing AI Agentic Systems
Georgios Syros
Network and Distributed System Security (NDSS) Symposium 2026 · Day 1 · Systems Security
As AI agents proliferate across enterprise and consumer applications, their interactions remain **completely ungoverned and insecure**. Emerging protocols like Google's Agent-to-Agent (A2A) and Microsoft's AG2 focus on interoperability, not security. This talk introduces **SAGA (Security Architecture for Governing AI Agentic Systems)**, the first comprehensive security architecture for multi-agent systems that provides **verifiable agent identities**, **policy-enforced access control**, and **secure agent-to-agent communication**. Formally verified in both **Verifpal** and **ProVerif**, SAGA scales to **200 million coexisting agents** on AWS, adds less than **0.6% overhead** to LLM task inference time, and successfully mitigates eight concrete attack models including unauthorized agents, impersonation, token misuse, and Sybil attacks.
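The abstract names verifiable identities, policy-enforced access control, and mitigation of impersonation and token misuse, but does not describe the mechanism. As an illustration only, the sketch below shows a generic pattern such goals imply: a trusted provider issues a short-lived signed capability token binding a sender, receiver, and scope, and the receiving agent verifies it before accepting a message. All names here (`issue_token`, `PROVIDER_KEY`, the claim fields) are hypothetical and simplified; this is not SAGA's actual protocol, which is formally specified in the paper.

```python
import hmac, hashlib, json, time

# Hypothetical symmetric key held by the trusted provider (SAGA uses a
# richer, formally verified scheme; HMAC stands in for illustration).
PROVIDER_KEY = b"provider-secret"

def issue_token(sender: str, receiver: str, scope: str, ttl: int = 60) -> dict:
    """Provider signs a short-lived token binding sender, receiver, and scope."""
    claims = {"snd": sender, "rcv": receiver, "scope": scope,
              "exp": time.time() + ttl}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_token(token: dict, expected_receiver: str, required_scope: str) -> bool:
    """Receiver checks signature, audience, scope, and expiry before
    accepting an agent-to-agent message."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    good_sig = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(good_sig, token["sig"]):
        return False                       # forged or tampered token
    c = token["claims"]
    return (c["rcv"] == expected_receiver  # rejects misdirected / impersonation use
            and c["scope"] == required_scope
            and c["exp"] > time.time())    # rejects expired (replayed) tokens

tok = issue_token("agent-A", "agent-B", "read:calendar")
print(verify_token(tok, "agent-B", "read:calendar"))  # True
print(verify_token(tok, "agent-C", "read:calendar"))  # False: wrong receiver
```

Binding the token to a specific receiver and scope is what lets such a design counter the token-misuse and impersonation attacks the abstract lists: a stolen token cannot be replayed against another agent or for a different capability.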
AI review
A well-designed security architecture for AI agent systems that provides formally verified identity, access control, and secure communication. The formal verification in Verifpal and ProVerif, the integration with Google's A2A, and scaling to 200 million agents demonstrate serious engineering. However, SAGA governs access, not behavior: it does not address prompt injection or constrain what authorized agents do once admitted. It secures the identity and transport layers while leaving the most interesting attack surface, agent manipulation, to complementary work.