ACE: A Security Architecture for LLM-Integrated App Systems
Evan Li
Network and Distributed System Security (NDSS) Symposium 2026 · Day 1 · Apps & Cloud Security
As AI agents become deeply embedded in products and infrastructure, the security implications of granting autonomous systems access to tools and sensitive data have become critical. This talk introduces **ACE (Abstract-Concrete-Execute)**, a security architecture designed to defend LLM-integrated agent systems against prompt injection, planning manipulation, and control hijacking attacks. The core insight is deceptively simple but powerful: separate the planning phase from the execution phase, and ensure that planning uses only trusted information -- the user's query -- while untrusted tool metadata and outputs are quarantined from influencing the agent's plan.
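The three-phase separation described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: all names (`Tool`, `abstract_plan`, `concretize`, `execute`) and the stubbed capability list are invented for exposition. The key property it demonstrates is the information flow: the abstract planner sees only the trusted user query, untrusted tool metadata is consulted only when binding a fixed plan to concrete tools, and tool outputs flow forward into later calls but never back into planning.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str          # untrusted metadata: never shown to the abstract planner
    description: str   # untrusted metadata
    fn: Callable[[str], str]

def abstract_plan(user_query: str) -> list[str]:
    """Phase 1 (Abstract): planning sees ONLY the trusted user query.
    An LLM would emit abstract capability names; stubbed here."""
    return ["search", "summarize"]

def concretize(plan: list[str], tools: dict[str, Tool]) -> list[Tool]:
    """Phase 2 (Concrete): bind abstract capabilities to concrete tools.
    Untrusted tool metadata is consulted here, but it cannot add,
    remove, or reorder the steps fixed in phase 1."""
    return [tools[step] for step in plan]

def execute(concrete: list[Tool], user_query: str) -> list[str]:
    """Phase 3 (Execute): run the plan. Tool outputs flow only forward
    into later tool calls, never back into the planner."""
    outputs, data = [], user_query
    for tool in concrete:
        data = tool.fn(data)
        outputs.append(data)
    return outputs
```

Under this structure, a prompt injection hidden in a tool's description or output can at worst corrupt the data passed to later steps; it cannot change which steps run, which is where the hijacking attacks land.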
AI review
A well-motivated architectural defense for LLM agent systems that separates planning from execution using information flow control. The attacks against IsolateGPT are real but straightforward, and the defense blocks 100% of attacks on the InjecAgent benchmark while retaining over 80% task utility. Solid engineering, though the residual risk in concrete planning and the reliance on correct capability labeling limit the completeness of the solution.