Sandboxing Agentic Workflows with WASM
Joe Lucas
ShmooCon XX (Final) · Day 2 · Build It
In his ShmooCon 2025 talk, Joe Lucas tackles a critical and often overlooked security challenge emerging from the rapid adoption of **agentic AI workflows**: the inherent danger of executing untrusted, Large Language Model (LLM)-generated code. As AI applications increasingly move beyond simple text generation to autonomous code execution and iteration, the risk of security vulnerabilities, data breaches, and system compromise escalates dramatically. Lucas, drawing from his experience bridging the gap between security and developer communities, highlights that many developers, particularly in the scientific and data science fields, are inadvertently introducing significant risks by naively running LLM output directly on application servers.
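To make the failure mode concrete, here is a minimal sketch of the naive pattern described above: treating model output as trusted code and `exec`ing it in-process. The `untrusted` string is a hypothetical stand-in for an LLM response; in a real agentic workflow it would arrive from the model at runtime.

```python
# Sketch of the risk: exec'ing LLM output directly on the application server.
# The `untrusted` string below is illustrative, standing in for model output.
untrusted = "import os\nlisting = os.listdir('.')"

scope = {}
exec(untrusted, scope)  # runs with the full privileges of the server process

# Nothing prevented filesystem access -- the same call could just as easily
# have deleted files, opened sockets, or spawned subprocesses.
print(type(scope["listing"]))
```

Because `exec` shares the interpreter, its privileges, and its memory with the host application, any prompt-injected or hallucinated code gets everything the server has; this is the gap the sandboxing approach below is meant to close.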
AI review
This session provides a brutally honest and technically sound approach to securing agentic AI workflows, a problem rife with naive implementations and marketing fluff. The speaker correctly identifies that many 'AI security' issues are just rehashed application security problems, and proposes a practical, client-side sandboxing solution using WebAssembly and Pyodide. While the core sandboxing concepts aren't novel, their application to the emerging challenge of executing untrusted LLM-generated code is both timely and highly impactful, offering concrete architectural guidance for developers…
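The client-side architecture the review describes can be sketched with Pyodide's documented `loadPyodide()` / `runPythonAsync()` API: the model-generated Python executes inside a WebAssembly sandbox in the user's browser rather than on the server. This is a hedged illustration, not the speaker's exact implementation; it assumes Pyodide has been loaded via its CDN `<script>` tag (which exposes the `loadPyodide` global), and the untrusted snippet is made up for the example.

```javascript
// Sketch: running LLM-generated Python inside Pyodide's WASM sandbox in the
// browser, instead of exec'ing it on the application server.
// Assumes the Pyodide CDN <script> tag has already defined loadPyodide().
async function runUntrustedPython(untrustedCode) {
  // A fresh interpreter per call maximizes isolation between snippets;
  // a production app might reuse one instance for performance.
  const pyodide = await loadPyodide();

  // The code runs inside WASM linear memory: no host filesystem, no raw
  // sockets, no subprocesses -- only capabilities the embedder exposes.
  return await pyodide.runPythonAsync(untrustedCode);
}

// Illustrative "model output" that is now safe to execute client-side.
runUntrustedPython("sum(i * i for i in range(10))")
  .then((result) => console.log(result));
```

The design choice worth noting is that the trust boundary moves with the code: a compromised snippet can at worst disrupt its own browser sandbox, not the server or other users' data.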