Security for AI Agents Using an Ensemble of Fine-tuned Small Language Models

Lidan Hazout, Bar Kaduri

BSidesSF 2026 · Day 2 · AMC Theatre 14

The rapid adoption of AI agents across industries, from coding assistants to personal productivity tools, has introduced a new and complex attack surface that traditional security paradigms are ill-equipped to handle. In this BSidesSF talk, Bar Kaduri makes the case for robust security mechanisms for these autonomous entities. Together with Lidan Hazout, whose contributions he credits as central to the work, Kaduri outlines a novel runtime security architecture designed to prevent AI agents from executing unintended or malicious actions.

AI review

Competent applied research on a genuinely relevant problem — securing AI agent tool invocations at runtime using fine-tuned SLM ensembles and RAG-based memory. The architecture is sensible, and the LoRA accuracy results are the one concrete data point that earns the talk its slot, but the whole thing sits closer to a 'smart engineering blog post' than to novel security research.
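To make the core idea concrete — an ensemble of specialized models each judging a proposed tool invocation, with a vote deciding whether it runs — here is a minimal sketch. The checker functions below are plain stand-ins for the fine-tuned SLMs described in the talk, and all names, tools, and heuristics are illustrative assumptions, not the speakers' implementation.

```python
# Sketch of ensemble gating for agent tool calls: each checker stands in
# for a fine-tuned SLM that classifies a proposed invocation, and a
# simple majority vote decides allow vs. block. Purely illustrative.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)


def intent_checker(call: ToolCall) -> bool:
    # Stand-in for an SLM checking the call against the user's stated intent.
    return call.tool in {"read_file", "search"}


def exfil_checker(call: ToolCall) -> bool:
    # Stand-in for an SLM trained to spot data-exfiltration patterns.
    return "http" not in str(call.args.get("target", ""))


def destructive_checker(call: ToolCall) -> bool:
    # Stand-in for an SLM trained to flag destructive operations.
    return call.tool not in {"delete_file", "shell"}


def ensemble_allows(call: ToolCall,
                    checkers: List[Callable[[ToolCall], bool]]) -> bool:
    """Allow the tool call only if a majority of checkers approve it."""
    votes = [check(call) for check in checkers]
    return sum(votes) > len(votes) / 2


checkers = [intent_checker, exfil_checker, destructive_checker]
print(ensemble_allows(ToolCall("read_file", {"path": "notes.txt"}), checkers))    # True
print(ensemble_allows(ToolCall("shell", {"target": "http://evil.example"}), checkers))  # False
```

In a real deployment each checker would be a model inference call rather than a rule, and the vote threshold and checker set would be tuned against the false-positive budget the agent's workflow can tolerate.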

Watch on YouTube