Tenderizing the Target: Soaking Code in Synthetic Vulnerabilities

Aaron Grattafiori, Skyler Bingham

[un]prompted 2026 — AI Security Practitioner Conference · Day 1 · 2

AI can find vulnerabilities now — but can it inject them? Aaron Grattafiori and Skyler Bingham from NVIDIA describe their agentic system for synthetically injecting realistic, exploitable vulnerabilities into codebases, complete with adjustable difficulty levels, five distinct injection modes (including CVE emulation and Auto-RCE), and a structured verification loop to prevent reward hacking. The result is a ground-truth vulnerability corpus for benchmarking AI security tools and training detection engineers.

AI review

Grattafiori and Bingham are solving the right problem — ground-truth vulnerability corpora for evaluating AI security tools — and they have a working system with five distinct injection modes and a structured verification loop. Their candor about reward hacking and model refusals is more valuable than the polished success stories typical of conference talks.
