[un]prompted 2026 — AI Security Practitioner Conference
The inaugural AI security practitioner conference. Two days, two stages, 55 talks covering the full spectrum of AI security — from offensive research and prompt injection to enterprise governance and defensive tooling.
→ See editor’s top picks at [un]prompted 2026 — AI Security Practitioner Conference
- Opening Words – Research conferences aren't effective. — Gadi Evron
Gadi Evron opened [un]prompted 2026 not with a keynote but with a provocation: most security conferences are bad at creating the informal peer connections that actually matter. Drawing on a classic…
- Evaluating Threats & Automating Defense: How Google is Advancing Code Security — Heather Adkins, Four Flynn
Google and Google DeepMind researchers presented two integrated AI projects — Big Sleep for autonomous vulnerability discovery and CodeMender for autonomous patch generation — with the explicit goal…
- The Hard Part Isn't Building the Agent: On Measuring Agent Effectiveness to Improve It — Joshua Saxe
Joshua Saxe makes a counterintuitive argument: the biggest blocker to deploying autonomous AI security agents isn't building them — it's evaluating them. Classical ML metrics like precision, recall…
- Security Guidance as a Service: Building an AI-Native Blueprint for Defensive Security — Shruti Datta Gupta, Chandrani Mukherjee
Adobe's security engineering team built a centralized, AI-powered Security Guidance as a Service platform that delivers consistent, Adobe-specific security recommendations across every developer…
- Guardrails beyond Vibes: Shipping Security Agents in Production — Jeffrey Zhang, Siddh Shah
Stripe's security engineering team replaced ad hoc "vibe checks" with a rigorous engineering discipline for deploying AI security agents in production. By combining modular multi-agent architectures…
- Code Is Free: Securing Software in the Agentic Future — Paul McMillan, Ryan Lopopolo
OpenAI security engineer Paul McMillan and product engineer Ryan Lopopolo argue that AI-generated code has fundamentally broken the economics of security tooling. Instead of paying vendors and…
- AI Agents for Exploiting Auth-by-One Errors — Brendan Dolan-Gavitt, Vincent Olesen
XBOW researchers Brendan Dolan-Gavitt and Vincent Olesen have built an AI-driven offensive security system that finds and validates authentication and authorization bypasses in web applications —…
- Developing & Deploying AI Fingerprints for Advanced Threat Detection — Natalie Isak, Waris Gill
Microsoft researchers Natalie Isak and Waris Gill presented BinaryShield (referred to in the talk as "Boundary Shield"), a system that converts detected prompt injection attacks into…
- When Passports Execute: Exploiting AI Driven KYC Pipelines — Sean Park
TrendAI principal threat researcher Sean Park demonstrated how stored prompt injection attacks embedded in passport images can cause AI-driven KYC (Know Your Customer) pipelines to exfiltrate other…
- FENRIR: AI Hunting for AI Zero-Days at Scale — Peter Girnus, Derek Chen
TrendAI's Zero Day Initiative team built FENRIR, an AI-powered vulnerability discovery engine that combines traditional static analysis with a cascade of LLM triage stages to find zero-day bugs at…
- AI Notetakers: The Most Important Person in the Room — Joe Sullivan
AI notetakers have quietly become infrastructure — and security teams largely missed the moment to govern them. Joe Sullivan, former CSO of Uber and current CEO of Joe Sullivan Security, lays out…
- AI go Beep Boop! — Adam Laurie (Major Malfunction)
Hardware hacking — long the domain of expensive labs, specialized equipment, and years of hands-on experience — is being democratized by AI. Adam Laurie ("Major Malfunction"), a legendary figure in…
- Zeal of the Convert: Taming Shai-Hulud with AI — Rami McCarthy
When a massive NPM supply chain attack campaign called Shai-Hulud leaked data from tens of thousands of compromised machines across GitHub, Wiz security researcher Rami McCarthy used AI to do in two…
- Anatomy of an Agentic Personal AI Infrastructure — Daniel Miessler
Daniel Miessler, creator of Fabric and founder of Unsupervised Learning, walks through the architecture of his personal AI infrastructure — a unified, Claude Code-based system he calls PAI (Personal…
- Black-hat LLMs — Nicholas Carlini
Nicholas Carlini, a research scientist at Anthropic and one of the most respected voices in AI security, delivered a talk that the conference organizer introduced as a "global-level emergency."…
- Vibe Check: Security Failures in AI-Assisted IDEs — Piotr Ryciak
Mindgard's AI red team discovered 37 vulnerabilities across more than 15 AI-assisted IDE vendors, including Google Gemini CLI, OpenAI Codex, and Amazon Q. The attack patterns fall into four…
- Establishing AI Governance Without Stifling Innovation: Lessons Learned — Billy Norwood
Billy Norwood, CISO of $5B pharmaceutical distributor FFF Enterprises, walked through the hard lessons of building AI governance from scratch: the 40-project roadmap that hit reality, the AI usage…
- Enterprise AI Governance at Snowflake: Balancing Innovation and Risk — Ragini Ramalingam
Snowflake's enterprise security director Ragini Ramalingam detailed how the AI data cloud governs AI adoption internally — at a company where "engineering at speed" is in the DNA. Her framework…
- Three Phases of AI Adoption: From GPU Lottery to Enterprise Agreements — Chase Hasbrouck
Lt. Colonel Chase Hasbrouck spent two years trying to get U.S. Army cybersecurity personnel to adopt AI tools — and largely failed. His talk traces three distinct phases of military AI adoption: a…
- SIFT – FIND EVIL!! I Gave Claude Code R00t on the DFIR SIFT Workstation — Rob T. Lee
Rob Lee, creator of the SIFT Workstation, gave Claude Code root access on a DFIR forensics environment and told it to "find evil." The result: a full forensic analysis that previously took human…
- Can You See What Your AI Saw?: GenAI Endpoint Observability for Detection Engineers — Mika Ayenson
Your EDR sees a curl command. Was it your developer, or an AI agent manipulated by a poisoned README? Mika Ayenson of Elastic exposes the core crisis facing detection engineers in 2026: intent…
- Detecting GenAI Threats at Scale with YARA-Like Semantic Rules — Mohamed Nabeel
YARA has been the gold standard for malware detection for two decades — but natural language is now the attack surface. Mohamed Nabeel of Palo Alto Networks introduces SCIARA (pronounced "see-ara")…
- The Advent of Confidential AI — Raghu Yeluri
AI models and training data are exposed to cloud administrators, rogue insiders, and co-tenants in ways that most practitioners don't fully account for. Intel's Raghu Yeluri presents Confidential AI…
- Tenderizing the Target: Soaking Code in Synthetic Vulnerabilities — Aaron Grattafiori, Skyler Bingham
AI can find vulnerabilities now — but can it inject them? Aaron Grattafiori and Skyler Bingham from NVIDIA describe their agentic system for synthetically injecting realistic, exploitable…
- Hooking Coding Agents with the Cedar Policy Language — Matt Maisel
Coding agents plan, generate, execute, and loop — and every step of that loop is a potential policy enforcement point. Matt Maisel of Sendera demonstrates how to intercept the full trajectory of a…
- Glass-Box Security: Operationalizing Mechanistic Interpretability for Defending AI Agents — Carl Hurd
Current AI security tools — prompt firewalls and host-based monitors — can only inspect what a model *says*, not what it *thinks*. Carl Hurd of Starseer argues that true AI agent defense requires…
- The AI Security Larsen Effect: How to Stop the Feedback Loop — Maxim Kovalsky
Enterprises buying AI security products face a broken procurement loop: vague marketing claims, no standardized evaluation framework, and vendor landscapes that expand by four new companies every…
- Kinetic Risk: Securing and Governing Physical AI in the Wild — Padma Apparao
When AI moves from screens to the physical world, errors stop being recoverable — they become kinetic events measured in force, speed, and mass. Padma Apparao of Intel argues that physical AI…
- Trajectory-Aware Post-Training of Open-Weight Models for Security Agents — Aaron Brown, Madhur Prashant
Frontier models score 80% on isolated cybersecurity tasks but 0% on multi-stage operations. Aaron Brown of AWS released an open-source training gymnasium at the conference — Open Trajectory Gym —…
- AI Found 12 Zero-Days In OpenSSL. What Does It Mean For The Industry? — Adam Krivka, Ondrej Vlcek
AISLE, a one-year-old security startup, used a multi-stage agentic AI pipeline to discover 12 zero-day vulnerabilities in OpenSSL — including one 9.8-severity stack buffer overflow that some…
- Opening Words (Day 2) — Gadi Evron
- 200 Bugs/Week/Engineer: How We Rebuilt Trail of Bits Around AI — Dan Guido
- 8 Minutes to Admin. We Caught It in the Wild. Welcome to VibeHacking — Sergej Epp
- macOS Vulnerability Research: Augmenting Apple's Source Code and OS Logs with AI Agents — Olivia Gallucci
- Promp2Pwn – LLMs Winning at Pwn2Own — Georgi G
- Breaking the Lethal Trifecta (Without Ruining Your Agents) — Andrew Bullen
Prompt injection is not a future problem — it is happening now, and most companies are ignoring it. Andrew Bullen, Head of AI Security at Stripe, argues that the only viable strategy is to assume…
- Building Secure Agentic Systems: Lessons from Daily-Driver Agents — Brooks McMillin
Brooks McMillin has built a personal ecosystem of 19 AI agents running 73 MCP tools — and has been breaking it, learning from those failures, and hardening it in real time. His [un]prompted talk is…
- Rethinking how we evaluate security agents for real-world use — Mudita Khurana
An 80% benchmark score on a security agent tells you almost nothing useful. Mudita Khurana's lightning talk introduces CLASP, a capability-centric evaluation framework that shifts the question from…
- Securing Workspace GenAI at Google Speed: Surviving the Perfect Storm — Nicolas Lidzborski
The generative AI era has collapsed the traditional distinction between code and data — making every token in an LLM's context a potential instruction and rendering reactive filtering fundamentally…
- Operation Pale Fire: How We Red-Teamed Our Own AI Agent — Wes Ring, Josiah Peedikayil
Block's offensive security team ran a full end-to-end red team operation against Goose, their own open-source AI agent — and achieved code execution on employee laptops via invisible Unicode…
- Training BrowseSafe: Lessons from Detecting Prompt Injection in Production Browser Agents — Kyle Polley
Perplexity's security team built and open-sourced BrowseSafe, a fine-tuned classifier that detects prompt injection in browser agents with a 90.4% F1 score at sub-second latency — dramatically…
- Exploring the AI Automation Boundary for Threat Hunting at Datadog — Arthi Nagarajan
Datadog's threat hunting team spent six to nine months discovering exactly where AI can and cannot help in a real-world security operations environment. Their Hunting Copilot evolved through…
- Detection & Deception Engineering in the Matrix — Bob Rudis, Glenn Thorpe
GreyNoise's adversary engineering team built Orby — an AI-powered threat intelligence analyst that operates over a planetary-scale sensor network generating 22 terabytes of packet captures and 20…
- Total Recon: How We Discovered 1000s of Open Agents in the Wild — Avishai Efrat, Roey Ben Chaim
Zenity researchers discovered tens of thousands of publicly accessible AI agents across Microsoft Copilot Studio, OpenAI Agent Builder, custom GPTs, and open-source middleware — thousands of which…
- Your Agent Works for Me Now — Johann Rehberger
Johann Rehberger, one of the most prolific AI vulnerability researchers in the field, demonstrated how prompt injection has evolved from a party trick into a full kill chain — encompassing initial…
- Capability-Based Authorization for AI Agents: Warrants That Survive Prompt Injection — Niki Aimable Niyikiza
The authorization models enterprises built for microservices are fundamentally broken for AI agents. Niyikiza introduces the "Tenuo warrant" — a capability-based, cryptographically signed…
- Injecting Security Context During Vibe Coding — Srajan Gupta
Vibe coding fails not because the AI is bad at writing code, but because it's writing code without security context. Srajan Gupta built an MCP-based tool that injects security requirements before…
- Source to Sink: How to Improve LLM First-Party Vuln Discovery — Scott Behrens, Justice Cassel
Netflix's security engineering team spent months iterating through architectures, benchmark failures, and demoralizing late nights to build an LLM-based vulnerability discovery system that actually…
- The Parseltongue Protocol: A Deep Dive into 100+ Textual Obfuscation Methods — Joey Melo
CrowdStrike researchers systematically tested over 100 textual obfuscation methods against nine state-of-the-art AI models using more than 17,000 unique prompts. Their findings: 82% of obfuscation…
- Why Most ML Vulnerability Detection Fails (And What Actually Worked for Kernel Bugs) — Jenny Guanni Qu
Most ML models applied to vulnerability detection fail because researchers start with complex architectures before establishing what simple baselines can already do. Jenny Qu, trained on math AI at…
- 1.8M Prompts, 30 Alerts: Hunting Abuse in a User-Defined Agent Ecosystem — Matt Rittinghouse, Millie Huang
Salesforce's security data science team built a behavioral anomaly detection system that filters 1.8 million daily agent prompts down to fewer than 30 actionable alerts — without ever reading…
- AI Security with Guarantees — Ilia Shumailov
Ilia Shumailov, a former Oxford academic turned AI security startup CEO, argues that the industry's cat-and-mouse approach to AI security is structurally broken — and that formal guarantees are…
- From OSINT Chaos to Knowledge Graph: Building Production-Scale AI-Powered Threat Intelligence — Dongdong Sun
Palo Alto Networks built a production system that converts unstructured open-source threat intelligence reports into a continuously updated knowledge graph, then deploys an LLM agent to answer…
- Beyond the Chatbot: Delivering an Agentic SOC for Real-World Defense — Peter Smith, Ravi Kiran Sharma (RK)
Salesforce built an Agentic SOC — a network of specialized AI agents operating in Slack alongside human analysts — that takes a threat intelligence report and completes the full cycle from alert to…
- Are Your LLM's Safety Mechanisms Intact? Detecting Backdoors with White-Box Analysis — Akash Mukherjee
Akash Mukherjee demonstrated live that a backdoored LLM is completely indistinguishable from a clean model under standard black-box testing — but detectable in seconds by monitoring internal neural…