Best Talks at [un]prompted 2026 — AI Security Practitioner Conference

Editor's picks · 12 talks

Hand-picked from in-depth reviewer verdicts. View all talks at [un]prompted 2026 — AI Security Practitioner Conference →

  1. Black-hat LLMs — Nicholas Carlini

    Nicholas Carlini, a research scientist at Anthropic and one of the most respected voices in AI security, delivered a talk that the conference organizer introduced as a "global-level emergency." Using minimal scaffolding — a single Claude…

  2. 8 Minutes to Admin. We Caught It in the Wild. Welcome to VibeHacking — Sergej Epp
  3. Capability-Based Authorization for AI Agents: Warrants That Survive Prompt Injection — Niki Aimable Niyikiza

    The authorization models enterprises built for microservices are fundamentally broken for AI agents. Niyikiza introduces the "Tenuo warrant" — a capability-based, cryptographically signed, task-scoped authorization primitive that freezes…
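
The blurb is all we have of the warrant's actual design, but the general idea of a signed, task-scoped capability can be sketched as follows. This is a minimal illustration, not the Tenuo design: the field names, the HMAC scheme, and the shared signing key are all assumptions (a production system would presumably use asymmetric signatures):

```python
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # hypothetical; stands in for a real key-management system

def issue_warrant(agent_id, task_id, allowed_actions, ttl_s=300):
    """Freeze an agent's permissions to one task's scope at issuance time."""
    body = {
        "agent": agent_id,
        "task": task_id,
        "actions": sorted(allowed_actions),
        "exp": int(time.time()) + ttl_s,
    }
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def authorize(warrant, action):
    """Deny anything outside the signed, task-scoped capability set."""
    payload = json.dumps(warrant["body"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, warrant["sig"]):
        return False  # tampered: e.g. an injected prompt rewrote the scope
    if time.time() > warrant["body"]["exp"]:
        return False  # expired
    return action in warrant["body"]["actions"]

w = issue_warrant("billing-agent", "task-42", {"read:invoice", "send:summary"})
print(authorize(w, "read:invoice"))    # True
print(authorize(w, "delete:account"))  # False: not in the frozen scope
```

The property that matters for prompt injection: a compromised agent cannot widen its own scope, because editing the action list invalidates the signature and unsigned requests are denied by default.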

  4. Are Your LLM's Safety Mechanisms Intact? Detecting Backdoors with White-Box Analysis — Akash Mukherjee

    Akash Mukherjee demonstrated live that a backdoored LLM is completely indistinguishable from a clean model under standard black-box testing — but detectable in seconds by monitoring internal neural activations. His argument: open-weight…
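
As a rough illustration of the white-box idea — not Mukherjee's method — here is a toy NumPy sketch: the "model" is a single random linear layer with a planted backdoor weight, and the backdoored unit is found by comparing a trigger input's activations against statistics collected on clean inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one hidden layer of an open-weight model (hypothetical
# weights; a real analysis would hook a transformer's internal activations).
W = rng.normal(0, 1, size=(16, 64))
backdoor_unit, trigger_dim = 7, 3
W[backdoor_unit, trigger_dim] = 25.0  # implanted backdoor weight

def activations(x):
    return np.maximum(W @ x, 0.0)  # ReLU hidden activations

# Baseline: activation statistics over ordinary inputs.
clean = np.stack([activations(rng.normal(0, 1, 64)) for _ in range(200)])
mu, sigma = clean.mean(axis=0), clean.std(axis=0) + 1e-8

# White-box view: a trigger input lights up one unit far outside the
# clean activation distribution.
trigger = rng.normal(0, 1, 64)
trigger[trigger_dim] = 8.0
z = (activations(trigger) - mu) / sigma
print(int(z.argmax()))  # index of the unit that stands out
```

The same trigger would look unremarkable under black-box testing; only the internal activation statistics give the planted unit away.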

  5. Evaluating Threats & Automating Defense: How Google is Advancing Code Security — Heather Adkins, Four Flynn

    Google and Google DeepMind researchers presented two integrated AI projects — Big Sleep for autonomous vulnerability discovery and CodeMender for autonomous patch generation — with the explicit goal of eliminating every software…

  6. Guardrails beyond Vibes: Shipping Security Agents in Production — Jeffrey Zhang, Siddh Shah

    Stripe's security engineering team replaced ad hoc "vibe checks" with a rigorous engineering discipline for deploying AI security agents in production. By combining modular multi-agent architectures with a golden-standard evaluation…

  7. When Passports Execute: Exploiting AI Driven KYC Pipelines — Sean Park

    TrendAI principal threat researcher Sean Park demonstrated how stored prompt injection attacks embedded in passport images can cause AI-driven KYC (Know Your Customer) pipelines to exfiltrate other users' identity data. More importantly…

  8. FENRIR: AI Hunting for AI Zero-Days at Scale — Peter Girnus, Derek Chen

    TrendAI's Zero Day Initiative team built FENRIR, an AI-powered vulnerability discovery engine that combines traditional static analysis with a cascade of LLM triage stages to find zero-day bugs at scale. In production, FENRIR has…
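
The cascade pattern described in the blurb — a cheap static pass feeding progressively more expensive triage stages — can be sketched generically. Everything below is a hypothetical illustration: the stage names and stub predicates merely stand in for a real static analyzer and LLM triage calls:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Finding:
    path: str
    detail: str
    trace: List[str] = field(default_factory=list)

def cascade(findings: List[Finding],
            stages: List[Tuple[str, Callable[[Finding], bool]]]) -> List[Finding]:
    """Run candidates through ordered filter stages, cheapest first.

    Each stage discards candidates, so the expensive later stages (in a
    FENRIR-like design, LLM triage) only see a shrinking survivor set.
    """
    for name, keep in stages:
        findings = [f for f in findings if keep(f)]
        for f in findings:
            f.trace.append(name)
    return findings

# Stub predicates standing in for real analyses.
stages = [
    ("static-analysis",    lambda f: "unchecked" in f.detail),
    ("llm-reachability",   lambda f: f.path.endswith(".c")),
    ("llm-exploitability", lambda f: "memcpy" in f.detail),
]

candidates = [
    Finding("parser.c", "unchecked length passed to memcpy"),
    Finding("parser.c", "unchecked integer cast"),      # dropped at the last stage
    Finding("util.py",  "unchecked memcpy"),            # dropped at the reachability stage
]
survivors = cascade(candidates, stages)
print([f.path for f in survivors])  # ['parser.c']
```

The design point is economic: each stage only has to be cheap relative to the one after it, which is what makes LLM review affordable at repository scale.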

  9. Vibe Check: Security Failures in AI-Assisted IDEs — Piotr Ryciak

    Mindgard's AI red team discovered 37 vulnerabilities across more than 15 AI-assisted coding tools, including Google Gemini CLI, OpenAI Codex, and Amazon Q. The attack patterns fall into four categories — zero-click, one-click, autorun, and…

  10. SIFT – FIND EVIL!! I Gave Claude Code R00t on the DFIR SIFT Workstation — Rob T. Lee

    Rob Lee, creator of the SIFT Workstation, gave Claude Code root access on a DFIR forensics environment and told it to "find evil." The result: a full forensic analysis that previously took human analysts two to three days compressed to 14…

  11. Can You See What Your AI Saw?: GenAI Endpoint Observability for Detection Engineers — Mika Ayenson

    Your EDR sees a curl command. Was it your developer, or an AI agent manipulated by a poisoned README? Mika Ayenson of Elastic exposes the core crisis facing detection engineers in 2026: intent attribution is broken. With AI coding tools…

  12. Kinetic Risk: Securing and Governing Physical AI in the Wild — Padma Apparao

    When AI moves from screens to the physical world, errors stop being recoverable — they become kinetic events measured in force, speed, and mass. Padma Apparao of Intel argues that physical AI requires an entirely different security and…
